
Front cover

DB2 for z/OS and OS/390 Version 7
Performance Topics

Description of performance and
availability related functions

Performance measurements of
new or enhanced functions

Usage considerations based
on performance

Paolo Bruni
Leif Pedersen
Mike Turner
Jan Vandensande

ibm.com/redbooks
International Technical Support Organization

DB2 for z/OS and OS/390 Version 7
Performance Topics

July 2001

SG24-6129-00
Take Note! Before using this information and the product it supports, be sure to read the general
information in “Special notices” on page 235.

First Edition (July 2001)

This edition applies to Version 7 of IBM DATABASE 2 Universal Database Server for z/OS and OS/390 (DB2
for z/OS and OS/390 Version 7), Program Number 5675-DB2 and the DB2 Version 7 Utilities Suite, Program
Number 5697-E98.

Comments may be addressed to:


IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in
any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 2001. All rights reserved.


Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set
forth in GSA ADP Schedule Contract with IBM Corp.
Contents

Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Special notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
IBM trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Key performance enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Performance highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Query performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Transaction performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Utility performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.4 Subsystem and host platform synergy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.5 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Chapter 2. Overview of DB2 Version 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11


2.1 DB2 at a glance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.1 Application enablement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.2 Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.3 Network computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.4 Performance and availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.5 Data sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.6 New features and tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 DB2 V7 packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Net Search Extender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Migration to DB2 Version 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.1 Planning for migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.2 Reasons to migrate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.1 Functions requiring no effort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.2 Functions requiring a small effort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.3 Functions requiring some planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

Chapter 3. SQL performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


3.1 Changes to Explain tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.1 PLAN_TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.2 DSN_STATEMNT_TABLE changed columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.3 Explain headings used in this chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Parallelism for IN-List index access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33



3.3 Row value expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.1 Equal and not equal predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.2 Quantified predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.3 IN predicate with subquery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3.4 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.5 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.7 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4 Correlated subquery to join transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4.1 Removal of uniqueness constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.2 Support for EXISTS predicate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4.3 Searched UPDATE and DELETE support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4.4 Subqueries that will still not be transformed in DB2 V7 . . . . . . . . . . . . . . . . . . . . 42
3.4.5 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4.6 Restrictions and potential problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4.8 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.5 UPDATE/DELETE with self-referencing subselect . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.5.1 Executing the self-referencing UPDATE/DELETE . . . . . . . . . . . . . . . . . . . . . . . . 46
3.5.2 Restrictions on usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.5.3 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.5.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.5.5 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.6 UNION everywhere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6.1 UNION syntax changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.6.3 Requirements for subquery pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.6.4 UNION ALL materialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.6.5 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.6.7 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.7 Scrollable cursors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.7.1 Functional description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.7.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.7.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.7.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.8 Fetch first n rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.8.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.8.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.8.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.8.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.9 MIN/MAX enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.9.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.9.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.9.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.9.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.10 Fewer sorts with ORDER BY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.10.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.10.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.10.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.10.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.11 Joining on columns of different data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.11.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77



3.11.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.11.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.11.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.12 VARCHAR index-only access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.12.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.12.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.12.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.12.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.13 Bind improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.13.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.13.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.13.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.13.4 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.14 Star join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.14.1 Recent star join enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

Chapter 4. DB2 subsystem performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87


4.1 Asynchronous preformat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.1.1 Asynchronous preformat performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.2 Parallel data set open . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.2.1 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.3 Virtual storage constraint relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.3.1 Instrumentation enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.3.2 Database address space — virtual storage consumption. . . . . . . . . . . . . . . . . . . 92
4.4 CATMAINT utility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.4.1 CATMAINT utility performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.5 Evaluate uncommitted. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.5.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.5.2 Recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.6 Log-only recovery improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.7 Reduced logging for variable-length rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.8 DDL concurrency improvement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

Chapter 5. Availability and capacity enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . 101


5.1 Online DSNZPARM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.1.1 SET SYSPARM command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.1.2 Displaying the current settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.1.3 Parameter behavior with online change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.2 Consistent restart enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.3 CANCEL THREAD NOBACKOUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.4 Adding workfiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.5 Log Manager enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.5.1 Log suspend and resume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.5.2 Time controlled checkpoint interval. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.5.3 Retry of log read request . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.5.4 Long running UR warning message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.6 IRLM enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.6.1 Dynamic change of IRLM timeout value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.6.2 Subsecond deadlock detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

Chapter 6. Utility performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113


6.1 Dynamic utility jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.1.1 LISTDEF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.1.2 TEMPLATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

6.1.3 OPTIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6.1.4 RESTART processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.1.5 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.2 LOAD partition parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.2.1 LOAD partitions in parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.2.2 Building indexes in parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.2.3 Performance measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.3 Online LOAD RESUME. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
6.3.1 Performance measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.3.2 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.3.3 Performance recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.4 UNLOAD utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.4.1 Performance measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
6.4.2 Unloading partitions in parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
6.4.3 Unload from copy data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.5 Online REORG enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.5.1 Fast switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.5.2 BUILD2 parallelism for NPIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.5.3 DRAIN and RETRY. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.6 RUNSTATS enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.6.1 Statistics history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.6.2 DB2 catalog changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.6.3 Force update of aggregate statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
6.6.4 Performance considerations using new statistics columns . . . . . . . . . . . . . . . . . 141
6.7 COPYTOCOPY utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.7.1 Performance measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.8 Cross Loader. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.8.1 The EXEC SQL statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6.9 MODIFY RECOVERY enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

Chapter 7. Network computing performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151


7.1 FETCH FIRST n ROWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
7.1.1 OPTIMIZE FOR m ROWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.1.2 Performance measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.1.3 Singleton select. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
7.2 Stored procedures with COMMIT/ROLLBACK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
7.3 Java stored procedures with JVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
7.3.1 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7.4 SQLJ and JDBC performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.5 Unicode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
7.5.1 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
7.6 DRDA server elapsed time reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

Chapter 8. Data sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165


8.1 Coupling Facility Name Class Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.2 Group attach enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
8.2.1 Bypass group attach processing on local connect — groupoverride . . . . . . . . . 167
8.2.2 Support for local connect using STARTECB parameter . . . . . . . . . . . . . . . . . . . 167
8.2.3 Group attach support for DL/I batch jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
8.2.4 CICS group attach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
8.2.5 IMS group attach. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.3 IMMEDWRITE bind option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.3.1 IMMEDWRITE bind option before DB2 V7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170



8.3.2 IMMEDWRITE bind option in DB2 V7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.3.3 IMMEDWRITE bind option performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.3.4 IMMEDWRITE bind option performance measurement . . . . . . . . . . . . . . . . . . . 171
8.4 DB2 Restart Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.4.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
8.4.2 Command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
8.4.3 Performance measurement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
8.5 Persistent Coupling Facility Structure sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
8.6 Miscellaneous items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.6.1 Notifying incomplete units of recovery during shutdown . . . . . . . . . . . . . . . . . . . 176
8.6.2 More efficient message handling for CF/structure failure . . . . . . . . . . . . . . . . . . 176
8.6.3 Automatic Alter for CF structures (Auto Alter). . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.6.4 CF lock structure duplexing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.6.5 CF lock structure size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.6.6 Purge retained locks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
8.6.7 CURRENT MEMBER register. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

Chapter 9. Performance tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179


9.1 IFI enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9.1.1 New IFCIDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9.1.2 Changed IFCIDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9.1.3 DSNZPARM to synchronize IFI records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9.2 DB2 PM changes and new functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9.2.1 Statistics Report Long . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9.2.2 Accounting Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
9.3 Index Advisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.3.1 Index Advisor process for DB2 for z/OS and OS/390 . . . . . . . . . . . . . . . . . . . . . 193
9.3.2 Modeling DB2 for z/OS and OS/390 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
9.3.3 Collecting the SQL workload. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
9.3.4 Running the Index Advisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.3.5 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.3.6 Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.4 DB2 Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

Chapter 10. Synergy with host platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197


10.1 zSeries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
10.1.1 DB2 and zSeries overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
10.1.2 Performance measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.2 VSAM Data Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.2.1 Hardware versus software striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.2.2 Description of VSAM striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.2.3 VSAM Data Striping and DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.2.4 Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
10.3 ESS enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.3.1 Parallel Access Volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.3.2 VSAM I/O striping versus PAV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
10.3.3 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

Appendix A. Updatable DB2 subsystem parameters. . . . . . . . . . . . . . . . . . . . . . . . . . 211

Appendix B. Enabling VSAM I/O striping in OS/390 V2R10 . . . . . . . . . . . . . . . . . . . . 215

Appendix C. A method to assess the size of the buffer pools . . . . . . . . . . . . . . . . . . 219


Statistics report information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Spreadsheet example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221

Appendix D. Sample PL/1 program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

Appendix E. Sample utilities output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227


COPY output with LISTDEF and TEMPLATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Job output of LOAD partition in parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Job output of UNLOAD utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Job output of Online REORG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232

Special notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
IBM Redbooks collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238

Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241



Figures

1-1 Improvements in maximum log rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7


1-2 Elapsed time in renaming a data set by device type . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1-3 Online LOAD RESUME and INSERT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1-4 CATMAINT executions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3-1 DB2 Version 7 plan table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3-2 Table and index definitions for IN-List parallelism example. . . . . . . . . . . . . . . . . . . . 29
3-3 IN-List parallelism for single table access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3-4 IN-List parallelism for outer table of join. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3-5 IN-List performance test for single table access . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3-6 IN-List performance test 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3-7 Row value expression in basic predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3-8 Row value expression in quantified predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3-9 IN predicate syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3-10 Row value expression for IN subquery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3-11 Row value expressions Explain example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3-12 Row value expression and Explain example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3-13 Rewritten query and Explain example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3-14 Table and index definitions for correlated subquery examples . . . . . . . . . . . . . . . . . 39
3-15 Subquery transformation — example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3-16 Subquery transformation — example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3-17 Subquery transformation — example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3-18 Subquery transformation for UPDATE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3-19 Subquery transformation for DELETE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3-20 Subquery transformation performance test for select . . . . . . . . . . . . . . . . . . . . . . . . 43
3-21 Subquery transformation performance test for update . . . . . . . . . . . . . . . . . . . . . . . 44
3-22 UPDATE with self-referencing subquery in WHERE clause . . . . . . . . . . . . . . . . . . . 45
3-23 UPDATE with self-referencing subquery in SET clause . . . . . . . . . . . . . . . . . . . . . . 45
3-24 Two-step processing for an UPDATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3-25 Positioned update with self-referencing subquery not supported . . . . . . . . . . . . . . . 47
3-26 Self-referencing UPDATE performance test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3-27 CREATE VIEW syntax. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3-28 UNION in a view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3-29 Table expression syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3-30 UNION in a table expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3-31 Basic predicate syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3-32 Quantified predicate syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3-33 IN predicate syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3-34 EXISTS predicate syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3-35 UNION in a predicate example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3-36 DECLARE GLOBAL TEMPORARY TABLE syntax . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3-37 UNION optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3-38 Explain output for UNION optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3-39 UNION without pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3-40 Distribution of joins and aggregations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3-41 Possible query rewrite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3-42 Multiple ORDER tables, one per month. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3-43 DECLARE CURSOR syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3-44 Fetch syntax. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62



3-45 Open scrollable cursor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3-46 Fetch positioning options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3-47 Cursor definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3-48 Using a DTT instead of a scrollable cursor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3-49 Repositioning without scrollable cursor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3-50 Repositioning with scrollable cursor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3-51 Existence checking SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3-52 FETCH FIRST effect on access path. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3-53 FETCH FIRST SQL traces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3-54 Existing MIN function access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3-55 MAX function access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3-56 ORDER BY sort avoidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3-57 Sort avoidance in a join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3-58 Join column mismatch Explains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3-59 Effect of the RETVLCFK parameter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3-60 Index-only access when VARCHAR in WHERE clause only. . . . . . . . . . . . . . . . . . . 80
3-61 Star join example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4-1 Synchronous and asynchronous preformat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4-2 CPU bound test case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4-3 I/O bound test case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4-4 Parallel data set open performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4-5 STORAGE STATISTICS report layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4-6 Comparison of migration performance — non-data-sharing case . . . . . . . . . . . . . . . 96
4-7 Comparison of migration performance — data sharing case. . . . . . . . . . . . . . . . . . . 97
5-1 RECOVER POSTPONED CANCEL command response . . . . . . . . . . . . . . . . . . . . 106
5-2 DSN1LOGP SUMMARY OF COMPLETED EVENTS report. . . . . . . . . . . . . . . . . . 106
5-3 DSN1LOGP example - START DATABASE FORCE log record . . . . . . . . . . . . . . . 107
6-1 Sample JCL from DB2 V6 and DB2 V7 using LISTDEF . . . . . . . . . . . . . . . . . . . . . 114
6-2 DISPLAY UTILITY output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6-3 Sample JCL from DB2 V6 and DB2 V7 using TEMPLATE . . . . . . . . . . . . . . . . . . . 115
6-4 Output from PREVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6-5 Parallel LOAD jobs per partition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6-6 Partition parallel LOAD without building indexes in parallel. . . . . . . . . . . . . . . . . . . 119
6-7 DISPLAY UTILITY output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6-8 Partition parallel LOAD with indexes built in parallel . . . . . . . . . . . . . . . . . . . . . . . . 121
6-9 UNLOAD enhanced functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
6-10 New UNLOAD utility versus REORG EXTERNAL UNLOAD. . . . . . . . . . . . . . . . . . 129
6-11 Unloading a partitioned table in parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6-12 UNLOAD utility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6-13 Data unavailability when running REORG SHRLEVEL CHANGE. . . . . . . . . . . . . . 132
6-14 Clustering sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6-15 Indirect row reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6-16 Fragmented LOB table space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6-17 Non-fragmented LOB table space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6-18 Physical view before reorganization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6-19 Physical view after reorganization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6-20 COPYTOCOPY utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6-21 Cross Loader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6-22 Output from Cross Loader using DRDA and 3-part names . . . . . . . . . . . . . . . . . . . 150
7-1 Fetching n rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
7-2 Throughput per SQL statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7-3 CPU time per transaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7-4 Elapsed time outside of DB2 per transaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155



7-5 Non-singleton SELECT versus singleton SELECT . . . . . . . . . . . . . . . . . . . . . . . . . 156
7-6 Interpreted Java stored procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
7-7 Java runtime environment in DB2 V7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8-1 Output of D CF command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
9-1 IFCID 0217 description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
9-2 IFCID 0225 description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9-3 DB2 PM Accounting Class 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9-4 DSNTIPN panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9-5 DB2 PM Statistics Report Long with data set statistics . . . . . . . . . . . . . . . . . . . . . . 186
9-6 DB2 PM Statistics Report data set statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
9-7 IFCID 199 record trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9-8 Statistics report GBP information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
9-9 Statistics report P-lock detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
9-10 Accounting report Class 3 changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
9-11 Accounting global contention detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
9-12 Accounting GBP information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9-13 Index Advisor process for DB2 for z/OS and OS/390 . . . . . . . . . . . . . . . . . . . . . . . 193
10-1 Buffer pool in data spaces performance 64-bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
10-2 Data space versus hiperpool update performance . . . . . . . . . . . . . . . . . . . . . . . . . 201
10-3 Single thread table scan performance — compressed data . . . . . . . . . . . . . . . . . . 202
10-4 Insert performance with and without compression. . . . . . . . . . . . . . . . . . . . . . . . . . 203
10-5 Effect of PAV on I/O response time in msec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
10-6 Parallel query performance with and without PAV. . . . . . . . . . . . . . . . . . . . . . . . . . 208
B-1 DEVSERV QDASD command response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
B-2 Sample data class definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
B-3 Sample storage class definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
B-4 Sample DEFINE CLUSTER command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
B-5 Extract from LISTCAT command response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
C-1 Sample spreadsheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221

Tables

3-1 New PLAN_TABLE columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


3-2 Explain headings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3-3 Columns in the CUSTOMER table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3-4 IN-List performance test results for single table access . . . . . . . . . . . . . . . . . . . . . . 31
3-5 Columns in the PART table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3-6 Columns in the LINEITEM table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3-7 IN-List performance test results for join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3-8 Subquery transformation select performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3-9 Subquery transformation update performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3-10 Subquery transformation and parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3-11 Self-referencing UPDATE test results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3-12 Self-referencing UPDATE versus INSERT and non-self-referencing update . . . . . . 48
3-13 Select 10 million rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3-14 Select 200 thousand rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3-15 Select one row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3-16 Scroll versus normal cursor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3-17 Accounting Class 2 and Class 3 times. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3-18 Locking with scroll cursors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3-19 Scrollable cursor versus DTT bufferpool data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3-20 Scrollable cursor versus DTT Class 2 and 3 times . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3-21 Scrollable cursor versus DTT locking comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3-22 Buffer pool activity for repositioning tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3-23 Class 2 and Class 3 times for repositioning test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3-24 Buffer pool activity for insensitive versus sensitive cursor . . . . . . . . . . . . . . . . . . . . . 69
3-25 Class 2 and 3 times for insensitive versus sensitive cursor. . . . . . . . . . . . . . . . . . . . 69
3-26 MAX test results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3-27 ORDER BY test results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3-28 Join column mismatch test results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3-29 VARCHAR index-only test results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3-30 Star join access plan: case 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3-31 Star join access plan: case 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3-32 Star join access plan: case 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4-1 Catalog tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4-2 CATMAINT performance measurement results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5-1 Subsecond deadlock detection impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6-1 LOAD in parallel without SORTKEYS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6-2 LOAD in parallel with SORTKEYS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6-3 Optimal performance for LOAD without SORTKEYS . . . . . . . . . . . . . . . . . . . . . . . 123
6-4 Optimal performance for LOAD with SORTKEYS . . . . . . . . . . . . . . . . . . . . . . . . . . 124
6-5 Inserting 2,000,000 rows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
6-6 10 parallel jobs inserting 2,000,000 rows with different buffer pool definitions . . . . 126
6-7 Running Online LOAD RESUME in parallel with different buffer pool definitions . . 126
6-8 Online REORG using FASTSWITCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6-9 BUILD2 — example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6-10 BUILD2 — example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6-11 BUILD2 — example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6-12 BUILD2 — example 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6-13 BUILD2 — example 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137



6-14 BUILD2 — example 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6-15 BUILD2 — example 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6-16 BUILD2 — example 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6-17 Using COPYTOCOPY utility to make additional full image copies . . . . . . . . . . . . . 148
6-18 Modify recovery enhancement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7-1 Mapping DB2 data types to Java data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
7-2 Recommendations and responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
8-1 Immediate write hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8-2 IMMEDWRITE(PH1) vs. IMMEDWRITE(YES) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8-3 IMMEDWRITE(PH1) vs. IMMEDWRITE(NO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8-4 RESTART LIGHT storage measurement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
8-5 Restart Light — restart time measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
9-1 New IFCIDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
9-2 Changed IFCIDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9-3 Index Advisor workload collection jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10-1 Utility CPU time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10-2 DB2 log throughput and CPU times. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10-3 Define-Extent Domain in DB2 I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207



Preface

IBM DATABASE 2 Universal Database Server for z/OS and OS/390 Version 7 (DB2 Universal
Database Server for z/OS and OS/390 Version 7, or just DB2 V7 throughout this book) is the
eleventh release of DB2 for MVS. It brings to this platform many new data support, application
development, and SQL language enhancements for e-business while building upon the
traditional capabilities of availability, scalability, and performance.

This IBM Redbook describes the performance implications of several enhancements made
available with DB2 V7. These enhancements include performance and availability delivered
through new and enhanced utilities, dynamic changes to the value of many of the system
parameters without stopping DB2, and the new Restart Light option for data sharing
environments. There are many improvements to usability, such as scrollable cursors, support
for UNICODE encoded data, support for COMMIT and ROLLBACK within a stored procedure,
the option to eliminate the DB2 precompile step in program preparation, and the definition of
views with the operators UNION or UNION ALL.

This book will help you understand why migrating to Version 7 of DB2 can be beneficial for
your applications and your DB2 subsystems. It provides considerations and
recommendations for the implementation of the new functions and for evaluating their
performance and their applicability to your DB2 environments.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the
International Technical Support Organization San Jose Center.

Paolo Bruni is on assignment as a Data Management Specialist for DB2 for OS/390 at the
International Technical Support Organization, San Jose Center, where he conducts projects
on all areas of DB2 for OS/390. Before joining the ITSO in 1998, Paolo worked as a
Consultant IT Architect in IBM Italy. During his 31 years with IBM, in development and in the
field, Paolo’s work has been mostly related to database systems.

Leif Pedersen is an Advisory IT specialist in IBM Denmark with 12 years of experience with
DB2. During this period, Leif first designed and developed business applications running
against DB2, and, in the last 8 years, he assumed the role of DB2 system administrator. His
areas of expertise include DB2 performance and database design.

Mike Turner is an independent consultant based in the United Kingdom. He has 30 years of
experience in the mainframe database management field. His areas of expertise include DB2
performance, recovery, and data sharing. He has taught extensively on these topics.

Jan Vandensande is an IT consultant and teacher at Jeeves Consulting in Belgium. He has


20 years of experience in the database management field. Before joining Jeeves Consulting
in 1995, Jan worked as an IMS and DB2 system administrator in the financial sector. His
areas of expertise include backup and recovery, data sharing, and performance.

Thanks to the following people for their contributions to this project:

Emma Jacobs
Yvonne Lyon
IBM International Technical Support Organization, San Jose Center



Jeff Berger
Chuck Bonner
Ben Budiman
John Campbell
Hsiuying Cheng
Roy Cornford
Dan Courter
Kathy Devine
Graig Friske
Gene Fu
James Guo
Robert Gilles
Akiko Hoshikawa
Eva Hu
Jeff Josten
Gopal Krishnan
Ching Lee
Lee Liu
Bruce McAlister
Roger Miller
Mary Mui
Chris Munson
Mai Nguyen
Jim Pickel
Jim Pizor
Dave Raiman
Jim Ruddy
Akira Shibamyia
Kalpana Shyam
Joe Sinnott Jr
Jim Teng
Annie Tsang
Yoichi Tsuji
Steve Turnbough
Frank Vitro
Jane Wang
Yung Wang
Maryela Weihrauch
Manfred Werner
David Witkowski
Chung Wu
Casey Young
Jon Youngberg
IBM Silicon Valley Laboratory

Ying Chang
IBM San Jose Laboratory

Vasilis Karras
IBM International Technical Support Organization, Poughkeepsie Center

Norbert Jenninger
DB2 PM Development, IBM Germany

Bart Steegmans
IBM Belgium



Special notice
This publication is intended to help managers and professionals understand and evaluate the
performance and applicability to their environment of the new functions introduced by IBM
DATABASE2 Universal Database Server for z/OS and OS/390 Version 7 and DB2 Version 7
Utilities Suite. The information in this publication is not intended as the specification of any
programming interfaces that are provided by IBM DB2 UDB Server for z/OS and OS/390
Version 7 and the DB2 Version 7 Utilities Suite. See the PUBLICATIONS section of the IBM
Programming Announcement for IBM DB2 UDB Server for z/OS and OS/390 Version 7 and
DB2 Version 7 Utilities Suite for more information about what publications are considered to
be product documentation.

IBM trademarks
The following terms are trademarks of the International Business Machines Corporation in the
United States and/or other countries:
e (logo)® Redbooks
IBM ® Redbooks Logo
OS/390 S/390
DB2 DATABASE 2

Comments welcome
Your comments are important to us!

We want our IBM Redbooks to be as helpful as possible. Please send us your comments
about this or other Redbooks in one of the following ways:
򐂰 Use the online Contact us review redbook form found at:
ibm.com/redbooks
򐂰 Send your comments in an Internet note to:
redbook@us.ibm.com
򐂰 Mail your comments to the address on page ii.


Chapter 1. Introduction
IBM DB2 Universal Database Server for z/OS and OS/390 Version 7 (DB2 V7 throughout this
redbook) is the eleventh release of DB2 for MVS and brings to this platform the data support,
application development and SQL language enhancements for e-business while building
upon the traditional capabilities of availability, scalability, and performance.

The DB2 V7 environment is available for the z/OS and OS/390 platforms either for new
installations of DB2 or for migrations from DB2 UDB for OS/390 Version 6. A skip-level
migration, from DB2 for OS/390 Version 5, is also supported for the first time.

In this chapter we briefly introduce the major DB2 V7 enhancements specifically related to
performance in the areas of SQL, subsystem function, capacity and availability, utilities,
network computing, data sharing, and performance tools, and provide some general
considerations on the utilization and expectation of the performance related functions.

Chapter 2, “Overview of DB2 Version 7” on page 11 contains a more general but brief
description of most of the functions available with DB2 V7. Detailed information is provided in
the documents and Web sites listed in “Related publications” on page 237. The redbook DB2
UDB Server for OS/390 and z/OS Version 7 Presentation Guide, SG24-6121, is also useful
and is referenced frequently throughout this publication.

This redbook is not exhaustive and does not include all descriptions and measurements
related to the new V7 features. DB2 and the MVS platform continue to evolve; the related
performance measurements are a work in progress. More information is planned to be
provided before the end of the year and will be included in other redbooks and redpapers.

As for each new version of DB2, the goal of DB2 V7 is to provide performance as good as or
better than the DB2 V6 equivalent functions. This performance goal is verified by the DB2
product development staff at IBM’s Silicon Valley Laboratory in an iterative process, by means
of analyzing the path length for single-thread and multi-thread measurements, at the different
stages of DB2 product development. This is an on-going process during the whole
development cycle that involves a minimum of two very active releases of the product. Similar
measurements take place also for the performance assessment of the new DB2 V7 functions.

This document outlines the performance-related functions of DB2 V7, describes the
measured environments, and draws conclusions and recommendations based on the
measurements.



1.1 Key performance enhancements
This section lists the major performance and availability enhancements provided by DB2 V7.
For convenience of presentation, they are grouped into the same areas of the product that are
detailed in the following chapters of this redbook.

SQL
These enhancements affect SQL performance:
– Plan table changes: The V7 PLAN_TABLE has two new columns, and both the
PLAN_TABLE and the DSN_STATEMENT_TABLE have some new values in the
existing columns.
– Parallelism for IN-list index access: Parallelism for an IN-list predicate is now
allowed whenever it is supported by an index and the prerequisites for parallelism are
met (read-only SQL, CURRENT DEGREE = ‘ANY’).
– Row value expressions: A predicate can compare a list of columns to a list of
expressions.
– Correlated subquery to join transformation: There are some additional situations
when transformation can take place for correlated subqueries only.
– UPDATE/DELETE with self-referencing subselect: DB2 now allows searched
UPDATE and DELETE statements to use the target tables within subqueries in the
WHERE or SET clauses.
– Union everywhere: The use of unions is expanded to anywhere that a subselect
clause was valid in previous versions of DB2.
– Scrollable cursors: Scrolling backwards as well as forwards is now allowed. This
enhancement also provides the ability to place the cursor at a specific row within the
result set. A scrollable cursor can be sensitive or insensitive to updates made outside
the cursor, by the same user or by different users.
– MIN/MAX enhancement: A more efficient use of indexes for evaluating MIN and MAX
functions is now provided.
– Fewer sorts with order by: A better use of indexes now provides ordering of result
sets.
– Joining on columns of different data types: A join predicate can be Stage 1 and
potentially Indexable when there is a data type or length mismatch for numeric
columns. In order for this to be true you must, in your SQL, cast one column to the data
type of the other.
– VARCHAR index-only access: It is no longer necessary to set RETVLCFK=YES to
get index-only access in this special case.
– Bind improvements: Improvements to the DB2 Optimizer processing will result in
significantly reduced storage requirements and reduced CPU use during the BIND
command execution for joins involving more than 9 tables.
– Star join: Externalized parameter to allow star join at subsystem level and other
maintenance in this rapidly evolving area.

DB2 subsystem
These enhancements affect DB2 subsystem performance:
– Asynchronous preformat: Asynchronous preformat improves insert performance.
– Parallel data set open: Data set open/close processing for partitions in the same
partitioned table space/index is no longer a serial process.



– Virtual Storage Constraint Relief and new traces: VSCR is a permanent concern;
new traces help you in monitoring virtual storage usage.
– CATMAINT utility: Performance of the CATMAINT utility is improved.
– Evaluate uncommitted: A new DSNZPARM, EVALUNC, specifies whether predicate
evaluation can occur on uncommitted data.
– Log-only recovery improvement: Frequent update of HPGRBRBA improves log-only
recovery.
– Reduced logging for variable-length rows: Under certain conditions the amount of
logging done for updates of variable-length rows can be reduced.
– DDL concurrency improvement: Row level locking on catalog table spaces with no
links defined can improve DDL concurrency.

Availability and capacity


These enhancements provide new and enhanced functionality in this area:
– Online DSNZPARMs: You can now activate a new set of DB2 subsystem parameters
without having to recycle DB2. This option allows for a dynamic change of a major part
of the subsystem startup parameters.
– Consistent restart: RECOVER and LOAD REPLACE are now allowed against objects
associated with postponed units of recovery. Cancel of postponed recovery is now an
option.
– CANCEL THREAD NOBACKOUT: A NOBACKOUT option is added to the CANCEL
THREAD command.
– Adding workfiles: You can now CREATE and DROP a workfile table space without
having to stop the workfile database.
– Log manager: Suspend/resume log activity allows for snapshot copy of the DB2
environment. Checkpoint frequency can be controlled by a time interval. Critical log
read access errors do not force DB2 down, but can be retried. A new warning message
for long running units of recovery is introduced.
– Dynamic change of time-out value: A new option on the modify IRLM command
allows the DBMS time-out value as known to IRLM to be dynamically modified.

Utilities
These are the enhancements to various DB2 utilities and functions:
– Dynamic utility jobs: It is now possible to specify a pattern of DB2 objects to run
utilities against and dynamically allocate unload data sets, work data sets, sort data
sets, and so on.
– LOAD partition parallelism: It is now possible to load partitioned table spaces in
parallel with only one job instead of running multiple jobs per partitions.
– Online LOAD RESUME: It is now possible to load data into tables with a utility that
acts like a normal insert program.
– UNLOAD: This is a new utility to unload data from tables without any influence on
business applications.
– Online REORG enhancements: The FAST SWITCH phase has been improved and
BUILD2 phase is now supporting parallelism for building NPIs. A new drain
specification statement has been added to overriding the utility locking time-out value
specified at subsystem level.

– Statistics history: It is now possible to store all generations of statistics information
and not only the current information from RUNSTATS.
– COPYTOCOPY: This new utility provides the opportunity to make additional full or
incremental image copies.
– Cross Loader: This new functionality makes it possible to move data across multiple
databases, even across multiple platforms.

Network computing
These are enhancements to network computing in DB2 V7:
򐂰 FETCH FIRST n ROWS ONLY: This allows a limit on the number of rows returned to a
result set.
򐂰 Stored procedures with COMMIT/ROLLBACK: This makes it possible to implement
more complex stored procedures that are invoked by Windows and UNIX clients.
򐂰 Java stored procedures with JVM: This implements support for Java stored procedures
as interpreted Java executing in a Java Virtual Machine (JVM) as well as support for
user-defined external (non-SQL) functions written in Java.
򐂰 Considerations on SQLJ and JDBC performance: These recommendations, if they are
implemented, can have a dramatic effect on the performance of your SQLJ applications.
򐂰 UNICODE: This provides full support for a third encoding scheme.
򐂰 DRDA server elapsed time reporting: This makes it easier to monitor bottlenecks in
client/server applications.

Data sharing
These enhancements improve availability, performance, and usability of DB2 data sharing:
򐂰 Coupling Facility Name Class Queues: Name Class Queues allows the Coupling
Facility Control Code (CFCC) to organize the GBP directory entries in queues based on
DBID, PSID, and partition number. This enhancement reduces the performance impact of
purging group buffer pool (GBP) entries for GBP dependent page sets.
򐂰 Group attach enhancements: An application can now connect to a specific DB2 member
even if the DB2 member name and the DB2 group name are identical. Applications, using
the group attach name, can request to be notified when a member of the data sharing
group becomes active on the executing OS/390 image. With DB2 V7 you can specify the
group attach name for DL/I batch applications.
򐂰 IMMEDWRITE bind option: The IMMEDWRITE bind/rebind option, introduced by APAR
in DB2 V5 and V6, is fully implemented in DB2 V7.
򐂰 DB2 Restart Light: The “light” restart option allows you to restart a failed DB2 member on
the same or another OS/390 image with a minimum storage footprint in order to release
any retained locks.
򐂰 Persistent Coupling Facility structure sizes: After altering the size of DB2 structures,
DB2 V7 keeps track of their size across DB2 executions and structure allocations.
򐂰 Miscellaneous items:
• During normal shutdown processing, DB2 now notifies you of any unresolved units
of work (indoubt or postponed abort UOWs) that will hold retained locks after the
DB2 member has shut down.
• DB2 V7 reduces the MVS to DB2 communication that occurs during a coupling
facility/structure failure. This enhancement will decrease recovery time in such a
failure situation.



• OS/390 Version 2 Release 10 introduces a new function, Auto Alter, for automatic
tuning of CF structures.
• APAR PQ45407 allows the user to enable coupling facility duplexing for the IRLM
lock structure.

Performance tools
These enhancements are related to performance tools in DB2 V7:
򐂰 IFI enhancements for V7: Several new IFCIDs have been added, others have changed,
and the DSNZPARM has a new parameter to help in synchronizing the writing of IFCID
records.
򐂰 DB2 PM changes and new functions: Statistics and accounting records have been
improved to help in identifying the event involved.
򐂰 Index Advisor: This is an extension of the DB2 for UNIX and Windows optimizer that
provides recommendations for indexes based on the tables and views defined in the
database, the existing indexes, and the SQL workload.
򐂰 DB2 Estimator: Several new DB2 for OS/390 V7 functions have been added to DB2
Estimator, including: the new UNLOAD utility, more parallel LOAD of partitions, UNICODE
encoding scheme, new built-in functions, the FETCH FIRST n ROWS ONLY clause, and
scrollable cursors.

Other enhancements
These improvements are not strictly internal to DB2, but still impact DB2 performance:
򐂰 DB2 and zSeries: DB2 will benefit from large real memory support, faster processors,
and better hardware compression
򐂰 VSAM striping: Striping becomes available for VSAM data sets with DFSMS in OS/390
V2R10
򐂰 ESS enhancements: The IBM Enterprise Storage Server (ESS) exploits the Parallel
Access Volume and Multiple Allegiance features of OS/390. FlashCopy is the ESS feature
to use for DB2 backup in combination with log suspend/resume.

1.2 Performance highlights


DB2 V7 offers new capabilities and possible performance improvements for most areas of the
product to all types of users: application programmers, system programmers, database
administrators, and operators. It also offers tremendous improvements in availability.

Here we list the performance enhancements which we considered most important based on
current measurement results and value. We have arbitrarily grouped them in the areas of:
򐂰 Query performance
򐂰 Transaction performance
򐂰 Utility performance
򐂰 Subsystem and host platform synergy
򐂰 Migration

1.2.1 Query performance


As for the previous releases, DB2 V7 continues to further enhance the exploitation of
available indexes for performance advantage and widen the applicability of stage 1
predicates. More efficient access paths are now chosen for:

򐂰 Correlated subquery transformation to join:
SELECT A1 FROM TABA WHERE A1 IN
(SELECT B1 FROM TABB WHERE B2=A2)...
This is transformed into:
SELECT A1 FROM TABA,TABB WHERE A1=B1 AND B2=A2...
The transformation improves performance by allowing a reduction in CPU time and up to
10 times faster elapsed time. The larger the outer table in comparison with the result table,
the more the benefits. The IN, =ANY, =SOME, and EXISTS predicates can be transformed to
join for SELECT, UPDATE, and DELETE. Duplicate rows must be removed from TABB.
򐂰 Sorting can be avoided for the = predicate on ORDER BY key:
SELECT FROM TABA WHERE A2=5 ORDER BY A1,A2,A3
This is equivalent to:
SELECT FROM TABA WHERE A2=5 ORDER BY A1,A3
The presence of an index on A1,A3 may now avoid a sort if the optimizer evaluates it as
convenient.
򐂰 The same index can now be used to have a direct access to both MAX and MIN values,
avoiding the need for 2 indexes, one ascending, one descending, or a costly index scan:
SELECT MAX(A1) FROM TABA WHERE A1<n
SELECT MIN(A1) FROM TABA WHERE A1>m

Query parallelism is also improved. With DB2 V7, restrictions have been removed and
parallelism on an IN-list predicate is allowed whenever it is supported by an index and the
prerequisites for parallelism are met (read-only SQL, CURRENT DEGREE = ‘ANY’).
Parallelism for IN-list access shows improvements in elapsed time of up to 5 times.

1.2.2 Transaction performance


For applications with a massive number of inserts, like the ERP systems, and the highly
variable workloads typical of e-business, the insert and the related logging can become a
bottleneck. With V7, several enhancements contribute to eliminating the contention caused
by such peak activity.
򐂰 VSAM I/O striping for DB2 Log (see Figure 1-1)
The introduction of the ESS disks has more than doubled the maximum write rate when
comparing to RAMAC technology. The support for VSAM striping applied to Log data sets
shows almost three times improvement, in the case of 4 stripes, when comparing to the
ESS F20 without striping. Additional information is now provided by DB2 statistics when
monitoring the log activity.



[Figure 1-1 is a bar chart of the maximum write rates (MB/sec) to the active logs, comparing
3390-3, RVA T82, Ramac3, RVA X82, ESS E20, and ESS F20 log data sets with 0, 1, 2, and 4 stripes.]

Figure 1-1 Improvements in maximum log rates

򐂰 Asynchronous preformatting
Disk space needs to be preformatted before a row/key can be inserted. Prior to V7,
preformatting takes place at CREATE or REORG time, and then when the limit of the
highest allocated RBA is reached during INSERT. This was a synchronous operation
which could impact performance. With V7, an internal threshold triggers the formatting
asynchronously before the end of the formatted area is reached. The improvement is
consistent with the Data Set Extend Wait Time Class 3 accounting reduction and can
avoid delays of up to one second for deferred writes when multiple I/Os are present.

1.2.3 Utility performance


The utilities have seen major changes with DB2 V7.
򐂰 Faster and more available online REORG
The SWITCH phase is critical for online REORG since it precludes accesses. Prior to V7
most of the time in this phase is spent renaming data sets. The FASTSWITCH option
avoids the data set rename time and provides a 10 times faster SWITCH phase, with a
consequent 10 times improvement in availability. Figure 1-2 shows the improvement per
data set on three device types. The 0.7 to 2 seconds per data set is reduced to 0.05 to 0.2
seconds. FASTSWITCH is now the default.
The BUILD2 phase of online REORG is dedicated to rebuilding the non-partitioning
indexes. With V7 this phase has been revised and parallelism has been added. With one
NPI the improvement is about 30% in CPU and elapsed time. With five NPIs built by
parallel tasks the improvement in elapsed time reaches 80%.

[Figure 1-2 is a bar chart of the SWITCH and FASTSWITCH times, in seconds per data set, for V6 and
V7 on 3390, RVA2, and ESS devices.]

Figure 1-2 Elapsed time in renaming a data set by device type

򐂰 Online LOAD RESUME


This new utility operates functionally like SQL INSERT, but it is managed just like any other
utility. Since it interfaces with INSERT processing at a lower level than the standard API, it reduces the
CPU overhead by 37% when compared to a standard SQL program, while allowing
concurrent access (see Figure 1-3). A corresponding elapsed time reduction is also
possible.

[Figure 1-3 is a bar chart comparing the elapsed and CPU times, in seconds, of an SQL INSERT
program and of Online LOAD RESUME.]

Figure 1-3 Online LOAD RESUME and INSERT



򐂰 Parallel partition LOAD
DB2 V7 can now execute the LOAD of a partitioned table space in parallel by partition
within the same job execution. This execution shows a 30% improvement when compared
with loading the whole table in a single task. DB2 activates one task per partition, as
long as a corresponding input data set has been prepared for each partition. If only one index is
present, it could still be convenient to execute the jobs in parallel by partition; but when
multiple indexes are present, the best option is to use this new parallel partition LOAD with
the SORTKEYS option.
򐂰 The new UNLOAD utility is similar to REORG UNLOAD EXTERNAL, but it provides more
functions and a 15 to 18% reduction in CPU time. REORG UNLOAD EXTERNAL,
introduced with DB2 V6, uses up to 9 times less CPU than DSNTIAUL.

1.2.4 Subsystem and host platform synergy


A traditional strong point of DB2 has always been the tight integration and synergy with the
host platform. DB2 exploits processors, functions and instructions, fully utilizes the parallel
sysplex for data sharing, and takes advantage of the enhancements in storage controllers and
disks. The Workload Manager is a key element in making sure that even at maximum
utilization the business importance is honored in mixed workloads with very low concurrency
overhead.

The introduction of the new zSeries processor and OS/390 2.10 has enabled the exploitation
of real storage exceeding the former 2 GB theoretical limit. The new z900 servers have
increased the single processor speed by 20%, and the number of engines in a complex is
increased to 16. The maximum configurable real storage is 64 GB. The dataspace buffer pool
support, introduced with DB2 V6, can now be utilized to provide virtual storage constraint
relief to the DBM1 address space. Larger buffer pools can reduce disk I/O rate. With virtual
storage and I/O relieved, DB2 can promote an almost linear performance scalability taking full
advantage of the bigger and faster processors.

A dataspace buffer pool with sufficient real storage allows up to 8 million buffers (32 GB with 4
KB pages) with potential for more direct I/O support and more concurrent parallel prefetch. By
contrast, the hiperpool provides less storage, 8 GB total, no direct I/O support, and it cannot
contain updated pages.

The zSeries shows the following performance benefits for DB2:


򐂰 15 to 30% faster on single engine, and 30 to 75% more throughput than the G6 turbo
򐂰 3 to 4 times more efficient hardware compression than G6 turbo
򐂰 30 to 60% faster on compressed data

1.2.5 Migration
CATMAINT, the Catalog migration utility, has been largely improved (see Figure 1-4). The
execution of step1 of the utility, the actual migration, shows more than 10 times reduction in
elapsed time and 40 times reduction in CPU time when comparing a V5->V6 migration with
V6->V7. Catalog getpages are reduced 7 times and the internal sorts have been almost
completely eliminated.

[Figure 1-4 is a bar chart of the CATMAINT CPU and elapsed times, in seconds, for V5->V6, V5->V7,
and V6->V7 migrations.]

Figure 1-4 CATMAINT executions


Chapter 2. Overview of DB2 Version 7


With DB2 V7, the DB2 Family delivers more scalability and availability for your e-business and
business intelligence applications. Using the powerful environment provided by S/390 and
OS/390, and the new zSeries and z/OS, you can leverage your existing applications while
developing and expanding your electronic commerce for the future.

In this chapter, we summarize the main enhancements of DB2 V7, we look at the changes in
the packaging of the product, and we provide some considerations on migration.



2.1 DB2 at a glance
In this section, we briefly introduce the main enhancements introduced by DB2 V7,
associating them with the areas of:
򐂰 Application enablement
򐂰 Utilities
򐂰 Network computing
򐂰 Performance and availability
򐂰 Data sharing
򐂰 Features and tools

2.1.1 Application enablement


Numerous SQL enhancements in this area provide greater flexibility and family compatibility.

UNION everywhere
This enhancement satisfies a long-standing requirement. It provides the ability to define a
view based upon the UNION of subselects: Users can reference the view as if it were a single
table while keeping the amount of data manageable at the table level.
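
As an illustration (the table, column, and view names below are hypothetical), such a view could be
defined as follows:

   CREATE VIEW PAYROLL.ALL_EMP (EMPNO, LASTNAME, STATUS) AS
      SELECT EMPNO, LASTNAME, 'ACTIVE' FROM PAYROLL.EMP_ACTIVE
      UNION ALL
      SELECT EMPNO, LASTNAME, 'RETIRED' FROM PAYROLL.EMP_RETIRED

Queries can then reference PAYROLL.ALL_EMP as if it were a single table.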

Scrollable cursors
Scrollable cursors give your application logic ease of movement through the result table using
simple SQL and program logic. This frees your application from the need to cache the
resultant data or to reinvoke the query in order to reposition within the resultant data.

Support for scrollable cursors enables applications to use a powerful new set of SQL to fetch
data using a cursor at random, in both the forward and backward direction. The syntax can
replace cumbersome logic and coding techniques and improve performance.
Scrollable cursors are especially useful for screen-based applications. You can specify that
the data in the result table remain static, or that it reflect updates dynamically.

You can also specify that the data in the result table remain insensitive or sensitive to
concurrent changes in the database. If you choose to be sensitive to changes, you can
update the database. For example, an accounting application can require that data remain
constant, while an airline reservation system application must display the latest flight
availability information.
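
As a minimal sketch, assuming the V7 sample employee table DSN8710.EMP and illustrative host
variable names, a scrollable cursor follows this pattern:

   DECLARE C1 SENSITIVE STATIC SCROLL CURSOR FOR
      SELECT EMPNO, LASTNAME FROM DSN8710.EMP

   OPEN C1
   FETCH ABSOLUTE +20 FROM C1 INTO :EMPNO, :LASTNAME  -- position directly on the 20th row
   FETCH PRIOR FROM C1 INTO :EMPNO, :LASTNAME         -- scroll backwards one row
   CLOSE C1

Declaring the cursor as INSENSITIVE instead keeps the result table read-only and static.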

Row expression in IN predicate


This function allows the ability to define a subselect whose resultant data is multiple columns,
along with the ability to use those multiple columns as a single unit in comparison operations
of the outer level select. It is useful for multiple column key comparisons and needed for some
key applications.
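
For example, with a hypothetical ORDERS table, a two-column row expression can be compared as a
single unit against a two-column subselect:

   SELECT ORDERNO, STATUS
      FROM ORDERS
      WHERE (CUSTNO, ORDER_DATE) IN
            (SELECT CUSTNO, MAX(ORDER_DATE)
                FROM ORDERS
                GROUP BY CUSTNO)

This retrieves the most recent order of each customer without coding a separate correlated
subquery for each key column.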

Limited fetch
This function consists of a FETCH FIRST ’n’ ROWS SQL clause and a fast implicit close.
A new SQL clause and a fast close improve performance of applications in a distributed
environment. You can use the FETCH FIRST ’n’ ROWS clause to limit the number of rows
that are prefetched and returned by the SELECT statement. You can specify the FETCH
FIRST ROW ONLY clause on a SELECT INTO statement when the query can return more
than one row in the answer set. This tells DB2 that you are only interested in the first row, and
you want DB2 to ignore the other rows.
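
The following sketches, against the sample employee table, show both forms:

   SELECT EMPNO, LASTNAME, SALARY
      FROM DSN8710.EMP
      ORDER BY SALARY DESC
      FETCH FIRST 10 ROWS ONLY

   SELECT SALARY INTO :SALARY
      FROM DSN8710.EMP
      WHERE WORKDEPT = 'A00'
      FETCH FIRST 1 ROW ONLY

In the second statement, DB2 returns the first qualifying row without an error for the additional
rows that the query would otherwise return.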



Enhanced management of constraints
You can specify a constraint name at the time you create primary or unique keys. DB2
introduces a restriction against dropping an index that is required to enforce a constraint.
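
As a sketch (all object names are hypothetical), a named primary key can now be coded directly in
the CREATE TABLE statement:

   CREATE TABLE PAYROLL.DEPT
      (DEPTNO   CHAR(3)      NOT NULL,
       DEPTNAME VARCHAR(36)  NOT NULL,
       CONSTRAINT PK_DEPT PRIMARY KEY (DEPTNO))
      IN PAYROLL.DEPTTS

As in earlier releases, the table remains incomplete until the unique index that enforces the
primary key has been created; once it exists, that index cannot be dropped while the constraint is
in place.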

Precompiler services
With DB2 V7, you can take advantage of precompiler services to perform the tasks currently
executed by the DB2 precompiler. This API can be called by the COBOL compiler. By using
this option, you can eliminate the DB2 precompile step in program preparation and take
advantage of language capabilities that had been restricted by the precompiler. Use of the
host language compiler enhances DB2 family compatibility, making it easier to import
applications from other database management systems and from other operating
environments.

SQL — enhanced stored procedures


Stored procedures, introduced with DB2 V5, have increased program flexibility and portability
among relational databases. DB2 V7 accepts COMMIT and ROLLBACK statements issued
from within a stored procedure. This enhancement will prove especially useful for applications
in which the stored procedure has been invoked from a remote client.

Java support
DB2 V7 implements support for the JDBC 2.0 standard and, in addition, support for
userid/password usage on SQL CONNECT via URL, and the JDBC Driver execution under
IMS.

DB2 V7 also allows you to implement Java stored procedures as both compiled Java using
the OS/390 High Performance Java Compiler (HPJ) and interpreted Java executing in a Java
Virtual Machine (JVM), as well as support for user-defined external (non-SQL) functions
written in Java.

Self-referencing subselect on UPDATE or DELETE


Now, you can use a subselect to determine the values used in the SET clause of an UPDATE
statement. Also, you can have a self-referencing subselect. In previous releases of DB2, in a
searched UPDATE and DELETE statement, the WHERE clause cannot refer to the object
being modified by the statement. V7 removes the restriction for the searched UPDATE and
DELETE statements, but not for the positioned UPDATE and DELETE statements. The
search condition in the WHERE clause can include a subquery in which the base object of
both the subquery and the searched UPDATE or DELETE statement are the same. The
following code sample is for an application which gives a 10% increase to each employee
whose salary is below the average salary for the job code:
UPDATE EMP X SET SALARY = SALARY * 1.10
WHERE SALARY < (SELECT AVG(SALARY) FROM EMP Y
WHERE X.JOBCODE = Y.JOBCODE);

The base object for both the UPDATE statement and the WHERE clause is the EMP table.
DB2 evaluates the complete subquery before performing the update.

Allow DBADM to create views and aliases for others


DBADM authority is expanded to include creating aliases and views for others. This function
is activated at installation time.

XML Extender
DB2 V7 provides more flexibility for your enterprise applications and makes it easier to call
applications. The family adds DB2 XML Extender support for the XML data type. This
extender allows you to store an XML object either in an XML column for the entire document,
or in several columns containing the fields from the document structure.



2.1.2 Utilities
In this section we describe enhancements to several DB2 utilities.

Dynamic utility jobs


With DB2 V7, database administrators can submit utility jobs more quickly and easily. Now
you can:
򐂰 Dynamically create object lists from a pattern-matching expression
򐂰 Dynamically allocate the data sets required to process those objects

Using a LISTDEF facility, you can standardize object lists and the utility control statements
that refer to them. Standardization reduces the need to customize and change utility job
streams over time. The use of TEMPLATE utility control statements simplifies your JCL by
eliminating most data set DD cards. Now you can provide data set templates, and the DB2
product dynamically allocates the data sets that are required based on your allocation
information. Database administrators require less time to maintain utilities jobs, and database
administrators who are new to DB2 will learn to perform these tasks more quickly.
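
A minimal sketch, assuming a hypothetical PAYROLL database, combines the two new statements with a
COPY:

   LISTDEF PAYLIST INCLUDE TABLESPACE PAYROLL.*

   TEMPLATE COPYDS DSN(DB2IC.&DB..&TS..D&DATE.)

   COPY LIST PAYLIST COPYDDN(COPYDS) SHRLEVEL REFERENCE

DB2 expands the list at execution time and dynamically allocates one image copy data set per table
space, substituting the database name, table space name, and date into the pattern.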

UNLOAD
With DB2 V7 you can take advantage of a new utility, UNLOAD, which provides faster data
unloading than was available with the DSNTIAUL program. The UNLOAD utility combines the
unload functions of REORG UNLOAD EXTERNAL with the ability to unload data from an
image copy.
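
As an illustration with hypothetical object names, the control statement can be as simple as:

   UNLOAD TABLESPACE PAYROLL.PAYTS
      PUNCHDDN SYSPUNCH UNLDDN SYSREC
      FROM TABLE PAYROLL.EMP_PAY

The PUNCHDDN data set receives generated LOAD statements that can later be used to reload the
data; adding the FROMCOPY keyword with an image copy data set name unloads from the copy instead
of the table space.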

More parallelism with LOAD with multiple inputs


Using DB2 V7, you can easily load large amounts of data into partitioned table spaces for use
in data warehouse or business intelligence applications. Parallel load with multiple inputs runs
in a single step, rather than in different jobs. The utility loads each partition from a separate
data set so one job can load partitions in parallel. Parallel LOAD reduces the elapsed time for
loading the data when compared to loading the same data with a single job in earlier
releases. Using load parallelism is much easier than creating multiple LOAD jobs for
individual parts.

The SORTKEYS keyword enables the parallel index build of indexes. Each load task takes
input from a sequential data set and loads the data into a corresponding partition. The utility
then extracts index keys and passes them in parallel to the sort task that is responsible for
sorting the keys for that index. If there is too much data to perform the sort in memory, the sort
product writes the keys to the sort work data sets. The sort tasks pass the sorted keys to their
corresponding build task, each of which builds one index. If the utility encounters errors
during the load, DB2 writes error and error mapping information to the error and map data
sets.
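
A sketch of the control statement, with illustrative table and DD names, where each partition is
fed from its own input data set:

   LOAD DATA LOG NO SORTKEYS 12000000
      INTO TABLE PAYROLL.PAYTAB PART 1 REPLACE INDDN INFILE1
      INTO TABLE PAYROLL.PAYTAB PART 2 REPLACE INDDN INFILE2
      INTO TABLE PAYROLL.PAYTAB PART 3 REPLACE INDDN INFILE3

The SORTKEYS estimate of the number of keys enables the parallel index build described above.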

Online LOAD RESUME


Earlier releases of DB2 restrict access to data during LOAD processing. DB2 V7 gives you
the choice of allowing user read and write access to the data during LOAD processing, so you
can load data concurrently with user transactions.
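
A sketch of the control statement (the table name is illustrative):

   LOAD DATA RESUME YES SHRLEVEL CHANGE
      INTO TABLE PAYROLL.PAYTAB

SHRLEVEL CHANGE is what distinguishes online LOAD from the classic LOAD RESUME, which defaults to
SHRLEVEL NONE and takes exclusive control of the table space.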

Cross Loader
This enhancement allows the result set of an SQL statement, declared as a cursor in an EXEC SQL
utility statement, to be used as input to the LOAD utility. Both local DB2 subsystems and remote
DRDA-compliant databases can be accessed.
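
A sketch follows, with illustrative names; the three-part table name assumes that a remote
location REMOTEDB has been defined in the communications database:

   EXEC SQL
      DECLARE C1 CURSOR FOR
      SELECT * FROM REMOTEDB.PAYROLL.EMP_PAY
   ENDEXEC

   LOAD DATA INCURSOR(C1) REPLACE
      INTO TABLE PAYROLL.EMP_PAY_COPY

The columns returned by the cursor must match, or be assignable to, the columns of the target
table.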



Online REORG enhancements
Online REORG makes your data more available. Online REORG enhancements improve the
availability of data in two ways: Online REORG no longer renames data sets, greatly reducing
the time that data is unavailable during the SWITCH phase. You specify a new keyword,
FASTSWITCH, which keeps the data set name unchanged and updates the catalog to
reference the newly reorganized data set. The time savings can be quite significant when
DB2 is reorganizing hundreds of table spaces and index objects. Also, additional parallel
processing improves the elapsed time of the BUILD2 phase of REORG SHRLEVEL(CHANGE) or
SHRLEVEL(REFERENCE).
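
A sketch of an online REORG using the new options (object names and values are illustrative):

   REORG TABLESPACE PAYROLL.PAYTS
      SHRLEVEL CHANGE FASTSWITCH YES
      MAPPINGTABLE PAYROLL.MAP_TBL
      DRAIN_WAIT 30 RETRY 6 RETRY_DELAY 60

FASTSWITCH YES is the default; MAPPINGTABLE is required with SHRLEVEL CHANGE, and the drain and
retry values override the subsystem locking time-out for the drains taken during the LOG and
SWITCH phases.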

CopyToCopy
This feature provides the capability to produce additional image copies recorded in the DB2
catalog.
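
For example, to register a recovery-site copy taken from the most recent full image copy (names
are illustrative):

   COPYTOCOPY TABLESPACE PAYROLL.PAYTS
      FROMLASTFULLCOPY
      RECOVERYDDN(RSCOPY)

The new copy is recorded in SYSIBM.SYSCOPY without another pass over the table space data.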

Statistics history
As the volume and diversity of your business activities grow, you may require changes to the
physical design of DB2 objects. V7 of DB2 collects statistics history to track your changes.
With historical statistics available, DB2 can predict the future space requirements for table
spaces and indexes more accurately and run utilities to improve performance. DB2 Visual
Explain utilizes statistics history for comparison with new variations that you enter so you can
improve your access paths. DB2 stores statistics in catalog history tables. To maintain
optimum performance of processes that access the tables, use the MODIFY STATISTICS
utility. The utility can delete records that were written to the catalog history tables before a
specific date or that are recorded as a specific age.
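
A sketch of the new utility, deleting history rows older than roughly a year (the object name and
the age value are illustrative):

   MODIFY STATISTICS TABLESPACE PAYROLL.PAYTS
      DELETE ALL AGE 365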

2.1.3 Network computing


Network computing includes the following enhancements:

Global transactions
Privileged applications can use multiple DB2 agents or threads to perform processing that
requires coordinated commit processing across all the threads. DB2 V7 (and V6 RML), via a
transaction processor, treats these separate DB2 threads as a single “global transaction” and
commits all or none.

Security enhancements
You can more easily manage your workstation clients who seek access to data and services
from heterogeneous environments with DB2 support for Windows Kerberos authentication.
This enhancement:
򐂰 Eliminates the flow of unencrypted user IDs and passwords across the network.
򐂰 Enables single-logon capability for DRDA clients by using the Kerberos principal name as
the global identity for the end user.
򐂰 Simplifies security administration by using the Kerberos principal name for connection
processing and by automatically mapping the name to the local user ID.
򐂰 Uses the Resource Access Control Facility (RACF) product to perform much of the
Kerberos configuration. RACF is a familiar environment to administrators of OS/390.
򐂰 Eliminates the need to manage authentication in two places, the RACF database, and a
separate Kerberos registry.



UNICODE support
In the increasingly global world of business and e-commerce, there is a growing need for data
arising from geographically disparate users to be stored in a central server. Previous releases
of DB2 have offered support for numerous code sets of data in either ASCII or EBCDIC
format. However, there was a limitation of only one code set per system. DB2 V7 supports
UNICODE encoded data. This new code set is an encoding scheme that is able to represent
the characters (code points) of many different geographies and languages.

Network monitoring
DB2 V7 introduces reporting server elapsed time at the workstation. Workstations accessing
DB2 data can now request that DB2 return the server elapsed time, that is, the time the DB2
subsystem used to process the request. The server elapsed time allows remote
clients to quickly determine the amount of time it takes for DB2 to process a request. The
server elapsed time does not include any network delay time, which allows workstation
clients, in real-time, to determine performance bottlenecks among the client, the network, and
DB2.

2.1.4 Performance and availability


Several new enhancements are provided to improve performance and availability.

Online subsystem parameters


One of the causes of a planned outage for DB2 is the need to alter one or more of the system
parameters, known as ZPARMS. DB2 V7 enables you to change the value of many of these
system parameters without stopping DB2.
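
The change is driven by a new command; DSNZPARM below is the default name of the subsystem
parameter load module:

   -SET SYSPARM LOAD(DSNZPARM)

The RELOAD and STARTUP forms of the command reload the last named module or revert to the values
that were in effect at DB2 startup.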

Log manager enhancements


Log manager updates help you with DB2 operations by providing:
򐂰 Suspend update activity
򐂰 Retry critical log read access
򐂰 Time interval system checkpoint frequency
򐂰 Long running UR information
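
For example, the suspend and resume of update activity is driven by a pair of new commands,
typically issued around a volume-level copy such as ESS FlashCopy:

   -SET LOG SUSPEND
   -SET LOG RESUME

While the log is suspended, update activity queues behind the log write latch, so the window
between the two commands should be kept as short as possible.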

Consistent restart enhancements


DB2 V7 provides more control of the availability of the user objects associated with the failing
or cancelled transaction without restarting DB2. Consistent restart enhancements remove
some of the restrictions imposed by the current consistent restart function. A new feature to
prevent backout of data and log is added to DB2’s -CANCEL THREAD command.

Adding space to the workfiles


DB2 V7 allows you to CREATE and DROP workfile table space, without having to STOP the
workfile database.

2.1.5 Data sharing


Data sharing customers can benefit from several new enhancements.

Coupling Facility Name Class Queues


DB2 V7 exploits the OS/390 and z/OS support for the Coupling Facility Name Class Queues.
This enhancement reduces the performance impact of purging group buffer pool (GBP)
entries for GBP-dependent page sets.



Group Attach enhancements
A number of enhancements are made to Group Attach processing in DB2 V7:
򐂰 An application can now connect to a specific DB2 member of a data sharing group.
򐂰 You can now connect to a DB2 data sharing group by using the group attach name.
򐂰 You can specify the group attach name for your DL/I batch applications.

Restart Light
A new feature of the START DB2 command allows you to choose Restart Light for a DB2
member. Restart Light allows a DB2 data sharing member to restart with a minimal storage
footprint, and then to terminate normally after DB2 frees retained locks. The reduced storage
requirement can make a restart for recovery possible on a system that might not have enough
resources to start and stop DB2 in normal mode. If you experience a system failure in a
Parallel Sysplex, the automated restart in “light” mode removes retained locks with minimum
disruption. You should consider using DB2 Restart Light with restart automation software,
such as OS/390 Automatic Restart Manager.
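
For example, automation or an operator can restart the failed member, on the same or another
OS/390 image, with:

   -START DB2 LIGHT(YES)

The member restarts with a minimal storage footprint and terminates normally after DB2 frees the
retained locks.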

IMMEDWRITE bind option


A V6 enhancement offers you the choice to immediately write updated group-buffer-pool
dependent buffers. In V7, the option is reflected in the DB2 catalog and externalized on the
installation panels.

Persistent structure size changes


In earlier releases of DB2, any changes you make to structure sizes using the SETXCF
START,ALTER command might be lost when you rebuild a structure and recycle DB2. Now
you can allow changes in structure size to persist when you rebuild or reallocate a structure.

2.1.6 New features and tools


A new feature, DB2 Warehouse Manager, makes it easy to design and deploy a data
warehouse on your S/390: you can extract operational data from your DB2 for OS/390 and
import it into an S/390 warehouse without transferring your data to an intermediate platform.
Prototyping the applications is quicker, you can query and analyze data, and help your users
access data and understand information. The new DB2 Warehouse Manager feature gives
you a full set of tools for building and using a data warehouse based on DB2 for OS/390.

Net Search Extender adds the power of fast full-text retrieval to Net.Data, Java, or DB2 CLI
applications. It also offers application programmers a variety of search functions.

With V7, DB2 delivers even more tools, which have been the subject of specific
announcements in September 2000 and March 2001, together with several IMS tools. You
have the opportunity of a trial period to discover and verify the benefits of these tools, some
completely new to the server, others being new versions of tools already available with DB2
V6. Some of the new tools are:
򐂰 DB2 Bind Manager, to avoid unnecessary binds
򐂰 DB2 Log Analysis Tool, to assist in using the log information
򐂰 DB2 SQL Performance Analyzer, to evaluate the cost of a query before it runs
򐂰 DB2 Change Accumulation, to consolidate copies and logging offline
򐂰 DB2 Recovery Manager, to simplify recovery of data from DB2 and IMS



2.2 DB2 V7 packaging
DB2 V7 incorporates several features which include tools for data warehouse management,
Internet data connectivity, replication, database management and tuning, installation, and
capacity planning.

These features and tools work directly with DB2 applications to help you use the full potential
of your DB2 system. When ordering the DB2 base product, you can select the free and
chargeable features to be included in the package.

With DB2 V7, most utilities have been grouped in separate independent products. All the
utilities are shipped deactivated with the Base Engine. The corresponding product licences
must be obtained to activate the specific functions of the utilities. However, all utilities are
always available for execution on DB2 Catalog and Directory and for the Sample Database.
򐂰 DB2 Extenders:
– Text
– Image
– Audio
– Video
– XML

Optional no-charge features


The optional no-charge features are the same as with DB2 V6:
򐂰 Net.Data
Net.Data, a no-charge feature of DB2 V7, takes advantage of the S/390 capabilities as a
premier platform for electronic commerce and Internet technology. Net.Data is a
full-featured and easy to learn scripting language allowing you to create powerful Web
applications. Net.Data can access data from the most prevalent databases in the industry:
DB2, Oracle, DRDA-enabled data sources, ODBC data sources, as well as flat file and
web registry data. Net.Data Web applications provide continuous application availability,
scalability, security, and high performance.
򐂰 REXX language support
REXX language and REXX language stored procedure support are shipped as a part of
the DB2 V7 base code. You need to specify the feature and the media when ordering DB2.
Documentation is still accessible from the Web. The DB2 installation job DSNTIJRX binds
the REXX language support to DB2 and makes it available for use.
򐂰 DB2 Management Clients Package
The DB2 Management Clients Package is the new name, in DB2 V7, for the previous
DB2 Management Tools Package. It is a collection of workstation-based client tools that
you can use to work with and manage your DB2 for OS/390 and z/OS environments.
This is a separately orderable no-charge feature of DB2 V7. It currently consists of the
packaging and shipping of:
– DB2 Installer
– Visual Explain
– DB2 Estimator
– DB2 Connect Personal Edition (special license)
– Control Center
– Stored Procedure Builder
For functional details on the DB2 Management Clients Package products, please refer to
the redbook DB2 UDB for OS/390 Version 6 Management Tools Package, SG24-5759.



Charge features
Two new charge features are shipped with DB2 V7:
򐂰 The DB2 Warehouse Manager
򐂰 The DB2 Net Search Extender

The Query Management Facility product is part of the DB2 Warehouse Manager feature, but
it is also still available as a separate feature on its own.

With DB2 V7, the DB2 Utilities have been separated from the base product and they are now
offered as separate products licensed under the IBM Program License Agreement (IPLA),
and the optional associated agreements for Acquisition of Support. The DB2 Utilities are
grouped in these categories:
򐂰 DB2 Operational Utilities, program number 5655-E63, which include Copy, Load (with the
DB2 family Cross Loader function), Rebuild, Recover, Reorg, Runstats, Stospace, and the
new utilities Unload and Exec SQL.
򐂰 DB2 Diagnostic and Recovery Utilities, program number 5655-E62, which include Check
Data, Check Index, Check LOB, Copy, CopyToCopy, Mergecopy, Modify Recovery, Modify
Statistics, Rebuild, and Recover.
򐂰 DB2 Utilities Suite, program number 5697-E98, which combines the functions of both DB2
Operational Utilities and DB2 Diagnostic and Recovery Utilities in the most cost effective
option.

DB2 Warehouse Manager


The DB2 Warehouse Manager feature brings together the tools to build, manage, govern, and
access DB2 for OS/390-based data warehouses.

DB2 Warehouse Manager is a new, separately orderable, priced feature of DB2 V7.

The DB2 Warehouse Manager currently consists of:
򐂰 DB2 Warehouse Center
򐂰 390 Warehouse Agent
򐂰 DB2 UDB EEE
򐂰 Query Management Facility for OS/390
򐂰 Query Management Facility High Performance Option
򐂰 Query Management Facility for Windows

Query Management Facility


The Query Management Facility (QMF) is the tightly integrated, powerful, and reliable tool for
query and reporting within IBM’s DB2 family. QMF for OS/390 is also a separately orderable,
priced feature of DB2 V7.

QMF currently consists of the packaging and shipping of:


򐂰 Query Management Facility for OS/390
򐂰 QMF High Performance Option
򐂰 Query Management Facility (QMF) for Windows



2.2.1 Net Search Extender
DB2 Net Search Extender contains a DB2 stored procedure that adds the power of fast
full-text retrieval to Net.Data, Java, or DB2 CLI applications. It offers application programmers
a variety of search functions, such as fuzzy search, stemming, Boolean operators, and
section search.

2.3 Migration to DB2 Version 7


Migration with full fallback protection is available when you have either DB2 V5 or DB2 V6.
You should ensure that you are fully operational on DB2 V5, or later, before migrating to DB2
V7. The interest in DB2 V7 is growing, and a large number of users will start considering a
migration before year-end. The introduction program of DB2 has provided good quality
feedback from the customers.

2.3.1 Planning for migration


If you were using the DB2 Utilities, you should understand the changes in their packaging.
The current standard documentation starting from the announcements at ibm.com/ibmlink
provides details, and these changes are also mentioned in the DB2 for OS/390 Version 7
Presentation Guide, SG24-6121.

If you are planning to skip Version 6, you must get the right level of education and make sure
that all of the incompatible changes are found. You need to plan for adequate time. PQ34667
must be applied, prerequisite software must be installed, and third parties must have their
support in place. Finally, you must plan for testing for the applications and settings that are
unique to your installation.

If you are running Version 5 today and considering moving to Version 6 or waiting for Version
7, you should consider that the customer base for Version 6 is now covering most workloads,
and weigh that against saving a cycle but running some risks on going to Version 7. Of course
there are functions that are only in Version 7, and if you need them, you may want to get there
quickly, but with some extra testing.

While we expect the majority of our customers to be migrated to Version 6 in the next few
months, there are some important quality improvements in Version 7 that may allow some
customers to save time by skipping over Version 6. As time goes by, release skipping to V7 is
expected to become more typical. The ability to skip from Version 5 to Version 7 will be a big
help for customers who are running Version 3 or 4 today, or have just migrated to Version 5.
They can migrate to Version 5 if needed, and then skip over Version 6 directly to Version 7
next year.

The result of extra work in testing the DB2 V7 at the lab is a more solid product, with a
significantly higher availability under stress conditions. Your choice will depend upon whether
you need some specific functions of Version 6 or Version 7 this year. If so, then it is time to
find the needed resources and migrate. Keep in mind that moving from Version 5 to Version 7
is more than half of the work of migrating to Version 6 and then to Version 7, especially if you
migrate to the new release much earlier than you intended.

2.3.2 Reasons to migrate


Everyone who is building on Java requires V7. This is the prerequisite level for WebSphere
4.0. Customers using Java will need to move to Java 2, J2EE. They will also need the
performance improvements that are only found in V7.



Most of the customers and key vendors need the performance and availability enhancements
of V7 described in this redbook. For instance, ERP and CRM products such as SAP, PeopleSoft, and Siebel
can benefit from the following functions:
򐂰 Online change of most system parameters
򐂰 Cancel thread without rollback
򐂰 Improved utility and SQL parallelism and performance

Several of these changes are also crucial for warehouse type of solutions. The improvement
provided by correlated subqueries is huge for applications that make use of them, such as
PeopleSoft.

There are some useful functions for users and vendors, such as:
򐂰 CURRENT MEMBER special register
򐂰 Self-referencing update and delete
򐂰 Fetch limited number of rows
򐂰 Improved DRDA performance
򐂰 DRDA server elapsed time reporting
򐂰 Utility lists, patterns, and dynamic allocation
򐂰 Better handling of DEFER DEFINE data sets

Application developers will greatly benefit from:


򐂰 UNION and UNION ALL in a view
򐂰 Row expressions

But query users will also make use of:


򐂰 Performance enhancements in:
– IN-list index access parallelism
– Correlated subquery performance
– Fewer sorts for ORDER BY and some predicates
򐂰 New functionalities such as:
– Scrollable cursors
– UNION everywhere, in views,...
– Update from a self-referencing subselect
– Order by expression
– SQL scalar functions
– XML extenders
– Unicode support
– Windows Kerberos security

DBAs and system programmers will appreciate the utility and availability enhancements:
– New UNLOAD utility
– LOAD enhancements:
• LOAD from cursor, SELECT statement (Cross Loader)
• LOAD partition parallelism within the same job execution
• Online or SHRLEVEL CHANGE LOAD
– Pattern matching and dynamic allocation
– Faster switch and BUILD2 for online REORG
– New CopyToCopy utility
– Statistics history
– Data sharing: Restart Light
– CANCEL THREAD NOBACKOUT



2.4 Implementation considerations
When considering the migration of a full DB2 production environment, it is important to know
which new functions will be activated by "default" and which ones need special attention. In
order to help in evaluating the enhancements in terms of their applicability and return on
investment, in the following sections we list the most important enhancements, relating them
to their implementation effort.

2.4.1 Functions requiring no effort


The following performance benefits are immediately available after migration, without any
change to the DB2 subsystem or the applications:
򐂰 In the area of subsystem function:
– Asynchronous preformatting to improve INSERT intensive workloads
– Parallel data set open for partitions
– CATMAINT much faster execution time
– Less logging for VARCHAR
– New instrumentation records for additional information on virtual storage usage and
Log statistics
򐂰 In the area of utilities:
– BUILD2 enhancements
򐂰 In the area of SQL usage there are several enhancements (bind required), including:
– MIN, MAX using the same index
– Order BY reducing sort need
– Varchar index only access
– Large SQL Bind enhancements for very complex queries
– Parallel IN list performance
– Correlated subquery performance

2.4.2 Functions requiring a small effort


The following performance benefits can be obtained with some system programming,
operations, or database administration effort:
򐂰 In the area of utilities:
– Online REORG FASTSWITCH (DB2 default) needs some caution to allow for the new
data set names.
– Parallel partition LOAD only needs some input data set preparation by partition.
– Online REORG has new parameters to RETRY to complete the execution.
򐂰 In the area of availability and capacity:
– Online ZPARMs need proper administrative set up (authorization and impact
evaluation).
– Consistent restart enhancements require modification of the operational procedures.
– Creating and dropping workfile table spaces no longer requires issuing STOP DSNDB07.
– Several Log management enhancements (suspend resume, time driven checkpoints,
new messages for long running UR, RETRY for critical read) need coordination with
operational procedure.
– Restart Light, when specified for a member of a data sharing group, allows smaller
storage blueprint under critical restart conditions to release pending locks.



– Name Class queue for data sharing (just check for the right level of CFCC) provides
better performance.
– Group attach enhancements are provided.
򐂰 In the area of SQL usage, there are several enhancements that can be obtained with
minor application modifications:
– Fetch first n rows
– Joining columns of different numeric data types via casting
– Commit and Rollback within stored procedures
– Current Member register can be used to identify the member of the data sharing group
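As a small illustration of the first two of these items, here is a hedged sketch; the table and column names are hypothetical and chosen only to show the syntax:

   -- Limit the number of rows returned to the application
   SELECT ORDER_NO, ORDER_DATE
   FROM ORDER_TABLE
   ORDER BY ORDER_DATE DESC
   FETCH FIRST 10 ROWS ONLY;

   -- Cast one join column so that a join on columns with different numeric
   -- data types can be Stage 1 and potentially indexable
   SELECT *
   FROM BIG_TABLE B, SMALL_TABLE S
   WHERE B.INT_KEY = CAST(S.SMALLINT_KEY AS INTEGER);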

2.4.3 Functions requiring some planning


The following new functions need to be activated gradually, after you have completely and
successfully migrated and consolidated your DB2 V7 environment. They also require
interactions with application developers and/or operations.
򐂰 In the area of SQL usage:
– UNION everywhere
– Scrollable cursors
– Java enhancements
– Unicode support
򐂰 In the area of utilities:
– Cross Loader with EXEC SQL
– CopyToCopy
– Online Load Resume
– Unload
– LISTDEF and Templates
– RUNSTATS history
– Data spaces with Series
– VSAM striping for log data sets
򐂰 For Data Sharing
– Lock structure duplexing increases the availability of your data sharing group and
depends upon a z/OS V1R2 feature


Chapter 3. SQL performance


In this chapter, we discuss performance enhancements that affect SQL performance:
򐂰 Plan table changes: The V7 PLAN_TABLE has two new columns. Both PLAN_TABLE
and DSN_STATEMNT_TABLE have some new values in the existing columns.
򐂰 Parallelism for IN-list index access: Parallelism for an IN-list predicate is now allowed
whenever it is supported by an index and the prerequisites for parallelism are met
(read-only SQL, CURRENT DEGREE = ‘ANY’).
򐂰 Row value expressions: A predicate can compare a list of columns to a list of expressions.
򐂰 Correlated subquery to join transformation: There are some additional situations when
transformation can take place for correlated subqueries only.
򐂰 UPDATE/DELETE with self-referencing subselect: DB2 now lets searched UPDATE
and DELETE statements use target tables within subqueries in WHERE or SET clauses.
򐂰 Union everywhere: The use of unions is expanded to anywhere that a subselect clause
was valid in previous versions of DB2.
򐂰 Scrollable cursors: Scrolling backwards as well as forwards is now allowed, providing the
ability to place the cursor at a specific row within the result set. A scrollable cursor can be
sensitive or insensitive to updates made outside it, by the same user or different users.
򐂰 MIN/MAX enhancement: A more efficient use of indexes for evaluating MIN and MAX
functions is now provided.
򐂰 Fewer sorts with order by: Indexes can be better used to provide ordering of result sets.
򐂰 Joining on columns of different data types: A join predicate can be Stage 1 and
potentially Indexable when there is a data type or length mismatch for numeric columns.
For this to be true, in your SQL you must cast one column to the data type of the other.
򐂰 VARCHAR index-only access: It is no longer necessary to set RETVLCFK=YES to get
index-only access in this special case.
򐂰 Bind improvements: Improvements to the DB2 Optimizer processing will result in
significantly reduced storage requirements and reduced CPU use during the BIND
command execution for joins involving more than 9 tables.
򐂰 Star join: An externalized parameter allows star join at the subsystem level and other
maintenance in this rapidly evolving area.



3.1 Changes to Explain tables
Explain provides information in three tables: PLAN_TABLE, for the output of binding a plan or
a package, the DSN_STATEMNT_TABLE, for the estimated cost of executing an SQL
statement, and the DSN_FUNCTION_TABLE for user-defined functions referred to in a
statement. Here we see the changes in V7 that impacted some of these tables.

3.1.1 PLAN_TABLE
The V7 PLAN_TABLE has two new columns, giving it a total of 51 columns and some new
values. The complete PLAN_TABLE definition is shown in Figure 3-1.

QUERYNO INTEGER NOT NULL PREFETCH CHAR(1) NOT NULL WITH DEFAULT
QBLOCKNO SMALLINT NOT NULL COLUMN_FN_EVAL CHAR(1) NOT NULL WITH DEFAULT
APPLNAME CHAR(8) NOT NULL MIXOPSEQ SMALLINT NOT NULL WITH DEFAULT
PROGNAME CHAR(8) NOT NULL --------- 28 column format ---------
PLANNO SMALLINT NOT NULL VERSION VARCHAR(64) NOT NULL WITH DEFAULT
METHOD SMALLINT NOT NULL COLLID CHAR(18) NOT NULL WITH DEFAULT
CREATOR CHAR(8) NOT NULL ---------30 column format ---------
TNAME CHAR(18) NOT NULL ACCESS_DEGREE SMALLINT
TABNO SMALLINT NOT NULL ACCESS_PGROUP_ID SMALLINT
ACCESSTYPE CHAR(2) NOT NULL JOIN_DEGREE SMALLINT
MATCHCOLS SMALLINT NOT NULL JOIN_PGROUP_ID SMALLINT
ACCESSCREATOR CHAR(8) NOT NULL ---------34 column format ---------
ACCESSNAME CHAR(18) NOT NULL SORTC_PGROUP_ID SMALLINT
INDEXONLY CHAR(1) NOT NULL SORTN_PGROUP_ID SMALLINT
SORTN_UNIQ CHAR(1) NOT NULL PARALLELISM_MODE CHAR(1)
SORTN_JOIN CHAR(1) NOT NULL MERGE_JOIN_COLS SMALLINT
SORTN_ORDERBY CHAR(1) NOT NULL CORRELATION_NAME CHAR(18)
SORTN_GROUPBY CHAR(1) NOT NULL PAGE_RANGE CHAR(1) NOT NULL WITH DEFAULT
SORTC_UNIQ CHAR(1) NOT NULL JOIN_TYPE CHAR(1) NOT NULL WITH DEFAULT
SORTC_JOIN CHAR(1) NOT NULL GROUP_MEMBER CHAR(8) NOT NULL WITH DEFAULT
SORTC_ORDERBY CHAR(1) NOT NULL IBM_SERVICE_DATA VARCHAR(254) NOT NULL WITH DEFAULT
SORTC_GROUPBY CHAR(1) NOT NULL --------43 column format ----------
TSLOCKMODE CHAR(3) NOT NULL WHEN_OPTIMIZE CHAR(1) NOT NULL WITH DEFAULT
TIMESTAMP CHAR(16) NOT NULL QBLOCK_TYPE CHAR(6) NOT NULL WITH DEFAULT
REMARKS VARCHAR(254) NOT NULL BIND_TIME TIMESTAMP NOT NULL WITH DEFAULT
--------- 25 column format --------- ------46 column format ------------
OPTHINT CHAR(8) NOT NULL WITH DEFAULT
HINT_USED CHAR(8) NOT NULL WITH DEFAULT
PRIMARY_ACCESSTYPE CHAR(1) NOT NULL WITH DEFAULT
-------49 column format------------
New columns PARENT_QBLOCKNO SMALLINT NOT NULL WITH DEFAULT
TABLE_TYPE CHAR(1)
-------51 column format-----------

Figure 3-1 DB2 Version 7 plan table



New PLAN_TABLE columns
The two new columns are PARENT_QBLOCKNO and TABLE_TYPE. They are described in
Table 3-1.
Table 3-1 New PLAN_TABLE columns

Column name Description

PARENT_QBLOCKNO Contains the QBLOCKNO of the parent query block. It shows the
hierarchy of dependencies between query blocks.

TABLE_TYPE The type of the new table. Possible values are:

F — table function
Q — temporary intermediate result table (queued — not materialized)
T — table
W — temporary intermediate result table (using a Logical Work File — materialized)

The TABLE_TYPE values W and Q are mainly used where UNION or UNION ALL have been
included in a view or table expression (see Section 3.6, “UNION everywhere” on page 49).
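If you carry an existing 49-column PLAN_TABLE forward from V6 instead of recreating it
from the sample DDL shipped with DB2, one way to bring it to the 51-column format is
the following sketch, based only on the column definitions shown in Figure 3-1:

   -- Illustrative sketch: add the two V7 columns to an existing PLAN_TABLE
   ALTER TABLE PLAN_TABLE
     ADD PARENT_QBLOCKNO SMALLINT NOT NULL WITH DEFAULT;

   ALTER TABLE PLAN_TABLE
     ADD TABLE_TYPE CHAR(1);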

Changed PLAN_TABLE columns


The column QBLOCK_TYPE can contain three new values:
򐂰 TABLEX — table expression
򐂰 UNION — UNION
򐂰 UNIONA — UNION ALL

3.1.2 DSN_STATEMNT_TABLE changed columns


In V7 there is a new reason for a query having COST_CATEGORY = ‘B’, in the case where a
subquery includes a HAVING clause. This is represented in the column REASON by the text
‘HAVING CLAUSE’.
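A simple way to find the statements affected after an EXPLAIN is a query such as the
following sketch; it assumes the standard QUERYNO, PROGNAME, COST_CATEGORY, and REASON
columns of DSN_STATEMNT_TABLE:

   SELECT QUERYNO, PROGNAME, REASON
   FROM DSN_STATEMNT_TABLE
   WHERE COST_CATEGORY = 'B'
   ORDER BY PROGNAME, QUERYNO;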

3.1.3 Explain headings used in this chapter


The Explain headings that are used in figures in the following sections of this chapter are
listed in Table 3-2 for reference.
Table 3-2 Explain headings

Heading PLAN_TABLE column

QB QBLOCKNO

PN PLANNO

MT METHOD

TABLE TNAME

AT ACCESSTYPE

MC MATCHCOLS

INDEX ACCESSNAME

PF PREFETCH

IXO INDEXONLY



Heading PLAN_TABLE column

PQ PARENT_QBLOCKNO

TT TABLE_TYPE

SORTN Concatenation of SORTN_UNIQ, SORTN_JOIN, SORTN_ORDERBY, and SORTN_GROUPBY

SORTC Concatenation of SORTC_UNIQ, SORTC_JOIN, SORTC_ORDERBY, and SORTC_GROUPBY

JT JOIN_TYPE

CFE COLUMN_FN_EVAL

PM PARALLELISM_MODE

AD ACCESS_DEGREE

AG ACCESS_PGROUP_ID

JD JOIN_DEGREE

JG JOIN_PGROUP_ID

SCG SORTC_PGROUP_ID

SNG SORTN_PGROUP_ID

QBT QBLOCK_TYPE

3.2 Parallelism for IN-List index access


DB2 V6 introduced parallelism for IN-list access for the inner table of a parallel group
(ACCESSTYPE=N). See DB2 UDB for OS/390 Version 6 Performance Topics, SG24-5351, for
details. DB2 V6 could only use parallelism when the table with an IN-list coded in the
predicate was joined and DB2 had decided that the table was to be the inner table of the
join. DB2 V7 has removed this restriction and now allows parallelism on an IN-list
predicate whenever it is supported by an index and the prerequisites for parallelism are
met (read-only SQL, CURRENT DEGREE = ‘ANY’).

3.2.1 Description
IN-List parallelism is now fully supported in DB2 V7. DB2 V7 adds parallelism support not
only for the outer table, but also for access to a single table. This enhancement has the
potential of improving elapsed times significantly over the non-parallel performance,
depending on the number of parallel processes available.

In general the maximum degree of parallelism that can be chosen is determined as follows:
򐂰 For I/O intensive queries: the smaller of the number of values in the IN-list and the number
of partitions
򐂰 For CPU intensive queries: the smaller of the number of values in the IN-list and the
number of CPU engines.
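As a small worked illustration of these rules (a sketch only; it assumes the four-partition
ORD_PART table of Figure 3-2, an IN-list of six values, and a 2-way processor):

   SELECT *
   FROM ORD_PART                                        -- 4 partitions
   WHERE CUST_NO IN (1, 1001, 2001, 3001, 4001, 5001);  -- 6 IN-list values

   -- I/O intensive:  maximum degree = MIN(6 IN-list values, 4 partitions)  = 4
   -- CPU intensive:  maximum degree = MIN(6 IN-list values, 2 CPU engines) = 2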

The examples below were Explained in both V6 and V7 via SPUFI after issuing a SET
CURRENT DEGREE = ‘ANY’. The definition of the tables and indexes used in the examples
are shown in Figure 3-2.



Table CUST_PART: Table ORD_PART:
CUST_NO INTEGER NOT NULL, ORD_NO INTEGER NOT NULL,
CUST_NAME CHAR(20), CUST_NO INTEGER,
TOWN CHAR(20), ORD_DATE DATE,
CRED_LIM DEC(7,2), AMOUNT DECIMAL(7,2),
PRIMARY KEY (CUST_NO) PRIMARY KEY (ORD_NO),
FOREIGN KEY ORD_CUST (CUST_NO)
640 rows REFERENCES CUST
ON DELETE RESTRICT

800 rows

Note: CUST_PART is partitioned into 4 partitions Note: ORD_PART is partitioned into 4 partitions
by CUST_NO values: by CUST_NO values:
CREATE UNIQUE INDEX XCUSTNOP CREATE INDEX X2CNO
ON TABLE CUST_PART(CUST_NO) ON TABLE ORD_PART(CUST_NO)
CLUSTER CLUSTER
(PART 1 VALUES (2000), (PART 1 VALUES (2000),
PART 2 VALUES (4000), PART 2 VALUES (4000),
PART 3 VALUES (6000), PART 3 VALUES (6000),
PART 4 VALUES (99999)) ... PART 4 VALUES (99999)) ...

Figure 3-2 Table and index definitions for IN-List parallelism example

Single table example


Figure 3-3 shows a single table query with an IN-list predicate. The table ORD_PART is
partitioned into four partitions using the index X2CNO. In both V6 and V7 the Optimizer has
chosen the special index access for IN-list predicates (ACCESS_TYPE = ‘N’). The V6 Explain
shows no parallelism, but the DB2 V7 Explain shows query CPU parallelism with degree two
(PARALLELISM_MODE = ‘C’ and ACCESS_DEGREE = 2). The Explain was performed on
an LPAR with two CPUs assigned. The Optimizer considered the access path to be
CPU-bound, and therefore the parallel degree was limited by the number of CPUs.

CPU parallelism is always chosen by the Optimizer in preference to I/O parallelism, except in
a few cases where CPU parallelism is not supported, but I/O parallelism is. CPU parallelism
has all the benefits of I/O parallelism (multiple I/O streams) plus more. CPU parallelism is
used for both I/O-bound and CPU-bound queries.

SELECT *
FROM ORD_PART
WHERE CUST_NO IN (1, 2001, 4001, 6001)

Version 6 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 ORD_PART N 1 X2CNO N NNNN NNNN -- -- -- -- -- --- ---

Version 7 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 ORD_PART N 1 X2CNO S N NNNN NNNN C 2 1 -- -- --- ---

Figure 3-3 IN-List parallelism for single table access



Join outer table example
Figure 3-4 shows a two-table inner join of the ORD_PART table used in the previous example
with a CUST_PART table that is partitioned on the CUST_NO column with the same ranges
as the ORD_PART table. There is an IN-list predicate on the ORD_PART table CUST_NO
column.

In both V6 and V7, the Optimizer has chosen a Hybrid join (METHOD = 4) with ORD_PART
as the outer table of the join. The access to the outer table is via the index X2CNO using the
special index access for IN-list predicates (ACCESS_TYPE = ‘N’).

In V6, this IN-list access prevents the use of parallelism. In V7, the Optimizer chooses CPU
parallelism with degree three for both the table accesses (ACCESS_DEGREE = 3) and for
the join (JOIN_DEGREE = 3).

SELECT *
FROM ORD_PART O INNER JOIN CUST_PART C
ON O.CUST_NO = C.CUST_NO
WHERE O.CUST_NO IN (1, 2001, 4001, 6001)

Version 6 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 ORD_PART N 1 X2CNO N NNNN NNNN -- -- -- -- -- --- ---
1 2 4 CUST_PART I 1 XCUSTNOP L N NNNN NNNN -- -- -- -- -- --- ---

Version 7 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 ORD_PART N 1 X2CNO S N NNNN NNNN C 3 1 -- -- --- ---
1 2 4 CUST_PART I 1 XCUSTNOP L N NNNN NNNN C 3 1 3 1 --- ---

Figure 3-4 IN-List parallelism for outer table of join

3.2.2 Performance
The performance tests were performed on a 9672-ZZ7 LPAR with four processors, RAMAC 3
DASD, and OS/390 2.8.

A single table performance test was run. The SQL and Explain output are shown in
Figure 3-5. The CUSTOMER table used in the test is taken from the TPC-H benchmark
definition. The table contained 4.5 million rows.

The column definitions of the CUSTOMER table are shown in Table 3-3. The index XNKEY is an
index on the column C_NATIONKEY. The results of the test are shown in Table 3-4. In this
test the query was I/O bound and therefore the Optimizer has chosen to use parallelism with
degree seven, which is the number of values in the IN-list predicate. The performance results
in Table 3-4 show the normal pattern for successful use of parallelism: a large reduction in
elapsed time in return for a small increase in CPU time.



SELECT C_NATIONKEY, SUM(C_ACCTBAL)
FROM CUSTOMER CUSTOMER CARDF = 4500000
WHERE C_MKTSEGMENT = 'BUILDING' C_NATIONKEY COLCARDF = 25
AND C_NATIONKEY IN (12, 14, 16, 18, 21, 22, 23)
GROUP BY C_NATIONKEY; XNKEY is an index on CUSTOMER(C_NATIONKEY)

Version 6 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 CUSTOMER N 1 XNKEY S N NNNN NNNN -- -- -- -- -- --- ---

Version 7 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 CUSTOMER N 1 XNKEY S N NNNN NNNN C 7 1 -- -- --- ---

Figure 3-5 IN-List performance test for single table access

Table 3-3 Columns in the CUSTOMER table


Column name Definition Notes

C_CUSTKEY INTEGER Primary Key, COLCARDF = 4,500,000

C_NAME VARCHAR(25)

C_ADDRESS VARCHAR(40)

C_NATIONKEY INTEGER COLCARDF = 25

C_PHONE CHAR(15)

C_ACCTBAL DECIMAL

C_MKTSEGMENT CHAR(10)

C_COMMENT VARCHAR(117)

Table 3-4 IN-List performance test results for single table access
Test Elapsed secs CPU secs

V6 1032.13 20.16

V7 223.32 21.63

Delta -78.4% +7.3%

A second test was run, using a join. The SQL and Explain outputs are shown in Figure 3-6.
The PART and LINEITEM tables used in the test are also taken from the TPC-H benchmark.
The PART table contained 6 million rows and the LINEITEM table contained 180 million rows.

The column definitions of the PART table are shown in Table 3-5. The index XPSTPM is an
index on the PART table columns P_SIZE, P_TYPE, P_PARTKEY, and P_MFGR. The column
definitions of the LINEITEM table are shown in Table 3-6. The index XLPART is an
index on the LINEITEM table column L_PARTKEY.



The results of the test are shown in Table 3-7. Once again we see the normal result of the
successful use of parallelism: a large reduction in elapsed time at the cost of a small increase
in CPU time. The reduction in elapsed time is in line with the parallelism degree of 10, while
the small CPU increase is due to the management of the extra tasks.

SELECT COUNT(*), SUM(L_TAX), AVG(L_TAX)


FROM PART, LINEITEM
WHERE P_SIZE IN (3, 4, 27, 28, 31, 32, 45, 46, 38, 50)
AND P_TYPE LIKE 'SMALL%'
AND P_MFGR = 'Manufacturer#3' XPSTPM is an index on
AND P_PARTKEY = L_PARTKEY PART(P_SIZE,P_TYPE,P_PARTKEY,P_MFGR)
AND L_QUANTITY < 5;
XLPART is an index on
LINEITEM(L_PARTKEY)

Version 6 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 PART N 2 XPSTPM N NNNN NNNN -- -- -- -- -- --- ---
1 2 1 LINEITEM I 1 XLPART N NNNN NNNN -- -- -- -- -- --- ---

Version 7 Explain
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC PM AD AG JD JG SCG SNG
1 1 0 PART N 2 XPSTPM N NNNN NNNN C 10 1 -- -- --- ---
1 2 1 LINEITEM I 1 XLPART N NNNN NNNN C 10 1 10 1 --- ---

Figure 3-6 IN-List performance test 2

Table 3-5 Columns in the PART table


Column name Definition Notes

P_PARTKEY INTEGER Primary Key, COLCARDF = 6,000,000

P_NAME VARCHAR(55)

P_MFGR CHAR(25)

P_BRAND CHAR(10)

P_TYPE VARCHAR(25)

P_SIZE INTEGER

P_CONTAINER CHAR(10)

P_RETAILPRICE DECIMAL

P_COMMENT VARCHAR(23)



Table 3-6 Columns in the LINEITEM table
Column name Definition Notes

L_ORDERKEY INTEGER Primary key column 1 of 2

L_PARTKEY INTEGER

L_SUPPKEY INTEGER

L_LINENUMBER INTEGER Primary key column 2 of 2

L_QUANTITY DECIMAL

L_EXTENDEDPRICE DECIMAL

L_DISCOUNT DECIMAL

L_TAX DECIMAL

L_RETURNFLAG CHAR(1)

L_LINESTATUS CHAR(1)

L_SHIPDATE DATE

L_COMMITDATE DATE

L_RECEIPTDATE DATE

L_SHIPINSTRUCT CHAR(25)

L_SHIPMODE CHAR(10)

L_COMMENT VARCHAR(44)

Table 3-7 IN-List performance test results for join

Test Elapsed secs CPU secs

V6 2611.92 22.22

V7 278.46 23.7

Delta -89.3% +6.7%

3.2.3 Conclusions
Significant improvements in elapsed time can be achieved by exploiting query parallelism for
queries containing IN-list predicates.

3.2.4 Recommendations
Consider rebinding with DEGREE(ANY) those packages containing large queries for which
Explain shows an access type of ‘N’.
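One way to locate such candidates, assuming the packages were bound with EXPLAIN(YES), is
sketched below; the collection and package names in the REBIND are placeholders:

   -- Find bound statements that use IN-list index access
   SELECT DISTINCT COLLID, PROGNAME, QUERYNO
   FROM PLAN_TABLE
   WHERE ACCESSTYPE = 'N'
   ORDER BY COLLID, PROGNAME, QUERYNO;

   -- Then rebind a selected package with parallelism enabled, for example:
   -- REBIND PACKAGE(MYCOLL.MYPKG) DEGREE(ANY) EXPLAIN(YES)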



3.3 Row value expressions
Prior to DB2 V7, a predicate could compare a single column to a single expression. DB2
V7 supports row value expressions, so a predicate can compare a list of columns to a list
of expressions. This is possible for the following predicates:
򐂰 = and <>
򐂰 quantified predicates
򐂰 IN and NOT IN with a subquery

The use of each of these predicates is described below.

3.3.1 Equal and not equal predicates


The equal and not equal operators now support multiple expressions on each side of the
predicate as shown in Figure 3-7.

The new operators are:


(exp, exp,...) = (exp,exp,....)
(exp, exp,...) <> (exp,exp,...)

The figure shows the basic predicate syntax. A basic predicate can compare an expression
to another expression or to a (fullselect) using =, <>, <, >, <=, or >=. New in V7 is the
row value form, which compares a list of expressions to a list of expressions using = or <>:

   ( expression2, expression2, ... ) =  ( expression2, expression2, ... )
   ( expression2, expression2, ... ) <> ( expression2, expression2, ... )

Example - show all indexes for table PAOLOR2.ACCOUNT:

   SELECT *
   FROM SYSIBM.SYSINDEXES T1
   WHERE (T1.TBCREATOR, T1.TBNAME) = ('PAOLOR2', 'ACCOUNT')
   ORDER BY T1.CREATOR, T1.NAME;

Figure 3-7 Row value expression in basic predicates

3.3.2 Quantified predicates


Quantified predicates also support row value expressions, refer to Figure 3-8.

The new operators are:


(exp, exp,...) = ANY (fullselect)
(exp, exp,...) = SOME (fullselect)
(exp, exp,...) <> ALL (fullselect)



The figure shows the quantified predicate syntax. An expression can be compared to a
fullselect using = SOME, = ANY, <> ALL, and the other comparison operators. New in V7 are
the row value forms:

   ( expression2, ... ) = SOME (fullselect2)
   ( expression2, ... ) = ANY  (fullselect2)
   ( expression2, ... ) <> ALL (fullselect2)

Example - show all DB2 indexes that support a unique constraint created in DB2 V7:

   SELECT T1.CREATOR, T1.NAME
   FROM SYSIBM.SYSINDEXES T1
   WHERE (T1.CREATOR, T1.NAME) = SOME
         (SELECT T2.IXOWNER, T2.IXNAME
          FROM SYSIBM.SYSTABCONST T2
          WHERE T2.TYPE = 'U')
   ORDER BY 1, 2;

Figure 3-8 Row value expression in quantified predicates

3.3.3 IN predicate with subquery


DB2 V7 extends the syntax of the IN and NOT IN subquery predicate to allow multiple
columns to be returned by the subquery. Multiple expressions must appear on the left-hand
side when a subquery that returns multiple columns is specified on the right-hand side. If
multiple expressions are coded, then they must be enclosed in parentheses.

It is important that the number and data type of the expressions on the left-hand side of the
predicate match the number and data type of the columns in the subquery result set. See
Figure 3-9 for the IN predicate syntax. See Figure 3-10 for an example.

   expression [NOT] IN (fullselect)
   expression [NOT] IN ( expression, ... )

   New in V7:
   ( expression, ... ) [NOT] IN (fullselect)

IN predicate

Figure 3-9 IN predicate syntax



The predicate is evaluated by matching the first expression value against the first column
value from the subquery result set, the second expression value against the second column
value, and so on, until all expression and column values are compared. For an IN predicate,
the predicate is true only if all expressions match their respective columns for at least
one row in the subquery result. For a NOT IN predicate, the predicate is true if at least
one expression does not match its equivalent column for each row in the subquery result.

Row expression for IN subquery

View all tables that do not have any indexes defined.

All versions of DB2:

   SELECT T1.CREATOR, T1.NAME
   FROM SYSIBM.SYSTABLES T1
   WHERE T1.CREATOR NOT IN
         (SELECT T2.TBCREATOR
          FROM SYSIBM.SYSINDEXES T2
          WHERE T2.TBCREATOR = T1.CREATOR
          AND T2.TBNAME = T1.NAME)
      OR T1.NAME NOT IN
         (SELECT T2.TBNAME
          FROM SYSIBM.SYSINDEXES T2
          WHERE T2.TBCREATOR = T1.CREATOR
          AND T2.TBNAME = T1.NAME);

DB2 V7:

   SELECT T1.CREATOR, T1.NAME
   FROM SYSIBM.SYSTABLES T1
   WHERE (T1.CREATOR, T1.NAME) NOT IN
         (SELECT T2.TBCREATOR, T2.TBNAME
          FROM SYSIBM.SYSINDEXES T2);

The same result set is returned by both these queries.

Figure 3-10 Row value expression for IN subquery

3.3.4 Restrictions
If the number of expressions or columns returned on the right-hand side does not match the
number of expressions on the left-hand side, then SQLCODE -216 will be returned:

-216 THE NUMBER OF ELEMENTS ON EACH SIDE OF A PREDICATE OPERATOR DOES


NOT MATCH. PREDICATE OPERATOR IS op.

When the left-hand side is a row expression, an explicit IN-list of values is not valid;
only a subquery returning multiple columns can be used on the right-hand side. For
example, this predicate is valid:
WHERE (T1.NAME, T1.CREATOR) IN (SELECT T2.TBCREATOR, T2.TBNAME
FROM SYSIBM.SYSINDEXES T2)

However, this predicate is not valid:


WHERE (T1.NAME, T1.CREATOR) IN ((‘SYSIBM’,’SYSTABLES’),
(‘SYSIBM’,’SYSINDEXES’))

3.3.5 Performance
This is primarily a usability enhancement, but it can impact performance. Rewriting a query to
use row expressions may significantly change the access path for that query. The change
may improve or worsen performance depending on the new access path.



Let us look at the example shown in Figure 3-7 on page 34 and the equivalent V6 query. Both
queries result in the same access path, as shown here in Figure 3-11.

It is a matching index scan on two columns of the index DSNDXX03, which contains key
columns TBCREATOR, TBNAME, CREATOR, and NAME. Since the access path is the
same, there is no performance impact of using a row expression in this case.

The column headings of the Explain output are described in Table 3-2 on page 27.

Without row expressions:

   SELECT *
   FROM SYSIBM.SYSINDEXES T1
   WHERE T1.TBCREATOR = 'PAOLOR2'
   AND T1.TBNAME = 'ACCOUNT'
   ORDER BY T1.CREATOR, T1.NAME;

With row expressions:

   SELECT *
   FROM SYSIBM.SYSINDEXES T1
   WHERE (T1.TBCREATOR, T1.TBNAME) = ('PAOLOR2', 'ACCOUNT')
   ORDER BY T1.CREATOR, T1.NAME;

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 SYSINDEXES I 2 DSNDXX03 L N NNNN NNNN SELECT
1 2 3 0 N NNNN NNYN SELECT

Figure 3-11 Row value expressions Explain example 1

Let us look at another example, the row expression shown in Figure 3-12.

Row expressions:

SELECT L_DISCOUNT, SUM(L_QUANTITY),
       AVG(L_QUANTITY*L_EXTENDEDPRICE)
FROM LINEITEM
WHERE L_ORDERKEY BETWEEN 10220001 AND 1030000
AND L_QUANTITY > 10
AND (L_SUPPKEY, L_PARTKEY) IN
    (SELECT PS_SUPPKEY, PS_PARTKEY
     FROM PARTSUPP
     WHERE PS_SUPPLYCOST BETWEEN 115 AND 116)
GROUP BY L_DISCOUNT;

Row expressions Explain:

QB MT TABLE AT MC IXO
1 0 LINEITEM N 3 Y
1 3
2 0 PARTSUPP I 0 Y
2 3

Figure 3-12 Row value expression and Explain example 2

This query can be rewritten using EXISTS, or using a join, as shown in Figure 3-13. Here the
rewrite can significantly alter the access path, as shown in Figure 3-12. Performance in these
queries depends on the number of rows that are qualified from the PARTSUPP table. If the
number of qualified rows from the subquery is large, the row expression generally tends to
have worse performance.



Rewritten to correlated EXISTS:

   SELECT L_DISCOUNT, SUM(L_QUANTITY),
          AVG(L_QUANTITY*L_EXTENDEDPRICE)
   FROM LINEITEM
   WHERE L_ORDERKEY BETWEEN 10220001 AND 1030000
   AND L_QUANTITY > 10
   AND EXISTS (SELECT * FROM PARTSUPP
               WHERE L_SUPPKEY = PS_SUPPKEY
               AND L_PARTKEY = PS_PARTKEY
               AND PS_SUPPLYCOST BETWEEN 115 AND 116)
   GROUP BY L_DISCOUNT;

   Explain (correlated EXISTS):

   QB MT TABLE    AT MC IXO
   1  0  LINEITEM I  1  N
   1  3
   2  0  PARTSUPP I  3  Y
   1  3

Rewritten to JOIN:

   SELECT L_DISCOUNT, SUM(L_QUANTITY),
          AVG(L_QUANTITY*L_EXTENDEDPRICE)
   FROM LINEITEM, PARTSUPP
   WHERE L_ORDERKEY BETWEEN 10220001 AND 1030000
   AND L_QUANTITY > 10
   AND L_SUPPKEY = PS_SUPPKEY
   AND L_PARTKEY = PS_PARTKEY
   AND PS_SUPPLYCOST BETWEEN 115 AND 116
   GROUP BY L_DISCOUNT;

   Explain (join):

   QB MT TABLE    AT MC IXO
   1  0  LINEITEM I  1  N
   1  1  PARTSUPP I  3  Y
   1  3
Figure 3-13 Rewritten query and Explain example 2

3.3.6 Conclusions
Row expressions can make a query easier to read and understand. Performance can vary.

3.3.7 Recommendations
Always compare the access paths with and without row expressions so that you are aware of
any performance implications.
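A convenient way to make that comparison is to EXPLAIN both formulations under different
query numbers and then look at the corresponding PLAN_TABLE rows; a sketch, reusing the
queries of Figure 3-11 (the query numbers are arbitrary):

   EXPLAIN PLAN SET QUERYNO = 101 FOR
     SELECT * FROM SYSIBM.SYSINDEXES T1
     WHERE T1.TBCREATOR = 'PAOLOR2' AND T1.TBNAME = 'ACCOUNT';

   EXPLAIN PLAN SET QUERYNO = 102 FOR
     SELECT * FROM SYSIBM.SYSINDEXES T1
     WHERE (T1.TBCREATOR, T1.TBNAME) = ('PAOLOR2', 'ACCOUNT');

   SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
          ACCESSTYPE, MATCHCOLS, ACCESSNAME
   FROM PLAN_TABLE
   WHERE QUERYNO IN (101, 102)
   ORDER BY QUERYNO, QBLOCKNO, PLANNO;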

3.4 Correlated subquery to join transformation


Prior to DB2 V7, DB2 could transform a subquery (either correlated or non-correlated) into a
join only if a number of conditions were met:
򐂰 The subquery select list contains a single column guaranteed (by a unique index) to be
unique.
򐂰 The subquery appears in the WHERE clause of a SELECT statement.
򐂰 The subquery does not contain GROUP BY, HAVING, or column functions.
򐂰 The subquery has only one table in the FROM clause.
򐂰 The comparison operator of the predicate containing the subquery is IN, = ANY, or
= SOME.
򐂰 For a non-correlated subquery, the left side of the predicate is a single column with the
same data type and length as the single column returned by the subquery (for a correlated
subquery the left side can be any expression).
򐂰 The transformation will not result in more than 15 tables in the join.

DB2 V7 adds some additional situations when transformation can take place, for correlated
subqueries only (for non-correlated subqueries, all conditions continue to apply):
򐂰 The requirement for the single column returned by the subquery to be unique is removed.
򐂰 The EXISTS predicate is supported.
򐂰 A subquery in a WHERE clause can be transformed, not only when the outer statement is
a SELECT, but also when it is a searched UPDATE or DELETE statement.



Transformation for correlated subqueries in these new cases can only happen if the
correlation predicate is an equal predicate (of the form T1.C1 = T2.C2). If the correlation
predicate uses other operators, such as > and <, then the subquery will not be transformed.
There is no transformation for a subquery in the SET clause of an UPDATE statement.

The transformation is most likely to provide performance benefits when the result table in the
subquery is much smaller than the outer table. The join gives DB2 the opportunity to process
the smaller table first.

Figure 3-14 shows the definition of the tables used in the following examples. The table CUST
is a Customer table. The table ORD is an Order table connected to the CUST table by RI.

Table CUST (400 rows):

   CUST_NO INTEGER NOT NULL,
   CUST_NAME CHAR(20),
   TOWN CHAR(20),
   CRED_LIM DEC(7,2),
   PRIMARY KEY (CUST_NO)

   Index XCUSTNO: unique index on CUST(CUST_NO)

Table ORD (160 rows):

   ORD_NO INTEGER NOT NULL,
   CUST_NO INTEGER,
   ORD_DATE DATE,
   AMOUNT DECIMAL(7,2),
   PRIMARY KEY (ORD_NO),
   FOREIGN KEY ORD_CUST (CUST_NO)
     REFERENCES CUST
     ON DELETE RESTRICT

   Index XOCNO:  non-unique index on ORD(CUST_NO)
   Index XODATE: non-unique index on ORD(ORD_DATE)

Figure 3-14 Table and index definitions for correlated subquery examples

3.4.1 Removal of uniqueness constraint


Let us look first at the removal of the uniqueness requirement. Example 1 in Figure 3-15
shows a query in which the uniqueness of the subquery column is not guaranteed by a
unique index. The CUST_NO column in the ORD table is not unique. In DB2 V6 the
correlated subquery is not transformed into a join. In DB2 V7 the correlated subquery is
transformed into a nested loop join with the ORD table as the inner table of the join.

In DB2 V7, in order to avoid incorrect results, duplicates must be eliminated when accessing
the ORD table. In Example 1 DB2 does this by retrieving only the first qualifying row when
using the matching index scan on the index XOCNO (an index on the column CUST_NO).



Example 1:

SELECT *
FROM CUST C
WHERE CUST_NO IN
(SELECT CUST_NO
FROM ORD O
WHERE O.AMOUNT = C.CRED_LIM );

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN SELECT
2 1 0 ORD R 0 S N NNNN NNNN CORSUB

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN SELECT
1 2 1 ORD I 1 XOCNO N NNNN NNNN SELECT

Figure 3-15 Subquery transformation — example 1

Example 2 in Figure 3-16 shows the same query as before, but with a restrictive predicate in
the subquery (ORD_DATE < ‘1999-01-01’) which reduces the size of the subquery result set.
In DB2 V7 the query is again transformed into a nested loop join, but this time the ORD table
is the outer table. It is accessed using the index XODATE which is an index on the
ORD_DATE column.

This shows one of the advantages of the transformation. It gives DB2 flexibility to decide in
which order to process the tables. Once again, duplicate CUST_NO values must be
eliminated from the ORD table to avoid incorrect results. In Example 2 this is done by sorting
to eliminate the duplicate CUST_NO values. This is shown in the Explain by SORTC_UNIQ =
SORTC_JOIN = ‘Y’ in the join row.

Example 2:

SELECT *
FROM CUST C
WHERE CUST_NO IN
      (SELECT CUST_NO
       FROM ORD O
       WHERE O.AMOUNT = C.CRED_LIM
       AND O.ORD_DATE < '1999-01-01');

Version 6 Explain:

QB PN MT TABLE AT MC INDEX    PF IXO SORTN SORTC JT QBT
1  1  0  CUST  R  0           S  N   NNNN  NNNN     SELECT
2  1  0  ORD   I  1  XODATE      N   NNNN  NNNN     CORSUB

Version 7 Explain:

QB PN MT TABLE AT MC INDEX    PF IXO SORTN SORTC JT QBT
1  1  0  ORD   I  1  XODATE      N   NNNN  NNNN     SELECT
1  2  1  CUST  I  1  XCUSTNO     N   NNNN  YYNN     SELECT

Figure 3-16 Subquery transformation — example 2



3.4.2 Support for EXISTS predicate
Example 3 in Figure 3-17 shows the transformation of a correlated subquery using an
EXISTS predicate.

Example 3:

SELECT *
FROM CUST C
WHERE EXISTS
(SELECT 1
FROM ORD O
WHERE O.CUST_NO = C.CUST_NO);

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN SELECT
2 1 0 ORD I 1 XOCNO Y NNNN NNNN CORSUB

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN SELECT
1 2 1 ORD I 1 XOCNO Y NNNN NNNN SELECT

Figure 3-17 Subquery transformation — example 3

3.4.3 Searched UPDATE and DELETE support


Now let us look at subquery transformations for searched UPDATE and DELETE statements.
Figure 3-18 shows an UPDATE statement with a correlated subquery in the WHERE clause.
In DB2 V7 it is transformed into a nested loop join. Once again, DB2 eliminates duplicates by
retrieving only the first qualifying row when using the matching index scan on the index
XOCNO.

UPDATE CUST C
SET CRED_LIM = CRED_LIM * 1.1
WHERE CUST_NO IN
(SELECT CUST_NO
FROM ORD O
WHERE O.AMOUNT = C.CRED_LIM);

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN UPDATE
2 1 0 ORD R 0 S N NNNN NNNN CORSUB

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN UPDATE
1 2 1 ORD I 1 XOCNO N NNNN NNNN UPDATE

Figure 3-18 Subquery transformation for UPDATE



Figure 3-19 shows a similar transformation for a DELETE statement.

DELETE FROM CUST C


WHERE CUST_NO IN
(SELECT CUST_NO
FROM ORD O
WHERE O.AMOUNT = C.CRED_LIM);

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN DELETE
2 1 0 ORD R 0 S N NNNN NNNN CORSUB

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST R 0 S N NNNN NNNN DELETE
1 2 1 ORD I 1 XOCNO N NNNN NNNN DELETE

Figure 3-19 Subquery transformation for DELETE

3.4.4 Subqueries that will still not be transformed in DB2 V7


The following subqueries will not be transformed into joins, for the reasons given.

This query is not transformed because of the column function in the subquery select list:
SELECT * FROM ORD O
WHERE CUST_NO IN
(SELECT MAX(CUST_NO)
FROM CUST C
WHERE TOWN = 'MANCHESTER')

The uniqueness requirement has only been removed for correlated subqueries. Therefore the
following query is not transformed because the subquery is not a correlated subquery and the
CUST_NO column in the ORD table is not unique:
SELECT *
FROM CUST C
WHERE CUST_NO IN
(SELECT CUST_NO
FROM ORD O
WHERE ORD_NO = 10 )

The following query is not transformed because the correlation predicate is not an equal
predicate, but a greater than predicate:
SELECT *
FROM CUST C
WHERE CUST_NO IN
(SELECT CUST_NO
FROM ORD O
WHERE O.AMOUNT > C.CRED_LIM )



The following update is not transformed because the subquery is not in the WHERE clause
(and also because of the use of the MAX function in the subquery):
UPDATE CUST C
SET CRED_LIM = (SELECT MAX(AMOUNT)
FROM ORD O
WHERE O.CUST_NO = C.CUST_NO )

3.4.5 Performance
A performance test was run using the SQL shown in Figure 3-20. In DB2 V6 the access path
was a table space scan for the outer table T1 and a matching index scan on the correlation
column C1 of table T2 for the subquery. In DB2 V7 this was transformed to a nested loop join
with T2 as the outer table accessed via a matching index scan on the column C3. Table T1
was joined via a matching index scan on the correlation column C1.

SELECT C1, C2, C3


FROM T1 X
WHERE EXISTS
(SELECT C1
FROM T2
WHERE T2.C3 = 'A' AND T2.C1 = X.C1)

X1C1 is an index on table T1 column C1


T1 has 624473 rows. X2C1 is an index on table T2 column C1
T2 has 1654700 rows of which 23 have C3 = 'A' X2C3 is an index on table T2 column C3
after duplicate C1 values are removed
Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 T1 R 0 S N NNNN NNNN SELECT
2 1 0 T2 I 1 X2C1 N NNNN NNNN CORSUB

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 T2 I 1 X2C3 N NNNN NNNN SELECT
1 2 1 T1 I 1 X1C1 N NNNN NNNN SELECT

Figure 3-20 Subquery transformation performance test for select

The test was repeated a number of times. The total measurements are shown in Table 3-8.
Table 3-8 Subquery transformation select performance

Elapsed (sec) CPU (sec) Data getpage Index getpage

V6 227 15.6 216046 18608

V7 2.8 0.04 146 85

Delta -98.8% -99.7% -99.9% -99.5%



An update test was also run as shown in Figure 3-21. The results are shown in Table 3-9.

UPDATE T1 X
SET C2 = C2 + 1
WHERE C3 IN
(SELECT C2
FROM T2
WHERE T2.C3 = 'A' AND T2.C1 = X.C1)

X1C1 is an index on table T1 column C1


T1 has 624473 rows. X2C1 is an index on table T2 column C1
T2 has 1654700 rows of which 23 have C3 = 'A' X2C3 is an index on table T2 column C3
after duplicate C1 values are removed
Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 T1 R 0 S N NNNN NNNN UPDATE
2 1 0 T2 I 1 X2C1 N NNNN NNNN CORSUB

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 T2 I 1 X2C3 N NNNN NNNN UPDATE
1 2 1 T1 I 1 X1C1 N NNNN YYNN UPDATE

Figure 3-21 Subquery transformation performance test for update

Table 3-9 Subquery transformation update performance

Elapsed (sec) CPU (sec) Data getpage Index getpage

V6 225 17.24 216020 18591

V7 0.4 0.02 142 76

Delta -99.8% -99.9% -99.9% -99.6%

3.4.6 Restrictions and potential problems


There is no parallelism support for a correlated subquery that has been transformed into a
join. Let us look at the Select performance measurements again, and include a V6 case with
parallelism of degree 7. The results are shown in Table 3-10. We can see that transformation
has reduced the elapsed time more than parallelism did, but has also reduced CPU cost
(which increased with parallelism).
Table 3-10 Subquery transformation and parallelism

Elapsed (sec) CPU (sec) Data getpage Index getpage

V6 seq. 227 15.6 216046 18608

V6 parallel, degree 7 72.6 16.72

V7 2.8 0.04 146 85



It is possible that, in some cases, this new feature will degrade performance, primarily for
dynamic SQL. That is most likely to happen when the outer query returns a small number of
rows and the subquery returns a large number of rows. In such a case, transformation
provides no performance benefit, but there is additional overhead to Prepare the statement
with transformation activated. If this is a problem, you can implement APAR PQ45052. This
APAR provides the possibility of disabling correlated subquery transformation when so
advised by IBM support.

3.4.7 Conclusions
Significant performance benefits can result for SQL statements that contain correlated
subqueries. The benefit is most apparent for subqueries which return small result sets while
the outer statement processes a large number of rows. The bigger the outer table result
relative to the subquery result, the better the performance improvement.

3.4.8 Recommendations
Consider rebinding packages containing SQL statements which might benefit from
transformation. These SQL statements will contain correlated subqueries that return small
result sets.
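If the packages were bound with EXPLAIN(YES), the existing PLAN_TABLE rows can help to
locate them; a sketch:

   -- Statements whose current access path contains a correlated subquery block
   SELECT DISTINCT COLLID, PROGNAME, QUERYNO
   FROM PLAN_TABLE
   WHERE QBLOCK_TYPE = 'CORSUB'
   ORDER BY COLLID, PROGNAME, QUERYNO;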

3.5 UPDATE/DELETE with self-referencing subselect


DB2 V7 allows searched UPDATE and DELETE statements to use the target tables within
subqueries in the WHERE or SET clauses. This enhancement does not support UPDATE and
DELETE WHERE CURRENT OF when the cursor definition contains a self referencing
subquery.

For example, the UPDATE statement in Figure 3-22 targets the table CUST. The WHERE
clause in this statement uses a subquery to get the maximum credit limit for all accounts in
the CUST table. Figure 3-23 shows an UPDATE with a self-referencing subquery in the SET
clause.

UPDATE CUST
SET CRED_LIM = CRED_LIM * 1.1
WHERE CRED_LIM <
(SELECT MAX(CRED_LIM)
FROM CUST )

Figure 3-22 UPDATE with self-referencing subquery in WHERE clause

UPDATE CUST C1
SET CRED_LIM = (SELECT MAX(CRED_LIM)
FROM CUST C2
WHERE C2.TOWN = C1.TOWN)

Figure 3-23 UPDATE with self-referencing subquery in SET clause



In previous versions of DB2, this self-referencing would not be allowed. The following -118
SQLCODE would be returned, and the statement would fail:
-118 THE OBJECT TABLE OR VIEW OF THE DELETE OR UPDATE STATEMENT IS ALSO IDENTIFIED IN A
FROM CLAUSE

DB2 now will accept the syntax of this statement, and the SQLCODE -118 will not be
returned.

3.5.1 Executing the self-referencing UPDATE/DELETE


In order to maintain data integrity, this enhancement requires that the subquery is evaluated
completely before any updates or deletes take place.

For a non-correlated subquery, the subquery is evaluated once and the result produced; the
WHERE clause is then tested, and any qualifying rows are updated or deleted.

Two-step processing of the outer statement may be required for correlated subqueries. The
first step creates a work file and inserts the RID for a delete or RID and column value for an
update. After all rows of the outer statement have been processed, the second step reads the
work file. For each row in the work file, DB2 repositions on the record in the base table
pointed to by the RID, and then either updates or deletes the row as required.

This two-step processing is not shown in the Explain output. Two-step processing will be used
for an UPDATE whenever a column that is being updated is also referenced in the WHERE
clause of the UPDATE or is used in the correlation predicate of the subquery. This is shown in
the examples in Figure 3-24. For a DELETE statement, two-step processing will always be
used for a correlated subquery.

Example 1 - two-step processing will occur because the updated column appears in the
WHERE clause of the UPDATE:

   UPDATE CUST C1
   SET CRED_LIM = CRED_LIM * 1.1
   WHERE CRED_LIM < (SELECT MAX(CRED_LIM)
                     FROM CUST C2
                     WHERE C2.TOWN = C1.TOWN);

Example 2 - two-step processing will occur because the updated column appears in the
correlation predicate of the subquery:

   UPDATE ORD O1
   SET CUST_NO = CUST_NO + 1000
   WHERE AMOUNT < (SELECT MAX(AMOUNT)
                   FROM ORD O2
                   WHERE O2.CUST_NO = O1.CUST_NO);

Example 3 - two-step processing will NOT occur because the updated column does not appear
either in the WHERE clause of the UPDATE or in the correlation predicate of the subquery:

   UPDATE CUST C1
   SET CRED_LIM = CRED_LIM * 1.1
   WHERE EXISTS (SELECT 1
                 FROM CUST C2
                 WHERE TOWN = 'MANCHESTER'
                 AND C2.CUST_NO = C1.CUST_NO);

Figure 3-24 Two-step processing for an UPDATE



3.5.2 Restrictions on usage
DB2 positioned updates and deletes will still return the SQLCODE -118 if a subquery in the
WHERE clause references the table being updated or deleted from. See Figure 3-25 for an
example.

EXEC SQL DECLARE CURSOR C1 CURSOR FOR


SELECT T1.CUST_NO, T1.CUST_NAME, T1.CRED_LIM
FROM CUST T1
WHERE T1.CREDIT_LIMIT < (SELECT AVG(T2.CRED_LIM)
FROM CUST T2)
FOR UPDATE OF T1.CRED_LIM;
.
.
EXEC SQL OPEN C1;
.
.
EXEC SQL FETCH C1 INTO :HV-CNO, :HV-CNAME, :HV-CRDLMT;
.
.
EXEC SQL UPDATE CUST
SET CRED_LIM = CRED_LIM * 1.1
WHERE CURRENT OF C1;            <-- SQLCODE -118 at Bind time

Figure 3-25 Positioned update with self-referencing subquery not supported

3.5.3 Performance
The only performance issues relate to the two-step processing of a correlated subquery. A
simple test was run. The test compared an update using a correlated subquery accessing a
different table to a similar update where the correlated subquery referenced the same table as
the update. These figures are not from measurements in a fully controlled environment (DB2
was dedicated, not the hardware), but they are indicative of relative cost. The two update
statements and their Explain outputs are shown in Figure 3-26. The table CUST2 is identical
in every way to the table CUST.

Query 1 - not self-referencing:

   UPDATE CUST C1
   SET CRED_LIM = CRED_LIM * 1.1
   WHERE CRED_LIM < (SELECT MAX(CRED_LIM)
                     FROM CUST2 C2
                     WHERE C2.TOWN = C1.TOWN);

   QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT CFE QBT
   1  1  0  CUST  R  0        S  N   0  T  NNNN  NNNN         UPDATE
   2  1  0  CUST2 R  0        S  N   1  T  NNNN  NNNN      R  CORSUB

Query 2 - self-referencing:

   UPDATE CUST C1
   SET CRED_LIM = CRED_LIM * 1.1
   WHERE CRED_LIM < (SELECT MAX(CRED_LIM)
                     FROM CUST C2
                     WHERE C2.TOWN = C1.TOWN);

   QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT CFE QBT
   1  1  0  CUST  R  0        S  N   0  T  NNNN  NNNN         UPDATE
   2  1  0  CUST  R  0        S  N   1  T  NNNN  NNNN         CORSUB

Figure 3-26 Self-referencing UPDATE performance test



Each table (CUST and CUST2) contained 160 rows. There were 16 rows for each of the 10
different values of the column TOWN, of which 15 had less than the maximum value of
CRED_LIM for that TOWN value. The performance figures related to the getpage and update
counts from the Accounting trace are shown in Table 3-11. BP1 contained both tables and
BP7 contained the work table spaces in DSNDB07.
Table 3-11 Self-referencing UPDATE test results
Test BP1 getpage BP1 update BP7 getpage BP7 update

Non-self-reference 987 224 0 0

Self-reference 1063 224 4 4

Delta +76 0 +4 +4

The extra activity in BP7 for the self-referencing test is caused by creating a work file
containing the RIDs and new column values for the 150 rows to be updated. That work file is
then read to apply the updates. The extra getpages to BP1 are caused by this second
updating step. The rows in the CUST table are mapped two per page (MAXROWS 2).
Therefore 150 rows, if randomly distributed, would be held on 75 pages. The actual increase
in BP1 getpages is 76.

Perhaps a more realistic comparison is to replace the non-self-referencing case with one that
copies the rows into the CUST2 table before doing the update using that CUST2 table in the
subquery. Let us measure the following INSERT:
INSERT INTO CUST2
SELECT * FROM CUST

If we do this immediately followed by the update shown as Query 1 in Figure 3-26 on page 47,
we get the results shown in Table 3-12. The self-referencing update performs better overall in
this case.
Table 3-12 Self-referencing UPDATE versus INSERT and non-self-referencing update
Test BP1 getpage BP1 update BP7 getpage BP7 update

Insert + non-self-reference 1080 552 0 0

Self-reference 1063 224 4 4

Delta -17 -328 +4 +4

3.5.4 Conclusions
This enhancement is primarily functional. There may be some performance penalty for using
a self-referencing correlated subquery because of the two-step processing.

3.5.5 Recommendations
Use this feature where it is needed. Be aware of the cost of two-step processing when using a
correlated subquery, although this will probably still perform better than the equivalent
solution without a self-referencing update.



3.6 UNION everywhere
Prior to DB2 V7, the use of UNIONs was limited to only those places which accepted a
fullselect clause. This meant that UNIONs were basically limited to the SELECT statement.
UNIONs could not be used to create a VIEW, in table expressions, in predicates, or in
INSERT and UPDATE statements.

DB2 V7 has expanded the use of unions to anywhere that a subselect clause was valid in
previous versions of DB2. This enhancement extends compatibility with other members of the
DB2 UDB family and complies with the SQL99 standard.

Wherever a subselect was supported in DB2 V6, it can be replaced by a fullselect in DB2 V7.
It is now possible to code a UNION or UNION ALL in:
򐂰 A view
򐂰 A table expression
򐂰 A predicate
򐂰 An INSERT statement
򐂰 The SET clause of an UPDATE statement
򐂰 A declared temporary table
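As a small sketch of the INSERT case, using the CUST table from the examples in this
chapter and a CUST2 table assumed to have the same columns:

   -- A fullselect containing UNION can now feed an INSERT
   INSERT INTO CUST2 (CUST_NO, CUST_NAME, TOWN, CRED_LIM)
     SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
     FROM CUST
     WHERE TOWN = 'MANCHESTER'
     UNION
     SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
     FROM CUST
     WHERE CRED_LIM > 1000;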

3.6.1 UNION syntax changes


The syntax of the CREATE VIEW is shown in Figure 3-27.

V6 CREATE VIEW syntax:

   CREATE VIEW view-name [( column-name, ... )]
     AS subselect
     [WITH [CASCADED | LOCAL] CHECK OPTION]

V7 CREATE VIEW syntax:

   CREATE VIEW view-name [( column-name, ... )]
     AS fullselect
     [WITH [CASCADED | LOCAL] CHECK OPTION]

Figure 3-27 CREATE VIEW syntax



An example is shown in Figure 3-28.

CREATE VIEW CUST_X AS


SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
FROM CUST
WHERE TOWN = 'MANCHESTER'
UNION ALL
SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
FROM CUST
WHERE CRED_LIM > 1000
UNION ALL
SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
FROM CUST
WHERE CUST_NAME LIKE 'S%' ;

V7: OK

V6: DSNT408I SQLCODE = -154, ERROR: THE STATEMENT IS INVALID BECAUSE THE VIEW OR
TABLE DEFINITION CONTAINS A UNION, A UNION ALL, OR A REMOTE OBJECT

Figure 3-28 UNION in a view

The syntax of a table expression is shown in Figure 3-29.

V6 table-spec:

     table-name [correlation-clause]
   | view-name [correlation-clause]
   | table-locator-reference
   | [TABLE] ( subselect ) correlation-clause
   | table-function-reference
   | joined-table

V7 table-spec:

     table-name [correlation-clause]
   | view-name [correlation-clause]
   | table-locator-reference
   | [TABLE] ( fullselect ) correlation-clause
   | table-function-reference
   | joined-table
Figure 3-29 Table expression syntax



An example is shown in Figure 3-30.

SELECT *
FROM (SELECT * FROM CUST
WHERE TOWN = 'MANCHESTER'
UNION ALL
SELECT * FROM CUST
WHERE CRED_LIM > 1000 ) AS T
WHERE CUST_NO < 1000
ORDER BY CUST_NO

V7: OK

V6: DSNT408I SQLCODE = -199, ERROR: ILLEGAL USE OF KEYWORD UNION, TOKEN ) WAS
EXPECTED

Figure 3-30 UNION in a table expression

The syntax for a basic predicate is shown in Figure 3-31.

   expression {= | <> | < | > | <= | >=} expression
   expression {= | <> | < | > | <= | >=} (fullselect)     <-- changed from subselect

   ( expression2, ... ) {= | <>} ( expression2, ... )

Basic predicates

Figure 3-31 Basic predicate syntax

The syntax for a quantified predicate is shown in Figure 3-32.

   expression1 {= | <> | < | > | <= | >=} {SOME | ANY | ALL} (fullselect1)     <-- changed from subselect

   ( expression2, ... ) = {SOME | ANY} (fullselect2)
   ( expression2, ... ) {<> | =} ALL (fullselect2)

Quantified predicate

Figure 3-32 Quantified predicate syntax



The syntax for an IN predicate is shown in Figure 3-33.

   expression [NOT] IN (fullselect)     <-- changed from subselect
   expression [NOT] IN ( expression, ... )

   ( expression, ... ) [NOT] IN (fullselect)

IN predicate

Figure 3-33 IN predicate syntax

The syntax for an EXISTS predicate is shown in Figure 3-34.

   EXISTS (fullselect)     <-- changed from subselect

EXISTS predicate

Figure 3-34 EXISTS predicate syntax

An SQL example of a UNION in a predicate is shown in Figure 3-35.

SELECT *
FROM ORD O
WHERE CUST_NO IN
      (SELECT CUST_NO FROM CUST
       WHERE TOWN = 'MANCHESTER'
       UNION ALL
       SELECT CUST_NO FROM CUST
       WHERE CRED_LIM > 1000)
ORDER BY CUST_NO, ORD_NO

V7: OK

V6 SPUFI: DSNT408I SQLCODE = -084, ERROR: UNACCEPTABLE SQL STATEMENT

Figure 3-35 UNION in a predicate example



The syntax for the DECLARE TEMPORARY TABLE, extended to support a fullselect, and
therefore a UNION, is shown in Figure 3-36.

   DECLARE GLOBAL TEMPORARY TABLE table-name
      ( column-spec, ... )
    | LIKE table-name | view-name [INCLUDING IDENTITY COLUMN ATTRIBUTES]
    | AS ( fullselect ) DEFINITION ONLY     <-- changed from subselect
    [AS-attribute]

DECLARE GLOBAL TEMPORARY TABLE

Figure 3-36 DECLARE GLOBAL TEMPORARY TABLE syntax

3.6.2 Optimization
DB2 uses two techniques to optimize access paths containing UNIONs. These are:
򐂰 Distribution of predicates, joins, and aggregation.
򐂰 Pruning of subqueries.

Here is the precise order in which these operations are done:


1. Distribute the predicates.
2. Prune the subselects.
3. Distribute the joins.
4. Distribute the aggregations.

An example of using the first two of these operations is shown in Figure 3-37. The predicates
from the query WHERE clause are distributed to the WHERE clauses of the individual
subqueries. Then pruning removes the subquery with the predicate:
WHERE TOWN = 'MANCHESTER'
AND TOWN IN ('LONDON', 'LEEDS')

This is because the predicate will always evaluate to False. Any subselect where the
predicates, after predicate distribution, always evaluate to False will be removed by pruning.



View definition and original query:

   CREATE VIEW CUST_X AS
     SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
     FROM CUST
     WHERE TOWN = 'MANCHESTER'
     UNION ALL
     SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
     FROM CUST
     WHERE CRED_LIM > 1000
     UNION ALL
     SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
     FROM CUST
     WHERE CUST_NAME LIKE 'S%';

   SELECT *
   FROM CUST_X
   WHERE TOWN IN ('LONDON', 'LEEDS');

After predicate distribution:

   SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
   FROM CUST
   WHERE TOWN = 'MANCHESTER'
   AND TOWN IN ('LONDON', 'LEEDS')
   UNION ALL
   SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
   FROM CUST
   WHERE CRED_LIM > 1000
   AND TOWN IN ('LONDON', 'LEEDS')
   UNION ALL
   SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
   FROM CUST
   WHERE CUST_NAME LIKE 'S%'
   AND TOWN IN ('LONDON', 'LEEDS');

After subquery pruning:

   SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
   FROM CUST
   WHERE CRED_LIM > 1000
   AND TOWN IN ('LONDON', 'LEEDS')
   UNION ALL
   SELECT CUST_NO, CUST_NAME, TOWN, CRED_LIM
   FROM CUST
   WHERE CUST_NAME LIKE 'S%'
   AND TOWN IN ('LONDON', 'LEEDS');

Figure 3-37 UNION optimization

The Explain output is shown in Figure 3-38. The first Explain example has the rows ordered in
the normal way, in ascending query block number. It is easier to see what is happening if we
re-sequence the rows as shown in the second Explain example, so that the parent query
block comes before its dependent query blocks.

When UNION optimization takes place, the QBLOCKNO values are not indicative of
dependencies between the query blocks. You must use the PARENT_QBLOCKNO (PQ) values
to identify dependencies between query blocks. In our example, both query block 1 and query
block 5 are children of query block 2. For a description of the headings used in the Explain
output, see Table 3-2 on page 27.



SELECT *
FROM CUST_X
WHERE TOWN IN ('LONDON', 'LEEDS');

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT QBT


1 1 0 CUST R 0 S N 2 T NNNN NNNN NCOSUB
2 1 0 0 0 W NNNN NNNN SELECT
5 1 0 CUST R 0 S N 2 T NNNN NNNN NCOSUB

Resequence rows into parent/child order

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT QBT


2 1 0 0 0 W NNNN NNNN SELECT
1 1 0 CUST R 0 S N 2 T NNNN NNNN NCOSUB
5 1 0 CUST R 0 S N 2 T NNNN NNNN NCOSUB

Note new V7 PLAN_TABLE columns:


PQ = PARENT_QBLOCK (QBLOCKNO of parent query block)
TT = TABLE_TYPE (F = Table Function, Q = Temporary intermediate result table (not materialized),
T = Table, W = Workfile)

Figure 3-38 Explain output for UNION optimization

In this example, we can see the effect of pruning. There are only two component subqueries.
The Explain row with QBLOCK_TYPE (QBT) set to ‘SELECT’ does not represent any actual
work when the query executes. It is merely documentation showing that the two component
queries are combined with a UNION ALL.

For static SQL, the pruning shown here is performed at Bind time. It depends on the fact that
both the predicates in the view (or table expression) and the predicates in the main SELECT
are specified as literal constants. If host variables are used, then DB2 cannot determine at
Bind time that the combination of predicates will always evaluate to False.

Some pruning can be deferred until execution time if host variables are used. The pruning
takes place once the contents of the host variables are known. This will not show in the
Explain output.
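For example, in the following sketch against the CUST_X view (the host variable names are
hypothetical), no branch can be pruned at bind time, but at execution, once the host
variable contents are known, branches whose combined predicates can never be true are
skipped:

   SELECT *
   FROM CUST_X
   WHERE TOWN IN (:HV-TOWN1, :HV-TOWN2);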

Now let us look at a simpler example, shown in Figure 3-39, where no pruning takes place.

SELECT *
FROM CUST_X
ORDER BY CUST_NO

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT QBT


1 1 0 CUST_X R 0 S N 0 W NNNN NNNN SELECT
1 2 3 0 N 0 -- NNNN NNYN SELECT
2 1 0 0 1 W NNNN NNNN UNIONA
3 1 0 CUST R 0 S N 2 T NNNN NNNN NCOSUB
4 1 0 CUST R 0 S N 2 T NNNN NNNN NCOSUB
5 1 0 CUST R 0 S N 2 T NNNN NNNN NCOSUB

Figure 3-39 UNION without pruning



We do not need to resequence the Explain rows this time. Query blocks 3, 4, and 5, are the
three component subqueries. Query block 2 documents that they are combined using UNION
ALL and written to a work file. Query block 1 shows that the rows are read from that workfile
and sorted into ORDER BY sequence.

Now let us look at the distribution of joins and aggregations. Figure 3-40 shows a join using
the AVG function against the view containing UNIONs.

When the Explain rows are reordered, we can see that the join has been distributed to the
individual component subqueries. These are then written to a workfile and sorted for the
GROUP BY.

The TABLE_TYPE (TT) column can contain either ‘W’ or ‘Q’ when a temporary result table is
processed. The value ‘W’ indicates that the result of a UNION or UNION ALL is materialized
into a Logical Work File (LWF) and subsequently read from that LWF. The value ‘Q’ indicates
that the result of the UNION or UNION ALL is fed directly to the parent Query Block without
materialization.

SELECT X.CUST_NO, AVG(O.AMOUNT)


FROM CUST_X X INNER JOIN ORD O
ON X.CUST_NO = O.CUST_NO
GROUP BY X.CUST_NO
ORDER BY X.CUST_NO

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC CFE QBT


1 1 0 ORD I 0 XOCNO S N 2 T NNNN NNNN NCOSUB
1 2 4 CUST I 1 XCUSTNO L N 2 T NNNN NNNN NCOSUB
2 1 0 0 6 W NNNN NNNN UNIONA
6 1 0 DSNWFQB(02) R 0 S N 0 W NNNN NNNN SELECT
6 2 3 0 N 0 -- NNNN NNNY S SELECT
7 1 0 ORD I 0 XOCNO S N 2 T NNNN NNNN NCOSUB
7 2 4 CUST I 1 XCUSTNO L N 2 T NNNN NNNN NCOSUB
8 1 0 ORD I 0 XOCNO S N 2 T NNNN NNNN NCOSUB
8 2 4 CUST I 1 XCUSTNO L N 2 T NNNN NNNN NCOSUB

Resequence rows into parent/child order

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC CFE QBT


6 1 0 DSNWFQB(02) R 0 S N 0 W NNNN NNNN SELECT
6 2 3 0 N 0 -- NNNN NNNY S SELECT
2 1 0 0 6 W NNNN NNNN UNIONA
1 1 0 ORD I 0 XOCNO S N 2 T NNNN NNNN NCOSUB
1 2 4 CUST I 1 XCUSTNO L N 2 T NNNN NNNN NCOSUB
7 1 0 ORD I 0 XOCNO S N 2 T NNNN NNNN NCOSUB
7 2 4 CUST I 1 XCUSTNO L N 2 T NNNN NNNN NCOSUB
8 1 0 ORD I 0 XOCNO S N 2 T NNNN NNNN NCOSUB
8 2 4 CUST I 1 XCUSTNO L N 2 T NNNN NNNN NCOSUB

Figure 3-40 Distribution of joins and aggregations

We cannot see how the AVG function is processed from the Explain, but the result of the
internal rewrite will be similar to that shown in Figure 3-41. The CASE expression and the use
of SUM and COUNT values from the component selects are required to ensure correct
handling of null values when calculating the average.

SELECT X.CUST_NO, AVG(O.AMOUNT)
FROM CUST_X X INNER JOIN ORD O
ON X.CUST_NO = O.CUST_NO
GROUP BY X.CUST_NO
ORDER BY X.CUST_NO

SELECT CUST_NO,
CASE WHEN SUM(CNT_X) = 0 THEN NULL
ELSE SUM(SUM_X)/SUM(CNT_X)
END
FROM ( SELECT C1.CUST_NO, SUM(O1.AMOUNT), COUNT(O1.AMOUNT)
FROM CUST C1 INNER JOIN ORD O1
ON C1.CUST_NO = O1.CUST_NO
WHERE C1.TOWN = 'MANCHESTER'
GROUP BY C1.CUST_NO
UNION ALL
SELECT C2.CUST_NO, SUM(O2.AMOUNT), COUNT(O2.AMOUNT)
FROM CUST C2 INNER JOIN ORD O2
ON C2.CUST_NO = O2.CUST_NO
WHERE C2.CRED_LIM > 1000
GROUP BY C2.CUST_NO
UNION ALL
SELECT C3.CUST_NO, SUM(O3.AMOUNT), COUNT(O3.AMOUNT)
FROM CUST C3 INNER JOIN ORD O3
ON C3.CUST_NO = O3.CUST_NO
WHERE C3.CUST_NAME LIKE 'S%'
GROUP BY C3.CUST_NO
) AS X (CUST_NO, SUM_X, CNT_X)
GROUP BY X.CUST_NO
ORDER BY X.CUST_NO

Figure 3-41 Possible query rewrite

3.6.3 Requirements for subquery pruning


When defining a view or a nested table expression that incorporates UNION or UNION ALL
connectors, it may be necessary to code redundant predicates in the view definition or nested
table expression. Consider the example shown in Figure 3-42, where a logical order table is
stored in multiple DB2 tables, with each table holding one month of data. Each table definition
looks like the ORD table shown in Figure 3-14 on page 39. We can define a view on these
tables as follows:
CREATE VIEW ALL_ORD (ORD_NO, CUST_NO, ORD_DATE, AMOUNT) AS
SELECT * FROM MONTH1
UNION ALL
SELECT * FROM MONTH2
UNION ALL
SELECT * FROM MONTH3
UNION ALL
SELECT * FROM MONTH4
...

We can then execute a SELECT against this view which restricts the rows retrieved by
ORD_DATE value:
SELECT * FROM ALL_ORD
WHERE ORD_DATE BETWEEN ‘2001-03-01’ AND ‘2001-03-31’;

Order data in multiple tables, one per month:

   MONTH1 stores ORD_DATE values between 2001-01-01 and 2001-01-31
   MONTH2 stores ORD_DATE values between 2001-02-01 and 2001-02-28
   MONTH3 stores ORD_DATE values between 2001-03-01 and 2001-03-31
   MONTH4 stores ORD_DATE values between 2001-04-01 and 2001-04-30
   and so on ...

Figure 3-42 Multiple ORDER tables, one per month

However, we will not get subquery pruning in this case. We know that the range of
ORD_DATE values in each table is limited, but the DB2 Optimizer does not know this.

In order to get subquery pruning, we must code redundant predicates in the view definition to
tell the Optimizer that the ORD_DATE values are limited for each table. If we change the view
definition as follows, then the SELECT statement can benefit from subquery pruning:
CREATE VIEW ALL_ORD (ORD_NO, CUST_NO, ORD_DATE, AMOUNT) AS
SELECT * FROM MONTH1
WHERE ORD_DATE BETWEEN ‘2001-01-01’ AND ‘2001-01-31’
UNION ALL
SELECT * FROM MONTH2
WHERE ORD_DATE BETWEEN ‘2001-02-01’ AND ‘2001-02-28’
UNION ALL
SELECT * FROM MONTH3
WHERE ORD_DATE BETWEEN ‘2001-03-01’ AND ‘2001-03-31’
UNION ALL
SELECT * FROM MONTH4
WHERE ORD_DATE BETWEEN ‘2001-04-01’ AND ‘2001-04-30’
...

With this revised view definition, the Optimizer can recognize that our SELECT statement
only needs to access the MONTH3 table. The subqueries that access each of the other tables
can be pruned, thus significantly improving performance.

The same is true if we had used a nested table expression in the SELECT instead of a view.
We still need to code the redundant predicates to get subquery pruning:
SELECT * FROM (
SELECT * FROM MONTH1
WHERE ORD_DATE BETWEEN ‘2001-01-01’ AND ‘2001-01-31’
UNION ALL
SELECT * FROM MONTH2
WHERE ORD_DATE BETWEEN ‘2001-02-01’ AND ‘2001-02-28’
UNION ALL
SELECT * FROM MONTH3
WHERE ORD_DATE BETWEEN ‘2001-03-01’ AND ‘2001-03-31’
UNION ALL
SELECT * FROM MONTH4
WHERE ORD_DATE BETWEEN ‘2001-04-01’ AND ‘2001-04-30’
... ) AS ALL_ORD
WHERE ALL_ORD.ORD_DATE BETWEEN ‘2001-03-01’ AND ‘2001-03-31’;

3.6.4 UNION ALL materialization


At the time of writing, there is a restriction when UNION ALL is used in a view or in a nested
table expression. The DB2 Optimizer may materialize the result of the UNION ALL into a work
file. This is not always necessary and may cause some performance impact. The PTF for
APAR PQ47178 solves this problem. This APAR is intended to avoid or reduce materialization
of the result set in cases where this is not necessary.

The APAR should improve performance for queries in which all the following conditions are
true:
- The UNION ALL result table is referenced in the parent Query Block (QB) FROM clause
  as a table expression or view.
- The parent QB references no other tables in the FROM clause after join distribution has
  taken place. In other words, any joins must be distributed by the Optimizer.
- The parent QB needs to perform a DB2 sort on the UNION ALL result table, for example,
  because of a GROUP BY or ORDER BY.
- There are no predicates to be applied to the UNION ALL result table. In other words, all
  predicates must have been distributed.
- Only simple columns (not arithmetic expressions or scalar functions) and simple column
  functions (containing simple columns and no expressions) appear in the select list of the
  parent QB.

If all these conditions are met for a query, the APAR should improve performance. In the
Explain output you will see the TABLE_TYPE change from ‘W’ (workfile materialization) to ‘Q’
(queue without materialization). The query shown in Figure 3-40 on page 56 would meet the
criteria for improvement by the APAR.
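
As a hypothetical example of a query shape that satisfies all of these conditions (it reuses the
ALL_ORD view defined in 3.6.3), the following query should be able to avoid materialization
once the PTF is applied:
SELECT CUST_NO, SUM(AMOUNT)
FROM ALL_ORD
GROUP BY CUST_NO
ORDER BY CUST_NO;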

3.6.5 Performance
A number of tests were performed to compare the effect of accessing one large partitioned
table versus a UNION of 10 smaller non-partitioned tables. The partitioned table had 10
partitions and contained 10 million rows. It had a partitioning index and one non-partitioning
index. The 10 small tables each contained 1 million rows and had two indexes.

The tests selected 10 million rows, 200000 rows and one row respectively from the large table
and from a view which is a UNION ALL of the ten small tables. The Class 2 elapsed and CPU
times in seconds for each test are shown in Table 3-13, Table 3-14, and Table 3-15.

Table 3-13 Select 10 million rows

Target Class 2 Elapsed Class 2 CPU

View of 10 tables 751 206

Large table 777 204

Table 3-14 Select 200 thousand rows

Target Class 2 Elapsed Class 2 CPU

View of 10 tables 19.1 4.9

Large table 18 4.9

Table 3-15 Select one row

Target Class 2 Elapsed Class 2 CPU

View of 10 tables 0.018 0.006

Large table 0.017 0.005

It can be seen that the performance of the view with UNION ALL is comparable to the single
large table in these tests.

3.6.6 Conclusions
This is primarily a usability enhancement. Our tests showed no significant performance
degradation.

3.6.7 Recommendations
Always check the performance impact of using UNION or UNION ALL by Explaining sample
queries. Make sure that the PTFs for APARs PQ47178 and PQ48588 are applied in order to
achieve the best possible performance.

3.7 Scrollable cursors


With the previous versions of DB2, you could only scroll cursors in a forward direction. When
a cursor was opened, it would be positioned before the first row of the result set. When a
FETCH was executed, the cursor would move forward one row.

If you wanted to move backwards in a result set, there were a number of options. One option
was to CLOSE the current cursor and to re-OPEN it with a new starting value. The cursor
would be something like this:
SELECT ORD_NO, CUST_NO, ORD_DATE, AMOUNT
FROM ORD
WHERE ORD_NO >= :START-ORD
ORDER BY ORD_NO;

If you remember the ORD_NO value that you wish to re-position to, you can close the cursor,
move the re-positioning value to the host variable START-ORD and then open the cursor
again. If the access path involves a DB2 sort for the ORDER BY, then this can be an
expensive process as each open re-sorts a subset of the rows.
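
A sketch of the repositioning sequence, assuming the cursor above is named C1 and the
host variable names match the selected columns (the names are ours):
CLOSE C1;
-- move the saved ORD_NO value into :START-ORD, then:
OPEN C1;
FETCH C1 INTO :ORD-NO, :CUST-NO, :ORD-DATE, :AMOUNT;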

A second alternative was to build arrays in the program’s memory. The result set would be
opened and all the rows would be read into the array. The program would then move
backwards and forwards within this array. This option needed to be carefully planned, as it
could waste memory through low utilization of the space, or it could restrict the number of
rows returned to the user to some arbitrary number. It was also hampered by the break
between the actual data in the table and the data in the array. If other users were changing
data, the program would not be aware of this, as there was no fixed relationship between the
array data and the table data once read. If you needed to detect updates made by other users
after the array was populated, the normal procedure was to re-select the row to see if the data
had changed.

DB2 V7 introduces scrollable cursors, which allow scrolling backwards as well as forwards.
They also provide the ability to place the cursor at a specific row within the result set. A
scrollable cursor can be sensitive or insensitive to updates made outside the cursor, by the
same user or by different users.

Scrollable cursors will be of most use for client/server applications and conversational
transactions. CICS and IMS transactions are normally written using a pseudo-conversational
structure, in which the transaction is terminated at each screen I/O. This termination at screen
I/O will prevent the effective use of scrollable cursors because all cursors are closed by
transaction termination.

3.7.1 Functional description


New keywords have been added to both the DECLARE CURSOR and FETCH statements to
support scrollable cursors. The new syntax for DECLARE CURSOR is shown in Figure 3-43,
and for FETCH in Figure 3-44.

DECLARE cursor-name
   [ INSENSITIVE SCROLL | SENSITIVE STATIC SCROLL ]     <-- new in V7
   CURSOR [ WITH HOLD ] [ WITH RETURN ]
   FOR { select-statement | statement-name }

Figure 3-43 DECLARE CURSOR syntax

The DECLARE CURSOR statement specifies that the cursor is scrollable by including the
SCROLL keyword. If SCROLL is specified, then either INSENSITIVE or SENSITIVE STATIC
must also be specified. In DB2 V7, SENSITIVE cursors are always STATIC. SENSITIVE
DYNAMIC scrollable cursors will be supported in a later release of DB2 for z/OS and OS/390.

INSENSITIVE cursors are strictly read-only. You cannot specify FOR UPDATE OF. They will
not be aware of updates made by others.

SENSITIVE STATIC cursors are updatable and may also be made sensitive to updates made
by others.

FETCH
   [ INSENSITIVE | SENSITIVE ]                            <-- new in V7
   [ NEXT | PRIOR | FIRST | LAST | CURRENT | BEFORE | AFTER
     | ABSOLUTE { host-variable | integer-constant }
     | RELATIVE { host-variable | integer-constant } ]    <-- all except NEXT are new in V7
   [ FROM ] cursor-name
   single-fetch-clause

single-fetch-clause:
   INTO host-variable, ...
   | USING DESCRIPTOR descriptor-name

Figure 3-44 Fetch syntax

The FETCH statement can be either INSENSITIVE or SENSITIVE. If the cursor is defined as
INSENSITIVE SCROLL, then only FETCH INSENSITIVE is supported. If the cursor is defined
as SENSITIVE STATIC SCROLL, then a FETCH for the cursor can be either INSENSITIVE or
SENSITIVE.

It is the mode of the FETCH that determines whether or not updates made by others are seen
by the cursor. A SENSITIVE cursor always sees its own updates but, when a specific row is
FETCHed, it is the SENSITIVE or INSENSITIVE specification on the FETCH that determines
whether or not updates to the row made by others will be seen by the FETCH.

Scrollable cursors make use of the Declared Temporary Tables (DTTs) introduced in DB2 V6.
When the cursor is opened, the result set is always copied to an internally defined DTT as
shown in Figure 3-45. In order to use scrollable cursors on a DB2 subsystem, a TEMP
database must be defined containing one or more segmented table spaces.

During opening of the cursor, locks are taken as normal on the base table while rows are read
and copied to the DTT. Once the open cursor processing is completed, there will be no page
or row locks left on the base table unless you run with isolation RR or RS. After the open, the
cursor is positioned just before the first row in the result set, the same as for a non-scrollable
cursor.

A FETCH INSENSITIVE will return the appropriate row from the DTT. A FETCH SENSITIVE
will first refresh the row in the DTT from the base table in order to pick up any committed
updates.
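
A minimal sketch of a sensitive static scrollable cursor, using the ORD table and hypothetical
host variable names, that contrasts the two fetch modes:
DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR
   SELECT ORD_NO, AMOUNT
   FROM ORD
   WHERE CUST_NO = :WK-CNO;
...
OPEN C2;
-- returned from the temporary result table only:
FETCH INSENSITIVE NEXT C2 INTO :ORD-NO, :AMOUNT;
-- the row is refreshed from the base table before being returned:
FETCH SENSITIVE NEXT C2 INTO :ORD-NO, :AMOUNT;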

In the future, SENSITIVE DYNAMIC scrollable cursors will not use a DTT, but will process the
base table directly.

DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
SELECT CUST_NO, CUST_NAME, CRED_LIM
FROM CUST
WHERE TOWN = 'LONDON';
...
OPEN C1;
...
FETCH C1 INTO :WK-CNO, :WK-CNM, :WK-CRED;

At OPEN, the rows are copied from the base table (a DB2 table, accessed by many) into the
result table (a DB2 declared temporary table). The application then scrolls over the result
table, which holds a fixed number of rows, is accessed exclusively by the agent, and goes
away at CLOSE CURSOR.
Figure 3-45 Open scrollable cursor

Once the scrollable cursor is open, you can use the various positioning options of the FETCH
statement to retrieve rows. The effect of the various FETCH options are summarized in
Figure 3-46. A full description of the positioning options can be found in the DB2 UDB Server
for OS/390 and z/OS Version 7 Presentation Guide, SG24-6121.

FETCH ...BEFORE...   is equivalent to  FETCH ...ABSOLUTE 0...
FETCH ...FIRST...    is equivalent to  FETCH ...ABSOLUTE 1...
FETCH ...PRIOR...    is equivalent to  FETCH ...RELATIVE -1...
FETCH ...CURRENT...  is equivalent to  FETCH ...RELATIVE 0...   (current cursor position)
FETCH ...NEXT...     is equivalent to  FETCH ...RELATIVE 1...
FETCH ...LAST...     is equivalent to  FETCH ...ABSOLUTE -1...
FETCH ...AFTER...    positions after the last row of the result set
FETCH ...ABSOLUTE n... and FETCH ...RELATIVE n... (for example, ABSOLUTE 4, RELATIVE -3,
RELATIVE 3) position relative to the start of the result set and to the current cursor
position respectively.
Figure 3-46 Fetch positioning options
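
For example, reusing the cursor C1 from Figure 3-45 (the position host variable :WK-POS is
hypothetical), an application could jump directly to a given row number and then step back:
-- :WK-POS holds the row number to position on (an integer host variable)
FETCH ABSOLUTE :WK-POS C1 INTO :WK-CNO, :WK-CNM, :WK-CRED;
-- then back up one row
FETCH PRIOR C1 INTO :WK-CNO, :WK-CNM, :WK-CRED;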

3.7.2 Performance
Some limited tests were performed using the ORD table described previously in Figure 3-14,
“Table and index definitions for correlated subquery examples” on page 39. The table
contained 160 rows, two rows per page (MAXROWS 2 was specified) so that 80 pages
contained rows. All tests were performed using V7 GA code and COBOL application
programs. These figures are not from measurements in a fully controlled environment (DB2
was dedicated, not the hardware), but they are indicative of relative cost.

Non-scrollable versus scrollable cursor


First, a normal cursor and an INSENSITIVE SCROLL cursor were used as defined in
Figure 3-47.

Non-scrollable cursor:

   EXEC SQL DECLARE C1 CURSOR FOR
        SELECT ORD_NO,
               VALUE(CUST_NO,0),
               ORD_DATE,
               VALUE(AMOUNT,0)
        FROM ORD
        ORDER BY ORD_NO
   END-EXEC.

Scrollable cursor:

   EXEC SQL DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
        SELECT ORD_NO,
               VALUE(CUST_NO,0),
               ORD_DATE,
               VALUE(AMOUNT,0)
        FROM ORD
        ORDER BY ORD_NO
   END-EXEC.

Figure 3-47 Cursor definitions

The access path chosen for both cursors was a table space scan followed by a sort for the
ORDER BY. The table space containing the ORD table was assigned to BP1, the 4-KB work
table spaces in DSNDB07 were assigned to BP7, and the TEMP table spaces in the TEMP
database were assigned to BP8. All packages were bound with ISOLATION(CS).

A test was run using each cursor in which the cursor was opened and all rows were fetched in
a forward direction. The getpage and update counts for each bufferpool as shown by the
accounting trace are shown in Table 3-16.
Table 3-16 Scroll versus normal cursor

Cursor type BP1 getpage BP7 getpage BP7 update BP8 getpage BP8 update

Normal 83 8 6 0 0

Scroll 83 8 6 7 165

The tests were run when the ORD table was entirely resident in the buffer pool, and therefore
there was no I/O to the table during any of the tests. The buffer pool for the DSNDB07 table
spaces was also large enough to avoid any I/O. In the normal cursor case, there was no read
or write I/O from any buffer pool. The buffer pool for the TEMP database was also large
enough to avoid read I/O, and in the scroll cursor test, there were just two synchronous write
I/Os and no read I/Os in the TEMP buffer pool. The BP8 getpages and updates reflect the
activity on the temporary tables.

The Class 2 and Class 3 times (in seconds) are shown in Table 3-17. These figures are not
from measurements in a fully controlled environment (DB2 was dedicated, not the hardware)
but they are indicative of relative cost.
Table 3-17 Accounting Class 2 and Class 3 times

Cursor type C2 elapsed C2 CPU C3 waits

Normal 0.011378 0.010248 0.000000

Scroll 0.027238 0.013660 0.012494

The Class 3 wait times shown for the scroll cursor were made up of two waits for write I/O
and one service task wait at commit. The commit processing is necessary because undo log
records are created for DTTs to support rollback to a commit point or a savepoint.

The locking impact is shown in Table 3-18. With the non-scroll cursor, there were these locks:
an S-Lock on the SKPT, an IS-Lock on the table space containing the ORD table, an S-Lock
on the work table space used by the sort, and an S-Lock on the DBD for DSNDB07.
Table 3-18 Locking with scroll cursors

Cursor type Lock requests

Normal 4

Scroll 10

With the scroll cursor, there were six additional lock requests. These are DBD locks, pageset
locks and mass delete locks on the TEMP database and its table spaces. There had been no
recent updates to the ORD table, and so lock avoidance occurred on all data pages in both
the normal and scroll cursor cases.

Scrollable cursor versus Declared Temporary Table
In order to compare the scrollable cursor to an equivalent capability in DB2 V6, the scrollable
cursor test described above was compared to using a Declared Temporary Table (DTT) in the
program and inserting rows by SQL before retrieving them. This allows the V6 result set to be
insensitive to later updates, as is the case with an insensitive scrollable cursor. The SQL
statements to declare and populate the DTT are shown in Figure 3-48. The buffer pool activity
is compared in Table 3-19 and the Accounting Class 2 and Class 3 times (in seconds) in
Table 3-20. The locking differences are shown in Table 3-21. Again, ISOLATION(CS) was
used for all packages.

Scrollable cursor:

   EXEC SQL DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
        SELECT ORD_NO,
               VALUE(CUST_NO,0),
               ORD_DATE,
               VALUE(AMOUNT,0)
        FROM ORD
        ORDER BY ORD_NO
   END-EXEC.

Equivalent using a DTT:

   EXEC SQL DECLARE C1 CURSOR FOR
        SELECT ORD_NO,
               VALUE(CUST_NO,0),
               ORD_DATE,
               VALUE(AMOUNT,0)
        FROM SESSION.ORD_TEMP
        ORDER BY ORD_NO
   END-EXEC.
   ...
   EXEC SQL DECLARE GLOBAL TEMPORARY TABLE
        SESSION.ORD_TEMP
        LIKE ORD
        ON COMMIT DELETE ROWS
   END-EXEC.
   ...
   EXEC SQL INSERT INTO SESSION.ORD_TEMP
        SELECT *
        FROM ORD
   END-EXEC.

Figure 3-48 Using a DTT instead of a scrollable cursor

Table 3-19 Scrollable cursor versus DTT bufferpool data

Test BP0 getpage BP1 getpage BP7 getpage BP7 update BP8 getpage BP8 update

Scroll 0 83 8 6 7 165

DTT 54 83 8 6 295 415

Table 3-20 Scrollable cursor versus DTT Class 2 and 3 times

Test C2 elapsed C2 CPU C3 waits

Scroll 0.027238 0.013660 0.012494

DTT 0.196631 0.034427 0.160060

Table 3-21 Scrollable cursor versus DTT locking comparison

Test Lock requests

Scroll 10

DTT 63

It can be seen that, by any measure, the scrollable cursor performs better than the DTT
equivalent code. The increased Class 3 wait times for the DTT tests were attributable mainly
to an increase in the number of synchronous write I/Os for BP8 (from two to eleven) and the
resulting increase in database I/O wait time and service task wait time. The extra costs of the
DTT solution are in two areas: catalog accesses (BP0 getpages), and extra processing in the
DTT itself (BP8 getpages and updates).

Repositioning: close/reopen versus scrollable cursor


Another test was performed to measure performance when repositioning. For this test the
ORD_PART table was used, which contained 800 rows in 400 data pages in four partitions.
The batch test program read forward through the table in ascending ORD_NO order. On five
occasions, it backed up 10 rows, and then resumed reading forwards again. The idea was to
simulate paging backwards in an on-line browser transaction. Cursor Stability was used in
both cases. The first version of the program used Version 6 techniques to back up. It closed
the cursor and reopened it with a new starting value. The cursor and its Explain output are
shown in Figure 3-49.

EXEC SQL DECLARE C1 CURSOR FOR


SELECT ORD_NO,
VALUE(CUST_NO,0),
ORD_DATE,
VALUE(AMOUNT,0)
FROM PAOLOR7.ORD_PART
WHERE ORD_NO >= :START-ORD
ORDER BY ORD_NO
END-EXEC.

Explain
QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT QBT
1 1 0 ORD_PART I 1 XOR2NO L N 0 T NNNN NNNN SELECT
1 2 3 0 N 0 -- NNNN NNYN SELECT

XOR2NO is an index on the ORD_NO column with CLUSTERRATIOF = 69.0%

Figure 3-49 Repositioning without scrollable cursor

The second version of the program used an insensitive scrollable cursor and repositioned
using FETCH RELATIVE as shown in Figure 3-50.

EXEC SQL DECLARE C1 INSENSITIVE SCROLL
CURSOR FOR
SELECT ORD_NO,
VALUE(CUST_NO,0),
ORD_DATE,
VALUE(AMOUNT,0)
FROM PAOLOR7.ORD_PART
ORDER BY ORD_NO
END-EXEC.
...
Repositioning:
EXEC SQL FETCH RELATIVE -11 C1
INTO :ORD-NO, :CUST-NO,
:ORD-DATE :DATE-I, :AMOUNT
END-EXEC

Explain
QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT QBT
1 1 0 ORD_PART R 0 S N 0 T NNNN NNNN SELECT
1 2 3 0 N 0 -- NNNN NNYN SELECT

Figure 3-50 Repositioning with scrollable cursor

The figures from the two tests are shown in Table 3-22 and Table 3-23. The index XOR2NO
was in BP2. The Class 2 and 3 times are in seconds.

Table 3-22 Buffer pool activity for repositioning tests

Test BP1 getpage BP2 getpage BP7 getpage BP7 update BP8 getpage BP8 update

Close/open 1124 15 49 50 0 0

Scroll relative 404 0 18 16 27 819

Table 3-23 Class 2 and Class 3 times for repositioning test

Test C2 elapsed C2 CPU C3 waits

Close/open 0.152763 0.063726 0.000006

Scroll relative 0.096643 0.053857 0.033327

Delta -37% -15%

It can be seen that in our test the scrollable cursor performed significantly better than the
close and re-open approach. The performance benefits of a scrollable cursor over
repositioning with a non-scrollable cursor are likely to increase with the size of the cursor
result set and with the number of repositioning operations performed. The benefits will be
most apparent where a DB2 sort is invoked for the close and re-open.

Insensitive versus sensitive cursor
A third test was run to compare an insensitive to a sensitive static cursor. In each case the
cursor was opened and all rows were fetched in a forward direction. The getpage and update
counts for each bufferpool from the accounting trace are shown in Table 3-24.
Table 3-24 Buffer pool activity for insensitive versus sensitive cursor

Cursor type                               BP1 getpage  BP7 getpage  BP7 update  BP8 getpage  BP8 update

Insensitive cursor                        83           8            6           7            165

Sensitive cursor and insensitive fetch    83           8            6           9            167

Sensitive cursor and sensitive fetch      243          8            6           9            167

The increase in BP1 getpages when a FETCH SENSITIVE is used is due to the need to
re-reference the row in the base table for each fetch in order to pick up any updates. There
are 160 rows in the result set and therefore the test programs issued 160 FETCH statements
to read them, resulting in 160 extra getpages. The number of lock requests in all three cases
was 10, as full lock avoidance took place both on the initial access to the base table and on
the re-references for the sensitive fetch. This extra work was reflected in the Class 2 and 3
times as shown in Table 3-25.
Table 3-25 Class 2 and 3 times for insensitive versus sensitive cursor

Cursor type                               C2 elapsed  C2 CPU    C3 waits

Insensitive cursor                        0.027238    0.013660  0.012494

Sensitive cursor and insensitive fetch    0.023866    0.012189  0.010623

Sensitive cursor and sensitive fetch      0.026508    0.014327  0.011523

3.7.3 Conclusions
In general, and bearing in mind that we are drawing conclusions from small differences, it
costs more to use a scrollable cursor than a non-scrollable cursor. It also costs more to use a
sensitive cursor than an insensitive cursor, at least if you use FETCH SENSITIVE. The cost
overheads are very reasonable for the additional functionality.

3.7.4 Recommendations
There are costs associated with using scrollable cursors, so only use them when you need
the extra functionality. You might choose to use an insensitive scrollable cursor for any of the
following reasons:
- You need to position in the result set other than to the next row in a forward direction.
- You wish to freeze the result set so that it is not changed by any updates made outside the
  cursor.

The second objective can be met by a non-scrollable cursor if the DB2 access path for a
cursor invokes the DB2 Sort, for example, to satisfy an ORDER BY. The FETCHes then
retrieve the rows from the Logical Work File (in a work table space in DSNDB07). However,
the access path can change to one that does not include a sort if a new index is created or if
another access path is chosen on rebind. An insensitive scrollable cursor always guarantees
that the result set is frozen at cursor open time.

Only use a sensitive scrollable cursor if you really need to pick up updates and deletes made
outside the cursor. As we have seen, there is a significant cost in resynchronizing with the
base table at each FETCH.

3.8 Fetch first n rows


This is primarily a network computing enhancement. See 7.1, “FETCH FIRST n
ROWS” on page 152 for a full description. In this section we describe a special case in which
the use of the FETCH FIRST n ROWS ONLY clause can improve performance for local SQL.

3.8.1 Description
There are many possible solutions to the problem of existence checking (checking whether
any row satisfies a specified condition). For dynamic SQL, when no supporting programming
language, such as COBOL, is available, one common approach is to use a SELECT
statement like that shown in Figure 3-51. The problem with this SQL is that it may return more
than one row and we only need one row to tell us whether or not there are any rows that meet
the criteria AMOUNT > 1000. Handling the possible existence of multiple rows in the result
set creates unnecessary performance overhead.

Existence checking: Are there any ORD rows with AMOUNT > 1000?

SELECT 1
FROM ORD
WHERE AMOUNT > 1000;
Figure 3-51 Existence checking SQL

If we coded this existence check as static SQL in a COBOL program, we would have to
DECLARE a cursor, OPEN the cursor, and FETCH the first row. If we code it as a singleton
SELECT, then we will get an SQLCODE -811 at execution time if more than one row satisfies
the predicate.

In DB2 V7 you can use FETCH FIRST 1 ROW ONLY both to avoid the performance overhead
for dynamic SQL and to avoid the -811 SQLCODE for static SQL. Figure 3-52 shows the
effect of adding FETCH FIRST 1 ROW ONLY on the access path. Both statements were
Explained under V7. Both use a tablespace scan to access the table, but when the FETCH
FIRST 1 ROW ONLY is added, DB2 no longer chooses sequential prefetch.

See also 7.1.3, “Singleton select” on page 155 for more details on the comparison.

SELECT 1
FROM ORD
WHERE AMOUNT > 1000;

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC


1 1 0 ORD R 0 S N 0 T NNNN NNNN

SELECT 1
FROM ORD
WHERE AMOUNT > 1000
FETCH FIRST 1 ROW ONLY;

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC


1 1 0 ORD R 0 N 0 T NNNN NNNN

Figure 3-52 FETCH FIRST effect on access path

If we code a singleton SELECT as follows:


SELECT 1
INTO :WK-NUM
FROM ORD
WHERE AMOUNT > 1000
FETCH FIRST 1 ROW ONLY

Then the statement will execute without error, returning a single row if there are any matching
rows and returning SQLCODE +100 if there are no matching rows.

3.8.2 Performance
The SELECT statements shown in Figure 3-52 were executed through SPUFI, and an SQL
trace was used to see how many rows were processed by the tablespace scan in each case.
Figure 3-53 shows extracts from the SQL traces for both statements.

Without the FETCH FIRST, 32 rows were returned and 33 FETCHes executed, 32 to return
the rows and the last receiving SQLCODE +100. With FETCH FIRST 1 ROW ONLY added to
the SELECT, only one row was processed and two FETCHes were executed, the second
receiving SQLCODE +100. In these tests the getpage count for the tablespace was reduced
from 83 to 2 by adding the FETCH FIRST 1 ROW ONLY clause.

SQL Trace without FETCH FIRST
OPEN 16:31:24.65 0.000012 0.000012 STMT# 190 CURSOR: C1 ISO(CS) SQLSTATE: 00000 SQLCODE: 0
REOPTIMIZED(NO) KEEP UPDATE LOCKS: NO

FETCH 16:31:24.65 0.000138 0.000136 STMT# 183 CURSOR: C1 SQLSTATE: 00000 SQLCODE: 0
--- WORKLOAD HILITE ----------------------------------------------------------------------------------------------------------
SCANS : 1 RECS/SORT: N/P I/O REQS: N/P SUSPENDS : N/P EXITS : N/P AMS : N/P
ROWSPROC: 160 WORK/SORT: N/P AET/I/O : N/P AET/SUSP : N/P AET/EXIT : N/P AET/AMS : N/P
PAGESCAN: 83 PASS/SORT: N/P DATACAPT: N/P RIDS UNUSED: N/P CHECKCON : N/P DEGREE REDUCTION : N/P
LOB_PAGSCAN: 0 LOB_UPD_PAGE : 0
--- SCAN ACTIVITY ------------------------------------------------------------------------------------------------------------
------ROWS------ --QUALIFIED AT-- ----------ROWS----------- --MASS- --PAGES- ---------RI--------
DATABASE PAGESET SCANS PROCESS EXAMINE STAGE 1 STAGE 2 INSERTS UPDATES DELETES DELETES SCANNED SCANS DELETES
MEMBER TYPE
DB246129 TS612902 1 160 160 32 32 0 0 0 1 83 0 0
N/P SEQD

FETCH 16:31:24.66 0.000102 0.000100 STMT# 183 CURSOR: C1 SQLSTATE: 00000 SQLCODE: 0

FETCH 16:31:24.66 0.000056 0.000054 STMT# 183 CURSOR: C1 SQLSTATE: 00000 SQLCODE: 0

...

SQL Trace with FETCH FIRST


OPEN 16:39:57.53 0.000012 0.000012 STMT# 190 CURSOR: C1 ISO(CS) SQLSTATE: 00000 SQLCODE: 0
REOPTIMIZED(NO) KEEP UPDATE LOCKS: NO

FETCH 16:39:57.54 0.000139 0.000138 STMT# 183 CURSOR: C1 SQLSTATE: 00000 SQLCODE: 0
--- WORKLOAD HILITE ----------------------------------------------------------------------------------------------------------
SCANS : 1 RECS/SORT: N/P I/O REQS: N/P SUSPENDS : N/P EXITS : N/P AMS : N/P
ROWSPROC: 4 WORK/SORT: N/P AET/I/O : N/P AET/SUSP : N/P AET/EXIT : N/P AET/AMS : N/P
PAGESCAN: 2 PASS/SORT: N/P DATACAPT: N/P RIDS UNUSED: N/P CHECKCON : N/P DEGREE REDUCTION : N/P
LOB_PAGSCAN: 0 LOB_UPD_PAGE : 0
--- SCAN ACTIVITY ------------------------------------------------------------------------------------------------------------
------ROWS------ --QUALIFIED AT-- ----------ROWS----------- --MASS- --PAGES- ---------RI--------
DATABASE PAGESET SCANS PROCESS EXAMINE STAGE 1 STAGE 2 INSERTS UPDATES DELETES DELETES SCANNED SCANS DELETES
MEMBER TYPE
DB246129 TS612902 1 4 4 1 1 0 0 0 1 2 0 0
N/P SEQD

FETCH 16:39:57.54 0.000045 0.000045 STMT# 183 CURSOR: C1 SQLSTATE: 02000 SQLCODE: 100

CLOSE 16:39:57.54 0.000012 0.000012 STMT# 197 CURSOR: C1 SQLSTATE: 00000 SQLCODE: 0

Figure 3-53 FETCH FIRST SQL traces

3.8.3 Conclusions
In a non-client/server environment, the use of FETCH FIRST n ROWS is probably limited.
However, it can be used as an efficient existence check by avoiding the limitation of a
singleton select and reducing the overhead of handling cursors with multiple rows.

3.8.4 Recommendations
Use for existence checking where appropriate to improve performance or to simplify code in
an application program.

3.9 MIN/MAX enhancement


DB2 V7 provides more efficient use of indexes for evaluating MIN and MAX functions.

3.9.1 Description
When a SELECT MIN(col) is coded and there is an ascending index on col, then DB2 can use
a one-fetch index access (ACCESSTYPE = I1) to retrieve the MIN value. Two examples of
this are shown in Figure 3-54.

The index XCUSTNO is an ascending index on the column CUST_NO. On the other hand, if a
SELECT MAX(col) is coded in DB2 V6, and if there is no descending index on col, only an
ascending index, then DB2 performs a non-matching index scan (ACCESSTYPE = I,
MATCHCOLS = 0) to locate the MAX value.

SELECT MIN(CUST_NO)
FROM CUST;

Version 6 and Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I1 0 XCUSTNO Y NNNN NNNN SELECT

XCUSTNO is an ascending index on the column CUST_NO

SELECT MIN(CUST_NO)
FROM CUST
WHERE CUST_NO > 10;

Version 6 and Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I1 1 XCUSTNO Y NNNN NNNN SELECT

Figure 3-54 Existing MIN function access

In V7, DB2 can use a new facility to read an index backwards. It can now use a one-fetch
index access on an ascending index to evaluate a MAX function, as shown in Figure 3-55.
Similarly, DB2 can use a one-fetch index access on a descending index to evaluate a MIN
function.

SELECT MAX(CUST_NO)
FROM CUST;

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I 0 XCUSTNO Y NNNN NNNN SELECT

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I1 0 XCUSTNO Y NNNN NNNN SELECT

SELECT MAX(CUST_NO)
FROM CUST
WHERE CUST_NO < 1000;

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I 1 XCUSTNO Y NNNN NNNN SELECT

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I1 1 XCUSTNO Y NNNN NNNN SELECT

Figure 3-55 MAX function access

This ability to read an index backwards is limited, for the present, to this special case of
evaluating a MAX (or MIN) function. A SELECT statement with an ORDER BY... DESC will
not use an ascending index to avoid the sort. For example, the following statement results in a
table space scan followed by a sort when only an ascending index on CUST_NO is defined
on the CUST table:
SELECT * FROM CUST ORDER BY CUST_NO DESC;
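
If you do need to avoid the sort for a descending ORDER BY in V7, a descending index is
still required; a sketch (the index name is hypothetical, and the Optimizer still makes a
cost-based choice of access path):
CREATE INDEX XCUSTNOD
   ON CUST (CUST_NO DESC);

SELECT *
FROM CUST
ORDER BY CUST_NO DESC;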

3.9.2 Performance
The first MAX function shown in Figure 3-55 was tested. The Explain output was as shown.
The tests were run in an environment where the DB2 subsystem was dedicated to the tests,
but the OS/390 LPAR was not dedicated. As a result, the elapsed and CPU times produced
by the tests are only indicative of relative cost. The ascending index XCUSTNO was
assigned to BP2 and was very small: 2 leaf pages and one root page. The results of the test
on V6 and V7 are shown in Table 3-26. The Class 2 times are in seconds.
Table 3-26 MAX test results

Class 2 elapsed Class 2 CPU BP2 getpages

V6 0.037438 0.010332 3

V7 0.006577 0.001387 2

Delta -82.4% -86.6% -33.3%

The test was run with all index pages in the buffer pool (BP2) and there was no Class 3 wait
time. The test probably represents the least performance benefit in percentage terms that is
likely to be achieved by the improved access path. With a larger index the performance
improvements should be greater.

3.9.3 Conclusions
It is no longer necessary to create a descending index to support a MAX function where an
ascending index already exists. It may be possible to drop some indexes that were created
explicitly for the purpose of supporting the MAX (or MIN) function.

3.9.4 Recommendations
Consider rebinding packages containing SELECT MAX(col) statements if there is an
ascending index on col, or SELECT MIN(col) statements if there is a descending index on
col. They may benefit from the improved access path.

Review whether any indexes created to support MAX or MIN processing can be dropped.

3.10 Fewer sorts with ORDER BY


In V7, DB2 can make better use of indexes to provide ordering of result sets. This can result
in fewer sorts for queries that contain an ORDER BY clause.

3.10.1 Description
When the WHERE clause contains an equal predicate on a column, and that predicate is a
Boolean term (the entire WHERE clause is false if the predicate is false), then that column
can be ignored in the ORDER BY clause because it will have no effect on the ordering of the
result set. When DB2 evaluates the capability of an index to provide ordering, then that same
column in the index can also be ignored.

An example is shown in Figure 3-56. We have created an extra index on the CUST table, with
key columns CUST_NAME, TOWN, and CRED_LIM. Because of the predicate TOWN =
value in each of the queries shown, the column TOWN can be ignored in both the ORDER BY
clause and the index definition. DB2 treats the query as though it specified ORDER BY
CUST_NAME, CRED_LIM. It also treats the index XCNTC as though it were defined on the
columns (CUST_NAME, CRED_LIM). It therefore recognizes that the index can satisfy the
ORDER BY clause and no sort is required. It is, of course, necessary that the remaining
columns in the ORDER BY are in the same order as the remaining columns in the index
definition.

CREATE INDEX XCNTC


ON CUST (CUST_NAME, TOWN, CRED_LIM)
USING STOGROUP SG246129 PRIQTY 120 SECQTY 12
BUFFERPOOL BP2
;

SELECT *
FROM CUST
WHERE TOWN = 'MANCHESTER'
ORDER BY TOWN, CUST_NAME, CRED_LIM
;

or

SELECT *
FROM CUST
WHERE TOWN = :WK-TOWN
ORDER BY TOWN, CUST_NAME, CRED_LIM
;

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I 1 XCTOWN L N NNNN NNNN SELECT
1 2 3 0 N NNNN NNYN SELECT

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I 0 XCNTC N NNNN NNNN SELECT

Figure 3-56 ORDER BY sort avoidance

DB2 V6 uses a matching index scan on the index XCTOWN (an index on the column TOWN)
and then sorts the result to get the correct sequence for the ORDER BY. DB2 V7 uses a
non-matching index scan on the index XCNTC to both evaluate the WHERE clause (by index
screening) and to ensure the correct sequence for the ORDER BY.

It does not matter where in the ORDER BY column list or the index column order the
predicate column comes, DB2 can still avoid the sort. In our example, all the following
ORDER BY clauses could also be satisfied by the index XCNTC:
- ORDER BY CUST_NAME, TOWN, CRED_LIM
- ORDER BY CUST_NAME, CRED_LIM, TOWN
- ORDER BY CUST_NAME, CRED_LIM

The only constraint is that CUST_NAME must come before CRED_LIM in the ORDER BY,
just as it does in the index.
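
Conversely, a sketch of an ORDER BY that the index XCNTC cannot satisfy, because
CRED_LIM comes before CUST_NAME; here a sort is still required:
SELECT *
FROM CUST
WHERE TOWN = 'MANCHESTER'
ORDER BY CRED_LIM, CUST_NAME;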

Figure 3-57 shows another example. This time it is a join of two tables and once again a sort
for the ORDER BY is avoided. The CUST column TOWN is in the index XCNTC but not in the
ORDER BY. It can be ignored because of the equal predicate on TOWN. The ORD column
AMOUNT is in the ORDER BY, but not in the index. It can also be ignored because of the
equal predicate on AMOUNT. Therefore, in V7, the CUST index XCNTC is used to establish
the correct order for the result rows via a non-matching index scan. The ORD rows are then
joined using a nested loop join.

SELECT *
FROM CUST C INNER JOIN ORD O
ON C.CUST_NO = O.CUST_NO
WHERE O.AMOUNT = 1000
AND C.TOWN = 'MANCHESTER'
ORDER BY AMOUNT, CUST_NAME, CRED_LIM;

Version 6 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 ORD R 0 S N NNNN NNNN SELECT
1 2 4 CUST I 1 XCUSTNO L N NNNN NYNN SELECT
1 3 3 0 N NNNN NNYN SELECT

Version 7 Explain:

QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC JT QBT


1 1 0 CUST I 0 XCNTC N NNNN NNNN SELECT
1 2 1 ORD I 1 XOCNO N NNNN NNNN SELECT

Figure 3-57 Sort avoidance in a join

DB2 will not always eliminate sorts for ORDER BY in the situations discussed above. It will
only eliminate a sort if that produces a lower cost access path. A sort is still required if the
ORDER BY applies to the result of a UNION.

3.10.2 Performance
These figures are not from measurements in a fully controlled environment (DB2 was
dedicated, not the hardware) but they are indicative of relative cost. The query shown in
Figure 3-56 on page 75 was run against a CUST table containing 10 rows. The results are
shown in Table 3-27. The extra getpages, updates and lock requests shown for V6 are due to
the use of a work table space in DSNDB07 by the DB2 sort.
Table 3-27 ORDER BY test results

              C2 elapsed  C2 CPU    Total getpages  Total BP updates  Lock requests

V6            0.018109    0.017487  9               4                 4

V7            0.001658    0.001330  3               0                 2

Delta         -92.4%      -90.8%

3.10.3 Conclusions
Avoiding a sort is always good news. Elapsed and CPU times will be reduced for queries that
benefit from this enhancement.

3.10.4 Recommendations
Consider rebinding packages that might benefit from this enhancement. They will contain
SQL with ORDER BY clauses that could be satisfied by composite indexes in the way
described above.

3.11 Joining on columns of different data types


DB2 V6 (and V5 with APARs PQ22046 and PQ24933) allowed a join predicate to be Stage 1
and potentially Indexable even if there was a mismatch on data type or length. This
enhancement was limited to string data types only.

DB2 V7 allows a join predicate to be Stage 1 and potentially Indexable when there is a data
type or length mismatch for numeric columns. In order for this to be true you must, in your
SQL, cast one column to the data type of the other.

3.11.1 Description
In Figure 3-58 we look at some examples. The CUST table is the same definition as we have
seen before. We have created a copy of the ORD table, called ORD_DEC, in which the only
difference is the definition of the CUST_NO column which is DECIMAL(15,0) instead of
INTEGER. XCUSTNO is an index on CUST(CUST_NO). Index XOCND is an index on
ORD_DEC(CUST_NO).

Table CUST:
   CUST_NO   INTEGER NOT NULL,
   CUST_NAME CHAR(20),
   TOWN      CHAR(20),
   CRED_LIM  DEC(7,2),
   PRIMARY KEY (CUST_NO)

Table ORD_DEC:
   CREATE TABLE ORD_DEC
     (ORD_NO   INTEGER NOT NULL,
      CUST_NO  DECIMAL(15,0),
      ORD_DATE DATE,
      AMOUNT   DECIMAL(7,2),
      PRIMARY KEY (ORD_NO))

SELECT *
FROM CUST C INNER JOIN ORD_DEC O
  ON C.CUST_NO = O.CUST_NO

SELECT *
FROM CUST C INNER JOIN ORD_DEC O
  ON C.CUST_NO = INTEGER(O.CUST_NO)

SELECT *
FROM CUST C INNER JOIN ORD_DEC O
  ON DECIMAL(C.CUST_NO,15,0) = O.CUST_NO

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT CFE QBT


1 1 0 CUST R 0 S N 0 T NNNN NNNN SELECT
1 2 1 ORD_DEC R 0 S N 0 T NNNN NNNN SELECT

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT CFE QBT


1 1 0 ORD_DEC R 0 S N 0 T NNNN NNNN SELECT
1 2 1 CUST I 1 XCUSTNO N 0 T NNNN NNNN SELECT

QB PN MT TABLE AT MC INDEX PF IXO PQ TT SORTN SORTC JT CFE QBT


1 1 0 CUST R 0 S N 0 T NNNN NNNN SELECT
1 2 1 ORD_DEC I 1 XOCND L N 0 T NNNN NNNN SELECT

Figure 3-58 Join column mismatch Explains

Let us look at Figure 3-58 and see what it tells us. In all three examples a nested loop join is
performed for the inner join of the CUST and ORD_DEC tables.

The first example does not use a cast function and therefore the data type mismatch makes
the join predicate Stage 2. DB2 uses a tablespace scan for the inner table (ORD_DEC)
access despite the existence of an index on the ORD_DEC join column.

In the second example we have cast the ORD_DEC column CUST_NO in the join predicate
to match the data type (INTEGER) of the CUST table CUST_NO column. DB2 has changed
the order of table access. Now the CUST table is the inner table of the join and the index
XCUSTNO on CUST(CUST_NO) is used to access the CUST table.

In the third example we have cast the CUST column CUST_NO in the join predicate to match
the data type (DECIMAL(15,0)) of the ORD_DEC table CUST_NO column. DB2 has made
ORD_DEC the inner table and used the index XOCND on ORD_DEC(CUST_NO) to access
the ORD_DEC table.

We can see from these examples that it makes a difference which side of the join predicate
we apply the cast function to. When we cast the ORD_DEC column to the CUST column data
type DB2 can use the index on the CUST column but not the index on the ORD_DEC column.
When we cast the CUST column to the ORD_DEC column data type DB2 can use the index
on the ORD_DEC column but not the index on the CUST column.

If we use the alternative syntax to cast a column to the data type of the other column, we still
see the same effect. So, in the second example we get the same Explain output as shown in
Figure 3-58 if we specify the join predicate as:
ON C.CUST_NO = CAST(O.CUST_NO AS INTEGER)

In the third example we get the same Explain output if we specify the join predicate as:
ON CAST(C.CUST_NO AS DECIMAL(15,0)) = O.CUST_NO

If we cast to a decimal data type, we can make the precision different from the matching
column without losing the Stage 1 benefit, but if we make the scale different, then the
predicate remains Stage 2. In our example, the CUST_NO column in ORD_DEC is defined as
DECIMAL(15,0). The following join predicate, with a different precision in the cast, will use the
index on ORD_DEC(CUST_NO):
ON DECIMAL(C.CUST_NO,12,0) = O.CUST_NO

However, the next example, with a different scale, uses a tablespace scan:
ON DECIMAL(C.CUST_NO,15,1) = O.CUST_NO

The same need to cast exists if the join columns are INTEGER and SMALLINT respectively.
If we create a table ORD_SMINT, the same as ORD except that the CUST_NO column is
defined as SMALLINT instead of INTEGER, then we see the following, if O is the correlation
name of the ORD_SMINT table.
This predicate is Stage 2:
ON C.CUST_NO = O.CUST_NO
This predicate is Stage 1 and Indexable:
ON C.CUST_NO = INTEGER(O.CUST_NO)
This predicate is Stage 1 and Indexable:
ON SMALLINT(C.CUST_NO) = O.CUST_NO
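
As with the DECIMAL examples, the CAST notation should behave the same way as the
function notation; a sketch using the ORD_SMINT table described above:
SELECT *
FROM CUST C INNER JOIN ORD_SMINT O
   ON C.CUST_NO = CAST(O.CUST_NO AS INTEGER);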

3.11.2 Performance
The three joins shown in Figure 3-58 on page 77 were tested. These figures are not from
measurements in a fully controlled environment (DB2 was dedicated, not the hardware) but
they are indicative of relative cost. The results are shown in Table 3-28. Both of the queries
which cast one join column to the data type of the other performed much better than the
query without the cast, showing reductions of more than 90% in elapsed and CPU time.
Table 3-28 Join column mismatch test results
Test C2 elapsed C2 CPU BP1 getpages BP2 getpages

No cast 0.293461 0.249546 13,363 0

Cast Dec to Int 0.015480 0.014593 107 2

Cast Int to Dec 0.021187 0.018263 163 161

3.11.3 Conclusions
This enhancement solves most of the problems of joining on columns with different
declarations. It will be necessary to recode joins to include the casting function in order to get
the performance benefits.

It is important to choose the correct join column to cast to the data type of the other. You need
to know what indexes are available on each of the columns and, if both columns are indexed,
you need to decide which index will give the best performance in the join.

3.11.4 Recommendations
It will be worth identifying joins that can take advantage of this enhancement and recoding
them. It is more likely that joins with column mismatches will be found in Data Warehousing
systems and similar systems where data is gathered from diverse sources. It is less likely that
joins with mismatched columns will be found in systems where the database has been
designed from scratch to meet the requirements of specific applications.

3.12 VARCHAR index-only access


DB2 V6 (and V5 via APAR PQ10465) introduced the DSNZPARM RETVLCFK. This
parameter controls the ability of the Optimizer to use of Index-Only access when processing
VARCHAR columns that are part of an index. In V6 it is necessary to set RETVLCFK=YES in
order for it to be possible to get index-only access when a VARCHAR column that is an index
key column is referenced either in the select list or in the WHERE clause.

Let us use a variant of our CUST table, called CUST_VAR, in which the two CHAR columns
are defined as VARCHAR. Let us also create an index on those columns and one other:
CREATE TABLE CUST_VAR
(CUST_NO INTEGER NOT NULL,
CUST_NAME VARCHAR(20),
TOWN VARCHAR(20),
CRED_LIM DEC(7,2))

CREATE INDEX XVTNC


ON PAOLOR7.CUST_VAR (TOWN, CUST_NAME, CRED_LIM)
...

If we look at Figure 3-59 we can see the effect of the RETVLCFK parameter. In each of the
two SQL statements shown in the figure, a VARCHAR column from the index XVTNC is
specified in the WHERE clause and another VARCHAR column from the same index is
specified in the select list. In both cases, changing the parameter from NO to YES has
allowed the Optimizer to change from a matching index scan with data access, to an
index-only access. Because there is no data access, the List Prefetch also disappears from
the Explain output. The RETVLCFK parameter still works the same way in V7.

SELECT TOWN, CUST_NAME                  SELECT CUST_NAME
FROM CUST_VAR                           FROM CUST_VAR
WHERE TOWN = 'MANCHESTER';              WHERE TOWN = 'MANCHESTER';

V6 and V7 Explain with RETVLCFK=NO
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC
1 1 0 CUST_VAR I 1 XVTNC L N NNNN NNNN

V6 and V7 Explain with RETVLCFK=YES
QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC
1 1 0 CUST_VAR I 1 XVTNC Y NNNN NNNN

Figure 3-59 Effect of the RETVLCFK parameter

3.12.1 Description
The enhancement in V7 concerns the case when a VARCHAR index key column is specified
in the WHERE clause, but only non-VARCHAR columns from the index are specified in the
select list. We can see such a case in Figure 3-60. The VARCHAR column TOWN is specified
in the WHERE clause, as before, but the only column in the select list is the DECIMAL column
CRED_LIM, which is also part of the index XVTNC.

SELECT CRED_LIM
FROM CUST_VAR
WHERE TOWN = 'MANCHESTER';

V6 and V7 Explain with RETVLCFK=YES


QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC
1 1 0 CUST_VAR I 1 XVTNC L Y NNNN NNNN

V6 Explain with RETVLCFK=NO


QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC
1 1 0 CUST_VAR I 1 XVTNC L N NNNN NNNN

V7 Explain with RETVLCFK=NO


QB PN MT TABLE AT MC INDEX PF IXO SORTN SORTC
1 1 0 CUST_VAR I 1 XVTNC Y NNNN NNNN

Figure 3-60 Index-only access when VARCHAR in WHERE clause only

In DB2 V6, you could only get an index-only access in this case by specifying
RETVLCFK=YES. Many sites do not want to set the parameter to YES because it impacts
every application program that includes in its select list a VARCHAR column that is an index
key column. If an index-only access is chosen by DB2, then the VARCHAR values are
returned to the application program padded with blanks and with the column length set to the
maximum. When the VARCHAR is returned from the data row it is returned as a true variable
length value without padding. Not all programs can handle this access path dependent
variation in how data is returned from a SELECT.

In V7, you do not need to set RETVLCFK=YES to get index-only access in this special case.
As can be seen in the example, V7 provides index-only access for VARCHARs in the WHERE
clause but not in the select list regardless of the setting of the parameter.
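
By contrast, a sketch of a query where a VARCHAR index column does appear in the select
list; here index-only access still depends on RETVLCFK=YES, with the padded-value
behavior described above:
SELECT CUST_NAME, CRED_LIM
FROM CUST_VAR
WHERE TOWN = 'MANCHESTER';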

3.12.2 Performance
The SQL shown in Figure 3-60 on page 80 was tested in V6 and V7 with RETVLCFK=NO.
These figures are not from measurements in a fully controlled environment (DB2 was
dedicated, not the hardware) but they are indicative of relative cost. The tablespace was
allocated to BP1 and the index to BP2. The resulting getpage counts for the two tests are
shown in Table 3-29. In V7 we have avoided the 16 data getpages that were required in V6.
Table 3-29 VARCHAR index-only test results
Test BP1 getpages BP2 getpages

V6 16 2

V7 0 2

3.12.3 Conclusions
This enhancement can be of benefit for installations that have VARCHAR columns in indexes.

3.12.4 Recommendations
For those installations that avoid the use of VARCHAR columns in indexes, there is now
slightly less reason for doing so. If you have VARCHAR columns in indexes consider
rebinding those packages that may be able to exploit the index-only access.

3.13 Bind improvements


Improvements to the DB2 V7 Optimizer processing result in significantly reduced storage
requirements in the DBM1 address space and also reduced CPU use during the BIND
command execution when performing access path selection for an SQL statement that joins
more than 9 tables.

3.13.1 Description
The DB2 V7 Optimizer uses a new two-pass approach for analyzing possible access paths
whenever inner joins of more than 9 tables are processed without any outer joins. This
enhancement is also extended to outer joins by APAR PQ48306. The two-pass approach
consumes much less RDS subpool storage than V6 for these multi-table joins and also
reduces CPU cost. CPU cost is further reduced in V7 by using a more efficient search to
locate cost entries.

3.13.2 Performance
Measurements were first performed on inner joins without PQ48306. Tests of more than 15
tables were conducted by setting the hidden parameter &SPRMMXT=’20’. These showed
comparable storage use for V7 and V6 for joins of up to 9 tables. For larger numbers of tables,
simple queries showed a reduction of up to 20% in storage use and very complex queries
showed a reduction of 69% for a 15-table join and 73% for a 19-table join.

CPU reduction is only applicable for complex joins of 10 or more tables. The tests showed a
CPU reduction of 19% for a 15-table join and 70% for a 19-table join.

Another set of measurements, after applying the fix for PQ48306, with two different BI and
CRM workloads, has shown large virtual storage reductions for outer joins (up to 25 times) as
well as further reductions (up to 2 times) with the inner joins.

3.13.3 Conclusions
This enhancement and the follow-up APAR are of great interest for DB2 environments running
complex BI and CRM type applications. These sites tend to use dynamic SQL and perform
joins of large numbers of tables. For such sites, it is now possible to avoid failures on
PREPARE caused by insufficient storage and to achieve large performance improvements in
CPU time for inner and outer joins.

3.13.4 Recommendations
If you expect to benefit from this enhancement, make sure that the fix for APAR PQ48306 is applied to
extend the benefits to outer joins as well as inner joins.

3.14 Star join


A new way of processing multiple-table joins was added as an option to DB2 V6. This is
known as the star join performance enhancement because it is oriented to improve star join
performance. A star join consists of several (dimension) tables being joined to a single central
(fact) table using the table design pattern known as a star schema. The improvement also
applies to joins of tables using the snowflake schema design which involves one or more extra
levels of dimension tables around the first level of dimension tables. For a more detailed
description of star join see DB2 UDB Server for OS/390 Version 6 Technical Update,
SG24-6108.

In DB2 V7, the parameter that enables or disables this feature is changed from a hidden
to an externalized keyword. Also, the default value for the parameter is DISABLE, whereas it
was ENABLE in V6.

When you want to specify a value deviating from the default, you must manually add the keyword
STARJOIN to the invocation of the DSN6SPRM macro in the job DSNTIJUZ which assembles
and link edits the DSNZPARM subsystem parameter load module. The parameter cannot be
set through the install panels. Acceptable values are ENABLE, DISABLE or 1-32768. See
Figure 3-61 for a description.

Star join support for DB2 V6 was delivered by the fixes to APARs PQ28813 and PQ36206.

STAR JOIN
Acceptable values: DISABLE, ENABLE, (1, 2-32768)
Default          : DISABLE
DSNZPxxx         : DSN6SPRM STARJOIN

DISABLE = No star join
ENABLE  = Enable star join - DB2 will optimize for star join
1       = The fact table will be the largest table in the star join query.
          No fact/dimension ratio checking is done.
2-32768 = This is the star join fact table to largest dimension
          table ratio.

Figure 3-61 Star join example
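
For example, a hedged sketch of how the keyword might appear among the existing DSN6SPRM
keywords in DSNTIJUZ (the surrounding keywords, continuation character, and column positions
must follow your existing member; the value shown is only an illustration):

         STARJOIN=ENABLE,                                              X

After such a change, the DSNZPARM load module must be reassembled and link-edited with
DSNTIJUZ and then made active in the usual way.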

3.14.1 Recent star join enhancement


A star join enhancement has been introduced by the recent APAR PQ43846 for V6. The V7
forward fit is PQ47833 and its PTF will be available around the end of June, 2001. An
additional APAR will add a new ZPARM to change the new threshold to enable star join based
on the number of tables in a query block.

Star join enhancement in V6 and V7


The enhancement to the star join in DB2 V6 and V7 through APARs PQ43846 and PQ47833
respectively is intended:
򐂰 To improve the average performance of star joins.
򐂰 To minimize the possibility of performance degradation of queries involving star schema
when star join is enabled.

Star join was introduced in DB2 V6 in order to process queries qualified as star schema queries in the following way:
򐂰 Bind the queries using a greedy algorithm based on heuristics to generate the join
sequence
򐂰 Skip scanning the dimension tables, if possible, using the next key feedback feature

With this implementation, a star schema query that contains a large number of dimension
tables can be bound without SQL code -101 or -129. The execution time has also been
largely cut for some star schema queries. However, some queries could run faster using the
optimal access plan generated by the regular access path selection algorithm, and users have
had difficulty applying the star join method selectively to only certain queries.

The new enhancement is designed to compensate for this situation and consists of the following
features:
򐂰 Queries with star schema are optimized regardless of the number of tables in the query block.
򐂰 Star join is disabled for a query block having less than 10 tables (see the note below) even
if star join is enabled by the ZPARM. With this new threshold, potential performance
degradation can be prevented for relatively small OLTP type queries that are qualified as
the star schema.



򐂰 Multiple possible star join access paths are explored and their costs are evaluated. With
this feature, partial star join plans will be considered to maximize the efficiency of the star
join method.
For the same star join access paths, the costs of "non-star-join" plans are also evaluated.
When a "non-star-join" access plan is evaluated, if SET CURRENT DEGREE = 'ANY' is
specified, perturbations might be applied to the join sequence to exploit the parallelism.
The best access plan is selected based on the comparison of the costs.

Consequently, the optimal access plan for a query with star schema can be any of these:
򐂰 A full star join plan (all the dimension tables and the fact tables are "star joined").
򐂰 A partial star join plan (not all the dimension tables are "star joined"). The residual
dimension tables are joined after the fact table using the regular join methods. The join
order of the residual dimension tables is determined by heuristic rules.
򐂰 Not a star join plan at all. If Explain is run for the query, the join_type column of the
plan_table will not contain 'S' in such a case.

Users should note that the new implementation could generate different access plans,
depending on the value of the CURRENT DEGREE special register.
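
As a hedged illustration (the statement is not taken from the measurements), the special
register can be set before the query is prepared, for example:

SET CURRENT DEGREE = 'ANY';

With 'ANY', DB2 can consider parallelism for the "non-star-join" plans and, as described
above, may perturb the join sequence; with '1', the plans are costed without parallelism.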

For example, the following tables show some of the possible access plans generated for a
star schema query involving the fact table F and the dimension tables D1, D2, D3, D4 (this
example is for illustration purposes only; star join will actually not be enabled in this case
because the number of tables in the query block is less than 10):

Possible case 1: In Table 3-30, all dimension tables and the fact tables are "star joined". The
first column represents the tables in the join order; the second column is the join_type.
Table 3-30 Star join access plan: case 1
Table (join order)   JOIN_TYPE
D1                   ‘S’
D2                   ‘S’
D3                   ‘S’
D4                   ‘S’
F                    ‘S’

Possible case 2: In Table 3-31, some dimension tables and the fact tables are "star joined"
(a partial star join case). The rest of the dimension tables are joined after the fact table using
regular join methods (the order is determined by heuristic rules).
Table 3-31 Star join access plan: case 2
Table (join order)   JOIN_TYPE
D1                   ‘S’
D2                   ‘S’
F                    ‘’
D4                   ‘’
D3                   ‘’



Possible case 3: In Table 3-32, none of the tables are "star joined". This occurs when the
estimated cost of this plan is lower than the corresponding star join cost.
Table 3-32 Star join access plan: case 3
Table (join order)   JOIN_TYPE
D1                   ‘’
D2                   ‘’
F                    ‘’
D4                   ‘’
D3                   ‘’
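
For reference, a minimal sketch of a star schema query of this shape (table, column, and
predicate names are purely illustrative and not taken from the measurements):

SELECT D1.NAME, D2.NAME, SUM(F.SALES_AMT)
  FROM FACT F, DIM1 D1, DIM2 D2, DIM3 D3, DIM4 D4
 WHERE F.D1_KEY = D1.D1_KEY
   AND F.D2_KEY = D2.D2_KEY
   AND F.D3_KEY = D3.D3_KEY
   AND F.D4_KEY = D4.D4_KEY
   AND D3.REGION = 'WEST'
   AND D4.YEAR   = 2001
 GROUP BY D1.NAME, D2.NAME;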

Note: The star join activation is based on a threshold represented by the number of tables in
a query block.

A new ZPARM will be defined through an APAR so that the users can change the value (the
default is 10). This ZPARM is in effect only when the star join is enabled by the already
existing ZPARM STARJOIN.

The best environment for star join


The theory behind the design of DB2 V6 star join support and the practical experience gained
through the performance measurements have shown that the best environment for star join
includes these features:
򐂰 There is a good selectivity on the first column of the chosen index:
– The set of qualifying column value combinations needs to be highly selective.
– AND most of the filtering has to happen on the first index column.
– The next most frequent filtering is on the second index column, and so on.
򐂰 There is a high cluster ratio on the chosen index.
򐂰 There is a small number of column values in workfiles.
– The number of qualifying values needs to be low in absolute terms, not just in
proportion to the number of possible values.
– Each probe into the index incurs significant overhead.
– You want to get the most benefit from the least number of probes.
򐂰 There is no missing predicate for index columns:
– This is to avoid the impact due to stage 2 predicates.
򐂰 If there are combinations of qualifying column values that do not actually occur in the fact
table index and data, it is best if these combinations are clustered together:
– Again, this reduces the number of probes needed.

It is important to have all of the following conditions met:


򐂰 Suitable queries should be used, which provide good filtering of the fact table, and ideally
the dimension tables as well, unless they are extremely small.
򐂰 The index should match these queries very closely, with all queries matching the index.
򐂰 If there are many combinations of column values which qualify from dimension tables but
do not appear in the fact table, these need to be combinations that, when ordered in the
sequence of the fact table index, all fit within a small number of gaps in the index.
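
A hedged sketch of a fact table index designed along these lines (names are illustrative):
the dimension keys that provide the most filtering come first, and the index is the
clustering index of the fact table:

CREATE INDEX FACTIX1
  ON FACT (D3_KEY, D4_KEY, D1_KEY, D2_KEY)
  CLUSTER;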




Chapter 4. DB2 subsystem performance


In this chapter, we discuss the following topics and related performance enhancements that
affect DB2 subsystem performance:
򐂰 Asynchronous preformat: Asynchronous preformat improves insert performance.
򐂰 Parallel data set open: Data set open/close processing for partitions in the same
partitioned table space/index is no longer a serial process.
򐂰 Virtual Storage Constraint relief and new traces: VSCR is a permanent concern, new
traces help you in monitoring virtual storage usage.
򐂰 CATMAINT utility: Performance of the CATMAINT utility is improved.
򐂰 Evaluate uncommitted: A new DSNZPARM, EVALUNC, specifies whether predicate
evaluation can occur on uncommitted data.
򐂰 Log-only recovery improvement: Frequent update of HPGRBRBA improves log-only
recovery.
򐂰 Reduced logging for variable-length rows: Under certain conditions the amount of
logging done for updates of variable-length rows can be reduced.
򐂰 DDL concurrency improvement: Row level locking on catalog table spaces with no links
defined can improve DDL concurrency.



4.1 Asynchronous preformat
Before DB2 V7 you could only preformat unused space in a table space/index during LOAD or
REORG. After the data has been loaded and the indexes built, the PREFORMAT option on
the LOAD and REORG utility command statement directs the utilities to format all the unused
pages, starting with the high-used relative byte address (RBA) plus 1 (first unused page) up
to the high-allocated RBA. After preformatting, the high-used RBA and the high-allocated
RBA are equal.
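
For example, a minimal sketch of requesting preformat through REORG (the table space name
is illustrative):

REORG TABLESPACE DBX.PTS1 PREFORMAT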

When the space in a table space is not preformatted by LOAD or REORG, performance is
impacted when insert processing reaches the limit of the preformatted area and
synchronous preformatting needs to be done. See Figure 4-1.

This typically takes 0.2 to 1 second, and during this time period, insert processing is held up.
Preformat will format 2 cylinders at a time on a 3390 device unless the allocated space is
smaller than a cylinder; in that case only 2 tracks will be formatted.

DB2 V7 brings relief for this issue with the asynchronous preformat functionality.

Prior to Version 7: when inserts reach the end of the preformatted portion of the allocated
space, a new preformat is triggered and the insert must wait (synchronous preformat).
New in Version 7: a threshold within the preformatted area triggers the next preformat
early, and the insert does NOT wait (asynchronous preformat).

Figure 4-1 Synchronous and asynchronous preformat

Disk space needs to be preformatted before a row/key can be inserted. Initial preformatting is
done at table space/index/partition create time.

Insert activity will use the formatted space in a table space/index/partition pageset until a
threshold is reached. Hitting this threshold triggers a preformat service task. DB2 supports up
to 20 preformat/extend service tasks to perform preformat/extend processing in parallel. Only
one preformat/extend service task can be active on a given table space/index/partition
pageset at one time.



4.1.1 Asynchronous preformat performance
A test was executed to evaluate the performance impact of asynchronous preformat.

Measurement environment
Two test scenarios were executed:
򐂰 CPU bound test case — sequential insert of 8 million rows, 200 bytes per row
򐂰 I/O bound test case — sequential insert of 2 million rows, 800 bytes per row

For this measurement, the following hardware and software was used:
򐂰 Hardware
– IBM 9672-ZZ7 processor
– ESS model E20
򐂰 Software levels
– OS/390 V2R7
– DB2 V7 and DB2 V6

Asynchronous preformat measurement results


The measurements included CPU bound and I/O bound situations.

CPU bound test case


Figure 4-2 shows how DB2 V7 asynchronous preformat reduced the elapsed time to only 5%
above the class 1 measured CPU time, 38% less than the time with DB2 V6 synchronous
preformat.

The chart compares elapsed time in seconds for V7 and V6, broken down into CPU time,
lock/latch suspension, TS or log writes, EXT/FMT/DEL, and unaccounted time; the V7 elapsed
time is 38% lower than V6.

Figure 4-2 CPU bound test case



I/O bound test case
See Figure 4-3 for the results of this measurement.

The chart compares elapsed time in seconds for three runs: V7, V7 after a REORG with
PREFORMAT, and V6, broken down into CPU, lock/latch suspension, TS or log writes,
EXT/FMT/DEL, and unaccounted time. Relative to V6, V7 shows an 8% reduction in elapsed
time, and the V7 run after REORG with PREFORMAT shows a 19% reduction.

Figure 4-3 I/O bound test case

The test results, after a Reorg with preformat was executed on the table space, represent the
optimal performance attainable if asynchronous preformat was 100% concurrent with other
processes.

In this test case, insert reaches the preformat threshold; the preformat service task is then
triggered, and preformat I/Os are done. In the meantime, the insert job could concurrently
insert into the available already formatted storage. After some time, data buffers are filled and
deferred writes are scheduled once the VDWQ threshold is reached.

DB2 may then schedule a deferred write I/O concurrently with a preformat service task for the
same table space/index. In this case, the deferred write task catches up with the
asynchronous preformat service task. This shows up in the measurement as lock/latch
suspension for an end-of-extend lock. On top of this, deferred write task(s) will compete for
the same I/O resources (UCB,...) as the preformat service task. In this situation, if possible, it
is better to use LOAD/REORG preformat prior to execution of the sequential inserts.

The ESS Parallel Access Volumes feature can bring some relief for this situation. See
Chapter 10, “Synergy with host platform” on page 197, for a description of ESS from a DB2
point of view.

4.2 Parallel data set open


Parallel data set open has improved again in DB2 V7.

Before DB2 V5, there was only a single task to perform open/close processing of DB2
database data sets. DB2 V5 introduced 10 parallel tasks for open/close processing, and this number
was increased to 20 in DB2 V6. Parallel data set open during restart was introduced in DB2 V6
and parallel open/close of partitions within the same partitioned table space/index comes with
DB2 V7.



4.2.1 Performance
A set of measurements were executed in order to evaluate the performance benefit of the
parallel data set open feature.

Measurement environment
For this measurement, the following hardware and software was used:
򐂰 Hardware:
– IBM 9672-ZZ7 processor
– Three controllers with 8 disk volumes each
򐂰 Software levels:
– OS/390 V2R7
– DB2 V7 and DB2 V6

Ten parallel jobs were each accessing 20 separate partitions of a partitioned table space with
200 partitions. The same test was run on DB2 V6 and DB2 V7 in order to observe the elapsed
time reduction in DB2 V7 due to the concurrent open of the partitions in a partitioned table
space.

Performance measurement results


See Figure 4-4 for the measurement results. For this test case we achieved a 2.2 times
reduction in elapsed time for DB2 V7 compared to DB2 V6.

The chart shows the elapsed time in seconds for the test: 4.04 seconds with V6 versus
1.85 seconds with V7.
Figure 4-4 Parallel data set open performance

4.3 Virtual storage constraint relief


In this section we describe the instrumentation facility enhancements that will help you
monitor the virtual storage usage in the DBM1 address space.



4.3.1 Instrumentation enhancements
Two new IFCIDs are added to record DBM1 storage usage statistics.

Storage manager pool summary statistics


IFCID 225 provides you with summary information on the virtual storage usage in the DBM1
address space. This IFCID is contained in Statistics Class 6 and will be recorded at the DB2
statistics interval. See 9.1.1, “New IFCIDs” on page 180 for a description of this IFCID.
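
Assuming statistics class 6 is not already active at your site, it can be started with the
standard trace command (destination and other options follow your site conventions; this is
only a sketch):

-START TRACE(STAT) CLASS(6)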

Storage manager pool statistics


IFCID 217 provides you with detailed information on the storage usage in the DBM1 address
space. This IFCID is contained in Global Class 10 and will be recorded at the DB2 statistics
interval. See 9.1.1, “New IFCIDs” on page 180 for a description of this IFCID.

4.3.2 Database address space — virtual storage consumption


The components that are responsible for the major part of the virtual memory consumption in
the DBM1 address space are now reported in section STORAGE STATISTICS of the DB2 PM
statistics long report. The storage statistics report layout (not yet available) will look similar to
the example in Figure 4-5.

Figure 4-5 STORAGE STATISTICS report layout



Database address space — virtual storage budget
In the article DB2 UDB for OS/390 Storage Management, IDUG Solutions Journal, Spring
2000, available from: http://www.idug.org, the authors explain a methodology on how to
estimate your virtual storage budget and provide relief to virtual storage constraints inside the
DBM1 address space. You can now use the Storage Statistics report to identify the storage
consumption of each component.

4.4 CATMAINT utility


The CATMAINT utility updates the catalog; it is run during migration or when instructed to do
so by the IBM service.

Even if you are not planning to use new DB2 V7 functions, before you start migrating
you must be aware of changes that might affect your migration. You must consult the current
standard documentation for details. A starting point for your evaluation is the DB2 UDB for
OS/390 and z/OS Version 7 Installation Guide, GC26-9936, specifically the two chapters on
migration from V6 and V5. You must also check with the IBM support organization regarding
the correct maintenance levels on both starting and target systems.

CATMAINT is invoked by the installation procedure DSNTIJTC. It migrates your Version 6 or
Version 5 catalog to the Version 7 catalog. DSNTIJTC contains three steps.

The first step of DSNTIJTC creates new catalog and directory objects, adds columns to
existing catalog tables and creates and updates indexes on the catalog tables to
accommodate new Version 7 objects. All IBM-supplied indexes are created or updated
sequentially during the execution of DSNTIJTC.

The second step of DSNTIJTC searches for the unsupported objects to display the warning
messages. The third step does the stored procedure migration processing and is only for
migration from V5.
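
The utility control statement executed by DSNTIJTC is itself minimal; essentially:

CATMAINT UPDATE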

The DB2 V7 catalog migration process is as follows:


1. Mandatory catalog processing:
– Authorization check
– Ensure catalog is at correct level
– DDL processing
– Additional processing and tailoring
– Directory header page and BSDS/SCA updates
– Single commit scope: it is all or nothing
– No table space scan
2. Looking for unsupported objects in the DB2 catalog:
– Type 1 indexes
– Data set passwords
– Shared read-only data
– Syscolumns zero records
The migration will not fail if any of these unsupported objects are found. A message will be
issued for each unsupported object found.
There is a SYSDBASE table space scan in this step 2.



3. Stored procedure migration processing from DB2 V5:
– Catalog table SYSIBM.SYSPROCEDURE is no longer used to define stored
procedures to DB2. All rows in SYSIBM.SYSPROCEDURE are migrated to
SYSIBM.SYSROUTINES and SYSIBM.SYSPARMS.
– When migrating from V5, DB2 generates CREATE PROCEDURE statements which
populate SYSIBM.SYSROUTINES and SYSIBM.SYSPARMS. Rows in
SYSIBM.SYSPROCEDURES that contain non-blank values for columns AUTHID or
LUNAME are not used to generate the CREATE PROCEDURE statements. DB2 also
copies rows in SYSIBM.SYSPROCEDURES into SYSIBM.SYSPARMS and
propagates information from the PARMLIST column of SYSIBM.SYSPROCEDURES.

A new status message, DSNU777I, is issued at several points during the migration process to
indicate migration progress. New diagnostic error messages are issued when CATMAINT
processing fails. If a problem is found during the SQL processing phase of migration, then
message DSNU778I is issued. If non-supported functions such as type 1 indexes are
encountered, then message DSNU776I is issued. All of these messages are written to the
SYSPRINT data set. If job DSNTIJTC fails, the catalog and directory remain in Version 6
format, because CATMAINT failures roll back all Version 7 changes. Altered indexes are not rolled
back and need to be verified with the CHECK INDEX utility.

4.4.1 CATMAINT utility performance


The CATMAINT utility has been greatly improved with DB2 V7, and the execution of the first
step of the migration is extremely fast. In this section we describe the CATMAINT
performance measurements.

CATMAINT performance measurement — description


A customer-provided catalog was restored. CATMAINT was executed on this catalog in both
non-data-sharing and data-sharing environments.

For this measurement we used:


– OS/390 V2R7
– DB2 V5, V6 and V7
– IBM 9672 (G6), 12-way
– ESS 2105-E20



Table 4-1 lists the number of rows of all the relevant catalog tables.
Table 4-1 Catalog tables
TABLE            NUMBER OF ROWS   TABLE           NUMBER OF ROWS   TABLE             NUMBER OF ROWS
SYSCOPY                  975341   LOCATIONS                    0   SYSDBRM                    15920
SYSCOLUMN               2107861   LULIST                       0   SYSPLAN                     1637
SYSFIELDS                     0   LUMODES                      0   SYSPLANAUTH                 6735
SYSFOREIGNKEYS              180   LUNAMES                      1   SYSPLANDEP                  4217
SYSINDEXES                70268   MODESELECT                   0   SYSSTMT                   216743
SYSINDEXPART              74717   USERNAMES                    0   SYSCOLDIST                 28502
SYSKEYS                  263148   SYSRESAUTH                4169   SYSCOLDISTSTATS             4363
SYSRELS                      93   SYSSTOGROUP                  6   SYSCOLSTATS                19364
SYSSYNONYMS                 723   SYSVOLUMES                   7   SYSINDEXSTATS                574
SYSTABAUTH              2365287   SYSPACKAGE              690411   SYSTABSTATS                  744
SYSTABLEPART              14493   SYSPACKAUTH             799455   SYSSTRINGS                   403
SYSTABLES                261262   SYSPACKDEP             2479336   SYSCHECKS                     46
SYSTABLESPACE             10003   SYSPROCEDURES                1   SYSCHECKDEP                   48
SYSDATABASE                9081   SYSPACKLIST               4085   SYSUSERAUTH                   59
SYSDBAUTH                 55023   SYSPACKSTMT           13448719   SYSVIEWDEP                   162
IPNAMES                       0   SYSPLSYSTEM                  0   SYSVIEWS                     266

CATMAINT performance measurement — results


Table 4-2 lists the results of the performance measurements. They are divided into two
groups, one with data sharing, one without, and show the results of the first CATMAINT step
when migrating from V5 to V6, V6 to V7, and V5 to V7.
򐂰 BP0 was used for the catalog and directory.
򐂰 BP4 was used for the workfiles.
Table 4-2 CATMAINT performance measurement results

                         Elapsed time (sec)   CPU time (sec)   BP0 #getpages   BP4 #getpages
Non-data-sharing group
V5 -> V6                                834              192       1,607,902         516,284
V6 -> V7                                 82                6          76,889              24
V5 -> V7                                768              194       1,643,913         516,090
Data-sharing group
V5 -> V6                                812              191       1,607,523         516,292
V6 -> V7                                 74                6          76,571              32
V5 -> V7                                754              195       1,644,601         516,080



CATMAINT performance measurement — conclusions
The following conclusions on the improvements for step 1 of the migration process were
drawn from this measurement for the non-data-sharing case and the data-sharing case,
respectively. With DB2 V7, step 1 elapsed time is now comparable with step 2.

򐂰 Non-data-sharing case (see Figure 4-6):


– 10 times elapsed time improvement migrating from DB2 V6 to DB2 V7 catalog,
compared to migration from DB2 V5 to DB2 V6 catalog.
– 16% elapsed time improvement migrating from DB2 V5 to DB2 V7 catalog, compared
to migration from DB2 V5 to DB2 V7 catalog via DB2 V6 catalog.

Migration performance - non-data sharing. The left chart shows the 10 times elapsed time
improvement for V6 to V7 (82 seconds) compared to V5 to V6 (834 seconds). The right chart
shows the 16% elapsed time improvement for a direct V5 to V7 migration (768 seconds)
compared to V5 to V6 to V7 (834 + 82 = 916 seconds).

Figure 4-6 Comparison of migration performance — non-data-sharing case



򐂰 Data-sharing case (see Figure 4-7):
– 11 times elapsed time improvement migrating from DB2 V6 to DB2 V7 catalog,
compared to migration from DB2 V5 to DB2 V6 catalog.
– 15% elapsed time improvement migrating from DB2 V5 to DB2 V7 catalog, compared
to migration from DB2 V5 to DB2 V7 catalog via DB2 V6 catalog.
򐂰 The best performance for migration from DB2 V5 to DB2 V6 is achieved with a buffer pool
size of 10,000 buffers for BP0, and also 10,000 buffers for the workfile buffer pool.
Migration from DB2 V6 to DB2 V7 rarely uses the workfile buffer pool.

Note: Make sure that APARs PQ38035 and PQ44985 are applied to your system. With
PQ38035 CATMAINT no longer fails when an unsupported object is found. PQ44985
solves a performance problem with CREATE/DROP/ALTER of table spaces, tables and
indexes related to declared temporary tables.

Migration performance - data sharing. The left chart shows the 11 times elapsed time
improvement for V6 to V7 (74 seconds) compared to V5 to V6 (812 seconds). The right chart
shows the 15% elapsed time improvement for a direct V5 to V7 migration (754 seconds)
compared to V5 to V6 to V7 (812 + 74 = 886 seconds).

Figure 4-7 Comparison of migration performance — data sharing case



4.5 Evaluate uncommitted
A new DSNZPARM parameter, EVALUNC, has been introduced; this new subsystem
parameter enables DB2 to take fewer locks during query processing.

4.5.1 Description
EVALUNC specifies whether predicate evaluation can occur on uncommitted data of other transactions.
The option applies only to stage 1 predicate processing that uses table access (table space
scan, index-to-data access, and RID list processing) for queries with isolation level RS or CS.

Although the option influences whether predicate evaluation can occur on uncommitted data,
it does not influence whether uncommitted data is returned to an application. Queries with
isolation level RS or CS will return only committed data. They will never return the
uncommitted data of other transactions, even if predicate evaluation occurs on such data. If data
satisfies the predicate during evaluation, the data is locked as needed, and the predicate is
re-evaluated as needed before the data is returned to the application.

If you specify NO, the default, predicate evaluation occurs only on committed data (or on the
application’s own uncommitted changes). NO ensures that all qualifying data is always
included in the answer set.

If you specify YES, predicate evaluation can occur on uncommitted data of other transactions.
With YES, data can be excluded from the answer set. Data that does not satisfy the predicate
during evaluation — but then, because of undo processing (ROLLBACK or statement failure),
reverts to a state that does satisfy the predicate — is excluded from the answer set. A value of
YES enables DB2 to take fewer locks during query processing. The number of locks avoided
depends on:
– The query’s access path
– The number of evaluated rows that do not satisfy the predicate
– The number of those rows that are on overflow pages

This parameter can be set as part of the installation procedure on panel DSNTIP4.
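
As a hypothetical illustration (the table and predicate are invented), consider a query run
with isolation level CS:

SELECT ORDERNO, STATUS
  FROM ORDERS
 WHERE STATUS = 'SHIPPED'
 WITH CS;

With EVALUNC=YES, rows whose uncommitted STATUS value does not satisfy the predicate can be
skipped without waiting for a lock; rows that do satisfy it are still locked and re-evaluated
as needed before being returned.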

4.5.2 Recommendation
Specify YES to improve concurrency if your applications can tolerate the returned data
falsely excluding data that would be included as the result of undo processing (ROLLBACK or
statement failure).

4.6 Log-only recovery improvement


Prior to DB2 V7 the field HPGRBRBA, the recover base RBA field on the pageset header
page, representing the starting log RBA value for a log-only recovery, was updated when:
򐂰 A pseudo close or close happened for an object.
򐂰 A stop command was issued for the object.
򐂰 A QUIESCE or LOAD utility was run against the object.

In DB2 V7 the HPGRBRBA value is updated every nth checkpoint, where n is controlled by
the value of the DSNZPARM parameter DLDFREQ.



Log-only recovery benefits from limiting the amount of log records that must be scanned by the
RECOVER utility. In particular, log-only recoveries of data restored from copies made during a
log suspend/resume interval benefit from this enhancement.

4.7 Reduced logging for variable-length rows


Before DB2 V7, an update to a variable length row resulted in the logging of the data from the
first changed byte of the variable column until the end of the row.

In DB2 V7, the data logged starts at the first byte of the first changed column to the last byte
of the last changed column.

Restrictions
򐂰 The update does not change the row length.
򐂰 No hardware compression is done, and no editprocs are used.
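
As a hypothetical illustration (table, columns, and values are invented), an update such as
the following qualifies for the reduced logging, provided the new VARCHAR value has the same
length as the old one, the table space is not compressed, and no editproc is used:

UPDATE CUSTOMER
   SET LASTNAME = 'SMITH'
 WHERE CUSTNO = 123456;

In this case, only the bytes of the changed column are logged.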

4.8 DDL concurrency improvement


Row level locking for catalog table spaces is now allowed for those catalog table spaces that
do not contain links.

This enhancement should further improve parallel DDL.




Chapter 5. Availability and capacity enhancements

Continuous availability has become a major business requirement today. The new releases of
DB2 have kept on introducing enhancements to meet this need.

DB2 Version 7 introduces new and enhanced functionality in this area.


򐂰 Online DSNZPARMs: You can now activate a new set of DB2 subsystem parameters
without having to recycle DB2. This option allows for a dynamic change of a major part of
the subsystem startup parameters.
򐂰 Consistent restart: Recover and Load Replace are now allowed against objects
associated with postponed units of recovery. Cancel of postponed recovery is now an
option.
򐂰 Cancel Thread NOBACKOUT: A NOBACKOUT option is added to the CANCEL THREAD
command.
򐂰 Adding workfiles: You can now CREATE and DROP a workfile table space without
having to stop the workfile database.
򐂰 Log manager: Suspend/resume log activity allows for snapshot copy of the DB2
environment. Checkpoint frequency can be controlled by a time interval. Critical log read
access errors do not force DB2 down, but can be retried. A new warning message for long
running units of recovery is introduced.
򐂰 IRLM enhancements: A new option of the modify IRLM command allows the DBMS
timeout value as known to IRLM to be dynamically modified, and new subsecond
detection values can now be specified for the DEADLOCK parameter. These
enhancements are introduced by APARs.



5.1 Online DSNZPARM
Prior to DB2 V7 you had to recycle DB2 in order to load a new DSNZPARM module. DB2 V7
introduces a new command, -SET SYSPARM, which allows you to dynamically load the
DSNZPARM load module.

5.1.1 SET SYSPARM command


򐂰 -SET SYSPARM LOAD (load module name)
– Loads the specified module
– Loads DSNZPARM if load module name not specified
򐂰 -SET SYSPARM RELOAD
– Reloads the last named subsystem parameter load module
򐂰 -SET SYSPARM STARTUP
– Resets loaded parameters to their startup values

5.1.2 Displaying the current settings


Sample program DSN8ED7 returns the currently active subsystem parameters in a formatted
report.

DSN8ED7: Sample DB2 for OS/390 Configuration Setting Report Generator

Macro    Parameter Current                           Description/             Install  Fld
Name     Name      Setting                           Install Field Name       Panel ID No.
-------- --------- --------------------------------- ------------------------ -------- ---
DSN6SYSP AUDITST   00000000000000000000000000000000  AUDIT TRACE              DSNTIPN    1
DSN6SYSP CONDBAT   0000000064                        MAX REMOTE CONNECTED     DSNTIPE    4
DSN6SYSP CTHREAD   00070                             MAX USERS                DSNTIPE    2
DSN6SYSP DLDFREQ   00005                             LEVELID UPDATE FREQUENCY DSNTIPL   14

For a complete listing of the updatable parameters, refer to Appendix A, “Updatable DB2
subsystem parameters” on page 211.

5.1.3 Parameter behavior with online change

For most parameters, the online change is transparent, with the change taking effect
immediately. There are several parameters for which this is not the case, because of the type
of functions that they impact. The behavior exhibited by the system upon changes to these
parameters is discussed here:
򐂰 AUTHCACH
Changing the plan authorization cache only takes effect for new BIND PLAN actions
without the CACHESIZE specification.



򐂰 LOBVALA
Changing the user LOB value storage does not affect any currently running agents that
have already acquired storage from data spaces for LOBs. It only affects new agents.
򐂰 LOBVALS
The system LOB value storage is examined whenever an agent attempts to allocate
storage from a data space pool for LOBs. If the parameter change decrements the value
of LOBVALS such that the current amount of storage allocated for the system is greater
than the new LOBVALS value, a resource-not-available SQLCODE is not issued until the
next attempt by an agent to acquire storage.
򐂰 MAXRBLK
If the RID pool size is decremented and, as a result, the current number of RID blocks
allocated is greater than the newly specified value, the new MAXRBLK value does not take
effect until a user attempts to allocate another RID block. When the value is decremented,
an attempt is made to contract the storage pool (which can only be contracted if there are
empty segments).
򐂰 NUMLKTS
The number of locks per table space does not change immediately for agents that have
already been allocated when the parameter is changed. The allocated agents keep the old
value in accordance with their RELEASE bind parameter (COMMIT or DEALLOC).
򐂰 EDMPOOL
The EDM pool storage parameter can be used to increase or decrease the size of the
EDM pool. The initial allocation works as it did prior to online change — getting a single
block of storage the size of the request. Using dynamic parameters, the user can increase
the EDM pool from the initial size or reduce the EDM pool back to the initial size. Any
attempt to reduce the EDM pool size below the initial value is rejected and indicated by
message DSNG001I.
The changes in the EDM pool size are done in 5M increments, rounding up to the next 5M
boundary.
If insufficient virtual storage is detected when expanding the EDM pool, DB2 issues a
warning message (DSNG003I) and increases the pool as large as available space allows.
An informational message (DSNG002I) indicates the amount of the allocated size.
Note that a contiguous EDM pool is the most efficient, and using the SET SYSPARM
command to increase the EDM pool size may not result in a contiguous pool.
At the time the EDM pool is contracted, some storage may be in use. If so, a message is
issued indicating how much storage was released and how much is pending. The pending
storage is released when it is no longer accessed. The changes in the EDM pool can be
monitored using the EDM statistics or the 106 trace record.
Using the SET SYSPARM command to decrease the size of the EDM pool may involve a
wait for system activity to quiesce, and therefore the results may not be instantaneous.
See also 5.1.4, “Examples” on page 105.
򐂰 EDMBFIT
The large EDM pool better-fit parameter is used to determine the algorithm to search the
free chain for large EDM pools (greater than 40 M). This change only affects new free
chain requests.



򐂰 EDMDSPAC
The EDM pool data space parameter can be used to increase or decrease the size of the
Data Space storage used for dynamic statements. If the initial value was zero when DB2
was started, this parameter cannot be changed. The data space storage can only be
increased up to the maximum size specified at DB2 start time. Other than those
restrictions, this parameter behaves the same as the EDMPOOL parameter.
򐂰 RLFERRD, RLFAUTH, RLFTBL, RLFERR
After a change to the resource limit facility parameters, for the dynamic statements
SELECT, INSERT, UPDATE, and DELETE, the change takes effect after the resource limit
facility is restarted using the -START RLIMIT command. The change is not seen for
dynamic DML statements issued before the RLF is refreshed with the -START RLIMIT
command.
򐂰 IDBACK, IDFORE, BMPTOUT, DLITOUT
Any changes to these parameters (max batch and max TSO connect, IMS BMP and DLI
batch timeout) do not take effect until the next create thread request. Also, any threads
currently executing are not updated with the new values.
򐂰 Archive log parameters
If an offload (archiving an active log data set) is active at the time of a related parameter
change, the offload continues with the original parameter values. The new archive log
parameter values do not take effect until the next offload sequence.
򐂰 CHKFREQ (LOGLOAD before DB2 V7)
The checkpoint frequency parameter can be modified by the -SET LOG COMMAND,
which overrides the value in the subsystem parameter load module. If a new parameter is
activated through the online change, it overrides the last -SET LOG value. The current
value can be displayed with the -DISPLAY LOG command.
򐂰 DEALLCT, MAXRTU
The behavior for the archive deallocation period and the maximum allocated tape units
parameters is similar to check point frequency (CHKFREQ). These values can be
modified with the -SET ARCHIVE command. Changing the parameter online overrides the
last -SET ARCHIVE value. The current values can be displayed with the -DISPLAY
ARCHIVE command.
򐂰 DSSTIME, STATIME
The data set statistics and the statistics interval time parameters are read from the
updated control block when the intervals are being set for the next statistics timer pop.
So the existing interval for each must expire before the new value can take effect.
򐂰 PTASKROL
This value, indicating whether to roll up query parallel task accounting trace records into
the originating task accounting trace or not, is read from the updated control block when
the first child record is summed up for the child task accounting data roll up. The value is
saved in the parent accounting block so that the behavior for any parent and all its child
tasks is consistent.
򐂰 MAXDBAT
When the maximum remote active value is increased, current agents that were queued
prior to the increase do not become active until existing active agents terminate or become
inactive (see the "DDF THREADS" INACTIVE specification in the DSNTIPR installation
panel where inactive thread support is enabled).



5.1.4 Examples
The SET SYSPARM command is used as shown in the following examples:

-SET SYSPARM LOAD(DSNZPAR1)

Message DSNZ014I is generated if the value for an unchangeable parameter differs from the
startup value:

DSNZ006I =DB2A DSNZCMD1 SUBSYS DB2A SYSTEM


PARAMETERS LOAD MODULE NAME DSNZPAR1 IS BEING LOADED
DSNZ014I =DB2A DSNZOVTB PARAMETER BACKODUR IN CSECT
DSN6SYSP CANNOT BE CHANGED ONLINE. PARAMETER CHANGE IGNORED.
DSNZ007I =DB2A DSNZCMD1 SUBSYS DB2A SYSTEM
PARAMETERS LOAD MODULE NAME DSNZPAR1 LOAD COMPLETE

-SET SYSPARM LOAD(DSNZPAR2)

If the EDM pool is 10M and the request is to increase it to 12M, then it is increased to 15M:

DSNZ006I =DB2A DSNZCMD1 SUBSYS DB2A SYSTEM


PARAMETERS LOAD MODULE NAME DSNZPAR2 IS BEING LOADED
DSNG002I =DB2A EDM POOL HAS AN
INITIAL SIZE 10485760
REQUESTED SIZE 12582912
AND AN ALLOCATED SIZE 15605760
DSNZ007I =DB2A DSNZCMD1 SUBSYS DB2A SYSTEM
PARAMETERS LOAD MODULE NAME DSNZPAR2 LOAD COMPLETE

-SET SYSPARM STARTUP

Changing the DSNZPARM parameters back to their start-up values also sets the EDM pool
back to its initial value:

DSNZ007I =DB2A DSNZCMD1 SUBSYS DB2A SYSTEM


PARAMETERS LOAD MODULE NAME DSNZPAR2 LOAD COMPLETE
DSNG002I =DB2A EDM POOL HAS AN
INITIAL SIZE 15605760
REQUESTED SIZE 10485760
AND AN ALLOCATED SIZE 10485760
DSNZ011I =DB2A DSNZCMD1 SUBSYS DB2A SYSTEM PARAMETERS SET TO STARTUP

5.2 Consistent restart enhancements


In DB2 V6, backout processing during DB2 restart could be postponed in order to make DB2
available faster after an outage. The postponed units of recovery left some objects in RESTP
(non data sharing) or ARESTP (data sharing) status. In order to resolve this status, you had
to issue the -RECOVER POSTPONED command or you had to set LBACKOUT to AUTO in ZPARM.

DB2 V7 introduces further enhancements on this issue. The -RECOVER POSTPONED CANCEL
command allows you to cancel postponed units of recovery. Impacted page sets/partitions
are marked Refresh Pending (REFP) and LPL. In order to make the units of recovery aware
of the REFP status, LPL was added.



The REFP,LPL status can be resolved in one of the following ways:
򐂰 LOAD REPLACE
This is done on the impacted page sets/partitions.
򐂰 RECOVER TO (LOGPOINT, LOGRBA, TOCOPY)
It is your responsibility to specify a valid logpoint.
򐂰 START DB() SPACE() ACCESS(FORCE)
Several new messages, associated with the -RECOVER POSTPONED CANCEL command, are
introduced. See Figure 5-1 for an example.

Figure 5-1 RECOVER POSTPONED CANCEL command response

The DISP field in the SUMMARY OF COMPLETED EVENTS report of DSN1LOGP for a unit
of recovery whose recovery was cancelled is set to CANCELLED. See Figure 5-2.

Figure 5-2 DSN1LOGP SUMMARY OF COMPLETED EVENTS report

A new diagnostic log record is added that can log successful START DATABASE
ACCESS(FORCE) commands. The DBID/OBID/PART is logged, as well as the database and
page set name. The log record will be of type DIAGNOSTIC subtype name START
DATABASE FORCE. See Figure 5-3.



00000028C000 URID(00000028BE15) LRSN(B4E94FB29BE0)
TYPE( REDO )
SUBTYPE(START DATABASE ACCESS FORCE)
PROCNAME(DSNIZLDL)

Figure 5-3 DSN1LOGP example - START DATABASE FORCE log record

5.3 CANCEL THREAD NOBACKOUT


The -CANCEL THREAD command was enhanced in a similar way to the -RECOVER POSTPONED
command; optionally you can add the parameter NOBACKOUT to this command, which possibly
leaves affected objects in an inconsistent state. These objects are also marked REFP,LPL.
The same actions as with the recover postponed cancel command can be taken to resolve
the REFP,LPL status.

5.4 Adding workfiles


DB2 V7 provides a less disruptive addition of workfile table spaces. It allows you to CREATE
and DROP a workfile table space without having to STOP the workfile database.
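
For example, a hedged sketch (the table space name, storage group, space values, and buffer
pool are illustrative):

CREATE TABLESPACE WRKTS09 IN DSNDB07
  USING STOGROUP SYSDEFLT
    PRIQTY 72000 SECQTY 7200
  BUFFERPOOL BP7;

DROP TABLESPACE DSNDB07.WRKTS09;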

All customers will benefit from this enhancement, particularly large sites where significant
space needs to be allocated for large workdays, and/or large query sorting, and/or 24x7
applications. You will also be able to better manage your workfile space, by changing the
workfile allocations more frequently. This enhancement reduces performance problems and
improves availability in a 24x7 environment, because changing workfile space allocation is
much less disruptive.

In a non-data sharing environment, a U lock is taken on the workfile DBD. Therefore while you
are creating or dropping a workfile table space, other DB2 agents are able to continue to use
other table spaces in the workfile database.

However, in a data sharing environment, a U lock is taken on the workfile database DBD
only if the DB2 member executing the DDL is the “owning” DB2 member for that workfile
database. Otherwise, an X lock is taken on the DBD. Therefore, DB2 will allow other DB2
agents to have concurrent use of other workfile table spaces in the database only if you
execute the DDL on the “owning” DB2 member. Otherwise, DB2 will not allow concurrent
access to the other workfile table spaces in the database being modified.

5.5 Log Manager enhancements


In this section we describe the Log Manager enhancements.

5.5.1 Log suspend and resume


The LOG SUSPEND command suspends update activity and logging while you make an
external copy of your production system. The LOG RESUME command restarts update
activity and logging. During the brief suspension, you can use a fast-disk copy facility, such as
Enterprise Storage Server FlashCopy or RAMAC Virtual Array Snapshot, to make a copy.
These copies can be used for remote site recovery or point-in-time recovery. This
enhancement was also introduced in DB2 V6 by APAR PQ31492.
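
A hedged sketch of the overall sequence (the copy step depends on your disk subsystem and
tooling):

-SET LOG SUSPEND
   ... take the FlashCopy or SnapShot of all DB2 volumes (catalog, directory,
       user data, active logs, and BSDS) ...
-SET LOG RESUME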



Effects of SET LOG SUSPEND command
The SET LOG SUSPEND command has the following effects:
򐂰 A system checkpoint is taken (non-data-sharing only; in data sharing, a system checkpoint
could cause a system hang due to the log suspend on the other members).
򐂰 A log write latch is obtained.
򐂰 The log buffers are flushed.
򐂰 The BSDS is updated with the highest written RBA.
򐂰 A highlighted message, DSNJ372I, is issued to the console which identifies the subsystem
and the log point at which log activity is suspended.

DSNJ372I =DB2A DSNJC09A UPDATE ACTIVITY HAS BEEN


SUSPENDED FOR DB2A AT RBA 00000031CB60, LRSN 00000031CB60, PRIOR
CHECKPOINT RBA 000000317B3A
DSN9022I =DB2A DSNJC001 '-SET LOG' NORMAL COMPLETION

Effects of SET LOG RESUME command


The SET LOG RESUME command has the following effects:
򐂰 The log write latch is released.
򐂰 The highlighted log suspend message is deleted.
򐂰 The log resumed message is issued.

DSNJ373I =DB2A DSNJC09A UPDATE ACTIVITY HAS BEEN RESUMED FOR DB2A
DSN9022I =DB2A DSNJC001 '-SET LOG' NORMAL COMPLETION

Notes:

Avoid the use of LOG SUSPEND/RESUME during heavy update activity. In addition to
holding up normal activity, there can be other issues when using the SnapShot/FlashCopy
process during heavy update activity:
򐂰 Restart processing after restoring the copy will take longer, because all pending writes have to
be applied from the log and unresolved units of recovery have to be rolled back or forward
recovered.
򐂰 You can compare the restart process following a restore of the copy with the restart
process after a system crash.
򐂰 Depending on the activity, the RVA/ESS will require more space until the copy offload
process completes.

5.5.2 Time controlled checkpoint interval


Prior to DB2 V7 you could implement a time controlled checkpoint frequency by using the
-SET LOG LOGLOAD(0) command every n minutes. In DB2 V7 the checkpoint frequency can be
time-driven or logload-driven.



Description
A new parameter, CHKTIME, of the -SET LOG command allows you to specify a time interval
from 0 to 60 minutes. A value of 0 forces DB2 to take a checkpoint without changing the
checkpoint frequency.

=DB2A SET LOG CHKTIME(20)


DSNJ339I =DB2A DSNJC009 SET LOG COMMAND COMPLETED, CHKTIME (20)
DSN9022I =DB2A DSNJC001 '-SET LOG' NORMAL COMPLETION

You can display the current CHKTIME value as well as current log information using the
-DISPLAY LOG command.

=DB2A DIS LOG


DSNJ370I =DB2A DSNJC00A LOG DISPLAY
CURRENT COPY1 LOG = DB2V710B.LOGCOPY1.DS01 IS 52% FULL
CURRENT COPY2 LOG = DB2V710B.LOGCOPY2.DS01 IS 52% FULL
H/W RBA = 00000117F4A2
H/O RBA = 000000000000
FULL LOGS TO OFFLOAD = 0 OF 6
OFFLOAD TASK IS (AVAILABLE)
DSNJ371I =DB2A DB2 RESTARTED 11:50:06 NOV 10, 2000
RESTART RBA 000000825000
CHECKPOINT FREQUENCY 20 MINUTES
LAST SYSTEM CHECKPOINT TAKEN 18:31:36 NOV 10, 2000
DSN9022I =DB2A DSNJC001 '-DIS LOG' NORMAL COMPLETION

Note:

Remember, the higher the values for LOGLOAD or CHKTIME, the longer it takes for DB2 to restart.
Be careful with the time interval controlled checkpoint frequency, because it can influence a
consistent DB2 restart time. As you know, DB2 restart time depends on the number of log
records to be processed since the last checkpoint, and with a time interval controlled
checkpoint frequency the number of log records can be different in each checkpoint interval.
When DB2 is restarted, it again picks up the value of the LOGLOAD parameter from your
DSNZPARM startup module.

5.5.3 Retry of log read request


Using earlier versions of DB2 for OS/390, the DB2 subsystem terminates whenever a
log-read request fails during a “must-complete” operation, such as rollback. Sometimes the
condition causing the log-read failure is correctable, such as a temporary HSM or tape
subsystem problem. With DB2 V7, messages DSNJ153E and DSNJ154I (WTOR) are issued to
identify the critical log-read error. DB2 will then wait for the reply to
message DSNJ154I before retrying the log-read access, or before abending.



DSNJ104I - DSNJR206 RECEIVED ERROR STATUS 00000004
FROM DSNPCLOC FOR DSNAME=DSNC710.ARCHLOG1.A0000049
DSNJ104I - DSNJR206 RECEIVED ERROR STATUS 00000004
FROM DSNPCLOC FOR DSNAME=DSNC710.ARCHLOG2.A0000049
*DSNJ153E - DSNJR006 CRITICAL LOG READ ERROR
CONNECTION-ID=TEST0001
CORRELATIOND-ID=CTHDCORID001
LUWID = V71A-SYEC1DB2.B343707629D=10
REASON-CODE=00D10345
*26 DSNJ154I - DSNJHR126 REPLY Y TO RETRY LOG READ REQUEST,
N TO ABEND

5.5.4 Long running UR warning message


Prior to DB2 V7, DB2 issues warning message DSNR035I for a long running unit of recovery
(UR) based on a number of system checkpoints taken since the beginning of the UR.
Although useful, this message could be misleading because it depends on the overall
DB2 workload.

A new warning message is issued when a unit of recovery has written a predefined number of
log records without committing. Every time the threshold is reached, message DSNJ031I is
generated. The specific purpose of this message is to warn you about ongoing work in DB2
which can impact restart time in case of a subsystem failure.

The value of this threshold is specified in the parameter URLGWTH (UR log record written
threshold) of the DSNZPARM. The value of this parameter can be modified using the -SET
SYSPARM command.

DSNJ031I =DB2A DSNJW001 WARNING - UNCOMMITTED UR


HAS WRITTEN 1000 LOG RECORDS -
CORRELATION NAME = PAOLOR6E
CONNECTION ID = BATCH
LUWID = DB2A.SCPDB2A.B4ECE117BB5C = 91
PLAN NAME = DSNTEP71
AUTHID = PAOLOR6
END USER ID = *
TRANSACTION NAME = *
WORKSTATION NAME = *



5.6 IRLM enhancements
In this section we describe two new enhancements for IRLM:
򐂰 A new option of the modify IRLM command allows the DBMS timeout value as known to
IRLM to be dynamically modified.
򐂰 New subsecond detection values can now be specified for the DEADLOCK parameter.

5.6.1 Dynamic change of IRLM timeout value


Sometimes, users would like the ability to alter the timeout value that was given to IRLM by
the DBMS at start up. Up to now, for DB2, this means that the subsystem must be shut down,
the value for IRLMRWT altered in the DSNZPARM, and DB2 restarted. APAR PQ38455
introduces a new MODIFY command option to dynamically change the timeout value.

MODIFY command description


The following MODIFY command requests that IRLM dynamically set the timeout value for
the specified subsystem:
MODIFY irlmproc,SET,TIMEOUT=nnnn,subsystem-name
򐂰 nnnn must be a number from 1 through 3600.
򐂰 subsystem-name is the DB2 subsystem name, as displayed by the MODIFY
irlmproc,STATUS command.

Any syntax error in issuing the command results in message DXR106E. Syntax errors include
an out-of-range TIMEOUT value or an invalid subsystem name. A syntax error message
is also given if the DXR177I message has not yet been received for the prior command
completion.

The TIMEOUT value must be a multiple of the local deadlock parameter. If the value entered
is not an even multiple of the deadlock parameter, IRLM will increase the timeout value to the
next highest multiple.

The value used by IRLM for timeout will be displayed in the DXR177I message which is
issued during deadlock processing. This new value is used until the IRLM or identified
subsystem is terminated, or the timeout is changed again by the operator. The value specified
on the command does NOT affect the IRLMRWT timeout value in the DSNZPARM.

IRLMRWT cannot be changed via the SET SYSPARM command.

Example
Enter the command on an MVS console:
/F DB2GIRLM,SET,TIMEOUT=65,DB2G

This is the response on the MVS console:


DXR177I IRLG001 THE VALUE FOR TIMEOUT IS SET TO 65 FOR DB2G



5.6.2 Subsecond deadlock detection
Both the speed of processors and the number of transactions per second are increasing, while
the window of opportunity is becoming shorter; hence the need to be able to detect
deadlocks before queueing takes place. Currently, the minimum value for the DEADLOK parameter in your IRLM
startup procedure is 1 second, and in a sysplex environment, the current minimum time for a
global deadlock to be detected is approximately 2 seconds when DEADLOK(1,1) is specified.

An IRLM enhancement introduced by the PTF for APAR PQ44791 makes it possible to
specify values lower than 1 second, down to the new limit of 100 milliseconds.

This enhancement provides the ability to specify a value in milliseconds for the IRLMPROC
DEADLOK parameter for the local deadlock frequency. You can also dynamically change the
deadlock frequency with the new command:
MODIFY irlmproc,SET,DEADLOCK=nnnn

In this command, the value for nnnn ranges from 100 to 5000 msec.

DSNTIPJ, the install panel for IRLM, is modified in accordance with the new allowed values.
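
A hedged sketch of the corresponding IRLMPROC specification (the first value is the local
deadlock detection cycle, here assumed to be expressed in milliseconds once the PTF is
applied; the second value, the global cycle multiplier, is left at 1):

DEADLOK='500,1'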

Measurements
The IRWW workload was used to evaluate the impact of the subsecond value for DEADLOK.
Two measurements were collected, one with DEADLOK=1000 (the current lower limit), and
one with DEADLOK=100 (the new lower limit after applying the fix). The results are
summarized in Table 5-1 and show no appreciable impact.
Table 5-1 Subsecond deadlock detection impact
Measurement value                DEADLOK=1000   DEADLOK=100   Delta %
ITR (commit/sec)                       345.07        343.62     -0.42
C2 CPU (msec)                           2,753         2,756     +0.11
Total locks/latches per commit       0.001072      0.001086     +1.30


Chapter 6. Utility performance


In this chapter we describe the DB2 V7 performance related enhancements to the following
DB2 utilities and functions:
򐂰 Dynamic utility jobs: You can specify a pattern of DB2 objects to run utilities against and
to dynamically allocate unload data sets, work data sets, sort data sets, and so on.
򐂰 LOAD partition parallelism: This makes it possible to load partitioned table spaces in
parallel with only one job instead of running multiple jobs per partitions.
򐂰 Online LOAD RESUME: This allows you to load data into tables with a utility that act like
a normal INSERT program.
򐂰 UNLOAD: This is a new utility to unload data from tables without any influence on
business applications.
򐂰 Online REORG enhancements: The SWITCH phase has been improved (fast switch), and
the BUILD2 phase now supports parallelism for building NPIs. A new drain specification
statement has been added for overriding the utility locking time-out value specified at
subsystem level.
򐂰 Statistics history: It is now possible to store all generations of statistics information and
not only the current information from RUNSTATS.
򐂰 COPYTOCOPY: This new utility gives you the opportunity to make additional full or
incremental image copies.
򐂰 Cross Loader: This provides a new functionality to move data across multiple databases,
and even across multiple platforms.
򐂰 Modify Recovery: This is a performance enhancement provided through maintenance in
DB2 V5, V6, and V7.

For more information on DB2 utilities, you can refer to the upcoming redbook DB2 for z/OS
and OS/390 Version 7 Using the Utilities Suite, SG24-6289, planned to be available by
3Q2001.



6.1 Dynamic utility jobs
Development and maintenance of utility jobs can be very time consuming and error prone.
Users primarily face three challenges:
1. DD statements must be provided for all data sets.
2. Within each DD statement, space, disposition, and other parameters must reflect the
current situation.
3. Utility statements must explicitly list all DB2 objects to be processed, and these objects
must be named accurately.

As new objects are constantly created, others are deleted, and since the sizes of most
objects vary over time, it is particularly difficult to keep up with the changes.

DB2 V7 addresses these three challenges by introducing the following three new utility
control statements:
򐂰 LISTDEF
򐂰 TEMPLATE
򐂰 OPTIONS

The benefits of these new statements are evident. Development and maintenance of jobs has
become easier. Also, as changes are reflected automatically, less user activity is required,
and thus the possibility for errors is reduced. As a consequence, the total cost of operations
can be minimized.

6.1.1 LISTDEF
LISTDEF is used to define a dynamic list of DB2 objects, namely table spaces, index spaces,
or their partitions. The defined list can be used by one or more utilities in the job step. The
defined list can be any combination of included or excluded specific names, name patterns,
and even other lists. It is even possible to specify that all objects that are referentially related
to the object expression (PRIMARY KEY <--> FOREIGN KEY) are to be included in the list.

Figure 6-1 shows the differences in JCL using LISTDEFs. For more information about
LISTDEF, see DB2 UDB for OS/390 and z/OS Utility Guide and Reference, SC26-9945.

The figure shows database DBX containing table spaces TS0, PTS1, and TS2, and database
DBY containing table spaces TS3 and TS4; several of the table spaces are related by RI.

//SYSIN DD *
V6 RECOVER TABLESPACE DBX.PTS1
TABLESPACE DBX.TS2
TABLESPACE DBY.TS3
TABLESPACE DBY.TS4
TOLOGPOINT X'xxxxxxxxxxxx'
/*
//SYSIN DD *
V7 LISTDEF RECLIST INCLUDE TABLESPACE DBY.T* RI
RECOVER LIST RECLIST
TOLOGPOINT X'xxxxxxxxxxxx'
/*

Figure 6-1 Sample JCL from DB2 V6 and DB2 V7 using LISTDEF



Enhanced DISPLAY UTILITY() command output
DSNU100I for stopped utilities and DSNU105I for active utilities include additional
information. See Figure 6-2.

DSNU105I =DB2A DSNUGDIS - USERID = PAOLOR5


MEMBER =
UTILID = DSNTEX
PROCESSING UTILITY STATEMENT 1
UTILITY = RUNSTATS
PHASE = UTILINIT COUNT = 0
NUMBER OF OBJECTS IN LIST = 40
LAST OBJECT STARTED = 11
STATUS = ACTIVE
DSN9022I =DB2A DSNUGCCC '-DIS UTIL' NORMAL COMPLETION
***

Figure 6-2 DISPLAY UTILITY output

You can see the number of objects included in the list and which object in the list is
currently being processed.

6.1.2 TEMPLATE
With TEMPLATE you can define a dynamic list of data set allocations. You can create a
skeleton or a pattern for the names of the data sets to allocate. The list of these data sets to
allocate is dynamic. This list is generated each time the template is used by an executing
utility. Therefore a template automatically reflects the data set allocations currently needed.

Figure 6-3 shows the differences in JCL when using templates. For more information, see DB2
UDB for OS/390 and z/OS Utility Guide and Reference, SC26-9945.

V6:
//COPYPRI1 DD DSN=DBX.PTS1.P00001.P.D2000166,
//            DISP=...,UNIT=...,SPACE=...
//COPYPRI2 DD DSN=DBX.PTS1.P00002.P.D2000166,
//            DISP=...,UNIT=...,SPACE=...
//COPYPRI3 DD DSN=DBX.PTS1.P00003.P.D2000166,
//            DISP=...,UNIT=...,SPACE=...
//COPYSEC1 DD DSN=DBX.PTS1.P00001.B.D2000166,
//            DISP=...,UNIT=...,SPACE=...
//COPYSEC2 DD DSN=DBX.PTS1.P00002.B.D2000166,
//            DISP=...,UNIT=...,SPACE=...
//COPYSEC3 DD DSN=DBX.PTS1.P00003.B.D2000166,
//            DISP=...,UNIT=...,SPACE=...
//SYSIN DD *
 COPY TABLESPACE DBX.PTS1 DSNUM 1 COPYDDN (COPYPRI1,COPYSEC1)
 COPY TABLESPACE DBX.PTS1 DSNUM 2 COPYDDN (COPYPRI2,COPYSEC2)
 COPY TABLESPACE DBX.PTS1 DSNUM 3 COPYDDN (COPYPRI3,COPYSEC3)
/*

V7 (none of the DD statements above):
//SYSIN DD *
 TEMPLATE TCOPYPRI DSN ( &DB..&TS..P&PART..&PRIBAC..D&JDATE. )
 TEMPLATE TCOPYSEC DSN ( &DB..&TS..P&PART..&SECBAC..D&JDATE. )
 COPY TABLESPACE DBX.PTS1 DSNUM 1 COPYDDN ( TCOPYPRI,TCOPYSEC )
 COPY TABLESPACE DBX.PTS1 DSNUM 2 COPYDDN ( TCOPYPRI,TCOPYSEC )
 COPY TABLESPACE DBX.PTS1 DSNUM 3 COPYDDN ( TCOPYPRI,TCOPYSEC )
/*

Figure 6-3 Sample JCL from DB2 V6 and DB2 V7 using TEMPLATE



6.1.3 OPTIONS
OPTIONS gives you the opportunity to validate and control your LISTDEF and TEMPLATE
statements. It allows you to specify a library for your LISTDEF and TEMPLATE definitions,
and you can control return codes and error events.
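As a minimal sketch (the library and DD names are made up), OPTIONS can point LISTDEF and
TEMPLATE resolution at a control statement library and relax the handling of errors on
individual list items; COPYLST and COPYTMP are assumed to be defined by LISTDEF and
TEMPLATE statements stored in the library referenced by the UTLIB DD:

//UTLIB  DD DSN=PAOLOR5.UTIL.CNTL,DISP=SHR
//SYSIN  DD *
  OPTIONS LISTDEFDD UTLIB TEMPLATEDD UTLIB EVENT (ITEMERROR,SKIP)
  COPY LIST COPYLST SHRLEVEL REFERENCE COPYDDN ( COPYTMP )
/*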

The PREVIEW option parses all utility control statements and checks them for syntax errors, but
normal utility execution does not take place. If the syntax is valid, all LISTDEF lists and
TEMPLATE data set names that appear in SYSIN are expanded. See Figure 6-4.

DSNU000I DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = COPYTS


DSNU050I DSNUGUTC - LISTDEF COPYLST INCLUDE TABLESPACE DBLP0002.*
DSNU1035I DSNUILDR - LISTDEF STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE COPYTMP DSN &USERID..FIC.&DB..&TS..D&MONTH.&DAY. UNIT
SYSDA DISP(NEW,CATLG,CATLG)
SPACE(50,25) CYL
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - OPTIONS PREVIEW EVENT(ITEMERROR,HALT)
DSNU1000I DSNUGUTC - PROCESSING CONTROL STATEMENTS IN PREVIEW MODE
DSNU1035I DSNUILDR - OPTIONS STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - COPY LIST COPYLST SHRLEVEL REFERENCE FULL YES COPYDDN(COPYTMP)
DSNU1020I -DB2G DSNUILSA - EXPANDING LISTDEF COPYLST
DSNU1021I -DB2G DSNUILSA - PROCESSING INCLUDE CLAUSE TABLESPACE DBLP0002.*
DSNU1022I -DB2G DSNUILSA - CLAUSE IDENTIFIES 6 OBJECTS
DSNU1023I -DB2G DSNUILSA - LISTDEF COPYLST CONTAINS 6 OBJECTS
DSNU1010I DSNUGPVV - LISTDEF COPYLST EXPANDS TO THE FOLLOWING OBJECTS:
LISTDEF COPYLST -- 00000006 OBJECTS
INCLUDE TABLESPACE DBLP0002.TSLP2001
INCLUDE TABLESPACE DBLP0002.TSLP2002
INCLUDE TABLESPACE DBLP0002.TSLP2003
INCLUDE TABLESPACE DBLP0002.TSLP2004
INCLUDE TABLESPACE DBLP0002.TSLP2005
INCLUDE TABLESPACE DBLP0002.TSLP2006
DSNU1009I DSNUGPVV - TEMPLATE COPYTMP DSN=PAOLOR5.FIC.DBLP0002.TSLP2001.D0427
DSNU1007I DSNUGPVV - DATE/TIME VALUES MAY CHANGE BEFORE EXECUTION
DSNU1009I DSNUGPVV - TEMPLATE COPYTMP DSN=PAOLOR5.FIC.DBLP0002.TSLP2002.D0427
DSNU1007I DSNUGPVV - DATE/TIME VALUES MAY CHANGE BEFORE EXECUTION
DSNU1009I DSNUGPVV - TEMPLATE COPYTMP DSN=PAOLOR5.FIC.DBLP0002.TSLP2003.D0427
DSNU1007I DSNUGPVV - DATE/TIME VALUES MAY CHANGE BEFORE EXECUTION
DSNU1009I DSNUGPVV - TEMPLATE COPYTMP DSN=PAOLOR5.FIC.DBLP0002.TSLP2004.D0427
DSNU1007I DSNUGPVV - DATE/TIME VALUES MAY CHANGE BEFORE EXECUTION
DSNU1009I DSNUGPVV - TEMPLATE COPYTMP DSN=PAOLOR5.FIC.DBLP0002.TSLP2005.D0427
DSNU1007I DSNUGPVV - DATE/TIME VALUES MAY CHANGE BEFORE EXECUTION
DSNU1009I DSNUGPVV - TEMPLATE COPYTMP DSN=PAOLOR5.FIC.DBLP0002.TSLP2006.D0427
DSNU1007I DSNUGPVV - DATE/TIME VALUES MAY CHANGE BEFORE EXECUTION
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=4

Figure 6-4 Output from PREVIEW

If you have added or changed your LISTDEF or TEMPLATE statements, we recommend that
you use the PREVIEW option before running utilities.

For more information about OPTIONS, see DB2 UDB for OS/390 and z/OS Utility Guide and
Reference, SC26-9945.



6.1.4 RESTART processing
Utility processing checkpoints two new categories of information for the purposes of restart:
򐂰 LIST information
The enumerated list is saved during the UTILINIT phase of each utility. Should the utility
fail and be restarted, the checkpointed list is used from the in-process utility, not
re-expanded from the LISTDEF control statement. Subsequent utilities in the same job
step will re-expand the list from the LISTDEF definition.
򐂰 TEMPLATE information
Data sets will continue to be checkpointed for restart processing. If symbolic variables
were specified for the DSN option of the TEMPLATE statement, the execution-time
values are saved for the duration of the utility statement and will be used during restart.
The symbolic variable values that are saved are the job variables and the date and time
variables. Subsequent utilities in the same job step will not use the saved variable values;
values that represent the restart job, not the original job, will be used.

Verify that the PTF for APAR PQ45268 (open at the time of writing) is applied.

Restart from the last commit point (RESTART or RESTART(CURRENT)) and restart from the
beginning of the current phase (RESTART(PHASE)) are supported. However, current utility
specific restart restrictions apply. For example, the COPY utility cannot be restarted with
RESTART(PHASE).

6.1.5 Performance considerations


Verify that the PTF for APAR PQ46577 is applied to prevent unnecessary overhead at
initialization time when using LISTDEF.

Using LISTDEF has obvious benefits: utilities automatically run against both existing and
future DB2 objects, and you save the time otherwise spent maintaining utility jobs.
We recommend that you use LISTDEF to specify the DB2 objects that you want to
include in your utility jobs.

To get the most benefit from LISTDEF, you need a good naming convention for your DB2
objects, so that the patterns you specify also cover objects that are created in the future.

The performance degradation of using TEMPLATE on simple table spaces is less than 7% in
elapsed time, so the overhead can be considered negligible compared with the benefit you get
from using TEMPLATE. You no longer need to change the space allocation of data sets used
for utilities, and the problems with running out of space (abend D37) during utilities are
eliminated. However, you must ensure that enough free space is available to allocate the data
sets needed for the execution of the utilities.

6.2 LOAD partition parallelism


Prior to DB2 V7, you had to run a dedicated LOAD job per partition, and you could experience
contention on the non-partitioning indexes (NPIs). DB2 V7 addresses this issue and allows you
to load partitions in parallel within the same LOAD job. The contention on the NPIs is
eliminated, because a single task builds each NPI.



In DB2 V5 or V6, you had to run one LOAD job per partition in parallel to minimize the elapsed
time, but you could have contention problems on the NPIs. See Figure 6-5.

[Figure: each of the four parallel LOAD jobs reads its own SYSRECn input, performs RELOAD,
sorts the key/RID pairs (SYSUT1, SORTOUT, SORTWKnn), and builds the partitioning index (PI)
for its partition; all four jobs then contend while building the non-partitioning indexes NPI1
and NPI2.]

Figure 6-5 Parallel LOAD jobs per partition

To avoid the contention problem, you had to drop all the NPIs before running the LOAD jobs,
and afterwards re-create and REBUILD all the NPIs.

6.2.1 LOAD partitions in parallel


When partitions can be loaded in parallel within a single job, the contention problem on the
NPIs is eliminated. Because a single task will be building each NPI, you no longer need to
drop and recreate any of the NPIs.

You can now LOAD data into a partitioned table in parallel without building indexes in parallel.
See Figure 6-6.



[Figure: a single LOAD job runs one RELOAD subtask per partition (inputs SYSREC1 through
SYSREC4); the extracted key/RID pairs are passed to a single sort (SYSUT1, SORTOUT,
SORTWKnn), and one BUILD task then builds the partitioning index (PI) and the
non-partitioning indexes NPI1 and NPI2 serially.]

Figure 6-6 Partition parallel LOAD without building indexes in parallel

Indexes will not be built in parallel when loading data into a partitioned table, if any one of the
following conditions is true:
򐂰 There is only one index to be built.
򐂰 In the LOAD job, the keyword SORTKEYS is not specified.
򐂰 In the LOAD job, the keyword SORTKEYS is specified with an estimated value of 0.
The default estimated value for SORTKEYS is 0.

For loading partitions in parallel, the optimal approach would be to start one subtask for each
partition to be loaded. However, it might not be possible to start the optimal number of
subtasks. This constraint could arise from lack of sufficient virtual memory, insufficient
available DB2 threads, or insufficient processors (CPUs) to handle more tasks. If fewer
subtasks are started than there are partitions to be loaded, one or more subtasks will load
more than one partition. When a subtask finishes loading a partition, and there are partitions
not yet being loaded, DB2 starts the next available partition.

We recommend that you check the value of IDBACK and CTHREAD in ZPARM, so you can
start the necessary number of subtasks.

Enhanced DISPLAY UTILITY() command output


When the DISPLAY UTILITY() command is issued during the RELOAD phase, it will display
the total number of records that have been loaded into all partitions at the time the command
is issued. The count will be 0 from the time the RELOAD phase starts until the first load
subtask begins to load records into its first partition.



Following a DSNU105I response to the DISPLAY UTILITY() command (see Figure 6-7), one
DSNU111I message will be issued for each subtask. The message will give the activity that
the subtask is performing at the time the DISPLAY UTILITY() command is issued and a count
of the number of records processed in that activity.

DSNU105I =DB2A DSNUGDIS - USERID = PAOLOR5


MEMBER =
UTILID = LOADTS
PROCESSING UTILITY STATEMENT 1
UTILITY = LOAD
PHASE = RELOAD COUNT = 147456
NUMBER OF OBJECTS IN LIST = 1
LAST OBJECT STARTED = 1
STATUS = ACTIVE
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 18202
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 18202
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 24576
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 24576
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 18202
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 18202
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 18202
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 18202
DSNU111I =DB2A DSNUGDIS - SUBPHASE = RELOAD COUNT = 18202
DSN9022I =DB2A DSNUGCCC '-DIS UTIL' NORMAL COMPLETION
***

Figure 6-7 DISPLAY UTILITY output

Restriction: In a data sharing group, if the utility is running on a member other than the one
to which the DISPLAY UTILITY() command is directed, the status represents the status of the
utility at its last checkpoint. In addition, the DSNU111I messages will not be issued.

You can find information about how many subtasks the utility has started by looking at the
SYSPRINT output from the LOAD utility, where you will find the following messages:

DSNU364I DSNURPPL - PARTITIONS WILL BE LOADED IN PARALLEL, NUMBER OF TASKS = 9


DSNU397I DSNURPPL - NUMBER OF TASKS CONSTRAINED BY CPUS

When loading data into a partitioned table, you can benefit from running the LOAD utility in
parallel, and you only have to run and maintain one job, instead of running and maintaining
multiple jobs.



6.2.2 Building indexes in parallel
You can LOAD data into a partitioned table in parallel with indexes built in parallel, too. See
Figure 6-8.

[Figure: a single LOAD job runs one RELOAD subtask per partition (inputs SYSREC1 through
SYSREC4); the extracted keys are passed to SORT and SORTBLD subtask pairs with their own
sort work data sets (SW01WKnn, SW02WKnn, SW03WKnn), which build the partitioning
index (PI) and the non-partitioning indexes NPI1 and NPI2 in parallel.]

Figure 6-8 Partition parallel LOAD with indexes built in parallel

The LOAD utility builds indexes in parallel, if all of the following conditions are true:
򐂰 There is more than one index to be built.
򐂰 In the LOAD job, the keyword SORTKEYS is specified, with a non-zero estimate of the
number of keys.
򐂰 You either allow the utility to dynamically allocate the data sets needed by SORT, or
provide the necessary data sets yourself.
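A hedged sketch that satisfies these conditions follows (the table and DD names are made up;
the SORTKEYS value is only a rough estimate of the total number of keys to be sorted, and
SORTDEVT/SORTNUM let the sort work data sets be allocated dynamically):

//SYSIN DD *
  LOAD DATA SORTKEYS 20000000 SORTDEVT SYSDA SORTNUM 8
    INTO TABLE PAOLOR5.TBORDER PART 1 INDDN SYSREC01
    INTO TABLE PAOLOR5.TBORDER PART 2 INDDN SYSREC02
    INTO TABLE PAOLOR5.TBORDER PART 3 INDDN SYSREC03
    INTO TABLE PAOLOR5.TBORDER PART 4 INDDN SYSREC04
/*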

For loading partitions in parallel, the optimal approach would be to start one subtask for each
partition to be loaded and two for each index to be built: one subtask sorts the extracted keys while
the other subtask builds the index. However, it might not be possible to start the optimal
number of subtasks. This constraint could arise from lack of sufficient virtual memory,
insufficient available DB2 threads, or insufficient processors (CPUs) to handle more tasks. If
fewer subtasks are started than there are partitions to be loaded, one or more subtasks will
load more than one partition. When a subtask finishes loading a partition, and there are
partitions not yet being loaded, DB2 starts the next available partition.

We recommend that you check the value of IDBACK and CTHREAD in ZPARM, so you can
start the necessary number of subtasks.



You can find information about how many subtasks the utility has started, and whether the
indexes will be built in parallel, by looking at the SYSPRINT output from the LOAD utility. Here
you will find the following messages:

DSNU395I DSNURPIB - INDEXES WILL BE BUILT IN PARALLEL, NUMBER OF TASKS = 4


DSNU364I DSNURPPL - PARTITIONS WILL BE LOADED IN PARALLEL, NUMBER OF TASKS
= 7
DSNU397I DSNURPPL - NUMBER OF TASKS CONSTRAINED BY CPUS

When loading data into a partitioned table, you can benefit from running the LOAD utility in
parallel and from building the indexes in parallel. You only have to run and maintain one job,
instead of running and maintaining multiple jobs, and you do not have to drop and re-create
NPIs to avoid contention problems.

6.2.3 Performance measurements


All measurements were performed using an IBM G6 12-way processor and ESS disks. The
following table is used:
򐂰 26 columns with a total record length of 119 bytes
򐂰 50 million rows
򐂰 20 partitions; each partition has 2.5 million rows

Note: "Without SORTKEYS" means that the keyword SORTKEYS was not specified, or that it was
specified with an estimated value of zero. "With SORTKEYS" means that the keyword
SORTKEYS was specified with an estimated value greater than zero.

LOAD without SORTKEYS


Table 6-1 summarizes the comparison of running the LOAD utility for the whole table with the
LOAD utility running in parallel for all partitions, but without building NPIs in parallel, because
SORTKEYS was not specified.
Table 6-1 LOAD in parallel without SORTKEYS

No. of       LOAD whole table          Parallel LOAD partition     Delta %
indexes      CPU time   Elapsed time   CPU time   Elapsed time     CPU time   Elapsed time
1 index      599        1087           735        734              +22.7%     -32.5%
2 indexes    1052       1785           1106       1247             +5.1%      -30.1%
3 indexes    1411       2187           1472       1730             +4.3%      -20.9%
6 indexes    2459       3650           2573       3480             +4.6%      -4.7%

Note: All times are in seconds.

When loading partitions in parallel without building NPIs in parallel (without specifying
SORTKEYS), if there is only one index, the CPU time degrades by up to 22.7% and the elapsed
time improves by up to 32.5%. When there is more than one index on the partitioned table, the
CPU overhead is lower, even though managing the parallel subtasks for loading partitions in
parallel contributes to an increase in CPU time.



LOAD with SORTKEYS
Table 6-2 summarizes the comparison of running the LOAD utility for the whole table with
running the LOAD utility in parallel for all partitions and building the NPIs in parallel; in both
cases SORTKEYS is specified with an estimated value greater than 0.
Table 6-2 LOAD in parallel with SORTKEYS

No. of       LOAD whole table          Parallel LOAD partition     Delta %
             with SORTKEYS             with SORTKEYS
indexes      CPU time   Elapsed time   CPU time   Elapsed time     CPU time   Elapsed time
1 index      656        1393           716        722              +9.1%      -48.2%
2 indexes    1036       1404           1175       982              +13.4%     -30.1%
3 indexes    1398       1737           1573       998              +12.5%     -42.5%
6 indexes    2536       2417           2835       1385             +11.8%     -42.7%

Note: All times are in seconds.

For tables without NPIs, the parallel partition LOAD with SORTKEYS improves the elapsed time
by up to 48.2%, compared to loading the whole table with SORTKEYS, with only 9.1% CPU time
degradation. Managing the parallel subtasks to load partitions in parallel contributes to an
increase in CPU time.

When loading partitions in parallel and building NPIs in parallel (with SORTKEYS specified),
the CPU time degrades by up to 13.4% and the elapsed time improves by up to 42.7%. Managing
the parallel subtasks to load partitions in parallel and build NPIs in parallel contributes to an
increase in CPU time.

Performance considerations
Table 6-3 summarizes the comparison of optimal performance for LOAD of a 20-partition
table with only one index and 2.5 million rows per partition without SORTKEYS.
Table 6-3 Optimal performance for LOAD without SORTKEYS

No. of       LOAD whole table          LOAD table with             Parallel LOAD partition
                                       20 jobs in parallel
indexes      CPU time   Elapsed time   CPU time   Elapsed time     CPU time   Elapsed time
1 index      599        1087           570        272              735        734

Note: All times are in seconds.

When loading partitions in parallel without specifying SORTKEYS, the CPU and elapsed times
increase compared to running one LOAD job per partition in parallel. The CPU time increases
because of the management of the parallel subtasks per partition. The elapsed time increases
because the indexes are built serially, and it grows with the number of indexes that have to be
built serially. In this situation you cannot benefit from building NPIs in parallel, because
there are no NPIs, only a partition index.



Table 6-4 summarizes the comparison of optimal performance for LOAD of a 20-partition
table with only one index and 2.5 million rows per partition with SORTKEYS specified.
Table 6-4 Optimal performance for LOAD with SORTKEYS

No. of       LOAD whole table          LOAD table with 20 jobs     Parallel LOAD partition
             with SORTKEYS             in parallel with SORTKEYS   with SORTKEYS
indexes      CPU time   Elapsed time   CPU time   Elapsed time     CPU time   Elapsed time
1 index      656        1393           661        256              716        722

Note: All times are in seconds.

The optimal performance for loading a 20-partition table with only one index is still obtained
by running 20 LOAD jobs in parallel without SORTKEYS. It is the number of NPIs that is
important. If there is only one index (the partition index), then there are no NPIs, and you
cannot benefit from building NPIs in parallel. However, if you are loading a table that has
NPIs, always use SORTKEYS to improve the performance of the LOAD utility and to benefit
from building the NPIs in parallel.

6.3 Online LOAD RESUME


The new online LOAD RESUME capability allows data to be loaded concurrently with user
transactions, with minimal impact, by introducing a new LOAD option: SHRLEVEL NONE or
SHRLEVEL CHANGE.

SHRLEVEL NONE specifies that LOAD operates as in previous releases with no user access
to the data during the LOAD.

SHRLEVEL CHANGE allows users to have read and write access to the data during the LOAD.
SHRLEVEL CHANGE is valid only in conjunction with LOAD RESUME YES and functionally
operates like SQL inserts. Index building, duplicate key checking, and referential constraint
checking are handled by SQL insert processing. Records that fail the insert are written to the
data set specified by the DISCARDDN option.
SHRLEVEL CHANGE is incompatible with:
򐂰 LOG NO
򐂰 ENFORCE NO
򐂰 KEEPDICTIONARY
򐂰 SORTKEYS
򐂰 STATISTICS
򐂰 COPYDDN
򐂰 RECOVERYDDN
򐂰 PREFORMAT
򐂰 REUSE
򐂰 PART REPLACE

Online LOAD RESUME does not put the table space in CHECK pending or COPY pending
status.



Online LOAD RESUME cannot run concurrently on the same target object with online
REORG SHRLEVEL CHANGE. But it can run concurrently on the same target object with the
following utilities:
򐂰 COPY SHRLEVEL CHANGE
򐂰 MERGECOPY
򐂰 RECOVER ERROR RANGE
򐂰 CONCURRENT COPY SHRLEVEL REFERENCE after copy is logically completed
򐂰 RUNSTATS SHRLEVEL CHANGE

Online LOAD RESUME YES cannot run against a table space that is in COPY pending,
CHECK pending or RECOVER pending status. Likewise, it cannot run against an index space
that is in CHECK pending or RECOVER pending status.

Consequences of INSERT processing


These are the consequences of INSERT processing:
򐂰 Triggers are activated.
򐂰 Referential integrity (RI) will be checked.

Data Manager rather than RDS INSERT


The difference here is that the inserts are performed at the Data Manager level, without going
through the full SQL INSERT statement processing in RDS: much of the SQL overhead is omitted.

Lock contention is avoided


DB2 manages the commit scope, dynamically monitoring the current locking situation.

Clustering
Whereas the classic LOAD RESUME stores the new records (in input sequence) after the
already existing records, the new online LOAD RESUME tries to insert the records into
available free pages as close to clustering order as possible; additional free pages are not
created. Because you probably insert a lot of rows, some of them are likely to be stored out of
clustering order (OFFPOS records).

REORG may be needed after the classic LOAD, as the clustering may not be preserved, but
also after the new online LOAD RESUME, as OFFPOS records may exist. A RUNSTATS with
SHRLEVEL CHANGE UPDATE SPACE followed by a conditional REORG is recommended.

Free space
Furthermore, the free space obtained either by PCTFREE or by FREEPAGE is consumed by the
inserts of online LOAD RESUME, in contrast to the classic LOAD, which loads the pages and
thereby re-establishes these types of free space.

As a consequence, a REORG may be needed after an Online LOAD RESUME.

6.3.1 Performance measurements


All measurements were performed using an IBM G6 3-way processor and ESS disks. The
following table was used:
򐂰 10-partition table
򐂰 1 partition index
򐂰 1 NPI



Table 6-5 summarizes the comparison of a user application program inserting rows in random
order with Online LOAD RESUME. See Appendix D, “Sample PL/1 program” on page 223 for more
information about the user application used for this test.
Table 6-5 Inserting 2000000 rows
               Inserting 2,000,000      Online LOAD RESUME       Delta %
               rows via program         2,000,000 rows
CPU time       241                      171                      -29.0%
Elapsed time   326                      256                      -21.5%

Note: All times are in seconds.

Using Online LOAD RESUME to insert rows into a table can be up to 21.5% faster in elapsed
time than using a user application, and even the CPU time is improved by up to 29%. When using
Online LOAD RESUME instead of a user application, you also avoid the cost of developing
and maintaining the application. Another advantage is that Online LOAD RESUME
automatically monitors the lock situation and changes the commit interval dynamically.

Table 6-6 summarizes the comparison of running 10 parallel jobs inserting 2,000,000 rows
with different buffer pool definitions for the buffer pool containing the NPI.
Table 6-6 10 parallel jobs inserting 2,000,000 rows with different buffer pool definitions
               10 jobs inserting        10 jobs inserting        Delta %
               2,000,000 rows           2,000,000 rows
               in parallel              in parallel
               BP3=5,000 pages          BP3=40,000 pages
CPU time       268                      261                      -2.6%
Elapsed time   548                      225                      -58.9%

Note: All times are in seconds.

The effect of tuning the buffer pool used for the NPI shows that the elapsed time can be
reduced by up to 58.9% and the CPU time by up to 2.6%.

Table 6-7 summarizes the comparison of running Online LOAD RESUME in parallel with
different buffer pool definitions for the buffer pool containing the NPIs.
Table 6-7 Running Online LOAD RESUME in parallel with different buffer pool definitions
               Parallel partition       Parallel partition       Delta %
               Online LOAD RESUME       Online LOAD RESUME
               into 10 partitions       into 10 partitions
               BP3=5,000 pages          BP3=40,000 pages
CPU time       196                      217                      +10.7%
Elapsed time   916                      229                      -75.0%

Note: All times are in seconds.

Running Online LOAD RESUME in parallel can also benefit from tuning the buffer pool used for
NPIs. This test shows that the elapsed time can be reduced by up to 75.0%, while in this
situation the CPU time increased by up to 10.7%.



6.3.2 Performance considerations
As the new online LOAD RESUME internally works like SQL INSERTs, this kind of LOAD is
slower than the classic LOAD. In some situations you may be willing to trade off performance
for availability, especially for data warehouse applications, where queries may run for several
hours.

The new online LOAD RESUME can take advantage of running in parallel, when loading
partitions.

The classic LOAD utility drains the table space, so no one can access the table space while it
is being loaded. Online LOAD RESUME uses claims instead, which allows others to access the
table space while it is being loaded.

Online LOAD RESUME SHRLEVEL CHANGE will place the new data in available free pages
as close to clustering order as possible; additional free pages are not created. It is
recommended to run RUNSTATS with SHRLEVEL CHANGE UPDATE SPACE followed by
conditional REORG after online LOAD RESUME YES.

6.3.3 Performance recommendations


Parallel Online LOAD RESUME and parallel insert have similar performance characteristics
in terms of I/O and elapsed time. Parallel Online LOAD RESUME or parallel insert compared
to sequential operation may be impacted by I/O for a non-partitioning index data set. In order
to benefit from parallel operation, both in Online LOAD RESUME and in insert, the following
performance tuning recommendations to minimize non-partitioning index I/O bottlenecks are
suggested:
򐂰 Bigger buffer pool for NPIs to improve buffer pool hit ratio
򐂰 Higher deferred write threshold to reduce the number of pages written
򐂰 Bigger checkpoint interval to reduce the number of pages written
򐂰 ESS PAV (Parallel Access Volume) to support multiple concurrent I/O to the same volume
containing non-partitioning index data set(s)
򐂰 Non-partitioning index pieces to support multiple concurrent non-partitioning index I/Os

6.4 UNLOAD utility


DB2 V6 introduced the option to unload data in external format with the REORG utility. DB2 V7
introduces a new UNLOAD utility with better performance and more functionality than REORG
UNLOAD EXTERNAL.

The new UNLOAD utility gives you more options when unloading data, and allows you to
unload data from copy data sets, including inline image copies from LOAD and REORG
TABLESPACE, and copies created by COPY, MERGECOPY, and DSN1COPY. See Figure 6-9.



[Figure: compared to REORG UNLOAD EXTERNAL (row selection, external format for numeric and
date/time data, NOPAD for VARCHAR, a single output data set), the new UNLOAD also unloads
from copy data sets (created by COPY, MERGECOPY, or DSN1COPY), adds FROM TABLE selection,
SHRLEVEL REFERENCE/CHANGE, sampling and limiting of rows, general conversion options
(encoding scheme, format), a field list for selecting, ordering, positioning, and formatting
fields, and partition parallelism with one output data set per partition.]

Figure 6-9 UNLOAD enhanced functionality

Support for exits


Delimited output and user exits are not supported. Unload does support EDITPROC and
FIELDPROC definitions.
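A hedged sketch of a basic UNLOAD invocation follows (the object and template names are made
up); the unloaded data and the generated LOAD statements are written to data sets allocated
through templates:

//SYSIN DD *
  TEMPLATE TUNLD  DSN ( &DB..&TS..P&PART..UNLD )  UNIT SYSDA
  TEMPLATE TPUNCH DSN ( &DB..&TS..PUNCH )         UNIT SYSDA
  UNLOAD TABLESPACE DBLP0002.TSLP2001
         UNLDDN TUNLD PUNCHDDN TPUNCH SHRLEVEL CHANGE
/*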

6.4.1 Performance measurements


Figure 6-10 shows the performance improvements of the new UNLOAD utility compared to
REORG UNLOAD EXTERNAL. The following tables are used during the tests:
򐂰 1 million row table with 50 columns, each defined as CHAR(4)
򐂰 1 million row table with 50 columns, each defined as DECIMAL
򐂰 1 million row table with 50 columns, each defined as INTEGER

Note: All measurements were performed using an IBM G5 3-way processor and with 4
channels on ESS.



[Chart: G5 CPU time in seconds for 1 million rows, comparing REORG UNLOAD EXTERNAL and the
new UNLOAD for tables with CHAR, DECIMAL, and INTEGER columns; the UNLOAD bars are
consistently lower.]

Figure 6-10 New UNLOAD utility versus REORG EXTERNAL UNLOAD

The measurements show a 15% to 18% reduction in CPU time. Because the UNLOAD
utility is I/O bound, there was no reduction in elapsed time.

6.4.2 Unloading partitions in parallel


When unloading a partitioned table space into individual data sets (1 per partition), the
UNLOAD utility automatically activates multiple subtasks and runs in partition/parallel unload
mode.

Performance measurements
Figure 6-11 shows the performance measurements of unloading partitions in parallel. The
following table is used during the tests:
򐂰 6 million rows per partition
򐂰 Table with 16 columns per row
򐂰 0.82 GB per partition and 1.03 GB when unloaded

The difference between the size of the partition and the unloaded data set is due to
VARCHAR columns.

Note: All measurements are performed using an IBM 7-way G6 processor, and this requires
at least 2 channels per CPU to avoid channel contention.



[Chart: elapsed time in seconds by number of partitions unloaded in parallel: 86 seconds for
1 partition, 86 for 4 partitions, 89 for 7 partitions, and 191 for 14 partitions.]

Figure 6-11 Unloading partition table in parallelism

The elapsed time for unloading 1 to 4 partitions is the same: there is no extra cost in elapsed
time for unloading the 3 extra partitions.

The elapsed time increases only slightly when unloading 7 partitions in parallel. The
increase in elapsed time is due to handling more subtasks and partitions. The elapsed time
for unloading 1 or 7 partitions in parallel is almost the same.

When unloading 14 partitions in parallel, the elapsed time more than doubles, because the
UNLOAD utility could start only 7 subtasks, each of which has to unload more than one
partition, and the overhead of handling more partitions than subtasks increases as well.

Note: The maximum number of subtasks is determined by the number of CPUs available on the
system where the UNLOAD job runs, and depends on how many threads DB2 allows you to start.
We recommend that you check the values of IDBACK and CTHREAD in ZPARM.

You can find information about how many subtasks the utility has started by looking at the
SYSPRINT output from the UNLOAD utility, where you will find the following messages:

DSNU1201I DSNUUNLD - PARTITIONS WILL BE UNLOADED IN PARALLEL, NUMBER OF TASKS = 3


DSNU397I DSNUUNLD - NUMBER OF TASKS CONSTRAINED BY CPUS

When you UNLOAD data from a partitioned table, you can benefit from running the UNLOAD
utility in parallel. Thus you only have to run and maintain one job, instead of running and
maintaining many jobs.



6.4.3 Unload from copy data sets
These are the advantages when unloading from copy data sets:
򐂰 You do not touch the user data in the table space, therefore you do not degrade the
performance of the SQL applications.
򐂰 You can unload the data even if the table space is stopped.
򐂰 For regularly transferring data into another DB2 subsystem, some installations use DSN1COPY
to refresh a table space in the target DB2 subsystem from a copy taken on the source DB2
subsystem. Using UNLOAD and LOAD instead, you can transfer exactly the rows and
columns you need. Moreover, the latter way is safer, as the DSN1COPY procedure
has to be maintained with care, for example, when columns are added to the source table.

Prerequisite: Existence of table space


The UNLOAD utility, in one UNLOAD statement, supports copy data sets as a source for
unloading data. The table space must be specified in the TABLESPACE option. See
Figure 6-12. This specified table space must still exist when the UNLOAD utility is running,
that is, the table space has not been dropped since the copy was taken.
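A control statement of roughly the following form (reconstructed from the messages in
Figure 6-12, with OR2DATA and OR2PUNCH assumed to be templates defined earlier in the job)
unloads the rows from an existing image copy data set:

//SYSIN DD *
  UNLOAD TABLESPACE DB246129.TS612903
         FROMCOPY PAOLOR5.DB246129.TS612903.IC013
         UNLDDN OR2DATA PUNCHDDN OR2PUNCH
/*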

DSNU050I DSNUGUTC - UNLOAD TABLESPACE DB246129.TS612903


DSNU1252I =DB2A DSNUULIA - PARTITION PARALLELISM IS NOT ACTIVATED AND THE PARTITION VARIABLE IN THE
TEMPLATE DSN WAS REPLACED BY '00000' FOR TABLESPACE DB246129.TS612903
DSNU650I =DB2A DSNUUGMS - FROMCOPY PAOLOR5.DB246129.TS612903.IC013 UNLDDN OR2DATA PUNCHDDN OR2PUNCH
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=OR2DATA
DDNAME=SYS00002
DSN=PAOLOR5.DATA.TS612903.P00000.D1110.T2036
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=OR2PUNCH
DDNAME=SYS00003
DSN=PAOLOR5.PUNCH.TS612903.P00000.D1110.T2036
DSNU253I DSNUUNLD - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=800 FOR TABLE PAOLOR7.OR2
DSNU252I DSNUUNLD - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=800 FOR TABLESPACE DB246129.TS612903
DSNU250I DSNUUNLD - UNLOAD PHASE COMPLETE, ELAPSED TIME=00:00:00
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=4

Figure 6-12 UNLOAD utility

In the output from the UNLOAD utility, you can see that parallelism is not activated when you
unload data from an image copy data set.

6.5 Online REORG enhancements


The DB2 V7 enhancements to online REORG for improved performance and high availability
introduce a faster SWITCH phase, parallelism in the BUILD2 phase for updating NPIs, and new
DRAIN and RETRY parameters.

When running the Online REORG TABLESPACE utility, data is unavailable from the last part
of the LOG phase and during the SWITCH and BUILD2 phases. See Figure 6-13.



[Figure: read/write access is tolerated during the Unload, Reload, Sort, Build, and Log
phases; data is unavailable during the last part of the Log phase and during the Switch and
Build2 phases.]

Figure 6-13 Data unavailability when running REORG SHRLEVEL CHANGE

With a faster SWITCH phase and a parallel BUILD2 phase for updating NPIs, the time during
which data is unavailable when running the online REORG TABLESPACE utility has been minimized.

6.5.1 Fast switch


The SWITCH phase prior to DB2 V7 could take a long time, so applications may time out. Let
us look at how online REORG works prior to DB2 V7, concentrating on the data sets.

In the UTILINIT phase, shadow objects are created for the table space (or its partitions) and
for the index spaces (or their partitions). Strictly speaking, this is not quite accurate, as
these shadow objects are not reflected in the catalog. What it means is that new shadow data
sets are created, one for each data set of the original objects (or their partitions). The data
set names of these shadow data sets differ from the original data set names only in their fifth
qualifier, also referred to as the instance node, which is ‘S0001’ rather than ‘I0001’.

In the SWITCH phase, DB2 renames the original data sets and the shadow data sets. More
specifically, the instance node of the original data sets, ‘I0001’ is renamed to a temporary
one, ‘T0001’; afterwards, the fifth qualifier of the shadow data set, ‘S0001’, is renamed to
‘I0001’.

In the UTILTERM phase, the data sets with ‘T0001’ are deleted, as they are not needed any
more.

Notes:
򐂰 This description applies to DB2-managed table spaces.
򐂰 During the last log iteration and during the BUILD2 phase, SQL accesses are also limited.

Some applications design DB2 tables with several hundred indexes. Others use partitioned
table spaces with some hundred partitions. In both cases, several hundred data sets (both
cluster and data component of the VSAM data set) must be renamed in the SWITCH phase.
For the renaming, DB2 invokes Access Method Services (AMS). In turn, AMS invokes MVS
supervisor calls (SVCs) which result in further cascading SVCs, for example, for checking
whether the new name exists already, and whether the rename was successful.



Therefore, the elapsed time for the SWITCH phase can become too long. As the applications
cannot access the table space during that time, they may time out because of the online
REORG, which is exactly what you are trying to avoid.

DB2 V7 gives an alternative to speed up the SWITCH phase, thus making the phase less
intrusive to data access by others:
򐂰 Invocation of AMS to rename the data sets is eliminated.
򐂰 An optional process, the FASTSWITCH, takes place:
a. In the UTILINIT phase, DB2 creates shadow data sets. The fifth qualifier of these data
sets is now ‘J0001’.
b. In the SWITCH phase, DB2 updates the catalog and the object descriptor (OBD) from
‘I’ to ‘J’ to indicate that the shadow object has become the active or valid data base
object. During that time, applications cannot access the table space.
c. After the SWITCH phase, the applications can resume their processing, now on the
new ‘J0001’ data sets.
d. In the UTILTERM phase, DB2 deletes the obsolete original data sets with the instance
node ‘I0001’.

Notes:
򐂰 This description applies to DB2-managed table spaces. If the data sets are SMS
controlled you need to change the ACS routines to accommodate the new DB2 naming
standard and take advantage of the enhancement.

As a result, the SWITCH phase is much shorter; therefore, the applications are less likely to
time out.

As the data set names of a table space now vary, DB2 queries the OBD of the object being
reorganized in the UTILINIT phase in order to check whether the current instance node is
‘I0001’ or ‘J0001’. If the current data base object has a ‘J0001’ instance node, then DB2 will
create a shadow object with an ‘I0001’ instance node. Thus, during the SWITCH phase, the
OBD and the DB2 catalog are updated, registering the ‘I0001’ object as the active data base
object. The UTILTERM phase then deletes the data base objects with the ‘J0001’ instance
node, the old originals.

Specifying whether Fast SWITCH is the default


You can choose at installation level whether or not you want Fast SWITCH as your default
processing option for online REORGs. The default value for the new ZPARM parameter,
‘&SPRMURNM’ in the macro DSN6SPRC, is ‘1’ when DB2 V7 is installed. Thus the default is
to exploit the Fast SWITCH feature. At installation time, you can reset this ZPARM to ‘0’,
indicating that the AMS rename is the default processing for online REORGs.
You can always override the ZPARM default at the utility level by using the new
FASTSWITCH NO/YES keyword of the REORG SHRLEVEL REFERENCE/CHANGE utility.

Note: For the DB2 catalog and directory, you can only use FASTSWITCH NO.
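As a minimal sketch (object and template names are made up; the usual REORG data sets such
as the unload data set and sort work data sets are assumed to be provided through DD
statements or templates), FASTSWITCH is simply added to the REORG statement:

//SYSIN DD *
  REORG TABLESPACE DBX.PTS1
        SHRLEVEL REFERENCE FASTSWITCH YES
        COPYDDN ( TCOPY )
/*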

Performance measurements
All measurements were performed using an IBM G6 3-way processor and ESS disks. The
following table space is used:
򐂰 A 254-partition table space, STOGROUP defined
򐂰 1 partition index



Table 6-8 summarizes the comparison of running the Online REORG utility using
FASTSWITCH NO with the Online REORG utility using FASTSWITCH YES.
Table 6-8 Online REORG using FASTSWITCH

           REORG FASTSWITCH NO         REORG FASTSWITCH YES        Delta %
           SHRLEVEL CHANGE             SHRLEVEL CHANGE
           CPU time   Elapsed time     CPU time   Elapsed time     CPU time   Elapsed time
Switch     1          349              1          28               0%         -92%

Note: All times are in seconds.

There is a 92% reduction in elapsed time and no change in the CPU time when using the new
FASTSWITCH functionality. The elapsed time for switching a data set is fairly constant, about
0.05 seconds per data set. In this test case, the total number of data sets switched was 508
(254 table space partitions and 254 index partitions), and that was done in 28 seconds.

We recommend that you use the new FASTSWITCH functionality, so that you can benefit from
the faster SWITCH phase without having to change your existing online REORG jobs.

6.5.2 BUILD2 parallelism for NPIs


DB2 V7 improves the elapsed time of the BUILD2 phase of the REORG
SHRLEVEL(CHANGE) or SHRLEVEL(REFERENCE) utility when reorganizing a single
partition or a range of partitions. The updates of NPIs are now done using parallelism. With
parallelism, one or more subtasks are dispatched to perform the updates. Each subtask will
be assigned an original NPI and the shadow copy of logical partition. The subtasks will
update the original logical partition. If the table space being reorganized has five NPIs, then
possibly five subtasks will be started. When the REORG utility executes to operate on a range
of partitions, one subtask will perform updates for all logical partitions on the same NPI.

The BUILD2 phase applies when you do a REORG:


򐂰 With SHRLEVEL CHANGE or SHRLEVEL REFERENCE
򐂰 Only on a subset of the partitions of a partitioned table space, more specifically, on one
partition only or on a range of partitions only

The BUILD2 phase has been improved by updating the NPIs using parallel subtasks. That
gives you an availability improvement and better performance through reduced elapsed time.

The number of parallel subtasks is governed by:


򐂰 Number of CPUs
򐂰 Number of available DB2 threads
򐂰 Number of NPIs

Optimally, if you have one subtask for each NPI, the elapsed time of the BUILD2 phase could
be the time it takes to process the NPI with the most RIDs. It might not be possible to start the
optimal number of subtasks. This constraint could arise from lack of sufficient virtual memory,
insufficient available DB2 threads, or insufficient processors (CPUs) to handle more tasks. If
fewer subtasks are started than there are NPIs to update, one or more subtasks will update
more than one NPI. When a subtask finishes with an NPI, and there are NPIs not yet being
updated, it starts on the next available NPI.



Performance measurements
The BUILD2 phase is only activated when running online REORG with SHRLEVEL REFERENCE or
CHANGE against one partition or a range of partitions of a table space that has
non-partitioning indexes. Therefore, there are no measurements for REORG with SHRLEVEL NONE
or for REORG on all partitions of a partitioned table space. During the measurements no
updates were going on, the intent being to show the best possible availability enhancement
provided by the change in this phase of the utility. It is also not meaningful to show the
statistics for the overall online REORG execution, which is always conditioned by the
concurrent activity.

Measurements with one partition index and one NPI


All measurements were performed using an IBM G6 3-way processor and ESS disks. The
following table is used:
򐂰 26 columns with a total record length of 119 bytes
򐂰 20 million rows
򐂰 20 partitions; each partition has 1 million rows
򐂰 1 partition index
򐂰 1 NPI

Measurements of reorganization of one partition


Table 6-9 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC and inline COPY with the DB2
V7 REORG utility.
Table 6-9 BUILD2 — example 1

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     62         69              44         45              -29%       -35%

Note: All times are in seconds.


REORG with SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC, inline COPY

Running REORG against one partition of a partitioned table space with only one NPI shows the
improvement of the BUILD2 phase in a worst-case scenario. In this worst-case scenario, the
BUILD2 phase shows an improvement of 35% in elapsed time and of 29% in CPU time. This
improvement, obtained when there is no multitasking of NPIs, is due to other internal
enhancements in the handling of log writes.

Table 6-10 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL CHANGE, SORTKEYS, NOSYSREC and inline COPY with the DB2 V7
REORG utility.
Table 6-10 BUILD2 — example 2

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     63         69              44         45              -30%       -35%

Note: All times are in seconds.


REORG with SHRLEVEL CHANGE, SORTKEYS, NOSYSREC, inline COPY.



Running REORG against one partition of a partitioned table space with only one NPI shows the
improvement of the BUILD2 phase in a worst-case scenario. In this worst-case scenario, the
BUILD2 phase shows an improvement in elapsed time of 35%, and in CPU time of 30%.

Measurements of reorganizations of three partitions


Table 6-11 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC, and inline COPY with the DB2
V7 REORG utility, when reorganizing the first 3 partitions.
Table 6-11 BUILD2 — example 3

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     166        176             111        112             -33%       -36%

Note: All times are in seconds.


REORG 3 partitions with SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC, inline COPY.

Running REORG against a range of partitions (in this case, three partitions) of a partitioned
table space with only one NPI shows the improvement of the BUILD2 phase in a worst-case
scenario. In this worst-case scenario, the BUILD2 phase shows an improvement in elapsed
time of 36%, and the CPU time is improved by 33%.

Table 6-12 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL CHANGE, SORTKEYS, NOSYSREC, and inline COPY with the DB2 V7
REORG utility, when reorganizing the first 3 partitions.
Table 6-12 BUILD2 — example 4

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     168        177             110        111             -34%       -37%

Note: All times are in seconds.


REORG 3 partitions with SHRLEVEL CHANGE, SORTKEYS, NOSYSREC, inline COPY.

Running REORG against a range of partitions (three in this case) of a partitioned table space
with only one NPI shows the improvement of the BUILD2 phase in a worst-case scenario. In
this worst-case scenario, the BUILD2 phase shows an improvement in elapsed time of 37%,
and in CPU time of 34%.

Measurements with one partition index and five NPIs


All measurements were performed using an IBM G6 3-way processor and ESS disks. The
following table is used:
򐂰 26 columns with a total record length of 119 bytes
򐂰 20 million rows
򐂰 20 partitions; each partition has 1 million rows
򐂰 1 partition index
򐂰 5 NPIs



Table 6-13 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC, and inline COPY with the DB2
V7 REORG utility.
Table 6-13 BUILD2 — example 5

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     310        335             233        60              -25%       -82%

Note: All times are in seconds.


REORG with SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC, inline COPY.

The BUILD2 phase in DB2 V7 improves the elapsed time by 82% and the CPU time by 25%.
Managing the parallel subtasks for building the five NPIs has reduced the improvement in
CPU time.

Table 6-14 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL CHANGE, SORTKEYS, NOSYSREC, and inline COPY with the DB2 V7
REORG utility.
Table 6-14 BUILD2 — example 6

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     307        333             237        62              -23%       -81%

Note: All times are in seconds.


REORG with SHRLEVEL CHANGE, SORTKEYS, NOSYSREC, inline COPY.

The BUILD2 phase in DB2 V7 improves the elapsed time by 81% and the CPU time by 23%.

Measurements of reorganizations of three partitions


Table 6-15 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC, and inline COPY with the DB2
V7 REORG utility, when reorganizing the first three partitions.
Table 6-15 BUILD2 — example 7

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     822        872             581        150             -29%       -83%

Note: All times are in seconds.


REORG with SHRLEVEL REFERENCE, SORTKEYS, NOSYSREC, inline COPY.

The BUILD2 phase in DB2 V7 improves the elapsed time by 83% and the CPU time by 29%.

Table 6-16 summarizes the comparison of the BUILD2 phase for the DB2 V6 REORG utility
using SHRLEVEL CHANGE, SORTKEYS, NOSYSREC, and inline COPY with the DB2 V7
REORG utility, when reorganizing the first three partitions.



Table 6-16 BUILD2 — example 8

           DB2 V6                     DB2 V7                     Delta %
           CPU time   Elapsed time    CPU time   Elapsed time    CPU time   Elapsed time
BUILD2     825        874             587        152             -29%       -83%

Note: All times are in seconds.


REORG with SHRLEVEL CHANGE, SORTKEYS, NOSYSREC, inline COPY.

The BUILD2 phase in DB2 V7 improves the elapsed time by 83% and the CPU time by 29%.
Managing the parallel subtasks that build the five NPIs in parallel has reduced the
improvement in CPU time.

The improvement of the BUILD2 phase depends on the number of NPIs on the partitioned
table space that the online REORG utility runs against. The higher the number of NPIs, the
larger the improvement of the BUILD2 phase. These measurements show that the
BUILD2 phase has been improved by up to 83% in elapsed time, and by up to 29% in
CPU time, in the case of five NPIs.

If you have only one NPI on a partitioned table space, you can still expect an improvement in
the BUILD2 phase of up to 37% in elapsed time, and up to 34% in CPU time.

6.5.3 DRAIN and RETRY


When executing an online REORG utility, both for REORG INDEX and REORG TABLESPACE, a new
drain specification can be added. It overrides the utility locking time-out value specified at
subsystem level on the DSNTIPI installation panel through the resource time-out value,
IRLMRWT, and the utility time-out multiplier, UTIMOUT. This new specification offers
granularity at REORG invocation level rather than at DB2 subsystem level. A shorter wait
reduces the impact on applications and can be combined with retries to increase the chances
of completing the REORG execution.
򐂰 DRAIN_WAIT
– Specifies the maximum number of seconds that the utility will wait when draining.
򐂰 RETRY
– Specifies the maximum number of times that the drain will be attempted.
򐂰 RETRY_DELAY
– Works in conjunction with RETRY and specifies the minimum duration in seconds
between retries.
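A hedged sketch of an online REORG of a single partition that uses these parameters follows
(the object, mapping table, and template names are made up; the remaining REORG data sets are
assumed to be provided as usual):

//SYSIN DD *
  REORG TABLESPACE DBX.PTS1 PART 3
        SHRLEVEL CHANGE
        MAPPINGTABLE PAOLOR5.MAP_TBL
        COPYDDN ( TCOPY )
        DRAIN_WAIT 10 RETRY 6 RETRY_DELAY 60
/*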

6.6 RUNSTATS enhancements


DB2 V7 has a new parameter for the RUNSTATS utility that allows you to gather RUNSTATS
information into history catalog tables, so you can use it for analysis, for tools support, and
as input for optimizer hints.

Some DB2 catalog tables have been changed to enable RUNSTATS to record more
information about space.

A new keyword, FORCEROLLUP, allows you to force the roll up of statistics to the aggregate
tables, even if some partitions are empty.



6.6.1 Statistics history
You can set up a default value for STATISTICS HISTORY by setting the value of STATHIST in
ZPARM. The value of STATHIST can be:
򐂰 SPACE specifies that all inserts/updates made by DB2 to space related catalog statistics
are recorded in catalog history tables.
򐂰 ACCESSPATH specifies that all inserts/updates made by DB2 to ACCESSPATH related
catalog statistics are recorded in catalog history tables.
򐂰 ALL specifies that all inserts/updates made by DB2 in the catalog are recorded in catalog
history tables.
򐂰 NONE specifies that changes made in the catalog by DB2 are not recorded in catalog
history tables. This is the default for the STATHIST parameter.

You can override the default value for STATISTICS HISTORY at the RUNSTATS level by
specifying the keyword HISTORY and the level you want: ALL, ACCESSPATH, SPACE, or NONE.
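A minimal sketch of such a RUNSTATS invocation follows (the object name is made up, and the
keyword placement is indicative rather than definitive):

//SYSIN DD *
  RUNSTATS TABLESPACE DBLP0002.TSLP2001
           UPDATE ALL HISTORY ACCESSPATH
/*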

The STATISTICS HISTORY will save the data in the following new DB2 catalog tables:
򐂰 SYSIBM.SYSCOLDIST_HIST
򐂰 SYSIBM.SYSCOLUMNS_HIST
򐂰 SYSIBM.SYSINDEXES_HIST
򐂰 SYSIBM.SYSINDEXPART_HIST
򐂰 SYSIBM.SYSINDEXSTATS_HIST
򐂰 SYSIBM.SYSLOBSTATS_HIST
򐂰 SYSIBM.SYSTABLEPART_HIST
򐂰 SYSIBM.SYSTABLES_HIST
򐂰 SYSIBM.SYSTABSTATS_HIST

Note: If you are dropping an object, all the information in the history tables will be deleted too.

Performance measurements
The cost of collecting history information depends on the type of information you are
collecting and on how the objects are defined (for example, the number of indexes and columns).
However, the increase in elapsed time and CPU time is less than 5%.

Delete history statistics


The MODIFY STATISTICS online utility deletes unwanted statistics history records from the
corresponding catalog tables. You should run MODIFY STATISTICS regularly to clear
outdated information from the statistics history catalog tables. By deleting outdated
information from these tables, you can help improve performance for processes that access
data from these tables.
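As a minimal sketch (the object name is made up), the following statement removes statistics
history rows for one table space that are older than 90 days:

//SYSIN DD *
  MODIFY STATISTICS TABLESPACE DBLP0002.TSLP2001
         DELETE ALL AGE 90
/*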

6.6.2 DB2 catalog changes


In the DB2 catalog, new columns have been added to the following tables:
򐂰 SYSCOPY:
– COPYPAGESF is the number of pages written to the copy data set. This statistic is
needed by the RECOVER utility to know the number of pages in the image copy data set
(full or incremental).
– NPAGESF is the number of pages in the table space or index at the time of the inline
COPY.
– CPAGESF is the number of changed pages.



– JOBNAME indicates the job name of the utility.
– AUTHID indicates the authorization ID of the utility.
򐂰 SYSINDEXES:
– SPACEF is the number of kilobytes of disk allocated to the index. The value is -1 if
statistics have not been gathered.
򐂰 SYSINDEXPART:
– SPACEF is the number of kilobytes of disk allocated to the index partition. The value is
-1 if statistics have not been gathered.
– DSNUM is the number of data sets. The value is -1 if statistics have not been gathered.
– EXTENTS is the number of data set extents. The value is -1 if statistics have not been
gathered.
– LEAFNEAR is the number of leaf pages physically near the previous leaf page for
successive active leaf pages. The value is -1 if statistics have not been gathered.
– LEAFFAR is the number of leaf pages located physically far away from previous leaf
pages for successive (active leaf) pages accessed in an index scan. The value is -1 if
statistics have not been gathered.
– PSEUDO_DEL_ENTRIES is the number of pseudo deleted entries in the index. These
entries are marked as “logically” deleted but still physically remain in the index. For a
non-unique index, the value is the number of RIDs that are pseudo deleted. For a
unique index, the value is the number of both keys and RIDs that are pseudo deleted.
The value is -1 if statistics have not been gathered.
򐂰 SYSTABLEPART:
– SPACEF is the number of kilobytes of disk allocated to the table space partition. The value
is -1 if statistics have not been gathered.
– DSNUM is the number of data sets. The value is -1 if statistics have not been gathered.
– EXTENTS is the number of data set extents. The value is -1 if statistics have not been
gathered.
򐂰 SYSTABLES:
– NPAGESF is the number of pages used by the table. The value is -1 if statistics have
not been gathered.
– SPACEF is the number of kilobytes of disk allocated to the table. The value is -1 if
statistics have not been gathered.
– AVGROWLEN is the average length of rows for the tables in the table space. If the
table space is compressed, the value is the compressed row length. If the table space
is not compressed, the value is the uncompressed row length. The value is -1 if
statistics have not been gathered.



6.6.3 Force update of aggregate statistics
RUNSTATS gathers statistics for all partitions specified for the table space or index space. The
statistics that are collected at the partition level are aggregated into the aggregate tables. For
example, the statistics that are collected at the individual partition level for an index are
stored in SYSINDEXPART, and these statistics are rolled up to the aggregate table SYSINDEXES.
Prior to DB2 V7, aggregation took place only if statistics were available for all the partitions. If
statistics were not available for one or more partitions, RUNSTATS issued a DSNU623I message to
indicate that aggregation was not successful.

In DB2 V7 you have the option to force the roll up of statistics to aggregate tables, even if
some partitions are empty. You can set up a default value for FORCEROLLUP by setting the
value of STATROLL in ZPARM. The value of STATROLL can be:
򐂰 YES forces statistics roll up for partitioned tables and indexes where some of the partitions
are empty.
򐂰 NO specifies no statistics roll up for partitioned tables and indexes where some of the
partitions are empty, which is the behavior of RUNSTATS prior to DB2 V7. This is the default
for the STATROLL parameter.

The value of STATROLL can be overridden at the RUNSTATS level, by specifying the keyword
FORCEROLLUP with YES or NO.

We recommend, for DB2 systems that have large partitioned table spaces and indexes, that
the value for STATROLL in ZPARM is set to YES, especially if some of the partitions are
empty. Then you do not have to change any of your RUNSTATS jobs to take advantage of
forcing the statistics roll up to the aggregate tables. This helps the optimizer to choose a
better access path.
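For example, to force the roll up on a single RUNSTATS execution regardless of the
STATROLL default, you can specify FORCEROLLUP YES on the utility statement. The
following sketch uses a table space name taken from the Cross Loader example later in this
chapter purely for illustration:

RUNSTATS TABLESPACE DBLP0002.TSLP2002 TABLE(ALL) INDEX(ALL)
  FORCEROLLUP YES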

6.6.4 Performance considerations using new statistics columns


Statistics collected by RUNSTATS can be divided into two major categories. First, there are
access path or optimizer statistics. These are used by the optimizer to select the best access
path to the data. Second, there are space statistics. Space statistics are used to:
򐂰 Determine the optimum primary and secondary allocations used for table spaces and
indexes, and help tune the setting of PCTFREE and FREEPAGE.
򐂰 Determine when to reorganize. This is a major area, which uses different statistics for
different types of objects.
򐂰 Determine when to gather statistics again.

Recalculate space allocation


The maximum number of extents is now 251 with DFP 1.4. Before, it was 119. Multiple
extents are not a big performance concern, but the number should be kept below 50. Extents
are still a performance consideration during open and close; however, newer devices mitigate
the performance penalties seen on older devices.

When the number of EXTENTS in SYSTABLEPART or SYSINDEXPART is greater than 50,
we recommend that you alter the PQTY and/or SQTY to reduce the number of extents and
run REORG without the REUSE keyword to ensure the new space allocations are used.
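A minimal sketch of this sequence follows; the space quantities (in KB) and the object name
are hypothetical, and for a partitioned table space you would add a PART clause to the
ALTER:

ALTER TABLESPACE DBLP0002.TSLP2002
  PRIQTY 720000 SECQTY 72000;

REORG TABLESPACE DBLP0002.TSLP2002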

Maintain the clustering sequence
The statistics measuring clustering are CLUSTERATIO, NEAROFFPOS, and FAROFFPOS in
SYSINDEXPART. With ideal clustering after reorganization, the CLUSTERATIO approaches
100%. With random data ordering, CLUSTERATIO is 50%. The number of data rows being
near versus far off in position relates to the number of pages included in the prefetch quantity
within DB2. If 32 or 64 pages are read in during prefetch, then an optimal row (or near
optimal) is probably already in a buffer, so an I/O can be avoided when fetching the data row.
Rows outside the range probably require a synchronous I/O to get the row. The more I/O
needed, the slower the scan, and that is measured by off-position rows. See Figure 6-14.

Figure 6-14 Clustering sequence

NEAROFFPOS is the number of rows in a non-optimal position, but within the range of 16
pages. FAROFFPOS is the number of rows outside this range. The FAROFFPOS value is
more significant and likely to affect performance than the NEAROFFPOS value, but both
values indicate disorganization of the data, which can lead to performance degradation.

After reorganization, FAROFFPOS is 0 and NEAROFFPOS is 0 or a relatively low value.

You can calculate the percentage of rows that are off-position based on the values of
FAROFFPOS and CARDF in SYSINDEXPART:
(FAROFFPOS/CARDF)*100

You can then determine if you should run REORG TABLESPACE to maintain the clustering
sequence. We recommend that you run REORG TABLESPACE, if the percent of off-position
rows is greater than 10%.

When using the OFFPOSLIMIT keyword for REORG TABLESPACE, the REORG utility will
calculate:
((NEAROFFPOS+FAROFFPOS)/CARDF)*100

The result of this calculation will be compared with the value specified for the OFFPOSLIMIT
keyword. If the calculated value exceeds the specified value of OFFPOSLIMIT, then REORG
is performed.
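For example, the following sketch requests this check with a limit of 10; the REPORTONLY
keyword only reports whether the limit is exceeded, without actually reorganizing, and the
object name is purely illustrative:

REORG TABLESPACE DBLP0002.TSLP2002
  OFFPOSLIMIT 10 REPORTONLY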

Removing indirect row references
An indirect row reference is a pointer to an overflow row on a different page. This can occur
when an update is made to increase the length of a varying length column, and the page is
full, so that the data row can no longer fit on the page. The RID is used to locate rows. For
instance, an index entry contains a key and RID. When the key is found, the RID identifies a
particular row on a particular page. When accessing the row on page 2, we find only a
pointer over to page 3. An extra I/O can be involved to read in the extra page. See
Figure 6-15.

Figure 6-15 Indirect row reference

Reorganizing the table space eliminates all indirect references to overflow pages.

NEARINDREF is the number of indirect references to overflow rows on another page within
a range of 16 pages. FARINDREF is the number of indirect references to overflow rows
outside this range. When considering the statistics, both values indicate disorganization of
the data. When rows are placed on a page other than their home page, lock avoidance
cannot be used, so DB2 has to take a lock. That can lead to performance degradation,
especially in a data sharing environment.

You can calculate the percentage of indirect row references based on the values of
NEARINDREF, FARINDREF and CARDF in SYSTABLEPART:
((NEARINDREF+FARINDREF)/CARDF)*100

You can then determine if you should run REORG TABLESPACE to remove indirect row
references. We recommend that you run REORG TABLESPACE, if the percent of indirect row
references is greater than 10% for non-data-sharing environment, and for a data-sharing
environment if the percent of indirect row references is greater than 5%.

When using the INDREFLIMIT keyword for REORG TABLESPACE, the REOG utility will
calculate:
((NEARINDREF+FARINDREF)/CARDF)*100

The result of this calculation will be compared with the value specified for the INDREFLIMIT
keyword. If the calculated value exceeds the specified value of INDREFLIMIT, then REORG is
performed.
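For example, a conditional REORG based on indirect row references could be sketched as
follows, using the 10% threshold recommended above for a non-data-sharing environment
(the object name is purely illustrative):

REORG TABLESPACE DBLP0002.TSLP2002
  INDREFLIMIT 10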

Dropping tables in a simple table space
Simple table spaces can accumulate dead space that is reclaimed during reorganization. If
this dead space is not regularly reclaimed, it may result in acquiring more extents than
needed to add additional data rows.

For simple table spaces, dropping a table results in that table’s data rows remaining. The
amount of dead space for a simple table space can be tracked directly with PERCDROP in
SYSTABLEPART.

We recommend that you run REORG TABLESPACE, if the value of PERCDROP in
SYSTABLEPART is greater than 10%.

Removing pseudo deleted entries


Indexes can accumulate dead space that is reclaimed during reorganization. If this dead
space is not regularly reclaimed, it may result in acquiring more extents than needed to add
additional keys.

For an index, deleted keys are marked as pseudo deleted. Actual cleaning up will not occur
except during certain processes, such as before a page split.

You can calculate the percentage of RIDs that are pseudo deleted based on the values of
PSEUDO_DEL_ENTRIES and CARDF in SYSINDEXPART:
(PSEUDO_DEL_ENTRIES/CARDF)*100

You can then determine if you should run REORG INDEX to physically remove the pseudo
deleted entries from the index.

To minimize the CPU cost of an index scan, it is important to remove pseudo deleted entries.
Every time an SQL statement makes a scan of an index, it has to scan all entries in the index,
including pseudo deleted entries, that have not yet been removed.

We recommend that you run REORG INDEX, if the percent of pseudo deleted entries is
greater than 10%.
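There is no REORG INDEX keyword that applies this threshold for you, so you can apply the
formula with a catalog query. The following sketch excludes objects for which statistics have
not been gathered (values of -1) and lists the index partitions above the 10% threshold:

SELECT IXCREATOR, IXNAME, PARTITION,
       (PSEUDO_DEL_ENTRIES * 100) / CARDF AS PCT_PSEUDO_DEL
FROM SYSIBM.SYSINDEXPART
WHERE CARDF > 0
  AND PSEUDO_DEL_ENTRIES >= 0
  AND (PSEUDO_DEL_ENTRIES * 100) / CARDF > 10;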

Reclaim space for LOB table spaces


LOB table spaces can accumulate dead space that is reclaimed during reorganization. If this
dead space is not regularly reclaimed, it may result in acquiring more extents than needed to
add additional LOBs.

For LOB table spaces, an updated LOB will be written without reclaiming the old version of
the LOB immediately.

When FREESPACE approaches zero for a LOB table space, it is a good time to reclaim the
space, but there are no direct statistics indicating how much will be reclaimed. You can
calculate the percentage of free space for LOB table spaces based on the values of
FREESPACE in SYSLOBSTATS and SPACEF in SYSTABLEPART:
(FREESPACE/SPACEF)*100

We recommend that you reorganize LOB table spaces, if the percent of free space is less
than 10%.

The FREESPACE gives an indication of how many more LOBs can be added into the existing
extents already allocated.
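The following catalog query is a sketch of the calculation; it assumes that SYSLOBSTATS
identifies the LOB table space by its DBNAME and NAME columns, which are matched here
against DBNAME and TSNAME in SYSTABLEPART:

SELECT L.DBNAME, L.NAME,
       (L.FREESPACE * 100) / P.SPACEF AS PCT_FREE
FROM SYSIBM.SYSLOBSTATS L, SYSIBM.SYSTABLEPART P
WHERE L.DBNAME = P.DBNAME
  AND L.NAME = P.TSNAME
  AND P.SPACEF > 0
  AND (L.FREESPACE * 100) / P.SPACEF < 10;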

A LOB column always has an auxiliary index which locates the LOB within the LOB table
space. Access path is not an issue, because LOB access is always via a probe using the
auxiliary index. However, performance can be affected if LOBs are scattered into more
physical pieces than necessary, thus involving more I/O to materialize. See Figure 6-16.

Figure 6-16 Fragmented LOB table space

DB2 allocates space for LOBs in chunks. A chunk is 16 contiguous pages of a LOB. If the size
of a LOB is smaller than a chunk, then it is expected to fit in 1 chunk. If the size is greater than
a chunk, then it is optimized to fit into the minimum number of chunks. The fragmentation or
non-optimal organization of a LOB table space is measured in the value of ORGRATIO in
SYSLOBSTATS. Due to fragmentation within chunks, LOBs are split up and stored in more
chunks than would be optimal, and ORGRATIO in SYSLOBSTATS will increase. A value of
1.0 is optimal for ORGRATIO. See Figure 6-17.

Figure 6-17 Non-fragmented LOB table space

We recommend that you reorganize LOB table spaces, if the value of ORGRATIO in
SYSLOBSTATS is greater than 2.0.

Removing physical leaf disorganization
LEAFNEAR is the number of leaf pages located within a range of 16 pages. LEAFFAR is the
number of leaf pages located outside the range of 16 pages.

When considering the statistics, the LEAFFAR value is more significant and likely to affect
performance than the LEAFNEAR value, but both values indicate disorganization of the leaf
pages, which can lead to performance degradation.

LEAFDIST also measures index leaf page disorganization, but it is not as good a
measurement as using LEAFNEAR and LEAFFAR. For DB2 V7, we recommend the use of
LEAFNEAR and LEAFFAR instead of LEAFDIST. See Figure 6-18.

Figure 6-18 Physical view before reorganization.

After reorganizing the index, the leaf pages are in the optimal physical position. For small
indexes, LEAFNEAR and LEAFFAR will be 0 after reorganization. See Figure 6-19.

Figure 6-19 Physical view after reorganization.

For large indexes, LEAFNEAR may not be 0 after reorganization. This is because space map
pages are periodically placed throughout the page set, and the jump over space map pages
shows up as a count in LEAFNEAR.

You can calculate the percentage of leaf pages in disorganization based on the values of
LEAFFAR in SYSINDEXPART and NLEAF in SYSINDEXES:
(LEAFFAR/NLEAF)*100

You can then determine if you should run REORG INDEX to remove physical leaf
disorganization. We recommend that you run REORG INDEX, if the percent of physical leaf
disorganization is greater than 10%.

When using DB2 V7, we recommend that you use the above formula to determine when to
run the REORG INDEX utility, instead of using the LEAFDISTLIMIT keyword for REORG
INDEX utility.
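A catalog query applying the formula could look like the following sketch; because NLEAF in
SYSINDEXES is an aggregate value for the whole index, the percentage per partition is only
an approximation for partitioned indexes:

SELECT P.IXCREATOR, P.IXNAME, P.PARTITION,
       (P.LEAFFAR * 100) / I.NLEAF AS PCT_LEAF_DISORG
FROM SYSIBM.SYSINDEXPART P, SYSIBM.SYSINDEXES I
WHERE P.IXCREATOR = I.CREATOR
  AND P.IXNAME = I.NAME
  AND I.NLEAF > 0
  AND P.LEAFFAR >= 0
  AND (P.LEAFFAR * 100) / I.NLEAF > 10;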

6.7 COPYTOCOPY utility
DB2 V7 provides you with the opportunity to make additional full or incremental image copies
from a full or incremental image copy that was taken by the COPY utility. You are allowed to
make a maximum of three copies, to make up to the allowable total of four copies; these are
local primary, local backup, recovery site primary, and recovery site backup.

COPYTOCOPY TABLESPACE DB1.TS1
  FROMLASTFULLCOPY RECOVERYDDN(rempri)

Figure 6-20 COPYTOCOPY utility

The COPYTOCOPY utility does not support the following catalog and directory objects:
򐂰 DSNDB01.SYSUTILX, and its indexes
򐂰 DSNDB01.DBD01, and its indexes
򐂰 DSNDB06.SYSCOPY, and its indexes

The SYSCOPY columns ICDATE, ICTIME, and START_RBA will be those of the original
entries in the SYSCOPY row when the COPY utility recorded them, while the columns
DSNAME, GROUP_MEMBER, JOBNAME, and AUTHID will be those of the COPYTOCOPY job.

COPYTOCOPY will leave the target object in read-write access (UTRW), and that allows
other utilities and SQL statements to run concurrently with the same target objects, except for
utilities that insert or delete records in SYSCOPY; namely COPY, LOAD, MERGECOPY,
MODIFY, RECOVER, QUIESCE, and REORG utilities, or utilities with SYSCOPY as the
target object.

6.7.1 Performance measurements
The measurements were performed using a full image copy of a 10-partition table with
517,338 pages, to create another full image copy. For the COPY LIST measurement, the
LISTDEF includes all 10 partitions of the same table.
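A sketch of such a list-driven invocation follows; the list name and the recovery site ddname
are hypothetical, and the ddname must be supplied by a DD statement or a TEMPLATE in the
utility job:

LISTDEF COPYLIST INCLUDE TABLESPACE DB1.TS1 PARTLEVEL

COPYTOCOPY LIST COPYLIST
  FROMLASTFULLCOPY RECOVERYDDN(REMPRI)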

Table 6-17 summarizes the comparison of using the COPY utility with the COPYTOCOPY
utility to make additional full image copies.
Table 6-17 Using COPYTOCOPY utility to make additional full image copies

                COPY utility              COPYTOCOPY utility        Delta in %
                CPU time   Elapsed time   CPU time   Elapsed time   CPU time   Elapsed time
 Single COPY    11         182            10         173            -9.1%      -4.9%
 COPY LIST      10         177            10         173            0.0%       -2.3%

 Note: All times are in seconds.

You can benefit from using COPYTOCOPY to make additional copies asynchronously from
the normal batch stream; it is mostly beneficial for remote copies on slow devices. The
measurements show that you can save up to 4.9% in elapsed time per copy by using the
COPYTOCOPY utility instead of the COPY utility. Another advantage is that COPYTOCOPY
leaves the target object in read-write access (UTRW), and that allows other utilities and SQL
statements to run concurrently with the same target object.

If you need to minimize the time that some critical objects are unavailable due to making
additional copies to a remote site, the new COPYTOCOPY utility is the right way to make
additional copies of these objects.

6.8 Cross Loader


A new utility statement, EXEC SQL, has been added and it can be used before or after any
utilities, or jointly with the LOAD utility. This new utility statement enhances the functionality
of the LOAD utility to move data across multiple databases and platforms. This enhancement,
which is called Cross Loader, combines the efficiency of the LOAD utility with the power of the
SQL language. This enables the output of any SQL SELECT statement to be directly loaded
into a table on DB2 V7.

The enhancement is provided by the PTFs for APARs PQ46759 and PQ45268. Check their
prerequisites; at the time of our installation we also needed PQ43771, PQ45015, PQ45776,
PQ46245, PQ47035, PQ51501, and PQ52120.

Figure 6-21 Cross Loader

Since the SQL SELECT statement can access any DRDA server, the data source may be any
member of the DB2 Family, Data Joiner, or any other vendor who has implemented DRDA
server capabilities. Using Cross Loader is much simpler and easier than unloading the data,
transferring the output file to the target site, and then running the LOAD utility.

6.8.1 The EXEC SQL statement


Basically, you can directly load the output of a dynamic SQL statement into a table on DB2
V7. Within the new EXEC SQL utility statement, you can declare a cursor or specify any SQL
statement that can be dynamically prepared. The ENDEXEC keyword indicates the end of the
statement. The LOAD utility performs an EXECUTE IMMEDIATE on the SQL statement.
Errors encountered during the checking of the statement or the execution will stop the
utility and an error message will be issued. No host variables are allowed in the
statement. Also, no self referencing loads are allowed.
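A sketch of the utility input that corresponds to the job output shown in Figure 6-22 could look
like this; the table names are the ones that appear in that output:

EXEC SQL
  DECLARE C1 CURSOR FOR
  SELECT * FROM SAMPLE.DK37990.STAFF
ENDEXEC
LOAD DATA INCURSOR(C1) REPLACE
  INTO TABLE PAOLOR5.STAFF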

Figure 6-22 shows the output from Cross Loader, when loading data from a table on a remote
DB server into a table on a local DB2 for OS/390 using the DRDA protocol over TCP/IP and
3-part names.

DSNU000I DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = LOADTS
DSNU050I DSNUGUTC - EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM
SAMPLE.DK37990.STAFF ENDEXEC
DSNU050I DSNUGUTC - LOAD DATA INCURSOR(C1) REPLACE
DSNU650I -DB2G DSNURWI - INTO TABLE PAOLOR5.STAFF STATISTICS
DSNU350I -DB2G DSNURRST - EXISTING RECORDS DELETED FROM TABLESPACE
DSNU610I -DB2G DSNUSUTP - SYSTABLEPART CATALOG UPDATE FOR DBLP0002.TSLP2002
SUCCESSFUL
DSNU610I -DB2G DSNUSUTB - SYSTABLES CATALOG UPDATE FOR PAOLOR5.STAFF SUCCESSFUL
DSNU610I -DB2G DSNUSUTS - SYSTABLESPACE CATALOG UPDATE FOR DBLP0002.TSLP2002
SUCCESSFUL
DSNU620I -DB2G DSNURDRT - RUNSTATS CATALOG TIMESTAMP = 2001-04-27-18.58.22.193142
DSNU304I -DB2G DSNURWT - (RE)LOAD PHASE STATISTICS - NUMBER OF RECORDS=35 FOR TABLE
PAOLOR5.STAFF
DSNU302I DSNURILD - (RE)LOAD PHASE STATISTICS - NUMBER OF INPUT RECORDS PROCESSED=35
DSNU300I DSNURILD - (RE)LOAD PHASE COMPLETE, ELAPSED TIME=00:00:01
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0

Figure 6-22 Output from Cross Loader using DRDA and 3-part names

It is important that the CCSID on the local and remote DB server is defined correctly, to avoid
problems when converting data from one code page to another.

The new utility statement EXEC SQL can be used before and/or after any of the utilities, for
example, RUNSTATS, COPY, and so on.

6.9 MODIFY RECOVERY enhancement


DB2 V7 maintenance, provided by the PTF for APAR PQ45184, has greatly improved the way
the MODIFY RECOVERY utility performs when dealing with complex situations. The same
enhancement has been made available to DB2 V5 and V6.

The measurements were performed under DB2 V7 with and without this enhancement, for a
customer SYSCOPY table with 975,341 rows, with 7,476 entry records for the specific
table space. The results are listed in Table 6-18.

Table 6-18 Modify recovery enhancement

 MODIFY RECOVERY DELETE AGE(*)    Without fix for PQ45184    With fix for PQ45184
 CPU time (sec)                   119                        1.7
 Elapsed time (sec)               123                        5.1

With this enhancement, the improvements are over 90% in both CPU and elapsed time.

Chapter 7. Network computing performance


This chapter provides a functional description of the performance related enhancements to
network computing in DB2 V7.
򐂰 FETCH FIRST n ROWS ONLY: This enhancement allows you to set a limit on the number
of rows returned to a result set.
򐂰 Stored procedures with COMMIT/ROLLBACK: This makes it possible to implement
more complex stored procedures that are invoked by Windows and UNIX clients.
򐂰 Java stored procedures with JVM: This implements support for Java stored procedures
as interpreted Java executing in a Java Virtual Machine (JVM) as well as support for
user-defined external (non-SQL) functions written in Java.
򐂰 Considerations on SQLJ and JDBC performance: These recommendations, if
implemented, can have a dramatic effect on the performance of your SQLJ applications.
򐂰 Unicode: Full support is provided for a third encoding scheme.
򐂰 DRDA server elapsed time reporting: This makes it easier to monitor bottlenecks in
client/server applications.

7.1 FETCH FIRST n ROWS
DB2 V7 introduces the statement FETCH FIRST n ROWS ONLY. This statement clause
allows a limit on the number of rows returned to the result set to be specified, and at the same
time, a fast implicit close of the cursor.

Prior to V7, DB2 would prefetch blocks. The blocks might contain more rows than the
application needed and this could have a negative impact on the performance of such
statements. Although there is a performance improvement in CPU savings in DB2, most of
the performance improvement comes in the form of elapsed time savings. This is because in
an environment of short running SQL transactions, the network latency adds an order of
magnitude more to the overall transaction time than the actual processing of the SQL
statement.

If DB2 knows that only one row, or few rows, are required, the block prefetching does not take
place, and only the specified number of rows is returned in the result set.

This new clause FETCH FIRST n ROWS ONLY, does not allow the application to fetch more
rows than the specified number n. An attempt to fetch more rows than specified is handled the
same way as normal end of data (SQLCODE +100, SQLSTATE 02000). See Figure 7-1.

SELECT T1.CREATOR, T1.NAME
FROM SYSIBM.SYSTABLES T1
WHERE T1.CREATOR = 'SYSIBM'
AND T1.NAME LIKE 'SYS%'
ORDER BY T1.CREATOR, T1.NAME
FETCH FIRST 5 ROWS ONLY;

CREATOR NAME
---------+---------+---------+---------+---------+---------
SYSIBM SYSAUXRELS
SYSIBM SYSCHECKDEP
SYSIBM SYSCHECKS
SYSIBM SYSCHECKS2
SYSIBM SYSCOLAUTH
DSNE610I NUMBER OF ROWS DISPLAYED IS 5
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 100

Figure 7-1 Fetching n rows

FETCH FIRST n ROWS ONLY (with fast implicit close of the cursor) is basically a
performance option for applications that only wish the first n fetched rows to be returned from
a query result set that might be much larger. The processing also results in an implicit close of
the cursor at the server when reaching the value of n. This saves line flow in the network by
not requiring the client to explicitly close the cursor with extra communications.

7.1.1 OPTIMIZE FOR m ROWS
OPTIMIZE FOR m ROWS exists in versions prior to V7 and is used to control network
blocking and access path selection. If both the FETCH FIRST n ROWS and OPTIMIZE FOR
m ROWS clauses are specified with DB2 V7, the value of m still influences the network
blocking.

For example, if m is less than n, there will be "n divided by m rounded up" blocks sent to
return n rows. This has the performance implications discussed below. When m is greater
than n, there is no effect, since the result set is truncated to the value of n, and all n rows will
be returned in a single block.

Currently, with DB2 V7, the lower value between n and m is also assumed by the DB2 optimizer
for access path selection of the query; this is no longer the case with the change implemented
by the PTF for APAR PQ49458 (still open at the time of writing). If both clauses are explicitly
specified, DB2 will honor the specified values and keep the two options independent.

If the OPTIMIZE FOR clause is not specified, a default of OPTIMIZE FOR m ROWS is
assumed, where m is equal to the value of n in the FETCH FIRST n ROWS ONLY clause.

7.1.2 Performance measurements


In these measurements, a statement SELECT * FROM Y that would return a full result set
with a fixed number of rows 50, 200, and 400 is compared to the same SELECT statement
adding a FETCH FIRST n ROWS ONLY clause, where n limits the result set to a fixed number
of rows 50, 200, and 400. Finally, it is compared to the same SELECT statement with FETCH
FIRST n ROWS ONLY adding an OPTIMIZE FOR 1 ROW clause.

Each row is 20 bytes. A larger row size would create even larger differences in the
comparisons. The semantics of OPTIMIZE FOR 1 (or 2 or 3) ROW clause results in a query
block of 16 rows. This forces a network turnaround when the result set is greater than 16
rows. OPTIMIZE FOR m ROWS would cause line turnarounds at the value of m, which is an
even greater performance degradation.

The effect of performance degradation is obvious in Figure 7-2, which shows the overall
transaction rate for the three versions of the SQL statements for the three result set row
counts of 50, 200, and 400.

Figure 7-2 Throughput per SQL statement

Notice that for a result set of 50 rows, the transaction rate is roughly equivalent between the
SELECT with and without the FETCH FIRST 50 ROWS ONLY clause. This is because they
return the same number of rows. The SELECT statement with the OPTIMIZE FOR 1 ROW
clause is degraded due to requiring four network flows to return the entire 50 rows (16 rows +
16 rows + 16 rows + 2 rows).

Figure 7-3 shows the CPU time per transaction for the three versions of SQL statements for
the three result set row counts of 50, 200, and 400.

Figure 7-3 CPU time per transaction

Notice that the CPU time increases for the SELECT statement without the FETCH FIRST n
ROWS ONLY clause as the number of rows increases. The CPU time remains flat for the SQL
statements that include the FETCH FIRST 50 ROWS ONLY clause.

Figure 7-4 shows the elapsed time outside of DB2 and the host to perform the transactions.

Notice that the elapsed time slowly increases for the SELECT statement without the FETCH
FIRST n ROWS ONLY clauses. This is the result of the increased number of bytes as the
result set increases. Notice also that the elapsed time for the SELECT with the OPTIMIZE
FOR 1 ROW clause is much larger than the other two. This is an example of the high cost of
network turnarounds.

Figure 7-4 Elapsed time outside of DB2 per transaction

The performance measurements show that if you do not need to fetch all the rows in a result
set, you can benefit from using FETCH FIRST n ROWS ONLY. It is recommended that you
use FETCH FIRST n ROWS ONLY every time you do not need a full result set and can
limit the number of rows in the result set.

7.1.3 Singleton select


Singleton select statements have performance benefits as described below. Unfortunately, if
the answer set is greater than one row, the SQL statement fails. If FETCH FIRST 1 ROW ONLY
is added to the singleton select statement, the statement is successful. This support now allows
applications to use singleton select statements, with the resultant performance benefits, when
the result set is greater than one row. To retrieve only one qualified row from a table into an
application program, you can use a singleton SELECT statement:
EXEC SQL
SELECT NAME, CREATOR
INTO :HV_NAME, :HV_CREATOR
FROM SYSIBM.SYSTABLES
WHERE NAME = ‘TABLENAME’
AND CREATOR = ‘TBCRE01’
END-EXEC.
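As described above, with DB2 V7 you can add FETCH FIRST 1 ROW ONLY to the singleton
SELECT so that it no longer fails when more than one row qualifies; a sketch of such a
statement, using the same host variables, is:

EXEC SQL
SELECT NAME, CREATOR
INTO :HV_NAME, :HV_CREATOR
FROM SYSIBM.SYSTABLES
WHERE CREATOR = ‘TBCRE01’
FETCH FIRST 1 ROW ONLY
END-EXEC.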

Or you can use a non-singleton SELECT statement, where you have to declare a cursor:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT NAME, CREATOR
FROM SYSIBM.SYSTABLES
WHERE NAME = ‘TABLENAME’
AND CREATOR = ‘TBCRE01’
FETCH FIRST 1 ROW ONLY
END-EXEC.

After you have declared the cursor, you have to open it:
EXEC SQL
OPEN C1
END-EXEC.

And then you have to read the data into the application program:
EXEC SQL
FETCH C1 INTO
:HV_NAME, :HV_CREATOR
END-EXEC.

And then, you have to close the cursor:


EXEC SQL
CLOSE C1
END-EXEC.

Using a non-singleton SELECT statement to retrieve only one qualified row from a table into
an application has an extra CPU cost of 4% in the DDF address space. See Figure 7-5.

Figure 7-5 Non-singleton SELECT versus singleton SELECT (time spent in DDF, CPU time per transaction in microseconds)

The extra CPU cost can be up to 30% in other environments, such as CICS. The extra CPU
cost of using a non-singleton SELECT statement comes from the OPEN and CLOSE of the
cursor.

When coding an SQL statement to retrieve only one qualified row from a table in an
application program, you will get the best performance by using a singleton SELECT
statement instead of a non-singleton SELECT statement.

7.2 Stored procedures with COMMIT/ROLLBACK


DB2 V7 allows you to use COMMIT/ROLLBACK in stored procedures. This makes it possible to
implement more complex stored procedures that are invoked by Windows and UNIX clients.

When COMMIT/ROLLBACK is used in a stored procedure, the whole unit of work, including
uncommitted changes made from the client before the stored procedure was called, will be
committed or rolled back.

COMMIT/ROLLBACK is not allowed in the following situations:


򐂰 No COMMIT/ROLLBACK for User Defined Functions
򐂰 No COMMIT/ROLLBACK, if the client is using 2-phase commit
򐂰 No COMMIT/ROLLBACK for nested stored procedures

7.3 Java stored procedures with JVM


DB2 V7 implements support for Java stored procedures as interpreted Java executing in a
Java Virtual Machine (JVM) as well as support for user-defined external (non-SQL) functions
written in Java.

Unlike compiled Java stored procedures which are executed in a WLM stored procedure
address space, the interpreted Java is invoked by the WLM stored procedure address space,
but is executed in a JVM under OS/390 UNIX System Services (USS). See Figure 7-6.

Figure 7-6 Interpreted Java stored procedures

Chapter 7. Network computing performance 157


The CREATE PROCEDURE/FUNCTION SQL statement allows you to specify an SQL name
for a Java method. The Java method for interpreted Java exists in a JAR file which is installed
into DB2 by the SQLJ.INSTALL_JAR built-in stored procedure. A JAR file can contain any
number of Java classes which can be invoked as DB2 stored procedures or functions. You
can also reference the same Java class by more than one stored procedure name, by
executing different CREATE PROCEDURE/FUNCTION SQL statements, referencing the
same Java class. See Figure 7-7.

Figure 7-7 Java runtime environment in DB2 V7

Although there are no syntax changes, two parameters are enhanced. The LANGUAGE
parameter now accepts JAVA, which indicates that the stored procedure or function is written
in Java and that the Java byte code will be executed in the OS/390 JVM, under USS.

The EXTERNAL parameter, which specifies the program that runs when the procedure name
is specified in a CALL statement, is also extended. If LANGUAGE is JAVA then the
EXTERNAL NAME clause defines a string of one or more external Java routine name(s),
enclosed in single quotes.

DB2 executes Interpreted Java stored procedures and functions using the OS/390 JDK 1.1.8
and above. Using the JDK 1.1.8, the JVM is created and destroyed each time the Java stored
procedure is invoked. The OS/390 JDK 1.3 should overcome this problem.

7.3.1 Performance considerations


Java stored procedures implemented as compiled Java will perform better than if they are
implemented as interpreted Java executing in a JVM. Compiled Java stored procedures
execute in the WLM stored procedure address space, while interpreted Java stored
procedures execute in a JVM running in a USS environment.

We recommend that you configure different stored procedure workloads into different WLM
stored procedure environments. This includes separating stored procedures by language.
The WLM environment does not have to be destroyed and a new runtime environment built
when a different stored procedure needs to be executed. Java stored procedures also require
large amounts of memory to execute efficiently. For best performance, we recommend that
you separate interpreted Java stored procedures from Compiled Java stored procedures.

7.4 SQLJ and JDBC performance considerations


Here we draw together some of the points made above about the coding and preparation of
SQLJ and JDBC programs with some extra information. We focus in particular on SQLJ
programs and make a series of recommendations which, if implemented, can have a dramatic
effect on the performance of your SQLJ applications. Recent performance studies have
highlighted the fact that these recommendations should not be regarded as implementation
options, but as actions you must take in order to get good SQLJ application performance. A
good source of general considerations on Java performance is Systems Journal Vol.39, No.1,
2000, Java Performance, G321-0137.

The points we make here are not necessarily listed in order of importance. They should be
implemented as a whole.

Memory usage - environment variable


You must define the environment variable _cee_runopts to include the following specification:
_cee_runopts=”heappools(on)”

This defines the way in which dynamic memory is used by SQLJ and JDBC applications. If
you do not specify heappools(on), then you incur the very considerable overhead of memory
being repeatedly freed and allocated as objects are instantiated.

Ensure JIT compilation for every Java method


To ensure that JIT compilation is performed for every Java method, you should specify the
following environment variable:
IBM_MIXED_MODE_THRESHOLD=1

Use the correct level of the JDK


Make sure your JDK level is at least 1.3. With JDK 1.3 you can exploit the hardware support
for floating point calculations provided by the G5, G6, and zSeries processing complexes.

BIND options
Specify DYNAMICRULES(BIND) for dynamic SQL to make sure that the table access
privileges of the binder are implicitly used during execution of the package. Otherwise, you
will have to grant authorization over the underlying DB2 objects to the authorization id
executing the plan. Dynamic SQL includes JDBC and cursor controlled updates and deletes
in SQLJ.

Use the QUALIFIER keyword of the BIND command to provide qualification for unqualified
table or view names referenced in JDBC and SQLJ.

Mapping Java data types to DB2 data types


For optimal performance it is recommended to map the Java data types used to the SQL
column data types. The primary reason for this is to provide for efficient predicate processing:
indexable and stage 1. The other reason is to minimize data conversion cost.

Table 7-1 below provides the recommended mapping between Java data types and SQL
column data types.
Table 7-1 Mapping DB2 data types to Java data types

 DB2 data type         Java data type
 SMALLINT              short, boolean
 INTEGER               int
 REAL                  float (single precision)
 DOUBLE, FLOAT         double (double precision)
 DATE                  java.sql.Date
 CHAR                  String
 VARCHAR               String
 TIME                  java.sql.Time
 TIMESTAMP             java.sql.Timestamp
 DECIMAL or NUMERIC    java.math.BigDecimal

Online checking with db2profc


For String data types in Java there is no concept of length. The associated SQL column data
types are CHAR and VARCHAR. In order to have the predicates used for index matching, the
definition in the DBRM for SQLJ (static SQL) must match the definition in the DB2 catalog in
terms of data type and length. In order to achieve this you must customize the SQLJ
serialized profile using db2profc and specify the online checker option -online = <DB2
location name>:
db2profc ... -online=<db2_location_name> ...

Use the correct serialized profile


In order to achieve true static SQL execution for SQLJ applications you must customize the
SQLJ serialized profile using db2profc, and the output customized profile must be available
and accessible in the runtime PATH.

Only select and update columns as necessary


For optimal performance it has always been recommended that DB2 applications should only
select and update the columns actually required by your application. The weight of this
recommendation is even stronger for Java applications. This is because of the emphasis on
individual column processing within the Java programming model where a Java object is
created for every single column. Java is running in Unicode: retrieving string columns requires
the conversion from EBCDIC/ASCII to Unicode.

Activate dynamic SQL statement caching


For relatively simple SQL requests, the processing cost of preparing dynamic SQL
statements can be very significant. To dramatically reduce the processing overhead and get
reasonably close to the performance of static SQL, DB2 provides dynamic SQL statement
caching. Dynamic statement caching avoids the full cost of preparing an SQL request.

Dynamic statement caching is enabled by specifying CACHEDYN=YES on the DSN6SPRM
macro in DSNZPARM. When the prepared statement is found in the prepared statement
cache, it is possible to make significant savings. A typical hit ratio is between 70% and 90%.
For optimal performance it is strongly recommended to use SQLJ for database access. When
using JDBC, or cursor-controlled updates or deletes in SQLJ, it is recommended to turn on
dynamic statement caching.

Use positioned iterators


For optimal performance it is recommended to use positioned iterators rather than named
iterators to reduce the processing cost.

Releasing resources
For JDBC it is important to close prepared statements before reusing the statement handle to
prepare a different SQL statement within the same connection.

Performance recommendation responsibilities


The responsibility for improving the performance of your Java application is shared across
multiple functions within your organization. To bring all this information together, we have
collected it in Table 7-2, listing for each header which job function we believe should be
responsible for implementing each recommendation:
Table 7-2 Recommendations and responsibilities

 Recommendation                                   Responsibility
 Memory usage - environment variable              Systems Programmer/Application Integrator
 Ensure JIT compilation for every Java method     Systems Programmer/Application Integrator
 Use the correct level of the JDK                 Systems Programmer
 BIND options                                     DBA/Systems Programmer
 Mapping Java data types to DB2 data types        Application Developer
 Online checking with db2profc                    DBA/Application Developer
 Use the correct serialized profile               DBA/Systems Programmer/Application Integrator
 Only select and update columns as necessary      Application Developer
 Activate dynamic SQL statement caching           Systems Programmer
 Cursor-controlled SELECT                         Application Developer
 Memory management essentials                     Application Developer
 Servicing JDBC and SQLJ performance problems     Systems Programmer/DBA

7.5 Unicode
DB2 V7 introduces full support for a third encoding scheme besides EBCDIC and ASCII,
Unicode.

The Unicode encoding standard is an encoding scheme that can include characters for
almost all the languages in the world.

With this enhancement, DB2 V7 can truly support multinational and e-commerce
applications, by allowing data from more than one country/language to be stored in the same
DB2 subsystem.

Data sharing considerations


We recommend that all members of a data sharing group adhere to these recommendations:
򐂰 Once one member of the data sharing group converts to a DB2 release that supports
Unicode, the rest of the members of the data sharing group should also be converted to
the same release as soon as possible.
򐂰 If the CCSIDs are changed on one system, then the same changes should be made to all
other members of the data sharing group.

If these recommendations are not followed, the results of SQL may be unpredictable.

UCS-2
UCS-2 is a fixed-width 16-bit encoding standard, with a range of 2^16 code points.

UCS-4
UCS-4 is a fixed-width 32-bit encoding standard, with a range of 2^31 code points. The 2^31
code points are grouped into 2^15 planes, each consisting of 2^16 code points. The planes
are numbered from 0. Plane 0, the Basic Multilingual Plane (BMP), corresponds to UCS-2.

UTF-8
UTF-8 is a transformation format in 8 bits and uses a sequence of 8-bit values to encode UCS
code points.

UTF-16
UTF-16 is a transformation format in 16 bits and uses a sequence of 16-bit values to encode
UCS code points.

7.5.1 Performance considerations


There is a large CPU cost to translate between EBCDIC and Unicode. This translation is
currently being done by software without any hardware assist. In the future, new processors
will assist in the translation and bring down the cost. Note that translation occurs on inserts,
or LOAD/UNLOAD or anytime that an SQL predicate uses a data type which does not match
the table contents. It is best to avoid translation as much as possible.

Even without translation, there is still a small CPU cost to handling Unicode data types.

7.6 DRDA server elapsed time reporting


The network monitoring has been improved in DB2 V7. Applications accessing a DB2 for
OS/390 server using DB2 Connect V7 can now monitor the server’s elapsed time using the
System Monitor Facility. This enhancement makes it easier to monitor bottlenecks in
client/server applications, by helping to easily identify where the bottleneck is.

Server elapsed time allows remote clients to determine the actual amount of time it takes for
DB2 to parse a remote request, process any SQL statements required to satisfy the request,
and generate the reply. It does not include any of the network time used to receive the request
or send the reply. A timestamp is recorded when DDF first receives a remote request.

A second timestamp is recorded after the DB2 server has processed any SQL statements
and generated the reply. The difference in values is the elapsed time that is returned to the
client. This function is supported starting with DB2 Connect Version 7.2 with Fix pack 1 and is
activated on DB2 Connect Monitor by setting up the variables for unit of work and statement.
See Connecting WebSphere to DB2 UDB Server, SG24-62119 for details.


Chapter 8. Data sharing


DB2 Version 7 delivers a number of enhancements to improve availability, performance and
usability of DB2 data sharing:
򐂰 Coupling Facility Name Class Queues: Name Class Queues allows the Coupling
Facility Control Code (CFCC) to organize the GBP directory entries in queues based on
DBID, PSID and partition number. This enhancement reduces the performance impact of
purging group buffer pool (GBP) entries for GBP dependent page sets.
򐂰 Group attach enhancements: An application can now connect to a specific DB2 member
even if the DB2 member name and the DB2 group name are identical. Applications, using
the group attach name, can request to be notified when a member of the data sharing
group becomes active on the executing OS/390 image. With DB2 V7 you can specify the
group attach name for DL/I batch applications.
򐂰 IMMEDWRITE bind option: The IMMEDWRITE bind/rebind option, introduced by APAR
in DB2 V5 and V6, is fully implemented in DB2 V7.
򐂰 DB2 Restart Light: The “Light” restart option allows you to restart a failed DB2 member on
the same or another OS/390 image with a minimum storage footprint in order to release
any retained locks.
򐂰 Persistent Coupling Facility structure sizes: After altering the size of DB2 structures,
DB2 V7 keeps track of their size across DB2 executions and structure allocations.
򐂰 Miscellaneous items:
– During normal shutdown processing, DB2 notifies you of unresolved UOWs (indoubt or
postponed abort) that will hold retained locks after the DB2 member has shut down.
– DB2 V7 reduces the MVS to DB2 communication that occurs during a Coupling
Facility/structure failure, which will decrease recovery time in such a failure situation.
– OS/390 Version 2 Release 10 introduces a new function, Auto Alter, for automatic
tuning of CF structures.
– APAR PQ45407 - User enables Coupling Facility duplexing for IRLM lock structure.
– A new special register, CURRENT MEMBER, returns the DB2 member name.
– APAR PQ44114 - User can specify how many lock hash entries they want in the
Coupling Facility lock structure when the first IRLM connects to the group.
– A new option on the MODIFY irlmproc command allows user to purge retained locks.

8.1 Coupling Facility Name Class Queues
The CFCC Level 7 introduced a new function called Name Class Queues.

In a DB2 data sharing environment, DB2 can use the Name Class Queues to reduce the
problem of Coupling Facility utilization spikes and delays when deleting GBP entries for a
page set/partition. This occurs during:
򐂰 Read-only switching (pseudo close)
򐂰 DB2 shutdown and this member was the last updater of the page set/partition.

Without this enhancement, DB2 had to scan the whole GBP directory looking for the entries
for a particular page set. DB2 V7 requests to use Name Class Queues at group buffer pool
connect.

Name Class Queues allows the CFCC to organize the GBP directory entries into queues
based on DBID, PSID and partition number. Locating and purging these entries will now
happen more efficiently.

The number of requests to the Coupling Facility to delete the directory entries associated with
a set of pages is reported in message DSNB797I in response to a -DISPLAY GBPOOL() MDETAIL
command:

DSNB797I -DBG2 OTHER INTERACTIONS
           REGISTER PAGE                = 0
           UNREGISTER PAGE              = 0
           DELETE NAME                  = 0
           READ STORAGE STATISTICS      = 12
           EXPLICIT CROSS INVALIDATIONS = 0
           ASYNCHRONOUS GBP REQUESTS    = 0

The required levels and service levels for the CFCC are:
򐂰 CFCC level 7 at service level 1.06
򐂰 CFCC level 8 at service level 1.03
򐂰 CFCC level 9

Prior to OS/390 Version 2.8, the level of the CFCC was not externalized. The only way to
safely determine what level the CFCC was running at was to ask the system administrator.
OS/390 Version 2.8 enhances the D CF command output to externalize the level of the
CFCC. See Figure 8-1.

D CF
IXL150I 14.38.15 DISPLAY CF 805
COUPLING FACILITY 002064.IBM.02.000000010ECB
   PARTITION: E   CPCID: 00
   CONTROL UNIT ID: FFF9
NAMED CF01
COUPLING FACILITY SPACE UTILIZATION
 ALLOCATED SPACE                      DUMP SPACE UTILIZATION
  STRUCTURES:      169728 K            STRUCTURE DUMP TABLES:        0 K
  DUMP SPACE:        2048 K            TABLE COUNT:                  0
 FREE SPACE:        844800 K           FREE DUMP SPACE:           2048 K
 TOTAL SPACE:      1016576 K           TOTAL DUMP SPACE:          2048 K
                                       MAX REQUESTED DUMP SPACE:     0 K
 VOLATILE:         YES                 STORAGE INCREMENT SIZE:     256 K
 CFLEVEL:          9
 CFCC RELEASE 09.00, SERVICE LEVEL 02.01
 BUILT ON 12/08/2000 AT 09:05:00

Figure 8-1 Output of D CF command

8.2 Group attach enhancements


DB2 V7 introduces a number of enhancements to DB2 group attach processing.

8.2.1 Bypass group attach processing on local connect — groupoverride


An application can now connect to a specific member of a Data Sharing group, where there
are two or more members of the Data Sharing group active on the same OS/390 image and
the subsystem id of one of the members is the same as the group attach name. This is
important for some applications that need to connect to specific members of the Data Sharing
group, rather than generically connecting to any member of a given Data Sharing group (for
example, monitoring applications).

The groupoverride parameter on CONNECT (CAF) and IDENTIFY (RRSAF) is available to
applications using CAF and RRSAF to attach to DB2. This parameter is optional on the
CONNECT or IDENTIFY call. If this parameter is provided, it contains the string GROUP(N).
This string indicates that the subsystem name that is specified is to be used as a DB2
member subsystem name.

Applications using TSO attach to DB2 can use the parameter GROUP(NO) on the DSN
command.

8.2.2 Support for local connect using STARTECB parameter


You can now specify the startecb parameter on a local CONNECT (CAF) and IDENTIFY
(RRSAF) call to DB2 using the group attach name, to be notified when any member of the
Data Sharing group becomes active on this OS/390 image.

The startecb parameter of the CONNECT call to DB2 is used by applications who want to wait
for the target DB2 subsystem to become available if it is currently not started. When the target
DB2 comes up, it posts any startup ECBs that might exist to notify the waiting applications
that DB2 is now available for work.

For example, if the application tries to connect to DB0G and DB0G is the group attach name
for DB2 subsystems DB1G and DB2G, then whichever of the subsystems starts first will post
the startup ECBs.

8.2.3 Group attach support for DL/I batch jobs


Prior versions of DB2 UDB for OS/390 do not allow DL/I batch jobs to specify the group attach
name when connecting to DB2.

Limited support for using the group attach for DL/I batch is available in DB2 for MVS Version
4, DB2 UDB for OS/390 Versions 5 and 6 by arrangement with IBM Support, via APAR only.

This support has restrictions. DB2 ignores the STARTECB parameter on the CONNECT call,
when the group attach name is coded. DB2 therefore behaves differently, depending on
whether you specify the Data Sharing group name or the DB2 subsystem name with the STARTECB
parameter.

Now that STARTECB is supported for group attach, the incompatibility is removed and
therefore group attach support can now be safely added for DL/I Batch without any
incompatible behavior being introduced.

8.2.4 CICS group attach


Now that RDO for RCT has been implemented, group attach is the most asked for
enhancement to the CICS-DB2 interface. Although announced together with the CICS TS
2.1, CICS group attach comes with CICS TS 2.2.

CICS group attach description


A new DB2GROUPID parameter is added to the DB2CONN definition. It is mutually exclusive
to the existing DB2ID parameter. Specifying a DB2GROUPID means group attach is required.
Specifying a DB2ID means that attach to a specific DB2 is required. If both are specified on a
CEDA panel then DB2ID wins, DB2GROUPID is blanked out and a warning message is
produced. It is possible to leave both DB2ID and DB2GROUPID blank. In this case it means
that no group attach is required, and a specific DB2ID is used from INITPARM if specified.

EXEC CICS CREATE DB2CONN supports DB2GROUPID. It fails with an INVREQ if both
DB2GROUPID and DB2ID are specified.

CEMT/EXEC CICS SET DB2CONN is enhanced to support DB2GROUPID. Specifying a
DB2GROUPID causes the DB2ID to be blanked out, and vice versa. As with DB2ID, it is only
possible to set a DB2GROUPID when the CICS-DB2 attachment facility is not active.

CEMT/EXEC CICS INQUIRE DB2CONN is enhanced to support DB2GROUPID

The semantics of CEMT/EXEC CICS INQUIRE DB2CONN DB2ID() change slightly. When
not using group attach, there is no change, that is, when CICS is not connected to DB2 this
field contains the specific DB2 subsystem the user has specified or set in the db2conn
definition (if any; it could be blank). When connected to DB2, this is the name of the DB2
connected to. For group attach this field will be blank if CICS is not connected to DB2.
However when CICS is connected to DB2, it contains the member of the DB2 data sharing
group that was chosen. This information is returned to CICS by DB2 when connection is
established.

CPSM needs to change to pick up the new DB2GROUPID on EXEC CICS CREATE
DB2CONN and EXEC CICS INQ/SET DB2CONN.

The DSNC STRT xxxx command (where xxxx is a DB2ID) is still supported. It means the
user wants to connect to a specific DB2. It results in an EXEC CICS SET DB2CONN
DB2ID(xxxx) CONNECTED command being issued. This means that the DB2GROUPID is
blanked out.

The INITPARM=(DFHD2INI='xxxx') command continues to be supported as an override to
connect to a specific DB2. It does not support overriding a DB2GROUPID. Today users can
leave the DB2ID in the DB2CONN definition blank and use the value from INITPARM. If group
attach is required then users should remove their INITPARM setting, as it overrides group
attach. For example setting a DB2GROUPID of FRED in the DB2CONN causes the DB2ID to
be blanked. If however the user specifies INITPARM=(DFHD2INI='JOHN') as a SIT
parameter, because the DB2ID is blank in the DB2CONN then INITPARM is picked up, the
DB2GROUPID is blanked and CICS connects to DB2 subsystem JOHN.

Whether it be via DSNC STRT xxxx or INITPARM, or EXEC CICS SET DB2ID(), a request to
connect to a specific DB2 always overrides group attach. This gives the user the override
capability if it is required. It should be noted, however, that DB2GROUPID has to be reset
explicitly should group attach be wanted later.

Indoubt resolution
The DB2 V7 enhancements to group attach do not include solving the problem of unresolved
indoubts. There is no DB2 peer recovery.

Indoubt resolution depends on which action, YES or NO, you specify for the keyword
RESYNCMEMBER on the DB2CONN definition.

RESYNCMEMBER would only take effect if DB2GROUPID was being used:


򐂰 RESYNCMEMBER(YES) would be the default, and would mean that if CICS detected that
it had UOWs outstanding, it would ignore group attach and wait to reconnect to the
specific member last used.
򐂰 RESYNCMEMBER(NO) would mean that the user does not require resynchronization with
the previous member. However, if CICS detects that UOW resolutions are outstanding, it
would under the covers try the specific member first, and only if this fails would it resort to
group attach.

8.2.5 IMS group attach


With APAR PQ42180 IMS supports the use of the DB2 data sharing group name for DB2 V5
and later versions. The DB2 group name may be used in all online regions.

IMS looks for a DB2 group name if the following conditions for the dependent region SSM
member are met:
򐂰 SST=DB2 was specified.
򐂰 SSM= was specified when the dependent region was started.
򐂰 A subsystem with the name specified in the SSM member parameter SSN was not found
to be connected to the IMS control region.

The subsystem name specified in the dependent region SSM member is then interpreted as a
DB2 group name and IMS calls MVS Name Services. If MVS returns a positive response to
the group inquiry, IMS selects a member of the DB2 group that is connected to the IMS
control region and connects that DB2 to the dependent region.

All DB2 subsystems that IMS and any of its dependent regions can connect to must still be
specifically named in the IMS control region SSM member.

8.3 IMMEDWRITE bind option
In this section we describe the IMMEDWRITE bind/rebind option, its introduction with DB2
V5, DB2 V6 and its evolution with DB2 V7.

8.3.1 IMMEDWRITE bind option before DB2 V7


Consider the following scenario. We have a two way data sharing group DB0G with members
DB1G and DB2G. Transaction T1, running on member DB1G, makes an update to a page.
Transaction T2, spawned by T1 and dependent on the updates made by T1, runs on member
DB2G. If transaction T2 is not bound with isolation repeatable read (RR), and the updated
page was once in use by DB2G, there is a chance, due to lock avoidance, that T2 uses an old
copy of the same page in the virtual buffer pool of DB2G if T1 still has not committed the
update.

Possible workarounds for this problem are:


򐂰 Execute the two transactions on the same member
򐂰 Bind transaction T2 with ISOLATION(RR)
򐂰 Make T1 commit before spawning T2

DB2 V5 APAR PQ22895 introduces a new bind/rebind option that can be considered when
none of the above actions are desirable. IMMEDWRITE(YES) allows the user to specify that
DB2 should immediately write updated GBP dependent buffers to the Coupling Facility
instead of waiting until commit or rollback.

However, IMMEDWRITE(YES) is not the optimum solution when:


򐂰 The exact plans/packages which need IMMEDWRITE(YES) can not be identified and
rebinding all plans/packages is not a feasible alternative.
򐂰 There is a high concern for the performance impact of specifying IMMEDWRITE(YES). A
page can be written several times to the CF within the same UOW.

DB2 V6 APAR PQ25337 delivers the functionality introduced by APAR PQ22895 in DB2 V5
with the addition of a third value for IMMEDWRITE and a new DSNZPARM parameter.

IMMEDWRITE(PH1) allows the user to specify that a given plan or package should write
updated group buffer pool dependent pages to the Coupling Facility at or before Phase 1 of
commit. If the transaction subsequently rolls back, the pages will be updated again during the
rollback process and will be written again to the CF at the end of abort.
This option is only useful if the dependent transaction is spawned during syncpoint
processing of the originating transaction.

A new DSNZPARM parameter, IMMEDWRI(NO/PH1/YES), is added to DSN6GRP to allow


the user to specify at the DB2 member level whether immediate writes or phase 1 writes
should be done. This parameter affects all plans and packages that are bound on this
member except if they were bound with the IMMEDWRITE option. The default for REBIND
PLAN/PACKAGE is the IMMEDWRITE option used the last time the plan/package was bound.

8.3.2 IMMEDWRITE bind option in DB2 V7


The DB2 catalog now supports the IMMEDWRITE bind/rebind option by adding a new
column, IMMEDWRITE, to the catalog tables SYSIBM.SYSPLAN and
SYSIBM.SYSPACKAGE.

The bind/rebind panels of DB2I now support the IMMEDWRITE parameter.



DB2 V7 also externalizes the IMMEDWRI DSNZPARM parameter to the installation panels.
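
For example, a hypothetical package could be rebound to request phase 1 writes as follows;
the collection and package names are illustrative:

   REBIND PACKAGE(COLLA.PGMTRX1) IMMEDWRITE(PH1)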

Table 8-1 illustrates the effect of the IMMEDWRI subsystem parameter and the
IMMEDWRITE bind option on the value at run time. A value of YES takes precedence
whether it is the subsystem parameter or the bind option.
Table 8-1 Immediate write hierarchy

IMMEDWRITE bind option   IMMEDWRI subsystem parameter   Value at run time
NO                       NO                             NO
NO                       PH1                            PH1
NO                       YES                            YES
PH1                      NO                             PH1
PH1                      PH1                            PH1
PH1                      YES                            YES
YES                      NO                             YES
YES                      PH1                            YES
YES                      YES                            YES

8.3.3 IMMEDWRITE bind option performance


IMMEDWRITE(YES) should be used with caution due to its potential performance impact.
The more updates a plan or package does on the same page, the higher the performance
impact of this bind option will be.

IMMEDWRITE(PH1) should have little or no impact on overall performance. However, when


IMMEDWRITE(PH1) is used, some of the CPU consumption will shift from being charged to
the MSTR address space to being charged to the allied agent’s TCB. This is because the
group buffer pool writes are done at commit Phase 1, which is executed under the allied
agent’s TCB, instead of being done at commit Phase 2, which is executed under SRBs in the
MSTR address space. Also, if a transaction aborts after commit Phase 1 has completed,
there will be typically about twice the amount of group buffer pool writes for
IMMEDWRITE(PH1) as compared to IMMEDWRITE(NO), because each group buffer pool
dependent page will be written out once during Phase 1 and once at the end of abort.

8.3.4 IMMEDWRITE bind option performance measurement


A performance measurement was executed in order to compare the effect of the different
IMMEDWRITE options.

Performance measurement description


The measurements were executed using the IRWW workload on a 2-way data sharing group.



Performance measurement results
The first measurement compares the internal throughput rate when using
IMMEDWRITE(PH1) versus IMMEDWRITE(YES). See Table 8-2 for the results of this
measurement. A degradation of the internal throughput rate of 11.6% is observed when using
IMMEDWRITE(YES).
Table 8-2 IMMEDWRITE(PH1) vs. IMMEDWRITE(YES)

                                                 IMMEDWRITE(PH1)   IMMEDWRITE(YES)
Member DG1G external throughput (commits/sec.)   63.0              52.2
Member DG1G CPU utilization (%)                  65.40             61.60
Member DG1G internal throughput                  96.41             84.74
Member DG2G external throughput (commits/sec.)   63.3              52.6
Member DG2G CPU utilization (%)                  64.88             60.91
Member DG2G internal throughput                  97.56             86.36
Internal throughput of data sharing group        193.97            171.10

The second measurement evaluates the impact of IMMEDWRITE(PH1) vs. IMMEDWRITE(NO).
See Table 8-3 for the results of this measurement.
Table 8-3 IMMEDWRITE(PH1) vs. IMMEDWRITE(NO)

                                            IMMEDWRITE(PH1)   IMMEDWRITE(NO)
Internal throughput of data sharing group   339.39            338.95
MSTR SRB time (millisecond/commit)          0.32              1.21
DB2 Class 2 CPU time (millisecond/commit)   6.37              5.44

The internal throughput rates are very close to each other, which means that
IMMEDWRITE(PH1) has little or no performance impact in our observation. We do observe
CPU time shifting from the DB2 MSTR address space to the allied agent's TCB, as expected.

8.4 DB2 Restart Light


If an MVS image in an OS/390 Parallel Sysplex became unavailable and had to be IPLed, there
was no fully acceptable way to resolve possible retained locks. Up to now, you had two
alternatives to resolve the retained locks:
򐂰 Wait until the failed image was IPLed and then restart DB2.



򐂰 Restart the DB2 member on another image and bring it back down once the retained locks
are resolved.

The first alternative can be unacceptable from an availability point of view. The problem with
the second alternative is that the DB2 restart requires a lot of resources, which impacts the
overall performance of the image on which the restart is initiated.

8.4.1 Description
The Restart Light capability brings DB2 up with a minimal memory footprint. Retained locks
are freed as part of the forward and backward recovery processes, and once this is
done DB2 terminates. No new work is allowed. All retained locks are freed except for:
򐂰 Locks that are held by indoubt units of recovery.
򐂰 Locks that are held by postponed abort units of recovery.
򐂰 IX mode page set P-locks. These locks do not block access from other DB2 members;
however, they do block drainers, such as utilities.

The light restart mode uses:


򐂰 No EDM/RID pool, LOB manager, RDS or RLF
򐂰 Reduced number of service tasks
򐂰 Only primary buffer pools with VPSIZE= min(vpsize,2000)
򐂰 VDWQT = 0.
򐂰 CASTOUT(NO) for shutdown
򐂰 PC=YES for IRLM if autostarted by DB2

8.4.2 Command syntax


The LIGHT(YES) option on the START DB2 command tells DB2 to perform a light restart.
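
For example, assuming a member with command prefix -DB1G, a light restart could be
requested as follows:

   -DB1G START DB2 LIGHT(YES)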

Message DSNY009I accompanies a light restart:


DSNY009I SUBSYSTEM STARTING IN LIGHT MODE,
NORMAL TERMINATION TO FOLLOW RELEASE OF RETAINED LOCKS

In a non data sharing environment the LIGHT(YES) parameter is ignored. If you start DB2
with this option, you receive the following message:
DSNY015I LIGHT(YES) ON START DB2 COMMAND WAS IGNORED,
SYSTEM IS NOT ENABLED FOR DATA SHARING.

8.4.3 Performance measurement


A set of performance measurements was carried out to compare a normal restart with a light
restart.



Performance measurement description
The measurement was executed using the IRWW workload on a 3-way data sharing group.
All three members were driven to a commit rate of about 66 commits/sec. in order to obtain a
data sharing group commit rate of 200 commits/sec. Once a steady state was observed the
second member was crashed. The number of retained locks was about 750. DB2 was
restarted on the first member by ARM. This scenario was executed once for the light restart
and once for a normal restart.

For this measurement the following hardware and software was used:
򐂰 Hardware
– 9672-ZZ7 — 12 CPUs
– Three LPARS — 12 logical CPUs per LPAR and 25% capping
– 2 9674 C05 CFs — 6 CPUs each
– Enterprise storage server
򐂰 Software
– OS/390 V2R8, CF micro code level 8
– DB2 V7
– IMS V6 DCCTL — 78 IMS regions — 750 terminals per member
– TPNS

Performance measurement results


Here we compare the memory usage, the restart time, and the shutdown time for a light
restart and a normal restart. Table 8-4 shows the virtual storage allocation for the EDM pool
and the buffer pools used in this measurement for the normal restart and the light restart. The
virtual storage used by DB2 was reduced by 373.5 MB. IRLM was started with parameter
PC=YES, which reduces the ECSA impact on the hosting system.
Table 8-4 RESTART LIGHT storage measurement

Pool        LIGHT(NO) #pages   LIGHT(YES) #pages   Reduction #pages
EDM         5,000              0                   5,000
BP0         1,000              1,000               0
BP1         5,500              2,000               3,500
BP2         31,250             2,000               29,250
BP3         18,750             2,000               16,750
BP4         3,125              2,000               1,125
BP5         6,250              2,000               4,250
BP6         12,500             2,000               10,500
BP7         1,000              1,000               0
BP8         25,000             2,000               23,000
TOTAL       109,375            16,000              93,375
TOTAL (MB)  437.5              64.0                373.5



Table 8-5 summarizes the restart and shutdown times for the normal and light restart. The
gain in restart time is not as substantial as the gain in memory footprint. For the light restart
case, DB2 was stopped with CASTOUT(NO), which explains the faster shutdown.
Table 8-5 Restart Light — restart time measurements

LIGHT(NO) - in seconds LIGHT(YES) - in seconds

ARM delay 7 6

DB2 restart 67 55

DB2 shutdown 16 9

We noticed a longer restart time for the cross system restart than for a restart in place; this is
due to XCF signalling activity which takes place in order to allow the restarting DB2 to join the
data sharing group on another MVS image.

Further enhancements will concentrate on reducing the light restart time. The goal is that,
after a crash of an MVS image, a DB2 cross system light restart will be fast enough to avoid
transaction time-outs on the other members of the group due to the retained locks. The
RETLWAIT DSNZPARM parameter should be set to a non-zero value in this scenario, so that lock
requests on the other members wait for retained locks rather than being rejected immediately.

8.5 Persistent Coupling Facility Structure sizes


You can change the size of a Coupling Facility structure dynamically using the SETXCF
START,ALTER command. Prior to DB2 Version 7, this change in size was lost for subsequent
allocations of the structure:
򐂰 When rebuilding the structure, the value of INITSIZE in the CFRM policy was used.
򐂰 When starting group buffer pool (GBP) duplexing after a change was made to the size of
the structure, the secondary structure was allocated using the INITSIZE value in the
CFRM policy rather than using the current size of the primary structure.

DB2 V7 uses the currently allocated size of the SCA, Lock and GBP structures:
򐂰 When allocating a new Coupling Facility structure instance in response to a structure
rebuild
򐂰 When allocating a secondary structure to support duplexing
򐂰 When allocating a new Coupling Facility structure, after the size was changed by a
SETXCF START,ALTER command and the structure was subsequently deallocated.

DB2 now stores the currently allocated size of the SCA and GBP structures in the BSDS and
these sizes will be used when DB2 needs to allocate the structures.

DB2 will use the INITSIZE, if no SETXCF START,ALTER,STRNM=strname,SIZE=size command was issued
against the structure.
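
For example, a group buffer pool structure could be expanded dynamically with a command
like the following; the structure name and target size are illustrative:

   SETXCF START,ALTER,STRNM=DSNDB0G_GBP1,SIZE=65536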

Starting a new CFRM policy with a new INITSIZE will override the saved size of the structure.

The lock and SCA structures already have a form of size persistence, since the structures
remain allocated while all members of the Data Sharing group are stopped. Group buffer
pools (GBP) on the other hand are not.

If the entire data sharing group comes down, and all the Coupling Facility structures have
been deallocated, when the group comes back up, the lock structure will go back to its
INITSIZE and SCA and GBPs will be initialized to the last saved size.



8.6 Miscellaneous items
In this section we describe some miscellaneous enhancements.

8.6.1 Notifying incomplete units of recovery during shutdown


DB2 V7 produces message DSNR046I during normal DB2 member shutdown if indoubt or
postponed abort units of recovery exist; retained locks will remain because of the
incomplete units of recovery. These retained locks will continue to block access to the
affected DB2 data from other members.

If this message is issued, then you may choose to immediately restart the DB2 member in
order to resolve the incomplete units of recovery and remove the retained locks.

This warning is given in addition to the existing DSNR036I message that notifies you at
each DB2 checkpoint of any unresolved indoubt units of recovery.

8.6.2 More efficient message handling for CF/structure failure


DB2 V7 reduces the MVS to DB2 communication that occurs during a Coupling
Facility/structure failure. Messages not needed by DB2 are no longer exchanged with MVS.
This enhancement should improve CF/structure recovery time.

8.6.3 Automatic Alter for CF structures (Auto Alter)


This new function of OS/390 Version 2 Release 10 supports the automatic tuning of CF
structure size and ratios of structure objects in response to changing structure object usage.

You can define a structure's minimum and maximum sizes, and Auto Alter is then at liberty to
expand or contract the structure only within those customer-specified boundaries. You also
have control over the threshold percent full against which a structure is to be managed by
Auto Alter, and if desired, the installation has the ability to turn off the Auto Alter function for a
structure.
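
As an illustration of these controls, a structure definition in the CFRM policy might include
keywords along the following lines; the structure name, sizes, and threshold are examples only,
and MINSIZE, ALLOWAUTOALT, and FULLTHRESHOLD are the policy keywords that correspond to the
boundaries, enablement, and threshold described above:

   STRUCTURE NAME(DSNDB0G_GBP1)
             INITSIZE(32000)
             SIZE(64000)
             MINSIZE(16000)
             ALLOWAUTOALT(YES)
             FULLTHRESHOLD(80)
             PREFLIST(CF01,CF02)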

Changes made by Auto Alter are passed to and saved by DB2; see section 8.5, “Persistent
Coupling Facility Structure sizes” on page 175.

8.6.4 CF lock structure duplexing


With APAR PQ45407 IRLM 2.1 will connect to the IRLM lock structure with attributes allowing
duplexing if properly defined in the CF Policy by the user. This duplexing is dependent on the
System-Managed CF Structure Duplexing feature of z/OS V1R2.

8.6.5 CF lock structure size


The Coupling Facility lock structure contains two parts. The first part is a lock hash table used
to determine if there is inter-DB2 read/write interest on a particular hash class. The second
part is a list of the update locks that are currently held (sometimes called a modify lock list or
record list table). With APAR PQ44144 the division of the lock structure storage between
these two components can be controlled by the user through the value of the new parameter
HASH in the irlmproc or the IRLM MODIFY command.



Description
The value of HASH represents the number of hash entries required in the lock hash table of
the CF lock structure, in units of 1048576 entries. HASH can have a value of blank, 0, or any
exact power of 2 up to a maximum of 1024. For example, a value of HASH=32 would result in a
hash table size of 64 MB, assuming the width of each hash entry to be 2 bytes. The number of
hash entries for the group is determined in the following order of precedence, and the entry
width is controlled by the MAXUSRS value. Both of these are dictated by the first IRLM to
connect to the group during initial structure allocation or during a REBUILD:
򐂰 The value specified on the MODIFY irlmproc,SET,HASH= command if > 0.
򐂰 The value from the HASH parameter in the IRLMPROC if > 0.
򐂰 The existing logic, which determines the nearest power of 2 after dividing the XES
structure size returned on the IXCQUERY call by 2 * hash width based on MAXUSRS.
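
For example, the following console command would request 32M hash entries (a 64 MB hash
table with 2-byte entries) at the next initial structure allocation or rebuild; the IRLM
procedure name DB0GIRLM is an example:

   F DB0GIRLM,SET,HASH=32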

Recommendation
Choose a value for the INITSIZE of the CF Lock structure that is a power of 2. This enables
IRLM to allocate the Coupling Facility storage so that half will be used for lock hash table
entries and the remainder for the record table entries. If you plan to change the value for the
parameter HASH, validate the lock structure sizing with IBM support.

8.6.6 Purge retained locks


The MODIFY irlmproc,PURGE command releases IRLM locks retained due to a DB2, IRLM,
or system failure. The command causes all retained locks for the specified DB2 to be deleted
from the system, thereby making the protected resources available for update. Since retained
locks protect updated resources, the command should be used only after you understand which
resources are involved and what the consequences to data integrity are if the locks are deleted.
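
For example, to release the locks retained for a failed member DB2G, assuming its IRLM runs
under a started task named DB2GIRLM, the command might look like this:

   F DB2GIRLM,PURGE,DB2G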

8.6.7 CURRENT MEMBER register


A new special register is now provided to return the DB2 member name of the subsystem on
which the statement is executing. The value of this register is a string of eight characters,
padded if necessary. In non data sharing environments the returned value is a string of
blanks.

To set a host variable MEMB to the current DB2 member name:

EXEC SQL SET :MEMB = CURRENT MEMBER;
or
EXEC SQL VALUES(CURRENT MEMBER) INTO :MEMB;

This new register can be useful in several cases:


򐂰 For applications that need to know if the DB2 they are connected to is a member of a data
sharing group or not
򐂰 When merging DB2 subsystems with applications that have logic pointing to the initial
location name
򐂰 When assigning partitions of data to separate members in order to avoid insert hot spots
This function is provided by PTF UQ51114 for APAR PQ44671.




Chapter 9. Performance tools


This chapter provides a functional description of the performance related enhancements to
performance tools in DB2 V7:
򐂰 IFI enhancements for V7: Several new IFCIDs have been added, others have changed,
and the DSNZPARM has a new parameter to help in synchronizing the writing of IFCID
records.
򐂰 DB2 PM changes and new functions: Statistics and accounting records have been
improved to help in identifying the event involved.
򐂰 Index Advisor: This is an extension of the DB2 for UNIX and Windows optimizer that
provides recommendations for indexes based on the tables and views defined in the
database, the existing indexes, and the SQL workload.
򐂰 DB2 Estimator: Several new DB2 for OS/390 V7 functions have been added to DB2
Estimator, including: the new UNLOAD utility, more parallel LOAD of partitions, Unicode
encoding scheme, new built-in functions, the FETCH FIRST n ROWS ONLY clause, and
scrollable cursors.



9.1 IFI enhancements
This section details changes to the Instrumentation Facility Interface (IFI).

9.1.1 New IFCIDs


A number of new IFCIDs are added in Version 7. These are summarized in Table 9-1.
Table 9-1 New IFCIDs

IFCID   Trace Type            Class    Description
0217    Global                10       Records detailed information about storage usage in the DBM1 address space.
0219    Audit / Performance   8 / 10   Records use of LISTDEFs by utilities.
0220    Audit / Performance   8 / 10   Records information about dynamically allocated utility output data sets.
0225    Statistics            6        Summary of storage usage in the DBM1 address space.
0319    Audit                 8        Records Kerberos security translation of user IDs.

IFCID 0217
This record details the amount of available storage in the DBM1 address space, the amount
of storage for MVS use, the total Getmained stack storage and the total Getmained storage.
This is followed by information on each DBM1 storage pool and each agent storage pool. If
there are more than 250 such pool entries, they will overflow to another IFCID 217 record. For
each pool, the total storage used is recorded. For agent pools, the thread is identified by
Authorization ID, Correlation ID, Connection Name, and Plan Name. The description of this
IFCID is shown in Figure 9-1.

IFCID 0219
Each time a LISTDEF is used by a utility it is recorded by this IFCID. The record contains the
list name and size in bytes as well as the list type. The list type is T for a table space list, I for
an index space list, or M for a mixed list.

IFCID 0220
This record is written whenever a dynamically allocated utility data set is closed. It contains
the DD name, the data set name, and the template name. It also contains the number of
reads, writes, and check macro invocations, and the number of times an end of volume
condition occurred during writing. The timestamp of the first open for the data set, the device
type (disk or tape) and the I/O wait time in milliseconds for the data set are also recorded.

IFCID 0225
This record summarizes the information from IFCID 0217. See 4.3.2, “Database address
space — virtual storage consumption” on page 92 for some more information. The
description of this IFCID is shown in Figure 9-2.



IFCID 0319
When DB2 receives a non-RACF identity that represents a remote user, it maps that identity
to a local User ID for use in connection processing. This IFCID records that mapping. It
records the SNA or TCP/IP address from which the request originated and the User ID
assigned. At present this mapping only takes place when Kerberos security is used.

*/********************************************************************/
*/* IFCID 0217 for storage manager pool statistics. */
*/********************************************************************/
* DCL 1 QW0217HE BASED BDY(WORD), /* IFCID(QWHS0217) HEADER */
* 3 QW0217AV FIXED(32), /* AMOUNT OF AVAIL STORAGE */
* 3 QW0217MV FIXED(32), /* AMOUNT OF STG FOR MVS USAGE */
* 3 QW0217CR FIXED(32), /* STG RSRVD ONLY FOR MUST CMPLT */
* 3 QW0217SO FIXED(32), /* STG CUSHION WARNING TO CONTRACT*/
* 3 QW0217AL FIXED(32), /* TOTAL GETMAINED STACK STORAGE */
* 3 QW0217GM FIXED(32), /* TOTAL GETMAINED STORAGE */
* 3 * CHAR(8); /* Reserved */
* DCL 1 QW02172 BASED BDY(WORD), /* POINTED TO BY QWT02R20 */
* 3 QW0217PH FIXED(32), /* (S) */
* 3 QW0217ST FIXED(32), /* TOTAL STORAGE IN THE POOL */
* 3 QW0217CL FIXED(8), /* STORAGE CLASS */
* 3 QW0217BP FIXED(8), /* MVS SUBPOOL */
* 3 QW0217FL BIT(8), /* FLAGS */
* 5 QW0217FX BIT(1), /* 1=FIXED-STORAGE POOL */
* 5 QW0217VR BIT(1), /* 1=VARIABLE-STORAGE POOL */
* 5 QW0217LS BIT(1), /* 1=MORE QW02172 DATA WILL FOLLOW*/
* /* 0=THIS IS THE LAST QW02172 DATA*/
* 3 * FIXED(8), /* Not used */
* 3 QW0217DE CHAR(24), /* STORAGE POOL DESCRIPTION */
* 3 * CHAR(8); /* Reserved */
* DCL 1 QW02173 BASED BDY(WORD), /* POINTED TO BY QWT02R30 */
* 3 QW02173H FIXED(32), /* (S) */
* 3 QW02173T FIXED(32), /* TOTAL STORAGE IN THE POOL */
* 3 QW02173L FIXED(8), /* STORAGE CLASS */
* 3 QW02173P FIXED(8), /* MVS SUBPOOL */
* 3 QW02173F BIT(8), /* FLAGS */
* 5 QW02173X BIT(1), /* 1=FIXED-STORAGE POOL */
* 5 QW02173R BIT(1), /* 1=VARIABLE-STORAGE POOL */
* 5 QW02173S BIT(1), /* 1=MORE QW02172 DATA WILL FOLLOW*/
* /* 0=THIS IS THE LAST QW02173 DATA*/
* 5 QW02173A BIT(1), /* 1=PARENT TASK FOR PARALLELISM */
* 5 QW02173I BIT(1), /* 1=CHILD TASK FOR PARALLELISM */
* 3 * FIXED(8), /* Not used */
* 3 QW02173C FIXED(32), /* (S) */
* 3 QW0217QC CHAR(8), /* AUTHORIZATION ID */
* 3 QW0217QR CHAR(12), /* CORRELATION ID */
* 3 QW0217QN CHAR(8), /* CONNECTION NAME */
* 3 QW0217QP CHAR(8), /* PLAN NAME */
* 3 QW0217QD CHAR(16), /* THE END USER'S USERID AT THE
* USER'S WORKSTATION */
* 3 QW0217QX CHAR(32), /* THE END USER'S TRANSACTION NAME*/
* 3 QW0217QW CHAR(18), /* THE END USER'S WORKSTATION NAME*/
* 3 * FIXED(16), /* Not used */
* 3 * CHAR(8); /* Reserved */
* DCL 1 QW02174 BASED BDY(WORD), /* POINTED TO BY QWT02R40 */
* 3 QW02174D FIXED(32), /* TOTAL DICTIONARY STORAGE */
* 3 QW02174T FIXED(32), /* TOTAL STORAGE IN THE AGENT LOCAL
* POOLS */
* 3 QW02174A FIXED(32), /* # OF ACTIVE ALLIED THREADS */
* 3 QW02174C FIXED(32), /* # OF CASTOUT ENGINES */
* 3 QW02174E FIXED(32), /* # OF P-LOCK/NOTIFY EXIT ENGINES*/
* 3 QW02174F FIXED(32), /* # OF PREFETCH ENGINES */
* 3 QW02174G FIXED(32), /* # OF GBP WRITE ENGINES */
* 3 QW02174W FIXED(32), /* # OF DEFERRED WRITE ENGINES */

Figure 9-1 IFCID 0217 description



*/********************************************************************/
*/* IFCID 0225 for storage manager pool statistics. */
*/********************************************************************/
* DCL 1 QW0225 BASED BDY(WORD), /* IFCID(QWHS0225) */
* 3 QW0225AL FIXED(32), /* TOTAL AGENT LOCAL POOL STORAGE */
* 3 QW0225AS FIXED(32), /* TOTAL AGENT SYSTEM STORAGE */
* 3 QW0225AV FIXED(32), /* AMOUNT OF AVAIL STORAGE */
* 3 QW0225CD FIXED(32), /* TOTAL COMP DICTIONARY STORAGE */
* 3 QW0225CR FIXED(32), /* STG RSRVD ONLY FOR MUST CMPLT */
* 3 QW0225FX FIXED(32), /* TOTAL FIXED STORAGE */
* 3 QW0225GM FIXED(32), /* TOTAL GETMAINED STORAGE */
* 3 QW0225GS FIXED(32), /* TOTAL GETMAINED STACK STORAGE */
* 3 QW0225MV FIXED(32), /* AMOUNT OF STG FOR MVS USAGE */
* 3 QW0225PM FIXED(32), /* TOTAL PIPE MANAGER SUBPOOL STG */
* 3 QW0225RO FIXED(32), /* TOTAL RDS OP POOL STORAGE */
* 3 QW0225RP FIXED(32), /* TOTAL RID POOL STORAGE */
* 3 QW0225SB FIXED(32), /* TOTAL STATEMENT CACHE BLK STG */
* 3 QW0225SC FIXED(32), /* TOTAL STORAGE FOR THREAD COPIES
* OF CACHED SQL STATEMENTS */
* 3 QW0225SO FIXED(32), /* STG CUSHION WARNING TO CONTRACT*/
* 3 QW0225TT FIXED(32), /* TOTAL BM/DM INTERNAL TRACE
* TABLE STORAGE */
* 3 QW0225VR FIXED(32), /* TOTAL VARIABLE STORAGE */
* 3 QW0225AT FIXED(32), /* # OF ACTIVE ALLIED THREADS */
* 3 QW0225CE FIXED(32), /* # OF CASTOUT ENGINES */
* 3 QW0225DW FIXED(32), /* # OF DEFERRED WRITE ENGINES */
* 3 QW0225GW FIXED(32), /* # OF GBP WRITE ENGINES */
* 3 QW0225PF FIXED(32), /* # OF PREFETCH ENGINES */
* 3 QW0225PL FIXED(32), /* # OF P-LOCK/NOTIFY EXIT ENGINES*/
* 3 * CHAR(8); /* Reserved */

Figure 9-2 IFCID 0225 description

9.1.2 Changed IFCIDs


Table 9-2 summarizes the changes to existing IFCIDs.
Table 9-2 Changed IFCIDs

IFCID Changes

0001 Field added to record number of -SET SYSPARM commands executed.

0002 Page P-lock counters recorded at the Group Bufferpool level. Field added to record
number of times pages were added to LPL.

0003 More granular information on wait times for global locks. Page P-lock counters recorded
at the Group Bufferpool level.

0022 Fields added corresponding to new PLAN_TABLE columns.

0023 Add flags to track keywords used in utility invocation.

0023, 0024, 0025  These records are now written for each subtask of a utility that uses multiple
subtasks (REORG and LOAD). They are also written for the new utilities (UNLOAD, MODIFY
HISTORY, and COPYTOCOPY).

0106 SYNCVAL parameter value added.

0147 More granular information on wait times for global locks.

0148 More granular information on wait times for global locks. Page P-lock counters recorded
at the Group Bufferpool level. Fields added to track DDF block fetch.

0150 An on-line monitor can restrict the records returned to either only locks with waiters or
only locks in which multiple agents have an interest.


0254 This IFCID, which records statistics for a CF cache structure (Group Buffer Pool) is now
written at the Statistics Interval. It is added to Statistics Class 5.

0312 This record is no longer written.

0313 Field added to indicate if long running UR was detected by number of checkpoints
(parameter URCHKTH) or number of log records written (parameter URLGWTH).

0314 Field added to record DBADM authority on database.

IFCIDs 0002, 0003, 0148: page P-lock counters


New counters are used for both Statistics and Accounting traces to record detail on page
P-lock requests in a data sharing environment. These counters record the following:
򐂰 Number of page P-lock requests for space map pages
򐂰 Number of page P-lock requests for data pages
򐂰 Number of page P-lock requests for index leaf pages
򐂰 Number of page P-lock unlock requests
򐂰 Number of page P-lock suspensions for space map pages
򐂰 Number of page P-lock suspensions for data pages
򐂰 Number of page P-lock suspensions for index leaf pages

The Statistics Trace also includes the following additional counters:


򐂰 Number of page P-lock negotiations for space map pages
򐂰 Number of page P-lock negotiations for data pages
򐂰 Number of page P-lock negotiations for index leaf pages

This information will be reported by DB2 PM V7 when APAR PQ46636 is applied.

IFCIDs 0003, 0147, 0148: global contention


The Class 3 wait time for global contention (data sharing only) was reported as a single value
in Version 6. See Figure 9-3.



CLASS 3 SUSPENSIONS AVERAGE TIME AV.EVENT
-------------------- ------------ --------
LOCK/LATCH(DB2+IRLM) 0.000000 0.00
SYNCHRON. I/O 0.003100 2.00
DATABASE I/O 0.003100 2.00
LOG WRITE I/O 0.000000 0.00
OTHER READ I/O 0.000000 0.00
OTHER WRTE I/O 0.000000 0.00
SER.TASK SWTCH 0.004165 1.00
UPDATE COMMIT 0.004165 1.00
OPEN/CLOSE 0.000000 0.00
SYSLGRNG REC 0.000000 0.00
EXT/DEL/DEF 0.000000 0.00
OTHER SERVICE 0.000000 0.00
ARC.LOG(QUIES) 0.000000 0.00
ARC.LOG READ 0.000000 0.00
STOR.PRC SCHED 0.000000 0.00
UDF SCHEDULE 0.000000 0.00
DRAIN LOCK 0.000000 0.00
CLAIM RELEASE 0.000000 0.00
PAGE LATCH 0.000000 0.00
NOTIFY MSGS 0.000000 0.00
GLOBAL CONT. 0.000000 0.00
FORCE-AT-COMMIT 0.000000 0.00
ASYNCH IXL REQUESTS 0.000000 0.00
TOTAL CLASS 3 0.007265 3.00

Figure 9-3 DB2 PM Accounting Class 3

In Version 7 this is now broken down into a number of pairs of elapsed time and event
counters as follows:
򐂰 Waits for parent L-locks (database, table space, table, or partition)
򐂰 Waits for child L-locks (page, or row)
򐂰 Waits for other L-locks
򐂰 Waits for pageset and partition P-locks
򐂰 Waits for page P-locks
򐂰 Waits for other P-locks

This information will be reported by DB2 PM V7 when APAR PQ46636 is applied.



9.1.3 DSNZPARM to synchronize IFI records
The writing of Statistics Trace IFCIDs can be synchronized with the clock. The new parameter
SYNCVAL in the DSN6SYSP macro controls this option. Acceptable values are NO and 0 to
59. NO means no synchronization, and is the default. A numeric value is the number of
minutes past the hour at which Statistics records should be written.

The parameter has no effect if the STATIME parameter is greater than 60. It is set on the
install panel DSNTIPN, as shown in Figure 9-4. This parameter can be used to synchronize
all members of a data sharing group so that they write their statistics trace records at the
same times. It can also be used to synchronize between the DB2 statistics trace and RMF.

DSNTIPN INSTALL DB2 - TRACING PARAMETERS


===> _
Enter data below:
1 AUDIT TRACE ===> NO Audit classes to start. NO,YES,list
2 TRACE AUTO START ===> NO Global classes to start. YES,NO,list
3 TRACE SIZE ===> 64K Trace table size in bytes. 4K-396K
4 SMF ACCOUNTING ===> 1 Accounting classes to start. NO,YES,list
5 SMF STATISTICS ===> YES Statistics classes to start. NO,YES,list
6 STATISTICS TIME ===> 30 Time interval in minutes. 1-1440
7 STATISTICS SYNC ===> NO Synchronization within the hour. NO,0-59
8 DATASET STATS TIME ===> 5 Time interval in minutes. 1-1440
9 MONITOR TRACE ===> NO Monitor classes to start. NO,YES,list
10 MONITOR SIZE ===> 8K Default monitor buffer size. 8K-1M

PRESS: ENTER to continue RETURN to exit HELP for more information

New SYNCVAL parameter

Figure 9-4 DSNTIPN panel

If STATIME=15 and SYNCVAL=5, then Statistics records will be written at 5, 20, 35, and 50
minutes past each hour.

9.2 DB2 PM changes and new functions


There are enhancements to both the Statistics and Accounting Reports of DB2 PM.

9.2.1 Statistics Report Long


DB2 PM Version 7 can report the data set I/O statistics gathered by IFCID 199. They can be
printed using the Statistics Report Long using the keyword DSETSTAT, as shown in
Figure 9-5.



//PAOLOR7M JOB (999,NT),'DB2PM',CLASS=A,MSGCLASS=T,
// NOTIFY=&SYSUID,TIME=1440,REGION=0M
//PMV510   EXEC PGM=DB2PM
//STEPLIB  DD DSN=DB2PMV7.SDGOLOAD,DISP=SHR
//INPUTDD  DD DSN=PAOLOR7.SG246129.TRACEA,DISP=SHR
//DPMLOG   DD SYSOUT=A
//JOBSUMDD DD SYSOUT=A
//SYSOUT   DD SYSOUT=A
//SYSIN    DD *
 STATISTICS
   REPORT
     DSETSTAT
     LAYOUT(LONG)
 EXEC

Figure 9-5 DB2 PM Statistics Report Long with data set statistics

The data set statistics are reported after the buffer pool information. The output looks as
shown in Figure 9-6.

For each buffer pool, only the 10 most active data sets by I/O are reported. The column
DATABASE/SPACENAM/PART reports the pageset name and partition (or data set) number.
TYPE/GBP identifies the pageset as a table space (TSP) or an index space (IDX) and
whether or not it is GBP-dependent (only relevant for DB2 data sharing).

SYNCH I/O AVERAGE is the average number of synchronous I/Os per second, ASYNCH I/O
AVERAGE is the average number of asynchronous I/Os per second, ASY I/O PGS AVG is the
average number of pages read or written per asynchronous I/O. SYN I/O AVERAGE
DELAY/SYN I/O MAX DELAY are the average and maximum I/O time in milliseconds for
synchronous I/Os. The equivalent asynchronous figures are per page.

Finally, the current number of pages resident in the Virtual Buffer Pool and the Hiperpool are
reported together with the number of updated and not yet written pages in the VBP.

The restriction to the 10 most active data sets per buffer pool only applies to the DB2PM
Statistics Report Long. Trace records are generated for all data sets and the FILE and SAVE
options will process all IFCID 199 trace records.

The DSETSTAT keyword was added to DB2PM V6 by APAR PQ37807.



---- HIGHLIGHTS ----------------------------------------------------------------------------------------------------
INTERVAL START : 12/01/00 17:37:33.35 SAMPLING START: 12/01/00 17:37:33.35 TOTAL THREADS : 2189.00
INTERVAL END : 12/01/00 17:56:34.32 SAMPLING END : 12/01/00 17:56:34.32 TOTAL COMMITS : 2188.00
INTERVAL ELAPSED: 19:00.977262 OUTAGE ELAPSED: 0.000000 DATA SHARING MEMBER: N/A

BPOOL DATABASE TYPE SYNCH I/O AVG SYN I/O AVG DELAY ASYN I/O AVG DELAY CURRENT PAGES (VP)
SPACENAM GBP ASYNC I/O AVG SYN I/O MAX DELAY ASYN I/O MAX DELAY CHANGED PAGES (VP)
PART ASY I/O PGS AVG CURRENT PAGES (HP)
----- -------- ---- --------------- ----------------- ------------------ ------------------
BP0 DSNDB01 TSP 0.00 14.00 0.00 40.00
DBD01 N 0.00 21.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB06 IDX 0.00 7.00 0.00 4.00
DSNATX02 N 0.00 18.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB06 IDX 0.00 7.00 0.00 4.00
DSNDSX01 N 0.00 14.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB06 IDX 0.00 7.00 0.00 4.00
DSNDTX01 N 0.00 16.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB06 IDX 0.00 7.00 0.00 4.00
DSNDXX01 N 0.00 16.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB06 IDX 0.00 8.00 0.00 4.00
DSNTTX01 N 0.00 21.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB01 IDX 0.00 9.00 0.00 3.00
DSNSPT01 N 0.00 18.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB01 TSP 0.00 1.00 0.00 5.00
SCT02 N 0.00 1.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB01 TSP 0.00 11.00 0.00 2.00
SPT01 N 0.00 14.00 0.00 0.00
1 N/C 0.00
BP0 DSNDB01 TSP 0.00 1.00 0.00 5.00
SYSLGRNX N 0.00 1.00 0.00 0.00
1 N/C 0.00
BP1 DB246129 TSP 18.11 1.00 0.00 60.00
TS612903 N 0.00 89.00 0.00 0.00
4 N/C 0.00
BP1 DB246129 TSP 8.34 1.00 0.00 81.00
TS612903 N 0.00 28.00 0.00 0.00
3 N/C 0.00
BP1 DB246129 TSP 8.31 1.00 0.00 78.00
TS612903 N 0.00 46.00 0.00 0.00
2 N/C 0.00
BP1 DB246129 TSP 7.83 2.00 0.00 81.00
TS612903 N 0.00 65.00 0.00 0.00
1 N/C 0.00
BP7 DSNDB07 TSP 0.00 0.00 0.00 2.00
DSN4K01 N 0.00 0.00 0.00 1.00
1 N/C 0.00
BP7 DSNDB07 TSP 0.00 0.00 0.00 2.00
DSN4K02 N 0.00 0.00 0.00 1.00
1 N/C 0.00

Figure 9-6 DB2 PM Statistics Report data set statistics

The IFCID 199 records can also be printed using a record trace, as shown in Figure 9-7.



RECTRACE
TRACE
LEVEL(SHORT)
INCLUDE (IFCID(199) SUBSYSTEMID(DB2A))
EXEC

LOCATION: DB2A DB2 PERFORMANCE MONITOR (V7) PAGE: 1-1


GROUP: N/P RECORD TRACE - SHORT REQUESTED FROM: NOT SPECIFIED
MEMBER: N/P TO: NOT SPECIFIED
SUBSYSTEM: DB2A ACTUAL FROM: 11/29/00 22:3
DB2 VERSION: V7 PAGE DATE: 11/29/00
0PRIMAUTH CONNECT INSTANCE END_USER WS_NAME TRANSACT
ORIGAUTH CORRNAME CONNTYPE RECORD TIME DESTNO ACE IFC DESCRIPTION DATA
PLANNAME CORRNMBR TCB CPU TIME ID
-------- -------- ----------- ----------------- ------ --- --- -------------- ------------------------------------------------
SYSOPR DB2A B504F3F391C9 'BLANK' 'BLANK' 'BLANK'
SYSOPR 010.TSDD 'BLANK' 22:33:57.16386089 53 1 199 ACT. DATASETS NETWORKID: DB2A LUNAME: SCPDB2A LUWSEQ:
'BLANK' BP01 N/P
|--------------------------------------------------------------------------------------------------------------------
|TIME LSTATS STARTED 11/29/00 22:33:57.16
|--------------------------------------------------------------------------------------------------------------------
|DBID: 262 DBNAME: DBLP0001 GBP DEPENDEND: NO
|OBID: 2 OBNAME: TSLP0001 TYPE OF DATASET: DATA
|BPID: BP1
|
|SYNC.I/O FOR WRITE AND READ ASYNC.I/O FOR WRITE, READ, CASTOUT BUFFER POOL CACHED PAGES
|AVG. DELAY I/O (MS) 3 AVG. DELAY I/O (MS) 2 VPOOL CACHE CURR. 100
|MAX. DELAY I/O (MS) 136 MAX. DELAY I/O (MS) 15 VPOOL CACHE CHANGED 0
|TOTAL I/O PAGES 216 TOTAL I/O PAGES 857 HPOOL CACHE CURR. 0
| TOTAL I/O COUNT 34
|---------------------------------------------------------------------------------------------------------------------
|DBID: 262 DBNAME: DBLP0001 GBP DEPENDEND: NO
|OBID: 2 OBNAME: TSLP0001 TYPE OF DATASET: DATA
|BPID: BP1
|
|SYNC.I/O FOR WRITE AND READ ASYNC.I/O FOR WRITE, READ, CASTOUT BUFFER POOL CACHED PAGES
|AVG. DELAY I/O (MS) 4 AVG. DELAY I/O (MS) 3 VPOOL CACHE CURR. 82
|MAX. DELAY I/O (MS) 96 MAX. DELAY I/O (MS) 15 VPOOL CACHE CHANGED 55
|TOTAL I/O PAGES 201 TOTAL I/O PAGES 786 HPOOL CACHE CURR. 0
| TOTAL I/O COUNT 32

Figure 9-7 IFCID 199 record trace

If APAR PQ43357 is applied to DB2 PM, then some additional information is reported in the
Statistics Report and Trace for a data sharing environment as shown in Figure 9-8. The new
item is titled GBP-DEPENDENT GETPAGES and is reported for each Group Buffer Pool. It
records the number of getpages performed for GBP-dependent objects. It is used to indicate
the degree of data sharing.

GROUP BP0 QUANTITY


--------------------------- --------
GROUP BP HIT RATIO (%) N/C
GBP-DEPENDENT GETPAGES 0.00
SYN.READ(XI)-DATA RETURNED 0.00
SYN.READ(XI)-NO DATA RETURN 0.00
SYN.READ(NF)-DATA RETURNED 0.00
SYN.READ(NF)-NO DATA RETURN 0.00
CLEAN PAGES SYN.WRTN 0.00
CHANGED PGS SYN.WRTN 0.00
CLEAN PAGES ASYN.WRT 0.00
CHANGED PGS ASYN.WRT 0.00
REG.PG LIST (RPL) RQ 0.00
CLEAN PGS READ RPL 0.00
CHANGED PGS READ RPL 0.00
PGS READ FRM DASD AFTER RPL 0.00
ASYN.READ-DATA RETURNED 0.00
PAGES CASTOUT 0.00
EXPLICIT X-INVALIDATIONS 0.00
CASTOUT CLASS THRESH 0.00
GROUP BP CAST.THRESH 0.00
CASTOUT ENG.UNAVAIL. 0.00
WRITE ENG.UNAVAIL. 0.00
READ FAILED-NO STOR. 0.00
WRITE FAILED-NO STOR 0.00

Figure 9-8 Statistics report GBP information



DB2 PM V7 also has a new section containing page P-lock detail, by Group Buffer Pool, as
shown in Figure 9-9.

GROUP BP0 CONTINUED QUANTITY /SECOND /THREAD /COMMIT


---------------------- -------- ------- ------- -------
PAGE P-LOCK LOCK REQ 6.00 0.01 0.20 0.00
SPACE MAP PAGES 1.00 0.00 0.03 0.00
DATA PAGES 2.00 0.00 0.07 0.00
INDEX LEAF PAGES 3.00 0.00 0.10 0.00
PAGE P-LOCK UNLOCK REQ 4.00 0.01 0.13 0.00
PAGE P-LOCK LOCK SUSP 18.00 0.02 0.60 0.01
SPACE MAP PAGES 5.00 0.01 0.17 0.00
DATA PAGES 6.00 0.01 0.20 0.00
INDEX LEAF PAGES 7.00 0.01 0.23 0.01
PAGE P-LOCK LOCK NEG 27.00 0.04 0.90 0.02
SPACE MAP PAGES 8.00 0.01 0.27 0.01
DATA PAGES 9.00 0.01 0.30 0.01
INDEX LEAF PAGES 10.00 0.01 0.33 0.01

Figure 9-9 Statistics report P-lock detail

Here are the definitions of the items on the P-lock detail:

PAGE P-LOCK LOCK REQ: The sum of all page P-lock lock requests, which can be broken
down further as follows:
򐂰 SPACE MAP PAGES: The number of page P-lock lock requests for space map pages.
򐂰 DATA PAGES: The number of page P-lock lock requests for data pages.
򐂰 INDEX LEAF PAGES: The number of page P-lock lock requests for index leaf pages.

PAGE P-LOCK UNLOCK REQ: The number of page P-lock unlock requests.

PAGE P-LOCK LOCK SUSP: The sum of all page P-lock lock suspensions, which can be
broken down further as follows:
򐂰 SPACE MAP PAGES: The number of page P-lock lock suspensions for space map pages.
򐂰 DATA PAGES: The number of page P-lock lock suspensions for data pages.
򐂰 INDEX LEAF PAGES: The number of page P-lock lock suspensions for index leaf pages.

PAGE P-LOCK LOCK NEG: The sum of all page P-lock lock negotiations, which can be
broken down further as follows:
򐂰 SPACE MAP PAGES: The number of page P-lock lock negotiations for space map pages.
򐂰 DATA PAGES: The number of page P-lock lock negotiations for data pages.
򐂰 INDEX LEAF PAGES: The number of page P-lock lock negotiations for index leaf pages.

The Statistics Report Long will also, in the near future, include the DBM1 storage information
from IFCID 225. The new Storage Statistics section of the report will look something like
Figure 4-5 on page 92.

9.2.2 Accounting Report


The Accounting Report shows more detail for the Class 3 times and highlights, as shown in
Figure 9-10.

These are the new items in the Class 3 times, introduced by DB2 in Version 6:



򐂰 FORCE-AT-COMMIT: The time spent waiting for LOB values to be written to the LOB table
space at commit.
򐂰 ASYNCH IXL REQUESTS: The time spent waiting for asynchronous IXLCACHE and
IXLFCOMP requests in a data sharing environment.

CLASS 3 SUSPENSIONS AVERAGE TIME AV.EVENT HIGHLIGHTS


-------------------- ------------ -------- --------------------------
LOCK/LATCH(DB2+IRLM) 0.000000 0.00 #OCCURRENCES : 1
SYNCHRON. I/O 0.003100 2.00 #ALLIEDS : 1
DATABASE I/O 0.003100 2.00 #ALLIEDS DISTRIB: 0
LOG WRITE I/O 0.000000 0.00 #DBATS : 0
OTHER READ I/O 0.000000 0.00 #DBATS DISTRIB. : 0
OTHER WRTE I/O 0.000000 0.00 #NO PROGRAM DATA: 0
SER.TASK SWTCH 0.004165 1.00 #NORMAL TERMINAT: 1
UPDATE COMMIT 0.004165 1.00 #ABNORMAL TERMIN: 0
OPEN/CLOSE 0.000000 0.00 #CP/X PARALLEL. : 0
SYSLGRNG REC 0.000000 0.00 #IO PARALLELISM : 0
EXT/DEL/DEF 0.000000 0.00 #INCREMENT. BIND: 0
OTHER SERVICE 0.000000 0.00 #COMMITS : 1
ARC.LOG(QUIES) 0.000000 0.00 #ROLLBACKS : 0
ARC.LOG READ 0.000000 0.00 #SVPT REQUESTS : 0
STOR.PRC SCHED 0.000000 0.00 #SVPT RELEASE : 0
UDF SCHEDULE 0.000000 0.00 #SVPT ROLLBACK : 0
DRAIN LOCK 0.000000 0.00 MAX SQL CASC LVL: 0
CLAIM RELEASE 0.000000 0.00 UPDATE/COMMIT : 0.00
PAGE LATCH 0.000000 0.00 SYNCH I/O AVG. : 0.001550
NOTIFY MSGS 0.000000 0.00
GLOBAL CONT. 0.000000 0.00
FORCE-AT-COMMIT 0.000000 0.00
ASYNCH IXL REQUESTS 0.000000 0.00
TOTAL CLASS 3 0.007265 3.00

Figure 9-10 Accounting report Class 3 changes

Waits for Commit are now reported in three places in the Class 3 times:
򐂰 The LOG WRITE I/O figure under SYNCHRON. I/O is the time spent waiting for log write
during Phase 1 of a Two-Phase Commit.
򐂰 The UPDATE COMMIT figure under SER. TASK SWTCH is the time spent during Phase 2
of a Two-Phase Commit or during a One-Phase Commit (Synchronization Commit). This
includes any wait for log write.
򐂰 The FORCE-AT-COMMIT figure is the time spent writing LOB values back to the LOB
table space at Commit.
Note that FORCE-AT-COMMIT does not include time spent writing pages to the Group
Buffer Pools at Commit in a data sharing environment.

The new items in the Highlights section are:


򐂰 #SVPT REQUESTS: The number of SAVEPOINT requests executed.
򐂰 #SVPT RELEASE: The number of RELEASE SAVEPOINT requests executed.
򐂰 #SVPT ROLLBACK: The number of ROLLBACK TO SAVEPOINT requests executed.



If APARs PQ43357 and PQ46636 are applied to DB2 PM, then additional information will be
reported for the Accounting Report and Trace in a data sharing environment. The
components of Class 3 Global Contention wait time are reported in two new sections, shown
in Figure 9-11. These will be reported if APAR PQ46636 is applied.

Global contention on L-locks:

GLOBAL CONTENTION L-LOCKS    AVERAGE TIME   AV.EVENT
---------------------------  -------------  --------
L-LOCKS                          0.000000       0.00
  PARENT (DB,TS,TAB,PART)        0.000000       0.00
  CHILD                          0.000000       0.00
  OTHER                          0.000000       0.00

Global contention on P-locks:

GLOBAL CONTENTION P-LOCKS    AVERAGE TIME   AV.EVENT
---------------------------  -------------  --------
P-LOCKS                          0.000000       0.00
  PAGESET/PARTITION              0.000000       0.00
  PAGE                           0.000000       0.00
  OTHER                          0.000000       0.00

Figure 9-11 Accounting global contention detail

The definitions of the L-lock items are as follows:

L-LOCKS: The accumulated wait time due to global contention for all L-locks, and the number
of wait trace events processed for waits for global contention for all L-locks.

PARENT (DB,TS,TAB,PART): The accumulated wait time due to global contention for parent
L-locks. Parent L-locks are any of these L-lock types: database, tablespace, table, partition.
The number of wait trace events processed for waits for global contention for parent L-locks.

CHILD: The accumulated wait time due to global contention for child L-locks. Child L-locks
are any of these L-lock types: page, row. The number of wait trace events processed for waits
for global contention for child L-locks.

OTHER: The accumulated wait time due to global contention for other L-locks. The number of
wait trace events processed for waits for global contention for other L-locks.

The definitions of the P-lock items are as follows:

P-LOCKS: The accumulated wait time due to global contention for all P-locks, and the number
of wait trace events processed for waits for global contention for all P-locks.

PAGESET/PARTITION: The accumulated wait time due to global contention for


pageset/partition P-locks. The number of wait trace events processed for waits for global
contention for pageset/partition P-locks.

PAGE: The accumulated wait time due to global contention for page P-locks. The number of
wait trace events processed for waits for global contention for page P-locks.

OTHER: The accumulated wait time due to global contention for other P-locks. The number of
wait trace events processed for waits for global contention for other P-locks.



Further information on page P-locks will be reported, if APAR PQ46636 is applied, as shown
Figure 9-12. The GBP-DEPEND GETPAGES is added by APAR PQ43357.

GROUP BP0 AVERAGE TOTAL


-------------------- -------- --------
GBP-DEPEND GETPAGES N/A N/A Added by APAR PQ43357
READ(XI)-DATA RETUR 0.00 0
READ(XI)-NO DATA RT 0.00 0
READ(NF)-DATA RETUR 0.00 0
READ(NF)-NO DATA RT 0.00 0
PREFETCH PAGES READ 0.00 0
CLEAN PAGES WRITTEN 0.00 0
CHANGED PAGES WRTN 0.00 0
UNREGISTER PAGE 0.00 0
ASYNCH GBP REQUESTS 0.00 0
EXPLICIT X-INVALID 0.00 0
WRITE TO SEC-GBP 0.00 0
ASYNCH SEC-GBP REQ 0.00 0
PG P-LOCK LOCK REQ 0.00 0
SPACE MAP PAGES 0.00 0
DATA PAGES 0.00 0
INDEX LEAF PAGES 0.00 0
PG P-LOCK UNLOCK REQ 0.00 0 Added by APAR PQ46636
PG P-LOCK LOCK SUSP 0.00 0
SPACE MAP PAGES 0.00 0
DATA PAGES 0.00 0
INDEX LEAF PAGES 0.00 0

Figure 9-12 Accounting GBP information

The definitions of the new items are as follows:

GBP-DEPEND GETPAGES: The number of getpages done for GBP-dependent objects. It is


used to indicate the degree of data sharing.

PG P-LOCK LOCK REQ: The number of all page P-lock lock requests. This is further broken
down by type of page:
򐂰 SPACE MAP PAGES: The number of page P-lock lock requests for space map pages.
򐂰 DATA PAGES: The number of page P-lock lock requests for data pages.
򐂰 INDEX LEAF PAGES: The number of page P-lock lock requests for index leaf pages.

PG P-LOCK UNLOCK REQ: The number of page P-lock unlock requests.

PG P-LOCK LOCK SUSP: The sum of all page P-lock lock suspensions. This is further
broken down by type of page:
򐂰 SPACE MAP PAGES: The number of page P-lock lock suspensions for space map pages.
򐂰 DATA PAGES: The number of page P-lock lock suspensions for data pages.
򐂰 INDEX LEAF PAGES: The number of page P-lock lock suspensions for index leaf pages.

9.3 Index Advisor


The Index Advisor is part of DB2 UDB for UNIX, Windows, and OS/2 (DB2 UWO). It is an
extension of the DB2 UWO Optimizer. It makes recommendations for indexes based on:
򐂰 The tables and views defined in the database
򐂰 The existing indexes
򐂰 The SQL workload



The recommendations provide before and after costs and the DDL to create the
recommended indexes.

The SQL workload is a group of related SQL statements processed over a period of time. The
workload is stored in a workload table for input to the Index Advisor. Each SQL statement has
associated with it:
򐂰 Its frequency (how often it executes)
򐂰 Its importance (business importance — set manually)
򐂰 Its weight, calculated as (frequency * importance)

The Index Advisor process is being extended to support DB2 for z/OS and OS/390 Version 7.
DB2 for z/OS is supported by modeling a DB2 for z/OS subsystem on a DB2 UWO system,
thus allowing Index Advisor to analyze the model.

9.3.1 Index Advisor process for DB2 for z/OS and OS/390
The process consists of:
򐂰 A set of tools to capture DB2 for OS/390 metadata, catalog statistics, and SQL workload
򐂰 A procedure for creating the model and using Index Advisor to analyze it

This process is shown in Figure 9-13. The software prerequisites are:


򐂰 DB2 for z/OS and OS/390 Version 7
򐂰 DB2 for UWO Version 7
򐂰 DB2 Connect

[Figure 9-13 depicts the process flow: system configuration parameters, metadata (tables,
views, indexes), catalog statistics, and the SQL workload are extracted from DB2 for OS/390
into DB2 UWO; the Index Advisor then analyzes the model and returns Create Index
recommendations to the DBA.]
Figure 9-13 Index Advisor process for DB2 for z/OS and OS/390



The process alters certain configuration settings of the database manager for the DB2 UWO
instance that is used to model the DB2 for OS/390 subsystem. Therefore, it is recommended
that a dedicated DB2 UWO instance be used for the DB2 for z/OS and OS/390 Index Advisor
process.

9.3.2 Modeling DB2 for z/OS and OS/390


You must create a modeling database on DB2 UWO and update the configuration parameters
to mimic DB2 for OS/390. Guidelines are provided for setting the parameters. The application
table, view, and index definitions can be imported to the modeling database using the db2look
command.

The db2look command is a DB2 UWO tool. In Version 7 it is enhanced to allow connection to
DB2 for OS/390 using dynamic bind. It accesses the DB2 for OS/390 Catalog and generates:
򐂰 DDL for tables, views, and indexes
򐂰 DML for propagating the Catalog statistics

You can use the DB2 UWO Command Line Processor to process the DDL and DML files.
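
As a sketch, and assuming a cataloged DB2 Connect alias DB2H for the DB2 for OS/390
subsystem and a modeling database named MODELDB, the extraction and load might look like
the following; the exact db2look options available depend on your DB2 UWO fix level:

   db2look -d DB2H -e -m -o db2look.sql
   db2 CONNECT TO MODELDB
   db2 -tvf db2look.sql

Here the -e option generates the DDL and the -m option generates the statements that
propagate the catalog statistics; the output file is then applied to the modeling database
through the Command Line Processor.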

9.3.3 Collecting the SQL workload


You must identify an appropriate time period over which to collect workload data. Then you
need to do the following:
1. Run job DSNTEJWC on the mainframe.
2. Run job DSNTEJWF on the mainframe.
3. Use DB2 UWO EXPORT to transfer the data from the mainframe to the DB2 UWO workstation.
4. Use DB2 UWO LOAD to load the data into the Index Advisor workload table (steps 3 and 4
   are sketched below).
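
The following is only a sketch of steps 3 and 4; the source table name, file name, and user
credentials are placeholders, and the target is the Index Advisor workload table
(ADVISE_WORKLOAD in DB2 UWO):

   db2 CONNECT TO DB2H USER sysadm USING password
   db2 "EXPORT TO workload.ixf OF IXF SELECT * FROM SYSADM.DSN_WORKLOAD"
   db2 CONNECT TO MODELDB
   db2 "LOAD FROM workload.ixf OF IXF INSERT INTO ADVISE_WORKLOAD"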

A number of jobs are provided to set up and execute this process. These are described briefly
in Table 9-3.
Table 9-3 Index Advisor workload collection jobs

Job Description

DSNTEJWA Binds plans for workload collector and formatter programs.

DSNTEJWB Drops and recreates workload collector tables.

DSNTEJWC Calls dynamic workload collector (DSNZSCH). Starts traces for IFCIDs 0316 and
0317. Reads these IFCIDS via IFI and inserts them to the workload collector
tables on DB2 for OS/390.

DSNTEJWD Drops and recreates workload formatter tables.

DSNTEJWF Calls dynamic workload formatter (DSNZSCJ). Extracts SQL statements from
workload collector tables and reformats them into the format expected by the
Index Advisor. Inserts reformatted statements into workload formatter tables.



9.3.4 Running the Index Advisor
Once the metadata and SQL workload have been set up on DB2 UWO, you can run the Index
Advisor. To do this, you can either use the Index SmartGuide, which is in the DB2 UWO
Control Center under the Create Index tab, or use the db2advis command directly from the
command line.
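
As a sketch only, assuming the modeling database is named MODELDB and the workload has
already been loaded into the advisor's workload table, the command line form might be
invoked as follows; the time limit (in minutes) and disk limit (in MB) shown are arbitrary,
and the option letters may vary by DB2 UWO release:

   db2advis -d MODELDB -t 10 -l 100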

You can adjust the inputs as required:


򐂰 Add/remove/change SQL statement text
򐂰 Change frequency and importance
򐂰 Specify index space restrictions
򐂰 Specify analysis time limit

You can use the results to reaffirm your existing indexes and to ensure that you have not
overlooked beneficial indexes.

9.3.5 Considerations
The analysis time limit affects the completeness of the results. Running the advisor for a
longer time or without time limit may result in even better index suggestions.

The index space restrictions limit the completeness of the results. Deactivating this limit will
cause indexes to be recommended that improve performance regardless of the disk space
required for the indexes.

You will still need to decide the best index for clustering or partitioning; the Index Advisor
does not provide recommendations for these.

Be aware that there are differences between DB2 UWO and DB2 for OS/390, not least in that
they have different optimizers. The Index Advisor is based on the DB2 UWO optimizer, and
therefore its recommendations may ignore issues specific to DB2 for OS/390. However, the
criteria for determining good indexes are largely platform independent, and therefore the
recommendations of the Index Advisor will be appropriate for DB2 for OS/390 in most cases.

The initial implementation of the Index Advisor supports dynamic SQL with dynamic
statement caching enabled, that is, with CACHEDYN(YES) specified in your
DSNZPARM.

9.3.6 Availability
There will be an informal delivery of this facility with DB2 for z/OS and OS/390 Version 7. The
software will be downloadable from the DB2 for OS/390 web site.

A formal delivery is planned at some future date. This will include a Control Center front end
that provides increased automation of tasks and formal task guidance for tasks that are not
automated. It will also support workload collection for static SQL.



9.4 DB2 Estimator
The Windows based DB2 capacity planning modeling tool, enhanced for DB2 V7, provides:
򐂰 Bulk table change window
򐂰 Bulk SQL parsing window
򐂰 Bulk SQL cardinality repair window
򐂰 New "Getting Started" document available from Start menu
򐂰 Product Help, Tutorial program, README.TXT file available from Start menu
򐂰 Table and index partitions taken into account by Capacity Runs window
򐂰 DB2 V7 UNLOAD utility
򐂰 DB2 V7 more parallel LOAD partitions
򐂰 DB2 V7 Scrollable Cursors
򐂰 DB2 V7 FETCH FIRST n ROWS ONLY, and Fast Implicit Close
򐂰 DB2 V7 Self-referencing subselect on UPDATE/DELETE
򐂰 DB2 V7 Unicode encoding scheme
򐂰 DB2 V7 new built-in functions and special registers
򐂰 DB2 V7 FOR UPDATE clause, without column list
򐂰 DB2 V7 Statistics History option

Several new DB2 V7 functions have been added to DB2 Estimator, including: the new
UNLOAD utility, more parallel LOAD of partitions, Unicode encoding scheme, new built-in
functions, the FETCH FIRST n ROWS ONLY clause, and Scrollable Cursors.

The Capacity Runs window now reports table and index device busy information for an
average partition of the partitioned table or index. This shows the reduction in the
estimated device busy effects that results from partitioning.

A new function creates transactions from imported SQL based on the program name or
package name from which the SQL was extracted. In addition, a new facility modifies the
transaction environment after a transaction is created.

If you have created and saved the results of a capacity run for exporting, you can use the new
Predicate Analysis window to analyze transactions and their SQL to create a list of all
predicates used and the number of times each predicate is used per second. If the same
predicates are used many times without an applicable index, you can create a new index,
which is easy to do in DB2 Estimator.

Support for the LOAD RESUME utility has been added to DB2 Estimator. The new ESS and
RVA DASD devices and the latest S/390 CPUs have been added to the DB2 Estimator project
configuration file, MVS.CFG.

The new "Quick Check/Update Status of SubProj SQL" menu item quickly checks the status
of each SQL item and updates the SQL item's status if it has changed. This menu item allows
you to scroll down the sub-project's list of SQL items and see the actual SQL status for each
SQL item. The status can change from the desired CARDOK status to PARSEOK or even to
PARSEFAIL status if a table referenced by the SQL item has changed in one of many ways.

This menu item also produces a list of SQL items that are not in the desired CARDOK status.
If an SQL item was demoted from CARDOK status to PARSEOK status, you can open the
SQL item and proceed through the cardinality questions again, using the old answers only if
they still are correct. The SQL items should be in CARDOK status after this process.

Remember that DB2 Estimator is included in the DB2 Management Clients Package
no-charge feature of DB2 V7. However, for newer versions of DB2 Estimator, you should also
check the Web site:
ibm.com/software/data/db2/os390/estimate



10

Chapter 10. Synergy with host platform


DB2 can exploit several functions of the z/OS and OS/390 platform. In this chapter we discuss improvements that are not strictly internal to DB2 but still have a positive impact on DB2 performance.
򐂰 DB2 and zSeries: DB2 benefits from large real memory support, faster processors and
better hardware compression
򐂰 VSAM striping: Striping becomes available for VSAM data sets with DFSMS in OS/390
V2R10
򐂰 ESS enhancements: The IBM Enterprise Storage Server (ESS) exploits the Parallel
Access Volume and Multiple Allegiance features of OS/390. FlashCopy is the ESS feature
to use for DB2 cloning/backup in combination with log suspend/resume.



10.1 zSeries
The zSeries 900 and z/OS V1R1 deliver immediate performance benefits to DB2 in terms of real storage exploitation. The z900 hardware currently supports up to 64 GB of real storage. All supported releases of DB2 are compatible with z/OS.

64-bit support in z/OS V1R1 provides 64-bit arithmetic and 64-bit real addressing support.
Later releases of z/OS will support 64-bit virtual addressing which means support for 16
exabyte address spaces. OS/390 V2R10 will also deliver 64-bit real addressing support.

10.1.1 DB2 and zSeries overview


In this section we provide an overview of DB2 and zSeries.

64 GB central storage
All DB2 Versions will be able to receive immediate benefits from the increased capacity of
central memory—up to 64 GB, from the current limit of 2 GB. More DB2 subsystems can be
supported on a single OS image, without significant paging activity. The increased real
memory provides performance and improved scaling for all customers.

With Versions 6 and 7 of DB2, the key benefit of scaling with the larger memory is in the use of data spaces, first shipped with DB2 Version 6. Data spaces allow buffer pools and the EDM pool global dynamic statement cache to reside outside of the DBM1 address space. Now more data can be kept in memory, which can help reduce I/O time. Customers who are reaching the 2 GB address space limit should migrate to this solution.
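
For example, an existing virtual buffer pool can be moved into a data space with the ALTER BUFFERPOOL command. This is only a sketch: the buffer pool name and VPSIZE are illustrative, and the new VPTYPE takes effect the next time the buffer pool is allocated.

-ALTER BUFFERPOOL(BP2) VPTYPE(DATASPACE) VPSIZE(400000)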

Larger number of faster processors


DB2 query parallelism is positioned to take advantage of the additional, more powerful
processors in z900. The larger number of processors and additional storage will enable
higher degrees of parallelism for qualifying queries in all current DB2 versions.

Compression
In the G5/G6 generation of 9672 servers, hardware compression was implemented in microcode. Each z900 processing unit has a new on-chip Compression Unit. This new implementation provides better hardware compression performance, requiring 3 to 4 times fewer cycles than on the G6 servers.

Unicode
Unicode support in DB2 Version 7 will be able to use the new hardware translation services
provided through architecture expansion in future hardware.

DB2 256 GB central storage support PTFs


Data space buffer pools can add up to 32 GB of 4 KB page size pools and up to 256 GB of 32 KB page size pools. To benefit from the larger real memory and exploit 64-bit real addressing, the following PTFs must be applied.

PQ25914 — With this APAR data space buffers are allowed to be page fixed above 2 GB
without being moved.

PQ36933 — Virtual pool buffers and log buffers that reside above 2 GB real storage will not
be moved when they need to be page fixed for I/O.



PQ38174 — The asynchronous data mover facility (ADMF) was invented to improve storage-to-storage movement of large sets of data pages. The continued evolution of CMOS processor and memory technology in G5/G6 and now zSeries has improved synchronous data movement using the Move Page instruction to the point where its performance is on a par with ADMF. With this APAR, DB2 recognizes the presence of this Fast Synchronous Data Mover Facility, as it is now called, and uses Move Page in place of ADMF. Hiperpools can still be used even if the ADMF facility is not configured. DB2 will continue to use ADMF on pre-G5 machines. DB2 without this APAR will also continue to use ADMF on G5/G6.

10.1.2 Performance measurements


A set of measurements was executed at SVL to demonstrate the performance benefits of the zSeries servers. We summarize the results of these tests in this section. For a more detailed evaluation of DB2 performance on zSeries, see the white paper DB2 UDB for z/OS and OS/390 Performance on the IBM zSeries 900 server by David Witkowski, which is
available from the Web site:
ibm.com/software/data/db2/os390/support

Large real storage performance


In this section we cover various performance related features for large real storage.

Measurement environment
Here we provide the 31-bit and 64-bit specifications.

31-bit measurements:
򐂰 9672-ZZ7 (G6 turbo): 1.8 GB central storage, 6 GB expanded storage
򐂰 2 dedicated processors (OLTP-IRWW workload),
4 dedicated processors (query workload)
򐂰 ESS and RAMAC3 disk
򐂰 OS/390 V2R10
򐂰 DB2 V7

64-bit measurements:
򐂰 z900 model 2064-M116: 12 GB and 16 GB central storage, depending on the test
򐂰 2 dedicated processors (OLTP workload), 4 dedicated processors (query workload)
򐂰 ESS and RAMAC3 disk
򐂰 OS/390 V2R10, z/OS V1R1
򐂰 DB2 V7

Performance of buffer pools in data spaces in 31-bit mode


This test was run to compare the cost of buffer pools in data spaces to traditional virtual buffer
pools allocated in the DBM1 address space. The size of the buffer pools was 800 MB in both
cases and they were completely backed by 1.8 GB central storage (no paging).

A query scanned six million rows from 200,000 table space pages. The elapsed times were equivalent, but the CPU overhead of data spaces was measured at up to 10%. Transferring pages from data spaces to the lookaside buffers in the DBM1 address space is responsible for the CPU overhead.

In order to measure the effect of paging, a second test was executed, once with 1.2 GB primary buffer pools and once with 6 GB buffer pools in data spaces. Most paging was expected to go to expanded storage and some to auxiliary storage.



A query scanned 45 million rows from 1.5 million table space pages. The CPU overhead of data spaces was up to 60%, and the elapsed time was 4.5 times greater.

Large buffer pools in data spaces that are not backed by real storage do not perform well, and this test does not even demonstrate the worst case, because almost all paging was done to expanded storage.

Performance of buffer pools in data spaces in 64-bit mode


The same 45 million row scan as in the previous experiment was executed, but now the data
was cached in the data space buffer pool in order to achieve a 100% hit ratio. The results
from this test are shown in Figure 10-1. The CPU time dropped by 27% and the elapsed time
was cut by a factor of 50.

If we ran the same test with a 100% miss ratio, again the elapsed times would be almost
equivalent, and the CPU overhead due to data spaces would be 12%.

[Bar chart: elapsed time in seconds and CPU time in seconds for the primary buffer pool and the data space buffer pool, in the 100% miss and 100% hit cases]

Figure 10-1 Buffer pool in data spaces performance 64-bit

Performance of hiperpools versus data spaces


z/Architecture provides hiperpool support through the Fast Synchronous Data Mover. Hiperpools are backed by central storage.
򐂰 Read workload:
From a 1 million page table space, 30 million rows were read via a table space scan. With
a 100% buffer miss ratio, the performance of hiperpools and data space buffer pools was
observed to be equivalent. When both pool types were sized for a 100% buffer hit ratio, data space buffer pools were approximately 30% more efficient in terms of CPU and elapsed time. This is due to the extra data movement between the primary pool and the hiperpool.



򐂰 Update workload:
In this test, 7 million rows representing 1 GB of data were updated. Both the hiperpool and the data space buffer pool were sized to hold the entire 1 GB of data in order to avoid read I/O. The test with the data space buffer pool was only slightly more CPU efficient than the one with the hiperpool, but its elapsed time was only half. The exact degree of elapsed time benefit will depend on many factors, including I/O performance and buffer pool thresholds. See Figure 10-2 for the results.

[Bar chart: elapsed and CPU time in seconds for the hiperpool versus the data space buffer pool]

Figure 10-2 Data space versus hiperpool update performance

Compression performance
In this section we cover compression performance features.

Measurement environment
Here we provide specifications for the 9672 and z900 environments.

9672 environment:
򐂰 9672-ZZ7 (G6 turbo)
򐂰 4 dedicated processors
򐂰 OS/390 V2R10
򐂰 DB2 V7

z900 environment:
򐂰 z900 model 2064-M116
򐂰 4 dedicated processors
򐂰 OS/390 V2R10
򐂰 DB2 V7

In both environments, adequate storage was available in order to avoid database I/O and
system paging I/O.

Single thread table scan


We scanned 8 million rows from 290,000 non-compressed table space pages. To eliminate the effect of noise, this query was repeated 20 times, resulting in DB2 having to perform 5.8 million getpages. A reduction of 20-25% in elapsed time and CPU was measured when moving this workload from G6 to z900. Elapsed and CPU time were almost equivalent due to the absence of I/O.



We ran the same query against the same data, but now compressed, with a compression ratio of 21%. The results are shown in Figure 10-3. The same 160 million rows were scanned, but this time from 4.6 million getpages. Elapsed time and CPU cost were reduced by 60% when the workload was moved from G6 to z900. CPU overhead for decompression is reduced 3 to 4 times on z900 compared to the G6 machine.

[Bar chart: elapsed and CPU time in seconds for the single thread table scan against compressed data on G6 and z900]

Figure 10-3 Single thread table scan performance — compressed data

Insert subselect performance


An insert taking 4.4 million rows from a subselect (100% buffer hit) inserted them at the end of a table space. In non-compressed form, 142,000 pages were inserted; this compares to 65,000 pages in compressed form, for a compression ratio of 55%.

In the non-compressed test case, elapsed time was equivalent because the query had become I/O bound as a result of database and log write I/O. CPU cost improved by 25%. In the compressed test case, CPU time was reduced by 40%, and this reduction was significant enough to account for an elapsed time reduction of 30%. The CPU overhead of compression was reduced from 42% on G6 to only 12% on a z900 server. See Figure 10-4 for the results of these measurements.



[Bar chart: elapsed and CPU time for the insert subselect on G6 and z900, with non-compressed and compressed data]

Figure 10-4 Insert performance with and without compression

DB2 utility performance


Tests were performed to measure the benefit that the z900 faster processors have on utilities.
REORG, with and without SORTKEYS option, and RUNSTATS were run against compressed
and non-compressed table spaces. Utility CPU time is shown in Table 10-1.
Table 10-1 Utility CPU time

Partitioned table,                        G6        z900      Delta CPU
20 million rows, 2 indexes                CPU sec   CPU sec   %

REORG — compressed data                   699       426       -39.1
REORG — non-compressed data               393       331       -15.8
REORG SORTKEYS compressed data            662       380       -42.6
REORG SORTKEYS non-compressed data        355       288       -18.87
RUNSTATS — compressed data                479       334       -30.27
RUNSTATS — non-compressed data            411       318       -22.63

Utilities run against compressed data experienced a significant reduction in CPU utilization. For the REORG with SORTKEYS and compressed data, the reduction in CPU cost was almost 43%.



10.2 VSAM Data Striping
In this section we describe the DFSMS feature VSAM Data Striping and how DB2 can benefit
from this functionality.

Striping is a technique to improve the performance of data sets which are processed
sequentially. This is achieved by splitting the data set into segments or stripes and spreading
those stripes across multiple volumes. This technique has been available for non-VSAM data
sets since DFSMS/MVS Version 1 Release 1 and becomes available for VSAM data sets with
DFSMS in OS/390 V2R10.

10.2.1 Hardware versus software striping


Whereas hardware striping facilitates parallel access to multiple physical disks in the same
disk array, DFSMS striping or software striping enables parallel access to multiple channels,
to multiple disk arrays, and to multiple storage controllers.

10.2.2 Description of VSAM striping


DFSMS implements VSAM Data Striping by spreading control intervals in a control area
across multiple devices.

Characteristics of VSAM Data Striping are:


򐂰 Data sets must be SMS managed and in extended format; you can use the AMS REPRO
command to convert existing standard VSAM data sets to extended format or to revert
back to standard VSAM.
򐂰 Available for all VSAM data set organizations, including linear data sets.
򐂰 VSAM striping is limited to 16 stripes.
򐂰 Individual stripes can be extended to new volumes.

For a more detailed discussion of VSAM Data Striping see OS/390 V2R10 DFSMS Using
Data Sets, SC26-7339, and the redbook DFSMS Release 10 Technical Update, SG24-6120.
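
As an illustration of the conversion mentioned in the first characteristic above, you can define a new data set in an extended format data class (such as the sample DB2STRIP class shown in Appendix B, "Enabling VSAM I/O striping in OS/390 V2R10" on page 215) and copy the data into it with AMS REPRO; the data set names here are purely illustrative:

  REPRO INDATASET(PROD.ORDERS.KSDS) -
        OUTDATASET(PROD.ORDERS.KSDS.EXT)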

Notes
Extended format data sets require storage control units that support non-synchronous I/O
operation.

10.2.3 VSAM Data Striping and DB2


VSAM Data Striping is transparent to DB2 and will work with all versions of DB2.

In DB2, a form of data striping has been available since the introduction of partitioned table
spaces in DB2 Version 1 Release 3. Partitioning is, in fact, a method of data striping, with
each partition being a stripe. Concurrent parallel I/O against these partitions/stripes is
possible in DB2 as long as the partition pagesets are allocated on different volumes.
Additionally, the use of the keyword piecesize, introduced in DB2 V4 on a CREATE INDEX
statement, implements a kind of data striping for non-partitioned indexes. Also, ESS devices
and PAV can reduce contention (see 10.3, “ESS enhancements” on page 206.)
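
As an illustration of the PIECESIZE keyword mentioned above, the following CREATE INDEX statement spreads a non-partitioned index over multiple data sets of at most 2 GB each; the index, table, and column names are purely illustrative:

  CREATE INDEX PROD.XORDER_CUST
    ON PROD.ORDER_TABLE (CUSTNO)
    PIECESIZE 2G;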



DB2 candidate data sets for VSAM Data Striping
DB2 active log data sets are good candidates for VSAM Data Striping. In heavy update
environments, the DB2 active log can be the limiting factor from a performance point of view.
In the white paper DB2 for OS/390 Performance on IBM Enterprise Storage Server, it was
indicated that the maximum throughput of the DB2 log was 8.2 MB/sec on ESS model E20.
Recent measurements done at the SSD Performance Evaluation Lab on ESS model F20 showed a maximum throughput of 11.6 MB/sec, and striping further increased the throughput to 27 MB/sec (and this is not a theoretical limit). See 10.2.4, “Measurements” on page 205 for details.

VSAM Data Striping could be a good solution for non-partitioned table spaces or for table
spaces where partitioning cannot be implemented, such as segmented table spaces; also
non-partitioning indexes can benefit from striping. Less obvious candidates are the workfile
table spaces; during the merge phase of a sort, all sort work files are merged to one logical
workfile which can result in contention for that merged workfile data set. Be aware that:
򐂰 Currently, striping is applicable only to non-DB2-managed objects, which restricts the applicability of this solution.
򐂰 Performance and usability tests are still under way, and the results and recommendations
are not available at this time.

One difference between VSAM Data Striping and partitioning is that the sequential prefetch
I/O quantity (number of pages returned by one prefetch I/O) is reduced by a factor equal to
the degree of striping; this is also true for deferred writes. From the application point of view nothing changes: the DB2 prefetch quantity is still 32 pages, but internally each I/O is reduced in scope to the allocated stripe. Whether this is an advantage or a disadvantage
depends on the workload; a table space scan on a 16-stripe table space data set could
definitely be slower than the same scan using parallelism on a 16-partition table space, due to
the internal reduction of the prefetch quantity I/O from 32 to 2.

The same query on a non-partitioned table space will run faster for the striped table space
data set than for the non-striped one, because the 16 parallel I/O streams only prefetch two
pages each. Also, be aware that parallelism due to partitioning will be decided by the
optimizer, and VSAM I/O striping is independent of the optimizer and/or workload.

Enabling VSAM I/O striping


See Appendix B, “Enabling VSAM I/O striping in OS/390 V2R10” on page 215 for a
description on how you can specify VSAM striping in DFSMS R10 for log data sets.

Recommendation
The usage of VSAM striping for the DB2 active logs is tested, verified and approved. VSAM
striping for DB2 user data is still under investigation; we strongly advise you to wait for the
results and recommendations from the DB2 labs before using VSAM striping for DB2 data.

10.2.4 Measurements
The performance measurements were executed with the specific objective of evaluating the
DB2 log throughput with VSAM striping.

DB2 logging and VSAM Data Striping


Table 10-2 shows the measurement results for the log write throughput without and with
VSAM Data Striping. The measurements were executed in an environment using dual logging but no archiving (to avoid interference), and with the following characteristics:
򐂰 OS/390 V2R10

򐂰 DB2 UDB for OS/390 V6
򐂰 ESS 2105-F20, 16 paths — 8 per cluster
򐂰 DB2 INSERT intensive ERP workload

Table 10-2 DB2 log throughput and CPU times

# stripes   INSERTs/min.   DB2 log throughput   CPU time (sec)   CPU time (sec)
            (thousand)     (MB/sec)             Class 1          Class 2

0           161.1          11.6                 57.40            54.68
1           145.7          10.5                 57.53            54.76
2           257.2          18.5                 57.67            54.96
4           382.3          27.0                 57.96            55.20

The log throughput rate increased 59% when the log data sets were 2-striped and 133%
when 4-striped. The number of inserts is consistent with the log throughput. The 1-striped
case was included to demonstrate the slight overhead of striping; there is a 9% throughput
reduction going from a non-striped log to a single striped log, because of the overhead to
write the 32 byte suffix using data chaining.

10.3 ESS enhancements


In the early days, disk hardware was only capable of processing one I/O at a time. OS/390
systems knew that, and did not try to issue another I/O to a disk volume — represented in
MVS by a Unit Control Block (UCB) — while an I/O was already active for that device. Not
only were the S/390 systems limited to processing only one I/O at a time, but also, the
storage subsystems accepted only one I/O at a time from different system images to a shared
disk volume, for the same reasons mentioned above.

10.3.1 Parallel Access Volumes


The ESS has the capability to do more than one I/O to an emulated S/390 volume. The ESS
introduces the concept of Alias addresses. Instead of one UCB per logical volume, an
OS/390 host can now use up to 256 UCBs for the same logical volume. Apart from the
conventional Base UCB, Alias UCBs can be defined and used by OS/390 to issue I/Os in
parallel to the same logical volume. The function that allows parallel I/Os to a volume from
one host is called Parallel Access Volumes (PAV). But I/Os are not limited to coming from one
host in parallel. The ESS also accepts I/Os to a shared volume coming from different hosts in
parallel. This capability is called Multiple Allegiance.

Static PAV
The association between the base UCB and the aliases is predefined and fixed. Static PAV
was introduced in OS/390 V2R3 and DFSMS 1.3.

Dynamic PAV
The association between the base UCB and the aliases is predefined but can be reassigned.
The reassignment of the alias addresses is governed by WLM (goal mode) and based on the
concurrent I/O activity for a volume. WLM will instruct the I/O subsystem to reassign an alias
UCB. Dynamic PAV was introduced in OS/390 V2R7 and DFSMS 1.5.



Effect of PAV
The effect of PAV on I/O response time is that IOSQ time almost disappears. This is
illustrated in Figure 10-5.

[Stacked bar chart: I/O response time components (IOSQ, PEND, DISC, CONN) in milliseconds for current disk versus ESS]

Figure 10-5 Effect of PAV on I/O response time in msec

Concurrent read and write


The unit of read and write I/O serialization for ESS when PAV is enabled is called the
Define-Extent Domain.

Define-Extent Domain
The Define-Extent Domain is dependent on the type of I/O. Table 10-3 gives an overview of
the size of the Define-Extent Domain.
Table 10-3 Define-Extent Domain in DB2 I/O
DB2 database I/O type                        Define-Extent Domain

Single page read/write I/O                   1 track

Sequential and dynamic prefetch              3 tracks for SQL; 6 tracks for utilities

Sequential and dynamic prefetch with         3 tracks + 4 additional cylinders for SQL;
DSNZPARM parameter SEQCACH=SEQ               6 tracks + 4 additional cylinders for utilities

List prefetch and deferred write             up to 1 cylinder

APAR OW43946 introduces a no-serialization option. If applied, the I/O driver will ignore
serialization at the define-extent domain level when PAV is enabled. Read and write I/O is
identified and serialization is done at the track level.

DB2 V7 will exploit the no-serialization option for all database I/Os. Therefore, I/O wait will
only happen for concurrent read and write I/Os against the same track. Also, the concurrency
limitation imposed by DSNZPARM option SEQCACH=SEQ is hereby relieved.



Example
In order to illustrate the benefit that you can obtain with PAV, the performance of a parallel
query was measured. The ESS shows virtually no degradation when a parallel query scans
one, two, or three partitions simultaneously on the same volume. The PAV function nearly
eliminates the IOSQ time and removes the contention caused when multiple concurrent I/Os
try to access the same volume. The same measurement on RAMAC-3 shows substantial
degradation in elapsed time — two times worse for two partitions, and over three times worse
for three partitions on one volume. The results are shown in Figure 10-6.

[Two bar charts: elapsed time in seconds for one, two, and three concurrent prefetch streams, on RAMAC 3 and on ESS E20]

Figure 10-6 Parallel query performance with and without PAV

Note
Parallel Access Volumes is a special performance enhancement function. It is provided by a
separately priced feature that must be ordered to enable this function in the ESS.

10.3.2 VSAM I/O striping versus PAV


VSAM I/O striping alleviates contention at the data set level, while PAV alleviates contention at the volume level, whether the I/O is done against the same or different data sets. The DB2 optimizer is aware of neither VSAM I/O striping nor PAV, so access path selection does not take these features into account. DB2 I/O parallelism will not be selected for non-partitioned table spaces on striped or PAV-enabled volumes.



10.3.3 FlashCopy
FlashCopy provides an “instant”, point-in-time copy of the data for application usage such as
backup and recovery operations. FlashCopy enables you to copy or dump data while
applications are updating the data. Both source and target volumes must reside on the same
logical subsystem (LSS), also known as a Logical Control Unit (LCU). DFSMSdss
automatically invokes FlashCopy when you issue the COPY FULL command on a subsystem
that supports FlashCopy functions. This is a unique feature of the IBM Enterprise Storage
Server (ESS).

FlashCopy requirements
In order to take advantage of FlashCopy, you must have both software and hardware
prerequisites.

FlashCopy is a feature of the IBM Enterprise Storage Server (ESS), and DFSMS/MVS Version 1 Release 3 and subsequent releases provide FlashCopy support. See the redbook Implementing ESS Copy Services on S/390, SG24-5680, for more details.

FlashCopy and DB2


The SET LOG SUSPEND/RESUME option of DB2 allows you to temporarily “freeze” updates to a DB2 subsystem while the logs and databases are copied using FlashCopy for remote site recovery usage.

This is a fuzzy backup, in that uncommitted changes might still be in progress at the time of the copy; it allows you to restart DB2 normally at the recovery site, which also takes care of all in-flight URs. The process is intended to work as follows:
򐂰 Log suspend, FlashCopy the entire system, log resume.
򐂰 At recovery site, restore everything and restart.
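
A minimal sketch of this sequence, using DB2 console commands and a DFSMSdss full-volume copy (the subsystem ID DB2A and the volume serials are illustrative, and one COPY is needed per volume), could look like this:

  -DB2A SET LOG SUSPEND
  COPY FULL INDYNAM(DB2001) OUTDYNAM(BKP001)
  -DB2A SET LOG RESUME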

Concurrent Copy versus FlashCopy


Concurrent Copy (CC) is both an extended function in the ESS and a component of
DFSMSdss. CC enables you to copy or dump data while applications are updating the data.
DB2 can use CC for its backups. Concurrent Copy delivers an image copy of the data in a
consistent form. See DB2 UDB For OS/390 and z/OS Utility Guide and Reference Version 7,
SC26-9945, for a description of the copy process and the restrictions of CC.

FlashCopy and Concurrent Copy (CC) are both so-called time-zero (T0) copy tools. FlashCopy operates at the logical volume level, whereas CC can also operate at the data set level. Copies made with Concurrent Copy can be registered in the SYSCOPY table if invoked via the image copy utility, while FlashCopy works entirely outside DB2. A CC copy produces a consistent copy that can be used by the RECOVER utility; a FlashCopy of an entire DB2 system has to be restored, and possible inconsistencies are resolved during DB2 restart. Log-only recovery with FlashCopy as an input is not supported.

FlashCopy and SnapShot copy


SnapShot was introduced to customers on the IBM RVA subsystem, and customers have programs and policies that use the RVA functions. The ESS implementation is intended to make migration from RVA to ESS as easy as possible.



The FlashCopy function is similar to volume level SnapShot on the IBM RVA. As with
SnapShot, you get an instant T0 copy when you start the command. In contrast to the
SnapShot implementation, however, FlashCopy on the ESS requires backend storage for the
copy, because it creates a physical point-in-time copy of the data. While RVA can make
SnapShot across four logical subsystems (LSS), the ESS can make a FlashCopy across only
one LSS of 256 drives. In the first implementation of FlashCopy, it only operates on a volume
basis. SnapShot can operate on either a volume, or data set level.



A

Appendix A. Updatable DB2 subsystem parameters

Notes:
򐂰 Changeable subsystem parameters are marked with “¦”.
򐂰 The list can change with maintenance and needs to be verified.
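
In DB2 V7, the parameters marked as changeable can be updated online with the new SET SYSPARM command, once the subsystem parameter load module has been reassembled and link-edited (for example, with installation job DSNTIJUZ). A minimal sketch, assuming the default module name DSNZPARM:

  -DB2A SET SYSPARM LOAD(DSNZPARM)
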
DSN8ED7: Sample DB2 for OS/390 Configuration Setting Report Generator

Macro Parameter Current Description/ Install Fld


Name Name Setting Install Field Name Panel ID No.
-------- ---------------- --------------------------------------- ------------------------------------ -------- ----
DSN6SYSP AUDITST 00000000000000000000000000000000 AUDIT TRACE DSNTIPN 1
¦DSN6SYSP CONDBAT 0000000064 MAX REMOTE CONNECTED DSNTIPE 4
¦DSN6SYSP CTHREAD 00070 MAX USERS DSNTIPE 2
¦DSN6SYSP DLDFREQ 00005 LEVELID UPDATE FREQUENCY DSNTIPL 14
¦DSN6SYSP PCLOSEN 00005 RO SWITCH CHKPTS DSNTIPL 12
¦DSN6SYSP IDBACK 00020 MAX BATCH CONNECT DSNTIPE 6
¦DSN6SYSP IDFORE 00040 MAX TSO CONNECT DSNTIPE 5
¦DSN6SYSP CHKFREQ 0000050000 CHECKPOINT FREQ DSNTIPL 6
¦DSN6SYSP MON 00000000 MONITOR TRACE DSNTIPN 9
DSN6SYSP MONSIZE 0000008192 MONITOR SIZE DSNTIPN 10
DSN6SYSP SYNCVAL NO STATISTICS SYNC DSNTIPN 7
¦DSN6SYSP RLFAUTH SYSIBM RESOURCE AUTHID DSNTIPP 8
DSN6SYSP RLF NO RLF AUTO START DSNTIPO 4
¦DSN6SYSP RLFERR NOLIMIT RLST ACCESS ERROR DSNTIPO 6
¦DSN6SYSP RLFTBL 01 RLST NAME SUFFIX DSNTIPO 5
¦DSN6SYSP MAXDBAT 00064 MAX REMOTE ACTIVE DSNTIPE 3
¦DSN6SYSP DSSTIME 00005 DATASET STATS TIME DSNTIPN 8
DSN6SYSP EXTSEC NO EXTENDED SECURITY DSNTIPR 11
DSN6SYSP SMFACCT 10000000000000000000000000000000 SMF ACCOUNTING DSNTIPN 4
DSN6SYSP SMFSTAT 10111000000000000000000000000000 SMF STATISTICS DSNTIPN 5
DSN6SYSP ROUTCDE 1000000000000000 WTO ROUTE CODES DSNTIPO 1
¦DSN6SYSP STORMXAB 00000 MAX ABEND COUNT DSNTIPX 4
DSN6SYSP STORPROC DB2ASPAS DB2 PROC NAME DSNTIPX 2
¦DSN6SYSP STORTIME 00180 TIMEOUT VALUE DSNTIPX 5
¦DSN6SYSP STATIME 00030 STATISTICS TIME DSNTIPN 6
DSN6SYSP TRACLOC 00016
DSN6SYSP PCLOSET 00010 RO SWITCH TIME DSNTIPL 13
DSN6SYSP TRACSTR 00000000000000000000000000000000 TRACE AUTO START DSNTIPN 2
DSN6SYSP TRACTBL 00016 TRACE SIZE DSNTIPN 3
¦DSN6SYSP URCHKTH 000 UR CHECK FREQ DSNTIPL 8
¦DSN6SYSP WLMENV WLM ENVIRONMENT DSNTIPX 6
¦DSN6SYSP LOBVALA 0000002048 USER LOB VALUE STORAGE DSNTIP7 1
¦DSN6SYSP LOBVALS 0000002048 SYSTEM LOB VALUE STORAGE DSNTIP7 2
DSN6SYSP LOGAPSTG 000 LOG APPLY STORAGE DSNTIPL 5
¦DSN6SYSP DBPROTCL DRDA DATABASE PROTOCOL DSNTIP5 6
¦DSN6SYSP PTASKROL YES
DSN6SYSP EXTRAREQ 00100 EXTRA BLOCKS REQ DSNTIP5 4
DSN6SYSP EXTRASRV 00100 EXTRA BLOCKS SRV DSNTIP5 5
¦DSN6SYSP TBSBPOOL BP0 DEFAULT BUFFER POOL FOR USER DATA DSNTIP1 1
¦DSN6SYSP IDXBPOOL BP0 DEFAULT BUFFER POOL FOR USER INDEXES DSNTIP1 2
DSN6SYSP LBACKOUT AUTO LIMIT BACKOUT DSNTIPL 10
DSN6SYSP BACKODUR 005 BACKOUT DURATION DSNTIPL 11
¦DSN6SYSP URLGWTH 0000000000 UR LOG WRITE CHECK DSNTIPL 9



DSN6LOGP TWOACTV 2 NUMBER OF COPIES DSNTIPH 3
DSN6LOGP OFFLOAD YES
DSN6LOGP TWOBSDS 2
DSN6LOGP TWOARCH 2 NUMBER OF COPIES DSNTIPH 6
DSN6LOGP MAXARCH 0000001000 RECORDING MAX DSNTIPA 10
¦DSN6LOGP DEALLCT 00000:00000 DEALLOC PERIOD DSNTIPA 9
¦DSN6LOGP MAXRTU 00002 READ TAPE UNITS DSNTIPA 8
DSN6LOGP OUTBUFF 0000004000 OUTPUT BUFFER DSNTIPL 2
¦DSN6LOGP WRTHRSH 00020
¦DSN6LOGP ARC2FRST NO READ COPY2 ARCHIVE DSNTIPO 13
¦DSN6ARVP BLKSIZE 0000028672 BLOCK SIZE DSNTIPA 7
¦DSN6ARVP CATALOG NO CATALOG DATA DSNTIPA 4
¦DSN6ARVP ALCUNIT BLK ALLOCATION UNITS DSNTIPA 1
¦DSN6ARVP PROTECT NO ARCHIVE LOG RACF DSNTIPP 1
¦DSN6ARVP ARCWTOR YES WRITE TO OPER DSNTIPA 11
¦DSN6ARVP COMPACT NO COMPACT DATA DSNTIPA 15
¦DSN6ARVP TSTAMP NO TIMESTAMP ARCHIVES DSNTIPH 9
¦DSN6ARVP QUIESCE 00005 QUIESCE PERIOD DSNTIPA 14
¦DSN6ARVP ARCRETN 09999 RETENTION PERIOD DSNTIPA 13
¦DSN6ARVP ARCPFX1 DB2V710A.ARCHLOG1 ARCH LOG 1 PREFIX DSNTIPH 7
¦DSN6ARVP ARCPFX2 DB2V710A.ARCHLOG2 ARCH LOG 2 PREFIX DSNTIPH 8
¦DSN6ARVP PRIQTY 0000001234 PRIMARY QUANTITY DSNTIPA 2
¦DSN6ARVP SECQTY 0000000154 SECONDARY QTY DSNTIPA 3
¦DSN6ARVP UNIT 3490 DEVICE TYPE 1 DSNTIPA 5
¦DSN6ARVP UNIT2 NONE DEVICE TYPE 2 DSNTIPA 6
¦DSN6ARVP ARCWRTC 1011000000000000 WTOR ROUTE CODE DSNTIPA 12
¦DSN6SPRM ABIND YES AUTO BIND DSNTIPO 8
DSN6SPRM SYSADM2 PAOLOR1 SYSTEM ADMIN 2 DSNTIPP 4
¦DSN6SPRM AUTHCACH 01024 PLAN AUTH CACHE DSNTIPP 10
DSN6SPRM AUTH YES USE PROTECTION DSNTIPP 2
¦DSN6SPRM BMPTOUT 00004 IMS BMP TIMEOUT DSNTIPI 11
DSN6SPRM LEMAX 00020 MAXIMUM LE TOKENS DSNTIP7 3
¦DSN6SPRM BINDNV BINDADD BIND NEW PACKAGE DSNTIPP 9
¦DSN6SPRM CDSSRDEF ANY CURRENT DEGREE DSNTIP4 6
DSN6SPRM DBCHK NO
DSN6SPRM DEFLTID IBMUSER UNKNOWN AUTHID DSNTIPP 7
DSN6SPRM CHGDC AND EDPROP 1 DPROP SUPPORT DSNTIPO 10
DSN6SPRM DECDIV3 NO MIN. DIVIDE SCALE DSNTIPF 3
¦DSN6SPRM DLITOUT 00006 DLI BATCH TIMEOUT DSNTIPI 12
¦DSN6SPRM DSMAX 0000003000 MAXIMUM OPEN DATA SETS DSNTIPC 1
¦DSN6SPRM EDMPOOL 0015167488 EDMPOOL STORAGE SIZE DSNTIPC 2
¦DSN6SPRM RECALLD 00120 RECALL DELAY DSNTIPO 3
¦DSN6SPRM RELCURHL YES RELEASE LOCKS DSNTIP4 10
DSN6SPRM RECALL YES RECALL DATA BASE DSNTIPO 2
DSN6SPRM IRLMAUT YES AUTO START DSNTIPI 4
¦DSN6SPRM ABEXP YES EXPLAIN PROCESSING DSNTIPO 9
DSN6SPRM IRLMPRC IRLAPROC PROC NAME DSNTIPI 5
DSN6SPRM IRLMSID IRLA SUBSYSTEM NAME DSNTIPI 2
¦DSN6SPRM IRLMSWT 0000000300 TIME TO AUTO START DSNTIPI 6
¦DSN6SPRM NUMLKTS 0000001000 LOCKS PER TABLE(SPACE) DSNTIPJ 3
¦DSN6SPRM NUMLKUS 0000010000 LOCKS PER USER DSNTIPJ 4
DSN6SPRM HOPAUTH BOTH AUTH AT HOP SITE DSNTIP5 7
¦DSN6SPRM SEQCACH BYPASS SEQUENTIAL CACHE DSNTIPE 7
DSN6SPRM RRULOCK NO U LOCK FOR RR OR RS DSNTIPI 8
¦DSN6SPRM DESCSTAT NO DESCRIBE FOR STATIC DSNTIPF 16
¦DSN6SPRM SEQPRES NO UTILITY CACHE OPTION DSNTIPE 8
DSN6SPRM CACHEDYN NO CACHE DYNAMIC SQL DSNTIP4 7
¦DSN6SPRM RETLWAIT 00000 RETAINED LOCK TIMEOUT DSNTIPI 13
DSN6SPRM CACHERAC 0000032768 ROUTINE AUTH CACHE DSNTIPP 12
¦DSN6SPRM EDMDSPAC 0000000000 EDMPOOL DATA SPACE SIZE DSNTIPC 3
¦DSN6SPRM CONTSTOR NO CONTRACT THREAD STG DSNTIPE 10
DSN6SPRM MAXKEEPD 0000005000 MAX KEPT DYN STMTS DSNTIPE 9
DSN6SPRM RETVLCFK NO VARCHAR FROM INDEX DSNTIP4 9
DSN6SPRM SYSOPR1 SYSOPR SYSTEM OPERATOR 1 DSNTIPP 5
DSN6SPRM SYSOPR2 SYSOPR SYSTEM OPERATOR 2 DSNTIPP 6
DSN6SPRM CACHEPAC 0000032768 PACKAGE AUTH CACHE DSNTIPP 11
DSN6SPRM PARAMDEG 0000000000 MAX DEGREE DSNTIP4 11
DSN6SPRM PARTKEYU YES UPDATE PART KEY COLS DSNTIP4 12
DSN6SPRM STATHIST NONE STATISTICS HISTORY DSNTIPO 14
DSN6SPRM RGFNMPRT DSN_REGISTER_APPL APPL REGISTRATION TABLE DSNTIPZ 8
DSN6SPRM RGFCOLID DSNRGCOL REGISTRATION OWNER DSNTIPZ 6
DSN6SPRM RGFESCP ART-ORT ESCAPE CHAR DSNTIPZ 5
DSN6SPRM RGFINSTL NO INSTALL DDCTRL SUPT DSNTIPZ 1
DSN6SPRM RGFDEDPL NO CONTROL ALL APPLICATIONS DSNTIPZ 2
DSN6SPRM RGFFULLQ YES REQUIRE FULL NAMES DSNTIPZ 3
DSN6SPRM RGFDEFLT APPL UNREGISTERED DDL DEFAULT DSNTIPZ 4
DSN6SPRM RGFDBNAM DSNRGFDB REGISTRATION DATABASE DSNTIPZ 7
DSN6SPRM RGFNMORT DSN_REGISTER_OBJT OBJT REGISTRATION TABLE DSNTIPZ 9
¦DSN6SPRM MAXRBLK 0000000250 RID POOL SIZE DSNTIPC 7
¦DSN6SPRM MINRBLK 0000000001
DSN6SPRM SYSADM KARRAS SYSTEM ADMIN 1 DSNTIPP 3
DSN6SPRM SRTPOOL 0001024000 SORT POOL SIZE DSNTIPC 6
DSN6SPRM IRLMRWT 0000000060 RESOURCE TIMEOUT DSNTIPI 3
DSN6SPRM ALPOOLX 0000032256
DSN6SPRM SITETYP LOCALSITE SITE TYPE DSNTIPO 11
¦DSN6SPRM UTIMOUT 00006 UTILITY TIMEOUT DSNTIPI 7
DSN6SPRM XLKUPDLT NO X LOCK FOR SEARCHED U OR D DSNTIPI 9
¦DSN6SPRM OPTHINTS NO OPTIMIZATION HINTS DSNTIP4 8
DSN6SPRM TRKRSITE NO TRACKER SITE DSNTIPO 12
¦DSN6SPRM EDMBFIT NO LARGE EDM BETTER FIT DSNTIP4 13
DSN6SPRM EDMDSMAX 1073741824 EDMPOOL DATA SPACE MAX DSNTIPC 4
DSN6SPRM STARJOIN DISABLE
¦DSN6SPRM DBACRVW NO DBADM CREATE AUTH DSNTIPP 13
DSN6SPRM CATALOG DB2V710A CATALOG ALIAS DSNTIPA2 1
DSN6SPRM RESTART DEF RESTART RESTART OR DEFER DSNTIPS 1



DSN6SPRM ALL-DBNAME ALL DBSTARTX DSNTIPS 2-37
DSN6FAC CMTSTAT ACTIVE DDF THREADS DSNTIPR 7
DSN6FAC TCPALVER NO TCP IP ALREADY VERIFIED DSNTIP5 3
DSN6FAC RESYNC 00002 RESYNC INTERVAL DSNTIPR 6
¦DSN6FAC RLFERRD NOLIMIT RLST ACCESS ERROR DSNTIPR 5
DSN6FAC DDF AUTO DDF STARTUP OPTION DSNTIPR 1
DSN6FAC IDTHTOIN 00000 IDLE THREAD TIMEOUT DSNTIPR 10
DSN6FAC MAXTYPE1 0000000000 MAX TYPE1 INACTIVE THREADS DSNTIPR 8
DSN6FAC TCPKPALV ENABLE TCPIP KEEPALIVE DSNTIP5 8
DSN6FAC POOLINAC 00120 POOL THREAD TIMEOUT DSNTIP5 9
DSN6GRP ASSIST NO ASSISTANT DSNTIPK 6
DSN6GRP COORDNTR NO COORDINATOR DSNTIPK 5
DSN6GRP IMMEDWRI NO IMMEDIATE WRITE DSNTIP4 14
DSN6GRP DSHARE NO DATA SHARING FUNCTION DSNTIPA1 2
DSN6GRP MEMBNAME DSN1 MEMBER NAME DSNTIPK 2
DSN6GRP GRPNAME DSNCAT GROUP NAME DSNTIPK 1
DSNHDECP DEFLANG IBMCOB LANGUAGE DEFAULT DSNTIPF 1
DSNHDECP DECIMAL , DECIMAL POINT IS DSNTIPF 2
DSNHDECP DELIM D STRING DELIMITER DSNTIPF 4
DSNHDECP SQLDELI DEFAULT SQL STRING DELIMITER DSNTIPF 5
DSNHDECP DSQLDELI APOST DIST SQL STR DELIMITER DSNTIPF 6
DSNHDECP MIXED NO MIXED DATA DSNTIPF 7
DSNHDECP SCCSID 00037 EBCDIC CODED CHAR SET DSNTIPF 8
DSNHDECP MCCSID 65534 EBCDIC CODED CHAR SET DSNTIPF 8
DSNHDECP GCCSID 65534 EBCDIC CODED CHAR SET DSNTIPF 8
DSNHDECP ASCCSID 00000 ASCII CODED CHAR SET DSNTIPF 9
DSNHDECP AMCCSID 65534 ASCII CODED CHAR SET DSNTIPF 9
DSNHDECP AGCCSID 65534 ASCII CODED CHAR SET DSNTIPF 9
DSNHDECP USCCSID 00367 UNICODE CCSID DSNTIPF 10
DSNHDECP UMCCSID 01208 UNICODE CCSID DSNTIPF 10
DSNHDECP UGCCSID 01200 UNICODE CCSID DSNTIPF 10
DSNHDECP ENSCHEME EBCDIC DEF ENCODING SCHEME DSNTIPF 11
DSNHDECP APPENSCH EBCDIC APPLICATION ENCODING DSNTIPF 13
DSNHDECP DATE ISO DATE FORMAT DSNTIP4 1
DSNHDECP TIME ISO TIME FORMAT DSNTIP4 2
DSNHDECP DATELEN 000 LOCAL DATE LENGTH DSNTIP4 3
DSNHDECP TIMELEN 000 LOCAL TIME LENGTH DSNTIP4 4
DSNHDECP STDSQL YES STD SQL LANGUAGE DSNTIP4 5
DSNHDECP CHARSET ALPHANUM EBCDIC CODED CHAR SET DSNTIPF 8
DSNHDECP SSID DB2A SUBSYSTEM NAME DSNTIPM 1
DSNHDECP DECARTH 15 DECIMAL ARITHMETIC DSNTIPF 14
DSNHDECP DYNRULES YES USE FOR DYNAMIC RULES DSNTIPF 15
DSNHDECP COMPAT OFF
DSNHDECP LC_CTYPE LOCALE LC_CTYPE DSNTIPF 12



B

Appendix B. Enabling VSAM I/O striping in OS/390 V2R10

VSAM striped data sets must be SMS managed and in extended format. They can only be
allocated through the ACS routines. Extended format requires storage control units that
support non-synchronous I/O operation. In this appendix we show how to define VSAM I/O
striping using the Guaranteed Space attribute of the storage class definition.

You have to do the following:


򐂰 Check if your disk supports extended format data sets.
򐂰 Define a data class with Data Set Name Type set to EXT.
򐂰 Define a storage class with Sustained Data Rate > 0 and Guaranteed Space set to YES.
򐂰 Adapt your ACS routines to use the new data class and storage class.
򐂰 Use the DATACLAS and STORAGECLASS parameters on DEFINE CLUSTER for non-DB2-defined data sets.

We used ISMF to define the data class and storage class.

Check your disk


To make sure your disk supports the extended format, you can issue the DEVSERV QDASD command for the volumes you want to use. See Figure B-1 on page 215 for a sample command response.

DEVSERV QDASD,6312
IEE459I 16.00.09 DEVSERV QDASD 317
UNIT VOLSER SCUTYPE DEVTYPE CYL SSID SCU-SERIAL DEV-SERIAL EF-CHK
6312 SBOX28 2105E20 2105 3339 8903 0113-12089 0113-12089 **OK**
**** 1 DEVICE(S) MET THE SELECTION CRITERIA
**** 0 DEVICE(S) FAILED EXTENDED FUNCTION CHECKING

Figure B-1 DEVSERV QDASD command response



If the field EF-CHK shows **OK**, the volume will be eligible for extended format data sets.

Define a data class


You need a data class with Data Set Name Type set to EXT. See Figure B-2 for an example.

CDS Name . . . . . : SYS1.SMS.SCDS
Data Class Name  . : DB2STRIP

Data Set Name Type . . . . : EXTENDED
  If Extended  . . . . . . : REQUIRED
  Extended Addressability  : NO
  Record Access Bias . . . : USER
Reuse  . . . . . . . . . . : NO
Initial Load . . . . . . . : SPEED
Spanned / Nonspanned . . . :
BWO  . . . . . . . . . . . :
Log  . . . . . . . . . . . :
Logstream Id . . . . . . . :
Space Constraint Relief  . : NO
  Reduce Space Up To (%) . :
Block Size Limit . . . . . :

Figure B-2 Sample data class definition

Define a storage class


Define a storage class with Guaranteed Space set to YES and Sustained Data Rate set to non-zero. See Figure B-3 for an example.

CDS Name . . . . . : SYS1.SMS.SCDS


Storage Class Name : VSTRIPE
Description : TEST STORAGE CLASS FOR VSAM STRIPING

Performance Objectives
Direct Millisecond Response . . . :
Direct Bias . . . . . . . . . . . :
Sequential Millisecond Response . :
Sequential Bias . . . . . . . . . :
Initial Access Response Seconds . :
Sustained Data Rate (MB/sec) . . . : 40
Availability . . . . . . . . . . . . : C
Backup . . . . . . . . . . . . . . : S
Versioning . . . . . . . . . . . . :
Guaranteed Space . . . . . . . . : Y
Guaranteed Synchronous Write . . : N
Cache Set Name . . . . . . . . . :
CF Direct Weight . . . . . . . . :
CF Sequential Weight . . . . . . :

Figure B-3 Sample storage class definition

When the Guaranteed Space attribute is set to YES, the Sustained Data Rate (SDR) must be greater than zero, but its value is not used to calculate the number of stripes. In this case, the number of stripes is determined by the number of volumes specified in the DEFINE CLUSTER command.



Define cluster
For non-DB2 defined data sets, an example of the DEFINE CLUSTER command is shown in
Figure B-4.

DEFINE CLUSTER -
( NAME (DB2V710G.LOGCOPY1.DS04) -
VOLUMES(* * * *) -
DATACLASS(DB2STRIP) -
STORAGECLASS(VSTRIPE) -
RECORDS(8640) -
LINEAR ) -
DATA -
( NAME (DB2V710G.LOGCOPY1.DS04.DATA) -
) CATALOG(DB2V710G)

Figure B-4 Sample DEFINE CLUSTER command

The resulting VSAM striped data set is allocated with four stripes, as you can see from the response to the LISTCAT command. See Figure B-5.

CLUSTER--DB2V710G.LOGCOPY1.DS04
ATTRIBUTES
KEYLEN-----------------0 AVGLRECL---------------0
RKP--------------------0 MAXLRECL---------------0
STRIPE-COUNT-----------4
SHROPTNS(1,3) SPEED UNIQUE NOERASE
UNORDERED NOREUSE NONSPANNED EXTENDED
STATISTICS

Figure B-5 Extract from LISTCAT command response



C

Appendix C. A method to assess the size of the buffer pools

Collecting the statistics and careful evaluation of the numbers over a long enough time period
— at least a complete production day — gives you helpful information to assess the size of
your buffer pools. The simple approach explained in this appendix can be done with all
releases of DB2. The use of data spaces is not considered, but it is understood that
hiperpools or buffer pools in data spaces are interchangeable from the point of view of this
methodology.

Statistics report information


We will first look at the buffer pool related information reported in the STATISTICS REPORT -
LONG from DB2PM which will be used in our calculations.
򐂰 BUFFERS ALLOCATED - VPOOL called VPSIZE in this appendix
򐂰 BUFFERS ALLOCATED - HPOOL called HPSIZE
򐂰 GETPAGE REQUESTS - called GETPAGES
򐂰 SYNCHRONOUS READS, PAGES READ VIA SEQ. PREFETCH, PAGES READ VIA LIST PREFETCH, PAGES READ VIA DYN. PREFETCH - the sum of these fields is called TOTAL PAGES READ
򐂰 SYNC. HPOOL READ, ASYNC. HPOOL READ, ASYN.DA.MOVER. HPOOL READ-S - the sum of these fields is called TOTAL HP READS
򐂰 SYNC. HPOOL WRITE, ASYNC. HPOOL WRITE, ASYN.DA.MOVER. HPOOL WRITE-S -
the sum of these fields is called TOTAL HP WRITES

Note
These fields are reported in DB2PM version 7 on a per-second basis. If you are using earlier
versions of DB2PM, these fields were reported on a per-minute basis. Before using them in
the calculations, convert the values to a per-second basis.



Concepts
In this section we introduce the concepts used in this approach and explain how to use them
in a spreadsheet. Four concepts are defined:

Estimated Vpool Residency


Estimated Vpool Residency (EVR) time is defined as the time in seconds that an average page can stay unreferenced in the virtual buffer pool before being written to the hiperpool. Using the fields of the DB2PM statistics report, we can calculate it as follows:

EVR = VPSIZE / TOTAL HP WRITES if HPSIZE > 0 or

EVR = VPSIZE / TOTAL PAGES READ if HPSIZE = 0

Minimum Estimated Vpool Size for x minutes residency


Minimum Estimated Vpool Size for x minutes residency (EVSmin (x)) is defined as the virtual buffer pool size that would allow a page to stay in the buffer pool for x minutes. The value can be calculated as follows:

EVSmin (x) = TOTAL HP WRITES * 60 * x if HPSIZE > 0 or

EVSmin (x) = TOTAL PAGES READ * 60 * x if HPSIZE = 0

Estimated Total pool Residency


This concept accounts for the total buffer pool size; this means the sum of VPSIZE and
HPSIZE. The Estimated Total pool Residency (ETR) time is defined as the time in seconds
that an average page is available from the buffer pools.

ETR = (VPSIZE + HPSIZE)/TOTAL PAGES READ

Minimum Estimated Total Size for y minutes residency


Minimum Estimated Total pool Size for y minutes residency (ETSmin (y)) is defined as the total buffer pool size that would allow a page to stay in the buffer pool for y minutes. The value can be calculated as follows:

ETSmin (y) = TOTAL PAGES READ * 60 * y

Spreadsheet example
After you have collected the data, enter the required fields and the foregoing formulas in a
spreadsheet to calculate the values for the cells representing these parameters.

The input fields are the data that you collected from the statistics report; the result fields are the parameters defined in the previous section; and the variables in the spreadsheet are x and y, expressed in minutes.

Altering x and y will result in new values for the minimum VP only size and the minimum total
VP and HP size.

A sample spreadsheet is shown in Figure C-1.



BPID VPSIZE HPSIZE GetPages Total Pages read Total HP reads Total HP Writes
BP0 2500 6000 15.99 0.28 0.71 0.99
BP2 22500 252500 5371.67 1584.84 1761.67 3348.31
BP3 15000 250000 4080.00 986.6 1558.01 2547.31
BP4 4000 0 186.67 0.66 0 0
BP7 15000 40000 98.39 0.22 0.22 2.21
BP15 4000 32000 611.67 102.31 143.37 245.19
BP16 25000 225000 1206.67 688.7 327.38 1015.85
BP20 10000 0 751.67 1148.78 0 0

Lower Bound for VP (X) Residency 1


Lower Bound for VP+HP (Y) Residency 5

BPID EVR EVSmin(x) ETR ETSmin(y)


BP0 2519 60 512.36 59
BP2 7 200899 2.89 200791
BP3 6 152839 4.48 152677
BP4 6054 40 100.91 40
BP7 6773 133 4163.51 26
BP15 16 14712 5.86 14741
BP16 25 60951 6.05 60965
BP20 9 68927 0.15 68927

Figure C-1 Sample spreadsheet
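
As a worked example, take the BP2 row of Figure C-1, with x = 1 minute and y = 5 minutes:

EVR = 22500 / 3348.31 = approximately 6.7 seconds (rounded to 7 in the spreadsheet)

EVSmin(1) = 3348.31 * 60 * 1 = approximately 200,899 buffers

ETR = (22500 + 252500) / 1584.84 = approximately 173.5 seconds, or about 2.9 minutes

Note that the EVR column of the sample spreadsheet is expressed in seconds, while the ETR column is expressed in minutes.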

Evaluation
The guidelines that we discuss in this section are only mentioned to help you in the evaluation process; they should not be considered axioms. Our goal is to save virtual
storage by shrinking and/or redistributing the virtual storage among the virtual buffer pools in
the DBM1 address space and the hiperpools. This approach is not a replacement for existing
buffer pool tuning methodologies, although it can also be helpful in this area.

The basis for the evaluation is the "page miss ratio curve" where you plot buffer pool size
versus miss rate. Based on performance studies and direct customer experience, you might
be able to avoid the "knee" in the reference curve (number of pages versus reference
frequency), if you can hold onto a page in the buffer pool for about 30 seconds. Similarly, you
might be able to get onto the really flat part of the curve, if you can hold onto a page for much
longer, about 5 minutes. The values of 30 seconds and 5 minutes are offered only as rules of thumb; they are broad guidelines, not optimal values.

From the spreadsheet, identify all buffer pools with a low getpage rate. Such buffer pools may
not justify the investment in terms of virtual storage, so you may want to consider
consolidating them.

Identify the buffer pools which are most interesting from a virtual storage saving point of view;
this selection should be based on the getpage rate and the estimated vpool residency.

If the total calculated pool size is so large that it will constrain the virtual storage of the
DBM1 address space, evaluate the use of a hiperpool. As a rule of thumb, the ETSmin (y)
value should be split into 1/3 VP and 2/3 HP.



D

Appendix D. Sample PL/1 program


INSERTRS: PROCEDURE(TRACE) OPTIONS(MAIN); 00040000
DCL SYSPRINT FILE PRINT; 00040100
DCL PLIXOPT CHAR(30) VARYING STATIC EXTERNAL 00040200
INIT('ISASIZE(64K),NOSPIE,NOSTAE'); 00040300
EXEC SQL INCLUDE SQLCA; 00040400
DCL CHECKFLG CHAR(1); /*TRACE FLAG*/ 00040800
DCL TRACE CHAR VARYING; 00040900
EXEC SQL WHENEVER SQLWARNING GOTO ERROR3; 00041000
EXEC SQL WHENEVER SQLERROR GO TO ERROR2; 00041100
00041200
/********************************************************************/00041300
/* local declares */00041400
/********************************************************************/00041500
DCL INAREA CHAR(117); 00041700
DCL 1 INSTRUC BASED(ADDR(INAREA)), 00041800
5 S_TABLN CHAR(8), 00041900
5 S_PARTN CHAR(14), 00042000
5 S_OPERS CHAR(5), 00042100
5 S_DEPTN CHAR(3), 00042200
5 S_EFFBD fixed DEC(5), 00042300
5 S_EFFED fixed DEC(5), 00042400
5 S_ROUTN fixed DEC(7), 00042500
5 S_ROUTC CHAR(1), 00042600
5 S_WKCEN CHAR(5), 00042700
5 S_GRADE CHAR(3), 00042800
5 S_SUTIM FIXED BIN(31), 00042900
5 S_SUCDE CHAR(1), 00043000
5 FILL1 CHAR(3), 00043100
5 S_MATIM FIXED BIN(31), 00043200
5 S_ORTIM FIXED BIN(31), 00043300
5 S_OVRLP CHAR(1), 00043400
5 FILL2 CHAR(3), 00043500
5 S_MAXFA FIXED BIN(31), 00043600
5 S_MNSAQ FIXED BIN(31), 00043700
5 S_MNSAH FIXED BIN(31), 00043800
5 S_INSPN CHAR(1), 00043900
5 S_INSSR CHAR(2), 00044000
5 S_SHRFR fixed DEC(5,1), 00044100



5 S_TFACN fixed DEC(3), 00044200
5 S_HANIN CHAR(8), 00044300
5 S_HANDE CHAR(12), 00044400
5 S_HANTI CHAR(4), 00044500
5 S_FILL1 CHAR(4); 00044600
DCL 00044700
LINT BIN FIXED(31), 00044800
COMMITCTR BIN FIXED(31); 00044900
00045000
DCL INFILE FILE RECORD INPUT SEQUENTIAL; 00045100
DCL EOF BIT(1) ALIGNED INIT('0'B); 00045200
DCL I BIN FIXED(15), 00045300
ADDR BUILTIN, 00045400
UNSPEC BUILTIN, 00045500
NULL BUILTIN; 00045600
00045700
/********************************************************************/ 00045800
/* initialization */ 00045900
/********************************************************************/ 00046000
COMMITCTR = 0; 00046100
00046200
SQLCODE = 0; 00046300
00046400
CHECKFLG=TRACE; 00046500
00046600
O_EOF = '0'B; 00046700
L_EOF = '0'B; 00046800
00046900
00047000
ON ENDFILE (INFILE) EOF = '1'B; 00047100
OPEN FILE(INFILE) ; 00047200
READ FILE(INFILE) INTO(INAREA); 00047300
00047400
DO WHILE ((SQLCODE=0) & (¬EOF)) ; 00047600
00047700
00048100
EXEC SQL INSERT INTO TABLE4UT 00048200
VALUES( 00048300
:s_TABLN, :s_PARTN, :s_OPERS, :s_DEPTN, 00048400
:s_EFFBD, :s_EFFED, :s_ROUTN, :s_ROUTC, 00048500
:s_WKCEN, :s_GRADE, :s_SUTIM, :s_SUCDE, 00048600
:s_MATIM, :s_ORTIM, :s_OVRLP, :s_MAXFA, 00048700
:s_MNSAQ, :s_MNSAH, :s_INSPN, :s_INSSR, 00048800
:s_SHRFR, :s_TFACN, :s_HANIN, :s_HANDE, 00048900
:s_HANTI, :s_FILL1); 00049000
00049100
00049200
IF SQLCODE = 0 THEN 00049300
DO; 00049400
COMMITCTR = COMMITCTR + 1; 00049500
IF COMMITCTR = 10000 THEN 00049600
DO; 00049700
EXEC SQL COMMIT; 00049800
COMMITCTR = 0; 00049900
END; 00050000
END; 00050100
00050400
READ FILE(INFILE) INTO(INAREA); 00051000
if (eof) then 00051100
EXEC SQL COMMIT; 00052000



END; 00060000
CALL CHECKID('99'); 00061000
CLOSE FILE(infile); 00062000
GOTO END; 00063000
00064000
/*******************************************************************/ 00070000
/* error handling routines */ 00080000
/********************************************************************/ 00090000
CHECKID: PROCEDURE(ID); 00100000
DCL ID CHAR(2); 00110000
dcl olabel char(10), 00120000
tlabel char(10); 00130000
dcl iolabel char(10), 00140000
itlabel char(10); 00150000
olabel = 'ADVANCE O'; 00160000
tlabel = 'ADVANCE I'; 00170000
iolabel = 'INS ORDER'; 00180000
itlabel = 'INS ITEM'; 00190000
IF CHECKFLG¬='T' THEN GOTO ENDCHK; 00200000
if id = '11' then put skip edit(id,olabel,to_orderkey) 00220000
(f(2),x(5),a(10),f(10)); 00230000
if id = '12' then put skip edit(id,iolabel,tl_orderkey) 00240000
(f(2),x(10),a(10),f(10)); 00250000
if id = '13' then put skip edit(id,tlabel,tl_orderkey) 00260000
(f(2),x(10),a(10),f(10)); 00270000
if id = '21' then put skip edit(id,itlabel,tl_orderkey) 00280000
(f(2),x(10),a(10),f(10)); 00290000
if id = '22' then put skip edit(id,tlabel,tl_orderkey) 00300000
(f(2),x(10),a(10),f(10)); 00310000
ENDCHK: END CHECKID; 00330000
CHECKID2: PROCEDURE(ID); 00340000
DCL ID CHAR(2); 00350000
IF CHECKFLG¬='C' THEN GOTO ENDCHK2; 00360000
ENDCHK2: END CHECKID2; 00400000
ERROR1: 00410000
PUT SKIP LIST('COUNT(*)=0'); 00420000
GOTO ERROR; 00430000
ERROR2: 00440000
PUT SKIP LIST('SQL ERROR'); 00450000
PUT SKIP LIST(SQLCODE); 00460000
PUT SKIP DATA(SQLCA); 00470000
GOTO OUTNOW; 00480000
ERROR3: 00490000
PUT SKIP LIST('SQL WARNING'); 00500000
PUT SKIP LIST(SQLCODE); 00510000
PUT SKIP DATA(SQLCA); 00520000
ERROR: EXEC SQL ROLLBACK; 00530000
OUTNOW: 00540000
END: END INSERTRS; 00550000



E

Appendix E. Sample utilities output

COPY output with LISTDEF and TEMPLATE


J E S 2 J O B L O G -- S Y S T E M S C 6 3 -- N O D E W T S C P L X 2

12.17.31 JOB12124 ---- THURSDAY, 14 DEC 2000 ----


12.17.31 JOB12124 IRR010I USERID PAOLOR5 IS ASSIGNED TO THIS JOB.
12.17.31 JOB12124 ICH70001I PAOLOR5 LAST ACCESS AT 12:16:36 ON THURSDAY, DECEMBER 14, 2000
12.17.31 JOB12124 $HASP373 PAOLOR5C STARTED - INIT 1 - CLASS A - SYS SC63
12.17.31 JOB12124 IEF403I PAOLOR5C - STARTED - ASID=0032.
12.17.41 JOB12124 IGD01008I &DSN = PAOLOR5.FIC.DBLP0001.TSLP0001.D1214.T1717
12.17.41 JOB12124 IGD01008I &UNIT = SYSDA
12.17.41 JOB12124 IGD01008I &ALLVOL =
12.17.41 JOB12124 IGD01008I &ANYVOL =
12.18.10 JOB12124 IGD01008I &DSN = PAOLOR5.FIC.DBLP0002.TSLP0002.D1214.T1717
12.18.10 JOB12124 IGD01008I &UNIT = SYSDA
12.18.10 JOB12124 IGD01008I &ALLVOL =
12.18.10 JOB12124 IGD01008I &ANYVOL =
12.18.39 JOB12124 IGD01008I &DSN = PAOLOR5.FIC.DBLP0003.TSLP0003.D1214.T1717
12.18.39 JOB12124 IGD01008I &UNIT = SYSDA
12.18.39 JOB12124 IGD01008I &ALLVOL =
12.18.39 JOB12124 IGD01008I &ANYVOL =
12.19.01 JOB12124 - --TIMINGS (MINS.)-- ----PAGING
COUNTS---
12.19.01 JOB12124 -JOBNAME STEPNAME PROCSTEP RC EXCP CPU SRB CLOCK SERV PG PAGE SWAP
VIO SWAPS
12.19.01 JOB12124 -PAOLOR5C PH01S18 DSNUPROC 00 8031 .03 .00 1.5 166K 0 0 0
0 0
12.19.01 JOB12124 IEF404I PAOLOR5C - ENDED - ASID=0032.
12.19.01 JOB12124 -PAOLOR5C ENDED. NAME-PAOLOR5 TOTAL CPU TIME= .03 TOTAL ELAPSED
TIME= 1.5
12.19.01 JOB12124 $HASP395 PAOLOR5C ENDED
------ JES2 JOB STATISTICS ------
14 DEC 2000 JOB EXECUTION DATE
48 CARDS READ
223 SYSOUT PRINT RECORDS
0 SYSOUT PUNCH RECORDS
14 SYSOUT SPOOL KBYTES
1.50 MINUTES EXECUTION TIME
1 //PAOLOR5C JOB ACCNT#, JOB12124
// PAOLOR5, **JOB STATEMENT GENERATED BY SUBMIT**
// NOTIFY=PAOLOR5,
// MSGLEVEL=(1,1)
//********************************************************************* 00240000
//* 00250000
2 //JOBLIB DD DSN=DSN710.SDSNLOAD,DISP=SHR 00260000
//* 00270000
//* 12150000
//* STEP 18: REORGANIZE TABLESPACES, PRODUCE STATISTICS 12160000
//* 12170000
3 //PH01S18 EXEC DSNUPROC,PARM='DB2A,COPYTS' 12180000
4 XXDSNUPROC PROC LIB='DSN510.SDSNLOAD',



XX SYSTEM=DB2X,
XX SIZE=0K,UID='',UTPROC=''
XX*
XX**********************************************************************
5 XXDSNUPROC EXEC PGM=DSNUTILB,REGION=&SIZE,
XX PARM='&SYSTEM,&UID,&UTPROC'
IEFC653I SUBSTITUTION JCL - PGM=DSNUTILB,REGION=0K,PARM='DB2X,,'
6 //STEPLIB DD DSN=DSN710.SDSNLOAD,DISP=SHR 12190000
X/STEPLIB DD DSN=&LIB,DISP=SHR
XX*
XX**********************************************************************
XX*
XX* THE FOLLOWING DEFINE THE UTILITIES' PRINT DATA SETS
XX*
XX**********************************************************************
XX*
IEFC653I SUBSTITUTION JCL - DSN=DSN510.SDSNLOAD,DISP=SHR
7 XXSYSPRINT DD SYSOUT=*
8 XXUTPRINT DD SYSOUT=*
9 XXSYSUDUMP DD SYSOUT=*
XX*DSNUPROC PEND REMOVE * FOR USE AS INSTREAM PROCEDURE
10 //SYSIN DD * 12250000
//* 12330000
STMT NO. MESSAGE
3 IEFC001I PROCEDURE DSNUPROC WAS EXPANDED USING SYSTEM LIBRARY SYS1.TEST1.PROCLIB
ICH70001I PAOLOR5 LAST ACCESS AT 12:16:36 ON THURSDAY, DECEMBER 14, 2000
IEF236I ALLOC. FOR PAOLOR5C DSNUPROC PH01S18
IEF237I 2661 ALLOCATED TO JOBLIB
IEF237I 2661 ALLOCATED TO STEPLIB
IEF237I JES2 ALLOCATED TO SYSPRINT
IEF237I JES2 ALLOCATED TO UTPRINT
IEF237I JES2 ALLOCATED TO SYSUDUMP
IEF237I JES2 ALLOCATED TO SYSIN
IGD100I 3F14 ALLOCATED TO DDNAME SYS00001 DATACLAS ( )
IGD01008I &DSN = PAOLOR5.FIC.DBLP0001.TSLP0001.D1214.T1717
IGD01008I &UNIT = SYSDA
IGD01008I &ALLVOL =
IGD01008I &ANYVOL =
IEF285I PAOLOR5.FIC.DBLP0001.TSLP0001.D1214.T1717 CATALOGED
IEF285I VOL SER NOS= SBOX02.
IGD100I 390D ALLOCATED TO DDNAME SYS00002 DATACLAS ( )
IGD01008I &DSN = PAOLOR5.FIC.DBLP0002.TSLP0002.D1214.T1717
IGD01008I &UNIT = SYSDA
IGD01008I &ALLVOL =
IGD01008I &ANYVOL =
IEF285I PAOLOR5.FIC.DBLP0002.TSLP0002.D1214.T1717 CATALOGED
IEF285I VOL SER NOS= SBOX38.
IGD100I 3F14 ALLOCATED TO DDNAME SYS00003 DATACLAS ( )
IGD01008I &DSN = PAOLOR5.FIC.DBLP0003.TSLP0003.D1214.T1717
IGD01008I &UNIT = SYSDA
IGD01008I &ALLVOL =
IGD01008I &ANYVOL =
IEF285I PAOLOR5.FIC.DBLP0003.TSLP0003.D1214.T1717 CATALOGED
IEF285I VOL SER NOS= SBOX02.
IEF142I PAOLOR5C DSNUPROC PH01S18 - STEP WAS EXECUTED - COND CODE 0000
IEF285I DSN710.SDSNLOAD KEPT
IEF285I VOL SER NOS= SBOX12.
IEF285I PAOLOR5.PAOLOR5C.JOB12124.D0000102.? SYSOUT
IEF285I PAOLOR5.PAOLOR5C.JOB12124.D0000103.? SYSOUT
IEF285I PAOLOR5.PAOLOR5C.JOB12124.D0000104.? SYSOUT
IEF285I PAOLOR5.PAOLOR5C.JOB12124.D0000101.? SYSIN
IEF373I STEP/DSNUPROC/START 2000349.1217
IEF374I STEP/DSNUPROC/STOP 2000349.1219 CPU 0MIN 02.06SEC SRB 0MIN 00.18SEC VIRT 1176K SYS
380K EXT 4748K SYS 9784K
IEF285I DSN710.SDSNLOAD KEPT
IEF285I VOL SER NOS= SBOX12.
IEF375I JOB/PAOLOR5C/START 2000349.1217
IEF376I JOB/PAOLOR5C/STOP 2000349.1219 CPU 0MIN 02.06SEC SRB 0MIN 00.18SEC
DSNU000I DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = COPYTS
DSNU050I DSNUGUTC - LISTDEF COPYLST INCLUDE TABLESPACE DBLP00*.*
DSNU1035I DSNUILDR - LISTDEF STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE COPYTMP DSN &USERID..FIC.&DB..&TS..D&MONTH.&DAY..T&HOUR.&MINUTE. UNIT
SYSDA DISP(
NEW,CATLG,CATLG) SPACE(50,25) CYL
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - OPTIONS EVENT(ITEMERROR,HALT)
DSNU1035I DSNUILDR - OPTIONS STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - COPY LIST COPYLST SHRLEVEL REFERENCE FULL YES COPYDDN(COPYTMP)
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=COPYTMP
DDNAME=SYS00001
DSN=PAOLOR5.FIC.DBLP0001.TSLP0001.D1214.T1717
DSNU400I DSNUBBID - COPY PROCESSED FOR TABLESPACE DBLP0001.TSLP0001
NUMBER OF PAGES=16480

AVERAGE PERCENT FREE SPACE PER PAGE = 7.07
PERCENT OF CHANGED PAGES = 0.00
ELAPSED TIME=00:00:28
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=COPYTMP
DDNAME=SYS00002
DSN=PAOLOR5.FIC.DBLP0002.TSLP0002.D1214.T1717
DSNU400I DSNUBBID - COPY PROCESSED FOR TABLESPACE DBLP0002.TSLP0002
NUMBER OF PAGES=15720
AVERAGE PERCENT FREE SPACE PER PAGE = 3.22
PERCENT OF CHANGED PAGES = 72.77
ELAPSED TIME=00:00:28
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=COPYTMP
DDNAME=SYS00003
DSN=PAOLOR5.FIC.DBLP0003.TSLP0003.D1214.T1717
DSNU400I DSNUBBID - COPY PROCESSED FOR TABLESPACE DBLP0003.TSLP0003
NUMBER OF PAGES=13765
AVERAGE PERCENT FREE SPACE PER PAGE = 3.22
PERCENT OF CHANGED PAGES = 0.00
ELAPSED TIME=00:00:22
DSNU428I DSNUBBID - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBLP0001.TSLP0001
DSNU428I DSNUBBID - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBLP0002.TSLP0002
DSNU428I DSNUBBID - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBLP0003.TSLP0003
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0
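The SYSIN for this job is not visible in the JCL listing above (it is an instream SYSIN DD *), but each control statement is echoed in the DSNU050I messages. Reassembled from those echoes, the input was equivalent to the following sketch; note the wildcard in the LISTDEF and the substitution variables in the TEMPLATE, which let a single COPY statement process all three table spaces with dynamically allocated image copy data sets:

  LISTDEF COPYLST INCLUDE TABLESPACE DBLP00*.*
  TEMPLATE COPYTMP DSN &USERID..FIC.&DB..&TS..D&MONTH.&DAY..T&HOUR.&MINUTE.
           UNIT SYSDA DISP(NEW,CATLG,CATLG) SPACE(50,25) CYL
  OPTIONS EVENT(ITEMERROR,HALT)
  COPY LIST COPYLST SHRLEVEL REFERENCE FULL YES COPYDDN(COPYTMP)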

Job output of LOAD partition in parallel


DSNU000I DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = LOADTS
DSNU050I DSNUGUTC - TEMPLATE TEMPINDN DSN(&USERID..DATA.TSLP0001.P&PART..D1115.T1650)
DISP(SHR,CATLG,CATLG)
SPACE(50,50) TRK
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE TEMPDISC DSN(&USERID..P&PART..DISCARD) DISP(NEW,DELETE,DELETE)
SPACE(5,50) TRK
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE TEMPERR DSN(&USERID..P&PART..SYSERR) DISP(NEW,DELETE,DELETE)
SPACE(5,50) TRK
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE TEMPSORT DSN(&USERID..P&PART..SORTOUT) DISP(NEW,DELETE,DELETE)
SPACE(5,50) CYL
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE TEMPUT1 DSN(&USERID..P&PART..SYSUT1) DISP(NEW,DELETE,DELETE)
SPACE(5,50) CYL
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - LOAD DATA LOG YES REPLACE SORTKEYS 4200 SORTNUM 16 WORKDDN(TEMPUT1,TEMPSORT)
EBCDIC CCSID(
37,0,0) ERRDDN TEMPERR
DSNU650I =DB2A DSNURWI - INTO TABLE "PAOLOR5 "."TBLP0003 " PART 1 INDDN TEMPINDN DISCARDDN
TEMPDISC
WHEN(1:2=X'0003')
DSNU650I =DB2A DSNURWI - ("PART_NO " POSITION(3:6) INTEGER,
DSNU650I =DB2A DSNURWI - "NO_FELT1 " POSITION(8:11) INTEGER NULLIF(7)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT2 " POSITION(13:16) INTEGER NULLIF(12)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT3 " POSITION(18:21) INTEGER NULLIF(17)=X'FF',
DSNU650I =DB2A DSNURWI - "TIMESTMP " POSITION(23:48) TIMESTAMP EXTERNAL NULLIF(22)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT1 " POSITION(50:50) CHAR(1) NULLIF(49)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT2 " POSITION(52:53) CHAR(2) NULLIF(51)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT3 " POSITION(55:57) CHAR(3) NULLIF(54)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT4 " POSITION(59:62) CHAR(4) NULLIF(58)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT5 " POSITION(64:68) CHAR(5) NULLIF(63)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT6 " POSITION(70:75) CHAR(6) NULLIF(69)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT7 " POSITION(77:83) CHAR(7) NULLIF(76)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT8 " POSITION(85:92) CHAR(8) NULLIF(84)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT9 " POSITION(94:102) CHAR(9) NULLIF(93)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT10 " POSITION(104:113) CHAR(10) NULLIF(103)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT11 " POSITION(115:125) CHAR(11) NULLIF(114)=X'FF',
DSNU650I =DB2A DSNURWI - "REMARK " POSITION(127:198) CHAR(72) NULLIF(126)=X'FF')
DSNU650I =DB2A DSNURWI - INTO TABLE "PAOLOR5 "."TBLP0003 " PART 2 INDDN TEMPINDN DISCARDDN
TEMPDISC
WHEN(1:2=X'0003')
DSNU650I =DB2A DSNURWI - ("PART_NO " POSITION(3:6) INTEGER,
DSNU650I =DB2A DSNURWI - "NO_FELT1 " POSITION(8:11) INTEGER NULLIF(7)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT2 " POSITION(13:16) INTEGER NULLIF(12)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT3 " POSITION(18:21) INTEGER NULLIF(17)=X'FF',
DSNU650I =DB2A DSNURWI - "TIMESTMP " POSITION(23:48) TIMESTAMP EXTERNAL NULLIF(22)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT1 " POSITION(50:50) CHAR(1) NULLIF(49)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT2 " POSITION(52:53) CHAR(2) NULLIF(51)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT3 " POSITION(55:57) CHAR(3) NULLIF(54)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT4 " POSITION(59:62) CHAR(4) NULLIF(58)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT5 " POSITION(64:68) CHAR(5) NULLIF(63)=X'FF',

DSNU650I =DB2A DSNURWI - "CHAR_FELT6 " POSITION(70:75) CHAR(6) NULLIF(69)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT7 " POSITION(77:83) CHAR(7) NULLIF(76)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT8 " POSITION(85:92) CHAR(8) NULLIF(84)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT9 " POSITION(94:102) CHAR(9) NULLIF(93)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT10 " POSITION(104:113) CHAR(10) NULLIF(103)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT11 " POSITION(115:125) CHAR(11) NULLIF(114)=X'FF',
DSNU650I =DB2A DSNURWI - "REMARK " POSITION(127:198) CHAR(72) NULLIF(126)=X'FF')
DSNU650I =DB2A DSNURWI - INTO TABLE "PAOLOR5 "."TBLP0003 " PART 3 INDDN TEMPINDN DISCARDDN
TEMPDISC
WHEN(1:2=X'0003')
DSNU650I =DB2A DSNURWI - ("PART_NO " POSITION(3:6) INTEGER,
DSNU650I =DB2A DSNURWI - "NO_FELT1 " POSITION(8:11) INTEGER NULLIF(7)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT2 " POSITION(13:16) INTEGER NULLIF(12)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT3 " POSITION(18:21) INTEGER NULLIF(17)=X'FF',
DSNU650I =DB2A DSNURWI - "TIMESTMP " POSITION(23:48) TIMESTAMP EXTERNAL NULLIF(22)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT1 " POSITION(50:50) CHAR(1) NULLIF(49)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT2 " POSITION(52:53) CHAR(2) NULLIF(51)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT3 " POSITION(55:57) CHAR(3) NULLIF(54)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT4 " POSITION(59:62) CHAR(4) NULLIF(58)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT5 " POSITION(64:68) CHAR(5) NULLIF(63)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT6 " POSITION(70:75) CHAR(6) NULLIF(69)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT7 " POSITION(77:83) CHAR(7) NULLIF(76)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT8 " POSITION(85:92) CHAR(8) NULLIF(84)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT9 " POSITION(94:102) CHAR(9) NULLIF(93)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT10 " POSITION(104:113) CHAR(10) NULLIF(103)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT11 " POSITION(115:125) CHAR(11) NULLIF(114)=X'FF',
DSNU650I =DB2A DSNURWI - "REMARK " POSITION(127:198) CHAR(72) NULLIF(126)=X'FF')
DSNU650I =DB2A DSNURWI - INTO TABLE "PAOLOR5 "."TBLP0003 " PART 4 INDDN TEMPINDN DISCARDDN
TEMPDISC
WHEN(1:2=X'0003')
DSNU650I =DB2A DSNURWI - ("PART_NO " POSITION(3:6) INTEGER,
DSNU650I =DB2A DSNURWI - "NO_FELT1 " POSITION(8:11) INTEGER NULLIF(7)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT2 " POSITION(13:16) INTEGER NULLIF(12)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT3 " POSITION(18:21) INTEGER NULLIF(17)=X'FF',
DSNU650I =DB2A DSNURWI - "TIMESTMP " POSITION(23:48) TIMESTAMP EXTERNAL NULLIF(22)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT1 " POSITION(50:50) CHAR(1) NULLIF(49)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT2 " POSITION(52:53) CHAR(2) NULLIF(51)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT3 " POSITION(55:57) CHAR(3) NULLIF(54)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT4 " POSITION(59:62) CHAR(4) NULLIF(58)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT5 " POSITION(64:68) CHAR(5) NULLIF(63)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT6 " POSITION(70:75) CHAR(6) NULLIF(69)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT7 " POSITION(77:83) CHAR(7) NULLIF(76)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT8 " POSITION(85:92) CHAR(8) NULLIF(84)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT9 " POSITION(94:102) CHAR(9) NULLIF(93)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT10 " POSITION(104:113) CHAR(10) NULLIF(103)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT11 " POSITION(115:125) CHAR(11) NULLIF(114)=X'FF',
DSNU650I =DB2A DSNURWI - "REMARK " POSITION(127:198) CHAR(72) NULLIF(126)=X'FF')
DSNU650I =DB2A DSNURWI - INTO TABLE "PAOLOR5 "."TBLP0003 " PART 5 INDDN TEMPINDN DISCARDDN
TEMPDISC
WHEN(1:2=X'0003')
DSNU650I =DB2A DSNURWI - ("PART_NO " POSITION(3:6) INTEGER,
DSNU650I =DB2A DSNURWI - "NO_FELT1 " POSITION(8:11) INTEGER NULLIF(7)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT2 " POSITION(13:16) INTEGER NULLIF(12)=X'FF',
DSNU650I =DB2A DSNURWI - "NO_FELT3 " POSITION(18:21) INTEGER NULLIF(17)=X'FF',
DSNU650I =DB2A DSNURWI - "TIMESTMP " POSITION(23:48) TIMESTAMP EXTERNAL NULLIF(22)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT1 " POSITION(50:50) CHAR(1) NULLIF(49)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT2 " POSITION(52:53) CHAR(2) NULLIF(51)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT3 " POSITION(55:57) CHAR(3) NULLIF(54)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT4 " POSITION(59:62) CHAR(4) NULLIF(58)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT5 " POSITION(64:68) CHAR(5) NULLIF(63)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT6 " POSITION(70:75) CHAR(6) NULLIF(69)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT7 " POSITION(77:83) CHAR(7) NULLIF(76)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT8 " POSITION(85:92) CHAR(8) NULLIF(84)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT9 " POSITION(94:102) CHAR(9) NULLIF(93)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT10 " POSITION(104:113) CHAR(10) NULLIF(103)=X'FF',
DSNU650I =DB2A DSNURWI - "CHAR_FELT11 " POSITION(115:125) CHAR(11) NULLIF(114)=X'FF',
DSNU650I =DB2A DSNURWI - "REMARK " POSITION(127:198) CHAR(72) NULLIF(126)=X'FF')
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPINDN
DDNAME=SYS00001
DSN=PAOLOR5.DATA.TSLP0001.P00001.D1115.T1650
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPDISC
DDNAME=SYS00002
DSN=PAOLOR5.P00001.DISCARD
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPINDN
DDNAME=SYS00003
DSN=PAOLOR5.DATA.TSLP0001.P00002.D1115.T1650
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPDISC
DDNAME=SYS00004
DSN=PAOLOR5.P00002.DISCARD
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPINDN
DDNAME=SYS00005
DSN=PAOLOR5.DATA.TSLP0001.P00003.D1115.T1650
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPDISC

DDNAME=SYS00006
DSN=PAOLOR5.P00003.DISCARD
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPINDN
DDNAME=SYS00007
DSN=PAOLOR5.DATA.TSLP0001.P00004.D1115.T1650
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPDISC
DDNAME=SYS00008
DSN=PAOLOR5.P00004.DISCARD
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPINDN
DDNAME=SYS00009
DSN=PAOLOR5.DATA.TSLP0001.P00005.D1115.T1650
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPDISC
DDNAME=SYS00010
DSN=PAOLOR5.P00005.DISCARD
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPUT1
DDNAME=SYS00011
DSN=PAOLOR5.P00000.SYSUT1
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPSORT
DDNAME=SYS00012
DSN=PAOLOR5.P00000.SORTOUT
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TEMPERR
DDNAME=SYS00013
DSN=PAOLOR5.P00000.SYSERR
DSNU350I =DB2A DSNURRST - EXISTING RECORDS DELETED FROM TABLESPACE
DSNU364I DSNURPPL - PARTITIONS WILL BE LOADED IN PARALLEL, NUMBER OF TASKS = 3
DSNU303I =DB2A DSNURWT - (RE)LOAD PHASE STATISTICS - NUMBER OF RECORDS=8192 FOR TABLE PAOLOR5.TBLP0003
PART=1
DSNU303I =DB2A DSNURWT - (RE)LOAD PHASE STATISTICS - NUMBER OF RECORDS=8192 FOR TABLE PAOLOR5.TBLP0003
PART=2
DSNU303I =DB2A DSNURWT - (RE)LOAD PHASE STATISTICS - NUMBER OF RECORDS=8192 FOR TABLE PAOLOR5.TBLP0003
PART=3
DSNU303I =DB2A DSNURWT - (RE)LOAD PHASE STATISTICS - NUMBER OF RECORDS=8192 FOR TABLE PAOLOR5.TBLP0003
PART=4
DSNU303I =DB2A DSNURWT - (RE)LOAD PHASE STATISTICS - NUMBER OF RECORDS=8192 FOR TABLE PAOLOR5.TBLP0003
PART=5
DSNU302I DSNURILD - (RE)LOAD PHASE STATISTICS - NUMBER OF INPUT RECORDS PROCESSED=40960
DSNU300I DSNURILD - (RE)LOAD PHASE COMPLETE, ELAPSED TIME=00:01:34
DSNU042I DSNUGSOR - SORT PHASE STATISTICS -
NUMBER OF RECORDS=81920
ELAPSED TIME=00:00:00
DSNU348I =DB2A DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0003 PART
1
DSNU348I =DB2A DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0003 PART
2
DSNU348I =DB2A DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0003 PART
3
DSNU348I =DB2A DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0003 PART
4
DSNU348I =DB2A DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0003 PART
5
DSNU349I =DB2A DSNURBXA - BUILD PHASE STATISTICS - NUMBER OF KEYS=40960 FOR INDEX PAOLOR5.IXLP1003
DSNU258I DSNURBXD - BUILD PHASE STATISTICS - NUMBER OF INDEXES=2
DSNU259I DSNURBXD - BUILD PHASE COMPLETE, ELAPSED TIME=00:00:01
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0
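Reassembled from the DSNU050I and DSNU650I echoes above, the control statements for this job follow the pattern sketched below; the literal SYSIN is not reproduced in the output. Only the INTO TABLE clause for partition 1 is shown here; partitions 2 through 5 repeat the same WHEN condition and field list, as the DSNU650I messages confirm. Because each partition gets its own input data set through the TEMPINDN template (the &PART. variable), DB2 can load the partitions in parallel, as reported by message DSNU364I:

  TEMPLATE TEMPINDN DSN(&USERID..DATA.TSLP0001.P&PART..D1115.T1650)
           DISP(SHR,CATLG,CATLG) SPACE(50,50) TRK
  TEMPLATE TEMPDISC DSN(&USERID..P&PART..DISCARD) DISP(NEW,DELETE,DELETE) SPACE(5,50) TRK
  TEMPLATE TEMPERR  DSN(&USERID..P&PART..SYSERR)  DISP(NEW,DELETE,DELETE) SPACE(5,50) TRK
  TEMPLATE TEMPSORT DSN(&USERID..P&PART..SORTOUT) DISP(NEW,DELETE,DELETE) SPACE(5,50) CYL
  TEMPLATE TEMPUT1  DSN(&USERID..P&PART..SYSUT1)  DISP(NEW,DELETE,DELETE) SPACE(5,50) CYL
  LOAD DATA LOG YES REPLACE SORTKEYS 4200 SORTNUM 16
       WORKDDN(TEMPUT1,TEMPSORT) EBCDIC CCSID(37,0,0) ERRDDN TEMPERR
    INTO TABLE PAOLOR5.TBLP0003 PART 1 INDDN TEMPINDN DISCARDDN TEMPDISC
         WHEN(1:2=X'0003')
         (PART_NO     POSITION(3:6)     INTEGER,
          NO_FELT1    POSITION(8:11)    INTEGER NULLIF(7)=X'FF',
          NO_FELT2    POSITION(13:16)   INTEGER NULLIF(12)=X'FF',
          NO_FELT3    POSITION(18:21)   INTEGER NULLIF(17)=X'FF',
          TIMESTMP    POSITION(23:48)   TIMESTAMP EXTERNAL NULLIF(22)=X'FF',
          CHAR_FELT1  POSITION(50:50)   CHAR(1)  NULLIF(49)=X'FF',
          CHAR_FELT2  POSITION(52:53)   CHAR(2)  NULLIF(51)=X'FF',
          CHAR_FELT3  POSITION(55:57)   CHAR(3)  NULLIF(54)=X'FF',
          CHAR_FELT4  POSITION(59:62)   CHAR(4)  NULLIF(58)=X'FF',
          CHAR_FELT5  POSITION(64:68)   CHAR(5)  NULLIF(63)=X'FF',
          CHAR_FELT6  POSITION(70:75)   CHAR(6)  NULLIF(69)=X'FF',
          CHAR_FELT7  POSITION(77:83)   CHAR(7)  NULLIF(76)=X'FF',
          CHAR_FELT8  POSITION(85:92)   CHAR(8)  NULLIF(84)=X'FF',
          CHAR_FELT9  POSITION(94:102)  CHAR(9)  NULLIF(93)=X'FF',
          CHAR_FELT10 POSITION(104:113) CHAR(10) NULLIF(103)=X'FF',
          CHAR_FELT11 POSITION(115:125) CHAR(11) NULLIF(114)=X'FF',
          REMARK      POSITION(127:198) CHAR(72) NULLIF(126)=X'FF')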

Job output of UNLOAD utility


DSNU000I DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = DSNTEX
DSNU050I DSNUGUTC - LISTDEF TSLPLIST INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(1) INCLUDE
TABLESPACE
DBLP0003.TSLP0003 PARTLEVEL(2) INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(3) INCLUDE TABLESPACE
DBLP0003.TSLP0003 PARTLEVEL(4) INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(5)
DSNU1035I DSNUILDR - LISTDEF STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE TMPDATA DSN &USERID..DATA.&TS..P&PART..D&MONTH.&DAY..T&HOUR.&MINUTE.
UNIT SYSDA
DISP(NEW,CATLG,CATLG) SPACE(50,25) TRK
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - TEMPLATE TMPPUNCH DSN &USERID..PUNCH.&TS..P&PART..D&MONTH.&DAY..T&HOUR.&MINUTE.
UNIT SYSDA
DISP(NEW,CATLG,CATLG) SPACE(1,1) TRK
DSNU1035I DSNUJTDR - TEMPLATE STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - OPTIONS EVENT(ITEMERROR,HALT)
DSNU1035I DSNUILDR - OPTIONS STATEMENT PROCESSED SUCCESSFULLY
DSNU050I DSNUGUTC - UNLOAD LIST TSLPLIST UNLDDN TMPDATA PUNCHDDN TMPPUNCH
DSNU1039I DSNUGULM - PROCESSING LIST ITEM: TABLESPACE DBLP0003.TSLP0003 PARTITION 1
DSNU1201I DSNUUNLD - PARTITIONS WILL BE UNLOADED IN PARALLEL, NUMBER OF TASKS = 2
DSNU397I DSNUUNLD - NUMBER OF TASKS CONSTRAINED BY CPUS
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TMPPUNCH
DDNAME=SYS00001

DSN=PAOLOR5.PUNCH.TSLP0003.P00000.D1214.T1755
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TMPDATA
DDNAME=SYS00002
DSN=PAOLOR5.DATA.TSLP0003.P00001.D1214.T1755
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TMPDATA
DDNAME=SYS00003
DSN=PAOLOR5.DATA.TSLP0003.P00002.D1214.T1755
DSNU251I DSNUULQB - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=8192 FOR TABLESPACE
DBLP0003.TSLP0003 PART 1
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TMPDATA
DDNAME=SYS00004
DSN=PAOLOR5.DATA.TSLP0003.P00003.D1214.T1755
DSNU251I DSNUULQB - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=8192 FOR TABLESPACE
DBLP0003.TSLP0003 PART 2
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TMPDATA
DDNAME=SYS00005
DSN=PAOLOR5.DATA.TSLP0003.P00004.D1214.T1755
DSNU251I DSNUULQB - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=8192 FOR TABLESPACE
DBLP0003.TSLP0003 PART 3
DSNU1038I DSNUGDYN - DATASET ALLOCATED. TEMPLATE=TMPDATA
DDNAME=SYS00006
DSN=PAOLOR5.DATA.TSLP0003.P00005.D1214.T1755
DSNU251I DSNUULQB - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=8192 FOR TABLESPACE
DBLP0003.TSLP0003 PART 4
DSNU251I DSNUULQB - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=8192 FOR TABLESPACE
DBLP0003.TSLP0003 PART 5
DSNU253I DSNUUNLD - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=40960 FOR TABLE
PAOLOR5.TBLP0003
DSNU252I DSNUUNLD - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=40960 FOR TABLESPACE
DBLP0003.TSLP0003
DSNU250I DSNUUNLD - UNLOAD PHASE COMPLETE, ELAPSED TIME=00:00:03
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0
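The input statements echoed by DSNU050I for this job were equivalent to the sketch below. The LISTDEF names the same table space five times at PARTLEVEL, so UNLOAD treats each partition as a separate list item and, as message DSNU1201I shows, unloads the partitions in parallel into individually allocated output data sets:

  LISTDEF TSLPLIST INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(1)
          INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(2)
          INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(3)
          INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(4)
          INCLUDE TABLESPACE DBLP0003.TSLP0003 PARTLEVEL(5)
  TEMPLATE TMPDATA DSN &USERID..DATA.&TS..P&PART..D&MONTH.&DAY..T&HOUR.&MINUTE.
           UNIT SYSDA DISP(NEW,CATLG,CATLG) SPACE(50,25) TRK
  TEMPLATE TMPPUNCH DSN &USERID..PUNCH.&TS..P&PART..D&MONTH.&DAY..T&HOUR.&MINUTE.
           UNIT SYSDA DISP(NEW,CATLG,CATLG) SPACE(1,1) TRK
  OPTIONS EVENT(ITEMERROR,HALT)
  UNLOAD LIST TSLPLIST UNLDDN TMPDATA PUNCHDDN TMPPUNCH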

Job output of Online REORG


DSNU000I DSNUGUTC - OUTPUT START FOR UTILITY, UTILID = DSNTEX
DSNU050I DSNUGUTC - REORG TABLESPACE DBLP0001.TSLP0001 LOG NO SORTDATA SORTKEYS COPYDDN(SYSCOPY)
SHRLEVEL
REFERENCE DEADLINE NONE FASTSWITCH YES STATISTICS TABLE(ALL) INDEX(ALL) UPDATE ALL HISTORY ALL
DSNU252I DSNURULD - UNLOAD PHASE STATISTICS - NUMBER OF RECORDS UNLOADED=327680 FOR TABLESPACE
DBLP0001.TSLP0001
DSNU250I DSNURULD - UNLOAD PHASE COMPLETE, ELAPSED TIME=00:00:36
DSNU395I DSNURPIB - INDEXES WILL BE BUILT IN PARALLEL, NUMBER OF TASKS = 6
DSNU400I DSNURBID - COPY PROCESSED FOR TABLESPACE DBLP0001.TSLP0001
NUMBER OF PAGES=16526
AVERAGE PERCENT FREE SPACE PER PAGE = 7.06
PERCENT OF CHANGED PAGES =100.00
ELAPSED TIME=00:01:15
DSNU610I =DB2A DSNUSUTP - SYSTABLEPART CATALOG UPDATE FOR DBLP0001.TSLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUPT - SYSTABSTATS CATALOG UPDATE FOR PAOLOR5.TBLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUPC - SYSCOLSTATS CATALOG UPDATE FOR PAOLOR5.TBLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUTB - SYSTABLES CATALOG UPDATE FOR PAOLOR5.TBLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR PAOLOR5.TBLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUTS - SYSTABLESPACE CATALOG UPDATE FOR DBLP0001.TSLP0001 SUCCESSFUL
DSNU620I =DB2A DSNURDRT - RUNSTATS CATALOG TIMESTAMP = 2000-11-14-20.25.14.832335
DSNU304I =DB2A DSNURWT - (RE)LOAD PHASE STATISTICS - NUMBER OF RECORDS=327680 FOR TABLE
PAOLOR5.TBLP0001
DSNU302I DSNURILD - (RE)LOAD PHASE STATISTICS - NUMBER OF INPUT RECORDS PROCESSED=327680
DSNU300I DSNURILD - (RE)LOAD PHASE COMPLETE, ELAPSED TIME=00:01:16
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 1
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 2
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 3
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 4
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 5
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 6
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 7

DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 8
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 9
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 10
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 11
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 12
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 13
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 14
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 15
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 16
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 17
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 18
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 19
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 20
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 21
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 22
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 23
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 24
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 25
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 26
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 27
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 28
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 29
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 30
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 31
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 32
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 33
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 34
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 35
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 36
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 37
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 38
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 39
DSNU393I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=8192 FOR INDEX PAOLOR5.IXLP0001
PART 40
DSNU610I =DB2A DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR PAOLOR5.IXLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUPI - SYSINDEXSTATS CATALOG UPDATE FOR PAOLOR5.IXLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUPD - SYSCOLDISTSTATS CATALOG UPDATE FOR PAOLOR5.IXLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUPC - SYSCOLSTATS CATALOG UPDATE FOR PAOLOR5.TBLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR PAOLOR5.IXLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR PAOLOR5.TBLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR PAOLOR5.IXLP0001 SUCCESSFUL
DSNU620I =DB2A DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2000-11-14-20.25.15.562612

DSNU394I =DB2A DSNURBXA - SORTBLD PHASE STATISTICS - NUMBER OF KEYS=327680 FOR INDEX PAOLOR5.IXLP0002
DSNU610I =DB2A DSNUSUIP - SYSINDEXPART CATALOG UPDATE FOR PAOLOR5.IXLP0002 SUCCESSFUL
DSNU610I =DB2A DSNUSUIX - SYSINDEXES CATALOG UPDATE FOR PAOLOR5.IXLP0002 SUCCESSFUL
DSNU610I =DB2A DSNUSUCO - SYSCOLUMNS CATALOG UPDATE FOR PAOLOR5.TBLP0001 SUCCESSFUL
DSNU610I =DB2A DSNUSUCD - SYSCOLDIST CATALOG UPDATE FOR PAOLOR5.IXLP0002 SUCCESSFUL
DSNU620I =DB2A DSNURDRI - RUNSTATS CATALOG TIMESTAMP = 2000-11-14-20.25.15.554778
DSNU391I DSNURPTB - SORTBLD PHASE STATISTICS. NUMBER OF INDEXES = 2
DSNU392I DSNURPTB - SORTBLD PHASE COMPLETE, ELAPSED TIME = 00:00:04
DSNU387I DSNURSWT - SWITCH PHASE COMPLETE, ELAPSED TIME = 00:00:14
DSNU428I DSNURSWT - DB2 IMAGE COPY SUCCESSFUL FOR TABLESPACE DBLP0001.TSLP0001
DSNU010I DSNUGBAC - UTILITY EXECUTION COMPLETE, HIGHEST RETURN CODE=0
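The single REORG statement driving this job, as echoed by DSNU050I at the top of the output, was equivalent to the following sketch. It combines SHRLEVEL REFERENCE with FASTSWITCH, an inline image copy (COPYDDN) and inline statistics, which is why one execution shows UNLOAD, RELOAD, SORTBLD, and SWITCH phases together with the catalog updates and the successful image copy message:

  REORG TABLESPACE DBLP0001.TSLP0001 LOG NO SORTDATA SORTKEYS
        COPYDDN(SYSCOPY) SHRLEVEL REFERENCE DEADLINE NONE FASTSWITCH YES
        STATISTICS TABLE(ALL) INDEX(ALL) UPDATE ALL HISTORY ALL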



Special notices

References in this publication to IBM products, programs or services do not imply that IBM
intends to make these available in all countries in which IBM operates. Any reference to an
IBM product, program, or service is not intended to state or imply that only IBM's product,
program, or service may be used. Any functionally equivalent program that does not infringe
any of IBM's intellectual property rights may be used instead of the IBM product, program or
service.

Information in this book was developed in conjunction with use of the equipment specified,
and is limited in application to those specific hardware and software products and levels.

IBM may have patents or pending patent applications covering subject matter in this
document. The furnishing of this document does not give you any license to these patents.
You can send license inquiries, in writing, to the IBM Director of Licensing, IBM Corporation,
North Castle Drive, Armonk, NY 10504-1785.

Licensees of this program who wish to have information about it for the purpose of enabling:
(i) the exchange of information between independently created programs and other programs
(including this one) and (ii) the mutual use of the information which has been exchanged,
should contact IBM Corporation, Dept. 600A, Mail Drop 1329, Somers, NY 10589 USA.

Such information may be available, subject to appropriate terms and conditions, including in
some cases, payment of a fee.

The information contained in this document has not been submitted to any formal IBM test
and is distributed AS IS. The use of this information or the implementation of any of these
techniques is a customer responsibility and depends on the customer's ability to evaluate and
integrate them into the customer's operational environment. While each item may have been
reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or
similar results will be obtained elsewhere. Customers attempting to adapt these techniques to
their own environments do so at their own risk.

Any pointers in this publication to external Web sites are provided for convenience only and
do not in any manner serve as an endorsement of these Web sites.

The following terms are trademarks of other companies:

Tivoli, Manage. Anything. Anywhere., The Power To Manage., Anything. Anywhere., TME,
NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are
trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United
States, other countries, or both. In Denmark, Tivoli is a trademark licensed from Kjøbenhavns
Sommer - Tivoli A/S.

C-bus is a trademark of Corollary, Inc. in the United States and/or other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of
Sun Microsystems, Inc. in the United States and/or other countries.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States and/or other countries.

PC Direct is a trademark of Ziff Communications Company in the United States and/or other
countries and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in
the United States and/or other countries.

UNIX is a registered trademark in the United States and other countries licensed exclusively
through The Open Group.

SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET
Secure Electronic Transaction LLC.

Other company, product, and service names may be trademarks or service marks of others.



Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks” on page 238.
- DB2 for z/OS and OS/390 Version 7: Using the Utilities Suite, SG24-6289
- DB2 UDB Server for OS/390 and z/OS Version 7 Presentation Guide, SG24-6121
- DB2 UDB Server for OS/390 Version 6 Technical Update, SG24-6108
- DB2 Java Stored Procedures -- Learning by Example, SG24-5945
- DB2 UDB for OS/390 Version 6 Performance Topics, SG24-5351
- DB2 for OS/390 Version 5 Performance Topics, SG24-2213
- DB2 for MVS/ESA Version 4 Non-Data-Sharing Performance Topics, SG24-4562
- DB2 UDB for OS/390 Version 6 Management Tools Package, SG24-5759
- DB2 Server for OS/390 Version 5 Recent Enhancements - Reference Guide, SG24-5421
- DB2 for OS/390 Capacity Planning, SG24-2244
- Developing Cross-Platform DB2 Stored Procedures: SQL Procedures and the DB2 Stored Procedure Builder, SG24-5485
- DB2 for OS/390 and Continuous Availability, SG24-5486
- Connecting WebSphere to DB2 UDB Server, SG24-62119
- Parallel Sysplex Configuration: Cookbook, SG24-2076-00
- DB2 for OS/390 Application Design for High Performance, GG24-2233
- Using RVA and SnapShot for Business Intelligence Applications with OS/390 and DB2, SG24-5333
- IBM Enterprise Storage Server Performance Monitoring and Tuning Guide, SG24-5656
- DFSMS Release 10 Technical Update, SG24-6120
- Storage Management with DB2 for OS/390, SG24-5462
- Implementing ESS Copy Services on S/390, SG24-5680

Other resources
These publications are also relevant as further information sources:
- DB2 UDB for OS/390 and z/OS Version 7 What's New, GC26-9946
- DB2 UDB for OS/390 and z/OS Version 7 Installation Guide, GC26-9936
- DB2 UDB for OS/390 and z/OS Version 7 Command Reference, SC26-9934
- DB2 UDB for OS/390 and z/OS Version 7 Messages and Codes, GC26-9940
- DB2 UDB for OS/390 and z/OS Version 7 Utility Guide and Reference, SC26-9945
- DB2 UDB for OS/390 and z/OS Version 7 Programming Guide and Reference for Java, SC26-9932
- DB2 UDB for OS/390 and z/OS Version 7 Administration Guide, SC26-9931
- DB2 UDB for OS/390 and z/OS Version 7 Application Programming and SQL Guide, SC26-9933
- DB2 UDB for OS/390 and z/OS Version 7 Release Planning Guide, SC26-9943
- DB2 UDB for OS/390 and z/OS Version 7 SQL Reference, SC26-9944
- DB2 UDB for OS/390 and z/OS Version 7 Text Extender Administration and Programming, SC26-9948
- DB2 UDB for OS/390 and z/OS Version 7 Data Sharing: Planning and Administration, SC26-9935
- DB2 UDB for OS/390 and z/OS Version 7 Image, Audio, and Video Extenders, SC26-9947
- DB2 UDB for OS/390 and z/OS Version 7 ODBC Guide and Reference, SC26-9941
- DB2 UDB for OS/390 and z/OS Version 7 XML Extender Administration and Reference, SC26-9949
- OS/390 V2R10.0 DFSMS Using Data Sets, SC26-7339
- DB2 UDB for OS/390 Storage Management, IDUG Solutions Journal, Spring 2000, Volume 7, Number 1, by John Campbell and Mary Petras
- DB2 UDB for z/OS and OS/390 Performance on the IBM zSeries 900 Server, white paper by David Witkowski, available from ibm.com/software/data/db2/os390/support.htm

Referenced Web sites


These Web sites are also relevant as further information sources:
- DB2 for OS/390: ibm.com/software/data/db2/os390/
- DB2 Estimator: ibm.com/software/data/db2/os390/estimate/
- ESS: ibm.com/storage/hardsoft/diskdrls/technology.htm
- DB2 support and services: ibm.com/software/data/db2/os390/support.htm

How to get IBM Redbooks


Search for additional Redbooks or redpieces, view, download, or order hardcopy from the
Redbooks Web site:
ibm.com/redbooks

Also download additional materials (code samples or diskette/CD-ROM images) from this
Redbooks site.

Redpieces are Redbooks in progress; not all Redbooks become redpieces and sometimes
just a few chapters will be published this way. The intent is to get the information out much
quicker than the formal publishing process allows.

IBM Redbooks collections


Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the Redbooks Web
site for information about all the CD-ROMs offered, as well as updates and formats.



Abbreviations and acronyms

AIX       Advanced Interactive eXecutive from IBM
APAR      authorized program analysis report
ARM       automatic restart manager
ASCII     American National Standard Code for Information Interchange
BLOB      binary large objects
CCSID     coded character set identifier
CCA       client configuration assistant
CFCC      coupling facility control code
CTT       created temporary table
CEC       central electronics complex
CD        compact disk
CF        coupling facility
CFRM      coupling facility resource management
CLI       call level interface
CLP       command line processor
CPU       central processing unit
CSA       common storage area
DASD      direct access storage device
DB2 PM    DB2 performance monitor
DBAT      database access thread
DBD       database descriptor
DBID      database identifier
DBRM      database request module
DSC       dynamic statement cache, local or global
DCL       data control language
DDCS      distributed database connection services
DDF       distributed data facility
DDL       data definition language
DLL       dynamic load library
DML       data manipulation language
DNS       domain name server
DRDA      distributed relational database architecture
DTT       declared temporary tables
EA        extended addressability
EBCDIC    extended binary coded decimal interchange code
ECS       enhanced catalog sharing
ECSA      extended common storage area
EDM       environment descriptor management
ERP       enterprise resource planning
ESA       Enterprise Systems Architecture
ETR       external throughput rate, an elapsed time measure, focuses on system capacity
FDT       functional track directory
FTP       File Transfer Program
GB        gigabyte (1,073,741,824 bytes)
GBP       group buffer pool
GRS       global resource serialization
GUI       graphical user interface
HPJ       high performance Java
IBM       International Business Machines Corporation
ICF       integrated catalog facility
ICF       integrated coupling facility
ICMF      internal coupling migration facility
IFCID     instrumentation facility component identifier
IFI       instrumentation facility interface
IRLM      internal resource lock manager
ISPF      interactive system productivity facility
ISV       independent software vendor
I/O       input/output
ITR       internal throughput rate, a processor time measure, focuses on processor capacity
ITSO      International Technical Support Organization
IVP       installation verification process
JDBC      Java Database Connectivity
JFS       journaled file systems
JVM       Java Virtual Machine
KB        kilobyte (1,024 bytes)
LOB       large object
LPL       logical page list
LPAR      logically partitioned mode
LRECL     logical record length
LRSN      log record sequence number
LVM       logical volume manager
MB        megabyte (1,048,576 bytes)
OBD       object descriptor in DBD
ODBC      Open Data Base Connectivity
OS/390    Operating System/390
PAV       parallel access volume
PDS       partitioned data set
PSID      pageset identifier
PSP       preventive service planning
PTF       program temporary fix
PUNC      possibly uncommitted
QMF       Query Management Facility
RACF      Resource Access Control Facility
RBA       relative byte address
RECFM     record format
RID       record identifier
RRS       resource recovery services
RRSAF     resource recovery services attach facility
RS        read stability
RR        repeatable read
SDK       software developers kit
SMIT      System Management Interface Tool
SP        stored procedure
SRB       system resource block
TCB       task control block
USS       Unix systems services
WLM       workload manager


Index

Numerics Control Center 18


0002 183 CopyToCopy 15
0003 183 COPYTOCOPY utility 147
0147 183 Correlated subquery 38
0148 183 Coupling Facility Name Class Queues 16, 166
0217 180 Cross Loader 14
0219 180 CURRENT MEMBER 177
0220 180
0225 180 D
0319 180 Data Sharing 16
-101 83 Bypass Group Attach 167
-118 46 Data sharing 4
-129 83 Group Attach support for DL/I Batch 168
199 187 local connect using STARTECB 167
-216 36 DB2 Connect Personal Edition 18
225 189 DB2 Estimator 18, 196
5655-E62 19 DB2 Installer 18
5655-E63 19 DB2 Management Tools Package 18
5697-E98 19 DB2 PM 185
64 GB central storage 198 DB2 PM and IFI enhancements 179
DB2 Restart Light 172
A DB2 subsystem 2
ACCESS_DEGREE 30 DB2 subsystem parameters 211
Adding space to the workfiles 16 DB2 subsystem performance 87
Adding workfiles 107 DB2 tools 17
alias 13 DB2 V7
Application enablement 12 packaging 18
ARESTP 105 DB2 Version 7
assess the size of the Buffer Pools 219 overview 11
Asynchronous preformat 88 DB2 Warehouse Manager 17, 19
AUTHCACH 102 db2advis 195
Auto Alter 176 DB2CONN 168
Availability and capacity 3 DB2GROUPID 168
AVG 56 db2look 194
DBADM 13
DDL concurrency 99
B deadlock detection 112
Bind improvements 81 DEALLCT 104
BUILD2 parallelism 134 DECLARE TEMPORARY TABLE 53
Building indexes in parallel 121 Declared Temporary Table 66
Declared Temporary Tables 62
DEGREE(ANY) 33
C DELETE 2, 25, 41, 45
Cancel Thread NOBACKOUT 107
different data types 77
CARDF 142
dimension table 82
CATMAINT utility 93
DRAIN and RETRY 138
CFRM 175
DRAIN_WAIT 138
Charge features 19
DRDA server elapsed time reporting 162
CHKFREQ 104
DSETSTAT 186
CICS group attach 168
DSN_STATEMENT_TABLE 27
CLUSTERATIO 142
DSN6SPRC 133
Compression 198
DSN8ED7 102
Consistent restart 105
DSNDB07 65
Consistent restart enhancements 16
DSNTEJWA 194

DSNTEJWB 194 IFI 180
DSNTEJWC 194 IMMEDWRI 170
DSNTEJWD 194 IMMEDWRITE 170
DSNTEJWF 194 IMMEDWRITE bind option 17
DSNZPARM changes 185 implementing new functions
DSSTIME 104 no effort 22
DYNAMIC 61 small effort 22
Dynamic utility jobs 14, 114 implementing new functions some effort 23
DYNAMICRULES(BIND) 159 IMS 17
IMS group attach 169
IMS tools 17
E IN predicate 35
EDMBFIT 103 incomplete units of recovery during shutdown 176
EDMDSPAC 104 Index Advisor 192
EDMPOOL 103 INDREFLIMIT 143
Enabling VSAM I/O striping 205 IN-list 28
ENDEXEC 149 IN-List index access 28
Enhanced management of constraints 13 INSENSITIVE 61
Enhanced stored procedures 13 insensitive cursor 69
Equal and Not Equal predicates 36 Introduction to the redbook 1
Equal predicate 34 IRLM
ESS 90, 206 dynamic time out value 111
Evaluate uncommitted 98 subsecond deadlock detection 112
EVALUNC 98 IRLM time out 111
EXEC SQL 149
EXISTS predicate 41
EXPLAIN headings 27 J
EXPLAIN tables 26 Java
EXTENTS 141 BIND options 159
close prepared statements 161
dynamic SQL statement caching 160
F matching data type and length 160
fact table 82 memory usage 159
failures on PREPARE 82 positioned iterators 161
FARINDREF 143 selecting columns 160
FAROFFPOS 142 serialized profile 160
Fast switch 132 Java data types 159
FETCH FIRST n ROWS 152 Java support 13
Fetch first n rows 70 JDBC and SQLJ performance considerations 159
FETCH INSENSITIVE 62 JIT compilation 159
FETCH options 63 Join outer table 30
FETCH SENSETIVE 62 join transformation 38
FlashCopy 209 JOIN_DEGREE 30
FORCEROLLUP 138, 141 JVM 157
FREEPAGE 141

K
G Key performance enhancements 2
Global transactions 15
Group Attach enhancements 17, 167
groupoverride 167 L
leaf disorganization 146
LEAFFAR 146
H LEAFNEAR 146
HPGRBRBA 98 Limited fetch 12
LIST 117
I LISTDEF 114
IDFORE 104 LOAD 88
IFCID 217 92 LOAD partition parallelism 117
IFCID 225 92 LOAD partitions in parallel 118
IFCIDs 180 LOAD without SORTKEYS 122

LOBVALA 103 PQ10465 79
LOBVALS 103 PQ22046 77
Log manager enhancements 16, 107 PQ22895 170
Log suspend and resume 107 PQ24933 77
Log-only recovery 98 PQ25337 170
Long running UR warning message 110 PQ25914 198
PQ28813 82
PQ34667 20
M PQ36206 82
materialization 59 PQ36933 198
MAX 72 PQ37807 186
MAXDBAT 104 PQ38035 97
MAXRBLK 103 PQ38174 199
MAXRTU 104 PQ38455 111, 112
message handling for CF/structure failure 176 PQ42180 169
Migration to DB2 Version 7 20 PQ43357 188, 191
MIN 72 PQ43846 83
More parallelism with LOAD with multiple inputs 14 PQ44114 165
PQ44144 176
N PQ44671 177
NEARINDREF 143 PQ44985 97
NEAROFFPOS 142 PQ45052 45
Net Search Extender 17, 20 PQ45184 150
Net.Data 18 PQ45268 117, 148
Network computing 4, 15 PQ45407 165, 176
Network monitoring 16 PQ46577 117
New features and tools 17 PQ46636 184, 191
NUMLKTS 103 PQ46759 148
PQ47178 59, 60
PQ47833 83
O PQ48306 81, 82
OFFPOS 125 PQ48588 60
OFFPOSLIMIT 142 PQ49458 153
Online DSNZPARM 102 precompiler services 13
Online LOAD RESUME 14, 124 PREFORMAT 88
Online REORG enhancements 15, 131 preformat measurement 89
Online subsystem parameters 16 PREVIEW 116
OPTIMIZE FOR m ROWS 153 pruning 57
Optional no-charge features 18 PTASKROL 104
OPTIONS 116
ORDER BY 74
ORGRATIO 145 Q
Other enhancements 5 QBLOCK_TYPE 27
OW43946 207 Quantified predicates 34
Query Management Facility 19

P
packaging 18 R
Parallel Access Volumes 90, 206 Reasons to migrate 20
Parallel data set open 90 Redbooks Web Site 238
Parallelism 28 Contact us xvii
PARENT_QBLOCKNO 27 REFP 105
partial star join 84 release skipping 20
PCTFREE 141 REORG 88
PERCDROP 144 repositioning 67
Performance and availability 16 Restart Light 17
Performance tools 5 RESTART processing 117
Persistent Coupling Facility Structure sizes 175 RESTP 105
Persistent structure size changes 17 RESYNCMEMBER 169
PLAN_TABLE 26, 27 RETRY 138
Planning for migration 20 Retry of log read request 109

RETRY_DELAY 138 Union everywhere 12
RETVLCFK 79 UNION syntax 49
REUSE 141 uniqueness constraint 39
REXX language support 18 UNLOAD 14
RLFERR 104 UNLOAD utility 127
Row expression in IN predicate 12 updatable DB2 subsystem parameters 211
Row expressions 34 UPDATE 2, 25, 41, 45
RUNSTATS enhancements 138 UQ51114 177
RVA 209 USS 158
Utilities 3, 14
utilities output 227
S
SCROLL 61
Scrollable cursors 12, 60 V
PL/1 program 223 VARCHAR columns 79
Searched UPDATE and DELETE 41 VARCHAR index-only access 79
Security enhancements 15 variable-length rows 99
SELECT MAX 74 VDWQ 90
self-referencing subselect 45 view 13
Self-referencing subselect on UPDATE or DELETE 13 Virtual storage 91
self-referencing UPDATE/DELETE 46 Visual Explain 18
SENSITIVE 61 VSAM Data Striping 204
sensitive cursor 69 VSAM I/O striping in OS/390 V2R10 215
SENSITIVE DYNAMIC 62
SEQCACH=SEQ 207
SET CURRENT DEGREE 28 X
SET LOG SUSPEND 108 XML Extender 13
-SET SYSPARM 102
SETXCF 175 Z
Snapshot 209 z/Series 198
SQL 2
SQL performance 2, 25
Star join 82
star join
best environment 85
STARJOIN 82
STARTECB 167
STATHIST 139
STATIME 104
Statistics history 15, 139
Stored Procedure Builder 18
Stored procedures COMMIT/ROLLBACK 157
String data types 160

T
TABLE_TYPE 27
TEMP database 62
TEMPLATE 115
tools 17

U
Unicode 161
UNICODE support 16
UNION
EXISTS predicate 52
IN predicate 52
optimization 53
UNION ALL 56
UNION everywhere 49

Back cover

DB2 for z/OS and OS/390 Version 7 Performance Topics

Description of performance and availability related functions
Performance measurements of new or enhanced functions
Usage considerations based on performance

IBM DATABASE 2 Universal Database Server for z/OS and OS/390 Version 7 (DB2 Universal
Database Server for z/OS and OS/390 Version 7, or just DB2 V7 throughout this book) is the
eleventh release of DB2 for MVS. It brings to this platform many new data support, application
development, and SQL language enhancements for e-business while building upon the
traditional capabilities of availability, scalability, and performance.

This IBM Redbook describes the performance implications of several enhancements made
available with DB2 V7. These enhancements include performance and availability delivered
through new and enhanced utilities, dynamic changes to the value of many of the system
parameters without stopping DB2, and the new Restart Light option for data sharing
environments. There are many improvements to usability, such as scrollable cursors, support
for UNICODE encoded data, support for COMMIT and ROLLBACK within a stored procedure,
the option to eliminate the DB2 precompile step in program preparation, and the definition of
view with the operators UNION or UNION ALL.

This book will help you understand why migrating to Version 7 of DB2 can be beneficial for
your applications and your DB2 subsystems. It provides considerations and recommendations
for the implementation of the new functions and for evaluating their performance and their
applicability to your DB2 environments.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-6129-00 ISBN 0738419672
