
Administrator's Guide
SAP BusinessObjects Planning and Consolidation 10.0
version for the Microsoft platform

Target Audience:
- Technical Consultants
- System Administrators
- Solution Consultants
- Business Process Owners
- Support Specialists

PUBLIC Document version: 1.0 2011-05-10

SAP AG Dietmar-Hopp-Allee 16 69190 Walldorf Germany T +49/18 05/34 34 34 F +49/18 05/34 34 20 www.sap.com

Copyright 2011 SAP AG. All rights reserved. Microsoft, Windows, Excel, Outlook, and PowerPoint are registered trademarks of Microsoft Corporation. IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, System z9, z10, z9, iSeries, pSeries, xSeries, zSeries, eServer, z/VM, z/OS, i5/OS, S/390, OS/390, OS/400, AS/400, S/390 Parallel Enterprise Server, PowerVM, Power Architecture, POWER6+, POWER6, POWER5+, POWER5, POWER, OpenPower, PowerPC, BatchPipes, BladeCenter, System Storage, GPFS, HACMP, RETAIN, DB2 Connect, RACF, Redbooks, OS/2, Parallel Sysplex, MVS/ESA, AIX, Intelligent Miner, WebSphere, Netfinity, Tivoli and Informix are trademarks or registered trademarks of IBM Corporation. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Adobe, the Adobe logo, Acrobat, PostScript, and Reader are either trademarks or registered trademarks of Adobe Systems Incorporated in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation. UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group. Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems, Inc. HTML, XML, XHTML and W3C are trademarks or registered trademarks of W3C, World Wide Web Consortium, Massachusetts Institute of Technology. Java is a registered trademark of Sun Microsystems, Inc. JavaScript is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape. SAP, R/3, xApps, xApp, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP Business ByDesign, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are the trademarks of their respective companies. 
Data contained in this document serves informational purposes only. National product specifications may vary. These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies (SAP Group) for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.
Disclaimer

Some components of this product are based on Java. Any code change in these components may cause unpredictable and severe malfunctions and is therefore expressly prohibited, as is any decompilation of these components. Any Java source code delivered with this product is only to be used by SAP's Support Services and may not be modified or altered in any way.


Document History

The following table provides an overview of the most important document changes.
Version   Date         Description
1.0       2011-05-10   This is the first release.


Table of Contents

Chapter 1  Important SAP Notes ..... 9
Chapter 2  Getting Started ..... 11
Chapter 3  Monitoring of Planning and Consolidation ..... 13
3.1  Setting up a Minimal-Access User for Support ..... 13
3.2  Monitoring with the Management Console ..... 15
3.2.1  Information and Actions in the Management Console ..... 15
3.2.2  Management Console Installation and Use ..... 16
3.2.3  Starting and Stopping Planning and Consolidation Remotely ..... 17
3.2.4  Monitoring the Application and the Database ..... 18
3.2.4.1  Management Console Home Screen Fields ..... 18
3.2.4.2  Management Console Database Server Fields ..... 20
3.2.5  Alert Monitoring with the Management Console ..... 23
3.2.6  Identifying and Stopping a Resource-Intensive Process ..... 24
3.2.7  Performing a Detailed Server Diagnosis ..... 24
3.2.8  Management Console Event Logs ..... 24
3.2.8.1  Managing Event Log Records ..... 24
3.2.8.2  Management Console Logging Fields ..... 25
3.3  Log and Trace Files List ..... 26
3.4  Other Planning and Consolidation Log Files ..... 26
3.4.1  Miscellaneous Log Files ..... 26
3.4.2  Trace File for Debugging Logic ..... 26
3.4.3  Data Manager Log Files ..... 27
3.5  Central Computing Management System ..... 27
3.5.1  Monitoring with the Central Computing Management System ..... 27
3.6  Change and Transport System ..... 28
3.7  Checking COM+ Objects ..... 29
Chapter 4  Management of Planning and Consolidation ..... 31
4.1  Managing Your Planning and Consolidation Servers ..... 31
4.1.1  Viewing Server Information ..... 32
4.1.2  Client Options ..... 32
4.1.3  Server Options ..... 33
4.1.4  Configuring the SLD Data Supplier ..... 34
4.1.5  Configure the Connection to the Central Management System ..... 35
4.1.6  Migrating Users to the Central Management System ..... 38
4.1.7  Domain User Group Setup ..... 39
4.1.8  Changing the Credentials for Component Services ..... 40
4.1.9  Setting up Firewalls ..... 41
4.1.10  Kerberos Authentication Settings ..... 42
4.1.11  Server Manager Security ..... 42
4.2  Asynchronous Interfaces and Data Transfers ..... 43
4.3  Stored Configuration Values ..... 43
4.4  Backing up and Restoring Environments ..... 44
4.4.1  Backing up Environments ..... 44
4.4.2  Restoring Environments ..... 45
4.5  Periodic Tasks ..... 47
4.5.1  Scheduled Periodic Tasks ..... 47
4.5.1.1  Optimizing Models ..... 47
4.5.1.1.1  Optimization Options ..... 47
4.5.2  Manual Periodic Tasks ..... 48
4.5.2.1  Server Partitioning ..... 49
4.5.2.1.1  Partitioning Overview ..... 49
4.5.2.1.2  Considerations when Determining a Partitioning Scheme ..... 51
4.5.2.1.3  Data Slice Definition ..... 52
4.5.2.1.4  Partition Planning ..... 54
4.5.2.1.5  OLAP Custom Partitioning ..... 55
4.6  Optimizing the Flow of Data in the Database ..... 56
4.6.1  Send Governor Table ..... 57
4.6.2  Global Cache ..... 58
4.7  Best Practices for Performance Management ..... 59
4.7.1  Tuning Planning and Consolidation ..... 59
4.7.2  Hardware Configuration ..... 61
4.7.3  Operating System Configuration ..... 62
4.7.4  System Software Configuration ..... 62
4.7.5  tblDefaults Parameters ..... 64
4.7.6  Model Customizations ..... 65
4.7.7  Model Design ..... 66
4.7.8  Model Maintenance ..... 70
4.7.9  Identifying Performance Bottlenecks ..... 71
4.7.10  FACT, FAC2, and WB Tables ..... 73
4.7.10.1  FACT, FAC2, and WB Table Optimization ..... 74
4.7.10.2  FACT Partitioning ..... 75
Chapter 5  High Availability ..... 77
5.1  High Availability Recommendations ..... 77
Chapter 6  Software Change Management ..... 79
6.1  Transport and Change Management ..... 80
6.1.1  Test Environment Usage ..... 80
6.1.2  Dimension Updates ..... 81
6.1.3  Security Profile Updates ..... 82
6.1.4  Logic File Migration ..... 82
6.1.5  Data and Data File Migration ..... 83
6.1.6  Transformation and Conversion File Migration ..... 84
6.1.7  Data Manager Package Migration ..... 84
6.1.8  Moving Microsoft SQL Server-Based Data Manager Packages ..... 85
6.1.9  Moving File-based Data Manager Packages ..... 85
6.1.10  Report, Input Form, and Book Migration ..... 86
6.1.11  Documents View Republication ..... 86
6.2  Support Packages and SAP Notes Implementation ..... 86
Chapter 7  Troubleshooting ..... 87
7.1  Planning and Consolidation Version Information ..... 87
7.2  Analyzing Problems Using Solution Manager Diagnostics ..... 87
7.3  Installing Appsight Black Box Service ..... 87
7.4  Reporting and Analyzing System Changes ..... 88
7.5  Generating and Analyzing Trace Files Using E2E Trace ..... 89
7.6  Logging and Tracing Configuration ..... 92
7.7  Troubleshooting Client and Server Issues ..... 93
7.8  Troubleshooting Reports ..... 93
7.9  Troubleshooting in Data Manager ..... 94
Chapter 8  Support Desk Management ..... 97
8.1  Remote Support Setup ..... 97
8.2  CA Wily Introscope Integration ..... 97
8.3  Problem Message Handover ..... 98


1 Important SAP Notes

CAUTION

Check regularly to see which SAP Notes are available for this Administrator's Guide.
Important SAP Notes

SAP Note Number: 1573539
Title: SAP Planning and Consolidation 10.0 SP00, version for the Microsoft platform
Comments: This is the Central Note for Planning and Consolidation 10.0. This note contains important information about the installation of SAP Planning and Consolidation 10.0, version for the Microsoft platform.


2 Getting Started

CAUTION

This guide does not replace the daily operations handbook that we recommend customers create for their specific production operations.
About this Guide

Designing, implementing, and running Planning and Consolidation at peak performance 24 hours a day has never been more vital for your business success. This guide provides a starting point for managing Planning and Consolidation solutions and for maintaining and running them optimally. It contains specific information for various tasks and lists the tools that you can use to implement them. This guide also provides references to the documentation required for these tasks, so you sometimes also need other guides, such as the Master Guide and the SAP Library.
Naming Conventions

In this document, the following naming conventions apply:


Variable            Description
<PC_server:port>    The server name or IP address and port number of the SAP Planning and Consolidation application location.
<drive>             The drive where SAP Planning and Consolidation is installed. The default is C:\PC_MS.


3 Monitoring of Planning and Consolidation

There are a number of methods available to you for monitoring your Planning and Consolidation system. Not all of the applications described in the following sections may be available to you, and you may also have alternative monitoring tools that you use. The following sections describe some of the SAP-recommended monitoring applications, as well as how to set up a user with minimal access to Planning and Consolidation who can provide support without impacting the Planning and Consolidation data.

3.1 Setting up a Minimal-Access User for Support


A user who performs monitoring and diagnostic services requires only minimal access to Planning and Consolidation data. Therefore, it may be advisable to create a user with minimal access rights, particularly for remote monitoring and diagnosis. This user can send and retrieve Planning and Consolidation data but cannot modify system settings in the Administration Client, such as managing environments or models. This level of access is sufficient for the purposes of remote support and monitoring using functions such as End-to-End (E2E) trace. The following procedure describes the creation and configuration of such a user.
Procedure

1. Log on to the EnvironmentShell environment through the Administration Client.
2. Create a task profile for the new user: choose Security > Task Profiles, and then select Add New Task Profile from the Manage Task Profiles action pane.
3. In the Add New Task Profile assistant, enter a suitable task profile name and description.
4. Choose Next and select tasks as listed in the following table:
Comments: Administer Comments, Edit Comments
Web reporting: Administer Documents, Edit Crystal Dashboards, Edit Documents, Edit Reports, Edit Workspaces
Data Manager: Cancel Any User Packages, Download Data, Edit Packages, Edit Transformation Files, Edit Conversion Files, Edit Package Schedules for Any User, Run Administration Packages, Run Packages
Publish: Edit Book and Distribution templates, Publish Books, Run Documents from EPM add-in
Folder Access: Edit content of Public Folder
Journal: Edit Journals, Lock/Unlock Journals, Post Journals, Reopen Journals, Unpost Journals, View Journals
Consolidations: Edit Ownership Manager, View Ownership Manager
Analysis and Collection: Edit Templates, Run Drill Throughs, Use Input Forms and Save
Data Audit: Manage Audit
Automated Variance Analysis: Manage KPIs, Use Automated Variance Analysis
System Report: Run Audit Reports, Run Business Process Flow Reports, Run Comment Reports, Run Security Reports, Run Work Status Reports
Business Process Flows: Use Business Process Flows
Collaboration: Use Offline Collection, Use Offline Distribution
System Security: Use System When Offline
Work Status: Use Work Status Management
Console: View System Logs

5. Choose Security > Users, and then select Add new user from the Manage Users action pane.
6. In the Add New Users wizard, enter a suitable user name or use the Search function to find an available user name.

NOTE
A user name may only comprise alphanumeric characters and the underscore (_); that is, the characters \ / : * ? < > | ; , & % are not permitted.

7. Choose Next at the next two steps, and then assign the task profile that you created to the user.
8. Check that the user has no teams assigned and only the minimal task profiles assigned, and click Apply.
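The user-name rule in the note above (alphanumeric characters and underscore only) can be expressed as a small validation helper. This is an illustrative sketch, not part of the product; the function name is invented.

```python
import re

# Valid Planning and Consolidation user names may contain only
# A-Z, a-z, 0-9, and the underscore (_). The re.ASCII flag
# restricts \w to exactly that character class.
_VALID_USER_NAME = re.compile(r"\w+", re.ASCII)

def is_valid_user_name(name: str) -> bool:
    """Return True if the name satisfies the user-name restriction."""
    return _VALID_USER_NAME.fullmatch(name) is not None
```

Characters such as \ / : * ? < > | ; , & % all fail the check, matching the restriction described in the note.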

Result

You have created a user with minimal access to the Planning and Consolidation system. Anybody who needs access for remote support and monitoring can use this user.
NOTE

You may need to configure the individual functions and tools to accept this user. For more information, see the documentation for the individual functions and tools.

3.2 Monitoring with the Management Console


The Management Console monitors a Planning and Consolidation and Microsoft Platform installation at the hardware, server software, and Planning and Consolidation level to provide system baselines, to allow proactive system performance optimization, and to assist in platform support and troubleshooting.

3.2.1 Information and Actions in the Management Console


Information in the Management Console

The following information is available in the Management Console:
- The status of Planning and Consolidation processes
- Performance metrics of the Windows Server operating system, the SQL Server, and Analysis Services
- A list of which Planning and Consolidation users are active on the system
- Details about what function each user is executing
- A list of resource-intensive processes

The Management Console also has access to a detailed server diagnosis utility.
Actions in the Management Console

The following actions can be performed in the Management Console:
- Activate and deactivate Planning and Consolidation logging
- End resource-intensive processes
- Manage logging schedules (archiving and deletion parameters)
NOTE

There is no means of changing logging severity levels in the Management Console; system logs are either enabled or disabled. By default, logging is disabled. For more information about enabling or disabling logs, see Managing Event Log Records [page 24]. For information about the severity levels recorded in the logs, see the Status field description in Management Console Logging Fields [page 25].
NOTE

Only an authorized system administrator can perform these actions.

3.2.2 Management Console Installation and Use


Management Console Installation

For information about setting up the Management Console, see http://service.sap.com/instguidescpm-bpc > Planning and Consolidation 10.0, version for the Microsoft platform > SBOP Plan & Consol 10.0 M Installation Guide.
To access the Management Console, open a browser and enter http://<PC_server:port>/sapmc/sapmc.html, where <PC_server:port> describes your Planning and Consolidation application server.
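The console address follows a fixed pattern, so it can be assembled programmatically, for example when scripting health checks across several application servers. A minimal sketch; the host name below is a placeholder, not a real server.

```python
from urllib.parse import urlunsplit

def console_url(server: str, port: int) -> str:
    """Build the Management Console URL for a given <PC_server:port>."""
    # Components: (scheme, netloc, path, query, fragment)
    return urlunsplit(("http", f"{server}:{port}", "/sapmc/sapmc.html", "", ""))

# Placeholder host name; substitute your own application server.
print(console_url("pcserver.example.com", 80))
```

The resulting URL can then be opened in a browser or probed with an HTTP client from a client machine.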
Centralized Monitoring in a Server Farm

In an environment where multiple application servers are implemented (for the purposes of providing high availability and network load balancing), each server has its own Management Console. You can use each console to monitor the server on which it is installed. You can also monitor any other server in the server farm environment by selecting the link to that server's Management Console from the dropdown list at the top of the Management Console. To provide a centralized view of the application servers, you can use SAP Solution Manager diagnostics to link to the various consoles. Additional monitoring capabilities are provided by CA Wily Introscope. See CA Wily Introscope Integration [page 97].


Prerequisites

The Management Console requires Microsoft SQL Server 2008.

3.2.3 Starting and Stopping Planning and Consolidation Remotely


For maintenance and support purposes, it is often useful to be able to start and stop Planning and Consolidation remotely. You can do this from the Management Console.
Prerequisites

- Ensure that you have a Java Virtual Machine installed on your client and on your browser
- Configure your browser to allow Java
- Enable the Java plug-in
Procedure

1. Start the Management Console: http://<PC_server:port>/sapmc/sapmc.html.
2. When the Management Console opens, choose New.
3. In the New System dialog, enter the Hostname of the Planning and Consolidation system that you want to address and the Managed Object (the name of the Planning and Consolidation system). After you have entered the Hostname, you can also choose Discover to locate all SAP instances on the server. Choose Finish.
4. Enter your username and password. The Planning and Consolidation instance appears in the tree view at the left of the Management Console. You can expand the tree to view the Planning and Consolidation components; selecting a component displays information about it on the right of the Management Console.

Right-clicking any component in the Planning and Consolidation tree, including the main entry, shows a context menu with the following options:
- Start
- Stop
- Manage DB
- Connect to server
- Export

These options are self-explanatory. If the component is running, Start is grayed out; similarly, Stop is grayed out if the component is not running.


3.2.4 Monitoring the Application and the Database


The Management Console provides performance metrics for the Windows Server Operating System, the SQL Server, and Analysis Services. In addition, many of these metrics are summarized on the Management Console Home page. The information provided for each of these system tiers is detailed in the tables in the following topics.

3.2.4.1 Management Console Home Screen Fields


The following table lists the fields on the Management Console Home screen.

Process ID (Home > Task Manager)
  A numerical identifier that uniquely identifies a process while it runs.
Image Name (Home > Task Manager)
  The process name.
Mem Usage (Home > Task Manager)
  The amount of physical memory used by the process. By default, this column is sorted in descending order, so that the most resource-intensive process appears at the top.
Task ID (Home > Center pane)
  The process ID of the currently selected task.
App ID (Home > Center pane)
  The model ID of the currently selected task.
Name (Home > Center pane)
  The name of the currently selected task.
Description (Home > Center pane)
  The description of the currently selected task.
Service Name (Home > Center pane)
  The service name of the currently selected task.
Shutdown After (Home > Center pane)
  The value that appears after system shutdown.
Hits By User (Home > Hits by User)
  The number of times that each user has accessed Planning and Consolidation.
Status Breakdown (Home > Status Breakdown)
  Displays a graphical representation of the HTTP status codes in the system, by percentage.
CPU Utilization (Home > System Performance (App Server))
  Indicates the current processor load, as a percentage. This number is an average across all of the processors or cores available on the machine.
Memory: Pages Writes/sec (Home > System Performance (App Server))
  Indicates the number of physical memory pages written per second by the system. The value is a sum over all physical memory banks in the machine.
Available Bytes (Home > System Performance (App Server))
  The physical memory, in available bytes, that can be used by the system without the need to use virtual memory or the page file. The value is a sum over all physical memory banks in the machine.
Avg Disk Bytes per Read (Home > Server Disk IO)
  The average number of bytes per read from the physical disk(s) in the machine.
Avg Disk Write Queue Length (Home > Server Disk IO)
  The average number of write requests that were queued for the selected disk during the sample interval.
Current Disk Queue Length (Home > Server Disk IO)
  The current number of queued disk operations. This value is an aggregate of all drives on the machine.
Current Locks (Home > Analysis Services Connections and Locks)
  Indicates the number of locks on tables or models currently in the database. This number is an aggregate of the locks on all of the databases on the machine.
Current Lock Waits (Home > Analysis Services Connections and Locks)
  Indicates the number of clients waiting for a lock to be released. This is an aggregate of all of the clients waiting for lock releases for all of the databases on the machine.
Current Connections (Home > Analysis Services Connections and Locks)
  The number of current, active server connections.
Cache Evictions per Second (Home > Analysis Services Connections and Locks)
  Indicates the frequency with which the cache is returning information to the disk. If this number consistently ranges from 50 to 100, evaluate your hardware resources.

NOTE
The App ID, Name, Description, Service Name, and Shutdown After fields contain a value only when a DLL host is selected. You can use that value to determine the identity of the COM object.
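The Mem Usage column described above sorts processes in descending order so that the heaviest consumer is listed first. The same ordering is easy to reproduce from any process snapshot, which is useful when scripting checks outside the console. The records below are invented sample data, not console output.

```python
# Sample rows mimicking the console's Home screen task list
# (process names and numbers are invented for illustration).
processes = [
    {"pid": 4212, "image_name": "w3wp.exe", "mem_usage_kb": 512_000},
    {"pid": 1044, "image_name": "sqlservr.exe", "mem_usage_kb": 2_048_000},
    {"pid": 3980, "image_name": "msmdsrv.exe", "mem_usage_kb": 1_536_000},
]

def by_memory_desc(rows):
    """Sort process rows by memory usage, heaviest consumer first."""
    return sorted(rows, key=lambda r: r["mem_usage_kb"], reverse=True)

# The first row after sorting is the most resource-intensive process.
print(by_memory_desc(processes)[0]["image_name"])
```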

3.2.4.2 Management Console Database Server Fields


The following table describes the Management Console database server fields.

SPID, ECID, Status, Login Name, Host Name, BLK, DBName, Command, Request ID (Database Server Summary > SQL Server Who)
  Refer to the Microsoft SQL Server documentation.
OLE DB Calls (Database Server Summary > SQL Server Statistics)
  The number of OLE database calls.
  CAUTION: Ensure that this value is never greater than the number of user connections.
Active Temp Tables (Database Server Summary > SQL Server Statistics)
  The database server utilizes temp tables to handle data-intensive queries. A lack of temp tables under these conditions can indicate low disk space.
Processes Blocked (Database Server Summary > SQL Server Statistics)
  Indicates the number of processes that are blocked because of a query.
CPU Utilization (Database Server Summary > System Performance (Database Server))
  Indicates the current processor load, as a percentage. This number is an average across all of the processors or cores available on the machine.
Memory: Pages Writes/sec (Database Server Summary > System Performance (Database Server))
  Indicates the number of physical memory pages written per second by the system. The value is a sum over all physical memory banks in the machine.
Available Bytes (Database Server Summary > System Performance (Database Server))
  The physical memory, in available bytes, that can be used by the system without the need to use virtual memory or the page file. The value is a sum over all physical memory banks in the machine.
Current Locks (Database Server Summary > Analysis Services Connections & Locks)
  Indicates the number of locks on tables or models currently in the database. This number is an aggregate of the locks on all of the databases on the machine.
Current Lock Waits (Database Server Summary > Analysis Services Connections & Locks)
  Indicates the number of clients waiting for a lock to be released. This is an aggregate of all of the clients waiting for lock releases for all of the databases on the machine.
Current Connections (Database Server Summary > Analysis Services Connections & Locks)
  The number of current, active server connections.
Cache Evictions per Second (Database Server Summary > Analysis Services Connections & Locks)
  Indicates the frequency with which the cache is returning information to the disk.
  RECOMMENDATION: If this number consistently ranges from 50 to 100, we recommend that you evaluate your hardware resources.
Avg Disk Bytes per Read (Database Server > SQL Server > Who)
  The average number of bytes per read from the physical disks in the machine.
Avg Disk Write Queue Length (Database Server > SQL Server > Database Disk IO)
  The average number of write requests that were queued for the physical disks in the machine.
Current Disk Queue Length (Database Server > SQL Server > Database Disk IO)
  The current number of queued disk operations. If this number spikes during poor processing performance, especially during the base or aggregating phases, then the current disk storage solution may not be adequate to support the needs of Analysis Services. Ideally, the value of this performance counter should be as low as possible at any given time.
User Connections (Database Server > SQL Server > Server Statistics)
  The number of users with a current, open connection to the database server.
OLE DB Calls (Database Server > SQL Server > Server Statistics)
  The number of OLE database calls.
  CAUTION: Ensure that this value is never greater than the number of user connections.
Active Temp Tables (Database Server > SQL Server > Server Statistics)
  The database server utilizes temp tables to handle data-intensive queries. A lack of temp tables under these conditions can indicate low disk space.
Processes Blocked (Database Server > SQL Server > Server Statistics)
  Indicates the number of processes that are blocked because of a query.
Operations by Database
  A graphical display of the current, active processes, categorized by database.
SPID, Status, Login Name, Host Name, BLK By, DB Name, Command, CPU Time, Disk IO, Last Batch, Program, SPID 1, Request ID; Instance, SqlMessage, Message, Step ID, Step Name, SqlSeverity, JobID, JobName, RunStatus, RunDate, RunDuration, OperatorEmailed, OperatorNetsent, OperatorPaged, RetriesAttempted, Server, Current Locks
  Refer to the Microsoft SQL Server documentation.

Database Server SQL Server Server Statistics Database Server SQL Server Operations by Database Database Server SQL Server Who2

SQL Who

Refer to Microsoft SQL Server documentation.

Database Server SQL Server Server Agent Statistics

SQL

Current Lock Waits

Current Connections

Database Server Analysis Services Indicates the number of locks on tables or Analysis Services Connections & models currently in the database. This number is an aggregate of the locks on all of Locks the databases on the machine. Database Server Analysis Services Indicates the number of clients waiting for a Analysis Services Connections & lock to be released. This is an aggregate of all of the clients waiting for lock releases for all of Locks the databases on the machine. Database Server Analysis Services The number of current, active server Analysis Services Connections & connections. Locks

22/102

PUBLIC

2011-05-10

3 3.2 Field

Monitoring of Planning and Consolidation Monitoring with the Management Console Description Path

Cache Evictions per Second

Database Server Analysis Services This number indicates the frequency with Analysis Services Connections & which the cache is returning information to Locks the disk.
RECOMMENDATION

Avg Disk Bytes per Read Avg Disk Write Queue Length Current Disk Queue Length

If this number consistently ranges from 50 to 100, we recommend that you evaluate your hardware resources. The average number of bytes per read from the physical disks in the machine. The average number of write requests that were queued for the physical disks in the machine This counter represents the current number of queued disk operations. If this number spikes during poor processing performance, especially during the base or aggregating phases, then the current disk storage solution may not be adequate to support the needs of Analysis Services. Ideally, the value of this performance counter should be as low as possible at any given time.

Database Server Analysis Services Analysis Services Disk IO Database Server Analysis Services Analysis Services Disk IO Database Server Analysis Services Analysis Services Disk IO

3.2.5 Alert Monitoring with the Management Console


You can set alerts in the Management Console to provide a visual cue when designated server thresholds are exceeded. For example, you can set an alert when CPU Utilization exceeds a certain percentage.

Setting alerts on the application server
  Navigation: Home > System Performance (App Server)
  What you need to know: Choose Set Thresholds. The System Performance panel is highlighted by a red border while the threshold is exceeded.

Setting alerts on the database server
  Navigation: Database Server Summary > System Performance (Database Server)
  What you need to know: Choose Set Thresholds. The System Performance panel is highlighted by a red border while the threshold is exceeded.
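The alert rule described above amounts to a simple comparison of a monitored value against its configured threshold. The following sketch illustrates that logic only; the function and data names are hypothetical, and the Management Console's actual implementation is not exposed.

```python
# Illustrative sketch of the threshold-alert rule: a metric is flagged
# while its current value exceeds the configured threshold.
# All names and sample values here are hypothetical.

def exceeded_thresholds(metrics, thresholds):
    """Return the names of metrics whose current value exceeds the threshold."""
    return [name for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

# Example: alert when CPU Utilization goes above 90 percent.
metrics = {"CPU Utilization": 94.0, "Available Bytes": 2_147_483_648}
thresholds = {"CPU Utilization": 90.0}
print(exceeded_thresholds(metrics, thresholds))  # ['CPU Utilization']
```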


3.2.6 Identifying and Stopping a Resource-Intensive Process


To identify a resource-intensive process, examine the Task Manager on the Home screen. By default, the processes in this table are sorted in descending order of memory usage, so the most resource-intensive process appears at the top. To stop a process, select it in the Task Manager on the Home screen, and choose Stop Process. For more information, see Management Console Home Screen Fields [page 18].
NOTE

You can set the recovery options for a process to determine whether or not it restarts automatically after you end it using the Management Console. To examine and modify these settings in Microsoft Windows, choose Start > Administrative Tools > Services. In the context menu of the service, choose Properties. On the Recovery tab, select the conditions under which the service should restart.
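The default Task Manager ordering described above is a descending sort on memory usage. The sketch below illustrates that ordering on sample data; the process list is made up for the example, not read from a live system.

```python
# Hypothetical sketch of the Task Manager ordering: processes sorted in
# descending order of memory usage, so the most resource-intensive
# process appears first. The sample data below is illustrative only.

def sort_by_memory(processes):
    """Sort (name, memory_bytes) pairs in descending order of memory."""
    return sorted(processes, key=lambda p: p[1], reverse=True)

sample = [("w3wp.exe", 512_000_000), ("dllhost.exe", 48_000_000),
          ("BPCSendGovernor.exe", 96_000_000)]
top = sort_by_memory(sample)[0]
print(top[0])  # w3wp.exe
```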

3.2.7 Performing a Detailed Server Diagnosis


In the Management Console, you can run a diagnostic utility to create a detailed report that includes information about:
- The status of the various component services
- Database connectivity
- Microsoft Internet Information Services (IIS) configuration
To run a detailed server diagnostic, choose Planning and Consolidation > Logging > Server Diagnostic.

3.2.8 Management Console Event Logs

3.2.8.1 Managing Event Log Records

The debug logs in Planning and Consolidation are optional and, by default, not active. However, the logging section does display all of the status and error messages that Planning and Consolidation issues during normal operation. In the logging area of the Management Console, you can:
- Activate or deactivate debugging logs
- Set the amount of time that elapses before logs are deleted or moved to a history table in the database
Activate or deactivate optional component logging
  Navigation: Planning and Consolidation > Logging > Admin Options > Manage Debug Logs
  What you should know: For performance reasons, do not enable debugging logs on an open-ended basis in a production environment. Enable debugging logs for just the minimum time needed to complete your analysis.

Move current logs to history logs
  Navigation: Planning and Consolidation > Logging > Admin Options > Manage System Logs > Archive current history
  What you should know: This function moves all of the logging messages currently being displayed on the screen to the database. Current logs are saved in tblLogs, while archived logs are saved in tblLogHist.

Delete all history logs
  Navigation: Planning and Consolidation > Logging > Admin Options > Manage System Logs > Delete history logs
  What you should know: This function deletes all history logs in the system.

Archive dated history logs
  Navigation: Planning and Consolidation > Logging > Admin Options > Schedule > Move current logs if
  What you should know: These logs are moved to the tblLogHist database table.

Delete dated history logs
  Navigation: Planning and Consolidation > Logging > Admin Options > Schedule > Purge history logs if
  What you should know: This function deletes history logs, based on conditions that you specify.
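The move-to-history behavior above can be pictured as copying rows from the current table to the history table and then clearing the current table. The sketch below illustrates that pattern; the table names tblLogs and tblLogHist come from the text, but SQLite and the two-column schema are stand-ins for illustration only (the real tables live on the SQL database server).

```python
# Illustrative sketch of "move current logs to history logs": rows are
# copied from tblLogs into tblLogHist, then removed from tblLogs.
# SQLite and the minimal schema are assumptions for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblLogs (id INTEGER, message TEXT)")
conn.execute("CREATE TABLE tblLogHist (id INTEGER, message TEXT)")
conn.executemany("INSERT INTO tblLogs VALUES (?, ?)",
                 [(1, "info"), (2, "warning")])

def archive_current_logs(conn):
    """Move every row from tblLogs into tblLogHist."""
    conn.execute("INSERT INTO tblLogHist SELECT * FROM tblLogs")
    conn.execute("DELETE FROM tblLogs")
    conn.commit()

archive_current_logs(conn)
print(conn.execute("SELECT COUNT(*) FROM tblLogHist").fetchone()[0])  # 2
```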

3.2.8.2 Management Console Logging Fields


The fields listed below can be found in the logging section. To access these fields, choose Planning and Consolidation > Logging > Planning and Consolidation Logging.

Features

You can search for records by any of the fields listed below. However, because of a Management Console limitation, we recommend that you avoid the use of time parameters in a search.

ID
  The job or process ID.
  NOTE: You can double-click this field to open the Planning and Consolidation Logging dialog box, which contains message metadata, including detailed information or error messages (where applicable).

System
  This field displays the source Planning and Consolidation component.

Job
  The job or process name.

Status
  The severity level of the logging message:
  - Information (contains descriptive information about the event)
  - Warning (this event may warrant further investigation)
  - Error (this log entry notifies you of an unexpected or adverse outcome)

Date
  The date on which the process executed.


3.3 Log and Trace Files List


This release of Planning and Consolidation creates the following model-specific log and trace files, independent of those of the Management Console. They are located in the logging directory of each Planning and Consolidation host:
- Trace files: bpctrace.x.log, located in <Drive>/logging/trace, where x is a number between 0 and 9, for example, bpctrace.5.log. The default severity of traces is Error.
- Log files: bpclog.x.log, located in <Drive>/logging/log, where x is a number between 0 and 9, for example, bpclog.5.log. The default severity of logs is Info.
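The rotated file names above follow a fixed pattern, so the path of any given rotation index can be derived mechanically. The sketch below builds those paths; the drive letter is an assumption for the example, and the helper names are hypothetical.

```python
# Sketch of the rotating file-name convention: bpctrace.x.log and
# bpclog.x.log, where x cycles through 0-9. The drive letter "C:" used
# in the example is an assumption; substitute your installation drive.

def _rotated(drive, subdir, stem, index):
    """Build the path of a rotated log/trace file (index 0-9)."""
    if not 0 <= index <= 9:
        raise ValueError("rotation index must be between 0 and 9")
    return rf"{drive}\logging\{subdir}\{stem}.{index}.log"

def trace_file(drive, index):
    return _rotated(drive, "trace", "bpctrace", index)

def log_file(drive, index):
    return _rotated(drive, "log", "bpclog", index)

print(trace_file("C:", 5))  # C:\logging\trace\bpctrace.5.log
```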

3.4 Other Planning and Consolidation Log Files


This section details log files from an operations perspective.

3.4.1 Miscellaneous Log Files


The following table contains a list of miscellaneous Planning and Consolidation log files:
osoftdiagnostic.txt
  Location: serverpath/server management/
  Description: Output of the SAP Planning and Consolidation Server Diagnostics program

HTTPERR
  Location: systemroot/LogFiles/
  Description: IIS HTTP error log files

*.log
  Location: systemroot/LogFiles/W3SVC1/
  Description: General IIS log files

3.4.2 Trace File for Debugging Logic


You can turn on tracing for script logic and business rules when you need to troubleshoot a particular script or rule. We recommend that this feature be used only by experienced Planning and Consolidation consultants and support staff. We also recommend that the trace files be removed periodically, since they take up a considerable amount of space. The activity is recorded in a file called debuglogic.log, which is stored in <drive>\webfolders\<environment>\<model>\privatepublication\<username>\<date>.


3.4.3 Data Manager Log Files


Whenever you use a Data Manager package to move Planning and Consolidation data, the system creates a log file. This file can be useful in troubleshooting the execution of packages. We recommend that these files be removed once the packages have completed, since they take up a considerable amount of space. These logs are stored in <Drive>\webfolders\<environment>\<model>\privatepublication\<username>\tempfiles. The name of the log file contains the following details:
- The name of the package
- A timestamp
- The extension .LOG
EXAMPLE: Validatetransformation20090915211503.log
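Because the file name is simply the package name followed by a 14-digit timestamp (yyyymmddhhmmss, as in the example above), the two parts can be split back apart. The parsing sketch below is illustrative; only the file name in the example comes from the text.

```python
# Sketch of the Data Manager log naming convention:
# <package name><timestamp>.log, with a 14-digit yyyymmddhhmmss stamp.
# The parsing helper is hypothetical, not part of the product.
from datetime import datetime

def parse_dm_log_name(filename):
    """Split a Data Manager log file name into (package, timestamp)."""
    stem = filename[:-4] if filename.lower().endswith(".log") else filename
    package, stamp = stem[:-14], stem[-14:]
    return package, datetime.strptime(stamp, "%Y%m%d%H%M%S")

pkg, ts = parse_dm_log_name("Validatetransformation20090915211503.log")
print(pkg, ts.year)  # Validatetransformation 2009
```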

3.5 Computing Center Management System


You can set up the Computing Center Management System (CCMS) within SAP Solution Manager for monitoring your SAP Planning and Consolidation system.

3.5.1 Monitoring with the Computing Center Management System


You can set up the Computing Center Management System (CCMS) within SAP Solution Manager to monitor the .NET application servers within your SAP Planning and Consolidation system. You can set up log file and process monitoring to monitor managed hosts, which are the SAP Planning and Consolidation application servers. Monitoring in the central monitoring system is based on SAP CCMS agent functionality, which you must install on the monitored hosts. For information about setting up CCMS for use with Planning and Consolidation, see SAP Note 1379213. After setting up and configuring monitoring for SAP Planning and Consolidation, log on to SAP Solution Manager, then access CCMS. In the SAP Menu, select Tools > CCMS > Control/Monitoring > CCMS Monitor Sets (transaction code RZ20). The following monitor sets are available in CCMS within SAP Solution Manager for Planning and Consolidation:


Availability Monitoring

A simple Generic Request and Message Generator (GRMG) scenario for SAP Planning and Consolidation is available. This checks the availability of the SAP Planning and Consolidation application server, and presents the status of its current availability.
Error Monitoring

The log file for SAP Planning and Consolidation is monitored for error patterns. The monitor presents its status and alerts according to the presence of error messages in the SAP Planning and Consolidation log, which is located in <Drive>\logging\log. If errors occur in this log, you can display them in CCMS by selecting Open Alerts.
Operating System Monitoring

The servers hosting the SAP Planning and Consolidation application are monitored for resource consumption. Operating system metrics such as overall CPU and memory consumption are reported. In addition, the following operating system processes are monitored:
- w3wp.exe (Microsoft Internet Information Services application pool process)
- dllhost.exe (DLL application process)
- BPCSendGovernor.exe (SAP Planning and Consolidation Send Governor process)
You can customize the thresholds for alert triggering to suit your business needs. You can access technical configuration details at http://server_name:port/osoft/app/SMDWebService/BPCSMDService.asmx.
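A basic form of the process monitoring above is checking which of the monitored process names are absent from the set of running processes. The sketch below illustrates that check only; CCMS collects the real values through its agents, and the running list here is sample data.

```python
# Illustrative sketch of the process check: given the names of currently
# running processes, report which monitored ones are missing.
# The running-process list is sample data, not read from a live host.
MONITORED = ["w3wp.exe", "dllhost.exe", "BPCSendGovernor.exe"]

def missing_processes(running, monitored=MONITORED):
    """Return the monitored process names not present in `running`."""
    running_lower = {name.lower() for name in running}
    return [p for p in monitored if p.lower() not in running_lower]

print(missing_processes(["w3wp.exe", "dllhost.exe"]))  # ['BPCSendGovernor.exe']
```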

3.6 Change and Transport System


The Change and Transport System (CTS+) supports the movement of transportable content between Planning and Consolidation servers.
Prerequisites

The Change and Transport System requires the following prerequisite applications:
- SAPJco3: The SAP Java Connector (SAP Jco) is a toolkit that allows a Java application to communicate with any SAP system.
- di_cmd_tools: This is part of the SAPJco toolkit.
- Java JDK: The Java Development Kit.


- Vcredist_x86.exe: The Microsoft Visual C++ redistributable.
For information about downloading and installing these applications, together with information about environment variable settings and configuring CTS+, see the Application Help.
Features

The purpose of transporting content with CTS is to ensure that two or more Planning and Consolidation servers have the same content and look and feel, by transporting configurations and content from one system to another. This means, for example, that you can duplicate the look and feel of your development environment in the quality and production environments. CTS is an export tool: you select the configurations and content for transport, and CTS creates a transport package. You can download this transport package and import it on the target server. You can use the CTS tool for the following activities:
- Selecting the content to include in the transport
- Attaching additional objects to a transport request
- Viewing the objects in a transport request
- Exporting the selected content to the transport package
- Organizing the transport
CTS does not provide any means of tracking or monitoring between the source and target systems. You must ensure that you always use the correct transport packages, or existing content on the target system may be overwritten.
More Information

For more information about CTS+, and how to transport content, see the Application Help.

3.7 Checking COM+ Objects


You can check the state of your COM+ components using the OS command BPCCOMPlusCheck. You can access and use this command in the SAP Solution Manager Diagnostics OS Command Console.
NOTE

As of Planning and Consolidation 10.0, to improve performance, only the following COM+ applications are used:
For Planning and Consolidation:
- Data Manager, which has components that handle some Data Manager tasks
- x86, which has components for the services control wrapper and transfer members


For the BUI:
- .NET Data Helper x86, which has application printing components
Features

This tool checks the existence of all Planning and Consolidation COM+ applications and classes. If the classes exist, the tool tries to create an instance of each class. To resolve an error for a particular COM+ component, do the following:
1. Check that the component exists in the Global Assembly Cache (GAC). If it does not exist, register it using gacutil /i <FileFullPath>.
2. Use the regsvcs <FileFullPath> .NET command to register the component with COM+.
3. Verify that the COM+ application exists using Component Services. If it does not exist, do the following:
   1. Create a new empty COM+ application manually.
   2. Drag and drop the VB assembly into the empty application.
4. Change the account for the registered COM+ object on the Identity tab of its properties.
5. Change the COM+ security settings (Connect/Impersonate).
To execute the COM+ check tool and display its results in SAP Solution Manager Diagnostics, access SAP Solution Manager, then choose Workcenter > Root Cause Analysis (transaction SOLMAN_WORKCENTER) > Host Analysis > <SAP BPC system> > OS Command Console. In the OS Command Console application, select the host and the OS Command Group SAP BPC, and execute the command BPCComplusCheck with Send Command.
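Steps 1 and 2 of the resolution procedure boil down to two command lines built from the assembly path. The sketch below only assembles those strings for a given path; actually running gacutil and regsvcs requires the .NET Framework tools on the Windows host, and the helper name is hypothetical.

```python
# Sketch that assembles the registration command lines from steps 1-2
# above for a given assembly path. It only builds the strings; it does
# not execute gacutil or regsvcs.

def registration_commands(file_full_path):
    """Return the gacutil and regsvcs command lines for an assembly."""
    return [f'gacutil /i "{file_full_path}"',
            f'regsvcs "{file_full_path}"']

for cmd in registration_commands(r"C:\BPC\DataManager.dll"):
    print(cmd)
```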


4 Management of Planning and Consolidation

SAP provides you with an infrastructure to help your technical support consultants and system administrators effectively manage all SAP components and complete all tasks related to technical administration and operation.

4.1 Managing Your Planning and Consolidation Servers


You use the Server Manager to manage a Planning and Consolidation server. The Server Manager is accessible on the .NET application server. With the Server Manager, you can do the following:
- View server information. See Viewing Server Information [page 32].
- Change client and server options. See Client Options [page 32] and Server Options [page 33], respectively.
- Configure the System Landscape Directory (SLD). See Configure System Landscape Directory [page 34].
- Configure the Central Management System (CMS). See Configure Central Management System [page 35].
- Change the credentials for component services. See Changing the Credentials for Component Services [page 40].
- Define user groups within Active Directory domains. See Domain User Group Setup [page 39].
- Back up and restore environments. See Backing up and Restoring Environments [page 44].
- Change the Server Manager language by selecting Options > Language > <Language>.
System administrators can change environment parameters and model parameters within the Administration Client (see Environment Parameters and Model Parameters within the application help in the SAP Library at http://help.sap.com). For information on the tasks that system administrators and non-system administrators can perform in Server Manager, see Server Manager Security [page 42].


4.1.1 Viewing Server Information


You can use the Server Manager to view information about your Planning and Consolidation server components.
Features

The System Information window shows information about the server, including its version, the operating system version, and available memory. You can access the System Information window by doing the following: Start Server Manager by selecting (All) Programs > SAP > Server Manager from the Windows Start menu on your .NET application server. The System Information window opens by default. You can click the Refresh button to refresh the information on the screen.
More Information

Performing a Detailed Server Diagnosis [page 24] Managing Your Planning and Consolidation Servers [page 31] Monitoring with the Management Console [page 15]

4.1.2 Client Options


This function allows you to set or change options related to Planning and Consolidation clients.
Features

You can access the Client Options screen from the Server Manager by selecting Options > Client Options. Click Update to save your changes. You can set the following option:
Client Auto Update
  You can use the Client Auto Update program to set up the server so that it determines whether a connecting client upgrade is necessary. If the system finds a newer version, it prompts you to perform the upgrade. If turned off, clients must be installed manually.
NOTE

If users without administrator rights use the auto update, enter an administrator ID and password in the Admin ID and Admin password fields, respectively. The administrator ID should be a member of the local administrators group on all the computers running the auto update. This setting is not needed if all users have administrator rights.


4.1.3 Server Options


This function allows you to change options related to Planning and Consolidation servers. Many of the server options are set during the installation process.
Features

You can access the Server Options screen from the Server Manager by selecting Options > Server Options. After modifying the options, click Update to save your changes.
SQL
  Server Name: The name of the SQL DB server. Defaults to the local server.
  Instance name: The SQL instance name. If left blank, the default instance is used.
  Port number: The port number. The default for SQL Server is 1433.
  Provider: The only available value is SQL.

OLAP
  Server Name: The name of the OLAP server.
  Instance name: The OLAP instance name. The instance name can be changed.
  Port number: The default value is 2383.

File Share
  Server Name: The default value is the name of the File Share server, which should be the computer name (NetBIOS name).
  Local data path: Where the data files are saved on the File Share server. By default, <Drive>\Data.

Application
  Server Name: The name of the application server.
  External server name: The TCP/IP address for accessing the server from outside a firewall.
  Virtual server name: The server name for load balancing, if it is installed.
  Website: The IIS Web site name, if it differs from the default Web site.
  HTTP compression: The default value is No. (Yes provides better performance in some situations.)
  Protocol: The available values are http and https. The default value is http.
  Port number: The port number to which the application server connects. 80 is the default for http; 443 is the default for https.
  Authentication type: Windows or Kerberos. The default is Windows. If you change this value from Windows to Kerberos, you must make some additional Server Option changes. See Kerberos Authentication Settings [page 42]. Note: If you have chosen SAP BusinessObjects User Management System in the installation screen, Authentication type does not appear (as the IIS authentication type is Anonymous in CMS mode).

Scheduler
  Server Name: The name of the server used for scheduling, usually the application server, for example, GMPV50072862B. If you have multiple application servers, select the appropriate one.


4.1.4 Configuring the SLD Data Supplier


The System Landscape Directory (SLD) is the central directory of all of your system landscape information. It contains a repository of all SAP software and a representation of the technical systems, that is, the hosts on which software is installed, as well as the software products and components, versions, support packages, and patches that are currently installed.

The software components of a product version are installed on hosts and form systems that are visible to the administrator. An administrator must have knowledge about all the systems that are in the landscape and about the versions, support packages, and patches of the software components that are installed on these systems. This kind of information is stored in the SLD and is called the Landscape Description (LD). The SLD is also a repository of software components that can theoretically be installed in the landscape. This kind of information is stored in the SLD as a Component Repository (CR).

SLD data suppliers automatically register the systems on the SLD server and keep the system information up to date. They collect and send data about the systems to the SLD. For every newly discovered system or component, the SLD creates an association to the corresponding entry in the Component Repository. Thus, the SLD provides reliable and up-to-date system landscape information.
Prerequisites

The SLD Data Supplier requires the following components to work with Planning and Consolidation:
- Application server Java
- The most recent Component Repository (CR) Content
- A recent version of the Common Information Model (CIM)
For a definitive list of version requirements for these components, see the Product Availability Matrix on SAP Service Marketplace at http://service.sap.com/pam. Search on Planning and Consolidation.
NOTE

For more information about installing the CR Content and CIM, see SAP Note 669669.
Features

You can access the Configure SLD screen from the Server Manager by selecting Options > Configure SLD. Enter data as follows:
- Enable: If the check box is selected, the Planning and Consolidation system transfers system data to the SLD server periodically.
- Server host:


  The host on which the SLD server is deployed and running.
- Port: The HTTP or HTTPS port on which the SLD server is listening.
- User name: The user name associated with the SLD credentials.
- Password: The user password associated with the SLD credentials.
- Transfer interval (hours): The interval (in hours) at which the Planning and Consolidation system periodically transfers currently active data to the SLD server. This is only relevant if the Enable check box is selected.
- Use HTTPS: This indicates whether or not the data is transferred over a secure connection.
- System ID: This is automatically completed by the system and is read-only in this view. The System ID (SID) is a three-character string starting with a capital letter, followed by a combination of numbers or letters. This concept is used by all SAP systems. Customers maintain SIDs in the SAP Service Marketplace (SMP) for Planning and Consolidation systems so that they can log error messages under the relevant SID of their Planning and Consolidation system. Since customers can choose a SID in the Service Marketplace, they should also enter the same SID during the Planning and Consolidation installation, so that the system in Solution Manager can be identified by the same SID as in SMP.
- Update: Click Update to update the information in the SLD. This sends your system information (such as product version, components, and hardware information of the host, for example, number of CPUs and memory size) to the SLD. This is done by an HTTP (or HTTPS) POST request to the SLD server.
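The System ID format described above (three characters, starting with a capital letter, followed by capital letters or digits) can be expressed as a short pattern check. The sketch below validates only that basic shape; any additional reserved-SID rules enforced by SAP tools are not reproduced here.

```python
# Sketch of the SID shape described in the text: a three-character
# string starting with a capital letter, followed by letters or digits.
# This checks the basic format only, not SAP's reserved-SID lists.
import re

SID_PATTERN = re.compile(r"^[A-Z][A-Z0-9]{2}$")

def is_valid_sid(sid):
    """Check the basic shape of a three-character System ID."""
    return bool(SID_PATTERN.match(sid))

print(is_valid_sid("BPC"), is_valid_sid("1AB"))  # True False
```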

4.1.5 Configure the Connection to the Central Management System


This function allows you to set or change configuration options related to the Planning and Consolidation server authentication mode.
Prerequisites

The Central Management System (CMS) is installed and the Planning and Consolidation system has an administrator-level ID and password to the CMS.

Management of Planning and Consolidation Managing Your Planning and Consolidation Servers

To log on to CMS as an administrator, you need the following information:


System name: The CMS system name (server name:port number), for example, CMSServer:6400.
Trusted CMS name: The CMS system name. If a CMS cluster name exists, use the cluster name (separate multiple names with commas).
Authentication type: Select the appropriate value from the list.
Administrator ID: The user identifier of the dedicated administrator account.
Administrator password: The password for the administrator account.
Group name: The group name of the SAP BusinessObjects Enterprise users who have access to the system. This group name is filtered as the default when adding a user in Planning and Consolidation Administration. (Separate multiple names with commas.)
Web service URL: The default is http://<CMS name>:8080/dewbobje/. If your system is configured with an SSL protocol and a specific port, amend this protocol accordingly.

Features

The following authentication modes are possible:
- Windows
- CMS
The Server Manager Options menu has different entries according to the authentication mode selected.
Windows Authentication Mode

From the Server Manager Options menu, you have the following entries that are specific to the Windows authentication mode:
- Define System User Groups: Here you can define groups of Windows users that have similar system responsibilities.
- Enable CMS Authentication mode: Switch from Windows authentication to CMS authentication.
NOTE

If you have more than one server, you need to change the authentication mode on all servers.
CAUTION

Once CMS authentication is enabled, it is not possible to revert to Windows authentication.


CMS Authentication Mode

We recommend CMS authentication mode. On installation of Planning and Consolidation, CMS authentication mode is selected by default.


From the Server Manager Options menu, you have the following entries that are specific to the CMS authentication mode:
Configure CMS
The following parameters can be defined in the SAP BusinessObjects User Management System view:
System name: The name of the system.
Trusted CMS name: The CMS system name. If a CMS cluster name exists, use the cluster name (separate multiple names with commas).
Authentication type: Select the appropriate value from the list.
Administrator ID: The user identifier of the dedicated administrator account.
Administrator password: The password for the administrator account.
Group name: The group name of the SAP BusinessObjects Enterprise users who have access to the system. This group name is filtered as the default when adding a user in Planning and Consolidation Administration. (Separate multiple names with commas.)
Cache expiration duration: The time (in minutes) after which the cache is cleared.
Heartbeat interval: After a period of inactivity, the CMS session may expire at the server, after which the Planning and Consolidation system cannot communicate with the server. The Heartbeat function periodically simulates activity to keep the session active. Set the Heartbeat interval (in minutes) to a suitable level for the CMS session.

Click Update to update the user management system with these new values.

CMS Migration
Select this option to migrate all current Windows user authentication information to CMS user authentication information. To use this menu option, you should be logged on to CMS as an administrator. If you are not already logged on, enter the correct logon information when prompted. You are guided through the following actions:
1. Select one or more Windows environments to migrate to CMS from the list that is displayed. The system displays a list of Windows user IDs for the selected environments.
2. Click 1. Validate user to enable the system to align Windows and BusinessObjects user IDs. A green tick indicates that a corresponding BusinessObjects user exists, while a red cross means that there is no BusinessObjects user corresponding to the Windows user ID.
3. Click 2. Migrate, and then click OK to migrate the validated Windows user information to CMS.
4. When the migration is complete, click 3. View result to view a detailed result log of the migration process.

2011-05-10

PUBLIC

37/102

4 4.1

Management of Planning and Consolidation Managing Your Planning and Consolidation Servers

If required, you can save a copy of the result log. Finally, click 4. Set complete. The migrated environments are set as CMS mode environments and are no longer visible in the migration wizard.
More Information

Migrating Users to the Central Management System [page 38]

4.1.6 Migrating Users to the Central Management System


The Central Management System (CMS) migration wizard validates existing Windows users against corresponding CMS users. Then the wizard can migrate all existing user information to CMS user information. Existing Windows users that do not already exist in a CMS database should be set up individually in advance.
Prerequisites

CMS is installed and the Planning and Consolidation system has an administrator-level ID and password to the CMS.
Procedure

You access the CMS migration wizard from the Server Manager by choosing Options > CMS Migration Wizard. If you are not already logged on, you are required to log on to CMS as an administrator.
1. Validate user
The wizard displays a list of environments that can be migrated to CMS. Select one or more of these environments. The wizard displays a list of the Windows user IDs assigned to these environments. Click 1. Validate user. The wizard validates the existing Windows users against corresponding CMS users. A green tick indicates that a corresponding BusinessObjects user exists. A red cross means that a corresponding BusinessObjects user does not exist at the CMS server. In this case, click Cancel to exit the wizard and set up each user in CMS before returning to the wizard.
2. Migrate
When all the required users are validated, click 2. Migrate. All validated Planning and Consolidation user information (the users marked with a green tick) in the database table or file system is changed to CMS user information.
3. View result
Click 3. View result to obtain a detailed result log. You can save the result log if required.
4. Set complete
Click 4. Set complete. The migrated environments are set as CMS mode environments. These environments are no longer visible in the migration wizard.

Result

Users can log on to Planning and Consolidation through CMS authentication.


NOTE

If you have owner and reviewer properties defined in your dimension worksheets, you must update them manually if they contain any Windows user information.
More Information

Configure Central Management System [page 35]

4.1.7 Domain User Group Setup


Rather than allowing all users within Active Directory (AD) to access Planning and Consolidation, you can limit the pool of users by adding them to a particular domain user group and then giving access to only those users. This is important because if you try to add a user from the entire AD, the system may time out while searching.
Features

You can define user groups from the Server Manager by selecting Options > Define System User Groups. You use the following features for defining a user group:
Choosing user group names: The default group name is Domain users if a domain user installs the Planning and Consolidation server, and Local users if a local user installs it. The group name is displayed in the Add New Users assistant in the Administration Client. You can modify the settings for an existing group by selecting the name of the group from the list.
Defining filters: You use filters to define user groups. The following table includes examples of filters you can define:


Scenario | Example | Description
Single organizational unit (OU) | OU=Marketing | Finds users of the Marketing OU.
Multiple OUs | OU=Sales;OU=Marketing | Finds users of the Sales and Marketing organizational units.
Multiple OUs from a single container | OU=Sales;OU=Marketing;CN=Users | Finds users of the Sales and Marketing organizational units and the Users container.
A group (or user) in an OU | CN=DM,OU=Sales | Finds users of the DM group in the Sales organizational unit.
Multiple groups (or users) in an OU | CN=DM,OU=Sales;CN=DM,OU=Sales2 | Finds the users of the DM group in the Sales organizational unit and the users of the DM group in the Sales2 organizational unit.
Mixed condition | CN=DM,OU=Sales;CN=FR,OU=Sales2;CN=HR,CN=Users | Finds users of the DM group in the Sales organizational unit, users of the FR group in the Sales2 organizational unit, and users of the HR group in the Users container.
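To make the semicolon and comma semantics of these filter strings concrete, the following sketch decomposes a filter the way the table reads it. This is an illustration of the syntax only, not the product's actual parser.

```python
def parse_filters(filter_string):
    """Split a user-group filter into its conditions (separated by ';'),
    each condition being a list of (attribute, value) pairs (separated by ',')."""
    conditions = []
    for condition in filter_string.split(";"):
        pairs = []
        for part in condition.split(","):
            attribute, _, value = part.partition("=")
            pairs.append((attribute.strip(), value.strip()))
        conditions.append(pairs)
    return conditions
```

For example, the mixed condition in the last table row yields three independent conditions, each matched separately against Active Directory.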

4.1.8 Changing the Credentials for Component Services


When the Planning and Consolidation server is installed, the installing user provides service IDs and passwords. The system uses these IDs and passwords to register services on the server. We recommend that you do not change the passwords. However, if a change is required (for example, if one of the Windows passwords is changed), you must change the password for the applicable Planning and Consolidation service. If the passwords do not match, the system may not work correctly. This procedure describes how to reset the credentials for your Planning and Consolidation component services.
NOTE

If you only need to change the password for a service user account, do the following: from the Server Manager, select Server > Reset Logon Credentials, enter the COM+ ID and password, then click Update.


Prerequisites

The password has been changed in Microsoft Windows.


Procedure

1. Select Start > Programs > Administrative Tools > Component Services.
2. Navigate to COM+ Applications.
3. Select Component Services > Computers > My Computer > COM+ Applications.
4. Select a service, for example, OSoftUserManage, right-click, and select Properties.
5. Select the Identity tab, enter the new password in the Password and Confirm Password fields, then click OK.
6. Repeat the last two steps for each of the remaining services.

4.1.9 Setting up Firewalls


If your Planning and Consolidation server is behind a firewall, you must make a few changes to your setup. You must define external IP addresses and open a firewall port.
Features

Defining external IP addresses
You must define external IP addresses for Planning and Consolidation components for each environment that you want users to be able to access across the firewall. You change the following Server Options [page 33]:
Application Server > External server name: the external IP address or fully qualified domain name where the application server resides.
Reporting Services Server > External server name: the external IP address or fully qualified domain name where the Reporting Services server resides.
Opening firewall ports
You must open the HTTP default port (80) and, optionally, port 443 if you want users to access the system through a secure sockets layer (SSL). If you are authenticating through a firewall with Active Directory authentication, you must have a secured channel open. If NetBT (NetBIOS over TCP/IP) is disabled, port 445 must be open for inbound and outbound traffic; if NetBT is enabled, port 139 must be open for inbound and outbound traffic.
In summary, you need to:
Define external IP addresses in the Server Options [page 33] window.
Open the TCP/IP ports from your firewall software.
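After opening the ports, a quick TCP connectivity check from a client machine can confirm that they are reachable. The sketch below is illustrative; the host name and port list are examples, and which of 445 or 139 applies depends on your NetBT setting.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example port list; adjust to your configuration (443 only if SSL is used,
# 445 if NetBT is disabled, 139 if NetBT is enabled).
PORTS_TO_CHECK = [80, 443, 445]
```

For example, `[p for p in PORTS_TO_CHECK if not port_open("appserver", p)]` lists the ports that still need to be opened (the server name "appserver" is a placeholder).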

4.1.10 Kerberos Authentication Settings


If, during the server installation, you select Kerberos as the authentication type (as opposed to the default Microsoft Windows authentication), the system automatically makes some default settings for IIS and Web.config. After the installation, if you want to change the authentication type from Microsoft Windows to Kerberos, you must change the settings for IIS and Web.config manually. In addition, settings for some third-party applications, such as Windows Server Active Directory and Microsoft Internet Explorer, are not supported by the installation and should be set manually, as described below.
Features

The following table lists the settings for third-party applications required for Kerberos authentication:
3rd Party | Description
IIS | Only Integrated Windows Authentication is selected for the osoft and fp_client virtual directories in directory security. Do not select Basic authentication. (There is no fp_client virtual directory on the application server.)
Active Directory settings for delegation | Select the Trusted computer for delegation option for the application server in the computer properties of Active Directory. Select the Account is trusted for delegation option in the user properties of Active Directory.
Local intranet settings of Internet Explorer (IE) | The URL for the application server is added to the local intranet settings in IE. Select the Automatic logon only in Intranet Zone option in the security settings of the local intranet. On the client, the URL for the application server is likewise added to the intranet settings in IE, with the same Automatic logon only in Intranet Zone option selected.
Integrated Windows authentication option in IE | Enable the Integrated Windows authentication option in IE on the client.

4.1.11 Server Manager Security


This topic lists the tasks that a system administrator (the SYSADMIN user specified during the installation) can perform in the Server Manager.


Server Manager Task | Navigation Path
Launch the Server Manager | Not applicable
View the server information | Server > Information
Run server diagnostics | Server > Diagnostic
Reset logon credentials | Server > Reset Login Credentials
Set up and maintain debug users | Server > Maintain Debug Users
Choose the server language | Options > Language
Define system user groups | Options > Define System User Groups

4.2 Asynchronous Interfaces and Data Transfers


Asynchronous Interfaces

Planning and Consolidation provides the following asynchronous data exchange interfaces:
EPM add-in for Microsoft Office data transfers (specifically, the EPMSaveData and reporting functions)
Data Manager package execution
Data Consistency in the Event of an Interruption in Data Transfer

Data consistency is automatically managed by the system in the event of an interruption in the transfer of data between a client and the server. For instance, an interruption can occur due to a workstation crash or restart. Upon resumption of the data transfer, Planning and Consolidation checks for differences between the data that has already been transferred and the data that remains to be transferred, and transfers only the data that was not previously transferred. Given that data consistency is preserved by the system in this way, there is no separate interface for managing interrupted data transfers.
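Conceptually, the resume check behaves like the following sketch. This is an illustration of the idea only; the actual mechanism is internal to Planning and Consolidation.

```python
def remaining_records(all_records, already_transferred):
    """Return only the records that were not previously transferred,
    so a resumed transfer never re-sends or duplicates data."""
    transferred = set(already_transferred)
    return [record for record in all_records if record not in transferred]
```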
NOTE

If a Data Manager package fails to finish executing, you may experience an issue in which you cannot clear the status of the package. This does not affect the integrity of the data. For information about clearing the package status, see Troubleshooting in Data Manager [page 94].

4.3 Stored Configuration Values


For information about the variables that are set during the installation process, see http://service.sap.com/instguidescpm-bpc Planning and Consolidation 10.0 version for the Microsoft platform SBOP Plan & Consol 10.0 M Installation Guide.


4.4 Backing up and Restoring Environments


SAP Planning and Consolidation Server Manager provides a facility for backing up and restoring environments. This can be useful during implementation, system recovery, and infrastructure maintenance. The following are important considerations when using Backup and Restore:
Backup and Restore are comprehensive and complete: there is no incremental update and no partial restore. All data, master data, and metadata is included, and all security profiles are transferred.
The backup copies all files in \webfolders and the SQL database. Your OLAP data is re-created when an environment is restored.
The backup file does NOT include audit or log file information.
Restoring an environment over an old one deletes everything from the old one, except its audit information.
When restoring an environment, Planning and Consolidation creates the storage model of the write-back OLAP partition as MOLAP. You should reset the storage model to be a write-back OLAP partition.

4.4.1 Backing up Environments


You can back up an environment when you want to archive it or move it from one Planning and Consolidation server to another. For example, you would back up an environment before moving it from a development server to a production server, or when installing a new version of Planning and Consolidation on a new server. After you back up an environment, you can move it to another server using the Restore task in the Server Manager. See Restoring Environments [page 45]. You can back up environments located on a single server or in a multiserver configuration. The backup process creates three folders. The following table describes the folders and what each contains.
Folder | Description
FileDB | Contains a .zip file with the contents of the backed-up FileDB folder.
SQL | Contains a .bak file with the backed-up SQL database.
Webfolder | Contains a .zip file with the contents of the backed-up Webfolder folder.
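Because a restore needs all three folders, it can be useful to sanity-check a backup before archiving it. The following sketch (an illustration, not an SAP tool) verifies the folder layout described above:

```python
import os

# Expected backup layout: folder name -> file extension it must contain.
EXPECTED = {"FileDB": ".zip", "SQL": ".bak", "Webfolder": ".zip"}

def verify_backup(backup_root):
    """Return a list of problems with a backup folder; an empty list means
    the expected FileDB/SQL/Webfolder structure is present."""
    problems = []
    for folder, extension in sorted(EXPECTED.items()):
        path = os.path.join(backup_root, folder)
        if not os.path.isdir(path):
            problems.append("missing folder: " + folder)
        elif not any(name.lower().endswith(extension) for name in os.listdir(path)):
            problems.append("no " + extension + " file in " + folder)
    return problems
```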

NOTE

The OLAP databases are not backed up. They are re-created during the restore procedure.

CAUTION

Use this procedure only if you want to back up an environment. Contact your support representative for information about backing up an environment manually.
NOTE

The normal backup process backs up any Planning and Consolidation configuration files that reside on the file share. All application configuration information is stored in the Planning and Consolidation database, and is backed up and restored with the database. Files and directories that are used for logging and configuration and that reside on your local machine are not automatically backed up; you should include them in your regular file backup procedure.
Procedure

To back up an environment:
1. From your application server, select Windows Start > (All) Programs > SAP > Server Manager.
2. Select Environment > Backup Environment.
3. Select the checkbox next to one or more environments that you want to back up.
NOTE
Select the Select All Environments checkbox to select all the environments in the list.
4. In the Destination field, enter the path where you want to save the backed-up files, or click the browse button to search for the target backup folder.
5. If the SQL database is on a separate server, select the Use backup files path on a remote SQL server checkbox, and enter the name of the path to the remote SQL database, for example, \\ServerName\O5Backups. This folder must be shared and writeable.
NOTE
The Use backup files path on a remote SQL server option is also used when a DMZ configuration exists between the SQL server and the application server.
6. Click Next.
7. Click OK, then click Close.
8. To move the backed up environments to one or more servers, see Restoring Environments [page 45].

4.4.2 Restoring Environments


You restore environments when you want to take an environment that has been backed up, and load it on to a different Planning and Consolidation server. The destination server must have access to the directory that contains the backed up files. You can restore an environment on a single server or multiserver configuration.


Procedure

To restore an environment:
1. From your application server, select Windows Start > (All) Programs > SAP > Server Manager.
2. Select Environment > Restore Environment.
3. In the Step 1 dialog box, do one of the following:
If it is a single-server configuration, enter the path to the folder that contains the backed-up environment in the Environment Folder field, or click the browse button to search for the folder. Click Next.
If it is a multiserver configuration, do the following:
1. You can leave the Environment Folder field blank.
2. In the Webfolder field, enter the path and name of the zip file that contains the backed up Webfolder folder, or click the browse button to search for the file.
3. In the FileDB field, enter the path and name of the zip file that contains the backed up FileDB folder, or click the browse button to search for the file.
4. In the SQL database field, enter the path and name of the SQL server .bak file, or click the browse button to search for the file. If the SQL database is on a separate server, select the Use backup files path on a remote SQL server checkbox and enter the name of the path to the remote SQL database, for example, \\ServerName\O5Backups. This folder must be shared and writeable.
NOTE
The Use backup files path on a remote SQL server option is also used when a DMZ configuration exists between the SQL server and the application server.
5. Click Next.
4. In the Step 2 dialog box, do one of the following:
If it is a single-server configuration, you can leave the fields blank. The current server is assumed. Click Next.
If it is a multiserver configuration, enter the server names for each component. If the database server is installed as a non-default instance, enter <DB Server name>\<non-default instance name>. Click Next.
5. In the Step 3 dialog box, wait until the restore process is complete, then click Close.
6. The restore procedure resets the connection strings on the application server to the internal server name, so if you require users to access the environment from an external IP address, you must modify those settings.


7. When restoring an environment, the restore procedure creates the storage model of the write-back OLAP partition as MOLAP. You should reset the storage model to be a write-back OLAP partition. For more information, see Installing SQL Server in the Installation Guide.
8. Save each model in the environment by doing the following:
1. From the Administration Client, select a model node under Model.
2. Select Modify Model from the Manage Models action pane.
3. Select Modify Model without making any changes.
4. Repeat for each model.

4.5 Periodic Tasks 4.5.1 Scheduled Periodic Tasks


The model optimization task can be scheduled periodically, but can also be carried out as required.

4.5.1.1 Optimizing Models


To maintain good performance, models need to be optimized. There are no universal guidelines for the frequency with which you should optimize models; optimization frequency depends on the characteristics of your hardware environment and your model. However, you can set the system to remind you to optimize a model when the system database grows to a certain size. You can use either the Administration Client or the Data Manager to optimize models. In the Data Manager, optimization tasks can be added to administrative packages. The optimization process is included in the AdminTask_Optimize sample package. For more information about modifying, running, and scheduling packages, and about using the Administration Client to manually optimize models, see the application help on SAP Library: http://www.help.sap.com.

4.5.1.1.1 Optimization Options

The optimization options interact with the three types of data storage in different ways. For more information, see Data Storage Types and the Optimization Process [external document]. The following table lists the optimization options available in Planning and Consolidation, and the effect of each on the various storage types:

Optimization Type | Description
Lite Optimization | Clears real-time data storage and moves it to short-term data storage. This option does not take the system offline, so you can schedule it to run during normal business activity. We recommend a Lite Optimization when the write-back table exceeds 200 KB.
Incremental Optimization | Clears both real-time and short-term data storage and moves both to long-term data storage. This option takes the system offline, so it is best run at off-peak periods of activity.
Full Process Optimization | Clears both real-time and short-term data storage and processes the partition. This option takes the system offline and takes longer to run than the Incremental Optimization, so it is best scheduled for downtime periods. You can schedule Full Process Optimization weekly or less frequently, depending upon your requirements.
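The scheduling guidance in the table can be summarized as a rule of thumb, sketched below. The 200 KB threshold comes from the table; the decision order is an illustrative assumption, not product logic.

```python
def suggest_optimization(writeback_kb, off_peak, downtime_window):
    """Suggest an optimization type following the guidance above."""
    if downtime_window:
        return "Full Process Optimization"   # longest-running, needs downtime
    if off_peak:
        return "Incremental Optimization"    # takes the system offline
    if writeback_kb > 200:
        return "Lite Optimization"           # safe during business hours
    return "none"
```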

Features

You can manually initiate optimization from the Administration Client. From the Model node, select a model and then choose Optimize Model from the Manage Models action pane. In the Optimize Models assistant, you have the following options:
Select additional or different models to optimize.
Choose the type of optimization to apply.
Choose whether to compress the database.
NOTE
If you run a Full Process Optimization with the Compress Database option, the model is processed, not each partition. If you have custom partitions and run a Full Process Optimization, only the partitions that had data moved from the FAC2 and WB tables are processed. For example, suppose you have three custom partitions (FACT2008, FACT2009, and FACT2010), and the FAC2 and WB tables contain only 2009 data. In this case, if you run a Full Process Optimization, only FACT2009 is processed.
Choose whether to defragment the index: optimizing the database can cause the database index to become fragmented. Selecting this option forces a reindex of the database. You can also reindex your database directly from SQL.

4.5.2 Manual Periodic Tasks


Partitioning is a manual task that is required periodically but may also be required at other times.


4.5.2.1 Server Partitioning


You can use partitioning to significantly reduce the overall processing time for medium and large models.
NOTE

Significant portions of the following information have been copied in fair use from Microsoft Web sites.

4.5.2.1.1 Partitioning Overview

Partitioning is perhaps the single most significant thing you can do to improve performance in a large model. Dividing a model into multiple partitions reduces total processing time for the following reasons:
Multiple partitions can be processed in parallel. You can decrease the total time required to process a model by processing multiple partitions in parallel (provided you have sufficient processor and memory resources). By default, Analysis Services processes each partition in a model serially. For large models, parallel processing results in dramatic performance benefits in the following cases:
During the initial load of the data warehouse
During full model processing
During model refreshes
NOTE

If the Analysis Services server has insufficient memory to store the aggregations for each partition being processed, Analysis Services uses temporary files, which negates the performance benefit you are trying to achieve through the use of parallel processing. To process partitions in parallel, you must use a tool that calls the Analysis Management Objects (AMO) interface. For an example, see the Parallel Processing sample application in the SQL Server Resource Kit.
Only some partitions need to be processed. You can process only the partitions that have been updated with new data. This decreases the overall time required to update a model with new data. Analysis Services can process one or several small partitions more quickly than it can process a single large partition containing all of the data for the model.
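The parallel-processing idea can be sketched as follows. Here process_partition stands in for whatever actually processes one partition (for example, a call through AMO), so both function names are placeholders, not product APIs.

```python
from concurrent.futures import ThreadPoolExecutor

def process_partitions(partitions, process_partition, max_workers=4):
    """Process several partitions concurrently instead of serially.
    Keep max_workers within your processor and memory budget."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves the input order in its results
        return list(pool.map(process_partition, partitions))
```

Limiting max_workers is the sketch's stand-in for the memory caution above: more workers only help while the server can hold each partition's aggregations in memory.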

EXAMPLE

If you partition your model by time, you merely need to process the partitions containing data from the most recent period, rather than processing the entire model.
Different partitions can have different aggregation designs. You can design additional aggregations on heavily queried partitions and fewer aggregations on less heavily queried partitions. If you keep the number of partitions with a large number of aggregations small, and design fewer aggregations for those partitions that are queried less frequently, you can reduce overall processing time. However, if you plan to merge partitions at a later date, you must ensure that the merged partitions have the same aggregation design at the time of the merger.
Different partitions can have different storage modes. You can use different storage modes for particular purposes.
EXAMPLE

You can create a small partition using ROLAP to enable you to implement real-time OLAP, and then use MOLAP for all other partitions to maximize query responsiveness.
Partitions can be refreshed individually. You can refresh a partition more quickly than an entire model, consuming fewer resources and affecting fewer users. When a partition is incrementally updated, a temporary partition is created and then merged into the existing partition. This can result in data fragmentation, which is similar to disk or index fragmentation. As a result, you should schedule a regular refresh of each partition to enable Analysis Services to re-sort the data for faster multidimensional access, create better multidimensional mapping files, and make smaller aggregations.
The division of a model into multiple partitions is transparent to the user. When a query requests data that spans multiple partitions, Analysis Services uses aggregations or scans data in each partition and then combines this information to resolve the query and return the results to the user.
You can significantly increase query responsiveness and processing performance by horizontally segmenting the data by one or more keys, such as date or department, and dividing the model into multiple partitions. From an operational perspective, you can improve Analysis Services performance by keeping partition sizes reasonable and setting partition data slices. For more information about each of these configuration issues, see the Microsoft Analysis Services Performance Guide on www.microsoft.com/technet. Each partition file has an extension of fact.data. When a partition file exceeds 5 GB or contains more than 20 million records, you should consider dividing the partition into multiple partitions. While these parameters are useful as guidelines, they are not the only considerations. Note also:
Smaller partitions can be processed faster than larger partitions.


When data changes, you do not need to process all of the partitions in the model.
Queries can be executed faster on smaller partitions if the data slice is set properly. In this scenario, Analysis Services does not need to scan as much data to resolve multiple queries.
Performance degradation typically occurs when the cache is repeatedly refreshed from the disk. This becomes especially problematic when there are large numbers of alternate hierarchies, since the time needed to reprocess each partition varies directly with the number of hierarchies.
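The 5 GB / 20 million record guideline above can be expressed as a simple check. The thresholds come from the text; treat them as guidelines, not hard limits.

```python
GB = 1024 ** 3  # bytes per gigabyte

def should_consider_splitting(fact_data_bytes, record_count):
    """Apply the rule of thumb: consider splitting when the partition file
    exceeds 5 GB or the partition holds more than 20 million records."""
    return fact_data_bytes > 5 * GB or record_count > 20_000_000
```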

4.5.2.1.2 Considerations when Determining a Partitioning Scheme

Consider the aspects described in this topic when determining a partitioning scheme.
Features
Data Usage

The primary factor that you should use to determine your partitioning scheme is your data usage. Partitions can be sliced by any number of dimensions.
EXAMPLE

Time
Entity
Category
Some combination of dimensions

Performance

In addition to the data size considerations, partition so that the data slices created allow a single partition to be accessed for logic, reports, and so on. In many cases, this makes the Time dimension a natural choice for partitioning and slices. However, this is not necessarily always the case. You can create partitions across multiple dimensions; TIME and ACCOUNT, for example. Since it may be necessary to access multiple partitions for certain tasks, this approach may improve performance in some cases and degrade it in others. In many cases, partitioning just by TIME (without partitioning by any other dimensions) is the most effective strategy. Be sure to measure the results of your strategy. In one real-world case, a partitioning scheme that used CATEGORY and CURRENCY did not improve performance over the default. There is some evidence to indicate that multidimension partitioning does not seem to improve performance by as great a margin as the implementation of an initial, single-dimension scheme (in other words, it is common to experience diminishing returns).


Administrative Implications

In addition to performance, you need to consider the administrative implications of your partitioning scheme:
Balance ease of administration with scheme complexity: if the Planning and Consolidation administrator needs to spend hours every week repartitioning, the scheme may not be the optimal solution.
Period modifications necessitate partition re-creation: when you create a partition according to time slices, and you later want to modify, shift, or roll up periods, you must recreate the partitions accordingly.
CAUTION

Allow for the complete processing of dimensions before processing any partitions based on those dimensions.
Ensure completion of dependent partition processing: when processing partitions, you must ensure that a dimension is completely processed prior to processing any partitions based on it. You may experience negative consequences if you process a partition while one of its dimensions is also being processed.

4.5.2.1.3 Data Slice Definition

Before you can partition, you must define in detail the data slice for each partition.
Features

If the data slice value for a partition is set properly, Analysis Services can quickly eliminate irrelevant partitions from the query processing and significantly reduce the amount of physical I/O and processor time needed. To enable Analysis Services to take full advantage of partitions, you must define the data slice for each partition in the Partition Wizard of Analysis Manager. The data slice identifies the actual subset of data contained in each partition. The Partition Wizard does not require you to set this data slice when you create a partition. As a result, it is possible to create a partition without setting the data slice.
CAUTION

Always set the data slice when creating the partition. Failure to do so can result in the addition of considerable system overhead (artificially increasing response times). Without the data slice, Analysis Services cannot limit a query to the appropriate partitions and must scan each partition even if zero cells are returned. To draw an analogy with SQL Server, creating a partition without a data slice is like creating a partitioned view without the CHECK clause. While you can do it, you force the query optimizer to scan all of the partitions in the view because you have not

52/102

PUBLIC

2011-05-10

4 4.5

Management of Planning and Consolidation Periodic Tasks

given it enough metadata to determine which partition to access when a query is issued. The Analysis Services runtime engine does not use a relational query optimizer (it has its own component that performs a similar role), but it uses the data slice in roughly the same way: as metadata that tells it which partitions to scan when an aggregate cannot be used or is not available.

For example, if you partition a model by month and have 36 months of data in 36 partitions, then without a data slice the runtime engine must scan all 36 partitions to answer a query. With the data slice specified, it potentially has to scan only 1/36th of the data, with an obvious improvement in performance.

To maximize query performance with partitions, construct partitions with data slices that mirror the slices your users actually query. For example, suppose you are deploying a model that tracks data as a time series (such as a financial model), and most queries retrieve data by period. Partitioning the model by period then provides the greatest performance benefit. With very large models, partitioning along multiple dimensions (such as time and department) can yield substantial gains in query responsiveness. Remember that each partition can have a different aggregation design.
RECOMMENDATION

If you are creating rolling monthly partitions as each month closes, you should ensure that the data slice is set for each new partition after it is created.
Data Slices Result in the Creation of JOIN and WHERE Clauses

Setting a data slice also causes Analysis Services to add a JOIN and a WHERE clause to the SQL statement used for retrieving data from the source database during processing. The WHERE clause limits the data retrieved by the SQL statement to the data that belongs in the data slice.
EXAMPLE

If you say that a partition's data slice is June 2008, Analysis Services adds a join to the time dimension and adds the WHERE clause:
WHERE <month field> = June AND <year field> = 2008

or whatever the appropriate member and level names are. If you do not define a data slice and you have multiple partitions, Analysis Services does not restrict the data that is retrieved from the source database. Without the data slice, if July 2008 data happens to be in the June partition, Analysis Services does not complain; it simply double-counts the July 2008 data. For more information, see Maintaining Partitions in SQL Server Books Online. By specifying the data slice, you let the system add the JOIN and WHERE clauses that help maintain the integrity of the data.
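To make the generated statement concrete, the processing query Analysis Services builds for a June 2008 slice might look roughly like the following. This is an illustrative sketch only; the time table and column names are assumptions, not actual product output:

```sql
-- Illustrative sketch: the processing query for a partition whose
-- data slice is June 2008. The JOIN and WHERE clauses are what the
-- data slice contributes; dbo.mbrTime and its columns are assumed names.
SELECT f.*
FROM dbo.tblFactFINANCE AS f
INNER JOIN dbo.mbrTime AS t
        ON f.TIMEID = t.ID          -- join added for the time dimension
WHERE t.[MONTH] = 'JUN'             -- slice: month level
  AND t.[YEAR]  = 2008              -- slice: year level
```

Without the data slice, this WHERE clause is absent, so any out-of-slice rows that reach the partition are silently included.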


Management of Planning and Consolidation Periodic Tasks NOTE

We do not recommend changing the DataCompressionSettings registry settings.

4.5.2.1.4

Partition Planning

Partition planning ensures that you can reproduce and verify the partitioning scheme.
Prerequisites

You have chosen your basic scheme and determined the manner in which you would like to slice your data.
Features

You define the partition specifications in a table as shown below.


NOTE

Ensure that the Fact prefix is part of every partition name, as Save Model follows the naming convention:
Fact<ModelName>#####
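As a quick sanity check, the naming convention above can be validated with a short script. This is a sketch only; the helper below is hypothetical and not part of Planning and Consolidation:

```python
import re

# Partition names must carry the "Fact" prefix followed by the model
# name, since Save Model generates names of the form Fact<ModelName>#####.
def follows_partition_convention(partition_name: str, model_name: str) -> bool:
    # Case-insensitive match: "Fact" + model name + optional trailing suffix
    pattern = rf"fact{re.escape(model_name)}\w*$"
    return re.match(pattern, partition_name, re.IGNORECASE) is not None

# Names drawn from the planning tables in this section:
print(follows_partition_convention("FACTFINANCE2005JAN", "FINANCE"))  # True
print(follows_partition_convention("FINANCE2005JAN", "FINANCE"))      # False
```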

The following tables contain examples of the planning that you should complete before creating your partitions for various fact tables. Preparing this information in this format greatly decreases the time needed to create the partitions. The following table contains planning details for the dbo.tblFactFINANCE table:
Partition Name     | Slice                         | Filter (The Where Clause)                                                               | Storage Mode
FACTFINANCE2002    | 2002.TOTAL                    | (dbo.tblFactFINANCE.TIMEID >= '20020100') AND (dbo.tblFactFINANCE.TIMEID <= '20021200') | MOLAP
FACTFINANCE2003    | 2003.TOTAL                    | (dbo.tblFactFINANCE.TIMEID >= '20030100') AND (dbo.tblFactFINANCE.TIMEID <= '20031200') | MOLAP
FACTFINANCE2004    | 2004.TOTAL                    | (dbo.tblFactFINANCE.TIMEID >= '20040100') AND (dbo.tblFactFINANCE.TIMEID <= '20041200') | MOLAP
FACTFINANCE2005JAN | 2005.TOTAL, 2005.Q1, 2005.JAN | dbo.tblFactFINANCE.TIMEID = '20050100'                                                  | MOLAP
FACTFINANCE2005FEB | 2005.TOTAL, 2005.Q1, 2005.FEB | dbo.tblFactFINANCE.TIMEID = '20050200'                                                  | MOLAP
FACTFINANCE2005MAR | 2005.TOTAL, 2005.Q1, 2005.MAR | dbo.tblFactFINANCE.TIMEID = '20050300'                                                  | MOLAP
FACTFINANCE2005Q2  | 2005.TOTAL, 2005.Q2           | (dbo.tblFactFINANCE.TIMEID >= '20050400') AND (dbo.tblFactFINANCE.TIMEID <= '20050600') | MOLAP
FACTFINANCE2005Q3  | 2005.TOTAL, 2005.Q3           | (dbo.tblFactFINANCE.TIMEID >= '20050700') AND (dbo.tblFactFINANCE.TIMEID <= '20050900') | MOLAP
FACTFINANCE2005Q4  | 2005.TOTAL, 2005.Q4           | (dbo.tblFactFINANCE.TIMEID >= '20051000') AND (dbo.tblFactFINANCE.TIMEID <= '20051200') | MOLAP

The following table contains planning details for the dbo.tblFAC2FINANCE table:
Partition Name | Slice | Filter (The Where Clause) | Storage Mode
FAC2FINANCE    | All   | [no filter]               | MOLAP

The following table contains planning details for the dbo.tblFACTWBFINANCE table:
Partition Name | Slice | Filter (The Where Clause) | Storage Mode
FACTWBFINANCE  | All   | [no filter]               | ROLAP

4.5.2.1.5

OLAP Custom Partitioning

In Planning and Consolidation, you can create an OLAP custom partition function and scheme. This setting is kept even when environments or models are manipulated (added, modified, copied, or restored). The FINANCE model has a number of partitions as part of the default installation. Of those, only the FINANCE partition needs to be addressed: delete it and replace it with partitions that divide the data into smaller segments.
Procedure

1. Start SQL Server Business Intelligence Development Studio, which is part of the Microsoft SQL Server suite.
2. Choose File > Open > Analysis Services Database.
3. In the Connect to Database dialog box, choose Connect to existing database.


   Enter the server name, then choose the database.
4. Choose OK. The Microsoft Visual Studio screen appears.
5. In the context menu of the FINANCE model, which is located in the CUBES section of the Solution Explorer pane, choose Open to open a tab displaying the default FINANCE model partitions.
6. Delete the FINANCE partition.
7. Choose New Partition. The Partition Wizard dialog box appears.
8. In the Specify Source Information dialog box:
   - Choose Finance as the Measure group.
   - Choose Data Source View.AppDef in the Look in field.
   - Select dbo.tblFactFinance as the table to use.
9. Choose Next. In the Restrict Rows dialog box:
   - Choose Specify a query to restrict rows.
   - Enter a query. For more information, see Partition Planning [page 54].
   - Choose Check to verify the validity of the query.
   - Choose Next.
   In the Completing the Wizard dialog box, choose Design aggregations later, then choose Finish.
10. Repeat steps 7 through 9 for each partition you need to create.
NOTE

When you have created partitions, remember to schedule a regular reset of the partitions. For example, if you have created a partition for 2009, you must reset the partition to use it for 2010. If you create an OLAP custom partition function and scheme, you must adhere to the following naming rules:
- Partition function name: <dim>RangePF
- Partition scheme name: <dim>RangePS
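For administrators who script their Analysis Services maintenance, a partition with a query binding can also be defined in XMLA rather than through the wizard. The fragment below is a hedged sketch only; the database, cube, measure group, and data source IDs are assumptions and must match your own deployment:

```xml
<!-- Illustrative XMLA sketch: create one monthly partition with a
     query binding. All IDs and names here are assumed examples. -->
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ParentObject>
    <DatabaseID>MyEnvironment</DatabaseID>
    <CubeID>Finance</CubeID>
    <MeasureGroupID>Finance</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <Partition>
      <ID>FACTFINANCE2005JAN</ID>
      <Name>FACTFINANCE2005JAN</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>AppDef</DataSourceID>
        <QueryDefinition>
          SELECT * FROM dbo.tblFactFinance
          WHERE dbo.tblFactFinance.TIMEID = '20050100'
        </QueryDefinition>
      </Source>
      <StorageMode>Molap</StorageMode>
    </Partition>
  </ObjectDefinition>
</Create>
```

Scripting the definitions makes the scheduled partition resets mentioned above repeatable and auditable.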

4.6 Optimizing the Flow of Data in the Database


Planning and Consolidation has a number of features that you can use to control the flow of data to the database and the way that the data in the database is accessed. These include the following features:
- Send Governor table


  Planning and Consolidation uses this table as a staging area before writing the data to the database.
- Global cache
  The global cache is an area of memory or disk on the server that holds query results. That is, if you run a query, the results are stored in the global OLAP cache for potential reuse.

4.6.1 Send Governor Table


Planning and Consolidation stages data in the Send Governor (SG) table before writing it to the write-back table. This gives you more control over the rate at which data is written to the write-back table, so you can create different strategies depending on your resource constraints or system usage. You can control the following SG and write-back table parameters:
- The frequency at which the system sends data to the write-back table
- The number of records that are added to the write-back table at one time
- The number of additional threads that can be created to send data from the SG table to the write-back table
- The number of records that can be processed by one thread
- The SG table count that is used when sending data, which is scalable to allow for parallel processing

The following environment parameters control Send Governor functionality:
- INTERVAL_CHECK_SEND
- UNITPER_SP
- THREAD_MAXNUM_SG
- MAXCELLS_THREAD_SG
- SEND_SGTABLE_COUNT

See the Environment Parameters topic in the Planning and Consolidation application help at http://help.sap.com/epm.
Considerations when Modifying Send Governor Parameters

Send Governor environment parameters can be modified to tune performance. When tuning, consider the following:
- For situations with a larger user base (a large number of concurrent users), you may wish to use fewer threads with a bigger interval.
- Increasing the value of MAXCELLS_THREAD_SG improves performance for a single send, while degrading the system's ability to process concurrent users.
- Increasing the value of UNITPER_SP likewise improves performance for a single send at the cost of concurrent-user throughput.


When you change the SEND_SGTABLE_COUNT parameter, you should also choose Modify Model from the action pane to save your changes.

4.6.2 Global Cache


Two types of cache are used by the OLAP processor: local and global.
- Local OLAP processor cache: The results calculated by the OLAP processor are stored in the roll area for each session.
- Global OLAP processor cache: This is a cross-transaction application buffer. The query navigation states and query results that the OLAP processor calculates are stored on the application server. For a similar query request, the OLAP processor can access the data stored in the cache. Query execution is accelerated when the OLAP processor can read data from the cache, because the cache can be accessed more quickly than the database.
Features

Global cache parameters are defined during system implementation. However, it may emerge from the evaluation of data that the global cache parameters need to be adjusted to fit system demands. In this case, you are able to change these settings later. The following is a brief overview of the global cache parameters. For a more detailed description, refer to the SAP application help at http://help.sap.com and search for Global Cache.
Cache Inactive

Setting this parameter means that the global cache is centrally deactivated.
Local Cache Size in MB

This parameter determines the memory size of the local OLAP processor cache.
Global Cache Size in MB

This parameter determines the maximum memory use for all objects in the global cache. Memory use means the amount of memory used by the objects in the shared memory. Shared memory refers to the swapping of data if the memory reaches its maximum size. The actual memory use in the shared memory is generally higher because administrative data is also written there.
Persistence Mode

The persistence mode allows you to determine whether, and in what form, cache data is stored:
- Main memory cache with or without swapping


  This persistence mode parameter determines what happens to the data in the cache when the memory reaches its maximum size: a proportion of the data must either be removed or swapped. A process using the Least Recently Used (LRU) algorithm determines which data is affected.
- Persistent cache for each application server or cross-application server
  This persistence mode parameter determines whether the data is stored in a file (flat file) or in a database table.

The following is an overview of how the persistence modes can be used:

Persistence mode: Inactive
- Main memory cache without swapping: When the cache memory is exhausted, data is removed.

Persistence mode: Flat File
- Main memory cache with swapping: When the cache memory is exhausted, data is stored in a file.
- Persistent cache for each application server or cross-application server: The cached data is stored as a file in a directory on the application server or, for a cross-application server cache, in the network.

Persistence mode: Database Table (cluster table or transparent table with BLOB)
- Main memory cache with swapping: When the cache memory is exhausted, data is stored in the database in a non-transparent cluster table or in a transparent table with a Binary Large Object (BLOB).
- Persistent cache for each application server or cross-application server: The cached data is stored in the database as a non-transparent cluster table or as a transparent table with BLOB. Cluster tables differ in whether they have the application server in the key; this depends on the cache mode.

4.7 Best Practices for Performance Management

4.7.1 Tuning Planning and Consolidation
This section provides guidance and techniques for developing high-performance models by explaining what can be done to tune each layer, from the hardware up through the system software to the model and the reports.


Features

The practices in this section give you a solid understanding of your model and a foundation for a high-performance solution. The section is organized as a checklist: by working through it from beginning to end, you can determine which best practices have been applied and which have not.
NOTE

Not all techniques are necessary, or even appropriate, for all situations. Also, the concepts in this section have not been tested in all scenarios and your results may vary.
The Tuning Layers

The layers addressed are:
- Hardware
- Operating system
- SQL Server
- Analysis Services
- Environment database
- Data load processes
- Reports and input forms
- Model maintenance
Tuning Minimum Consideration

If nothing else, you should focus on the following:
- Use large servers
- Eliminate HTTP connections
- Use report templates
More Information

- Hardware Configuration [page 61]
- Operating System Configuration [page 62]
- System Software Configuration [page 62]
- tblDefaults Parameters [page 64]
- Model Customizations [page 65]
- Model Design [page 66]
- Model Maintenance [page 70]
- Identifying Performance Bottlenecks [page 71]
- FACT, FAC2, and WB Tables [page 73]


4.7.2 Hardware Configuration


Use the following guidelines to configure your Planning and Consolidation hardware.
Prerequisites

Hardware configuration can be complex, and its relationship to performance might not be readily apparent. We recommend that you engage technical consulting to ensure all details are in order.
Features

When reviewing your hardware configuration, consider the following points.


How Many Servers?

While your performance profile may vary, for large models we recommend at least an application server and a database server, and you might consider load balancing. If possible, an additional server is recommended; splitting the OLAP and SQL components onto different servers provides the most scalable solution. The most appropriate configuration can only be determined once the application design is known; working with SAP resources gives you the experience needed to make this decision.
Server Performance

Faster hardware typically provides the biggest performance boost. In particular:
- Many fast CPUs, either dual core or hyperthreaded
- Enough RAM to hold the entire database and all processes
- High-speed, properly configured I/O devices
Consider no less than:
- Four dual-core 3 GHz, or faster, CPUs
- 8 GB RAM minimum (16 or 32 is better)
- Three or four separate RAID drives and controllers for the operating system and files, and for data, temp, and log files
Clients

Since Planning and Consolidation is a client application, client workstations should have as much RAM as possible. Informal testing has shown that upgrading a client machine's RAM reduces complex expansion and report times by more than 50%.
RECOMMENDATION

If the clients are using Citrix on shared hardware, contact technical consulting.

2011-05-10

PUBLIC

61/102

4 4.7

Management of Planning and Consolidation Best Practices for Performance Management

4.7.3 Operating System Configuration


Use the following guidelines to configure your operating system.
Prerequisites
Disk Configuration

Put data, logs, tempdb, and the operating system on separate physical drives.
Virus Scanning Software

Make sure virus scanning is limited to the files that are absolutely needed (avoid scanning the transaction log or the read/write actions performed by SQL and Analysis Services).
Setting Connection Timeouts

Connection timeouts help reduce the loss of processing resources consumed by idle connections. When you enable connection timeouts, IIS enforces the time-outs at the connection level. See the procedures below.
Procedure
To Set an HTTP Keep-alive

1. In IIS Manager, expand the local computer and click the Web Sites or FTP Sites folder.
2. On the right pane, double-click HTTP Response Headers.
3. In the list that appears, right-click to show the context menu and choose Set Common Headers.
4. In the Set Common Headers window, choose Enable HTTP Keep-alive, then click OK.

To Set a Connection Time-out Value

1. In IIS Manager, expand the local computer and click the Web Sites or FTP Sites folder.
2. Right-click the site where you want to set a connection time-out value and, on the context menu, choose Manage Web Site > Advanced Settings.
3. In the pop-up window, scroll down to Behavior and expand Connection Limits.
4. In Connection Time-out (seconds), type the maximum number of seconds that IIS should maintain an idle connection before resetting it.
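In IIS 7.x, the connection time-out set through this dialog corresponds to the <limits> element in applicationHost.config. The fragment below is illustrative only; the site name and time-out value are examples, not defaults:

```xml
<!-- applicationHost.config (illustrative fragment) -->
<sites>
  <site name="Default Web Site" id="1">
    <!-- Reset idle connections after 120 seconds -->
    <limits connectionTimeout="00:02:00" />
  </site>
</sites>
```

Editing the configuration file directly is equivalent to the IIS Manager steps above and can be easier to replicate across servers.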

4.7.4 System Software Configuration


Use the following guidelines to configure your system software. System software refers to SQL Server, Analysis Services, and the other layers above the operating system, but does not include Planning and Consolidation itself.


Features
SQL Server Configuration

- Make sure you are running the SAP-supported SQL Server version for both SQL and OLAP (see the SAP Planning and Consolidation 10.0 Master Guide).
- Make sure AWE is configured for the large amount of memory installed in the hardware.
- If disk I/O is an issue, look into using file separation techniques.
Analysis Services Configuration for SQL 2008

We have found that the majority of the SQL settings should follow Microsoft's documented guidance. We have also found that the Query/MaxThread value should be 10 (the default value) plus the total number of Planning and Consolidation databases (environments) plus twice the number of CPUs. Without this setting, processing time is significantly slower. For more details, contact the SAP support team.
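The Query/MaxThread sizing rule described above works out as a simple calculation. The sketch below just restates the rule from this section; validate any change with SAP support before applying it:

```python
def recommended_query_max_threads(num_environments: int, num_cpus: int) -> int:
    """Query/MaxThread = 10 (default) + environments + 2 x CPUs,
    per the guidance in this section."""
    return 10 + num_environments + 2 * num_cpus

# For example, 3 environments on a 4-CPU server:
print(recommended_query_max_threads(3, 4))  # 21
```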
Operating System Services

Turning off services that are not required saves memory and CPU. The following is a list of services that you do not need on the Planning and Consolidation server:
- Alert
- Application Management
- Transfer service
- ClipBook
- Computer Browser
- Distributed Link Tracking client
- Distributed Transaction Coordinator
- Fax Service
- Indexing Service
- Internet Connection Sharing
- Logical Disk Manager Administrative Service
- Messenger
- Microsoft NetMeeting Remote Desktop Sharing
- Network DDE
- Performance logs and alerts
- Protected Storage
- QoS RSVP
- Remote Access Auto Connection Manager
- Remote Access Connection Manager
- Remote Procedure Call (RPC) locator


- Routing and Remote Access
- RunAs service
- Security Account Manager
- Server
- SmartCard
- SmartCard Helper
- System Event Notification
- TCP/IP NetBIOS Helper Service
- Telephony
- Telnet
- Uninterruptible Power Supply
- Utility manager
- Windows Installer
- Windows Time

You should work closely with technical consulting and your IT team to ensure that these services are not actually in use by other processes on the server, and that disabling them does not adversely affect functionality or performance.

4.7.5 tblDefaults Parameters


Use the following guidelines to set tblDefault parameters.
Features
Planning and Consolidation Send Governor Configuration Modifications

This section provides some background on what the Send Governor (SG) accomplishes. The SG is designed to manage Microsoft Analysis Services locks, which ensures consistent performance for the user and avoids deadlocks. Set the following Send Governor values in tblDefaults:
Send Governor Setting (KeyID) | Default   | Batch mode | Real time mode
THREAD_MAXNUM_SG              | 3         | 0          | Increase
INTERVAL_CHECK_SEND           | 3000 msec | Increase   | Decrease
MAXCELLS_THREAD_SG            | 1000000   | Increase   | Decrease
UNITPER_SP                    | 1000000   | Increase   | Decrease
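A change to one of these values is an update to the tblDefaults table. The statement below is illustrative only; the column names (KeyID, Value) are assumptions drawn from the heading above, so verify them against your installation before running anything:

```sql
-- Illustrative only: lower the Send Governor thread count
-- to reduce locking in a heavily concurrent environment.
UPDATE dbo.tblDefaults
SET    [Value] = '1'
WHERE  KeyID = 'THREAD_MAXNUM_SG';
```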


For an environment that has many concurrent send activities, consider adding an additional 1 to 2 seconds (on average) during sends to help improve data read and Lite Optimization (a specific function of the Administration Client) performance.
NOTE

The time for a send from the SAP BusinessObjects EPM solutions, add-in for Microsoft Office includes the time to send plus, on average, half the INTERVAL_CHECK_SEND time. Decreasing THREAD_MAXNUM_SG to 1 results in a significant drop in locks.
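The note above implies a simple latency estimate: the observed time for a send is the raw send time plus, on average, half the polling interval. A sketch of that arithmetic:

```python
def expected_send_time_msec(raw_send_msec: float,
                            interval_check_send_msec: float) -> float:
    """Observed send latency = raw send time + half the
    INTERVAL_CHECK_SEND polling interval, on average."""
    return raw_send_msec + interval_check_send_msec / 2.0

# With the default 3000 msec interval, a 2-second send is observed
# as roughly 3.5 seconds:
print(expected_send_time_msec(2000, 3000))  # 3500.0
```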

4.7.6 Model Customizations


Use the following guidelines to customize your Planning and Consolidation models.
Features

A core area of planning for large models consists of knowing what data will flow into which of the FACT, FAC2 and WB tables when, and how. By proactively managing this flow, you are taking the most significant step to ensuring a smooth performance profile for your models. FACT, FAC2, and WB Tables [page 73] provides more background on the FACT tables and how they are used.
Consolidations Package Changes Intercompany Package

Configure the package to write to FAC2 not to WB.


Currency Translation Package

Currency translation generates numerous records in the database. By default these can be created in the WB table, but in the worst case this can put millions of records in the WB table, creating a serious performance issue; at one customer in Europe, the currency translation process generated 2.5 million WB records. The package should be configured to insert into the FAC2 table. The default FXTRANS logic can also be modified to perform this function.
NOTE

Both packages can support the insertion of records into FACT, after which only the FACT partition needs to be processed. This should be seriously considered for a very large production environment where the FAC2 table is heavily used by other processes.
Data Manager Package Changes

As mentioned above, smart use of the WB, FAC2, and FACT tables is essential.


By default, the Planning and Consolidation import package writes into FAC2 and triggers processing of the associated partition. When many users import their data concurrently, all the processing can easily lock the system. This situation is exacerbated if other processes, such as FX conversion or intercompany eliminations, are running at the same time. The condition is particularly acute when users are allowed to run packages whenever they choose, which often results in a large number of submissions in a short period of time (at budget or closing deadlines, naturally). The following techniques can be used to mitigate the performance impacts:
- Import into the WB table if you are importing so frequently that locking becomes an issue, especially if the imported files are relatively small. A scheduled light optimization takes care of periodically emptying the WB table.
- Throttle the throughput. In one customer case, submitted files are not imported immediately but held in a queue; a separate process merges the files and imports the merged file. This is both more efficient and reduces locks.
- Use bulk data loads into the FACT table if there is a maintenance window that allows for such loading.

4.7.7 Model Design


Use the following guidelines to design your models for maximum performance.
Dimensions

The choice of concurrency dimensions is key. Planning and Consolidation locks by 3 to 5 dimensions; for example, a customer might use Entity, Category, and Time as the locking dimensions, so the combination of Entity, Category, and Time is locked by Planning and Consolidation when data is sent. Planning and Consolidation locks are one of several kinds of locks in the system; SQL Server and Analysis Services also have locks. This section addresses Planning and Consolidation locks.
EXAMPLE

Budgeting and forecasting by function (Sales, Marketing, IT, Legal): a Function dimension is secured and used for this purpose. When a user chooses Send, the send applies to all entities, so other users are locked out until that send ends. The send queue fills up quickly, and response time might increase from a minute to 10 or 20 minutes. The solution is to make the Function dimension a concurrency dimension instead of the legal entity dimension, so that the legal entity dimension is not considered in the lock (although it remains a secured dimension).


Consideration of the need for alternative hierarchies is necessary to ensure optimal dimension processing and performance for reporting. More dimensions create more complex joins and retrievals. If you can break up a large model into business process specific models with simpler dimensionality, it is possible to gain performance (and usability) improvements.
Reports and Input Forms General Guidelines

When building or reviewing sheets:
- Consider how much data is to be entered: many small sheets and one large workbook each have advantages and disadvantages, and must be weighed carefully.
- Avoid refreshing after sends if possible.
- Use Excel formulas to calculate totals instead of retrieving them. This makes the process more real-time.
- Use park-n-go if applicable. This discourages multiple sends and refreshes.
- Try to avoid using multiple EPMRetrieveData functions in a single cell (that is, EPMRetrieveData + EPMRetrieveData).
- Try to avoid having a dimension parameter of one EPMRetrieveData formula depend on the result of another EPMRetrieveData formula.
- See the reporting best practices on the Corporate Performance Management Community on SDN (https://www.sdn.sap.com/irj/sdn/bpxcpm). This requires logon.
Asymmetric Queries

Be aware of the following:
- Sheet calculation order
- How row and column ID headings are created
- Nested expansions
SAP BusinessObjects EPM solutions, add-in for Microsoft Office queries are designed to optimize the refresh from the column/row grid. This means the add-in expects some commonality in the columns (for example, Time dimension members across all columns, with most, if not all, columns pointing to the same context). When reports vary from this format, performance degrades. An example of a report that does not follow a clean grid format is one where each column has a different set of dimensions mapped to it; in other words, it is not just one dimension across the columns but many dimensions referenced inconsistently.
EXAMPLE

Assuming a basic four-dimension application (to really see the performance degradation you need an 8 to 12 dimension model):


- If you have Time in the columns and Accounts in the rows, all other dimensions are in the page key (a single reference per query). This is ideal.
- If you have Category and Time in the columns and Accounts in the rows, this is still good, as Entity is in the page key.
- If you have a different Entity, Category, and Period in each column and Accounts in the rows, performance is poor, and nested expansion does not help. The problem is not expanding up to three dimensions in a column or row; it is the varied dimension members for each cell, which leave no way to optimize the query for the data retrieval.
Logic Type (MDX, SQL, or SQL Server Stored Procedures)

Most models use one of two versions of logic to solve calculations in Planning and Consolidation: MDX or SQL logic (a proprietary syntax of Planning and Consolidation). The main difference is that MDX logic is not scalable: as the number of concurrent users increases, performance degrades badly. For this basic reason, we recommend SQL logic rather than MDX.

The approach to using MDX is:
- Dimension logic is useful in the account dimension for ratio (KPI) accounts only. All other formulas are better defined in SQL.
- Use dimension logic that spans time (such as dynamic open balances) with great care. These calculations are often better performed in SQL logic.
- Because of the large impact on performance, do not define a large number of hierarchies in the dimension.
- Keep the source/destination regions defined in the logic as small as possible.
- Reduce the number of COMMITs as much as possible.
- Use caution with MDX-based logic calculations that depend on large dimensions. At base-level members, these MDX functions perform well and are often desirable. At parent levels in large hierarchies, however, these calculations often result in high Analysis Services CPU utilization.

The approach to using SQL is:
- Select the right trigger so that the logic works only with the existing records, not with all possible cases. For example, in a budget application you have to calculate all revenues by region and product: Revenues = Units * Price. The right trigger is Units, because Price is typically defined for all products, whereas Units are defined by region/product (not all products are available in all regions). With Units as the trigger, the system applies the formula only to existing cases. Conversely, if you define Price as the trigger, the system executes the calculation even if the product


is not filled. The result is the same, but in the second case (Price as the trigger) the system has to execute more calculations.
- Load into memory only the information that you need. For example, if you have to calculate Revenues, define the region as only Price and Units; you do not need to load the Revenues account into memory. The system is faster because it scans fewer records.
- Use properties where possible to write more efficient and compact logic. For example, if you have to apply the same calculation to a specific set of accounts, instead of specifying each account you can define a property and test it in the SQL logic (using a SELECT statement) to define the account set. This approach is much faster than specifying all accounts and then using a WHEN criterion to perform the filtering.
- Reduce the number of COMMITs as much as possible; the GO command can be a useful alternative, but exercise care here as well.
NOTE

Do not use the GO instruction in place of a COMMIT instruction. All instructions that are COMMIT-specific (for example, XDIM_MEMBERSET) remain COMMIT-specific and not GO-specific. In other words, you cannot redefine the data region to process for each GO instruction, but only for each COMMIT instruction. The GO instruction only sets a stop-and-go point between WHEN and ENDWHEN structures of the same COMMIT section, that is, of the same data region. Any GO instruction defines the end of a logic section, much as a COMMIT instruction does, in that the logic is executed normally up to that point.
Do not leave too many calculations in default logic. For example, you can defer currency conversion or intercompany eliminations to a later phase. Finally, an alternative to standard Planning and Consolidation SQL logic is pure SQL Server stored procedures. Planning and Consolidation SQL logic allows stored procedures to be called directly from the logic, which makes database-only processing a real possibility. This alternative might be useful when there is a large amount of data to be allocated, because standard SQL logic requires heavy use of temporary memory variables for its processing. Deciding between SQL logic and SQL stored procedures is a careful balancing act. SQL stored procedures do not allow the simple administration that Planning and Consolidation SQL logic does, which can require Planning and Consolidation administrators to rely on IT-oriented database resources. From a scalability and performance standpoint, however, the SQL stored procedure method can be preferable.
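The SQL logic guidelines above can be sketched in script logic. This is a minimal sketch under assumptions, not production logic: the member IDs (BUDGET, 2011.JAN, UNITS, PRICE, REVENUES), the ISRATIO property, and the stored procedure name usp_AllocateRevenues are hypothetical and would need to match your own dimensions and database objects:

```
// Scope each COMMIT section to the smallest possible data region
*XDIM_MEMBERSET CATEGORY = BUDGET
*XDIM_MEMBERSET TIME = 2011.JAN

// Trigger on UNITS (the sparser account) rather than PRICE, so the
// formula runs only for region/product combinations that actually exist
*WHEN ACCOUNT
*IS UNITS
    *REC(FACTOR = GET(ACCOUNT="PRICE"), ACCOUNT="REVENUES")
*ENDWHEN
*COMMIT

// Select an account set through a property instead of listing members
*SELECT(%RATIOACCTS%, "[ID]", "ACCOUNT", "[ISRATIO] = 'Y'")

// For heavy allocations, hand the work to a SQL Server stored procedure
*RUN_STORED_PROCEDURE = usp_AllocateRevenues('BUDGET', '2011.JAN')
*COMMIT
```

Keeping each COMMIT section's *XDIM_MEMBERSET region small, and choosing sparse trigger accounts, is what limits the number of records the logic engine has to scan.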

4.7.8 Model Maintenance


Use the following guidelines to maintain models for efficiency.
Features
Compression

Data compression (through eAdmin, rather than other mechanisms) can have a significant impact on model and partition processing times. It is nearly linear: if compression can reduce the number of records in the Fact tables by 50%, you can expect a nearly 50% reduction in the time to process the model. Of course, this varies with alternate hierarchies, and so on. Here are three customer case studies:
Test   Percent reduction in records    Percent reduction in process time
1      75% (from 40 to 10 million)     70% (from 2.5 hrs to 45 min)
2      50% (from 32 to 16 million)     67% (from 2 hrs to 40 minutes)
3      75% (from 4 to 1 million)       (from 30 min to min)

Lite Optimization

As data grows in the Writeback table, it has a noticeable impact on report performance. By regularly performing Lite Optimizations, you can keep report performance reasonable.
Storing Historical Data

A very large amount of data can be a problem for processing times, especially if the maintenance window is small. You often store more information than you need on a regular basis. This could be in the form of versions of a Budget or Forecast, many years' worth of data, both LC and USD for USD entities (or whatever the primary reporting currency is), or before-and-after allocation results. Reconsider this need so that the environment in use holds only what you need on a regular basis.
1. Clear out data that is no longer in use.
   1. After finalizing the Budget, you can typically clear the data from the version categories.
   2. Additionally, after the Budget for the current year is complete, the prior years' Budgets can be removed.
   3. Store only the Forecast versions that you want to analyze.
2. If you want to keep these versions for reference or audit, move them to a separate environment, which will almost never be used, and clear them from the active one. Remember that multiple environments on the same server take resources, so evaluate this accordingly.
3. Archive historical data.

1. Many companies only want to analyze a certain number of years' worth of data. If you have data outside that threshold, extract the data or back up the database so that you can always return to it, and then delete it from the database.
2. You can also use Book Publication to display historical reports on an as-needed basis without storing the data in the fact table.
3. Create a historical environment to hold historical information that few people need to see. This is often the best choice, as you can also maintain the historical entity and other structures for historical reference. For example, create a historical environment at the end of each fiscal year, so that you have each year's snapshot available when needed.

Keep the Wizards Directory Small

This reduces the initial load time. Although not a problem in many cases, a large Wizards directory can negatively affect user experience.

4.7.9 Identifying Performance Bottlenecks


Use this information to identify bottlenecks that decrease performance.
Features
Monitoring Tools

There are two basic performance monitoring tools provided with the Windows server operating system: the Task Manager and the Performance Monitor. Set the Task Manager to track on the Processes tab, and choose Show processes from all users. Choose the CPU header, so that the processes using the most CPU time appear at the top of the list. This is critical to the success of this test. The Performance Monitor provides much more detail than the Task Manager, but you must configure it to capture the relevant metrics. Performance Monitor can also capture the performance profile for later analysis. Here, we do not describe how to use Performance Monitor, but identify what to monitor. Other monitoring tools include:
- Client and server logs
- Black box
- SAP product support
- PSS Diag
- Microsoft Professional Services
- Wily Introscope
Memory

Although all of the counters in the Memory performance object are useful, two are notable when measuring Analysis Services overall performance:

- Memory \ Pages/sec
  This performance counter indicates the number of I/O operations needed to support virtual memory. Ideally, this number should be as low as possible; a high number indicates too little available physical memory. Increasing available physical memory reduces the number of page faults, and therefore reduces the amount of virtual memory used to support active processes such as Analysis Services.
- Memory \ Available Bytes
  This performance counter indicates the amount, in bytes, of available physical memory. Combined with the Pages/sec counter, you can use this counter to further quantify the amount of available physical memory.
Disk I/O

Disk storage performance is central to Analysis Services performance. The following counters can be compared directly with the various performance counters maintained by the Analysis Services performance objects to determine whether disk storage is affecting querying or processing performance. A common cause of poor disk storage performance is other applications running on the same system: disk storage supports all applications on a given system, and active applications draw resources away from Analysis Services. These performance counters can provide a better picture of absolute querying and processing performance by comparing all of the disk storage activity on the system with the disk storage activity tracked by Analysis Services.
- Analysis Services: Agg Cache \ Evictions/sec
  Evictions measure how frequently Analysis Services flushes its cache and refreshes from the database or disk. Frequent evictions can make performance poor.
- Physical Disk \ Avg. Disk Bytes/Read
  This represents the average number of bytes transferred from disk storage during a single read operation.
- Physical Disk \ Current Disk Queue Length and Physical Disk \ Avg. Disk Queue Length
  These counters represent the number of queued disk operations. If this number spikes during poor processing performance, especially during the base or aggregating phases, the current disk storage solution might not be adequate to support the needs of Analysis Services. Ideally, the value of these performance counters should be as low as possible at any given time.
Processor

Scaling up to multiprocessor servers allows for much greater analysis server performance. Scaling up, however, does not necessarily provide a linear increase in performance. Other factors, such as physical memory or disk storage, can affect the increase provided by scaling up an Analysis server. Processor \ % Processor Time

If this is consistently above 80-90%, find out which process is consuming the CPU time. The Task Manager shows which process is using the CPU. Contact SAP Services and Technical Consulting if the total CPU regularly runs over 90%, or steadily uses 100% of one or more of the CPUs in the system.
Analysis Services
- Analysis Server: Connections \ Current Connections in Progress
  This counter indicates all the connections waiting on OLAP at any given point.
- Analysis Server: Locks \ Current Locks
  This counter is particularly useful. It measures the current number of locked objects, and can indicate when locks are preventing users from accessing data in a timely fashion.
- Analysis Server: Locks \ Current Lock Waits
  This counter measures the number of clients waiting for a lock.
SQL Server: Buffer Manager \ Buffer Cache Hit Ratio

The buffer cache hit ratio measures the percentage of pages that were found in memory, so that the data could be accessed without a disk read. The buffer cache hit ratio should be close to 99 percent. If the ratio is lower, memory constraints may be affecting performance.

4.7.10 FACT, FAC2, and WB Tables


Use this information to maintain system performance in relation to system storage. There are three tiers of storage for each model:
- WRITEBACK (WB) table
  This is real-time data input (a ROLAP partition). This data is the most current data sent to the system. Data sent by the EPM add-in for Microsoft Office and by Investigator browser data sends is in real-time storage.
- FAC2 table
  This is for short-term data and Data Manager imports (a MOLAP partition). This data is not real-time data, but is also not in long-term storage yet. When you load data through the Data Manager (automatic data load from external data sources), it loads the data to short-term storage so that the loaded data does not affect system performance. Only the partition associated with this table is processed, so the system is not taken offline.
- FACT table
  This is for long-term history and storage (a MOLAP partition). This is the main data storage. All data eventually resides in long-term storage. Data that is not accessed very often remains in long-term storage so that the system maintains performance.

Typically, these tables are identified by prefixing the model name with tblFACT, tblFAC2, and tblFACTWB. For example, for the Planning model, the three tables are named tblFACTPlanning, tblFAC2Planning, and tblFACTWBPlanning. Periodically clearing real-time data greatly optimizes the performance of the system, so an optimization process is required (this can be scheduled automatically based on parameters such as a number-of-records threshold). Similarly, over time, the FACT table can grow to a size that affects performance. If this happens, you can improve performance by further partitioning. However, to divide the FACT table into a series of partitions, you need to create an XMLA script that contains many <Create ...> elements. This is difficult to achieve, especially when you need to find an optimal definition for the partitions. The optimization process and a partitioning tool are described in the following sections.

4.7.10.1 FACT, FAC2, and WB Table Optimization


Real-time data is stored in the write-back (WB) table, and short-term data is written to the FAC2 table. The various optimization options provide ways to clean these tables by moving real-time data to the short-term store and short-term data to the long-term store.
Features

The following types of optimization are available:
- Lite optimization
- Incremental optimization
- Full-process optimization
- Compress database
Lite Optimization

Lite optimization clears real-time data storage (WB) and moves it to short-term data storage (FAC2). This option does not take the system offline, and can be scheduled during normal business activity.
Incremental Optimization

Incremental optimization clears both real-time and short-term data storage (WB and FAC2) and moves both to long-term data storage (FACT). This option takes the system offline, so run it during off-peak periods of activity.
Full-process Optimization

Full-process optimization clears both real-time and short-term data storage and processes the dimensions.

This option takes the system offline and takes longer to run than the incremental optimization. It is best scheduled to run during downtime periods, for example, after a month-end close.
Compress Database

The compress database option is available to rationalize the FACT tables. This sums multiple entries for the same context into one entry so that data storage space is minimized. Compressed databases also process more quickly.

4.7.10.2 FACT Partitioning


The initial design is that each model has three tables (WRITEBACK, FAC2 and FACT) for real-time, short-term, and long-term data respectively. This scheme, in conjunction with the optimization methods, provides a scalable solution as the WRITEBACK (WB) and FAC2 tables are never overfilled and can be accessed quickly. However, over time, FACT can become very large and can be responsible for a degradation in performance. You can improve performance by partitioning the FACT into smaller, more manageable tables.
Features

You should schedule partitioning during a downtime period to minimize inconvenience to your users. The partitioning process can be manual or automatic.
Manual Partitioning

SAP has developed a wizard for partitioning the FACT table. This wizard has the following steps:
Step 1  Login
Step 2  Select Environment
Step 3  Select Model
Step 4  Select Dimensions for the New Partition Definitions
These dimensions are the primary key of the partitions. Typically, you would use Time and Category, but any other combination is possible.
Step 5  Set New Partition Definition
Select appropriate members for each of the selected dimensions.
Step 6  Performance Check
Create an MDX query to execute for the performance check. A simple query could be like the following example:
SELECT
NON EMPTY {[TIME].[2008.QTR_JUNE]:[TIME].[2008.QTR_SEP]} ON 0,
NON EMPTY {[BUSORG.H1].Children} ON 1
FROM [OPERATIONS]

Step 7  Review New Partition Definition
The system executes the MDX query seven times to calculate average performance. The display shows the results and the executed query. Choose Next to build the partition.
Step 8  Completed
The final display shows:
- Partitions in original state (before partitioning), shown as a table with columns for Name, Last processed, Type (MOLAP, ROLAP), and Source
- Partitions in current state (after partitioning), shown as a table with columns for Name, Last processed, Type (MOLAP, ROLAP), and Source
- Performance numbers, indicated by a simple bar chart for the before and after versions
Automatic Partitioning

To create a partition scheme automatically, you only need to specify the environment and model. The system uses the SQL Query Engine (SQE) log to create an optimum partition scheme based on usage and query history.

5 High Availability

5.1 High Availability Recommendations


To ensure high availability, efficiently manage incoming server requests, and eliminate single points of failure for SAP Planning and Consolidation, take the following recommendations into consideration when designing your configuration:
- Install multiple application servers (that is, create server tiers).
- Use a hardware or software load balancer to manage incoming traffic to the application servers.
- Implement the Microsoft SQL Server solutions for high availability and failover for your clustered servers.
For more information about Planning and Consolidation architecture, see:
http://service.sap.com/instguidescpm-bpc → Planning and Consolidation 10.0, version for the Microsoft platform → SBOP Plan & Consol 10.0 M Master Guide
http://service.sap.com/instguidescpm-bpc → Planning and Consolidation 10.0, version for the Microsoft platform → SBOP Plan & Consol 10.0 M Installation Guide

6 Software Change Management

Transports between your development, test, and production environments must be manually performed by your Planning and Consolidation administrator or an SAP representative.
RECOMMENDATION

We recommend that you involve SAP technical consultants in transport operations. This section provides tips for incrementally moving Planning and Consolidation information through these environments. These procedures should be considered recommendations and guidelines, and could require modification based on your environment. IT infrastructures typically support a development, a test, and a production instance of a business system. This guide describes how to manage the transport from test to production. While the transport from development to test is analogous, your development procedures may vary.
RECOMMENDATION

If you are developing custom Data Manager packages or creating custom logic, contact SAP product support to ensure that your changes can be supported.
NOTE

Planning and Consolidation does not support the extension of functionality using code-based interfaces (such as an application programming interface). You can, however, modify your Data Manager packages and custom logic through standard Planning and Consolidation tasks. Supported packages and custom logic are not affected by system version upgrades. Business users create reports, templates, published books, and transformation files in the production system. You can refresh your test system from production with backup and restore. Remember that security profiles transfer as well, so make any necessary modifications after the restoration. In some cases, you may want to move artifacts incrementally, for example, if other parts of the system are also moving from development through test to production. If so, follow the procedures described in this section, switching the source and target environments. Establishing the right business processes is an essential step to ensuring that system component transport is both compliant and efficient. Planning and Consolidation customers have typically instituted a change request system, often leveraging existing change request tools. Users, developers,

and testers can request the move of specific named artifacts from one environment to another. An assigned system operator is responsible for executing the change request and notifying the requester that the artifact has been moved. Once the requester has verified that the correct artifact has been moved properly, the change request is completed.
RECOMMENDATION

A change request system can be implemented in any number of ways (including e-mail, for example). However, we recommend that you use a robust request management system.
NOTE

Migrate dimension and security changes to the production environment first (if these changes are not implemented first, logic, data loads, and reports may not work appropriately).
CAUTION

For more information about using proper testing procedures to mitigate risks when modifying the production environment, see Test Environment Usage [page 80].

6.1 Transport and Change Management


This section provides tips for incrementally moving information from a Planning and Consolidation development environment to a Planning and Consolidation production environment. It also discusses proper testing procedures to mitigate risks when modifying the production environment. Note that these are recommended procedures and could require modification based on individual environments. Planning and Consolidation includes the Change and Transport System (CTS), which is a tool that helps you to organize development projects in the ABAP Workbench and in Customizing, and then transport the changes between the SAP systems in your system landscape. For more information, see SAP Help Portal at http://help.sap.com.

6.1.1 Test Environment Usage


The purpose of having a test environment is that change requests can be thoroughly tested in an environment that is nearly identical to production, without interrupting the daily use of Planning and Consolidation. With this in mind, the test process should include, but not be limited to, the following:
- Ensure that the test environment is synchronized with the production environment (including items such as dimensions, data, templates, and logic) before starting the testing. By providing a comparable baseline of information, this assists with troubleshooting should the testing not

produce the desired result. For example, if the test is to enhance the performance of logic, the data should be synchronized; then, after the improved logic runs, the data should correspond to what it was before the changes.
- Test in steps. Do not make all the changes at once. Test logic on small subsets of data, change the hierarchy in pieces, and modify reports bit by bit. This allows for troubleshooting of any issues that arise during the test.
- The final test should duplicate what happens in the production environment. For example, suppose the tests are performed on logic, and the logic has been verified to work on a small subset of data (one month and one category) but runs against a larger set on production (a full year and one category). Make sure that you run the test for the full year so that you know how the system reacts under more realistic conditions. A full year could max out memory or CPU, or run much longer than expected based on your monthly run. When you increase the number of tests run in the test environment, you improve the accuracy of your risk assumptions for the production environment.
- When running data validation tests, make sure that you test not only that the changes have returned the expected results, but also that data that should not be affected by the change truly was not. For example, suppose there was a logic change to calculate a new account. First, validate that the new account is calculated correctly. Then make sure that existing reports tie out as before.

6.1.2 Dimension Updates


Dimension files are *.xls files in the x:\PC_home\WebFolders\AppSet\AdminApp folder on the server. Dimension files have the same name as the dimension. The security file is named Users.xls. Considerations when migrating dimension and security updates to production are:
- Make sure the files are in sync before modifying them.
- Always make backups before moving the files.
- If using the NEWID field to change a member name, make this change again manually in production, rather than just copying the tested file over the production file. Using the NEWID field ensures that all data in the Fact tables is converted to the new name; overwriting the name in the ID column does not trigger this change in the Fact tables.
- If adding properties, make sure to add those properties prior to validating and processing the dimension.
- If modifying security, make sure the data access security is appropriate for the target environment. Sometimes different security access is assigned on development and production.

After Validate and Process of the updated dimensions, make sure the advanced and default logic files are updated as appropriate. Some advanced and default logic files include a filter called *XDIM_MEMBERSET. This filter dynamically creates a list of dimension members each time the file is saved; the list is updated when the file is saved, not on dimension Validate and Process. For example, if there is an *XDIM_MEMBERSET that selects all base-level accounts (the default currency conversion logic, DefaultTranslation.lgf, has this) and a base-level account (which is not calculated) is added, the filter needs to be refreshed. Therefore, if a base-level account is added, all logic with such a filter should be saved again.
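As a sketch of the kind of filter being described (the parent member TOTALACCOUNTS is a hypothetical example; DefaultTranslation.lgf selects its own member set):

```
// Selects all base-level accounts under a parent when the file is saved;
// the resulting member list is frozen into the compiled .lgx file, so
// the file must be saved again after new base-level accounts are added
*XDIM_MEMBERSET ACCOUNT = BAS(TOTALACCOUNTS)
```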

6.1.3 Security Profile Updates


If changes to dimension structures are being transported, the data access security profiles should also be updated. Consider the following items in regard to this process:
- By default, there is no access to secured dimension members unless otherwise specified.
- In most environments, the use of Team profiles simplifies the assignment of rights.

6.1.4 Logic File Migration


There are three types of logic within Planning and Consolidation: advanced formulas, dimension formulas, and report formulas. Dimension formulas have Excel files (*.xls) associated with them, and advanced formulas have logic files (*.lgf) associated with them. The third type comprises formulas in reports and input forms. When moving logic within Planning and Consolidation, we are referring to advanced formulas or dimension formulas. All advanced formulas are stored in the <Drive>\WebFolders\<Environment>\AdminApp\<ModelName> folder on the server. Dimension formulas are stored in the dimension sheets in the Formula column. Dimension sheets are found in <Drive>\WebFolders\<Environment>\AdminApp. Compiled dimension formulas are located in the <Drive>\WebFolders\<Environment>\AdminApp\dimension folder. Planning and Consolidation advanced formula logic has the following file structure:
File Extension  Description
.xls            These files include default.xls and a number of other *.xls files used as templates for script logic files; they are only used for the 4.2 version.
.lgf            These files are text files that are created from the logic statements in the logic editor in the Administration Client. In some cases where expert logic is needed, you can create LGF files.
.lgl            These are library files that are included in other logic files.
.lgx            These are compiled logic files. Compiled files are the files that are actually run by the SQL and Analysis Services logic engines.
Things to consider when moving logic:
- Before modification, make sure the files to be modified are in sync on production.
- Make a backup of the production logic files (*.xls, *.lgf, *.lgx, *.lgl).
- Make sure that you move any associated files (that is, files that are included in the file being moved) with the logic being moved. Logic files that include another logic file contain the following statement: *INCLUDE LogicFilename.*
- Ensure that all dimensions are updated before moving logic.
- Overwriting the tested files with the development files is acceptable.
- Save each file moved in the appropriate order. Appropriate order means that some files should be saved before others. For example, the Default logic includes Default Translation, so Default Translation should be saved before Default. Always save the file included in the current logic file before the current logic file itself.
- Only files that have been changed need to be resaved, unless there are dimension changes.
For more information about migrating dimensions, see Dimension and Security Updates [page 81].
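As an illustration of the save-order rule above, a default logic file that includes the translation logic might look like the following sketch (the surrounding calculation is a placeholder, not logic from this guide):

```
// Default.lgf — save DefaultTranslation.lgf first, then save this file,
// so that the included logic is compiled before it is referenced
*INCLUDE DefaultTranslation.lgf

// ...model-specific default calculations would follow here...
*COMMIT
```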

6.1.5 Data and Data File Migration


Data file loading is typically tested during implementation, and whenever there are structural changes to the data; it is not necessary to run every data load through the test environment before loading data files. Changes to the structure of data files to be loaded into Planning and Consolidation should first be tested in the test environment. Once these tests are complete, the files can be moved to the production machine. You can move data files to the production server using the Download data file and Upload data file options in Data Manager. Consider the following items when moving these files:
- Make backup copies of the production files before the move.
- Moving the data files does not load the data to the server.
Occasionally, data in the database needs to be moved from one server to another, for example, when the data to be moved was loaded through an input form or is not encompassed in one data file. To move data from one server to another:
1. Use an export Data Manager package to extract base-level data from the test server.
2. Use an import Data Manager package to load the data on production.
RECOMMENDATION

Use Data Manager to import and export Data Manager packages. Do not perform a direct tableto-table copy of data from one server to another. This requires highly trained SQL expert capabilities and can easily lead to corrupted data.


6.1.6 Transformation and Conversion File Migration


Changes to transformation and conversion files should first be tested in the test environment. Once these tests are complete, they can be moved to the production machine. Transformation and Conversion files are located in several places, depending on whether the files apply to a specific site or the entire company. For files that are applicable to the company, look in the Data Manager folder directly under the Model folder:
X:\BPC_home\Webfolders\<Environment>\<Model>\Data Manager\TransformationFiles X:\BPC_home\Webfolders\<Environment>\<Model>\Data Manager\ConversionFiles

For team-related files, look in the Data Manager folder directly under the Team Name folder in the Teams directory under the application:
<Path to Model folder>\Teams\<TeamName>\DataManager\TransformationFiles <Path to Model folder>\Teams\<TeamName>\DataManager\ConversionFiles

When moving these files, ensure that the following steps have been completed:
- Before modification, make sure the files to be modified are in sync.
- Make backup copies of the production files before the move.
- When moving the transformation and conversion files, make sure that you copy both the filename.xls file and the filename.CDM or filename.TDM file for each type of file. The xls file and the CDM (conversion XML) or TDM (transformation XML) file are both required for the transformation logic to work.
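For orientation, a minimal transformation file might look like the following sketch; the option values, column mappings, and the conversion file name MyAccountConversion.xls are illustrative assumptions, not values from this guide:

```
*OPTIONS
FORMAT = DELIMITED
HEADER = YES
DELIMITER = ,

*MAPPING
ACCOUNT = *COL(1)
ENTITY = *COL(2)
TIME = *COL(3)
AMOUNT = *COL(4)

*CONVERSION
ACCOUNT = MyAccountConversion.xls
```

The .xls file holds this definition, and the matching .TDM (transformation XML) file is what the engine actually uses, which is why both files must travel together.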

6.1.7 Data Manager Package Migration


Data Manager packages are stored on the server. If you modify or create a package on the test system, it is necessary to move the package to production. Data Manager packages can be stored in two locations: Microsoft SQL Server Separate files
RECOMMENDATION

Store Data Manager packages as separate files, since these can be backed up with the rest of the environment.
NOTE

To migrate data packages, you must have access to both the source and target servers.


6.1.8 Moving Microsoft SQL Server-Based Data Manager Packages


Procedure

1. On the server, choose Start > Programs > Microsoft SQL Server > Enterprise Manager.
2. In Enterprise Manager, choose Data Transformation Services > Local Packages.
NOTE
The source package is located in this window.
3. Open the source package in design mode.
4. Choose Package > Save As. The package can be saved directly to the target server from this location, or it can be saved as a Structured Storage File (*.dtsx) and copied to the target server. Once it is moved, it can be opened on the production server in Enterprise Manager and saved.
NOTE
A package cannot be saved with the same name as an existing package. Before moving the file, rename the existing package on the target server.
5. If the package you are adding is new, add it to the Data Manager. For more information, see the application help on SAP Library: http://www.help.sap.com.

6.1.9 Moving File-based Data Manager Packages


Procedure

1. Locate the package files (*.dtsx).
For company-related files, look in the Data Manager folder directly under the model folder:
X:\BPC_home\Webfolders\<Environment>\<Model>\DataManager\PackageFiles
For team-related files, look in the Data Manager folder directly under the Team Name folder in the Teams directory under the model:
<Path to Model folder>\Teams\<TeamName>\DataManager\PackageFiles
2. Copy the package file to the corresponding location on the target server.
NOTE
A package cannot be saved with the same name as an existing package. Before moving the file, rename the existing package on the target server.
3. If the package you are adding is new, add it to the Data Manager. For more information, see the SAP Data Manager Guide.
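The note above (rename an existing package before copying a new one of the same name) can be sketched as a short helper. The function name and the .bak renaming convention are illustrative assumptions, not part of the product:

```python
import shutil
from pathlib import Path

def copy_package(dtsx: Path, target_dir: Path) -> Path:
    """Copy one Data Manager package file (*.dtsx) into the target server's
    PackageFiles folder. A package cannot be saved under the same name as an
    existing one, so any existing file is renamed first (here: *.dtsx.bak)."""
    target = target_dir / dtsx.name
    if target.exists():
        # Rename rather than overwrite, per the note above.
        target.rename(target.with_name(target.name + ".bak"))
    shutil.copy2(dtsx, target)
    return target
```

After the copy, the package still has to be added in Data Manager (Organize Package List) as described in step 3.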


6.1.10 Report, Input Form, and Book Migration


Reports, Input Forms, and Books are located on the server at this location:
x:\BPC_Home\WebFolders\<Environment>\<Model>\Excel

They are simple Excel files or templates. Move them from test to production, keeping the paths identical. If they are site-specific reports and input forms, add them to the appropriate site location. The books in Excel are templates; the HTML or PDF books must be republished on the production system.
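Since the paths on test and production should be identical, a quick recursive comparison of the two Excel folders can flag reports and input forms that have not yet been moved. A sketch, with hypothetical folder arguments:

```python
import filecmp
from pathlib import Path

def report_differences(test_root: Path, prod_root: Path) -> list:
    """Recursively compare the test and production Excel folders and return
    the relative paths that exist only on the test server, i.e. files that
    still need to be moved to production."""
    missing = []
    def walk(cmp: filecmp.dircmp, prefix: str = "") -> None:
        for name in cmp.left_only:               # present in test, absent in prod
            missing.append(prefix + name)
        for sub, subcmp in cmp.subdirs.items():  # recurse into common subfolders
            walk(subcmp, prefix + sub + "\\")
    walk(filecmp.dircmp(test_root, prod_root))
    return missing
```

An empty result means every file under the test Excel folder also exists at the same relative path on production.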

6.1.11 Documents View Republication


The system indexes Planning and Consolidation content in the Documents view when it is posted. That index is unique to the environment in which the content is posted. If you move a document to a new environment, the indexes that reference the document are not updated. It is therefore better to republish these documents on the production server than to try to move them. Additionally, because users can create these types of items, it is difficult to ensure that the version on development is in sync with production.
NOTE

The above guideline does not apply when you move the entire environment from development to production (SQL Database, OLAP Cube, WebFolders, and FileDB).

6.2 Support Packages and SAP Notes Implementation


SAP Planning and Consolidation Support Packages are available on the Software Download Center of SAP Service Marketplace (SMP). Each Support Package is tied to an individual SAP Note; a description of the Support Package and the installation instructions are included in the note.

Support Packages are cumulative, meaning that each subsequent Support Package contains the fixes from all prior Support Packages since the last major release. For example, Planning and Consolidation 10.0 SP03 contains the fixes included in Planning and Consolidation 10.0 SP01 and SP02. You must install the preceding major release (that is, 10.0) before installing a Support Package.

Support Packages are typically installed on the application server. If a client update is required, we provide an SMS package so you can push the client software to users, so no downtime is necessary.

SAP Planning and Consolidation 10.0, version for the Microsoft platform, does not support SAP Solution Manager Maintenance Optimizer or the SAP Note Assistant at this time.


7 Troubleshooting

7.1 Planning and Consolidation Version Information


To troubleshoot Planning and Consolidation effectively, you must determine your current version of the system. You can use one of the following methods:
When using client-based subcomponents of the system (such as the SAP BusinessObjects EPM solutions, add-in for Microsoft Office), click the user name in the Session Information section of the action pane. The version is shown in the dialog box that is displayed.
When using the Server Manager, examine the System Information dialog box (this appears when the Server Manager is first started). See Managing Your Planning and Consolidation Servers [page 31].
When using the Web Client, click About Planning and Consolidation 10.0 in the footer section of any page.
There are also two text files, OfficeVersion.txt and AdminVersion.txt, that contain the version information for the Office and Administration clients respectively. These text files are updated whenever the associated client is updated, for example by the Client Auto Update facility (see also Client Options [page 32]).
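The two version files can also be read programmatically, for example to collect client versions from several machines during troubleshooting. This is a sketch; the directory argument is hypothetical and depends on where the client is installed:

```python
from pathlib import Path

def read_client_versions(client_dir: Path) -> dict:
    """Read OfficeVersion.txt and AdminVersion.txt from the given client
    installation folder and return whatever version string each contains.
    Files that do not exist on the machine are simply skipped."""
    versions = {}
    for fname in ("OfficeVersion.txt", "AdminVersion.txt"):
        f = client_dir / fname
        if f.exists():
            versions[fname] = f.read_text().strip()
    return versions
```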

7.2 Analyzing Problems Using Solution Manager Diagnostics


The diagnostics functions in SAP Solution Manager allow identification, analysis, and resolution of problems. You can set up Solution Manager Diagnostics after you set up your Planning and Consolidation servers. For information about setting up Solution Manager Diagnostics, see Connecting to Solution Manager Diagnostics in the Planning and Consolidation Installation Guide.

7.3 Installing the AppSight Black Box Service


Procedure

1. Download and extract the *.rar files attached to SAP Note 1356729 from SAP Service Marketplace.
2. Create a directory on your C:\ drive called Identify.
3. Copy either the Triserv2.rpr or the Triserv_XpressServer.rpr file into the Identify folder, depending on which application needs to be monitored.


Triserv2.rpr: The BlackBox profile that is used by the application. This profile outlines the type of process for BlackBox to monitor. The filename must match the one in the startservice.bat file.
Triserv_XpressServer.rpr: The Xpress Server profile. The filename must match the one in the startservice.bat file.
4. Install the AppSight Black Box Service in standalone mode:
1. Run AppSight Black Box Service.exe.
2. Select Install.
3. Accept the agreement and choose Next.
4. Enter any information for username and company name and choose Next.
5. Leave the server prompt blank for standalone mode and choose Next.
6. Leave the Black Box with no license option and choose Finish.
5. Copy the startservice.bat and stopservice.bat files into the Identify folder. Startservice.bat starts the application using the profile path and the naming convention for the log. Stopservice.bat stops the BlackBox application. You must stop the application before you can copy the log.
6. Run startservice.bat to begin logging. Once you start the application, the Identify folder contains an .ASL file, which is the log that records all the information from the application. The computer name and date are used as variables in the naming convention. The log stays at 0 bytes until the service is stopped; only then is its actual size shown.
RECOMMENDATION

Stop and start the service at the end of each day, copy the file to another folder, and have the application create a new one. This allows you to monitor the file size.
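The daily stop-copy-restart routine recommended above can be partly scripted. The helper below only performs the copy step; the service itself is still stopped and started with the stopservice.bat and startservice.bat files. The function name and archive layout are assumptions for illustration:

```python
import shutil
from datetime import date
from pathlib import Path

def archive_logs(identify: Path, archive: Path) -> int:
    """Move all finished .ASL logs from the Identify folder into a dated
    archive folder and return how many were moved. Call this between
    stopservice.bat and stopservice.bat's restart via startservice.bat,
    since the log only reports its real size once the service is stopped."""
    archive.mkdir(parents=True, exist_ok=True)
    moved = 0
    for log in identify.glob("*.ASL"):
        shutil.move(str(log), str(archive / f"{date.today()}_{log.name}"))
        moved += 1
    return moved
```

Running this once per day keeps each .ASL file to a single day's activity, which makes the file sizes easy to monitor.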

7.4 Reporting and Analyzing System Changes


Within Solution Manager Diagnostics, you can use E2E Change Reporting and Change Analysis (E2E CA) to view and report on technical configuration changes that have been made to your SAP Planning and Consolidation systems. Change Reporting and Analysis provides a top-down view of configuration parameters and configuration parameter changes. It is based on the data of the Configuration and Change Database (CCDB). Documentation is available to you on SAP Service Marketplace at http://service.sap.com/almtools. Navigate to SAP Solution Manager and Tools > SAP Solution Manager > End-to-End Root Cause Analysis, then review the E2E Change Analysis - User Guide as well as the documentation listed under Installation and Configuration.

7.5 Generating and Analyzing Trace Files Using E2E Trace


You can generate trace files on client and server components for troubleshooting purposes using E2E Trace. Trace files collect information about client and server interactions, presenting trace information about the entire request and response of a process step. The trace files are then uploaded to the server for analysis in SAP Solution Manager Diagnostics (SMD). Client-side and server-side trace information is displayed in the E2E Trace application in SMD. E2E Trace is delivered and installed with Planning and Consolidation. To configure and activate tracing on a client machine, see Logging and Tracing Configuration [page 92]. After enabling and running tracing, you can find information about evaluating the results of the trace in the E2E Trace Analysis - User Guide in the Diagnostics section of SAP Service Marketplace.
Prerequisites

The latest Planning and Consolidation clients with the E2E Trace plug-in are installed on your client machine.
Introscope Workstation has been downloaded to your PC.
The DotNet Agent of the Planning and Consolidation server is online. Refer to the section Verifying the DotNet Agent of the Planning and Consolidation Server is Online below for instructions.
The minimum release on the SAP Solution Manager Diagnostics side for E2E Trace Analysis is Solution Manager 7.0 EhP 1 SP23.
Procedure

Generating and analyzing trace files using E2E Trace involves the following tasks, which are described below:
Enable tracing in the ABAP back-end system
Perform a trace in the Administration module
Perform a trace in the Microsoft Excel module
Manually upload the trace file to SMD if this is not done automatically
Evaluate the trace file in SAP Solution Manager
Perform a Trace of the Administration Module

1. Launch the E2E Plug-In by running plugin-starter-gui.exe.
2. Select Assign, choose OSoftAdminMain.exe, then choose Save.
3. Select Launch, then ensure that Instrument HTTP protocol is selected and that wininet is set as the protocol.
4. When the Administration module opens, enter the following values in the E2E Trace Plug-In user interface:
1. Enter a name for your trace in Business Transaction Name. After uploading the trace to SMD, you locate the trace by this name.
2. Set the Session Trace Level to High.
3. Enter the SMD server host.
4. Enter the SMD HTTP port.
5. Choose Start Transaction in the E2E Trace Plug-In user interface, then choose OK to log on to the Administration module.
6. Choose Stop Transaction in the E2E Trace Plug-In user interface to upload the transaction XML to the SMD server.
7. In the E2E Trace application within SMD, collect the corresponding trace of the .NET server and ABAP server.

Perform a Trace of the Microsoft Excel Module

1. Launch the E2E Plug-In by running plugin-starter-gui.exe.
2. Select Assign, choose Excel.exe, then choose Save.
3. Select Launch, then ensure that Instrument HTTP protocol is selected and that wininet is set as the protocol.
4. When the Excel module opens, enter the following values in the E2E Trace Plug-In user interface:
1. Enter a name for your trace in Business Transaction Name. After uploading the trace to SMD, you locate the trace by this name.
2. Set the Session Trace Level to High.
3. Enter the SMD server host.
4. Enter the SMD HTTP port.
5. Click Log On in the Excel toolbar.
6. Choose Start Transaction in the E2E Trace Plug-In user interface, then choose OK to log on to the Excel module.
7. Choose Stop Transaction in the E2E Trace Plug-In user interface to upload the transaction XML to the SMD server.
8. In the E2E Trace application within SMD, collect the corresponding trace of the .NET server and ABAP server.

Manually Upload the Trace File

If you need to manually upload a trace file to SMD, perform these steps:


1. On the client machine on which you recorded the trace, expand the Manually upload section.
2. Choose Browse under Upload BusinessTransaction.xml. The file to upload appears in <trace plug-in folder>\Logs.
3. Select the BusinessTransaction.xml file and choose Upload.

Evaluation of Traces in SAP Solution Manager

1. In SAP Solution Manager, access the Root Cause Analysis work center.
2. Choose End-To-End Analysis.
3. Select the query that contains all systems involved in the E2E Trace and select all systems.
4. Choose Trace Analysis to open a new window with the E2E Trace Analysis application.
5. Select the trace from the list.
6. If you want SMD to collect corresponding server-side trace data, choose Select systems for trace collection dynamically.
NOTE
This starts trace data collection and results in a list of success or error messages. If you forgot to enable tracing or wait too long between trace recording and trace data collection (for example, more than one day), trace data may not be found.
7. Select the first step of the recorded E2E Transaction Trace and choose Display.
8. Select the Summary tab if it is not selected.
9. Select the Message table tab.
10. Expand the Server Analysis tray and choose the Request tree tab.
11. Choose Expand all to see incoming HTTP calls, outgoing DotNet Connector calls, and incoming RFC calls, then do one or more of the following:
To view Introscope Transaction Trace data, select a line with incoming HTTP calls, then choose Display Introscope Transaction Trace.
To view ABAP Trace data, select a line with incoming RFC calls, then choose Display aggregated ABAP Trace.
To view ABAP SQL Trace data, select a line with incoming RFC calls, then choose Display ABAP SQL Trace Summary.
Verifying the DotNet Agent of the Planning and Consolidation Server is Online

As a prerequisite for the automatic trace collection, make sure that the DotNet Agent of the Planning and Consolidation server is online:
1. Choose Workstation > New Investigator.
2. Drill down to Super Domain and locate the hostname of the .NET server.
3. When the DotNet Agent of the Planning and Consolidation server is online, the node DotNet Process appears.
4. If the node DotNet Process does not appear, the W3WP process of the Planning and Consolidation server may have shut down. Trigger an action on the Planning and Consolidation server, such as connecting the Administration Client to the server. After this, the node DotNet Process should appear.

More Information

Logging and Tracing Configuration [page 92]
Log and Trace File Management (in the application help in the SAP Library at http://help.sap.com)

7.6 Logging and Tracing Configuration


You can create log and trace files for troubleshooting purposes. You can view the log and trace files in the Log Viewer tool of Solution Manager diagnostics after performing the configuration described below.
Features

Configuring logging and tracing involves setting a trace level, trace user, and log level for all environments. After you have completed your troubleshooting, you can deactivate logging and tracing.
The settings for the logging and tracing-related environment parameters are stored in a file named Logging.cfg within the file server directory Webfolders\AdminTemplates\. For example, <Drive>\Data\Webfolders\AdminTemplates\Logging.cfg.
You can activate tracing for an individual user within Additional Tasks in the Environment Tasks action pane of the Administration Client by choosing Manage Logging, then indicating the type of log and trace and the user.
To view and analyze the log and trace files in the Log Viewer application of SAP Solution Manager Diagnostics, access SAP Solution Manager, then choose Workcenter Root Cause Analysis (transaction SOLMAN_WORKCENTER) > System Analysis > <SAP BPC system> > Log Viewer > <host> > Start Log Viewer. In the Log Viewer application, select the appropriate log or trace file in the drop-down box.
NOTE

The system selection to find your <SAP BPC system> allows for searching and filtering for attributes like Installation Number, System ID (SID), or System Type. You can also define your own queries. The host selection displays all hosts when a multiserver system landscape exists.


7.7 Troubleshooting Client and Server Issues


This section lists client issues that you may encounter, as well as suggestions for troubleshooting and resolving them:
Problem: Marginal text and buttons in Planning and Consolidation dialog boxes are unreadable or incorrectly formatted.
Analysis: Check the DPI font settings on the host computer.
Solution: On the Settings tab in the Microsoft Windows Display Properties dialog box, choose Advanced. On the General tab, ensure that the DPI setting is 96 DPI.
NOTE

This problem is only found in Planning and Consolidation 7.5 SP00; as of SP01, it has been resolved.
Problem: Input and output errors occur when sending data from a client machine to the application server.
Analysis: These errors appear in the system event log of the application server when connected through Remote Desktop and Microsoft Terminal Services.
Solution: Do not connect from the client to the application server using Remote Desktop.

7.8 Troubleshooting Reports


The following list contains Planning and Consolidation report problems that you may encounter, as well as suggestions for troubleshooting and resolving them:
Problem: There is a problem when retrieving information from the application server.
Analysis: This error can occur when you are accessing large spreadsheets.
Solution: Refresh the data (repeat if necessary). If refreshing does not correct the error, log out of Planning and Consolidation, log back in, and retry.
Problem: A report template crashes Excel (when you try to expand it) after a version upgrade.
Analysis: This is caused by the inclusion of extraneous spaces in Planning and Consolidation formulas.
Solution: Review your formulas and remove all extra spaces. For example, the following formula caused this problem (prior to the removal of the extra space) when used in a production system:
=IF ( ISERROR (#)


7.9 Troubleshooting in Data Manager


The following list contains Data Manager problems that you may encounter, as well as suggestions for troubleshooting and resolving them:
Problem: Unable to add a new package to the system.
Analysis: This must be done using Microsoft SQL Server.
Solution: To add a package, use the following procedure:
1. Open Microsoft SQL Enterprise Manager on the Planning and Consolidation server.
2. Create a copy of your package (save it using another name).
3. Move the package to the following location:
<BPC_HOME>\webfolder\environment\model\teamfiles\[teamname]\datamanager\packagefiles\
4. Choose Data Manager > Organize Package List.
5. Click Add Package, select the new package, and save your changes.
Problem: Unable to clear the status of a package that did not complete.
Analysis: The status can be changed using Microsoft SQL Server.
Solution: To clear the status of a package that did not complete:
1. Open Microsoft SQL Enterprise Manager on the Planning and Consolidation server.
2. In the environment in which the package was executing, clear the package from the tblDTSLOG table.
Problem: Unable to run Data Manager packages. This can occur after you choose eData and the system prompts you to refresh the data. The following error message can occur:
An error occurred while getting server information.
Analysis: This can occur for a number of reasons:
The BPCSendGovernor may not be responding properly.
The system may be low on resources (such as memory).
The Planning and Consolidation Global Cache may not be responding properly.
Solution:


To resolve this issue, complete the following steps:
Restart the BPCSendGovernor service.
Restart IIS.
Problem: Unable to save changes to conversion or transformation files on the server after validating and saving them from the client computer. The system indicates that the file has been saved successfully; however, new files do not appear on the server, and modifications to existing files are not reflected.
Analysis: This occurs when the client workstation's environment variables are incorrectly set.
Solution: Modify the TEMP and TMP environment variables to point to the C:\TEMP folder on the local workstation.
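The two restarts suggested above can be scripted for repeatability. This is a sketch assuming the service is registered under the name BPCSendGovernor and that the standard Windows net and iisreset commands are available on the application server; the run parameter is injectable so the command sequence can be inspected without touching a live server:

```python
import subprocess

def restart_data_manager_services(run=subprocess.run) -> None:
    """Restart the BPCSendGovernor service and then IIS, the two resets
    suggested for the 'error while getting server information' problem.
    `run` defaults to subprocess.run but can be replaced for dry runs."""
    for cmd in (["net", "stop", "BPCSendGovernor"],
                ["net", "start", "BPCSendGovernor"],
                ["iisreset"]):
        run(cmd, check=True)
```

Run this on the application server itself with administrator rights; iisreset briefly interrupts all sites hosted on that IIS instance.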



8 Support Desk Management

8.1 Remote Support Setup


To work with the highest efficiency and availability, SAP support needs to be able to work remotely. For this support, SAP uses the remote connection with SAProuter for a specific problem that you log by creating a customer message in the SAP Support Portal. For information about SAProuter, see the following SAP Note:

SAP Note  Title                                    Comment
486688    Schedule VPN connection to SAP network   See also the SAP Notes that this SAP Note refers to for specific settings or parameters that are necessary.

For further assistance, see the following SAP Note:

SAP Note  Title
812386    RFC connection to the SAPNet R/3 front end

8.2 CA Wily Introscope Integration


To enable application analysis (including performance monitoring), CA Wily Introscope (IS) is integrated into SAP Solution Manager Diagnostics (SMD). SAP provides CA Wily IS instrumentation for SAP Planning and Consolidation. IS for Microsoft .NET is an application management solution for managed .NET applications running on Microsoft's Common Language Runtime (CLR) environment.
CA Wily IS offers Dashboards for performance and stability analysis. In addition, the Investigator provides a detailed view of all application and environment metrics reported by the IS agent to the IS Enterprise Manager, which is the CA Wily IS server and part of SAP Solution Manager. User-specific interaction can be traced in CA Wily IS using the Transaction Trace.
Metrics, which are collected and reported through tracers defined in Probe Builder Directives (.pbd) files, define the information that is collected at runtime. The CA Wily IS .NET agent collects this information and reports it to the Enterprise Manager, which stores these metrics in its own database. You can view performance metrics using the IS Workstation or the IS WebView application.

2011-05-10

PUBLIC

97/102

8 8.3

Support Desk Management Problem Message Handover

Prerequisites

To enable IS for Planning and Consolidation, install and configure the CA Wily IS .NET agent on the SAP Planning and Consolidation application server hosts. For more information about setting up and configuring CA Wily Introscope for SAP Planning and Consolidation, refer to SAP Note 1126554 as well as SAP Note 797147 and its attached FAQ document. For more information about the installation, configuration, and use of SAP Solution Manager Diagnostics, visit SAP Service Marketplace at http://service.sap.com/diagnostics.
Procedure

1. Log on to the Root Cause Analysis work center of SAP Solution Manager (transaction code solman_workcenter).
2. Select System Analysis from the detail navigation menu.
3. Choose the query that contains the SAP Planning and Consolidation system, or find it in All Technical Systems.
4. Select the SAP Planning and Consolidation system from the systems selection table.
5. Choose CA Wily Introscope and log on to the CA Wily IS WebView.
6. Choose Start Introscope, then log on to the Introscope WebView. Do any of the following:
Select the Console tab to view Wily Dashboards.
Select the Investigator tab to view the Wily Investigator tree.
Select the Transaction Viewer tab to view Wily Transaction Trace.

8.3 Problem Message Handover


Problem messages can be logged at the SAP Support Portal on SAP Service Marketplace: http://service.sap.com/. You can use component strings to direct your Planning and Consolidation-related support message efficiently. The EPM-BPC-MS component string applies to Planning and Consolidation as a whole. The component strings in the following table apply to the Web Client:
Component String        Area
EPM-BPC-MS-WEB          Web Client (general)
EPM-BPC-MS-WEB-CNT      Documents View
EPM-BPC-MS-WEB-REP      Web Reports
EPM-BPC-MS-WEB-STA      Getting Started
EPM-BPC-MS-WEB-OTH      Other

The component strings in the following table apply to the EPM add-in for Microsoft Office:

Component String        Area
EPM-BPC-MS-EXC          EPM add-in for Microsoft Office (general)
EPM-BPC-MS-EXC-DM       Data Manager
EPM-BPC-MS-EXC-JRN      Journals
EPM-BPC-MS-EXC-SR       Send/Retrieve
EPM-BPC-MS-EXC-MNU      Custom Menus
EPM-BPC-MS-EXC-OTH      Other

The component strings in the following table apply to the Interface for Word:

Component String        Area
EPM-BPC-MS-WRD          EPM add-in for Microsoft Office (general)
EPM-BPC-MS-WRD-SR       Send/Retrieve
EPM-BPC-MS-WRD-OTH      Other

The component strings in the following table apply to the Interface for PowerPoint:

Component String        Area
EPM-BPC-MS-PPT          EPM add-in for Microsoft Office (general)
EPM-BPC-MS-PPT-SR       Send/Retrieve
EPM-BPC-MS-PPT-OTH      Other

The component strings in the following table apply to Administration:

Component String        Area
EPM-BPC-MS-ADM          Administration (general)
EPM-BPC-MS-ADM-WEB      Web administration
EPM-BPC-MS-ADM-CNS      Administration Client
EPM-BPC-MS-ADM-OTH      Other

The component strings in the following table apply to Server Manager:

Component String        Area
EPM-BPC-MS-SVM          Server Manager (general)
EPM-BPC-MS-SVM-BKP      Backup/Restore
EPM-BPC-MS-SVM-DIA      Diagnostic
EPM-BPC-MS-SVM-OTH      Other

The component strings in the following table apply to Audit:

Component String        Area
EPM-BPC-MS-AUD          Audits (general)
EPM-BPC-MS-AUD-MA       Manage Audit
EPM-BPC-MS-AUD-REP      Audit Reports

The component strings in the following table apply to Comments:

Component String        Area
EPM-BPC-MS-COM          Comments (general)
EPM-BPC-MS-COM-SR       Send/Retrieve
EPM-BPC-MS-COM-OTH      Other

The component strings in the following table apply to Solutions and the API Toolkit:

Component String        Area
EPM-BPC-MS-SOL          Solutions
EPM-BPC-MS-API          API Toolkit

The component strings in the following table apply to the Starter Kit:

Component String        Area
EPM-BPC-MS-SK           Microsoft Version Starter Kit
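If you route many support messages, the tables above can be kept as a small lookup. This is a hypothetical helper for internal use, not an SAP API; the dictionary shows only a subset of the component strings listed above:

```python
# Subset of the component strings from the tables above.
COMPONENTS = {
    "Data Manager": "EPM-BPC-MS-EXC-DM",
    "Journals": "EPM-BPC-MS-EXC-JRN",
    "Web Reports": "EPM-BPC-MS-WEB-REP",
    "Backup/Restore": "EPM-BPC-MS-SVM-BKP",
    "Audit Reports": "EPM-BPC-MS-AUD-REP",
}

def component_for(area: str) -> str:
    """Return the support component string for a functional area, falling
    back to the general Planning and Consolidation component EPM-BPC-MS."""
    return COMPONENTS.get(area, "EPM-BPC-MS")
```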

