
Contact for Any SAP Module Materials : sapmaterials4u@gmail.com ll VISIT: www.sapcertified.info ll

BusinessObjects Enterprise XI 3.0/3.1: Designing and Deploying a Solution

Learner's Guide BOE330


Copyright

© 2009 SAP BusinessObjects. All rights reserved. SAP BusinessObjects owns the following United States patents, which may cover products that are offered and licensed by SAP BusinessObjects and/or affiliated companies: 5,295,243; 5,339,390; 5,555,403; 5,590,250; 5,619,632; 5,632,009; 5,857,205; 5,880,742; 5,883,635; 6,085,202; 6,108,698; 6,247,008; 6,289,352; 6,300,957; 6,377,259; 6,490,593; 6,578,027; 6,581,068; 6,628,312; 6,654,761; 6,768,986; 6,772,409; 6,831,668; 6,882,998; 6,892,189; 6,901,555; 7,089,238; 7,107,266; 7,139,766; 7,178,099; 7,181,435; 7,181,440; 7,194,465; 7,222,130; 7,299,419; 7,320,122 and 7,356,779. SAP BusinessObjects and its logos, BusinessObjects, Crystal Reports, Rapid Mart, Data Insight, Desktop Intelligence, Rapid Marts, Watchlist Security, Web Intelligence, and Xcelsius are trademarks or registered trademarks of Business Objects, an SAP company, and/or affiliated companies in the United States and/or other countries. SAP is a registered trademark of SAP AG in Germany and/or other countries. All other names mentioned herein may be trademarks of their respective owners.


CONTENTS
About this Course
Course Introduction..........xiii
Course Description..........xiv
Course Audience..........xiv
Prerequisites..........xiv
Additional Education..........xiv
Level, delivery and duration..........xiv
Applicable certifications and designations..........xiv
Course success factors..........xv
Course setup..........xv
Course materials..........xv
Learning process..........xv

Lesson 1

Reviewing BusinessObjects Enterprise Architecture, Administration and Security


Reviewing BusinessObjects Enterprise Architecture, Administration and Security..........1
BusinessObjects Enterprise architecture..........2
    BusinessObjects Enterprise architecture basic terminology..........2
    Logging into BusinessObjects Enterprise..........5
    Starting the Server Intelligence Agent..........7
    Setting a schedule for a Web Intelligence document..........9
    Running a scheduled Web Intelligence document..........11
    Viewing a Web Intelligence document on demand..........13
BusinessObjects Enterprise Security..........15
    How rights work in BusinessObjects Enterprise..........15
    Access Levels..........15
    Advanced rights..........17
    Applying user and group rights to objects..........18
    Top-level folder security..........18
    Folder-level security..........18
    Object-level security..........19
    Inheritance..........19
    Rights specific to object type..........20
    Activity: User rights flash scenarios..........21

Table of Contents Learner's Guide iii


Review: Reviewing BusinessObjects Enterprise architecture, administration and security..........22
Lesson summary..........23

Lesson 2

Identifying Requirements
Identifying Requirements..........25
Identifying requirements..........26
    Asking the right questions..........26
    Activity: Workshop 1..........26
    Case Study: Jade Publishing..........26
    Content Management and Delegated Administration..........30
    Managing the promotion of content..........32
    Activity: Jade Publishing Case Study - Instructor led..........32
Review: Identifying requirements..........34
Lesson summary..........35

Lesson 3

Ensuring Availability of your Business Intelligence Solution


Ensuring Availability of your Business Intelligence Solution..........37
What is a Business Intelligence Solution?..........38
    Business Intelligence Solution..........38
    The challenges of discussing how to ensure availability..........38
    High Availability and Fault Tolerance Concepts..........39
    High Availability..........41
    Network Storage Solutions..........45
    Vertical and Horizontal Scaling..........46
    Backup..........47
    Deployment Strategies for High Availability..........47
    Designing a system for high availability..........47
High availability in the Application Tier..........50
    Web client to web server..........50
    Web server to web application server..........51
    Wdeploy..........52
High availability in the Intelligence Tier..........53
    Web Application Server to BusinessObjects Enterprise cluster..........53
    CMS clustering..........53
    Migrating and backing up CMS system data..........54
    Managing active/passive File Repository Servers..........54
    Crystal Reports Cache Server..........56
    Event Server..........56
High availability in the Processing Tier..........57

Designing and deploying a solution Learner's Guide iv


    Crystal Reports Processing Server..........57
    Crystal Reports Job Server..........57
    Report Application Server..........57
    List of Values Job Server..........57
    Web Intelligence Processing Server..........58
    Desktop Intelligence Servers..........58
    Connection Server..........58
    Understanding Server Groups..........59
    Working with server subgroups..........60
    Modifying the group membership of a server..........61
Creating a disaster and backup recovery plan..........62
    Disaster and risk management considerations..........62
    Creating a successful disaster recovery plan..........62
    Disaster Recovery Process..........63
    Creating a backup copy of BusinessObjects Enterprise system data..........65
    Using a backup copy of BusinessObjects Enterprise system data during disaster recovery..........66
    Creating a backup recovery plan..........67
    Understanding hot and cold/active and passive fail-over systems..........68
    Hot and cold/active and passive fail-over architecture..........69
Review: Ensuring Availability of your Business Intelligence Solution..........72
Lesson summary..........73

Lesson 4

Performance, Scalability and Sizing


Performance, Scalability and Sizing..........75
Designing a scalable system..........76
    What is scalability?..........76
    General BusinessObjects Enterprise scalability goals..........76
    Increasing overall system capacity..........77
    Increasing scheduled reporting capacity..........77
    Increasing report viewing capacity..........78
    Improving web response speeds..........80
Sizing a BusinessObjects Enterprise deployment..........82
    The sizing process..........82
    Step 1: Determining load..........82
    Estimating potential users..........83
    Estimating concurrent active users..........83
    Estimating simultaneous requests..........83
    Step 2: Determining the number of required services..........86
    Server Intelligence Agent..........87
    Memory requirement..........87
    Central Management Server..........88
    Processor requirements..........88
    CMS clustering across subnets..........88



    Memory requirements..........89
    Crystal Reports Cache Server..........90
    Processor requirements..........91
    Memory requirements..........91
    File Repository Servers (FRS)..........91
    Repository location..........91
    Calculating the number of FRSs required..........92
    Processor requirements..........92
    Disk requirements..........92
    Adaptive Processing Servers and Adaptive Job Servers..........92
    Search Server..........93
    Client Auditing Proxy Service..........93
    Services for Publishing..........93
    Program Scheduling Service..........94
    Replication Service..........95
    Event Server..........95
    Processor and memory requirements..........95
    Desktop Intelligence Servers..........95
    Processor requirements..........95
    Memory requirements..........96
    Enterprise Performance Manager (EPM)..........97
    Processor requirements..........98
    Memory requirements..........98
    Web Intelligence Processing Server..........98
    Processor requirements..........98
    Memory requirements..........99
    Physical Address Extension support (PAE)..........99
    Cache clean..........99
    Cache dirty..........100
    Refresh..........100
    Web Intelligence Job Server..........101
    Processor requirements..........101
    Disk requirements..........101
    Crystal Reports Processing Server..........101
    Algorithm used when Maximum Simultaneous report jobs is set to unlimited..........102
    Comparison of Crystal Reports Job Server and Crystal Reports Processing Server..........102
    Crystal Reports Processing Server deployment..........102
    Dedicated Crystal Reports Processing Server machine..........103
    Shared Crystal Reports Processing Server machine..........103
    Crystal Reports Processing Server Service Groups on a machine..........104
    On-Demand (live data) versus Saved Data Viewing (prescheduled instance)..........104
    Crystal Reports Processing Server Data Sharing..........105
    Processor requirements..........106
    Memory requirements..........106



    Details/Optimization..........106
    Crystal Reports Job Server..........107
    Processor requirements..........107
    Report Application Server..........107
    Processor requirements..........108
    Memory requirements..........108
    List of Values Job Server..........108
    Processor requirements..........109
    Memory requirements..........109
    Connection Server..........109
    Web Application Server/Web Application Container Server..........109
    Processor requirements..........110
    Multi-Dimensional Analysis Server (MDAS)..........111
    Processor requirements..........111
    Memory requirements..........112
    Disk requirements..........112
    Query as a Web Service (QaaWS)..........113
    Processor requirements..........113
    Memory requirements..........114
    Live Office..........114
    Step 3: Determining the configuration of machines..........115
    Step 4: System database tuning..........117
    Performance criteria..........117
Designing an architecture plan..........121
    Determining system load..........121
    Determining reporting requirements..........122
    Determining system deployment requirements..........123
    Activity: Determining architecture requirements..........123
Review: Performance, Scalability and Sizing..........126
Lesson summary..........127

Lesson 5

Deploying a System
Deploying a System..........129
Installing and configuring BusinessObjects Enterprise..........130
    Preparing the environment for installation..........130
    Installing BusinessObjects Enterprise..........131
    Testing the installation..........132
    Activity: Wdeploy..........133
Configuring BusinessObjects Enterprise..........138
    Intelligence Tier..........138
    Server Intelligence Agent (SIA)..........138
    Central Management Server..........139
    Guidelines for CMS clustering..........139
    Clustering CMS machines..........140



    Activity: Clustering CMS machines..........143
    Crystal Reports Cache Server..........145
    Input and Output File Repository Servers (FRS)..........147
    Activity: Configuring an active/passive FRS..........147
    Adaptive Servers..........149
    Event Server..........149
    Processing Tier..........150
    Job Servers..........150
    Desktop Intelligence Processing Server..........151
    Desktop Intelligence Cache Server..........152
    Connection Server..........154
    Web Intelligence Processing Server..........154
    Crystal Reports Processing Server..........156
    Report Application Server..........158
    Activity: Configuring processing tier servers..........159
    Activity: Revert back to a single machine BusinessObjects Enterprise deployment..........161
Troubleshooting BusinessObjects Enterprise..........163
    Using best practices when troubleshooting..........163
    Using a strategic troubleshooting method..........163
    Activity: Troubleshooting BusinessObjects Enterprise deployments..........165
Review: Deploying a system..........167
Lesson summary..........168

Lesson 6

Content Management
Content Management..........169
Designing a secured content management plan..........170
    What is a content plan?..........170
    Content management considerations..........170
    Analyzing stakeholder needs..........171
    Creating a content management plan..........171
    Creating a logical content plan..........172
    Delegated administration..........173
    Using rights to delegate administration..........174
    Choosing between Modify the rights users have to objects options..........175
    Owner rights..........176
    Creating groups for row and column security..........176
    Implementing row and column security using Business Views and Universes..........177
    Managing security rights across multiple sites..........178
    Rights required on the Origin site..........178
    Rights required on the Destination site..........179
    Federation specific objects..........179



    Replicating security on an object..........180
    Replicating security on an object using access levels..........181
    Documenting your content management plan..........181
    Activity: Content management plan..........181
Designing an instance management plan..........183
    Planning instance management..........183
    Setting instance limits..........183
    Activity: Instance management..........184
Designing a system auditing plan..........185
    Designing and implementing system auditing..........185
    Activity: System auditing..........185
Managing Content in Multiple Deployments..........186
    Understanding the key terms in content management..........186
    Managing the dependencies..........187
    Differentiating BusinessObjects LifeCycle Manager, Import Wizard, and Federation..........188
Understanding the Import Wizard..........189
    The roles of the Import Wizard..........189
    Updating objects..........190
    Identifying objects by name..........191
    Identifying objects by CUID..........192
    Activity: CUID generation..........193
    Using the Import Wizard to import data..........195
Managing BusinessObjects LifeCycle Manager..........208
    Understanding Life-Cycle Management..........208
    Introducing BusinessObjects LifeCycle Manager..........209
    Installing BusinessObjects LifeCycle Manager..........210
    Authentication and authorization..........210
    Navigating in BusinessObjects LifeCycle Manager..........210
    Defining a promotion job..........211
    Managing the dependents of a job..........211
    Mapping the dependents..........212
    Scheduling a job..........212
    Promotion with or without Security..........213
    Testing promotion..........214
    Rolling back a job..........215
    Using the Version Management System..........215
    Using Subversion as the Version Management System..........215
    Understanding "Air Gap" requirements..........216
Managing the Federation Services..........218
    Reviewing Federation..........218
    Replication types and mode options..........219
    Refresh from Origin or Refresh from Destination..........219
    Managing conflict detection and resolution..........221
    One-way replication conflict resolution..........221
    Two-way replication conflict resolution..........223
    Managing Object Cleanup..........225

Table of ContentsLearners Guide

ix

Contact for Any SAP Module Materials : sapmaterials4u@gmail.com ll VISIT: www.sapcertified.info ll

How to use Object Cleanup..............................................................................225 Object Cleanup limits........................................................................................226 Object Cleanup frequency.................................................................................226 Using Web Services in Federation...................................................................227 Session variable..................................................................................................227 File caching..........................................................................................................228 Custom deployment..........................................................................................228 Best Practices.......................................................................................................229 Limitations..........................................................................................................232 Troubleshooting error messages......................................................................233 Activity: One way replication (instructor led)...............................................235 Activity: One way and two way replication..................................................235 Review: Content Management.................................................................................238 Lesson summary........................................................................................................239

Answer Key
Review: Reviewing BusinessObjects Enterprise architecture, administration and security ...............................................................................................................243 Review: Identifying requirements...........................................................................245 Review: Ensuring Availability of your Business Intelligence Solution.............246 Review: Performance, Scalability and Sizing.........................................................247 Review: Deploying a system....................................................................................248 Review: Content Management.................................................................................250


Designing and deploying a solution

Agenda

Introductions, Course Overview...........30 minutes

Lesson 1: Reviewing BusinessObjects Enterprise Architecture, Administration and Security...........1.5 hours
- BusinessObjects Enterprise architecture
- BusinessObjects Enterprise Security

Lesson 2: Identifying Requirements...........2 hours
- Identifying requirements

Lesson 3: Ensuring Availability of your Business Intelligence Solution...........3 hours
- What is a Business Intelligence Solution?
- High availability in the Application Tier
- High availability in the Intelligence Tier
- High availability in the Processing Tier
- Creating a disaster and backup recovery plan

Lesson 4: Performance, Scalability and Sizing...........4 hours
- Designing a scalable system
- Sizing a BusinessObjects Enterprise deployment
- Designing an architecture plan

Lesson 5: Deploying a System...........6 hours
- Installing and configuring BusinessObjects Enterprise
- Troubleshooting BusinessObjects Enterprise

Lesson 6: Content Management...........6 hours
- Designing a secured content management plan
- Designing an instance management plan
- Designing a system auditing plan
- Managing Content in Multiple Deployments
- Understanding the Import Wizard
- Managing BusinessObjects LifeCycle Manager
- Managing the Federation Services


About this Course


Course Introduction
This section explains the conventions used in the course and in this training guide.


Course Description
This four-day instructor-led course teaches system administrators the skills and knowledge required to design and deploy a BusinessObjects™ Enterprise system. In this course, students will learn how to analyze and identify customer requirements in order to design a BusinessObjects Enterprise solution using the concepts of high availability, scalability, sizing, disaster recovery, and lifecycle management. Using a case study, students will design, build, and troubleshoot a system and then produce a content management plan using advanced security and replication techniques.

The business benefit of this course is that it provides system architects and administrators with an understanding of the concepts necessary to effectively design and deploy a Business Intelligence solution using the BusinessObjects™ Enterprise platform.

Course Audience
The target audience for this course is system architects/administrators who are experienced with BusinessObjects Enterprise and will be responsible for designing and deploying solutions for their organization.

Prerequisites
Learners should have attended the following courses:
- BusinessObjects Enterprise XI 3.0: Administration and Security
- BusinessObjects Enterprise XI 3.0: Administering Servers (Windows)

To be successful, learners who attend this course should have the following experience:
- Windows conventions
- Familiarity with Windows Server 2000/2003 administration
- Windows Server 2000/2003 security concepts (global/local groups, and directory structure)

Additional Education
For more information on Lifecycle Management, it is recommended that you attend the Lifecycle Management: Deployment, Application and Best Practices course.

Level, delivery and duration


This instructor-led offering is a four-day course.

Applicable certifications and designations


This course is applicable to the level III exam in the following certification: Business Objects Certified Professional - BusinessObjects Enterprise XI 3.0.


Course success factors


Your learning experience will be enhanced by:
- Activities that build on the life experiences of the learner
- Discussion that connects the training to real working environments
- Learners and instructor working as a team
- Active participation by all learners

Course setup
Refer to the setup guide for details on hardware, software, and course-specific requirements.

Course materials
The materials included with this course are:
- Name card
- Learner's Guide
  The Learner's Guide contains an agenda, learner materials, and practice activities. It is designed to assist students who attend the classroom-based course and outlines what learners can expect to achieve by participating in this course.
- Evaluation form
  At the conclusion of this course, you will receive an electronic feedback form as part of our evaluation process. Provide feedback on the course content, instructor, and facility; your comments will assist us in improving future courses.

Additional resources include:
- Sample files
  The sample files can include required files for the course activities and/or supplemental content to the training guide.
- Online Help
  Retrieve information and find answers to questions using the online Help and/or user's guide included with the product.

Learning process
Learning is an interactive process between the learners and the instructor. By facilitating a cooperative environment, the instructor guides the learners through the learning framework.

Introduction
Why am I here? What's in it for me? The learners will be clear about what they are getting out of each lesson.


Objectives
How do I achieve the outcome? The learners will assimilate new concepts and learn how to apply the ideas presented in the lesson. This step sets the groundwork for practice.

Practice
How do I do it? The learners will demonstrate their knowledge as well as their hands-on skills through the activities.

Review
How did I do? The learners will have an opportunity to review what they have learned during the lesson. Review reinforces why it is important to learn particular concepts or skills.

Summary
Where have I been and where am I going? The summary acts as a recap of the learning objectives and as a transition to the next section.


Lesson 1

Reviewing BusinessObjects Enterprise Architecture, Administration and Security


This first lesson covers Business Intelligence, the BusinessObjects Enterprise architecture, and the BusinessObjects Enterprise security model.

After completing this lesson, you will be able to:
- Understand the BusinessObjects Enterprise architecture
- Understand the BusinessObjects Enterprise security model


BusinessObjects Enterprise architecture


This unit provides a review of the BusinessObjects Enterprise architecture, including certain key process flows and some terminology used throughout this course.

After completing this unit, you will be able to:
- Define BusinessObjects Enterprise architecture terms
- Describe key architectural process flows

BusinessObjects Enterprise architecture basic terminology


Host
A host can be a physical computer or virtual machine.

Server
A server is an Operating System (OS) level process hosting one or more services. For example, CMS and Adaptive Processing Server are servers. A server runs under a specific OS account and has its own PID.

Service
A service is a server subsystem that provides a specific function. The service runs within the memory space of its server under the Process ID (PID) of the parent container (server). For example, the Web Intelligence Scheduling and Publishing Service is a subsystem running within the Adaptive Job Server.

Node
A node is a collection of BusinessObjects Enterprise servers, all running on the same host. One or more nodes can be on a single host. Each node is managed by a Server Intelligence Agent (SIA).

Server Intelligence Agent


The SIA is a locally run service managed by the operating system. The task of the Server Intelligence Agent (SIA) is to start, stop, and monitor locally run BusinessObjects servers. When one of the managed servers goes down unexpectedly, the SIA restarts the server immediately. When you issue a command in the CMC to stop a server, the SIA stops the server. When you create a SIA, you create a new node. A node is a collection of BusinessObjects Enterprise servers which run on the same host and are managed by a single SIA. You can add servers to the node and you can have more than one node on the same machine. The SIA continuously monitors server status information, which is stored in the CMS databases. When you change a server's settings or add a new server in the CMC, the CMS notifies the SIA and the SIA performs the jobs accordingly.


[Figure: BusinessObjects Enterprise architecture]


[Figure: BusinessObjects Enterprise tiers]


Logging into BusinessObjects Enterprise

1. The web client sends the logon request in a URL via the web server to the Web Application Server.
2. The Web Application Server interprets the .jsp or .aspx page and the values sent in the URL request and determines that the request is a logon request. The Web Application Server sends the user name, password, and authentication type to the specified CMS for authentication.
3. The CMS validates the user name and password against the appropriate database (in this case, BusinessObjects Enterprise authentication is validated against the system database).
4. Upon successful validation, the CMS creates a session for the user in its own memory.
5. The CMS sends a response to the Web Application Server to confirm that the validation was successful. The Web Application Server generates a logon token for the user session in its memory. For the rest of this session, the Web Application Server uses the logon token to validate the user against the CMS.
6. The Web Application Server formats the response and sends it back to the user's machine, where it is rendered in the web client.


Starting the Server Intelligence Agent

1. The Server Intelligence Agent (SIA) starts up and looks in the local bootstrap files for a list of CMSs (either local or remote) to connect to. This CMS list is kept up to date and refreshed as soon as a new CMS appears. According to the information in the bootstrap files, the SIA either (1) starts the local CMS and connects to it, or (2) connects to a remote CMS when a local CMS is not found.
2. After the SIA has successfully connected to the CMS, the SIA polls the CMS for the list of server services it needs to manage. The CMS finds the information on server services and their configuration in the system database.
3. The system database returns the list of server services and the associated configuration information to the CMS (for example, the Adaptive Job Server, Destination Job Server, Desktop Intelligence Processing Server, and Crystal Reports Processing Server).
4. The CMS sends the list of server services and the configuration information to the SIA.
5. The SIA starts the server services and begins monitoring them, starting each service according to its associated configuration information.


Setting a schedule for a Web Intelligence document

1. The web client submits a schedule request in a URL, typically via the web server, to the Web Application Server.
2. The Web Application Server interprets the URL request and determines that it is a schedule request. The Web Application Server sends the schedule time, database logon values, parameter values, destination, and format to the specified CMS.
3. The CMS ensures that the user has rights to schedule the object. If the user has sufficient rights, the CMS adds a new record to the system database and adds the instance to its list of pending schedules.


Running a scheduled Web Intelligence document

1. The CMS constantly checks the system database to determine whether there is any schedule to be run at that time.
2. When the scheduled time arrives, the CMS sends the schedule request, and all the information about it, to the Adaptive Job Server that houses the Web Intelligence Scheduling and Publishing Service.
3. The Adaptive Job Server (Web Intelligence Scheduling and Publishing Service) locates an available Web Intelligence Processing Server, based on the Maximum Jobs Allowed value configured on each Web Intelligence Processing Server.
4. The Web Intelligence Processing Server determines the location of the Input File Repository Server (FRS) that houses this document and the universe metalayer file on which the document is based, and requests them from the Input FRS. The Input FRS locates the Web Intelligence document and the universe file and streams them to the Web Intelligence Processing Server.
5. The Web Intelligence document is placed in a temporary directory on the Web Intelligence Processing Server, which opens the document in memory. The QT.dll generates the SQL from the universe on which the document is based. The Connection Server (a component of the Web Intelligence Processing Server) connects to the database. The query data passes through QT.dll back to the Document Engine, where the document is processed, and a new successful instance is created.
6. The Web Intelligence Processing Server uploads the document instance to the Output FRS.
7. The Web Intelligence Processing Server notifies the Adaptive Job Server (Web Intelligence Scheduling and Publishing Service) that document creation is complete. If the document is scheduled to a destination (file system, FTP, SMTP, or Inbox), the Adaptive Job Server retrieves the processed document from the Output FRS and delivers it to the specified destination(s). Assume that this is not the case in this example.
8. The Adaptive Job Server (Web Intelligence Scheduling and Publishing Service) updates the CMS with the job status.
9. The CMS updates the job status in its memory, and then writes the instance information to the BusinessObjects Enterprise system database.
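The dispatch in step 3, choosing a processing server whose current job count is still below its Maximum Jobs Allowed setting, can be sketched as follows. This is an illustrative model only; the server names and counts are hypothetical, and the actual selection logic inside the Adaptive Job Server is not published.

```python
# Illustrative sketch: pick a Web Intelligence Processing Server whose
# running job count is below its "Maximum Jobs Allowed" setting.
# Server names and numbers are hypothetical.

def pick_processing_server(servers):
    """Return the name of the first server with spare capacity, or None."""
    for server in servers:
        if server["running_jobs"] < server["max_jobs_allowed"]:
            return server["name"]
    return None  # no capacity anywhere: the job waits as pending

servers = [
    {"name": "WIPS01", "running_jobs": 5, "max_jobs_allowed": 5},  # full
    {"name": "WIPS02", "running_jobs": 2, "max_jobs_allowed": 5},  # capacity
]

print(pick_processing_server(servers))  # WIPS02
```

Lowering Maximum Jobs Allowed on a server therefore shifts load to other servers in the cluster, which is why the value matters for sizing.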


Viewing a Web Intelligence document on demand

1. The web browser sends the view request to the Web Application Server via the web server.
2. The Web Application Server determines that the request is for a Web Intelligence document and sends a request to the CMS to ensure that the user has the appropriate rights to view the document.
3. The CMS sends a response to the Web Application Server confirming that the user has sufficient rights to view the document.
4. The Web Application Server sends a request to the Web Intelligence Processing Server requesting the document.
5. The Web Intelligence Processing Server requests the document, as well as the universe file on which it is built, from the Input File Repository Server. The universe file contains all the metalayer information, including row- and column-level security.
6. The Input File Repository Server streams a copy of the document and of the universe file to the Web Intelligence Processing Server.
7. The Web Intelligence Report Engine opens the document in its memory.
8. The Web Intelligence Report Engine uses the QT component (in-process) and the ConnectionServer (in-process) that reside in its memory space. The QT component generates, validates, or regenerates the SQL and connects to the database to run the query. The ConnectionServer uses the SQL to get the data from the database to the Report Engine, where the document is processed.
9. The Web Intelligence Processing Server sends the requested viewable document page to the Web Application Server, which forwards it to the web server. The web server sends the viewable page to the user's machine, where it is viewed in the web browser.


BusinessObjects Enterprise Security


A solid understanding of the BusinessObjects security model is essential for system architects. This unit is a review of the BusinessObjects security model.

After completing this unit, you will be able to:
- Understand how rights work in BusinessObjects Enterprise
- Describe how access levels are used
- Understand top-level, folder-level and object-level security
- Understand inheritance

How rights work in BusinessObjects Enterprise


Rights are the base units for controlling user access to the objects, users, applications, servers, and other features in BusinessObjects Enterprise. They play an important role in securing the system by specifying the individual actions that users can perform on objects. Besides allowing you to control access to your BusinessObjects Enterprise content, rights enable you to delegate user and group management to different departments, and to provide your IT staff with administrative access to servers and server groups.

It is important to recognize the difference between rights set on objects or folders and rights set on the principals (the users and groups) who access them. For example, to give a manager access to a particular folder, you add the manager to the access control list (the list of principals who have access to an object) for the folder in the "Folders" area. You cannot give the manager access by configuring the manager's rights settings in the "Users and Groups" area; those settings are used to grant other principals (such as delegated administrators) access to the manager as an object in the system. In this way, principals are themselves like objects for others with greater rights to manage.

Each right on an object can be granted, denied, or unspecified. The BusinessObjects Enterprise security model is designed so that, if a right is left unspecified, the right is denied. Additionally, if settings result in a right being both granted and denied to a user or group, the right is denied. There is an important exception to this rule: if a right explicitly set on a child object contradicts the rights inherited from the parent object, the right set on the child object overrides the inherited rights. This exception also applies to users who are members of groups: if a user is explicitly granted a right that the user's group is denied, the right set on the user overrides the inherited right.
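The evaluation rules above — an unspecified right is denied, a denial beats a grant, and an explicit setting on the child (or on the user) overrides what is inherited — can be sketched as a small resolver. This is an illustrative model, not the CMS implementation:

```python
# Toy model of BusinessObjects Enterprise right evaluation (not CMS code).
GRANTED, DENIED, UNSPECIFIED = "granted", "denied", "unspecified"

def effective_right(inherited, explicit=UNSPECIFIED):
    """Resolve one right from a list of inherited values plus an optional
    explicit setting on the child object or user."""
    if explicit != UNSPECIFIED:
        return explicit          # explicit child/user setting overrides
    if DENIED in inherited:
        return DENIED            # granted AND denied resolves to denied
    if GRANTED in inherited:
        return GRANTED
    return DENIED                # left completely unspecified: denied

print(effective_right([GRANTED, DENIED]))           # denied
print(effective_right([]))                          # denied (unspecified)
print(effective_right([DENIED], explicit=GRANTED))  # granted (override)
```

The last call models the documented exception: a right explicitly granted on the user wins over a denial inherited from the user's group.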

Access Levels
Access levels are groups of rights that users frequently need. They allow administrators to set common security levels quickly and uniformly rather than requiring that individual rights be set one by one.


Predefined access levels


BusinessObjects Enterprise comes with several predefined access levels. These predefined access levels are based on a model of increasing rights: Beginning with View and ending with Full Control, each access level builds upon the rights granted by the previous level. The following table summarizes the rights that each predefined access level contains.
Access level: View
Description: If set at the folder level, a principal can view the folder, objects within the folder, and each object's generated instances. If set at the object level, a principal can view the object, its history, and its generated instances.
Rights involved: View objects; view document instances.

Access level: Schedule
Description: A principal can generate instances by scheduling an object to run against a specified data source, once or on a recurring basis. The principal can view, delete, and pause the scheduling of instances that they own. They can also schedule to different formats and destinations, set parameters and database logon information, choose servers to process jobs, add contents to the folder, and copy the object or folder.
Rights involved: View access-level rights, plus: schedule the document to run; define server groups to process jobs; copy objects to another folder; schedule to destinations; print the report's data; export the report's data; edit objects that the user owns; delete instances that the user owns; pause and resume document instances that the user owns.

Access level: View On Demand
Description: A principal can refresh data on demand against a data source.
Rights involved: Schedule access-level rights, plus: refresh the report's data.

Access level: Full Control
Description: A principal has full administrative control of the object.
Rights involved: All available rights, including: add objects to the folder; edit objects; modify the rights users have to objects; delete objects; delete instances.

Access level: No Access
Description: The user or group is not able to access the object or folder.
Rights involved: No rights.
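Because each predefined level builds on the previous one, the levels can be modeled as cumulative sets of rights. The right names below are abbreviated for illustration, not the exact internal right identifiers:

```python
# Illustrative model of the cumulative predefined access levels.
view = {"view objects", "view document instances"}

schedule = view | {
    "schedule to run", "define server groups", "copy objects to another folder",
    "schedule to destinations", "print data", "export data",
    "edit owned objects", "delete owned instances",
    "pause/resume owned instances",
}

view_on_demand = schedule | {"refresh data"}

# Each level strictly extends the previous one:
assert view < schedule < view_on_demand
print(sorted(view_on_demand - schedule))  # ['refresh data']
```

This cumulative structure is why granting View On Demand implicitly grants everything Schedule allows.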

Custom access levels


In addition to the predefined access levels, you can also create and customize your own, which can greatly reduce the administrative and maintenance costs associated with security.

Consider a situation in which an administrator must manage two groups, sales managers and sales employees. Both groups need to access five reports in the BusinessObjects Enterprise system, but sales managers require more rights than sales employees. The predefined access levels do not meet the needs of either group. Instead of adding the groups to each report as principals and modifying their rights in five different places, the administrator can create two new access levels, Sales Managers and Sales Employees. The administrator then adds both groups as principals to the reports and assigns the groups their respective access levels. When rights need to be modified, the administrator can modify the access levels. Because the access levels apply to both groups across all five reports, the rights those groups have to the reports are quickly updated.

Advanced rights
To provide you with full control over object security, the CMC allows you to set advanced rights. These advanced rights provide increased flexibility as you define security levels for objects at a granular level. Use advanced rights settings, for instance, if you need to customize a principal's rights to a particular object or set of objects. Most importantly, use advanced rights to explicitly deny a user or group any right that should not be permitted to change when, in the future, you make changes to group memberships or folder security levels. The following table summarizes the options that you have when you set advanced rights.
Granted: The right is granted to a principal.
Denied: The right is denied to a principal.
Not Specified: The right is unspecified for a principal. By default, rights set to Not Specified are denied.
Apply to Object: The right applies to the object. This option becomes available when you click Granted or Denied.
Apply to Sub-Objects: The right applies to sub-objects. This option becomes available when you click Granted or Denied.

Applying user and group rights to objects


Security in BusinessObjects Enterprise flows in the following manner:
- Top-level folder security
- Folder-level security
- Object-level security

Top-level folder security


Top-level folder security is the default security set for each specific object type (for example, Universes, the Web Intelligence application, Groups, and Folders). Each object type has its own top-level (root) folder from which all the objects below it inherit rights. If any access levels for certain object types apply throughout the whole system, set them at the top-level folder specific to each object type. For example, if the Sales group requires the View access level on all folders, you can set this at the root level for Folders.

Folder-level security
Folder-level security enables you to set access-level rights for a folder and the objects contained within that folder. While folders inherit security from the top-level folder (root folder), subfolders inherit the security of their parent folder. Rights set explicitly at the folder level override inherited rights.


Object-level security
Objects in BusinessObjects Enterprise inherit security from their parent folder. Rights set explicitly at the object level override inherited rights.

Inheritance
Rights are set on an object for a principal in order to control access to the object; however, it is impractical to set the explicit value of every possible right for every principal on every object. Consider a system with 100 rights, 1,000 users, and 10,000 objects: setting rights explicitly on each object would require the CMS to store a billion rights in its memory and, more importantly, require an administrator to set each one manually.

Inheritance patterns resolve this impracticality. With inheritance, the rights that users have to objects in the system come from a combination of their memberships in different groups and subgroups, and from objects that inherit rights from parent folders and subfolders. Users can inherit rights as the result of group membership; subgroups can inherit rights from parent groups; and both users and groups can inherit rights from parent folders.

By default, users or groups who have rights to a folder inherit the same rights for any objects that are subsequently published to that folder. Consequently, the best strategy is to set the appropriate rights for users and groups at the folder level first, then publish objects to that folder. BusinessObjects Enterprise recognizes two types of inheritance: group inheritance and folder inheritance.

Group inheritance
Group inheritance allows principals to inherit rights as the result of group membership. Group inheritance proves especially useful when you organize all of your users into groups that coincide with your organization's current security conventions. When group inheritance is enabled for a user who belongs to more than one group, the rights of all parent groups are considered when the system checks credentials. The user is denied any right that is explicitly denied in any parent group, and also any right that remains completely unspecified. Thus, the user is granted only those rights that are granted in one or more groups (explicitly or through access levels) and never explicitly denied.
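The resolution rule above can be sketched in a few lines of Python. This is an illustrative model only, not part of any BusinessObjects SDK; the function and value names are invented for the example.

```python
def resolve_group_right(group_settings):
    """Resolve one right for a user from the settings of all parent groups.

    group_settings holds the per-group value of a single right:
    "granted", "denied", or "unspecified".
    """
    if "denied" in group_settings:
        return "denied"    # explicitly denied in any parent group wins
    if "granted" in group_settings:
        return "granted"   # granted in at least one group, never denied
    return "denied"        # completely unspecified: effectively denied

# A user in three groups, where only one grants the View right:
view = resolve_group_right(["granted", "unspecified", "unspecified"])  # "granted"
```

Note that an explicit denial in any one group wins over grants in every other group, which is why denying a right to a broad group can silently lock out members who are also in more privileged groups.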

Folder inheritance
Folder inheritance allows principals to inherit any rights that they have been granted on an object's parent folder. Folder inheritance proves especially useful when you organize BusinessObjects Enterprise content into a folder hierarchy that reflects your organization's current security conventions. For example, suppose that you create a folder called Sales Reports, and you provide your Sales group with View On Demand access to this folder. By default, every user that has rights to the Sales Reports folder will inherit the same rights to the reports that you subsequently publish to this folder. Consequently, the Sales group will have View On Demand access to all of the reports, and you need to set the object rights only once, at the folder level.

Reviewing BusinessObjects Enterprise Architecture, Administration and Security: Learner's Guide

Rights override
Rights override is a rights behavior in which rights that are set on child objects override the rights set on parent objects. Rights override occurs under the following circumstances:
- In general, the rights that are set on child objects override the rights that are set on parent objects.
- In general, the rights that are set on subgroups or members of groups override the rights that are set on groups.

You do not need to disable inheritance to set customized rights on an object. The child object inherits the rights settings of the parent object except for the rights that are explicitly set on the child object. Also, any changes to rights settings on the parent object apply to the child object. Rights override lets you make minor adjustments to the rights settings on a child object without discarding all inherited rights settings.

Consider a situation in which a sales manager needs to view confidential reports in the Confidential folder. The sales manager is part of the Sales group, which is denied access to the folder and its contents. The administrator grants the manager View rights on the Confidential folder and continues to deny the Sales group access. In this case, the View rights granted to the sales manager override the denied access that the manager inherits from membership in the Sales group.
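The sales-manager example reduces to a one-line rule: an explicit setting on the child principal or object wins; otherwise the inherited value applies. The sketch below is illustrative Python, not a BusinessObjects API.

```python
def effective_right(inherited, explicit=None):
    """Rights override: an explicit setting on the child object or
    principal wins; otherwise the value inherited from the parent applies."""
    return explicit if explicit is not None else inherited

# The Sales group is denied View on the Confidential folder; the sales
# manager has an explicit View grant set directly on that folder.
sales_group = effective_right(inherited="denied")                        # stays denied
sales_manager = effective_right(inherited="denied", explicit="granted")  # overridden
```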

Scope of rights
Scope of rights refers to the ability to limit the extent of rights inheritance. To define the scope of a right, you decide whether the right applies to the object, its sub-objects, or both. By default, the scope of a right extends to both objects and sub-objects.

Scope of rights can be used to protect personal content in shared locations. Consider a situation in which the finance department has a shared Expense Claims folder that contains Personal Expense Claims subfolders for each employee. The employees want to be able to view the Expense Claims folder and add objects to it, but they also want to protect the contents of their Personal Expense Claims subfolders. The administrator grants all employees View and Add rights on the Expense Claims folder, and limits the scope of these rights to the Expense Claims folder only. This means that the View and Add rights do not apply to sub-objects in the Expense Claims folder. The administrator then grants employees View and Add rights on their own Personal Expense Claims subfolders.

Scope of rights can also limit the effective rights that a delegated administrator has. For example, a delegated administrator may have Securely Modify Rights and Edit rights on a folder, but the scope of these rights is limited to the folder only and does not apply to its sub-objects. The delegated administrator cannot grant these rights to another user on one of the folder's sub-objects.
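The Expense Claims example can be modeled with a small filter: each granted right carries a scope, and only rights whose scope covers a given level apply there. This is illustrative Python with invented names, not BusinessObjects code.

```python
def rights_at(rights, level):
    """Return the rights that apply at a level, honoring each right's scope.

    level is "object" (the folder itself) or "sub-objects" (its contents);
    rights is a list of (name, scope) pairs where scope is "object",
    "sub-objects", or "both".
    """
    return [name for name, scope in rights if scope in (level, "both")]

# View and Add granted on Expense Claims, scoped to the folder only:
expense_claims = [("View", "object"), ("Add", "object")]
on_folder = rights_at(expense_claims, "object")         # ['View', 'Add']
on_contents = rights_at(expense_claims, "sub-objects")  # []
```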

Rights specific to object type


Different types of objects have different functionality. For example, while you can schedule a Crystal report or Web Intelligence report, you cannot schedule a hyperlink. As a result, some rights differ by object type, depending on the functionality of the object. You can set object-specific rights to be Explicitly Granted, Explicitly Denied, or Not Specified.


Object-specific rights consist of the following:
- General rights for the object type: These rights are identical to general global rights (for example, the right to add, delete, or edit an object), but you set them on specific object types to override the general global rights settings.
- Specific rights for the object type: These rights are available for specific object types only. For example, the right to export a report's data appears for Crystal reports but not for Word documents.

Type-specific rights are useful because they let you limit the rights of principals based on object type. Consider a situation in which an administrator wants employees to be able to add objects to a folder but not create subfolders. The administrator grants Add rights at the general global level for the folder, and then denies Add rights for the folder object type.

Rights are divided into the following collections based on the object types they apply to:
- General: These rights affect all objects.
- Content: These rights are divided according to particular content object types. Examples of content object types include Crystal reports, Adobe Acrobat PDFs, and Desktop Intelligence documents.
- Application: These rights are divided according to which BusinessObjects Enterprise application they affect. Examples of applications include Web Intelligence and Desktop Intelligence.
- System: These rights are divided according to which core system component they affect. Examples of core system components include Calendars, Events, and Users and Groups.

Type-specific rights are in the Content, Application, and System collections. In each collection, they are further divided into categories based on object type.

Activity: User rights flash scenarios


Objective
Solve the 24 user rights flash scenarios.

Instructions
1. Open the following file: ../Lesson 3/User Rights Flash Scenarios/1.html
   Click through the first flash scenario and then solve the puzzle.
2. Using the menu in the file, solve each scenario in turn.



Review: Reviewing BusinessObjects Enterprise architecture, administration and security


1. In the context of BusinessObjects Enterprise, what is the difference between a server and a service?
2. What is the Server Intelligence Agent?
3. What is an Access Level?
4. List the options available when setting advanced rights.
5. Describe the concept of rights override.
6. What is 'scope of rights'?


Lesson summary
After completing this lesson, you are now able to:
- Describe the BusinessObjects Enterprise architecture
- Define the BusinessObjects Enterprise architecture basic terminology
- Describe several core process flows within BusinessObjects Enterprise
- Work with the BusinessObjects Enterprise security model
- Explain how rights work in BusinessObjects Enterprise
- Explain how access levels work in BusinessObjects Enterprise
- Understand inheritance
- Understand the guidelines for planning security


Lesson 2

Identifying Requirements
The first stage in designing an Enterprise Business Intelligence solution is identifying the requirements of the organization. This involves asking the right people the right questions. This lesson introduces a case study that you will use throughout the course to design and develop a Business Intelligence solution for a fictional company using BusinessObjects Enterprise. After completing this lesson, you will be able to:
- Consider the many different requirements that might affect a Business Intelligence solution
- Translate business requirements into the Business Intelligence-specific requirements necessary for architectural design

Identifying Requirements: Learner's Guide


Identifying requirements
When you set about designing a Business Intelligence solution, you need to identify the requirements of the organization. This invariably involves finding the answers to several questions. After completing this unit, you will be able to:
- Understand how to identify system requirements

Asking the right questions


One of the most important sets of questions to ask concerns the users of the system. You need to know what type of users they are, how they plan to use the system, and how many there are, so that you can design a robust system that can handle the load appropriately. Some example questions:
- Who are the end users? How many are there?
- What departments are they in? What are their roles?
- What information do they need? Why? How is the information used?
- How are they getting this information today? What system are they currently using?

Activity: Workshop 1
Instructions
1. Discuss in groups what it means to have a highly available/fault-tolerant solution and share your own experiences with the group. Prepare to share with the rest of the class.
2. Discuss in groups what it means to size the solution and share your own experiences with the group. Prepare to share with the rest of the class.
3. Discuss in groups what it means to deploy the system and share your own experiences with the group. Prepare to share with the rest of the class.
4. Discuss in groups what it means to develop a content management plan and share your own experiences with the group. Prepare to share with the rest of the class.

Case Study: Jade Publishing


This case study is to be used for activities throughout this training course.

Company information
The first Jade Publishing store opened in the year 2000 in China. It is a subsidiary of a larger company called Crowns Mendocino. Jade Publishing continues the C.M. brand's dedication to customer service and strong commitment to quality. Jade Publishing has a strong direct mail business that distributes millions of catalogs a year and has a highly successful e-commerce site. Jade Publishing employs 2000 people.

The Jade Publishing Beijing office will be the early adopter of a BI system. The Beijing office is the main Jade Publishing office and has employees in all aspects of the business, including:
- Marketing
- Sales
- Designers
- Finance
- Shipping
- IT

Jade Publishing also has other offices: Shanghai (Shanghai Province) and Guangzhou (Guangdong Province). The company runs a province-centric model, where each main province office has a very similar structure. The Shanghai and Guangzhou offices are much smaller (250 employees each). The head office of the whole company, as mentioned earlier, is in Beijing.

System architecture and deployment planning


When planning the system architecture and deployment, keep in mind that the numbers provided are the results of ongoing interviews with Jade Publishing. Based on these interviews, a thorough analysis has gone into converting the customer's needs into BusinessObjects Enterprise requirements.

Business Needs
There is an immediate need for enterprise-wide query and analysis, file storage, and report distribution for Jade Publishing that will allow users to modify documents on the fly and build new queries to get answers. Their mandate is to have users view and analyze reports online whenever possible. There may be highly formatted reports and interactive documents where users can modify report structures or build brand new interactive documents. There is a need to allow for prebuilt reports as well as interactive documents that can be modified or newly created by sales people without asking IT for assistance.

Upper management in all departments wants to query and analyze their own department's numbers and figures directly, to make better business decisions. They would like to standardize on a single BI framework that is scalable, fault tolerant, and supports the best BI tools.

There is a need for interactive, user-friendly documents and high-quality presentation reports that give users the ability to run their reports and documents on an ad hoc or recurring scheduled basis. Taking advantage of personal categories and favorites, or sending reports directly to a user's email or Inbox, would be helpful.


The IT department would like to give each department the ability to manage users within their business unit. This way, IT can restrict how one business unit uses the system, while also restricting what data that unit can see. The delegated administrator would be able to manage the replication of their content.

Since this is the first implementation of a BI system within the Jade Publishing Beijing office, there will be a development and a production environment. There will also possibly be the need to replicate the system in the Guangzhou office in order to prove the effectiveness of the system. Jade Publishing needs to burst customized reports via email to all provincial outlets with sales information.

User Requirements
While C.M. has offices worldwide, we are only dealing with the implementation of a BI system within the Jade Publishing office in Beijing.
- Beijing has 1500 employees
- Guangzhou has 250 employees
- Shanghai has 250 employees

Jade Publishing estimates the following system usage:
- 10% of the total users connect to the system concurrently (150 + 25 + 25 = 200).
- 10% of the concurrent active users will be heavy users of the system (logged into the system and viewing reports nearly continuously, averaging one request per second) (15 + 2.5 + 2.5 = 20).
- 10% of the concurrent active users will be active users of the system (logged into the system frequently throughout the day, averaging one request every four seconds) (15 + 2.5 + 2.5 = 20).
- 30% of the concurrent active users will be moderate users of the system (logged into the system from time to time throughout the day, averaging one request every eight seconds) (45 + 7.5 + 7.5 = 60).
- 50% of the concurrent active users will be light users of the system (logged into the system infrequently; they will view a couple of reports and log out) (75 + 12.5 + 12.5 = 100).

You can estimate that the number of simultaneous requests pertaining to viewing sessions will be roughly 75 Crystal Reports simultaneous requests (simultaneous viewing sessions), 30 Web Intelligence simultaneous requests (simultaneous document viewing/edit sessions), and 20 Desktop Intelligence simultaneous requests (concurrent viewing jobs).
- 3% of users will use Voyager to design workspace documents (4.5 + 0.75 + 0.75 = 6).
- 5% of the users will use Crystal Reports to design reports.
- 30% of the users will use Web Intelligence to design reports.
- 10% of the users will use Desktop Intelligence to design reports.
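The concurrency figures above are simple percentages of the head counts. A quick Python check of the arithmetic, using only numbers taken from the case study:

```python
# Head counts per office, from the case study.
employees = {"Beijing": 1500, "Guangzhou": 250, "Shanghai": 250}
total = sum(employees.values())      # 2000

# 10% of all users are connected concurrently.
concurrent = round(total * 0.10)     # 200

# Breakdown of the concurrent users by activity profile.
heavy = round(concurrent * 0.10)     # 20  (one request per second)
active = round(concurrent * 0.10)    # 20  (one request every 4 seconds)
moderate = round(concurrent * 0.30)  # 60  (one request every 8 seconds)
light = round(concurrent * 0.50)     # 100 (occasional viewing)

# The four profiles account for all concurrent users.
assert heavy + active + moderate + light == concurrent
```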


Report processing requirements


Sales and Manufacturing data is stored provincially. Each province stores data in various regional SQL 2000 databases. Your analysis of reporting requirements has identified the following number of reports in the BusinessObjects Enterprise system:

Beijing office:
- 100 total Crystal reports:
  - 50 Crystal reports will run daily.
  - 30 Crystal reports will run weekly.
  - 20 Crystal reports will run at month end.
- 50 highly analytical and interactive Web Intelligence documents:
  - 25 Web Intelligence documents will run daily.
- 25 highly analytical and interactive Voyager documents.
- 100 operational Desktop Intelligence reports.

The bulk of report processing will happen through scheduling:
- A few of the Crystal reports will be used in Publications to burst to regional sales managers.
- 80% of Crystal reports (80 at the end of the month) and 20 daily Web Intelligence documents will run after hours.
- 20% of Crystal reports (20 at the end of the month) and 5 daily Web Intelligence documents will run during working hours.
- 50% of all Crystal reports and Web Intelligence documents will require prompting before being run; prompts may be common across different reports. Many prompts will be based on information that is subject to change frequently.
- Because of backups and other maintenance duties, the after-hours Crystal reports and Web Intelligence documents need to run within a five-hour window.
- Each Crystal report on average takes about 10 minutes to run. Each Web Intelligence document takes about 2 minutes to run.
- The average Crystal report size is 1 MB. The average Web Intelligence document size is 500 KB.
- Jade Publishing anticipates 20% annual growth in the number of reports and documents hosted by the system.

Guangzhou office: assume 16.6% of Beijing numbers.
Shanghai office: assume 16.6% of Beijing numbers.
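These figures let you sanity-check the five-hour after-hours window. The heaviest night is month end: 80 Crystal reports at roughly 10 minutes each plus 20 daily Web Intelligence documents at roughly 2 minutes each. A back-of-the-envelope calculation, as an idealized lower bound that assumes jobs can be packed back to back:

```python
from math import ceil

window_minutes = 5 * 60   # the five-hour maintenance window

# Month-end after-hours load, from the case study figures.
crystal_jobs, crystal_minutes = 80, 10
webi_jobs, webi_minutes = 20, 2

# Total processing work: 80*10 + 20*2 = 840 job-minutes.
total_job_minutes = crystal_jobs * crystal_minutes + webi_jobs * webi_minutes

# Minimum number of concurrent job-processing slots needed to finish
# everything inside the window: ceil(840 / 300) = 3.
min_slots = ceil(total_job_minutes / window_minutes)
```

Real sizing must also allow for failed instances, retries, and uneven scheduling, so treat three concurrent slots as a floor rather than a recommendation.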

Report viewing requirements - more detail


Analysis of report viewing requirements has identified the following: you can estimate that the number of simultaneous requests pertaining to viewing sessions will be roughly 75 Crystal Reports simultaneous requests (simultaneous viewing sessions), 40 Web Intelligence simultaneous requests (simultaneous document viewing/edit sessions), and 20 Desktop Intelligence simultaneous requests (concurrent viewing jobs).

Beijing office:
- 90% or more of Crystal Reports viewing requests will consist of viewing report instances (90% of 75 = 67.5).
- At most, 10% of Crystal Reports viewing requests will be for on-demand/ad hoc report data (10% of 75 = 7.5).
- 90% or more of Web Intelligence document viewing requests will be for on-demand/ad hoc document data (90% of 40 = 36).
- At most, 10% of Web Intelligence document viewing requests will consist of viewing document instances (10% of 40 = 4).
- 100% of Desktop Intelligence viewing requests will be viewed using an HTML viewer (20).
- Desktop Intelligence viewing requests will consist of viewing reports with pre-saved data; only 30% (6) of the requests will require a refresh against the database.

Guangzhou office: assume 16.6% of Beijing numbers.
Shanghai office: assume 16.6% of Beijing numbers.

Infrastructure requirements
Jade's infrastructure backbone is based on Windows 2003 Server and TCP/IP. The network uses a star topology centered in Beijing, which means that a network request from Guangzhou to Shanghai has to travel through Beijing. The Beijing office has a DMZ, and corporate policy states that no application servers can be located within the DMZ.

Existing datastores include:
- Oracle Financials and the Oracle DSS (data warehouse).
- Provincial and departmental data stored in a number of MS SQL Server databases.

The IT department has standardized its server hardware to the following specifications:
- Quad Intel Xeon 3.2 GHz processors, 12 GB of RAM. (For the purpose of the classroom sizing activity, assume that the customer is using 1-CPU boxes with 1 GB RAM.)
- 36 GB RAID 1 (mirrored) drive for the operating system and applications.
- 736 GB RAID 1/5 (mirrored) drive for application data.
- 1 Gbps Ethernet connection from the servers to the LAN.

Content Management and Delegated Administration


Security requirements
The IT department would like to give each department the ability to manage users within their business unit. This way, IT can restrict how one business unit uses the system, while also restricting what data that unit can see.


The Beijing office will be the central location for the creation of report templates. These report templates will be distributed to the satellite offices. The central office must delegate administration to the satellite offices in order to allow them to replicate this content on their sites. Once the Beijing office is operational, it is planned that satellite offices will need to replicate some of the reports from the Beijing Production system to their own Production systems.

Jade Publishing wants to send a monthly sales report to the different province locations using the two-way replication method: the administrator at the Origin site creates a report, which administrators at each Destination site replicate and run against that province store's database.

The security design requires:
- A group structure that will support both the business units and the four roles (within each business unit).
- A content structure that ensures that each business unit does not see the other's content.

Security roles
The deployment will need to support two business units, Sales and Marketing. From an existing deployment, they would like to simulate the use of roles that has been used very effectively. They have four roles:
- Administrator: Each business unit has its own administrators. These are not the same as system/platform administrators, as they can only administer the team to which they belong. The tasks of the administrator are limited to user management (promoting users between the other roles). They have also been identified for managing the replication jobs as needed. They do not have the capability to manage any of the content, nor to view any documents that include data.
- Power User: This user maintains the universes and creates the reports needed by the business. They also maintain all the content for their business unit and manage the data security.
- Analysts: These users are allowed to view their business unit's content and create schedules to assist them in better understanding the business. They are allowed to manage any content they create, although they are only allowed to save reports into their own areas.
- Viewers: This group of users is restricted to refreshing a report when they need it.

As part of the planned use of XI 3.1, only the Power Users will have Desktop Intelligence installed, and it is to be used only when they cannot create the required document using Web Intelligence. They will of course have Designer installed to manage the universes. You also need to define an additional (temporary) role for the off-shore developers converting the documents.


Instance management requirements


Your analysis of the Jade Publishing business requirements has identified the following instance management requirements:
- All instances of Sales reports and documents must be kept.
- All other reports and documents should keep 10 instances.
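The retention policy above can be expressed as a small pruning routine. This is an illustrative sketch, not how the BusinessObjects instance manager is implemented; in particular, the "Sales" name check stands in for however sales content would actually be identified.

```python
def prune_instances(instances, keep=10):
    """Apply the retention policy: keep every instance of Sales reports,
    and only the `keep` most recent instances of anything else.

    instances is a list of (report_name, instance_id), oldest first.
    """
    kept, count = [], {}
    for name, inst in reversed(instances):   # walk newest first
        count[name] = count.get(name, 0) + 1
        if name.startswith("Sales") or count[name] <= keep:
            kept.append((name, inst))
    kept.reverse()                           # restore oldest-first order
    return kept

# Twelve instances of a non-Sales report: only the newest ten survive.
history = [("Inventory", i) for i in range(12)]
pruned = prune_instances(history)   # instances 2 through 11 remain
```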

Auditing requirements
Jade Publishing is keen to see the ratio of usage between Desktop Intelligence and Web Intelligence. For this they have asked if you can audit all activity for these products.

Testing the solution


Tests you may want to perform to ensure you have set up the main part of this workshop correctly:
- Can users from one business unit see the content of the other?
- Do the users have the correct functionality within the product?
- Do the delegated administrators have the ability to add/delete/modify anything they shouldn't?
- If the role of the delegated administrators has to change (for example, refreshing documents), how easy is this to achieve?
- Can the delegated administrators promote/demote users between their business unit's groups?

Managing the promotion of content


Within Jade Publishing there is a BI Administrator who is tasked with deploying BusinessObjects Enterprise XI 3.1. As part of the future use of the system, you will implement Life Cycle Management (LCM) techniques. The BI Administrator expects the environment to consist of a Development environment, which will also be used as a Test environment, and a Production environment. The development environment uses a different reporting database than the production environment. As part of implementing LCM practices, the report templates and other dependent enterprise objects must be promoted between the Development and Production environments.

Activity: Jade Publishing Case Study - Instructor led


1. The instructor opens the Jade Publishing website from the resource CD.
2. Go through the website content, highlighting the information grouped under the following areas:
- High Availability/Fault Tolerance
- Sizing
- Deployment
- Content Management


Review: Identifying requirements


1. You are the administrator. You are told that there will potentially be 100 users viewing the same report at the same time. Where would you say this information belongs?
A) Content Management
B) Sizing
C) Deployment
D) High Availability

2. List two examples of customer information that pertains to deployment requirements.
3. What are the major 'requirement areas' for which you need to collect information?


Lesson summary
After completing this lesson, you are now able to:
- Ask the right questions to help with identifying requirements
- Translate business requirements into Business Intelligence requirements



Lesson 3

Ensuring Availability of your Business Intelligence Solution


This lesson focuses on the concepts that must be taken into account when you design and deploy a Business Intelligence solution so that it is available to your business and your users. Scalability, fault tolerance, deployment strategies, and disaster recovery are key concepts to cover.
Note: Refer to the reference materials on the resource CD in the Activity_Resources\Lesson3_High_Availability folder for more information on this lesson.
After completing this lesson, you will be able to:
- List and explain the factors that need to be considered when designing and deploying a highly available Business Intelligence solution
- Identify specific disasters and develop specific strategies for accommodating them
- Design a highly available system that supports specific business requirements

Ensuring Availability of your Business Intelligence Solution: Learner's Guide


What is a Business Intelligence Solution?


To ensure that your system meets your requirements for reliability, you need to consider a multitude of internal and external factors. After completing this unit, you will be able to:
- Describe the challenges in a Business Intelligence solution
- Explain the key factors in making your systems fault tolerant and highly available
- Define the deployment strategies for high availability in BusinessObjects Enterprise

Business Intelligence Solution


A Business Intelligence Solution is one that delivers information to you so that you can make informed business decisions. BusinessObjects Enterprise XI 3.1 is an important component in this solution but there are other components that are required in order to provide the functionality and reliability you need for your business. Even the simplest single server deployment of BusinessObjects XI 3.1 depends on many complex systems and processes. Considering your deployment of BusinessObjects Enterprise XI 3.1 as a subset of your overall Business Intelligence solution, what other components are involved?

The challenges of discussing how to ensure availability


It is impossible to discuss how to ensure availability without including discussions around:
- Process flows
- Deployment strategies
- Concepts such as disasters and corresponding recovery strategies


The challenge is that these topics are so interrelated that it is not possible to explore any one of these areas in detail without touching upon another.

High Availability and Fault Tolerance Concepts


Line of Business Application
A line-of-business (LOB) application is a critical computer application that is vital to running an organization, such as accounting, supply chain management, and resource planning applications. If an LOB application becomes unavailable, essential business functions are lost or suffer. For some companies, their Business Intelligence solution may not be mission-critical: if it fails, steps must be taken to correct the problem, but the urgency to do so is minimal. For such companies, BusinessObjects Enterprise XI 3.1 is not an LOB application. Conversely, in other organizations, such as health care providers, production systems stop and patients' lives are threatened when BusinessObjects Enterprise and the BI solution are unavailable. For such companies, significant resources need to go into ensuring this LOB solution is available.


Disaster
A disaster is any unanticipated event that creates a defined problem. Disasters are usually considered in terms of severity. Some examples of disasters include:
- A report is inadvertently deleted from the DEV system.
- A reporting database is unavailable.
- The primary administrator leaves the company suddenly.
- A software vendor support contract expires.
- A server shuts down unexpectedly.
- The connection to the data center is lost.
Note: While planned downtime for maintenance creates unavailability, it is not considered a disaster.

Business Continuity Planning


Business Continuity Planning (BCP) is an interdisciplinary concept used to create and validate a practiced logistical plan on how an organization will recover and restore partially or completely interrupted operations within a predetermined time after a disaster or extended disruption. This logistical plan is called a Business Continuity Plan. In simple terms, BCP is working out how to stay in business in the event of a disaster or incidents such as lost systems, building fires, sudden loss of the CEO, earthquakes, or wars.

Disaster Recovery
Disaster recovery planning is an important part of the larger process of Business Continuity Planning. Disaster recovery is the process, policies, and procedures of restoring operations that are critical to the resumption of business, including regaining access to data (records, hardware, or software), communications (incoming, outgoing, toll-free, or fax), workspace, and other business processes after a disaster. To increase the chance of successfully recovering valuable records, a well-established and thoroughly tested disaster recovery plan must be developed. A disaster recovery plan includes plans for coping with the unexpected or sudden loss of communications and/or key personnel. Disaster recovery planning can also cover how to manage maintenance or planned unavailability.

Single Point of Failure


A single point of failure, as it relates to ensuring availability of your BI solution, is a single failure that creates a denial of services or requests to the users. In reality, removing all single points of failure is extremely difficult to achieve, and the cost of doing so is enormous. For example, it may be easy to implement a cluster of three CMS servers to remove the CMS as a single point of failure, but if the three CMS servers are connected to the same network switch, then the switch becomes a single point of failure. To resolve this single point of failure, you connect each of the three CMS servers to independent switches. However, an unforeseen disaster at the Internet Service Provider can still interrupt your Wide Area Network communication and create a situation where no one can connect to your BI solution (even though it is running, it is unavailable). As a result, your task is to remove as many single points of failure as possible and create a robust disaster recovery plan.

Fault tolerance
Fault tolerance, or graceful degradation, is the property that enables a system to continue operating properly in the event of the failure of (or one or more faults within) some of its components. The basic characteristics of fault tolerance are:
• No single point of failure: all systems are fully redundant.
• Fault isolation to the failing component: a disaster cannot cause failures in other areas. If the Output FRS fails, the Job Servers that process schedule requests are impacted. However, if another Output FRS is available to handle the request, then schedule and view requests can still function as normal because the failure is isolated to the single Output FRS itself.
• Fault containment to prevent propagation of the failure: a disaster cannot cause a chain of further disasters. If your current workload demands two CMS servers working together in your cluster, then when one goes down, the entire workload depends on the remaining server. This may be more workload than that CMS can handle, severely degrading performance. The solution to this potential problem is a cluster of three CMS servers, so that if one fails, the remaining two CMS servers can still handle the workload.
• No single point of repair: if a system experiences a failure, it must continue to operate without interruption during the repair process.
Note: The key point in fault tolerance is "no denial of services". Should a disaster occur, the system will still be available to your users.

High Availability
High availability is a system design protocol and associated implementation that ensures a certain absolute degree of operational continuity during a given measurement period. Fault tolerance means your systems are always available; high availability means your system is almost fault tolerant but a certain amount of down time (planned or unplanned periods of unavailability) is expected. Many fault tolerant solutions are used to create a highly available deployment.
Note: The terms "uptime" and "availability" are not synonymous. A system can be up, but not available, as in the case of a network outage.

Availability is usually expressed as a percentage of uptime in a given year. In a given year, the number of minutes of unavailability (planned and unplanned) is totaled for a system and then divided by the total number of minutes in a year (approximately 525,600 minutes), producing a percentage of unavailability. The complement of this is the percentage of uptime, which is what is typically referred to as the availability of the system.


Availability %    Downtime per year    Downtime per month    Downtime per week
95%               18.25 days           36 hours              8.4 hours
99%               3.65 days            7.20 hours            1.68 hours
99.5%             1.83 days            3.60 hours            50.4 minutes
99.99%            52.6 minutes         4.32 minutes          1.01 minutes

The higher the percentage specified in your business case, the more resources that must be employed to satisfy those requirements.
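The figures in the table above follow directly from the definition: downtime equals total minutes in the period multiplied by the unavailability fraction. A minimal sketch of that calculation (assuming a 365-day year, so the minute counts are approximate):

```python
# Sketch: turn an availability percentage into allowed downtime per year,
# reproducing the table above (365-day year assumed, so ~525,600 minutes).
MINUTES_PER_YEAR = 365 * 24 * 60  # ~525,600 minutes

def downtime_minutes_per_year(availability_pct):
    """Allowed downtime per year, in minutes, for a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (95, 99, 99.5, 99.99):
    m = downtime_minutes_per_year(pct)
    print(f"{pct}% availability allows {m / (60 * 24):.2f} days ({m:,.0f} minutes) of downtime per year")
```

For example, 99.99% availability leaves roughly 52.6 minutes of downtime per year, matching the last row of the table.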

Failover
Failover is the capability to switch over automatically to a redundant or standby computer server, system, or network upon the failure or abnormal termination of the previously active server, system, or network. Failover happens without human intervention and generally without warning, unlike switchover. There are many types of failover. Some will maintain availability while others create a moment of unavailability before becoming available again. During the moment of unavailability, there may be a denial of services. Two CMS servers operating in a cluster is an example of failover that maintains availability. If CMS 1 is managing your session and it goes down, you will be automatically connected to CMS 2 without experiencing any denial in service.

Load Balancing
Load Balancing is a technique (usually performed by load balancers) to spread work between two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, throughput, or response time. The balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch). In the case of a CMS cluster, the workload can come from multiple users managed by both CMS servers. The two CMS servers then communicate their current load with each other to determine which one will manage the other CMS responsibilities such as database maintenance and schedule management. In the case of a hardware device that manages a load-balancing cluster, it is common to have the entire workload come through a load-balancing front end. This front end offers session persistence or "stickiness" by sending the client to the same back-end server for the duration of the current session. Note: BIG-IP by F5 is a common hardware load-balancing solution used by many web farms.


Active/Active and Active/Passive Functionality


Active/Active and Active/Passive describe general modes of operation that can be applied and implemented in a variety of ways. Active/Active refers to two redundant resources (servers or services) that are both functioning to support the system as a whole. A CMS cluster in which multiple CMS servers are running and sharing the workload at the same time is an example of an Active/Active configuration. Active/Passive refers to two or more redundant resources where only one is running and actively managing requests; the other resource is either not running, or running in a waiting state until it is promoted to the active state by administrators following the failure of the currently active resource. Two Input File Repository Servers running at the same time is an excellent example of Active/Passive.

High availability server clustering


High-availability clusters (also known as HA Clusters or Failover Clusters) are computer clusters that are implemented primarily for the purpose of improving the availability of services. They operate by having redundant computers or nodes working together and sharing data to provide services and functionality. When a service fails, those processes can be moved to be managed by other nodes so the service is once again available. The shared data is always available.


In addition, each node has many redundant elements to improve reliability such as multiple power supplies and network cards. Many of these components are designed to be changed while the server is still running thus reducing unavailability.

HA clusters can be configured in the following ways:
• Active/Active: traffic intended for the application can be processed by any one of the active nodes. This is possible when all nodes have access to the same data. Some clustered instances of database servers are a good example of this: database queries go from a client to a virtual server name, but the actual processing of the query can be done by any available node that hosts the same instance of the database. Allowing multiple instances to access the same database (storage) simultaneously provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out. At the same time, since all nodes access the same database, the failure of one instance hosted by a specific node will not cause the loss of access to the database itself.
• Active/Passive: this configuration provides a fully redundant instance of each node, but only one is actively processing requests; all others are in a waiting state. Should a disaster occur on the current node, the process is stopped on that node, moved over to another redundant node, and then started. This configuration typically requires the most extra hardware. The extra hardware can be mitigated by splitting the hosted applications: each node runs some applications actively and also acts as the passive node for applications that run on other nodes. The challenge is to ensure that if a failure occurs, the remaining nodes are able to support all applications.
Note: Some commonly used clustering services are Veritas Cluster Server and Microsoft Cluster Server (MSCS). These are excellent options for hosting highly available resources such as file shares for the FRS root folders and cluster-aware databases such as MS SQL Server and Oracle for the CMS system database and other reporting databases.

Network Storage Solutions


Storage Area Network (SAN)
SAN is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries and optical jukeboxes) to servers in such a way that, to the operating system, the devices appear as locally attached. Although cost and complexity are dropping, as of 2007 SANs are still uncommon outside larger enterprises.

Network Attached Storage (NAS)


By contrast to a SAN, Network Attached Storage (NAS) is essentially a self-contained computer connected to a network with the sole purpose of supplying file-based data storage services to other devices on the network. The operating system and other software on the NAS unit provide the functionality of data storage, file systems, access to files, and management of these components. Unlike a SAN, where the remote file system appears to be local, it is clear to the operating system that the NAS resource is remote. The NAS resource appears as a network share accessed by a UNC path (for example, \\Server_Name\File_Share\Sub_Folder\File).

NAS Heads
A NAS head refers to a NAS which does not have any on-board storage but connects to a SAN. In effect, a NAS acts as a translator between the file-level NAS protocols (NFS, CIFS) and the block-level SAN protocols (Fibre Channel, iSCSI). Thus NAS heads combine the advantages of both SAN and NAS technologies.

Vertical and Horizontal Scaling


Scalability is a key success factor for business applications in a dynamic environment. BusinessObjects Enterprise XI 3.1 scales both vertically and horizontally. Vertical scaling (otherwise known as scaling up) means adding more hardware resources (for example processors and memory) to the same machine. In the context of BusinessObjects Enterprise XI 3.1, this means the ability to add multiple servers on the same machine. Horizontal scaling (otherwise known as scaling out) means adding more machines into the solution. In the context of BusinessObjects XI 3.1, this means the ability to run multiple servers on multiple machines seamlessly.


Backup
Backup refers to making copies of data so that these additional copies may be used for restoration after a data loss event occurs. These additional copies are typically called "backups". Backups are useful primarily for two purposes:
• To restore an entire system following a major disaster.
• To restore small numbers of files after they have been accidentally deleted or corrupted.
In the BusinessObjects Enterprise XI 3.1 context, the three critical pieces of data to back up are:
• The CMS system database (this may also include the audit database).
• The Input File Repository root folder.
• The Output File Repository root folder.
Note: It is important that these components are synchronized.
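One simple way to keep the three data sets synchronized is to capture them in a single timestamped archive. The sketch below assumes the CMS database has already been exported to a dump file; the paths, names, and function are illustrative, not the product's own backup tooling.

```python
# Sketch: bundle the three critical BusinessObjects data sets into one
# timestamped archive so the backups stay synchronized with each other.
# All paths and archive member names are illustrative assumptions.
import tarfile
import time
from pathlib import Path

def backup_boe(cms_dump: Path, input_frs: Path, output_frs: Path, dest: Path) -> Path:
    """Archive a CMS database dump plus both FRS root folders together."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"boe-backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(cms_dump, arcname="cms_system_db.dump")  # CMS system database dump
        tar.add(input_frs, arcname="frsinput")           # Input FRS root folder
        tar.add(output_frs, arcname="frsoutput")         # Output FRS root folder
    return archive
```

Because all three pieces land in one archive, a restore operation always works from a mutually consistent snapshot.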

Deployment Strategies for High Availability


Creating a truly fault tolerant BI solution requires removing all single points of failure. Since removing all single points of failure is an immense and extremely cost-prohibitive task, the reality is that ensuring availability means deploying a highly available BI solution with backup and disaster recovery strategies that support your organization's business needs. When designing a system for high availability, you need to consider:
• Availability requirements and how much time is acceptable for the system to be down.
• Failover processing, system backup, and data storage.
• Vertical and horizontal scaling options.
The resources incorporated into a Business Intelligence solution are vast, and there are many different ways these resources can interact with each other. The next topic in this lesson focuses specifically on BusinessObjects Enterprise XI 3.1 and some of its dependencies.

Designing a system for high availability


Single server deployments are not highly available
In a single-server deployment, you can duplicate many of the BusinessObjects Enterprise XI 3.1 servers to provide redundancy. If one service fails, the duplicate service can continue to process requests. However, if there is a catastrophic failure affecting the entire server, BusinessObjects Enterprise becomes unavailable until the problem is resolved. Planned maintenance can also create periods of unavailability. Whether the downtime is planned or unplanned, if it is acceptable within the business requirements for availability, a single-server solution may be sufficient, but it is not regarded as a highly available deployment.

Designing a multiple-server system for high availability


Depending on the number of machines you can dedicate to BusinessObjects Enterprise, you have many options for designing a system for high availability, taking advantage of both vertical and horizontal scaling.


In a multiple-server deployment, duplicate services are running on different machines and the BusinessObjects Enterprise architecture provides the redundancy and reliable communications to ensure high availability of BusinessObjects Enterprise. If one service fails or if there is a hardware failure on the machine running the service, the duplicate service on another machine continues to process requests with little impact to the system. What little impact there is can be managed through configuration settings so there is no denial of services. The BusinessObjects architecture is commonly divided into 3 tiers. It is easier to design a highly available deployment by looking at configuration options within each tier focusing on how to utilize both vertical and horizontal scaling to maintain high availability for both the servers and the resources they depend on. The following diagram illustrates fault tolerance in a multiple server environment:


High availability in the Application Tier


BusinessObjects Enterprise XI 3.1 is exposed to users primarily through web interfaces. In this unit, you will learn about creating high availability in the application tier (sometimes referred to as the web tier or web application tier). After completing this unit, you will be able to:
• List deployment options for the application tier
• Explain vertical and horizontal scaling options
• Describe the new capabilities of the wdeploy tool and its role in creating high availability in the application tier

Web client to web server


There is little you can do to make web clients highly available, so the web server is usually the first place to begin planning for high availability. You may wish to implement an external web server in front of the Business Intelligence stack for one or more of the following reasons:
• You wish to implement a DMZ.
• You want the web server to serve up static web content only (for example, .gif files), thereby removing some of the load from the web application server (WAS). You can separate dynamic and static content using the wdeploy tool, which is included with BusinessObjects Enterprise. It can take the standard desktop.war file produced during installation of BusinessObjects Enterprise and break it apart into two smaller WARs: one containing just the static content, and the other containing the dynamic .JSP files and Servlets.
• You wish to offload the SSL work from web application servers.
Here you encounter a problem, however. If you have one web server, then you have a single point of failure. You can eliminate the single point of failure by introducing multiple web servers, but this introduces a new problem: now your users have multiple web servers and therefore multiple URLs to choose from. How do you get around this?

DNS round robin


"Domain Name Service round robin" is one solution whereby the DNS server sends web requests to each web server in a round robin manner. DNS round robin is not very popular, but it is very affordable to implement. One problem with this method is that a DNS server cannot verify the availability of hosts in its records. Thus, it might easily hand out the IP address of a server that is switched off or one that is up but has a crashed service.
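The weakness described above can be illustrated with a short sketch: a DNS-style rotation hands out hosts blindly, whereas a health-aware balancer filters out failed hosts first. The host names and the health table are illustrative assumptions.

```python
# Sketch: DNS round robin vs. a health-aware picker. DNS rotation has no
# knowledge of host health, so it can hand out a crashed server's address.
from itertools import cycle

SERVERS = ["web1", "web2", "web3"]
HEALTH = {"web1": True, "web2": False, "web3": True}  # web2 has crashed

def dns_round_robin():
    """Plain round robin: cycles through the DNS records, dead or alive."""
    return cycle(SERVERS)

def health_aware(rotation):
    """What DNS round robin lacks: skip hosts that fail a health check."""
    for host in rotation:
        if HEALTH[host]:
            yield host

blind = dns_round_robin()
blind_picks = [next(blind) for _ in range(3)]   # includes the dead web2
smart = health_aware(dns_round_robin())
smart_picks = [next(smart) for _ in range(3)]   # never returns web2
```

The blind rotation hands every third request to the dead host, while the health-aware picker quietly routes around it, which is exactly the gap a hardware load balancer fills.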

Hardware load balancer


These are hardware appliances that present a single URL to the end users and then forward traffic on to the web server farm behind them. Hardware load balancers can fail too, but in practice they tend to be very reliable.


Hardware load balancers have several benefits, for example:
• Monitoring network availability of servers within a cluster: a load balancer can be configured not to direct traffic to a server that is switched off.
• Accommodating session state: a load balancer can be configured with client affinity, so it can maintain session state when necessary.
Note: Client affinity is sometimes referred to as "session affinity" or "sticky sessions". Client affinity is implemented using either IP-based switching or cookie-based switching. Most organizations prefer cookie-based switching since it does not suffer from the problems associated with NAT translation (where all of the users behind a particular NAT device appear to come from the same IP address and are therefore all sent to the same web server).
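Cookie-based switching can be sketched in a few lines: on a client's first request, the balancer picks a backend and pins the client to it with a cookie, so later requests reach the same node regardless of source IP. The cookie name and backend names here are illustrative assumptions.

```python
# Sketch of cookie-based session affinity, roughly as a balancer would do it.
# New clients get the next backend in rotation and a pinning cookie;
# returning clients are routed by the cookie. Names are illustrative.
from itertools import cycle

BACKENDS = cycle(["app1", "app2", "app3"])  # rotation for first-time clients

def route(cookies):
    """Return (backend, cookies); sets the affinity cookie on first contact."""
    backend = cookies.get("BOE_BACKEND")
    if backend is None:                      # new client: pick and pin
        backend = next(BACKENDS)
        cookies = {**cookies, "BOE_BACKEND": backend}
    return backend, cookies                  # sticky: same node every time

node, jar = route({})    # first request: assigned a backend, cookie set
again, _ = route(jar)    # follow-up request: sticks to the same node
```

Because the decision rides in the cookie rather than the source IP, many users behind one NAT device can still be spread across different backends.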

Web server to web application server


All web application server vendors produce web server plug-ins that forward traffic to the web application server. As you add more web application servers to your environment for high availability, these plug-ins spread the load across all the web application server nodes, so you can make use of multiple web application servers for serving InfoView to your user base. Just as with the link between hardware load balancers and the web servers behind them, it is essential that the link between the web server and the web application server also maintains session affinity. In other words, it is important to configure the system so that once a session has been initiated on an individual web application server node, the user is always redirected back to that same node for the remainder of the session. InfoView can be used with clustered web application servers (for example, IBM WebSphere ND). Increasingly, web application servers make use of clustering in order to implement failover. They replicate a user's HTTP session onto a secondary node so that if the primary node fails, the user's session still exists somewhere in the cluster.

Wdeploy
BusinessObjects Enterprise comes with a tool to ease the deployment of web applications on supported web application servers. Based on the Apache Ant scripting tool, wdeploy allows you to deploy WAR files to a web application server in two ways:
• Standalone mode: all web application resources are deployed together on a web application server that serves both dynamic and static content.
• Distributed mode: the application's dynamic and static resources are separated; static content is deployed to a web server, and dynamic content is deployed to a web application server.
Note: The wdeploy tool was covered in the prerequisite course for this offering, BusinessObjects Enterprise XI 3.0: Administering Servers - Windows.
Note: You will practice deployment using wdeploy in the Deployment lesson.


High availability in the Intelligence Tier


One of the most important areas of a BusinessObjects Enterprise system in which to design high availability is the Intelligence tier. After completing this unit, you will be able to:
• Understand high availability in the Intelligence tier
• Describe the process of CMS clustering
• Manage active/passive File Repository Servers

Web Application Server to BusinessObjects Enterprise cluster


Once a user has initiated a session, the web application server then communicates with the Intelligence tier, or BusinessObjects Enterprise tier. When you log into BusinessObjects Enterprise through any of the client tools, you are typically asked to provide four pieces of information:
• A user name
• A password
• The "system" you wish to log into
• The authentication type in use with that system

The system you specify is really the hostname and the port number of a host running a CMS. Note: It is best to use the fully qualified domain name of the host. Note: Interestingly due to the way that BusinessObjects Enterprise failover works, the hostname you specify may not actually be the CMS you end up connecting to.

CMS clustering
The CMS service controls the entire Intelligence tier and all the components that reside on the BusinessObjects Enterprise platform. CMS clusters are a set of two or more CMS servers that function as a single CMS. The CMSs in a cluster work together to maintain the common system database. When users log into BusinessObjects Enterprise, they are authenticated by one of the CMSs in the cluster. Each CMS service talks to the others to maintain a dynamic list of all the CMS processes available within the cluster. Each CMS passes this list to each platform service as it connects to the CMS. This means that should a CMS service die, all the other services will have already been given the names of all the other CMS services available in the cluster, and they will automatically try to re-establish a connection with one of those. CMS clustering benefits include:
• Increased CMS availability: clustered CMS machines use a heartbeat signal to determine clustered server availability. In the event of a failure, the workload of the failed CMS is picked up by an available CMS within the cluster.
• Incremental growth and scalability: as the number of BusinessObjects Enterprise users grows, adding CMS servers to the cluster allows ease of expansion and continued availability. CMS clusters also allow for basic machine maintenance without hampering the BusinessObjects Enterprise distribution. This maintenance might include loading a service pack or updating hardware on your CMS. If you have two CMS machines, you can work on one machine at a time without disturbing the production system.
• Improved performance and consistency of the Enterprise system: when users log into BusinessObjects Enterprise, they are authenticated by one of the CMS servers. The CMS servers in a cluster use a round-robin mechanism when taking on new users. If, for example, there are 50 users logged on to a cluster containing two CMS machines, it is very likely that 25 of the users are logged on to CMS1 and the other 25 are logged on to CMS2.
CMS machines in a cluster communicate via TCP/IP. The nature of a TCP/IP message is that its signal will only be received by the intended service. When CMSs start up, they communicate with the CMS system database to receive a list of other CMSs in the system. Once a CMS receives the list of CMSs in the system, it communicates with the other CMSs to establish a cluster connection.
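The failover behaviour described above can be sketched from the client side: each service holds the member list handed out by the cluster and walks down it when its current CMS dies. The class, the `connect` callable, and the host names are illustrative assumptions, not the product's API.

```python
# Sketch: a service reconnecting within a CMS cluster. It was given the
# full member list at logon, so when one CMS fails it simply tries the
# next member. The connect callable is a stand-in for a real connection.
class CmsSession:
    def __init__(self, cluster_members, connect):
        self.members = list(cluster_members)  # list received from the cluster
        self.connect = connect                # callable: host -> session, or raises

    def establish(self):
        last_error = None
        for host in self.members:             # try each cluster member in turn
            try:
                return self.connect(host)     # first reachable CMS wins
            except ConnectionError as err:
                last_error = err              # this CMS is down; try the next
        raise ConnectionError("no CMS in the cluster is reachable") from last_error
```

With two or more members in the list, the loss of any single CMS costs the service only a retry, not a denial of service.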

Migrating and backing up CMS system data


Organizations often need to copy data from one BusinessObjects Enterprise system to another BusinessObjects Enterprise system. This process can be used to periodically backup a deployment's CMS system database. BusinessObjects Enterprise provides the ability to migrate, or copy, the contents of a CMS system database to another database type. You can also copy CMS data from a different CMS database of an earlier version to your current CMS system database. When you move the entire CMS system database to a different data source, the migration process destroys any information already present in the destination data source. If you are performing an off-site back up of your system database you would use this process to move the current system database to an empty database. This migration process is performed in the CCM by clicking the Specify CMS Data Source button on the toolbar. Note: For more information on the procedure for migrating and backing up CMS system data, please refer to the Administering Servers - Windows course.

Managing active/passive File Repository Servers


The BusinessObjects Enterprise environment may have multiple File Repository Servers, but only one is active at any given time. The order of startup determines which File Repository Server is active and which one is passive. The first File Repository service to start is the active File Repository Server. If the active File Repository Server shuts down for any reason, the backup (passive) File Repository Server becomes active and takes over.
Tip: To determine which FRS is active and which is passive, you can use the Metrics property of a server to verify the start time. The server that has been running the longest is the active server.
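That tip reduces to a one-line comparison: given each server's start time (as shown in its Metrics property), the earliest-started FRS is the active one. The server names and timestamps below are illustrative.

```python
# Sketch of the tip above: the FRS that has been running longest (earliest
# start time) is the active one. Names and timestamps are illustrative.
def active_frs(start_times):
    """Given a mapping of server name -> start timestamp, return the active FRS."""
    return min(start_times, key=start_times.get)

servers = {"InputFRS_A": 1_700_000_000, "InputFRS_B": 1_700_003_600}
winner = active_frs(servers)  # InputFRS_A started an hour earlier, so it is active
```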

Preferred high availability FRS filestore configuration


All the File Repository Servers point to a disk location where the actual report templates and files reside. By default, the files are stored on the local host's file system of the server on which you install BusinessObjects Enterprise. However, having just one FRS pointing to local file storage has two problems:
• It is a single point of failure.
• The files are not visible to another server.
It is recommended to use the following configuration for your FRS filestore: create a shared, common clustered folder on another host across the network, then point the FRS pairs to it. If one of the servers within the cluster where the files are stored goes down, the backup FRS pairs will still be able to access the files through the cluster.


Crystal Reports Cache Server


The Crystal Reports Cache Server is responsible for handling Crystal Reports report viewing requests from the DHTML, ActiveX, and Java viewers. The Crystal Reports Cache Server manages viewable pages. If the requested page is present on the Crystal Reports Cache Server, it is sent back to the Web Application Server; otherwise, the page is requested from the Crystal Reports Processing Server.

Event Server
The Event Server manages file-based events. When you set up a file-based event within BusinessObjects Enterprise, the Event Server monitors the directory that you specified. When the appropriate file appears in the monitored directory, the Event Server triggers your file-based event: that is, the Event Server notifies the CMS that the file-based event has occurred. The CMS can then start any jobs that are dependent upon the file-based event.
Note: Schedule-based and custom events are managed through the CMS.
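The Event Server's monitoring behaviour can be sketched as a simple poll-and-notify loop: watch the monitored directory until the expected file appears, then fire a notification. In the real product the notification goes to the CMS, which starts any dependent jobs; here a callback stands in for it, and all names are illustrative.

```python
# Sketch of a file-based event, in the Event Server's manner: poll the
# monitored directory, and trigger a callback when the expected file
# appears. The callback stands in for notifying the CMS.
import time
from pathlib import Path

def watch_for_file(directory, filename, on_event, timeout=30.0, poll=0.5):
    """Poll `directory` until `filename` appears, then fire `on_event`."""
    target = Path(directory) / filename
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if target.exists():
            on_event(target)   # event triggered: dependent jobs can start
            return True
        time.sleep(poll)
    return False               # no event occurred within the timeout
```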


High availability in the Processing Tier


The purpose of this unit is to learn how to make the Processing tier highly available. After completing this unit, you will be able to:
• Describe the BusinessObjects Enterprise servers in the Processing tier
• Understand the concept of server groups
• Create and work with server groups

Crystal Reports Processing Server


The Crystal Reports Processing Server is primarily responsible for responding to page requests by processing Crystal Reports and generating Encapsulated Page Format (.epf) pages. A single .epf file represents one page of a Crystal report. The Crystal Reports Processing Server retrieves data for the report from the latest instance or directly from the database (depending on the user's request and security level). Specifically, the Crystal Reports Processing Server responds to page requests made by the Crystal Reports Cache Server. The Crystal Reports Processing Server and Crystal Reports Cache Server interact closely, so cached .epf pages are reused as frequently as possible, and new pages are generated as soon as they are required.

Crystal Reports Job Server


The Crystal Reports Job Server processes report objects as requested by the CMS and generates report instances. The Crystal Reports Job Server has the ability to process report files and object packages. Object packages are collections of objects that can be grouped together and managed by the BusinessObjects Enterprise system as a single object.

Report Application Server


The Report Application Server (RAS) is very similar to the Processing Server. It, too, is primarily responsible for responding to page requests by processing reports and generating EPF pages. However, the RAS uses an internal caching mechanism that involves no interaction with the Cache Server. Specifically, the Report Application Server (RAS) processes reports that InfoView users view with the Interactive DHTML viewer. The RAS also provides the reporting capabilities that allow InfoView users to create and modify Crystal reports over the web. Additionally, the Report Application Server is used at the time of viewing or submitting schedule requests for reports containing dynamic prompts and cascading lists of values.

List of Values Job Server


The List of Values (LOV) Job Server processes scheduled list of values objects. These are objects that contain the values of specific fields in Business Views. Lists of values are used to implement dynamic prompts and cascading lists of values with Crystal Reports. List of values objects do not appear in the CMC or in InfoView.

The List of Values Job Server behaves similarly to the Crystal Reports Job Server in that it retrieves the scheduled objects from the Input File Repository Server and saves the instances it generates to the Output File Repository Server. There is never more than one instance of a list of values object.

It is possible that a schedule has not been set for a list of values object, or that the object has been scheduled in such a way that it might be missing prompt data. For example, a list of values object with three levels of hierarchy (Country, Region, City) has a schedule that gathers data for the Country and Region levels but not the City level. When the list of values object is viewed, the Report Application Server queries the database to retrieve information for the lowest level (in this example, City).

Web Intelligence Processing Server


The Web Intelligence Processing Server is used to create, edit, view, and analyze Web Intelligence documents (stored in the Input/Output File Repository Servers). It also processes scheduled Web Intelligence documents and generates new instances of the document, which it stores in the Output File Repository Server. Depending on the user's access rights and the refresh options of the document, the Web Intelligence Processing Server will use cached information or it will refresh the data in the document and then cache the new information.

Desktop Intelligence Servers


The Desktop Intelligence Processing Server, Desktop Intelligence Cache Server, and Desktop Intelligence Job Server handle documents with a .rep extension, which correspond to documents formerly known as Full-Client documents. The Desktop Intelligence Processing Server is used to view and analyze Desktop Intelligence documents. The Desktop Intelligence Cache Server minimizes document processing by caching documents and sharing cached documents between users. The Desktop Intelligence Job Server processes scheduling requests it receives from the CMS for Desktop Intelligence documents and generates instances of Desktop Intelligence documents.

Connection Server
If the viewing preference is set to Desktop Intelligence format, the Connection Server is invoked when users edit and view Desktop Intelligence documents through InfoView in three-tier mode. The Connection Server libraries are present on the Web Intelligence Report Server, Desktop Intelligence Report Server, Desktop Intelligence Job Server, Desktop Intelligence Cache Server, and several EPM server services. These libraries make it possible for these services to query the database directly without communicating with the Connection Server service.

When a report designer launches in three-tier Desktop Intelligence mode, the connection to the data provider is managed by the BusinessObjects Enterprise Connection Server. In that case, no middleware needs to be installed on the client machine, and the user does not need to create a data source name or define a connection to the universe or other data provider.

Understanding Server Groups


Server groups provide a way of organizing your BusinessObjects Enterprise servers to make them easier to manage: when you manage a group of servers, you need only view a subset of all the servers on your system. More importantly, server groups are a powerful way of customizing BusinessObjects Enterprise to optimize your system for users in different locations, or for objects of different types.

If you group your servers by region, you can easily set up default processing settings, recurrent schedules, and schedule destinations that are appropriate to users who work in a particular regional office. You can associate an object with a single server group, so the object is always processed by the same servers, and you can associate scheduled objects with a particular server group to ensure that scheduled objects are sent to the correct printers, file servers, and so on. Thus, server groups prove especially useful when maintaining systems that span multiple locations and multiple time zones.

If you group your servers by type, you can configure objects to be processed by servers that have been optimized for those objects. For example, processing servers need to communicate frequently with the database containing data for published reports. Placing processing servers close to the database server that they need to access improves system performance and minimizes network traffic. Therefore, if you had a number of reports that ran against a DB2 database, you might want to create a group of Processing Servers that process reports only against the DB2 database server. If you then configured the appropriate reports to always use this Processing Server group for viewing, you would optimize system performance for viewing these reports.

After creating server groups, configure objects to use specific server groups for scheduling, or for viewing and modifying reports. Use the navigation tree in the Servers management area of the CMC to view server groups.
The Server Groups List option displays a list of server groups in the details pane, and the Server Groups option allows you to view the servers in the group.

To create a server group


To create a server group, you need to specify the name and description of the group, and then add servers to the group.
1. Go to the Servers management area of the CMC.
2. Choose Manage > New > Create Server Group. The Create Server Group dialog box appears.
3. In the Name field, type a name for the new group of servers.
4. Use the Description field to include additional information about the group.
5. Click OK.
6. In the Servers management area, click Server Groups in the navigation tree and select the new server group.
7. Choose Add Members from the Actions menu.
8. Select the servers that you want to add to this group; then click the > arrow. Tip: Use CTRL+click to select multiple servers.
9. Click OK. You are returned to the Servers management area, which now lists all the servers that you added to the group.
You can now change the status, view server metrics, and change the properties of the servers in the group.

Working with server subgroups


Subgroups of servers provide you with a way of further organizing your servers. A subgroup is just a server group that is a member of another server group. For example, if you group servers by region and by country, then each regional group becomes a subgroup of a country group. To organize servers in this way, first create a group for each region, and add the appropriate servers to each regional group. Then, create a group for each country, and add each regional group to the corresponding country group. There are two ways to set up subgroups: you can modify the subgroups of a server group, or you can make one server group a member of another. The results are the same, so use whichever method proves most convenient.

To add subgroups to a server group


1. Go to the Servers management area of the CMC.
2. Click Server Groups in the navigation tree and select the server group you want to add subgroups to. This group is the parent group.
3. Choose Add Members from the Actions menu.
4. Click Server Groups in the navigation tree, select the server groups that you want to add to this group, and then click the > arrow. Tip: Use CTRL+click to select multiple server groups.
5. Click OK. You are returned to the Servers management area, which now lists the server groups that you added to the parent group.


To make one server group a member of another


1. Go to the Servers management area of the CMC.
2. Click the group that you want to add to another group.
3. Choose Add to Server Group from the Actions menu.
4. In the Available server groups list, select the other groups that you want to add the group to, then click the > arrow. Tip: Use CTRL+click to select multiple server groups.
5. Click OK.

Modifying the group membership of a server


You can modify a server's group membership to quickly add the server to (or remove it from) any group or subgroup that you have already created on the system. For example, suppose that you created server groups for a number of regions. You might want to use a single Central Management Server (CMS) for multiple regions. Instead of having to add the CMS individually to each regional server group, you can click the server's "Member of" link to add it to all of the regional groups at once.

To modify a server's group membership


1. Go to the Servers management area of the CMC.
2. Locate the server whose membership information you want to change.
3. Choose Properties from the Manage menu.
4. In the Properties dialog box, click Existing Server Groups in the navigation list. In the details panel, the Available server groups list displays the groups you can add the server to. The Member of Server Groups list displays any server groups that the server currently belongs to.
5. To change the groups that the server is a member of, use the arrows to move server groups between the lists, then click OK.


Creating a disaster and backup recovery plan


A disaster recovery plan consists of precautions to be taken in the event of a full system failure due to a natural disaster or a catastrophic event. The plan needs to minimize the effects of the disaster on the organization so that the organization is able to maintain or quickly resume mission-critical functions. After completing this unit, you will be able to:
- Create a successful disaster recovery plan
- Create a backup copy of BusinessObjects Enterprise system data
- Use a backup copy of BusinessObjects Enterprise system data during disaster recovery
- Understand hot and cold fail-over systems

Disaster and risk management considerations


When designing a system for high availability, you need to consider how much time it is acceptable for the system to be down. To minimize the amount of downtime, you need to plan for fail-over processing, system backup, and data storage. Questions you need to consider when you plan for risk management:
- How much risk is your organization prepared to undertake versus the cost of mitigating that risk?
- Is it important to you if your Business Intelligence system is down for 30 seconds, a minute, an hour, a day, or even a week? How much downtime can you tolerate before the loss of your Business Intelligence system has a serious impact on your organization?
- Assuming your organization would like to at least partially mitigate the risk, how much are you willing to spend? With this in mind, what level of continuity do you need?
- What would you do if you lost an entire data center?
As Business Intelligence becomes more critical to organizations, more and more companies are designing their Business Intelligence systems to be highly available. Some companies additionally factor disaster recovery into their design.
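To make the risk-versus-cost question concrete, the trade-off can be expressed numerically. The Python sketch below uses invented helper names and purely illustrative figures (none of them come from this course):

```python
def expected_downtime_cost(outages_per_year, hours_per_outage, cost_per_hour):
    """Expected annual loss from downtime, before any mitigation."""
    return outages_per_year * hours_per_outage * cost_per_hour

def mitigation_worthwhile(expected_loss, residual_loss, mitigation_cost_per_year):
    """A mitigation (for example, a standby system) pays for itself when the
    loss it avoids exceeds its own annual cost."""
    return (expected_loss - residual_loss) > mitigation_cost_per_year

# Illustrative numbers: two 8-hour outages a year, at 5,000 per hour of downtime.
loss = expected_downtime_cost(2, 8, 5000)          # 80000
# A standby system cutting each outage to 30 minutes, costing 30,000 per year.
residual = expected_downtime_cost(2, 0.5, 5000)    # 5000.0
print(mitigation_worthwhile(loss, residual, 30000))  # True: 75000 avoided > 30000
```

The same arithmetic, run with your organization's own outage history and hourly cost, answers the "how much are you willing to spend" question above.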

Creating a successful disaster recovery plan


Disaster Recovery is the process, policies, and procedures of restoring operations critical to the resumption of business, including regaining access to data (records, hardware, software, etc.), communications (incoming, outgoing), workspace, and other business processes after a natural or human-induced disaster. A BusinessObjects Enterprise disaster recovery plan often involves implementing redundant servers in a backup system that mirrors the primary system. In the event that the primary system goes down, the backup system is still available and becomes the operational (production) system.


It is not critical to have the backup system deployed in the same detailed manner as the production system; this could consume additional licenses and hardware that is not readily available or affordable. The backup system is not designed to handle the load capacity of the production system for an extended period of time. If necessary, consider the issue of load capacity on the backup system when it is used as the production system post disaster.

The key to a successful disaster recovery plan is that it is implemented smoothly and as automatically as possible, with data that is as close as possible (if not equal) to the production environment at the time the production system fails. The speed at which the backup system is brought up is of the utmost importance in a disaster recovery plan.

The geographic location of the backup system is an important part of a successful disaster recovery plan. Make sure the backup system is located far enough from the production system that the backup system is not threatened by the same catastrophic event that might threaten network connectivity or other necessary communication with the primary system. An automated process should notify administrators of the need to bring up the backup system when required.

As with a single-server environment, when planning for a disaster in a multi-server environment, the most important aspect to consider is the backup and recovery of your system data. Depending on how critical your system is, it may be equally important to have minimal down time of the system. If your system must remain available during a disaster, it is recommended to have a backup system that is a mirror of the primary system.

A disaster recovery plan requires that a redundant environment be set up. The two environments should not run at the same time; otherwise, there is a possibility that the primary and backup systems will not be synchronized.
A disaster recovery plan requires that the primary and backup systems each be configured as a member of the same cluster regardless of location, but the CMSs in the primary system and the CMSs in the backup system are configured to run off of the primary and backup system databases respectively. The disaster recovery plan involves a minimum of two servers (not including database servers and web servers), but it can be scaled to meet the individual customer's requirements. In the most basic plan, each machine contains all of the BusinessObjects Enterprise components.

Common Strategies
- Make backups to tape and send them to another site at regular intervals (preferably daily)
- Make backups to disk on-site and automatically copy them to off-site disk
- Replicate data to another location
- Make local mirrors of systems and/or data
- Use disk protection technology such as RAID
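As a toy illustration of the "replicate data to another location" strategy, the following Python sketch mirrors new or changed files from a file store to a second location. All names are invented for this example; in practice you would use rsync, SAN replication, or a dedicated backup product rather than a hand-rolled script:

```python
import filecmp
import shutil
from pathlib import Path

def mirror_filestore(source, destination):
    """Copy any new or changed files from `source` to `destination`.

    A miniature stand-in for the replication step in a backup strategy.
    Returns the relative paths of the files that were copied.
    """
    src, dst = Path(source), Path(destination)
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            rel = f.relative_to(src)
            target = dst / rel
            # Copy only files that are missing or whose contents differ.
            if not target.exists() or not filecmp.cmp(f, target, shallow=False):
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)   # copy2 preserves timestamps
                copied.append(str(rel))
    return copied
```

Running the mirror a second time with no changes copies nothing, which is the property that makes regular, automated replication cheap.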

Disaster Recovery Process


A Disaster Recovery architecture can be divided into two types:
- Available: Most Disaster Recovery designs fall into this category. The Disaster Recovery system is available for use if the main production BI system becomes catastrophically unavailable. When the system becomes available is a business process decision, but it is most likely within four hours.
- Highly Available: The Disaster Recovery system is available for use immediately if the main production BI system becomes catastrophically unavailable.
The first step in creating a disaster recovery architecture is gathering information.

Gathering data about the production system


In most cases a disaster recovery system will not be located in the same building or geographical area as the production system. It is important to understand how this will affect you in your disaster recovery design.

Gathering data about the people in the system


Disaster recovery staff may not be the same staff who work on the production system. It is important to understand their relationship to the production staff. The disaster recovery location may be thought of as a data-housing center without application implementation responsibility.

People you need to consider


1. Database administrators
2. Network administrators (local and remote)
3. Business Intelligence administrators (local and remote)
4. Backup and recovery administrators (local and remote)
5. Business line (needed for timely data; different types of data)
6. Third-party staff (either outsourced staff or those who are geographically removed)

Components you need to consider


1. BusinessObjects servers
2. Databases (CMS, audit, and third-party sources)
3. File Repository Servers
4. Custom code (SDK, web pages, images)
5. Network topologies
6. Service Packs
7. Geographical location
8. Ports, ODBC, firewalls, and proxies
9. Other components (for example, client browser, load balancing, web application servers, web server, network layer/application layer, intranet/extranet, SSL, IIS certificate, Java certificate, load balancer certificate, security, authentication, authorization and SSO, Kerberos, Active Directory/LDAP)


Disaster recovery in a multiple server environment

Note: When backing up your primary system, you need to back up the CMS system database, the content of the Input FRSs and Output FRSs, the user ID and password for the Administrator account, the application code residing on the Web Application Server, and registry settings if manual changes were performed.

Creating a backup copy of BusinessObjects Enterprise system data


A successful copy of system data into a new blank database provides a snapshot of the CMS system database at the time it was copied. This copy is the cornerstone of your backup of system data. For a complete backup of system data, you should also copy the input and output files whenever you copy the CMS system data, to ensure both are synchronized. The other CMSs in the cluster may remain running during these backups only if the backup procedure does not lock the database files or make the database inaccessible to the other CMSs. Ideally, no users should be logged into the system during a backup, and no jobs should be scheduled to run during that time. While not strictly necessary, it is best to verify that no schedules are running or set to run during a backup; otherwise a job is stored in "running" status, which will result in an incorrect instance status if you restore from that backup in the future. While this outcome is undesirable, the impact on the health of the system is minimal.
Note: New or changed system data not included in the latest backup will be lost when restoring from backup. Due to the relatively small size of the system database, it should take a very short period of time to complete this procedure (a minute or less) and should be completed frequently.


To backup system data using the CCM


1. Create a blank SQL Server database and an associated DSN.
2. Stop the CMS.
3. From the Configuration tab of the CMS properties, click Specify in the CMS Data Source section of the dialog box.
4. From the CMS Database Setup dialog box, select Copy data from another Data Source.
5. In the Specify Data Source dialog box, specify the source database, and browse to the empty database created in step 1.

6. Click OK. Once the Migrating database progress bar reaches completion, a message appears indicating that the CMS database setup has completed.
7. Click OK.

Using a backup copy of BusinessObjects Enterprise system data during disaster recovery
Most companies have third-party software and processes in place for performing database backups and replication on a regular basis. Even though these procedures and processes may already be in place, administrators can use the CCM to copy the production system database to a backup without the need for any third-party database backup tools. Once the backup is complete, you need to repoint the CMS to the backup copy.

To repoint a CMS to a backup copy of CMS system data


1. From the CCM, verify that the CMS has been stopped.


2. From the Configuration tab of the CMS properties, click Specify in the CMS Data Source section of the dialog box.
3. From the CMS Database Setup dialog box, select Select a Data Source.
4. In the Specify Data Source dialog box, specify the backup copy of the CMS system data.
5. Click OK.

Creating a backup recovery plan


Backup refers to making copies of data so that these additional copies may be used to restore the original data after a data loss event. These additional copies are typically called "backups". Backups are useful primarily for two purposes:
- To restore a state following a disaster.
- To restore small numbers of files after they have been accidentally deleted or corrupted.

The Business Objects Central Management Server (CMS) database contains reference addresses to report objects that are housed in a separate file storage area managed by the File Repository Server (FRS). The CMS and the FRS must have complete referential integrity. This architecture of distributed reference pointers to report objects presents a data recovery issue that could potentially prevent successful recovery of the application environment.

The difference in size between the CMS database and the FRS report objects requires different lengths of time to complete the backup process. Most company backup processes would not preserve the referential integrity of the CMS database and the corresponding FRS objects, because of the likelihood that the contents of one of the stores (CMS or FRS) will change while the other is being backed up. A backup and restore solution is needed that captures the CMS and FRS contents at a single, concurrent point in time in order to retain their integrity.

Backup Recovery Process


The backup recovery process is much simpler than the disaster recovery process. The only components you need to consider are the CMS, the FRS, and any custom code that directly affects the CMS and FRS. The FRS and CMS backup processes should be automated and coordinated to run at exactly the same time. This can be accomplished via automated jobs, scripts, or third-party tools.
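One way to picture the coordination requirement is a script that captures the CMS database dump and the FRS file store under a single timestamp, so a restore always pairs snapshots from the same moment. This Python sketch is illustrative only: all names are invented, and it assumes the system has been quiesced and that a database dump file has already been produced by your database tool:

```python
import shutil
import time
from pathlib import Path

def coordinated_backup(cms_dump_file, frs_root, backup_root):
    """Snapshot the CMS database dump and the FRS file store together.

    Both copies land under one timestamped folder, preserving the
    referential integrity between the CMS references and the FRS files.
    Returns the path of the backup folder that was created.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = Path(backup_root) / stamp
    target.mkdir(parents=True)
    shutil.copy2(cms_dump_file, target / "cms_system_db.dump")  # CMS side
    shutil.copytree(frs_root, target / "frs")                   # FRS side
    return str(target)
```

Because both stores are captured into the same folder in one run, a restore of that folder never mixes a CMS snapshot with FRS contents from a different point in time.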

XI 3.0 Command line BIAR file


The BIAR Engine (biarengine.jar) is available in BusinessObjects Enterprise XI 3.0 to support import and export of BIAR files to and from the repository. Apache Derby (a small-footprint database) is used for the BIAR-to-live-system and live-system-to-BIAR workflows: instead of loading all objects into memory, the system loads the objects into the Derby database temporarily before committing them to BusinessObjects Enterprise or to the BIAR file. By default, biarengine.jar is stored in C:\Program Files\Business Objects\common\4.0\java\lib. To run the BIAR engine, type on the command line:
java -jar biarengine.jar <properties file>
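The properties file holds the connection and export settings as key/value pairs. The fragment below is an illustrative sketch only; the exact key names, values, and paths shown are assumptions and should be verified against your version's documentation before use:

```properties
# Illustrative BIAR engine properties file: export objects from a live CMS
# to a BIAR file. All values here are placeholders.
action=exportXML
exportBiarLocation=C:/backup/content.biar
userName=Administrator
password=secret
CMS=cmsserver:6400
authentication=secEnterprise
exportDependencies=true
exportQueriesTotal=1
exportQuery1=select * from CI_INFOOBJECTS where SI_KIND='CrystalReport'
```

An import run would use a corresponding import action and point the engine at an existing BIAR file instead.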


Objects supported via the command line:
- Calendar, Event
- Category, Personal Category
- Universe Connection, Universe
- Crystal Report, Web Intelligence
- User/User Group/Folder
- WORD, PPT, XLS, PDF, RTF, TEXT
- Profile
- Program
- Object Packages
- Shortcut

Objects not supported via the command line:
- Dashboard Pages, Xcelsius, Flash
- Agnostic documents
- Analytic
- Discussions
- Inbox, Favorites Folder
- My InfoView
- Overload
- Publication, Encyclopedia
- Process Tracker
- Query as a Web Service
- Desktop Intelligence templates
- LOV and prompt groups
- Security settings on objects
- Business Views and associated objects

Understanding hot and cold/active and passive fail-over systems


A hot fail-over system (also known as an active/passive disaster recovery system) has one or more servers that have been added using clustering (or through the add server option in the CCM), and multiple servers are running at the same time. When a server fails, another server immediately takes on the responsibility of serving all requests.

A cold fail-over system consists of one or more servers that have been clustered. The primary server runs and performs all necessary work in the environment while the additional servers remain stopped. Many companies prefer the cold fail-over system because it has no impact on licenses. The obvious drawback of the cold fail-over system is the reliance on human intervention to start the cold server after the primary server has failed. This introduces a lag in up-time that may or may not be acceptable to some companies.

A highly available fail-over means that one server can adequately handle all capacity. Thus, if one goes down, the other server of the live cluster can handle the complete load. When both servers are running, the system is effectively running at half of its potential capacity. The hot fail-over system is considered a highly available fail-over system.
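The distinction between highly available and standard fail-over is essentially arithmetic about surviving capacity. The following Python sketch (invented helper names, illustrative numbers) makes the rule concrete:

```python
def surviving_capacity_fraction(total_servers, failed_servers):
    """Fraction of identical servers still running after failures."""
    return (total_servers - failed_servers) / total_servers

def is_highly_available(per_server_capacity, peak_load, total_servers, failed_servers=1):
    """Highly available fail-over: the remaining servers still cover peak load."""
    remaining_capacity = (total_servers - failed_servers) * per_server_capacity
    return remaining_capacity >= peak_load

# Two servers, each sized to carry the full peak load alone: highly available.
print(is_highly_available(per_server_capacity=100, peak_load=100, total_servers=2))  # True
# Two servers sized to carry the peak only together: standard fail-over.
print(is_highly_available(per_server_capacity=50, peak_load=100, total_servers=2))   # False
```

The first case matches the "running at half of its potential capacity" observation above: full redundancy is bought by deliberately over-provisioning each server.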


Standard fail-over refers to a system where both servers can handle the load together, but in a fail-over situation, capacity would become somewhat diminished.

For maximum efficiency, strive to meet these configuration suggestions when planning a backup system as part of your company's disaster recovery plan:
- The directory structure of the primary and backup systems should be identical.
- The ODBC layer and ODBC drivers of the primary and backup systems should be identical.
- The data source names of the primary and backup systems should be identical.
- Each server which connects to the reporting database must have the same version of the middleware for connecting to those database servers.
- Each machine should use the same account to run the services (either the system account or the same domain NT user account names and passwords).

Hot and cold/active and passive fail-over architecture


In this disaster recovery architecture the remote backup site is left completely switched off. In the usual state of events, you would point to the active site through a switchable domain name. While the active system is operational, you replicate the system database and FRS file store from the active to the passive site on a regular basis. Each system in each location is completely self-contained and does not rely on any component from another location. Each system has its own set of web servers, web application servers, BusinessObjects Enterprise tier components, system database, FRS file store and the necessary data sources.


In this architecture:
- All of the CMSs in a location point to their own local database (and do not point to a shared database in another location).
- All of the FRSs in a location point to their own local file store.
- All of the components in the passive location(s) are kept completely down and switched off.

Many organizations make use of SAN (storage area network) technology for both the FRS file store and for the data files on which the database resides. SANs have excellent replication features built into them, and many organizations employ these to regularly replicate the CMS system database and the FRS file store over to the passive locations.

Note: Do not maintain separate, independently updated system databases and file stores in each location; merely replicate the one active system database to the remote locations.

Setting up the Active/Passive Disaster Recovery


Start all of the services in all locations (not simultaneously) to allow the services to register with the cluster. This involves temporarily pointing any of the remote CMSs to the system database in the active location. Then, once all the services have registered, point the CMSs back to their own local databases.

The Failover Procedure


In the event of a complete failure of the primary location, you would:
1. Wait for the replication of the system database and FRS file store between the primary and backup sites to complete.
2. Start the database containing the replicated system data.
3. Start the BusinessObjects Enterprise tier.
4. Start the web application server and web server tier.
5. Point all of your end users from the failed site to the failover site (usually by instructing the hardware switch to forward all requests to the backup site rather than the original primary site). Another way to do this is to update the IP address in the DNS records to point to the backup site.
Note: The CMSs in the backup site must be completely switched off (and not simply disabled in the CMC) until failover. If a CMS stays active, it will register itself with the cluster.

Resuming normal operations


In the event that the primary site is available to come back online, the process would operate in reverse. You bring down the backup site entirely, and allow the system database and FRS file store to replicate back to the primary site. Then you restart the primary BusinessObjects Enterprise system and point the users back to the primary system. Since in this architecture you only have one active system database running at any one time (although it is in two places), any activity that occurred when the primary site was down will be honored when the original system gets restored. Any personal documents created while the system was running in the backup facility will appear when normality is resumed.


Active/Active Disaster Recovery


It is possible to run the primary and backup sites simultaneously (an active/active configuration). However, this is only possible under certain conditions:
1. Your licensing agreement can support both sites running simultaneously.
2. The latency between the data centers must be negligible (ideally less than 10 milliseconds, and certainly no more than 20 milliseconds).
The reason for this is that the Intelligence tier components were not designed to be split across networks with high latencies. For example, all the CMSs within a cluster need to be "close" to each other. However, it is quite possible for the reporting engines to be spread out across WANs.
Note: It is not recommended to run an active/active configuration unless you are running dark fibre between the two sites.

Ensuring Availability of your Business Intelligence Solution Learner's Guide



Review: Ensuring Availability of your Business Intelligence Solution


1. What is a disaster?
2. True or false? Fault tolerance is also referred to as graceful degradation.
3. True or false? BusinessObjects Enterprise XI 3.1 scales vertically but not horizontally.
4. Name two benefits of hardware load balancers.
5. BusinessObjects Enterprise comes with a tool to ease the deployment of web applications on supported web application servers. What is it called?
6. What is a CMS cluster?


Lesson summary
After completing this lesson, you are now able to:

- List and explain the factors that need to be considered when designing and deploying a highly available Business Intelligence solution
- Meet the challenges in discussing high availability
- Define high availability and fault tolerance concepts
- Describe high availability in the application tier
- Describe high availability in the intelligence tier
- Describe high availability in the processing tier
- Identify specific disasters and develop specific strategies for accommodating them
- Design a highly available system that supports specific business requirements




Lesson 4

Performance, Scalability and Sizing


Performance, Scalability and Sizing
This lesson provides an overview of how BusinessObjects Enterprise systems are designed. It discusses the various reasons for sizing a system, including the concept of scalability. The lesson includes a methodology you can use to calculate a sizing solution. A case study is introduced here to help you visualize how to analyze the business requirements and practice the sizing concepts learned.

Note: There are some differences in the sizing information for Publishing and Query as a Web Service (QaaWS) between BusinessObjects Enterprise XI 3.0 and XI 3.1. At the time of writing, the information pertaining to these differences was not available; therefore, the information contained in this lesson applies to BusinessObjects Enterprise XI 3.0.

After completing this lesson, you will be able to:

- Describe what makes a system scalable
- Size a BusinessObjects Enterprise deployment
- Design an architecture plan

Performance, Scalability and Sizing Learner's Guide


Designing a scalable system


The BusinessObjects Enterprise architecture is scalable in that it allows for a multitude of server configurations, ranging from standalone, single-machine environments to large-scale deployments supporting global organizations. The flexibility offered by the product's architecture allows you to set up a system that suits your current reporting requirements without limiting the possibilities for future growth and expansion.

After completing this unit, you will be able to:

- Define scalability
- List and describe general BusinessObjects Enterprise scalability considerations

What is scalability?
Scalability is the capacity to address additional system load by adding resources without fundamentally altering the implementation architecture or design. When building a scalable system, the main objective is to achieve a linear relationship between the amount of resources added and the resulting increase in performance, while maintaining transaction speed as additional users are added and more requests need to be processed.

There are two methods of scaling BusinessObjects Enterprise: vertical and horizontal. Vertical scaling, also referred to as scaling up, is the ability to improve system performance by taking advantage of the hardware on a single server. Horizontal scaling, also referred to as scaling out, is the ability to improve system performance by adding computers to the same enterprise solution. Adding memory to the machine on which the Web Intelligence Processing Server is running to improve performance is an example of vertical scaling. Adding a new machine to the enterprise system and configuring additional Crystal Reports Processing Server services on it is an example of horizontal scaling.

Depending on your situation, you can run all services on one machine or you can run them on separate machines. For example, you can run the Central Management Server and the Event Server on one machine, while you run the Adaptive Processing Server on a separate machine. The same service can also run in multiple instances on a single machine.

General BusinessObjects Enterprise scalability goals


When scaling your system, it is important to understand the general considerations for system scalability and how each of the BusinessObjects Enterprise servers is responsible for particular aspects of your system. You can scale a system to:

- Increase overall system capacity
- Increase scheduled reporting capacity
- Increase report viewing capacity
- Increase on-demand viewing capacity
- Increase interactive viewing capacity
- Improve web response speeds


- Configure a web farm for load balancing
- Optimize custom web applications

This section focuses on the different aspects of your system's capacity, discusses the relevant components, and provides a number of ways in which you might modify your configuration accordingly.

The overall performance and scalability of BusinessObjects Enterprise can be affected by external factors. When thinking about overall performance and scalability, keep these points in mind:

- BusinessObjects Enterprise depends upon your existing IT infrastructure. It uses your network for communication between servers and for communication between BusinessObjects Enterprise and client machines on your network. Make sure that your network has the bandwidth and speed necessary to provide users with acceptable levels of performance.
- BusinessObjects Enterprise processes reports against your database servers. If your databases are not optimized for the reports you need to run, the performance of BusinessObjects Enterprise may suffer.
- The way in which your network is configured, and the third-party tools that are used in conjunction with BusinessObjects Enterprise, may affect performance.

Increasing overall system capacity


As the number of report objects and users on your system increases, you can increase the overall system capacity by clustering two (or more) Central Management Servers (CMS). You can install multiple CMS services on the same machine. However, to provide server redundancy and fault tolerance, you should ideally install each cluster member on its own machine. CMS clusters can improve overall system performance because every BusinessObjects Enterprise request results, at some point, in a server component querying the CMS for information stored in the CMS database. When you cluster two CMS machines, you instruct the new CMS to share in the task of maintaining and querying the CMS database.

Increasing scheduled reporting capacity


All reports (Crystal Reports, Desktop Intelligence, and Web Intelligence) that are scheduled are processed by a job server. You can expand BusinessObjects Enterprise by running individual job servers on multiple machines or by running multiple job servers of different types on a single multi-processor machine.

Note: The Web Intelligence Processing Server is included here because it is responsible for the physical processing of a scheduled Web Intelligence document.

If the majority of your reports are scheduled to run on a regular basis, there are several strategies you can adopt to maximize your system's processing capacity:


- Install all job servers (Crystal Reports Job Servers, Desktop Intelligence Job Servers, Web Intelligence Job Servers) and the Web Intelligence Processing Server in close proximity to (but not on the same machine as) the database server against which the reports run. Ensure that the File Repository Servers are readily accessible to the installed services, so that requests to read report objects from the Input File Repository Server and to write report instances to the Output File Repository Server complete quickly. Depending upon your network configuration, these strategies may improve the processing speed of your scheduled documents because there is less distance for data to travel over your corporate network.
- Verify the efficiency of your Crystal Reports. When designing reports in Crystal Reports, there are a number of ways in which you can improve the performance of the report itself: modifying record selection formulas, using the database server's resources to group data, incorporating parameter fields, and so on.
- Verify the efficiency of your Desktop Intelligence reports. The same techniques apply when designing reports in Desktop Intelligence; most importantly, performance comes down to how well the universe on which the report is based is built.
- Use event-based scheduling to create dependencies between large or complex reports. For instance, if you run several very complex reports on a regular nightly basis, you can use schedule events to ensure that the reports are processed sequentially. This is a useful way of minimizing the processing load that your database server is subject to at any one time.
- If some reports are much larger or more complex than others, consider distributing the processing load through the use of server groups. For instance, you might create two server groups, each containing one or more Crystal Reports Job Servers. When you schedule recurring reports, you can specify that they be processed by a particular server group, ensuring that especially large reports are executed by more powerful servers.
- Increase the hardware resources that are available to your scheduling services. If a server is currently running on a machine along with other BusinessObjects Enterprise components, consider moving it to a dedicated machine. If the new machine has multiple CPUs, you can install multiple Crystal Reports Job Servers, Desktop Intelligence Job Servers, and/or Web Intelligence Processing Servers on the same machine (typically no more than one service per CPU).

Increasing report viewing capacity


Increasing on-demand viewing capacity
When you provide many users with View On Demand access to reports, you allow each user to view live report data by refreshing reports against your database server. For most requests, the Crystal Reports Processing Server retrieves the data and performs the report processing, and the Crystal Reports Cache Server stores recently viewed report pages for possible reuse. If users use the Advanced DHTML viewer, the Report Application Server (RAS) processes the request. If users use Web Intelligence, the Web Intelligence Processing Server processes the request.


For Desktop Intelligence requests, the Desktop Intelligence Processing Server retrieves the data and performs the report processing, and the Desktop Intelligence Cache Server stores recently viewed report pages for possible reuse, similar to how Crystal Reports objects are processed. The Connection Server is used when viewing Desktop Intelligence documents through InfoView when the preference is set to Desktop Intelligence format (Windows only).

If your reporting requirements demand that users have continual access to the latest data, you can increase capacity in the following ways:

- Increase the maximum allowed size of the cache for Crystal Reports and/or Desktop Intelligence reports.
- Verify the efficiency of your reports. When designing reports in Crystal Reports or Desktop Intelligence, there are a number of ways in which you can improve the performance of the report itself: modifying record selection formulas, using the database server's resources to group data, incorporating parameter fields, and so on. The efficiency of the design of your universes also impacts the processing speed of Crystal Reports and Desktop Intelligence reports.
- Increase the number of Crystal Reports Processing Servers that serve requests on behalf of any single Crystal Reports Cache Server. You can install additional Crystal Reports Processing Servers on multiple machines.
- Increase the number of Desktop Intelligence Processing Servers that serve requests on behalf of any Desktop Intelligence Cache Server. You can install additional Desktop Intelligence Processing Servers on multiple machines.
- Increase the number of Crystal Reports Processing Servers, Crystal Reports Cache Servers, Web Intelligence Processing Servers, Desktop Intelligence Processing Servers, Desktop Intelligence Cache Servers, Connection Servers, and Report Application Servers on the system, and then distribute the processing load through the use of server groups. For instance, you might create two server groups, each containing one or more Crystal Reports Processing Servers along with one or more Report Application Servers. You can then specify individual reports that should always be processed by a particular server group.

Increasing interactive viewing capacity


When you provide many users with interactive viewing capability, you allow each user to interact with live data by refreshing reports against your relational database server or OLAP data source. If your reporting requirements demand that users have continual access to and interactivity with live data, you can increase capacity in the following ways:

- Verify the efficiency of your reports. When designing reports with Web Intelligence, you will gain the most improvement by carefully investigating the efficiency of the underlying universe.
- Increase the number of Web Intelligence Processing Servers that create, view, and edit Web Intelligence portal requests.
- Increase the number of web application servers for viewing and editing OLAP Intelligence reports.


Improving web response speeds


Because all user interaction with BusinessObjects Enterprise occurs over the web, you may need to investigate a number of areas to determine exactly where you can improve web response speeds. These are some common aspects of your deployment that you should consider before deciding how to expand BusinessObjects Enterprise:

- Assess your web server's ability to serve the number of users who connect regularly to BusinessObjects Enterprise. Use the administrative tools provided with your web server software (or with your operating system) to determine how well your web server performs. If the web server is limiting web response speeds, consider increasing the web server's hardware and/or setting up a web farm (multiple web servers responding to web requests made to a single IP address).
- If web response speeds are slowed only by report viewing activities, you can increase scheduled reporting capacity and on-demand viewing capacity.
- Take into account the number of users who regularly access your system. If you are running a large deployment, ensure that you have set up a CMS cluster.
- If you find that a single application server (for example, the Tomcat Java web application server) inadequately serves the number of scripting requests made by users who access your system on a regular basis, consider the following options: increase the hardware resources that are available to the application server; if the application server is currently running on the web server or on a single machine with other BusinessObjects Enterprise components, move the application server to a dedicated machine; or set up two (or more) Java application servers. Consult the documentation for your Java web application server for information on load balancing, clustering, and scalability.
Note: If your deployment requires a higher level of failover and fault tolerance, exceeds three or more servers, or serves a large number of users who would be severely affected by any system outage, then it is strongly advised that you distribute the load of the application tier over multiple application servers balanced through a hardware load balancer. Each application server will connect to any CMS service running in the cluster, so the failure of one application server would only affect the application tier, not the processing tier.

Using a web farm for load balancing


A web farm is a group of two or more web servers working together to handle browser requests. Web farms are not currently supported by BusinessObjects Enterprise. That is not to say, however, that they won't work together: some customers have successfully deployed using web farms, but it is not a scenario that is tested during the QA cycle, as a web server is not required as part of a BusinessObjects Enterprise deployment. For Java environments specifically, there are no known issues with the use of any web server proxy components that are supported by the application server vendor or provider.


Note: For proxy load balancing, just as with hardware-based load balancing, the use of sticky sessions is required.

Optimizing custom web applications


If you are developing your own custom desktops or administrative tools with the BusinessObjects Enterprise Software Development Kits (SDK), be sure to review the libraries and APIs. You can now, for instance, incorporate complete security and scheduling options into your own web applications. You can also modify server settings from within your own code to further integrate BusinessObjects Enterprise with your existing intranet tools and overall reporting environment. In addition, be sure to check the developer documentation available on your BusinessObjects Enterprise product CD for performance tips and other scalability considerations. The query optimization section in particular provides some preliminary steps to ensuring that custom applications make efficient use of the query language.


Sizing a BusinessObjects Enterprise deployment


Every enterprise reporting deployment is unique; careful sizing calculations and additional post-installation testing are necessary to create a successful deployment.

After completing this unit, you will be able to:

- Define the sizing process
- Size a BusinessObjects Enterprise deployment

The sizing process


To effectively size a BusinessObjects Enterprise deployment, you need to perform these steps:

1. Determine the system load.
2. Determine the number of services required.
3. Determine the configuration of machines.
4. Perform system testing and tuning.

The following sections describe each step and provide sizing calculations as appropriate.
Note:

BusinessObjects Enterprise is a highly flexible system and there are many variables that could impact how an optimal configuration might look. This guide offers conceptual information and measurements that reflect observed BusinessObjects Enterprise system and component behavior. Sizing formulas in this course can assist in understanding the nature of relationships between user interactions and service functionality, as well as how this relates to CPU utilization, memory, and/or disk consumption. For the purposes of this course, most system and performance testing was conducted on machines averaging at 2.5 GHz clock speed with 2 GB memory per CPU. For optimal performance, it is recommended that you use the sizing guidelines to create an initial sizing plan. Once installed, you should test and tune the system according to the testing results, then retest to ensure the system meets your requirements before you deploy to a production system.

Step 1: Determining load


Load defines the amount and types of use and activity that will interact with the BusinessObjects Enterprise system. Load can be broken down into various types of user interactions and user types. When determining system load:

- Estimate the number of potential users (named users). Potential users are users with the ability to log onto the system.
- Estimate the number of concurrent active users.


Concurrent active users are users who are logged on and interacting with the system at the same time (clicking folders, viewing reports, scheduling, and so on). Note the distinction between concurrent active users and concurrent users: concurrent active users are all actively interacting with the system at various rates, while concurrent users are all logged on but may or may not be interacting with the system.

Finally, estimate the number of simultaneous requests. Simultaneous requests are actions performed by concurrent active users at the same time, and include such things as logging on, clicking folders, viewing a report page (cached or not), opening a Web Intelligence document, scheduling, refreshing, and so on.

Concurrent active users and simultaneous requests are the load types that will most impact the required resources and the appropriate configuration to support a high-performance and highly reliable BusinessObjects Enterprise system. These measurements can be estimated based on the number of potential users.

Estimating potential users


This is the easiest number to calculate, as this is the total population of users who have the ability to access the BusinessObjects Enterprise environment.

Estimating concurrent active users


When calculating the size and configuration of a deployment, it is important to determine the expected number of concurrent active users. Concurrency ratios are, on average, 10% to 20% of the total potential user base. This can vary significantly depending on the nature and breadth of the deployment, but it is a reasonable rule of thumb for planning purposes. A guideline for estimating the number of concurrent active users is:

concurrent active users = 10% to 20% of total potential user base

For example, at 10% of the total potential user base:

1000 potential users = 100 estimated concurrent active users
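The rule of thumb above can be expressed as a small calculation. This is an illustrative sketch only (the function name and the use of Python are our choices, not part of the product):

```python
# Rule-of-thumb concurrency estimate: concurrent active users are
# roughly 10% to 20% of the total potential user base.
def estimate_concurrent_active(potential_users, ratio=0.10):
    # ratio: assumed concurrency ratio, typically between 0.10 and 0.20
    return int(potential_users * ratio)

print(estimate_concurrent_active(1000))        # 10% -> 100
print(estimate_concurrent_active(1000, 0.20))  # 20% -> 200
```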

Estimating simultaneous requests


The quickest method for estimating the number of simultaneous requests is to calculate 10% of concurrent active users. For example:

1000 concurrent active users x 10% = 100 simultaneous requests

Dividing users into types based on how they use the system allows you to more accurately determine the number of simultaneous requests. The following process is an example of one methodology that might be used to estimate the number of simultaneous requests in more detail:


1. Classify users according to the load they place upon the system.
2. Calculate the percentage of concurrent users by usage type.
3. Determine the simultaneous usage rate.

Classifying users according to load


For the purposes of this calculation, users are divided into the following types:

- Heavy users: users who are constantly logged onto the system and viewing reports nearly continuously (100% request rate).
- Active users: users who are logged onto the system frequently throughout the day, averaging one request every four seconds (25% request rate).
- Moderate users: users who are logged onto the system from time to time throughout the day, averaging one request every eight seconds (12% request rate).
- Light users: users who log onto the system infrequently, view a couple of reports, and log out, averaging one request every 16 seconds (6% request rate).

Calculating the percentage of concurrent users by usage type


Once the user types are divided, you can calculate the percentage of each type of user.

Example
Assume there are 100 concurrent users in the system. The breakdown of concurrent users by type is as follows:
User type         Concurrent active users    Percentage of concurrent users
Heavy users       15                         15%
Active users      45                         45%
Moderate users    25                         25%
Light users       15                         15%
Total             100                        100%


Determining the simultaneous usage rate


Now that the percentage of each type of user has been determined, you can calculate the simultaneous usage rate based on user type.
Concurrent active users by type    Assumed simultaneous requests    Simultaneous usage rate
For every 100 heavy users          100                              100%
For every 100 active users         25                               25%
For every 100 moderate users       12                               12%
For every 100 light users          6                                6%

Using the simultaneous usage rate for each user type, you can then determine the overall number of simultaneous requests for the system with this formula:

((Concurrent Users) * (% of Heavy Users) * (Heavy user rate))
+ ((Concurrent Users) * (% of Active Users) * (Active user rate))
+ ((Concurrent Users) * (% of Moderate Users) * (Moderate user rate))
+ ((Concurrent Users) * (% of Light Users) * (Light user rate))
= Calculated Simultaneous Requests (rounded up)

For example:

((100) * (15/100) * (1.00)) + ((100) * (45/100) * (0.25)) + ((100) * (25/100) * (0.12)) + ((100) * (15/100) * (0.06)) = 31


Based on the assumption of 100 concurrent users and the types of activities each user is likely to perform (15% heavy, 45% active users, 25% moderate users, 15% light users), there will be an average of 31 simultaneous requests.
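As a cross-check, the weighted calculation above can be scripted. This is an illustrative sketch; the function and variable names are ours, and the user-type mix is taken from the worked example:

```python
import math

# Weighted simultaneous-request estimate: sum, over each user type, of
# (concurrent users) * (fraction of that type) * (request rate), rounded up.
def simultaneous_requests(concurrent, mix):
    """mix maps user type -> (fraction of users, request rate)."""
    total = sum(concurrent * frac * rate for frac, rate in mix.values())
    return math.ceil(total)

mix = {
    "heavy":    (0.15, 1.00),  # constant requests
    "active":   (0.45, 0.25),  # one request every 4 seconds
    "moderate": (0.25, 0.12),  # one request every 8 seconds
    "light":    (0.15, 0.06),  # one request every 16 seconds
}
print(simultaneous_requests(100, mix))  # -> 31
```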

Step 2: Determining the number of required services


The BusinessObjects Enterprise XI suite consists of many core services, some of which are essential to system operation and others that are optional. You must determine which services are required and how many are needed to provide the desired functionality and optimal performance. When sizing a system, you need to determine the number of services required for these servers:

Core Services
- Server Intelligence Agent (SIA)
- Central Management Server (CMS)
- Crystal Reports Cache Server
- Input/Output File Repository Server
- Adaptive Processing Server
  - Search Server
  - Client Auditing Proxy Service
  - Publishing Post Processing Service
  - Publishing Service
- Adaptive Job Server
  - Destination Delivery Scheduling Service
  - Program Scheduling Service
  - Publication Scheduling Service
  - Destination Configuration Service
  - Replication Service
- Event Server

Processing Tier
- Desktop Intelligence
  - Desktop Intelligence Processing Server
  - Desktop Intelligence Cache Server
  - Desktop Intelligence Job Server
  - Connection Server
- Web Intelligence
  - Web Intelligence Processing Server
  - Web Intelligence Job Server
- Crystal Reports
  - Crystal Reports Processing (Page) Server
  - Crystal Reports Job Server


- Report Application Server
- List of Values Job Server

Voyager
- Multi-Dimensional Analysis Server (MDAS)

Enterprise Performance Manager Services (EPM)
- Dashboard Manager
- Analytics

Application Tier
- Web Application Server
- Web Application Container Server (WACS)
- Voyager
- Query as a Web Service (QaaWS)
- Live Office

Note: The service threshold numbers are based on benchmark testing performed by Business Objects. These numbers are only intended to be used as a guideline to help you determine the size and configuration of your system. Once you perform your initial sizing calculations, you can adjust the sizing numbers and system configuration to accommodate specific system requirements. Sizing a deployment differs depending on the environment. There are many factors that will cause the threshold numbers to vary, such as the size and complexity of reports, the usage of the system, network speed, and size and configuration of servers.

Server Intelligence Agent


One Server Intelligence Agent (SIA) is required to run on each machine regardless of CPU configuration. The SIA maintains server status according to the settings you specify in the CMC. It processes the CMC's requests to start, stop, monitor, and manage all servers on the node, and it also monitors potential problems and automatically restarts servers that have shut down unexpectedly. The SIA ensures optimal performance by continually monitoring server status information, which is stored in the CMS database. When you change a server's settings or add a new server in the CMC, the CMS notifies the SIA, and the SIA performs the task. The SIA is automatically configured during installation, but you can change these settings through the CCM.

Memory requirement
Only memory needs to be considered to maintain the execution of the SIA; the recommended amount of RAM is 350 MB.


Central Management Server


The number of CMS services required in a system depends on:

- The number of concurrent active users
- The number of simultaneous requests involving viewing or querying CMS objects
- Whether there is a high volume of batch scheduling
- Whether software fault tolerance (clustering) is required

Processor requirements
The number of CPUs required to support CMS services is highly dependent on the type of CMS activity. For instance, large updates to the CMS system database (for example, adding or deleting a large number of users, or viewing or querying a large number of objects) use intensive CPU time. For increased CMS throughput and response times, allocate additional CPU resources. Follow these guidelines when determining the processor requirements for the CMS:

- One CPU for every 500 concurrent active users
- One CMS service for every 600-700 concurrent active users
- One CPU for every estimated 100 simultaneous requests
- Consider clustering CMSs if there are 600 or more concurrent active users or if software fault tolerance is required

Example
Question: What is the estimated number of CMS services and CMS CPUs required to support 4000 concurrent active users in a highly active system?

a) 4000 concurrent active users / 500 concurrent active users per CPU = 8 CPUs
b) 4000 concurrent active users / 700 concurrent active users per CMS service = 5.71 services (round up to 6 CMS services)

Answer: To support 4000 concurrent active users in a highly active system, it would take six CMS services installed across eight available CPUs.

Note: This example is a guideline only, as capacity numbers are highly dependent upon other factors such as CPU speed, network, database connectivity, and so on.

Note: At least two CMS services are required for fault tolerance; if one CMS service is shut down, another CMS service takes over the workload.
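The guideline arithmetic above can be sketched as a small helper. This is illustrative only; the function name and the minimum-of-two-services default (based on the fault tolerance note) are our assumptions:

```python
import math

# CMS sizing sketch using the guideline ratios: one CPU per 500 concurrent
# active users, one CMS service per roughly 700 concurrent active users.
def size_cms(concurrent_active, users_per_cpu=500, users_per_cms=700,
             fault_tolerance=True):
    cpus = math.ceil(concurrent_active / users_per_cpu)
    services = math.ceil(concurrent_active / users_per_cms)
    if fault_tolerance:
        services = max(services, 2)  # at least two CMS services for failover
    return cpus, services

print(size_cms(4000))  # -> (8, 6), matching the worked example
```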

CMS clustering across subnets


A cluster that has two or more CMS cluster members on different subnets is technically possible and has been QA tested. This configuration is supported by Business Objects, strictly provided that no significant additional network latency is created as a result of the additional subnet.


The most important factor to ensure efficient CMS clustering performance is to eliminate excessive latency between CMS services and the CMS Database. For example, CMS1 and the CMS system database are located in the same datacenter in New York. CMS2 is a member of the same cluster as CMS1 but is located in China and must communicate with the CMS database in New York. Excessive network latency of CMS2 in China to the CMS database in New York would be problematic. Ensure that all CMS members of a cluster have uniform communication speeds to the system database. For best performance, run each CMS cluster member on a machine that has the same type of CPU. For more detailed clustering information refer to the online BusinessObjects Enterprise XI Administrator's guide.

Memory requirements
For best performance, run each CMS cluster member on a machine that has the same amount of memory. Memory usage is driven largely by the number of objects stored in the object cache, which is governed by the Windows registry key "MaximumObjectsToKeepInMemory". This key specifies the maximum number of objects that the CMS stores in its memory cache. Increasing the number of cached objects reduces the number of database calls required and greatly improves CMS performance; however, caching too many objects may leave the CMS with too little memory to process queries. The upper limit for this setting is 100,000 and the default is 10,000.

Calculating the database file size


To determine the amount of memory to put on the CMS machine, you need to estimate how many objects will be in the system. Each row in the system database stores one BusinessObjects Enterprise object, and the average size of a BusinessObjects Enterprise object is 1024 bytes. Use this calculation to determine the database file size:

(Size of Data + Size of Indexes) * (1 + (1 - page fill factor))
= ((Number of Rows * 1024) + Number of Indexes * (32 * Number of Rows * 2)) * 1.3
= 1.3 * Number of Rows * (1024 + Number of Indexes * 64)
= 1.3 * Number of Rows * 2624

This table lists the estimated database size based on the number of objects contained within a BusinessObjects Enterprise system:
Number of objects    Database size (KB)
10,000               33,312
100,000              333,125
1,000,000            3,331,250
10,000,000           33,312,500

If you are using Sybase, one LOB per data page is allocated, so use this calculation to determine the database file size:

((Number of Rows * 1024) + (Number of Rows * Page Size) + Number of Indexes * (32 * Number of Rows * 2)) * 1.3
= 1.3 * Number of Rows * (2624 + Page Size)

This table lists the estimated Sybase database size based on the number of objects contained within a BusinessObjects Enterprise system:

Number of objects    Database size (KB)
10,000               87,360
100,000              873,600
1,000,000            8,736,000
10,000,000           87,360,000
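Both file-size formulas can be checked with a short sketch. The 25-index count is inferred from the worked constant (1024 + 25 * 64 = 2624), and the 4,096-byte Sybase page size is an assumption chosen to reproduce the published table values; treat both defaults as illustrative.

```python
def cms_db_size_bytes(num_rows, num_indexes=25):
    """Generic formula from this section: 1.3 * rows * (1024 + indexes * 64).
    With 25 indexes (an inferred assumption) this reduces to 1.3 * rows * 2624."""
    return int(1.3 * num_rows * (1024 + num_indexes * 64))

def sybase_db_size_bytes(num_rows, num_indexes=25, page_size=4096):
    """Sybase variant: 1.3 * rows * (2624 + page size), one LOB per data page.
    The 4,096-byte page size is an assumption, not a value from the text."""
    return int(1.3 * num_rows * (1024 + page_size + num_indexes * 64))

print(cms_db_size_bytes(10_000) // 1024)  # 33312, matching the first table (KB)
print(sybase_db_size_bytes(10_000))       # 87360000 bytes, matching the Sybase row
```

The sketch reproduces the tabulated orders of magnitude; for capacity planning, substitute your own object counts and database page size.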

Note: Many factors can increase the amount of memory required by the CMS. For example, report instances that contain numerous prompts will be larger than average, as every string prompt requires 256 bytes.

In normal operation, the CMS keeps only the most recently accessed objects in memory. During periods of rapid object access, such as batch reporting, the CMS may exceed the specified amount of memory. The more individual accounts and objects the CMS needs to load, the more memory it uses. In addition, the more objects contained in a particular folder level for a user, the more overhead is created on the CMS when the user accesses that folder level.

The CMS unloads any users that are not currently logged onto the system. Whenever a session is released, the user is logged off of the CMS, and the CMS unloads any object sessions associated with that user.

Crystal Reports Cache Server


The threshold that determines the number of Crystal Reports Cache Server services required is the maximum simultaneous processing threads per Crystal Reports Cache Server service. Note: The number of simultaneous processing threads is equal to the number of simultaneous requests.


Processor requirements
Follow these guidelines when determining the processor requirements for the Crystal Reports Cache Server:
- 200 maximum simultaneous processing threads per CPU
- 400 maximum simultaneous processing threads per Crystal Reports Cache Server service

Memory requirements
Follow this guideline when determining the memory requirements for the Crystal Reports Cache Server:
- Estimate one MB per simultaneous processing thread, plus a 17 MB base

Example
Question: You have a highly active system using ActiveX as the predominant preview engine. What is the estimated number of Crystal Reports Cache Server services and Crystal Reports Cache Server CPUs required to support 4000 concurrent active users generating 400 simultaneous requests?

a) Number of CPUs necessary to support the concurrent active users: 400 simultaneous requests / 200 simultaneous requests per CPU = 2 CPUs
b) Number of services necessary to support the concurrent active users: 400 simultaneous requests / 400 simultaneous requests per service = 1 service

Answer: It will take one Crystal Reports Cache Server service installed on a dual-CPU machine to support 4000 concurrent active users generating 400 simultaneous requests in a highly active system. Alternatively, it will take two Crystal Reports Cache Server services installed across two single-CPU machines.
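Using the Cache Server thresholds above (200 threads per CPU, 400 threads per service), the example works out as follows; the helper below is an illustrative sketch, not a product tool.

```python
import math

THREADS_PER_CPU = 200      # max simultaneous processing threads per CPU
THREADS_PER_SERVICE = 400  # max threads per Crystal Reports Cache Server service

def size_cache_server(simultaneous_requests):
    """Return (cpus, services) for the Crystal Reports Cache Server tier.
    Simultaneous processing threads equal simultaneous requests."""
    cpus = math.ceil(simultaneous_requests / THREADS_PER_CPU)
    services = math.ceil(simultaneous_requests / THREADS_PER_SERVICE)
    return cpus, services

print(size_cache_server(400))  # (2, 1), matching the worked example
```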

File Repository Servers (FRS)


The Input File Repository Server manages objects that have been published to the system by administrators or end users. These objects include Crystal Reports, OLAP Intelligence reports, Web Intelligence documents, program objects, Microsoft Excel, Word, and PowerPoint files, Adobe Acrobat PDFs, rich text format files, text files, hyperlinks, and object packages, published using the Publishing Wizard, the Central Management Console, the Import Wizard, or a Business Objects designer component such as Crystal Reports or the Web Intelligence Java or HTML Report Panels.

The Output FRS maintains all the instances that have been produced from reports (Crystal or Web Intelligence), programs, and object packages that have been scheduled.

Repository location
You may have multiple Input and Output FRS services on one or several machines to support a high-availability environment; however, the FRS services behave in an active/passive fashion, whereby the first available FRS is active and all other FRS services remain passive unless the active FRS becomes unavailable. The Input and Output FRS do not have to reside on the same machine. The location of the FRS repositories is managed through the CMC, in the Servers section under the Properties tab.

Note: To optimize system performance on the File Repository Servers, the network settings on Windows 2000 Server can be set to "Maximize Throughput for File Sharing". This gives a higher priority to file-sharing applications.

Calculating the number of FRSs required


At least one Input and Output File Repository Server is required. In larger deployments, there may be multiple Input and Output File Repository Servers for redundancy. In this case, all Input File Repository Servers must share the same directory. Likewise, all Output File Repository Servers must share a directory.

Processor requirements
The File Repository Servers require higher I/O resources (faster disk and network) and fewer CPU resources. When estimating the number of CPUs in the BusinessObjects Enterprise system, the File Repository Servers are not counted, as they place little demand on system processors and memory.

Disk requirements
Enough disk space must be available to store files. Typically the Output FRS will require more disk space than the Input FRS. The Output FRS maintains all the instances (with saved data) that have been produced from reports (Crystal or Web Intelligence), programs, and object packages that have been scheduled, and as such will require proportionately more disk space. For both the Input and Output FRS, the amount of space required will vary from system to system; however, knowing the average file size and multiplying this by the number of projected instances will assist in estimating total disk needs.

Adaptive Processing Servers and Adaptive Job Servers


The Adaptive Processing Server (APS) and Adaptive Job Server (AJS) are Java host containers that run other services.

The APS can host one or more of the following services: the Search Service, the Client Auditing Proxy Service, the Publishing Post Processing Service, and the Publishing Service.

The AJS can host one or more of the following services: the Destination Delivery Scheduling Service, the Publication Scheduling Service, the Program Scheduling Service, the Replication Service, and the Destination Configuration Service.

Note: Both the Adaptive Processing Server and the Adaptive Job Server are multi-threaded Java-based processes capable of managing and running separate concurrent sessions for their various services.


Search Server
This service provides the Content Search capability in InfoView. The Search Service extracts content from documents to create the index, and runs queries against the index. It is hosted by the Adaptive Processing Server. If your system has a large number of simultaneous search requests, you can set up several Search Services to balance the load between them.

Load: 25 per CPU
Memory: 500 MB
Disk space: same size as the FRS

Client Auditing Proxy Service


The Client Auditing Proxy Service manages the logging of events for all BusinessObjects Enterprise services.

Load: N/A
Memory: 500 MB
Disk space: N/A

Services for Publishing


Publishing is the process of delivering documents such as Crystal reports, Web Intelligence documents, and Desktop Intelligence documents to large sets of recipients.

Publishing Post Processing Service: This service is responsible for any post-processing of a publication job, including PDF merging and publication extension processing. Refer to the BusinessObjects Enterprise Publisher's Guide for more information on this service.

Publishing Service: This service is core to any publication processing. It dispatches to and coordinates with other server components to complete the processing of a publication.

Web Intelligence Scheduling and Publication Service: This service is used when processing scheduled Web Intelligence documents and scheduled publications that include Web Intelligence documents.

Adaptive Processing Server


The following practices are recommended when configuring the Adaptive Processing Server:
- Create multiple instances of the Adaptive Processing Server if multiple publications have to run concurrently: three publications per APS instance.
- Increase the Java heap size to 1024 MB.
- Run the Publishing Service and the Publishing Post Processing Service on separate Adaptive Processing Servers.


Processor requirements
Increase the number of concurrent JobServerChild processes to five JobServerChild processes per CPU.

Hard disk recommendations


Using a faster disk input/output mechanism will achieve better performance:
- Use striped disks.
- Move the Output FRS to a dedicated clustered node with striped disks.
- Physically separate the Input and Output FRSs; do not share a disk controller between Input FRS disks and Output FRS disks.
- For publications delivered to a large number of external recipients, for example via email, consider not storing published document instances in the BusinessObjects Enterprise folder: clear "Default Enterprise Location" from the list of destinations.

Memory requirements
The following table contains the memory requirements for publishing:
BusinessObjects Enterprise server                                Physical memory required per instance of server (MB)
CMS                                                              624
Adaptive Processing Server                                       1024
Crystal Reports/Desktop Intelligence Job Server child process    200
Crystal Reports/Desktop Intelligence Job Server (parent)         12

Program Scheduling Service


The Program Scheduling Service manages the schedule of executable objects. Memory and disk space consumption vary depending on the runtime requirements of the individual executable.

Load: N/A
Memory: N/A
Disk space: N/A


Replication Service
The Replication Service is responsible for managing Replication Jobs in the Federation process. There is no sizing information specific to the Replication Service. Replicated content is sent via Web Services and in a default installation of BusinessObjects Enterprise, all of its Web Services utilize the same web service provider. This means that larger Replication Jobs may tie up the web service provider longer and slow down its response to other web service requests as well as any applications it serves. If you plan to replicate a large number of objects at once, or run several Replication Jobs in sequence, you may consider deploying Federation Web Services on its own Java Application server using your own web services provider.

Memory requirements
If your single Replication Job replicates many objects, or if you are sharing the Application Server with other applications, it is recommended to increase the available memory your Java Application Server can use. If you deployed BusinessObjects Enterprise and Tomcat, the default available memory is one GB. You can increase this to 2GB if needed.

Event Server
The Event Server manages file-based events. When you set up a file-based event within BusinessObjects Enterprise, the Event Server monitors the directory that you specified. When the appropriate file appears in the monitored directory, the Event Server triggers your file-based event. That is, the Event Server notifies the CMS that the file-based event has occurred. The CMS then starts any jobs that are dependent upon your file-based event.

Processor and memory requirements


Under normal enterprise usage, the Event Server is not a processing- or memory-intensive server, and as such it is not weighted in the sizing process. If Event Server functionality is required, include this service in the system, but do not estimate any additional CPUs for it.

Desktop Intelligence Servers


The Desktop Intelligence Processing Server, Desktop Intelligence Cache Server, and Desktop Intelligence Job Server have their own sizing requirements.

Processor requirements
Every user request for a Desktop Intelligence document is passed to the Desktop Intelligence Cache Server if the viewer preference is set to the HTML viewer. If the information is located in the cache, the result is returned to the user without further processing. If the information is not available in the cache, the request is sent to the Desktop Intelligence Processing Server.

To assist with high availability, the Desktop Intelligence Processing Server creates a subprocess for each document (instead of handling multiple documents in the same executable). The maximum number of subprocesses can be adjusted through a configuration setting in order to avoid system saturation. Similarly, the way subprocesses are recycled to process different documents can also be adjusted, by specifying the inactivity timeout after which a subprocess can be released.

Note: Processor requirements will vary depending on the size and complexity of the documents and the type of action being performed (view vs. refresh).

Follow these guidelines when determining processor requirements for the Desktop Intelligence Cache Server:
- 200-400 simultaneous requests per Desktop Intelligence Cache Server service
- 50 simultaneous requests per CPU

Follow these guidelines when determining processor requirements for the Desktop Intelligence Processing Server:
- One Desktop Intelligence Processing Server service installed per machine
- 8-12 simultaneous requests per CPU
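As a rough sketch of the guidelines above, the following helper estimates CPU counts for the two Desktop Intelligence tiers. The 10-requests-per-CPU default for the Processing Server sits in the middle of the 8-12 guideline range; both constants are guideline values, not measured limits, and the function name is illustrative.

```python
import math

def size_deski(simultaneous_requests, per_cpu_cache=50, per_cpu_proc=10):
    """Return (cache_server_cpus, processing_server_cpus) for Desktop
    Intelligence, using the per-CPU request guidelines from this section."""
    cache_cpus = math.ceil(simultaneous_requests / per_cpu_cache)
    proc_cpus = math.ceil(simultaneous_requests / per_cpu_proc)
    return cache_cpus, proc_cpus

print(size_deski(100))  # (2, 10)
```

As the note above says, actual requirements vary with document size, complexity, and the view-versus-refresh mix, so treat these numbers as starting points.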

Memory requirements
The amount of memory required specifically for Desktop Intelligence depends on the number of Desktop Intelligence report users, the volume and size of the documents they use and the actions performed on the documents.

Memory Requirements Desktop Intelligence Cache Server


When determining memory requirements for the Desktop Intelligence Cache Server, consider these points:
- The maximum size of the Desktop Intelligence Cache Server cache is set in the CMC. The default value is 100 MB. This value sets the maximum amount of physical memory for the cache within the fccache process; however, the total physical memory footprint of the Desktop Intelligence Cache Server process will be almost twice that amount.
- If the cache size set in the CMC is excessively large, the Desktop Intelligence Cache Server will continue to use the space until it reaches the maximum, and will then launch the cache cleanup mechanism. Once cleaned up, the cache size (and the process size) remain the same; in other words, the cleanup mechanism does not reduce the size of the memory footprint.


- If the cache size set in the CMC is too small, the Desktop Intelligence Cache Server process will be forced to use temporary files on disk as part of the cache. The resulting disk I/O will have a negative impact on performance.
- The "amount of cache to keep when document cache is full" parameter is calibrated for a good balance between maintaining enough copies of documents in the cache and releasing enough space for new ones. However, it can be tuned for specific needs if necessary.

Desktop Intelligence Processing Server


The initial memory size for the Desktop Intelligence Processing Server is 22 MB. Subprocesses have an initial memory size of 70 MB; however, this size will grow according to the size and complexity of the processed document.

Enterprise Performance Manager (EPM)


Performance Management uses nine server processes:
EPM Service                              Role
AA Alert & Notification Server           Alerting services
AA Analytics Server                      Analytic processing, rendering, and transformation services
AA Dashboard Server                      Dashboard management services
AA Individual Profiler Server            Individual analytic services
AA Metric Aggregation Server             Metrics services
AA Predictive Analytic Server            Predictive analytic services
AA Repository Management Server          Performance Manager repository services
AA Set Analyzer Server                   Set analytic services
AA Statistical Process Control Server    Process control analytic services

The AA Dashboard and AA Analytics servers are the principal servers required for heavy loads. Multiple instances of these servers can run at the same time, as they are enabled for load balancing. The other servers can also be deployed on several machines; however, only one instance can be active at a time, with the other instances functioning as backups.

While analytics can be based on universe queries, performance management analytics interact with metrics to render their information. Because they work with a much smaller amount of data and with significantly less complex queries, they can run more quickly. In addition, analytics are managed through a process called AA Analytic, which is a multi-threaded object that can handle the processing required by hundreds of concurrent analytic requests.


Processor requirements
Although the AA Analytics and AA Dashboard servers can expand to several CPUs, the best throughput is obtained by running one service (either AA Analytics or AA Dashboard) per CPU.

For sizing estimates on AA Analytics, it is recommended to use a range of 18 to 40 simultaneous requests per available CPU. This is highly dependent on the complexity of the analytics.

For sizing estimates on AA Dashboards, it is recommended to use a maximum of 40 simultaneous requests per available CPU. This is highly dependent on the number of analytics displayed on the dashboard and the type of action.

Example configuration
If 160 concurrent active users are expected to access a performance management dashboard, and assuming a conservative maximum of 40 simultaneous requests per processor, a potential configuration is: one quad-CPU machine with two AA Analytics and two AA Dashboard services, where each service is configured to support 40 concurrent active users.
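The example configuration follows directly from the 40-requests-per-CPU ceiling; a minimal sketch (function name illustrative, and the per-CPU figure is the conservative guideline value):

```python
import math

REQUESTS_PER_CPU = 40  # conservative max simultaneous requests per processor

def epm_services_needed(concurrent_users):
    """One AA Analytics or AA Dashboard service per CPU, 40 users per CPU."""
    return math.ceil(concurrent_users / REQUESTS_PER_CPU)

# 160 users -> 4 CPUs/services, e.g. 2 AA Analytics + 2 AA Dashboard on a quad.
print(epm_services_needed(160))  # 4
```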

Memory requirements
Memory requirements will vary depending on the design of a report and the types of actions being performed (viewed, modified, refreshed). A general guideline for sizing the AA Analytics server is 200 MB per service. A general guideline for sizing the AA Dashboard server is a base value of 120 MB per service.

Note: In a fashion similar to report processing elements, the AA Dashboard server's memory footprint grows each time an analytic is shown in the dashboard, so it is mostly dependent on the number and complexity of the analytics presented.

Scalability
The scalability of PM depends on both the power of the machine on which the PM server processes are deployed and the power of the database machine, particularly when the metrics use live data.

Web Intelligence Processing Server


The Web Intelligence Processing Server has the following sizing requirements.

Processor requirements
The Web Intelligence Processing Server will expand to as many CPUs as needed and can process several documents in parallel, so one Web Intelligence Processing Server is required per machine.


Follow these guidelines when determining processor requirements for the Web Intelligence Processing Server:
- 25-40 maximum simultaneous connections per CPU
- One service per CPU

Sample configuration: If 100 concurrent active users are expected to be viewing or modifying Web Intelligence documents, a typical optimal configuration is one quad-processor machine with four Web Intelligence Processing Server services installed, each supporting 25 concurrent active users, for a total of 100.
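The sample configuration can be reproduced from the guideline numbers (25-40 connections per CPU, one service per CPU); the sketch below defaults to the conservative value of 25 and is illustrative only.

```python
import math

def size_webi_processing(concurrent_users, connections_per_cpu=25):
    """Return (cpus, services) for the Web Intelligence Processing Server tier.
    connections_per_cpu defaults to the low end of the 25-40 guideline range."""
    cpus = math.ceil(concurrent_users / connections_per_cpu)
    services = cpus  # one Web Intelligence Processing Server service per CPU
    return cpus, services

print(size_webi_processing(100))  # (4, 4), matching the sample configuration
```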

Memory requirements
Memory requirements will vary depending on the design of a report and the types of actions being performed (viewed, modified, refreshed). A "refresh" request demands the greatest amount of memory for a Web Intelligence document, as the database is queried and the entire dataset is transferred to the Web Intelligence server.

When using several very large documents, it may be necessary to increase the number of Web Intelligence Processing Server services to more than one per machine, in order to avoid reaching the 2 GB user process address space limit. More Web Intelligence Processing Servers can be added as necessary to satisfy the memory requirements. The following sections describe various strategies to manage this; note that these are highly dependent on the type of documents and the way the Web Intelligence Processing Server service is used. Load balancing between the Web Intelligence Processing Servers is achieved automatically.

Physical Address Extension support (PAE)


QA testing of BusinessObjects Enterprise XI has used the /PAE switch to increase memory access. The /PAE switch changes the addressing mode to allow the O/S to access more than 4GB of RAM. By using the /PAE switch each process is still limited to 2GB of user addressable space, but the system can have more of these large processes running at once.

Cache clean
This kind of workflow corresponds to opening a Web Intelligence document from the cache, and does not necessitate a regeneration of the document.

Note: Some documents are not cacheable by design. For instance, documents that contain "current time" or "current user" cannot be kept in the cache, as the construction of the document differs for every access.

The performance and scalability in this case mainly depend on disk I/O performance when accessing the storage folder. You can optimize disk performance on:

C:\Program Files\Business Objects\BusinessObjects Enterprise 11.5\Data\<cms name>\storage

or the equivalent directory path on a UNIX system.

Cache dirty
This kind of workflow corresponds to opening a Web Intelligence document that necessitates a regeneration of the document without accessing the database; for instance, if the document contains some data that cannot be cached. In this case, the performance and scalability mostly depend on the CPU power and the memory size of the system where the Web Intelligence Processing Server service is running.

The memory requirement can be calculated as follows:

If X = the initial memory size of the Web Intelligence Processing Server process
and Y = the memory size of the Web Intelligence Processing Server after opening a typical Web Intelligence document
then Y - X = the memory size necessary to open a typical document
and Z = 1.5 GB / (Y - X) = the number of typical documents that can be opened simultaneously by one Web Intelligence Processing Server

Note: This number is very conservative, as there are typically many more concurrent users than documents opened simultaneously.
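The Z = 1.5 GB / (Y - X) calculation can be sketched as follows. The X and Y figures in the usage line are hypothetical sample measurements; in practice they must be observed on your own system.

```python
def webi_docs_per_server(initial_mb, after_open_mb, budget_mb=1536):
    """Z = 1.5 GB / (Y - X): how many typical documents one Web Intelligence
    Processing Server can hold open simultaneously.
    initial_mb = X, after_open_mb = Y, budget_mb defaults to 1.5 GB in MB."""
    per_doc_mb = after_open_mb - initial_mb  # Y - X, cost of one document
    return budget_mb // per_doc_mb

# Hypothetical measurements: 250 MB at start, 310 MB after opening one document.
print(webi_docs_per_server(250, 310))  # 25 typical documents
```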

Refresh
This kind of workflow corresponds to opening a Web Intelligence document and refreshing it. In this case, access to the reporting database is required to get the data and regenerate the document. The performance and scalability of these workflows depend on:
- The number of simultaneous connections that can be made to the database
- The database power (memory and CPU)
- The memory and CPU of the middle tier (same calculation as for the Cache Dirty workflow above)

The performance of the database can be highly optimized if the database system is configured to cache SQL queries. This avoids recalculating queries that are found to be identical, as can happen when one user refreshes a report and another user then performs the same refresh without knowing the first user has already done so. If SQL query caching is enabled, the refresh for the second user will be much faster than that of the first.


Web Intelligence Job Server


The Web Intelligence Job Server processes scheduling requests it receives from the CMS for Web Intelligence documents. It forwards these requests to the Web Intelligence Processing Server, which will generate the instance of the Web Intelligence document. The Web Intelligence Job Server does not actually generate object instances.

Processor requirements
The Web Intelligence Job Server has a comparable function to the Crystal Reports Job Server in that it is responsible for handling scheduled jobs. However, the Web Intelligence Job Server does not actually "process" reports; it only acts as a scheduling manager or "router", sending jobs to be processed by the Web Intelligence Processing Server.

Note: One parameter to be aware of is -requestTimeout N, where N is in milliseconds; the default is 600000 and the lowest allowed value is 30000. If you expect that scheduled reports will run longer than 10 minutes, this setting should be increased.

- One CPU can optimally support five maximum jobs (processes), higher or lower depending on report complexity and size
- One Web Intelligence Job Server service can handle 20 simultaneous requests
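Taking the two guideline figures at face value (five jobs per CPU, 20 simultaneous requests per Job Server service), a rough sizing sketch follows; the function name and rounding are illustrative.

```python
import math

def size_webi_job_server(simultaneous_scheduled_jobs):
    """Return (cpus, services) for the Web Intelligence Job Server tier,
    at five jobs per CPU and 20 simultaneous requests per service."""
    cpus = math.ceil(simultaneous_scheduled_jobs / 5)
    services = math.ceil(simultaneous_scheduled_jobs / 20)
    return cpus, services

print(size_webi_job_server(40))  # (8, 2)
```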

Disk requirements
For the Web Intelligence Job Server service, sufficient hard drive space should be available in the temp directory for the creation of temporary files during report processing. The data from the database server is stored in these files until it can be saved and compressed in the report. Hard drive access speed to the temp directory may have an impact on the speed at which a report processes. Optimize disk performance on:

C:\Program Files\Business Objects\BusinessObjects Enterprise 11.5\Data\procSched\<machinename>.Web_IntelligenceJobServer

or the equivalent directory path on a UNIX system.

Crystal Reports Processing Server


The Crystal Reports Processing Server creates Processing Server subprocesses. Each subprocess loads CRPE and then initiates threads, or print jobs, as needed in its own memory space. If an individual print job fails for any reason, only the threads contained in that Crystal Reports Processing Server subprocess are affected; all other subprocesses within the Crystal Reports Processing Server service are unaffected. In addition, individual subprocesses are shut down after a set number of requests and a new subprocess is started, if required, to maximize resource management.

Crystal Reports Processing Server Definitions


Processing Server Service: The service that manages subprocesses.
Processing Server Service Subprocess: A process responsible for managing report jobs.
Report Job: A thread responsible for generating the report pages requested by report viewers.
Maximum Simultaneous Report Jobs: The total number of report jobs that can be contained in a Processing Server service.

Algorithm used when Maximum Simultaneous report jobs is set to unlimited


For the Crystal Reports Processing Server service, the number of Crystal Reports Processing Server subprocesses and the total maximum number of simultaneous viewing report jobs are determined by the following default algorithm:

Max # of report jobs (threads) = (# of CPUs) * 25, with a minimum of 50 on a single CPU
Max # of subprocesses = (Max # of report jobs) / 10 report jobs per subprocess (rounded up)
Max # of processes = (Max # of subprocesses) + 1 parent process

The maximum number of report jobs per subprocess is fixed at 10.

Note: The preceding calculations only come into effect when the Maximum Simultaneous Report Jobs setting is set to Unlimited. By default, this algorithm determines the maximum number of simultaneous report jobs on a particular machine. The algorithm has been purposely tuned conservatively to favor reliability (a lower number of simultaneous report jobs per CPU) so that it works optimally in most reporting environments and configurations. The default can easily be overridden in the CMC (in the Crystal Reports Processing Server properties): the "Jobs limited to" option gives the administrator the ability to increase or decrease the maximum number of simultaneous report jobs that can run on a single Crystal Reports Processing Server service (parent process).
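The default algorithm can be expressed directly in code; this is a sketch of the calculation above, not product source.

```python
import math

def crpe_unlimited_defaults(num_cpus):
    """Defaults used when Maximum Simultaneous Report Jobs is set to Unlimited."""
    max_jobs = max(num_cpus * 25, 50)            # minimum of 50 on a single CPU
    max_subprocesses = math.ceil(max_jobs / 10)  # 10 report jobs per subprocess
    max_processes = max_subprocesses + 1         # plus one parent process
    return max_jobs, max_subprocesses, max_processes

print(crpe_unlimited_defaults(4))  # (100, 10, 11)
```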

Comparison of Crystal Reports Job Server and Crystal Reports Processing Server
The Crystal Reports Processing Server is designed to process a large set of smaller reports, whereas the Crystal Reports Job Server is designed to process a smaller set of very large reports. Smaller reports are less complex and contain a smaller set of data. They are suitable for a large group of users to view as on-demand reports (live data). Larger more complex reports that must retrieve and process a very large set of data should be scheduled (saved data).

Crystal Reports Processing Server deployment


You may choose to deploy the Crystal Reports Processing Server on a dedicated machine, on a shared machine, or in a server group. Only one Crystal Reports Processing Server service is required per machine.


The Crystal Reports Processing Server creates new subprocesses and stops existing subprocesses on an as-needed basis. If you set the Maximum Simultaneous Report Jobs setting for the Crystal Reports Processing Server to Unlimited, the server will automatically adjust to the necessary number of subprocesses. The Crystal Reports Processing Server detects the number of processors on the machine and scales accordingly. If the default of Unlimited is used and an additional Crystal Reports Processing Server service is added, both services will use the default algorithm: Max # of report jobs (threads) = (# of CPUs) * 25.

Note: Multiple Crystal Reports Processing Server services should not be installed on a single server unless there is a need to serve multiple server groups. In this case, it is recommended that the total maximum simultaneous report jobs for all installed Crystal Reports Processing Server services on the server not exceed 25 per CPU.

Dedicated Crystal Reports Processing Server machine


With dedicated Crystal Reports Processing Servers, it is advisable to use the default setting of Unlimited for the Maximum Simultaneous Report Jobs setting. When using the default of Unlimited, the maximum number of simultaneous report jobs is calculated as 25 x the number of CPUs, with a minimum of 50:

1 CPU - Maximum Simultaneous Report Jobs = 50
2 CPUs - Maximum Simultaneous Report Jobs = 50
4 CPUs - Maximum Simultaneous Report Jobs = 100
8 CPUs - Maximum Simultaneous Report Jobs = 200

Note: The maximum number of simultaneous report jobs is equal to the number of simultaneous user requests.

Shared Crystal Reports Processing Server machine


There may be cases where it is advisable to change the default setting from Unlimited in order to throttle the maximum number of Crystal Reports Processing Server threads created on a single machine. This is advisable when the Crystal Reports Processing Server shares resources with other services such as the CMS, Crystal Reports Job Server, or Central Configuration Manager, as is the case when the complete BusinessObjects Enterprise stack is installed on a standalone machine. When not using the Unlimited setting, one CPU can handle 25-75 simultaneous requests (50 is a safe setting).

Example
Question: What is the estimated number of Crystal Reports Processing Server services and Crystal Reports Processing Server CPUs required to support 2000 concurrent active users generating 200 simultaneous requests for viewing Crystal Reports in a highly active system, using ActiveX as the predominant preview engine?

a) Number of CPUs necessary to support the concurrent users:
200 simultaneous requests / 25 simultaneous requests per CPU = 8 CPUs

b) Number of services necessary to support the concurrent users (the Processing Server service scales itself by spawning multiple subprocesses; see "Algorithm used when maximum simultaneous report jobs is set to unlimited"):
200 simultaneous requests / 200 simultaneous requests per service = 1 service

Answer: It will take 1 Crystal Reports Processing Server service installed on an 8-CPU machine to support 2000 concurrent active users generating 200 simultaneous requests in a highly active system, or 2 Crystal Reports Processing Server services installed across two quad-CPU machines.

The Crystal Reports Processing Server service can be changed from the default setting of Unlimited to a recommended range of 25-75 maximum simultaneous report jobs per available CPU. This range can be adjusted higher or lower depending on the environment (for example, report complexity, size, and so on). Values below 25 per CPU may be appropriate if the machine's resources are shared with other processes (for example, the CMS, Crystal Reports Job Server, or Web Intelligence servers).
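The worked example can be reproduced with the same arithmetic (threshold values are taken from the text; the variable names are illustrative):

```python
import math

SIMULTANEOUS_REQUESTS = 200    # generated by 2000 concurrent active users
REQUESTS_PER_CPU = 25          # Processing Server threshold per CPU
REQUESTS_PER_SERVICE = 200     # one service scales itself via subprocesses

cpus = math.ceil(SIMULTANEOUS_REQUESTS / REQUESTS_PER_CPU)
services = math.ceil(SIMULTANEOUS_REQUESTS / REQUESTS_PER_SERVICE)
print(cpus, "CPUs,", services, "service(s)")
```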

Crystal Reports Processing Server Service Groups on a machine


For a dedicated Crystal Reports Processing Server machine, install an additional Crystal Reports Processing Server service to support server groups. For each Crystal Reports Processing Server service, change the default setting of Unlimited to a recommended range of 25-75 maximum simultaneous report jobs per available CPU. Each Crystal Reports Processing Server group might include the service(s) that support a unique set of report qualities (for example, Group One services more complex reports and Group Two services less complex reports), so the maximum and minimum settings applied to the services might differ. The 25-75 range can be adjusted higher or lower depending on the environment (such as report complexity, size, and so on).

On-Demand (live data) versus Saved Data Viewing (prescheduled instance)


Live data
On-demand reporting gives users real-time access to live data, straight from the database server. Use live data to keep users up to date on constantly changing data, so they can access information that is accurate to the second. For instance, if the managers of a large distribution center need to keep track of inventory shipped on a continual basis, live reporting is the way to give them the information they need.

Before providing live data for all your reports, consider whether you want all of your users accessing the database server on a continual basis. If the data isn't rapidly or constantly changing, all those requests to the database do little more than increase network traffic and consume server resources. In such cases, you may prefer to schedule reports on a recurring basis so that users can always view recent data (report instances) without hitting the database server.

Saved data
To reduce the amount of network traffic and the number of hits on your database servers, you can schedule reports to be run at specified times. When the report has been run, users can view that report instance as needed, without triggering additional hits on the database. Report instances are useful for dealing with data that isn't continually updated.

When users navigate through report instances and drill down for details on columns or charts, they don't access the database server directly; instead, they access the saved data. Consequently, reports with saved data not only minimize data transfer over the network, but also lighten the database server's workload. For example, if your sales database is updated once a day, you can run the report on a similar schedule. Sales representatives then always have access to current sales data, but they are not hitting the database every time they open a report.

CPU utilization and memory consumption are relatively comparable between live data viewing and saved data viewing; however, viewing saved-data reports will, on average, decrease viewing response times and increase throughput and system efficiency.

Crystal Reports Processing Server Data Sharing


The Oldest On-Demand Data Given To a Client (in minutes) setting controls how long the Crystal Reports Processing Server uses previously processed data to meet requests. If the Crystal Reports Processing Server receives a request that can be met using data generated for a previous request, and the time elapsed since that data was generated is less than the value set here, the server reuses this data to meet the subsequent request. Reusing data in this way significantly improves system performance when multiple users need the same information.

When setting the value of Oldest On-Demand Data Given To a Client, consider how important it is that your users receive up-to-date data. If it is very important that all users receive fresh data (perhaps because important data changes frequently), you may need to disallow this kind of data reuse by setting the value to zero. The default is zero, meaning that all users will, by default, receive fresh data. If data sharing can be used in a system, it can decrease the number of CPUs required to view reports.

Performance, Scalability and Sizing: Learner's Guide

Processor requirements
Follow these guidelines when determining processor requirements for the Crystal Reports Processing Server:
- 25-75 simultaneous requests per CPU when setting maximum simultaneous report jobs manually
- One service per machine

Note: For sizing estimates based on the number of simultaneous requests per CPU, it is recommended to use a range starting from the default of 25 up to the recommended maximum of 75.

Memory requirements
Depending on the design of a report and the number of records retrieved from the database, memory requirements may vary. When a report is viewed and loaded into memory, the report is decompressed and expanded up to as much as 40 times the original report file size (with saved data/retrieved records).

Example (minimum memory requirements on Crystal Reports Processing Server machine)


500 KB report file size (contains saved data) x 40 (decompression ratio) = 20 MB per report
25 reports x 20 MB = 500 MB of minimum memory required
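The memory estimate above is straight multiplication; as a sketch (the 40x decompression ratio is the guideline stated in the text, and the variable names are illustrative):

```python
report_file_kb = 500        # saved-data report size on disk
decompression_ratio = 40    # expansion factor when loaded for viewing
reports_in_memory = 25      # reports loaded simultaneously

per_report_mb = report_file_kb * decompression_ratio / 1000   # 20 MB
minimum_memory_mb = reports_in_memory * per_report_mb         # 500 MB
print(per_report_mb, "MB per report,", minimum_memory_mb, "MB minimum")
```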

Details/Optimization
The Crystal Reports Processing Server parent process manages several subprocesses, and each subprocess manages multiple report jobs (threads). By design, the number of report jobs (threads) cannot exceed 10 per Crystal Reports Processing Server subprocess. The number of expected report jobs and subprocesses is calculated as follows.

If maximum simultaneous report jobs is set to Unlimited (effectively "automatic"):
- 1 CPU: Maximum Simultaneous Report Jobs = 50 = 5 subprocesses
- 2 CPUs: Maximum Simultaneous Report Jobs = 50 = 5 subprocesses
- 4 CPUs: Maximum Simultaneous Report Jobs = 100 = 10 subprocesses
- 8 CPUs: Maximum Simultaneous Report Jobs = 200 = 20 subprocesses

If maximum simultaneous report jobs is set to a specific number:
Number of subprocesses = Maximum Simultaneous Report Jobs / 10 (rounded up)

For example, if you set Maximum Simultaneous Report Jobs to 61:
Number of subprocesses = 61 / 10 = 6.1, rounded up to 7

There will be 7 Crystal Reports Processing Server subprocesses when the number of simultaneous report jobs is 61, regardless of the number of CPUs. This number can be adjusted up or down after specific tests have been performed on a specific system.

When the Crystal Reports Processing Server parent process starts, it preloads one subprocess in anticipation of receiving a new request. When a request to create a viewable page arrives at a subprocess, the subprocess loads the CRPE engine and executes the request.
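The subprocess count for a manually set job limit is a ceiling division on the 10-jobs-per-subprocess design limit; a sketch:

```python
import math

def subprocess_count(max_simultaneous_jobs: int) -> int:
    """Each Crystal Reports Processing Server subprocess handles at most
    10 report jobs (threads), so divide the job limit by 10, rounding up."""
    return math.ceil(max_simultaneous_jobs / 10)

print(subprocess_count(61))   # 7 subprocesses, as in the example
```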

Crystal Reports Job Server


The Crystal Reports Job Server has the following sizing requirements.

Processor requirements
Follow these guidelines when determining the processor requirements of the Crystal Reports Job Server:
- One CPU can optimally support five simultaneous jobs (processes).
- One Crystal Reports Job Server service can support up to 20 jobs across four CPUs.

This formula will help you determine the number of Crystal Reports Job Server CPUs required in your deployment:

(# of reports x average report processing time) / (time window x max jobs per CPU) = # of required Crystal Reports Job Server CPUs

Note: When performing this calculation, the average report processing time and the time window must be measured in the same time unit (for example, both in minutes). The maximum jobs per CPU is generally constant at five; however, this number can change depending on the reporting and network environment.

For example:
(200 reports x 1 minute average processing time) / (10 minute time window x 5 jobs per CPU) = 4 CPUs, for example 4 Crystal Reports Job Server services installed across single-CPU machines

Note: Large, complex reports that must retrieve and process a very large set of data should be scheduled. Jobs run as independent processes rather than threads on the Crystal Reports Job Server. This requires more resources for each job, but allows for efficient processing of a smaller set of large, heavy-processing reports.
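The Job Server sizing formula can be sketched as follows (the function name is illustrative; the 5-jobs-per-CPU guideline is from the text):

```python
import math

def job_server_cpus(reports, avg_minutes, window_minutes, jobs_per_cpu=5):
    """CPUs needed to run `reports` of `avg_minutes` each within the
    scheduling window, at the guideline of 5 simultaneous jobs per CPU."""
    return math.ceil((reports * avg_minutes) / (window_minutes * jobs_per_cpu))

print(job_server_cpus(200, 1, 10))     # 4 CPUs
print(job_server_cpus(1000, 9, 300))   # 6 CPUs
```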

Report Application Server


The default Maximum Simultaneous Report Jobs setting is 75 for each RAS service. This value, as with the Crystal Reports Processing Server, may be adjusted according to anticipated load and available hardware resources. The guideline of 25 to 75 (the default) simultaneous report jobs per CPU is recommended; however, the ideal setting for your reporting environment is highly dependent on your hardware configuration, your database software, and your reporting requirements. Additionally, a recommended guideline is to run one Report Application Server service per CPU.

Note: The RAS server is used when you create Crystal Reports publications for dynamic recipients.

Example
For each available CPU, run 1 RAS service with a setting of 25 to 75 Maximum Simultaneous Report Jobs (use 25 for optimal performance).

Processor requirements
Follow these guidelines when determining processor requirements for the Report Application Server:
- 1 CPU can manage 25-75 maximum simultaneous processing jobs
- 1 Report Application Server service per CPU

Sample configuration: If 100 concurrent active users are expected to be viewing or modifying Crystal reports, a typical optimal configuration is one quad-processor machine with 4 Report Application Server services installed, each supporting 25 concurrent active users (one service per CPU), for a total of 100.

Memory requirements
Depending on the design of a report and the number of records retrieved from the database, memory requirements may vary. When a report is viewed and loaded into memory, the report is decompressed and expanded to as much as 40 times the original report file size (with saved data/retrieved records).

Example (minimum memory requirements on the RAS machine):
500 KB report file size (contains saved data) x 40 (decompression ratio) = 20 MB per report
25 reports x 20 MB = 500 MB minimum memory required

List of Values Job Server


The List of Values Job Server has the following sizing requirements.


Processor requirements
The number of CPUs required to support the List of Values service depends on the number of concurrent jobs processing. By default, the Maximum Jobs Allowed setting is five. For most environments this setting will not have to be changed, and a single CPU is sufficient. Follow these guidelines when determining processor requirements for the List of Values Job Server:
- One CPU can handle five simultaneous Crystal Reports requests.
- One service can handle 20 simultaneous requests.

Memory requirements
List of values objects are Crystal Reports objects that have multiple groups representing different cascading hierarchy levels. The file size of the list of values objects (stored in the Output File Repository Server) can be used to calculate memory requirements. When a report is processed and loaded into memory, the report is decompressed and expanded to as much as 40 times the original report file size (with saved data/retrieved records).

Example (minimum memory requirements for the LOV machine):
100 KB report file size (contains saved data) x 40 (decompression ratio) = 4 MB per report
5 LOV reports x 4 MB = 20 MB minimum memory required

Connection Server
Threshold numbers for the Connection Server are still being gathered and are not available at this time.

Web Application Server/Web Application Container Server


Note: The Web Application Container Server is used in deployments with no Java Web Application Server.

Depending on how the system is utilized, the Web Application Server (Apache Tomcat, BEA WebLogic, IBM WebSphere) can manage differing numbers of concurrent user sessions and simultaneous requests. The main functions of the Web Application Server within the BusinessObjects Enterprise system are:
- Processing the .NET/Java script
- Translating the Encapsulated Page Files (page on demand) to DHTML pages
- Communicating with the Crystal Reports Cache Server for report view requests
- Managing session state information for the users
- Communicating with the Web Intelligence Processing Server for view requests of Web Intelligence reports
- Communicating with the Desktop Intelligence Cache Server for view requests of Desktop Intelligence reports
- Communicating with the Connection Server for view requests of Desktop Intelligence reports (Windows only)

Processor requirements
Note: Processor requirement guidance is generalized, and relative requirements may change based on the individual characteristics of different Web Application Servers. To better understand a specific Web Application Server's characteristics, consult the vendor.

Based on internal performance testing, one Web Application Server can manage approximately 400 concurrent user sessions (user session = one logged-on user) per processor. Generally, a service can efficiently manage 100 simultaneous requests (for example, request = a user clicking on a folder). Under normal circumstances, it is improbable that all concurrent users would make a request simultaneously; therefore, the following numbers allow for and differentiate between "concurrent user sessions" and "simultaneous requests". Because the service deals with two thresholds (the maximum number of concurrent user sessions and the maximum number of simultaneous requests), it is important to consider both when determining the required hardware. This is illustrated in the following examples:

Example 1
A single available processor (one processor) with one Web Application Server service running could efficiently service 400 concurrent user sessions and can handle 100 simultaneous user requests.

Example 2
A dual processor machine (two processors) with two Web Application Server services running could efficiently service 800 concurrent user sessions and can handle 200 simultaneous user requests.

Example 3
A quad processor machine (four processors) with four Web Application Server services running could efficiently service 1600 concurrent user sessions and can handle 400 simultaneous user requests.

Rule of Thumb
One Web Application Server = 400 concurrent user sessions per processor (user session = one logged-on user)
One Web Application Server = 100 simultaneous requests per processor (for example, request = a user clicking on a folder)

As a base guideline, it is recommended to estimate one Web Application Server per 100 simultaneous requests; however, each viewer type has its own characteristics that impact the capacity for concurrent users or simultaneous requests, so this number can be higher or lower.
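Because both thresholds apply, the required processor count is the larger of the two; as a sketch (threshold defaults from the rule of thumb above; the function name is illustrative):

```python
import math

def web_app_processors(sessions, requests,
                       sessions_per_cpu=400, requests_per_cpu=100):
    """Size the Web Application Server tier against both thresholds
    and take whichever demands more processors."""
    return max(math.ceil(sessions / sessions_per_cpu),
               math.ceil(requests / requests_per_cpu))

print(web_app_processors(800, 100))   # sessions dominate
print(web_app_processors(400, 300))   # requests dominate
```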


Viewing in ActiveX or Java Viewers


One Web Application Server = 100 simultaneous requests per processor (for example, request = a user interacting with the report)

Viewing in HTML Viewer (Crystal report, Web Intelligence report, Desktop Intelligence report)
One Web Application Server = 50 simultaneous requests per processor (for example, request = a user interacting with the report)

Multi-Dimensional Analysis Server (MDAS)


Voyager allows for native access and analysis of OLAP servers from Microsoft (including Yukon cubes), Hyperion, IBM, and SAP. It is implemented in a back-end tier (MDAS.exe) and in the application server tier.

Processor requirements
Users are differentiated here between the "consumer user" and "power user" personas:
- Power user: performs in-depth OLAP analysis, drills, sorts, uses drag-and-drop, performs calculations, modifies the chart, possibly opens multiple connections, and uses highlights and exceptions
- Consumer user: displays reports created by a power user, does simple operations (sorts), and prints

Because the impact on processor requirements is very different, these two communities of users must be sized separately.

Heavy users
Heavy users use the system continuously, without think time; this is the extreme case. Benchmarks have shown that consumer users use the CPU more intensively than power users, since power users' load is concentrated on the OLAP cube. The recommendation is:
- Heavy power users: 8 per CPU
- Heavy consumer users: 4 per CPU

Other users
If we add think time to the scenario, the capacity of Voyager is extended considerably, especially considering that OLAP analysis involves many operations and much thinking around them. The following calculation, based on internal benchmarking, gives an estimate of system capacity: a heavy power user performs 41 operations in about 2 minutes (129 seconds). If you assume the user takes 5 minutes (300 seconds) on average to complete this workflow, 300 - 129 = 171 seconds are spent as think time. That means a concurrent user actually utilizes 43% (129 out of 300 seconds) of the full service time, so the capacity of the system can be extended by the remaining 57%.


You can therefore extrapolate the optimum load to 16 / 0.57 = 28 users.
Optimal: 28 concurrent users = 1 MDAS service = 1 processor
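The think-time extrapolation works out as follows (figures are taken from the benchmark above; the heavy-user baseline of 16 is the value used in the text, and the variable names are illustrative):

```python
active_seconds = 129       # time a heavy power user is actually working
workflow_seconds = 300     # full workflow including think time
heavy_user_baseline = 16   # heavy-load capacity figure from the text

utilization = active_seconds / workflow_seconds              # ~0.43
optimum_load = int(heavy_user_baseline / (1 - utilization))  # ~28 users
print(round(utilization, 2), optimum_load)
```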

Vertical scalability
The best ratio observed when doubling the number of CPUs is 1.5; thus, as more and more CPUs are added, the asymptotic ratio approaches three. It is not advised to deploy Voyager on systems with many CPUs. The best ratio observed when doubling the number of systems is 1.7, which is better than the vertical scalability ratio and should therefore be the preferred solution. This is particularly beneficial when disk writes are required by an OLAP middleware driver, for instance Essbase. In this case, rather than having one powerful server with many CPUs, it is recommended to use multiple clustered machines with fewer CPUs in each.

Tier balance
The balance measured in Business Objects' test cases is 70% (for the back-end MDAS) and 30% (for the web application tier) when the two are on the same machine. This can be used as a guideline to define the number of MDAS servers. For instance, on an 8-CPU server where the CMS and application server are active, you would activate 5 MDAS instances (served by 5 CPUs) and leave the remaining 3 CPUs (roughly 30% of 8 CPUs) to the CMS and Tomcat.
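The 70/30 split can be applied programmatically; a sketch under the stated assumption that MDAS receives the back-end share (the function name is illustrative):

```python
def split_cpus(total_cpus, mdas_share=0.7):
    """Allocate CPUs between MDAS (back end, 70%) and the web
    application tier (30%), rounding the MDAS share down."""
    mdas = int(total_cpus * mdas_share)
    return mdas, total_cpus - mdas

print(split_cpus(8))   # 5 MDAS CPUs, 3 CPUs left for CMS + Tomcat
```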

Memory requirements
Because the MDAS's memory management is based on JVM garbage collection, real memory usage cannot be linearly approximated from the number of concurrent users. The query size and the type of OLAP middleware (Microsoft Yukon, Essbase, and so on) have a great impact on the memory footprint in the JVM. Each MDAS instance requires from 250 MB to 1 GB (peak value).
Minimum: 512 MB per MDAS. Optimal: 1 GB RAM available per MDAS.

Disk requirements
When using Essbase as the OLAP server, temporary files are created (by the Essbase driver) in the Windows temporary folder (C:\windows\temp). Disk contention, essentially in write operations, may occur under the following conditions:
- High user concurrency
- High CPU power: the more powerful the machine's CPUs (speed and number), the more the disks are loaded
- A high number of MDAS instances
- Large query size: the temporary files contain data retrieved from the OLAP server


Recommendations
- Deploy MDAS instances in separate clusters
- Limit the number of MDAS instances per machine
- Maximize disk performance by using the operating system's available features (striping and so on)

Query as a Web Service (QaaWS)


Query as a Web Service is a desktop product included in BusinessObjects Enterprise and can be implemented in every BusinessObjects Enterprise XI deployment. There is no dedicated process for QaaWS in the CMS, and this service is not manageable via the CMC. Queries are created in the desktop product with tools like Xcelsius and are pushed to the application server as web queries. These web queries are directly available to end users.

Web Intelligence Processing Server


Because the WIReportServer (Web Intelligence Processing Server) is the process most heavily loaded by QaaWS in a deployment, its Maximum Simultaneous Connections setting should be raised from the default of 50 to the desired number of users.

Tomcat
The Tomcat connection timeout should be disabled: change it from the default of 20000 to 0. The Tomcat maximum thread count should be increased from 150 to 600.
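In Tomcat these two settings live on the HTTP connector in server.xml; a sketch of the change (attribute placement can vary across Tomcat versions, so treat this fragment as illustrative):

```xml
<!-- server.xml: disable the connection timeout and raise the thread cap -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="0"
           maxThreads="600" />
```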

Processor requirements
For sizing estimates based on the number of simultaneous jobs per CPU, it is recommended to use no more than 300 Maximum Simultaneous Connections per available CPU (this setting can be highly dependent on query complexity). Although a Web Intelligence Processing Server may expand to several CPUs, the best throughput is obtained by running one Web Intelligence Processing Server service per CPU.

Maximum
One Web Intelligence Processing Server service with one available CPU can support up to 300 Maximum Simultaneous Connections.
300 concurrent users = one Web Intelligence Processing Server = one processor

Optimal
One Web Intelligence Processing Server service with one available CPU can optimally serve between 64 and 128 simultaneous connections.
64-128 concurrent users = one Web Intelligence Processing Server = one processor

Caution: The default Web Intelligence Processing Server "Maximum Simultaneous Connections" setting must be adjusted to equal or exceed the maximum desired number of simultaneous connections.

Memory requirements
Depending on the design of a query, memory requirements will vary, but they will not exceed the resources demanded by a regular Web Intelligence Processing Server workload.

Live Office
Depending on how Live Office is used, the settings of the corresponding services should be applied as described in the previous sections. For instance:
- If Live Office is used with Crystal Reports, follow the section on the RAS.
- If Live Office is used with Web Intelligence, follow the Web Intelligence Processing Server tuning.
- If Live Office is used as a query panel, pay attention to the Web Intelligence Processing Server tuning on the refresh aspect (the query panel is similar to that of Web Intelligence when a document refresh is requested).

Recommendations
- Add one Report Application Server instance for every 50 to 75 Live Office Crystal users.
- The Report Application Server connection timeout should be adjusted so that unused sessions are not kept open too long. When the timeout value is too large, the maximum number of connections may be exceeded; however, the timeout should be long enough to avoid interrupting active sessions.
- In a cluster environment, the Report Application Server and the Web Intelligence Processing Server should be deployed on separate servers in order to reduce concurrency on disk I/O.
- The Web Intelligence Processing Server maximum simultaneous connections value can be increased from 300 to 600 to support a large number of Live Office - Web Intelligence users. Rather than multiplying Web Intelligence Processing Server instances with a smaller value each, setting a larger number of connections on one instance is more resource efficient and reduces disk contention.
- The number of Query Panel users should be restricted in order to avoid too many database accesses.
- In a Live Office document, it is better to use a filter rather than an interactive prompt (fewer interactions with the database).
- As a rule of thumb, use as many or more CMS instances than Report Application Server instances for Live Office Crystal users. This allows better user-session handling.
- In a cluster environment, a Report Application Server deployed on the primary node gives better performance than one deployed on secondary nodes. Secondary nodes can be used for Web Intelligence Processing Servers without performance issues.
- When deploying over several systems with several CPUs, horizontal scalability is more efficient than vertical scalability.
- Faster hard disks on the Live Office server are better for performance and scalability, for example SCSI, striped disks, or a SAN.

Step 3: Determining the configuration of machines


Based on the usage and service threshold numbers, you perform calculations to determine the number of processors, services, and machines required for the applicable BusinessObjects Enterprise servers in your system.

Note: Although the service thresholds based on benchmark tests are constant, every deployment has different usage numbers and system requirements. When calculating the number of processors, services, and machines required for your system, you may need to adjust the sizing numbers and configuration determined by the calculations to ensure that the specific needs of your system are met.

For training purposes, this section uses an example to illustrate how the number of processors, services, and machines is determined.

CMS
For this example, assume that there is a total of 8000 users in the system and there are 800 concurrent active users. A single CMS service on a single CPU can handle up to 500 concurrent active users. More than one CMS is required to support all requests. Keeping load balancing in mind, it is recommended that two CMS services are installed across two single CPU machines, thus supporting up to 1000 users (more than the 800 required).

Web Application Server/Web Application Container Server


For this example, assume that there are 800 concurrent users viewing reports using the ActiveX viewer and generating 100 simultaneous requests.
1 Web Application Server = 400 concurrent active users per processor (CPU)
1 Web Application Server = 100 simultaneous ActiveX viewing requests per processor
Since the Web Application Server can handle 400 concurrent users per single-CPU machine, this system requires two Web Application Server services on two single-CPU machines. The 100 simultaneous ActiveX viewing requests can be handled by a single CPU. Select the threshold that requires the higher number of CPUs and/or services.

Web Intelligence Processing Server


Assume that there are 100 concurrent active viewing sessions. Since a single Web Intelligence Processing Server can support 25 concurrent sessions on a single CPU machine, the system requires installation of 4 Web Intelligence Processing Servers across 4 single CPU machines.


Desktop Intelligence Processing Server


If there are 50 concurrent viewing sessions for Desktop Intelligence reports and the view format is set to the HTML viewer, how many single-CPU machines do you need? Because a single Desktop Intelligence Processing Server can support 8-12 concurrent simultaneous requests on a single-CPU machine, and only one service can be installed per machine, the system requires 7 Desktop Intelligence Processing Server services installed across 7 single-CPU machines.

Desktop Intelligence Cache Server


Assume that there are 100 concurrent viewing sessions for Desktop Intelligence reports (preference set to HTML viewer). A single Desktop Intelligence Cache Server can support 50 concurrent simultaneous requests on a single CPU machine and each service can support 200-400 sessions. This means the system requires installation of two Desktop Intelligence Cache Server services installed across two single CPU machines, or an installation of a single Desktop Intelligence Cache Server service installed on a dual CPU.

Crystal Reports Cache Server


For this example, assume that there are 800 concurrent users using the ActiveX viewer and 500 simultaneous requests. Because the Crystal Reports Cache Server can handle 200 simultaneous requests per CPU and an individual Crystal Reports Cache Server service can handle 400 requests, this system requires Crystal Reports Cache Server services installed across 3 CPUs. This could be accomplished by configuring one Crystal Reports Cache Server service on a dual-CPU machine with its Maximum Simultaneous Processing Threads setting changed to 400, and adding a Crystal Reports Cache Server service on a single-CPU machine with its Maximum Simultaneous Processing Threads setting changed to 100. A total of 500 threads would then be configured to serve Crystal Reports simultaneous (viewing) requests.

Crystal Reports Processing Server


For this example, assume that there are 800 concurrent users using the ActiveX viewer and there are 200 simultaneous requests. If the maximum simultaneous report jobs is unlimited, the Crystal Reports Processing Server algorithm supports 25 simultaneous requests per CPU. To support 200 simultaneous requests, you can use one 8-CPU server with one Crystal Reports Processing Server service. Alternatively, you can use two 4-CPU servers, each handling 100 simultaneous requests, and two Crystal Reports Processing Server services. If the maximum simultaneous report jobs is manually set to 50 (midway between the optimal setting of 25 and the maximum setting of 75), 4 CPUs with 4 services each set at 50 maximum simultaneous report jobs are required to support 200 simultaneous requests.

Crystal Reports Job Server


For this example, assume 1000 reports are going to be scheduled at night. On average, it takes 9 minutes to run each report, and you have 5 hours (300 minutes) to run all the reports. The Crystal Reports Job Server services can be installed on quad-CPU machines.


The number of Crystal Reports Job Servers required is determined by performing the Crystal Reports Job Server threshold calculation. In this example, the calculation is:

(1000 reports x 9 minutes per report) / (300-minute time window x 5 maximum jobs per CPU) = 6 CPUs

A Crystal Reports Job Server service therefore needs to be installed on two different machines with quad CPUs, and these two Crystal Reports Job Server services need to have the maximum jobs allowed set to 20. As an alternative, you could configure one Crystal Reports Job Server service on a quad-CPU machine (with the maximum jobs allowed set to 20) and another Crystal Reports Job Server service on a dual-CPU machine (with the maximum jobs allowed set to 10).
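The threshold calculation above can be expressed directly in code (a minimal sketch; the helper name is an assumption, the formula is the one stated in the text):

```python
import math

def job_server_cpus(reports, minutes_per_report, window_minutes, jobs_per_cpu=5):
    """Crystal Reports Job Server threshold calculation:
    (reports x average runtime) / (time window x max jobs per CPU)."""
    return math.ceil((reports * minutes_per_report) / (window_minutes * jobs_per_cpu))

# (1000 reports x 9 minutes) / (300 minutes x 5 jobs per CPU) = 6 CPUs
print(job_server_cpus(1000, 9, 300))  # -> 6
```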

Step 4: System database tuning


The sizing guidelines provided by Business Objects are the results of benchmark testing performed by the product development group. Your system will use different hardware and software, and your reports will also be different sizes and have different levels of complexity. These factors may have an impact on system performance. While using the Business Objects sizing guidelines will provide you with a good starting point for your system, to achieve optimal performance, you should perform additional system testing and tuning.

When performing system testing, use a third-party load testing application to simulate activity on your system and to monitor performance during the activity. You may identify bottlenecks or servers that are hitting resource maximums during activity. You can then adjust the server settings and redistribute resources until you find your system working at optimal performance.

System database back-end performance


To provide optimal overall performance, the network and the database system need to be adequately sized and configured. Tuning a database system for peak performance is a complex task that involves hundreds of database parameters; different tuning methodologies have been developed, and comprehensive performance tuning guides are available. For data storage and retrieval, BusinessObjects Enterprise exchanges messages with the database system over the network. BusinessObjects Enterprise data caches keep network traffic between BusinessObjects Enterprise and the database system to a minimum. Nevertheless, good network performance is critical because, for example, some operations cannot use the BusinessObjects Enterprise data caches. The database client software uses the underlying network system; no tuning is required for the database client software.

Performance criteria
Some terminology is necessary to understand how database access may be optimized:

Lock: Locks are used to control concurrency in a database. When a process locks a row or table, no other process can access it until the lock is released. The level at which a lock is requested, that is, row or table level, is referred to as the lock granularity.


Deadlock: A deadlock occurs when two or more processes conflict in such a way that each must wait for the other to release its lock in order to complete.

Lock wait: A lock wait occurs when a process must wait for another process to release a lock on a row or table.

Lock escalation: A lock escalation occurs when the number of locks on rows or tables in the database equals the maximum number of locks specified in the database configuration settings.

Meeting the following five criteria helps to avoid the most common performance bottlenecks:
- The database system's cache hit rates are over 90%.
- The optimizer statistics are not older than 24 hours.
- Lock granularity is row locking.
- There are no lock escalations.
- There are no log write waits, and the average disk write queue length to the disk drives which contain the database log files is smaller than five.

90% or higher cache hit rates

Offers:
- A relatively low number of physical disk reads and writes.
- A relatively low number of SQL compiler executions.

Are important to avoid:
- Disk access being unable to keep up with the level of I/O that the database server requests.
- Long disk read or disk write queues.
- Unnecessary SQL statement compilations or long SQL statement compilation times.

Are achieved by:
- Providing sufficient physical memory.
- Configuring sufficient cache sizes.


Up-to-date optimizer statistics

Offers:
- The query optimizer's ability to choose the query execution plan with the minimal cost.
- Good selectivity estimates of predicates in SQL expressions, especially for expressions including columns with unequal or skewed distribution of column values.

Are important to avoid:
- Long response times and low throughput.
- Long execution times for queries.
- Full table scans.
- Lock escalations.
- Lock waits or deadlocks.
- Significant differences between the estimated number of rows for each operator in the query plan and the actual number of rows.

Are achieved by:
- Implementing a procedure to periodically update the statistics, or to update the statistics after frequent changes to the database.
- Creating statistics for tables, columns, and indexes that are not stale.
- Setting the sample size (percentage of data that is analyzed to gather statistical information) to a value which is sufficient for the skewed data distribution.
- Setting the number of histogram buckets to a value which is sufficient for the skewed data distribution.

Lock granularity is row locking

Offers:
- Increased concurrent execution of transactions.
- Reduced transaction processing time due to fewer lock waits.

Are important to avoid:
- Lock waits or deadlocks.

Are achieved by:
- Setting the lock granularity of your database to row level.

No lock escalations

Offers:
- Increased concurrent execution of transactions.

Are important to avoid:
- Lock waits or deadlocks.

Are achieved by:
- Configuring a sufficient size of the lock list.
- Setting the lock escalation threshold to an appropriate level.
- Keeping the optimizer statistics up to date.

No log write waits

Offers:
- Increased concurrent execution of transactions.
- Reduced transaction processing time due to no I/O waits.

Are important to avoid:
- Lock waits or deadlocks.

Are achieved by:
- Providing an I/O subsystem with sufficient throughput.
- Providing a disk controller cache and setting the disk controller cache to 100% write cache.


Designing an architecture plan


Designing an architecture plan involves gathering requirements from key stakeholders, analyzing the requirements, and then using the information to perform sizing calculations to determine the size and configuration of the BusinessObjects Enterprise system. A thorough architecture plan identifies system usage, report viewing and scheduling requirements, and other system deployment requirements, such as the level of high availability required, the location of users, and standard hardware configurations.

To gather the information required to design an architecture plan, you need to perform an analysis of stakeholder requirements. The stakeholders are groups of individuals impacted by the BusinessObjects Enterprise implementation. Stakeholders may include users who consume reports and information, business managers who analyze the business value of the software implementation, and IT individuals who are responsible for managing and supporting the BusinessObjects Enterprise implementation.

After completing this unit, you will be able to:
- Determine system usage, reporting, document interaction, and general deployment requirements
- Analyze deployment requirements to determine the size and configuration of a deployment
- Identify service thresholds

Determining system load


To effectively calculate the size of the system, you need to determine the load of the system. This involves estimating the:
- Total number of potential users who will have access to the system
- Total number of concurrent active users that will access the system
- Total number of simultaneous requests that will be made

The number of potential and concurrent users


Determining the number of potential and, in particular, concurrent active users who will access the system is necessary to effectively define the hardware and configuration of the BusinessObjects Enterprise servers. Typically, the total number of users dictates the scope of the project. You need to determine the total number of users before you can estimate the number of concurrent active users.

When determining the number of potential users, you should determine the breakdown of users by geographical location. This information will help you determine the location of BusinessObjects Enterprise servers when planning the system architecture.

The number of concurrent active users is typically 10% to 20% of the total number of potential users in the system. You need to determine the total number of concurrent active users before you can estimate the number of simultaneous requests.


When determining the number of potential and concurrent active users, you need to take into account how many users will be added to the system in the future. Estimating how many users will be added to the system will help you size the system to handle future growth.

The number of simultaneous requests


The number of simultaneous requests is difficult to estimate. Typically, the number of simultaneous requests is 10% to 20% of the number of concurrent users (a request being, for example, a folder within InfoView being expanded). To more accurately estimate the number of simultaneous requests, you can group users by how they use the system, calculate the percentage of concurrent users by usage type, and then determine the simultaneous usage rate. For instance, a simultaneous request to view a report or document causes the server(s) responsible for rendering pages to fulfill the request right away. The request also causes the server to use resources for a configurable amount of time (for instance, 20 minutes) in anticipation of future viewing requests. When dealing with simultaneous viewing requests, the estimated ratio of concurrency can therefore be higher than the documented 10% to 20% range. For example, if there are 100 users and each user is viewing a report, there could be 50 reports being viewed in total (some of the reports being viewed by more than one user). In this example, the simultaneous request rate would be 50%.
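These rules of thumb can be sketched as a rough estimator. The function and the 15% midpoints are illustrative assumptions; the 10-20% ranges and the 100-users viewing example are the ones stated above.

```python
import math

def estimate_load(potential_users, concurrency_rate=0.15, request_rate=0.15):
    """Rule-of-thumb load estimate: concurrent active users are typically
    10-20% of potential users, and simultaneous requests 10-20% of the
    concurrent users. The 15% midpoints used here are assumptions."""
    concurrent = math.ceil(potential_users * concurrency_rate)
    simultaneous = math.ceil(concurrent * request_rate)
    return concurrent, simultaneous

print(estimate_load(1000))  # -> (150, 23)
# Viewing-heavy workloads can run much higher: 100 concurrent users all
# viewing, sharing 50 open reports, is a 50% simultaneous request rate.
print(estimate_load(100, concurrency_rate=1.0, request_rate=0.5))  # -> (100, 50)
```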

Determining reporting requirements


When determining reporting requirements, you need to consider the number of Crystal Reports, Web Intelligence, and Desktop Intelligence documents that will be stored in the system. The number of documents stored in the system will influence the report/document viewing requirements and the report/document scheduling requirements. It is important to point out that you need to perform separate calculations for viewing and scheduling needs for each document type that you will be using in the system.

Report viewing requirements


Analyzing the report viewing requirements helps to determine the load on the Crystal Reports Cache Server, Crystal Reports Processing Server, Web Intelligence Processing Server, Report Application Server, Connection Server, Desktop Intelligence Cache Server, and Desktop Intelligence Processing Server. When determining report/document viewing requirements, you need to estimate:
- The number of unique report/document viewing sessions there will be at any given time.
- The percentage of users who will view reports/documents on demand.
- The percentage of users who will view report/document instances.
- The percentage of report/document viewing requests by geographic location.
- The number of users viewing OLAP Intelligence reports.


Report scheduling requirements


Analyzing the report/document scheduling requirements helps to determine the load on the Crystal Reports Job Server/Web Intelligence Job Server and, in particular, the Web Intelligence Processing Server. When determining report scheduling requirements, you need to estimate:
- The percentage of users who will schedule reports.
- The total number of reports/documents that will be scheduled by geographic location.
- The frequency that reports/documents will be scheduled (for example, 100 reports/documents scheduled daily) by geographic location.
- The percentage of reports/documents that will be scheduled after work hours and during work hours.
- The time window required for reports/documents to run.
- The average time it takes to process the report/document.

Determining system deployment requirements


Gathering general system deployment information allows you to consider other factors that may affect how the system is configured. For instance, you may need to determine:
- The level of high availability that is required.
- The standard hardware configurations that you need to follow.
- The geographic location of the users who will access the system.
- The type of network and firewalls used.
- The individuals who will be responsible for managing and maintaining the system.
- The location of the production database(s).

It is also recommended to talk with key stakeholders to identify the business issues that require solutions. By understanding the underlying business problem, you are better able to identify the high-level requirements for the system architecture and deployment configuration.

Activity: Determining architecture requirements


Scenario
Assume that all machines are single CPU systems.

Objectives
Run through the sizing process for the Jade Publishing Beijing office. Fill out the sizing information below.

Beijing Office
1. Web Server:


Relevant information: A web-based BOE environment requires a web server.
Threshold: 400 concurrent user sessions (user session = 1 logged-on user) or 100 simultaneous requests per processor (request = e.g. a user clicking on a folder).
Actual number of servers required:

2. Web Application Server
Relevant information: Number of simultaneous requests: 150.
Threshold: 400 CAU (concurrent active users) or 100 simultaneous requests per CPU.
Actual number of servers required:

3. CMS
Relevant information: Number of simultaneous requests: 150.
Threshold: 500 simultaneously logged-on users per CPU and/or 600 per CMS service and/or 1 CPU for every estimated 100 simultaneous user requests (this number may vary greatly depending on the type of action made).
Actual number of servers required:

4. Web Intelligence Processing Server
Relevant information: 40 viewing requests + 5 jobs running at peak time.
Threshold: 25 simultaneous viewing sessions (1 processing service installed per machine).
Actual number of servers required:

5. Web Intelligence Job Server
Relevant information: 20 jobs running at off-peak time, 5 jobs at peak time.
Threshold: 100 jobs.
Actual number of servers required:

6. Desktop Intelligence Processing Server
Relevant information: 20 simultaneous viewing sessions.
Threshold: 8-12 per CPU (1 processing service installed per machine).
Actual number of servers required:

7. Desktop Intelligence Cache Server
Relevant information: 20 simultaneous viewing sessions.
Threshold: 50 per single CPU and 200-400 per cache service.
Actual number of servers required:

8. Desktop Intelligence Job Server
Relevant information: No Desktop Intelligence documents are scheduled.


Threshold: 5 per CPU.
Actual number of servers required:

9. Crystal Reports Processing Server
Relevant information: 75 simultaneous viewing sessions.
Threshold: 25-75 simultaneous viewing sessions per CPU (assume 50).
Actual number of servers required:

10. Crystal Reports Cache Server
Relevant information: 75 simultaneous viewing sessions.
Threshold: 200 simultaneous viewing sessions per CPU and 400 sessions per cache service.
Actual number of servers required:

11. Crystal Reports Job Server
Relevant information: 200 reports at month end; each Crystal report runs on average for 10 minutes; time to complete: 5 hours = 300 minutes.
Threshold: 5 jobs per CPU.
Actual number of servers required:


Review: Performance, Scalability and Sizing


1. What is scalability?
2. List the four main BusinessObjects Enterprise scalability goals.
3. What are the four steps of the sizing process?
4. List the three estimates necessary when determining system load.


Lesson summary
After completing this lesson, you are now able to:
- Describe scalability
- Describe the BusinessObjects Enterprise scalability goals
- Use the sizing process to size a BusinessObjects Enterprise deployment
- Determine architecture and hardware requirements
- Design an architecture plan


Lesson 5

Deploying a System
After you have designed a system architecture plan, you then install, configure, test, and troubleshoot the system in a test environment to ensure that it is operating as expected. After completing this lesson, you will be able to:
- Install and configure BusinessObjects Enterprise
- Troubleshoot BusinessObjects Enterprise


Installing and configuring BusinessObjects Enterprise


After completing this unit, you will be able to:
- Prepare the environment for installation
- Install and configure BusinessObjects Enterprise

Preparing the environment for installation


When installing BusinessObjects Enterprise, the following operating system preparations must be made:

Network configuration
Each BusinessObjects Enterprise server must be configured to communicate over TCP/IP to provide connectivity to the CMS and the other BusinessObjects Enterprise servers. Using the PING utility, verify connectivity to the CMS by its NetBIOS name. Also verify that the web server has connectivity with the Web Application Server, again using the PING utility with the NetBIOS name of the Web Application Server. All InfoView clients must be able to access the web server.

Administrative privileges
You must be logged on as a user with administrative privileges to the computer on which you are installing BusinessObjects Enterprise. Administrative privileges usually include the ability to write to the Windows Registry, which is required for installation.

ODBC configuration
If you are using SQL Server for your CMS database, correct ODBC configuration is required; native connections to SQL Server are not supported. Verify the server is running version 3.x or newer of the Microsoft Data Access Components (MDAC). Note: Microsoft Corporation provides a utility for testing your version of MDAC, which is called the Component Checker. Visit the Microsoft Corporation website for more information. Before beginning installation, ensure you have the password for the SQL Server sa account or a user name and password for an administrative level database account.

Virtual memory
For best performance, Microsoft Corporation recommends that you do not set the initial size to less than the minimum recommended size under Total paging file size for all drives. The recommended size is equivalent to 1.5 times the amount of RAM on your system. It is best to leave the paging file at its recommended size, although you could increase its size if you routinely use programs that require a lot of memory.


Note: Virtual memory settings are commonly referred to as Swap File settings.

Background services priority


Your Windows server should have Performance Options configured to give Background services priority on your system.

Web Server\Web Application Server\Web Application Container Server


Ensure there is connectivity between the InfoView clients and the web server. In a distributed deployment, the web server forwards the enterprise request to the Web Application Server or Web Application Container Server depending on the deployment type. In a standalone environment, the InfoView client requests are sent directly to the Web Application Server or Web Application Container Server.

Web browser
A web browser should be installed and configured on all desktop clients machines.

Database
BusinessObjects Enterprise requires a system database to store information about the server components and objects specific to your BusinessObjects Enterprise deployment. Note: The Setup program can install its own MySQL database by default.

TEMP directory environment variables


The TEMP directory should reside on the drive with the most available space to allow the BusinessObjects Enterprise installation to store temporary files. A minimum of 700 MB of free space is recommended. The location of the TEMP directory is configured using Windows environment variables. By default, the TEMP directory for Windows 2000 is located at C:\Windows\TEMP.

Printer driver
The Print Engine (CRPE) requires a printer driver be installed to provide support for background formatting. Ensure a printer is installed on your BusinessObjects Enterprise server. If you are distributing your installation across multiple computers, ensure that a printer driver is installed on the Crystal Reports Job Server, Report Application Server, and Crystal Reports Page Server machines.

Installing BusinessObjects Enterprise


Before you begin installing BusinessObjects Enterprise, determine what type of installation will suit your needs.


Installation types
When installing BusinessObjects Enterprise, you can select between the following installation types:

New
Perform a new installation when you want all BusinessObjects Enterprise components to be installed on a single machine that is already running web services.

Expand
Once BusinessObjects Enterprise is up and running, an expanded installation allows you to add server components, such as the CMS, to other servers in your environment to distribute processing workload.
Note: Systems can be scaled to handle increased usage or to add fault tolerance. Adding servers is called scaling horizontally, while adding processors to a machine is called scaling vertically. In a horizontally-scaled system, BusinessObjects Enterprise components are installed on multiple machines. In a vertically-scaled system, multiple BusinessObjects Enterprise server components run on the same machine. Note that a single-server, vertically-scaled system improves BusinessObjects Enterprise's ability to handle increased usage, but does not increase the fault tolerance of the system.

Custom
Performing a custom installation allows you to select exactly which components will be installed.

Silent installation
For experienced administrators, the silent installation allows you to install from the command line. It is recommended that you run this installation when you need to install quickly on multiple machines. You can also incorporate the silent installation command into your own build scripts.

Note: For detailed information and procedures for each installation type, consult the BusinessObjects Enterprise Installation Guide.

Testing the installation


Once you have installed the BusinessObjects Enterprise system, you should verify that its components are installed successfully. Checking these components will help you troubleshoot any issues that may occur during the installation process. Confirm the installation was successful by verifying whether you are able to:
- Log into the Central Management Console (CMC).
- View a Crystal report and a Web Intelligence document on demand.
- Schedule a Crystal report and view the scheduled instance.
- Publish a Web Intelligence document, Crystal report, and Desktop Intelligence document.
- View the scheduled instances.


Activity: Wdeploy
Objectives
The Beijing office of Jade Publishing has a DMZ, and corporate policy states that no application servers can be located within the DMZ. The goal of this activity is to use wdeploy to deploy the static (HTML) content of InfoView and the CMC to the Apache web server, while deploying the dynamic content of InfoView and the CMC to the Apache Tomcat web application server.

Web Server and Web Application Server setup


The following steps set up a web and web application server with no BOE web applications. To complete this activity, the Apache web server must be installed and a bridge between Apache and Tomcat must be built. This bridge allows Apache to forward all servlet requests for dynamic content to Tomcat. The bridge is built using a module called mod_jk, which allows the web server (Apache) to recognize servlet requests and forward them to be processed by the web application server (Tomcat).

1. Install Apache HTTP Server.
   a. Download Apache from http://httpd.apache.org/download.cgi
   b. Specifically http://mirror.public-internet.co.uk/ftp/apache/httpd/binaries/win32/apache_2.2.9-win32-x86-no_ssl-r2.msi
   c. Run the Apache installation file and accept all defaults. This assumes that BOE and the bundled Tomcat web application server have been installed to the default location of C:\Program Files\Business Objects.
2. Once the installation has completed, test that Apache is up and running.
   a. In a browser, type http://localhost; the "It works" page should be loaded.
3. Test that Tomcat is up and running.
   a. In a browser, type http://localhost:8080; the Tomcat homepage should be loaded.
4. Undeploy all BOE web applications.
   Caution: If you are using a VM image, take a snapshot of your VM if you would like to be able to roll back from this action. This action will remove all BusinessObjects Enterprise web applications, including InfoView and the CMC. They will no longer be accessible after running this command.
   a. Stop the Apache Tomcat service.
   b. Modify the following file with the Tomcat 5.5 settings: C:\Program Files\Business Objects\deployment\config.tomcat55
      Note: This file contains details pertaining to Tomcat necessary for wdeploy.

Deploying a SystemLearners Guide

133

c. Open the file in a text editor and make sure that the following lines exist:
as_dir=C:\Program Files\Business Objects\Tomcat55
as_instance=localhost
as_service_name=BOE120Tomcat

   d. Open a DOS Command Prompt.
   e. Navigate to C:\Program Files\Business Objects\deployment
   f. Run the following command: wdeploy.bat tomcat55 undeployall
      Note: This operation will take a while to execute. During testing it took 1 minute and 22 seconds. When complete, you should see the following message: "Build Successful".
5. Make sure the InfoView and CMC web applications are not deployed.
   a. Navigate to C:\Program Files\Business Objects\Tomcat55\Webapps. There should not be any BusinessObjects Enterprise web application folders.
   b. Test that InfoView and the CMC are not accessible from:
      http://localhost:8080/InfoViewApp
      http://localhost:8080/CmcApp
6. Ensure the static contents of the BusinessObjects Enterprise war files are not deployed to Apache. There should be no BusinessObjects Enterprise static content deployed to C:\Program Files\Apache Software Foundation\Apache2.2\htdocs

7. Stop the Apache and Apache Tomcat services. To stop Apache, go to Start > Programs > Apache Http Server 2.2 > Control Apache Server > Stop.

Build the bridge between Apache and Tomcat


The following steps explain how to build a bridge between Apache and Tomcat. These steps reference files that have been configured for a default BusinessObjects Enterprise deployment.

1. You will now modify the Apache configuration file httpd.conf.
2. Modify httpd.conf, located in C:\Program Files\Apache Software Foundation\Apache2.2\conf
3. Open the file in a text editor and add the following line at the end of the file: Include conf/mod_jk.conf
4. This modification tells Apache to 'include' a configuration file defining the mod_jk module as well as the workers.properties.
5. Mod_jk is a module that allows Apache to recognize servlet requests; Apache then forwards these requests to Tomcat.


6. Workers.properties defines a worker using the AJP13 protocol.
   Note: For more information please refer to:
   http://tomcat.apache.org/connectors-doc/ajp/ajpv13a.html
   http://tomcat.apache.org/connectors-doc/reference/workers.html
   http://tomcat.apache.org/connectors-doc/webserver_howto/apache.html
7. Place the following files from the resource CD into the appropriate directories:
   a. mod_jk.conf and workers.properties both go in C:\Program Files\Apache Software Foundation\Apache2.2\conf
   b. mod_jk.so goes in C:\Program Files\Apache Software Foundation\Apache2.2\modules. mod_jk.so is the module that enables this functionality in Apache (see http://tomcat.apache.org/connectors-doc/webserver_howto/apache.html).
8. You must now update the Tomcat web application server to listen on a specific port for requests forwarded from the Apache web server.
9. Open server.xml in a text editor. This file is located in C:\Program Files\Business Objects\Tomcat55\conf
10. Uncomment the following line in server.xml by removing the <!-- and -->:

<!-- <Connector enableLookups="false" port="8009" protocol="AJP/1.3" redirectPort="8443"/> -->

Note: This allows Tomcat to listen on port 8009 for servlet requests related to the AJP13 protocol.
Note: Ensure that the strings are not concatenated on the above line. If the strings are concatenated, it can cause errors.

Wdeploy configuration
The following steps set up the wdeploy configuration files for Apache and Tomcat. These files must be manually configured in order to successfully run wdeploy, which will configure the web and web application servers.

1. Locate wdeploy.bat, located at C:\Program Files\Business Objects\deployment\wdeploy.bat
2. Modify config.apache.
   a. This file is located under C:\Program Files\Business Objects\deployment.


   b. Open the file in a text editor and make the following changes:

ws_dir=C:\Program Files\Apache Software Foundation\Apache2.2
connector_type=tomcat55
deployment_dir=C:\Program Files\Apache Software Foundation\Apache2.2\htdocs

Run wdeploy
1. Run wdeploy.
   a. Open a DOS Command Prompt.
   b. Navigate to C:\Program Files\Business Objects\deployment
   c. Run the following command:
wdeploy.bat tomcat55 -Das_mode=distributed -Dws_type=apache deployall

This command will split the static and dynamic content of the BusinessObjects Enterprise web applications and deploy static content to Apache and dynamic content to Tomcat.
Note: This operation will take several minutes. During testing it took 19 minutes and 12 seconds. When complete, you should see the following message: "Build Successful".
The static and dynamic content of the BusinessObjects Enterprise war files are split and dropped into the wdeploy workdir:
Dynamic content: C:\Program Files\Business Objects\deployment\workdir\tomcat\application
Static content: C:\Program Files\Business Objects\deployment\workdir\tomcat\resources
The dynamic content of the war files is deployed to Tomcat in: C:\Program Files\Business Objects\Tomcat55\Webapps

The static content of the war files is deployed to Apache HTTP Server in: C:\Program Files\Apache Software Foundation\Apache2.2\htdocs

2. Start the Apache and Tomcat services.
Note: It may take a while for Tomcat to start up completely.
3. When the deployment is complete, log into InfoView and the CMC.
Note: http://machine1/CmcApp should work if you enter the machine3 CMS details, and http://machine1/InfoViewApp should work.
Note: http://localhost:8080 should work, but http://localhost:8080/CmcApp and http://localhost:8080/InfoViewApp should not, as static content no longer resides on the Tomcat server.


Separating static and dynamic content onto different machines


Most enterprise deployments adhere to strict security policies. One common policy is to place the web server on a machine within a DMZ, with the result that no web application server running dynamic content resides in the DMZ. A firewall then separates the web and web application servers, making the deployment more secure.
Separate the static web server (Apache) from the dynamic one (Tomcat) so that they reside on different machines.
1. On Machine 1, where you have already configured the Apache server, stop Apache.
2. Stop Tomcat.
3. Configure Apache to forward requests to Machine 2 (on which Tomcat resides) by editing the workers.properties file found in the C:\Program Files\Apache Software Foundation\Apache2.2\conf directory. The text should look like this:
# Define 1 real worker using ajp13
worker.list=ajp13

# Set properties for worker1 (ajp13)
worker.ajp13.type=ajp13
worker.ajp13.host=Machine2
worker.ajp13.port=8009

Note: Machine2 is either the name or the IP address of Machine 2.
Note: The ajp13 name is hardcoded in all the configuration files that BOE deploys. Ensure that the worker.list=ajp13 line is present.
4. Start the Apache web server on Machine 1. Do not start Tomcat on Machine 1.
5. Stop Tomcat on Machine 2.
6. Configure Tomcat to accept AJP13 requests by listening on port 8009. You can do this by editing the server.xml file found in the C:\Program Files\Business Objects\Tomcat55\conf directory. Uncomment the following line in server.xml by removing the enclosing comment markers:
<Connector enableLookups="false" port="8009" protocol="AJP/1.3" redirectPort="8443"/>

This allows Tomcat to listen on port 8009 for servlet requests using the AJP13 protocol.
Note: It might also be necessary to ensure that the strings on the above line are not concatenated. During testing, concatenated strings caused errors.
7. Start Tomcat on Machine 2.
8. Test the system by accessing the following link from any network station:


http://Machine1/CmcApp or http://Machine1/InfoViewApp
Note: Replace Machine1 with the actual machine name or IP address.

Configuring BusinessObjects Enterprise


This section reviews the configuration settings available for each BusinessObjects Enterprise server.
Note: This section does not describe the server configuration settings in detail. For more information on configuring and administering BusinessObjects Enterprise servers, it is recommended that you attend the BusinessObjects Enterprise Administering Servers course.

Intelligence Tier
The Intelligence Tier contains the "brain" of the BusinessObjects Enterprise environment.

Server Intelligence Agent (SIA)


The SIA can be configured in the Central Configuration Manager (CCM). The following tabs are available:
Properties
Dependencies
Startup
Configuration
Protocol

Properties
The Properties tab provides the following information about the SIA and any other listed service:
Server Type
Display Name
Server Name
Command Line
Startup Type
Service Logon information

Dependencies
This tab shows the list of services that the SIA startup depends upon.
Note: If you are having problems with the SIA or the application server, go to the Dependencies tab to ensure that all the required services are started.


Startup
The Startup tab has two dialog boxes: one for the local CMS server, and another for the remote CMS server. This is the bootstrap (pointer to the CMS) information for a given SIA. A local CMS server entry indicates that the SIA is responsible for starting the CMS. A remote CMS server entry indicates which CMS the SIA needs to register with.
Note: There can be values in both dialog boxes if you have more than one CMS in the cluster.

Configuration
The Configuration tab allows you to modify the port number of the SIA and, if the SIA manages the locally installed CMS, the CMS system and audit database configuration options.
Note: If the SIA does not manage the locally installed CMS, the Configuration tab allows you to modify only the port number of the SIA and will not show any CMS configuration information.

Protocol
The Protocol tab allows you to modify the SSL encryption for communications between the servers.

Central Management Server


BusinessObjects Enterprise system database
For production environments, it is recommended that the CMS database not be placed on:
The database server against which the reports are running. If the database server is busy processing the data for reports, the retrieval time for the CMS to query the same server will be negatively impacted.
The Crystal Reports Job Server or Crystal Reports Page Server. If the Crystal Reports Page Server or Crystal Reports Job Server is busy processing the data for reports, the retrieval time for the CMS to query the same server will be negatively impacted.
A CMS machine in a clustered environment. The system load of a CMS on a database server will be higher than that of a CMS that does not run on a database server. As a result, CMS load balancing may be negatively impacted.

Guidelines for CMS clustering


Note: CMS servers in a cluster for the most part rely on a round-robin mechanism.
Run each CMS cluster member on the same operating system.
Run each CMS cluster member on a machine that has the same amount of memory and the same type of CPU.
Each machine should be configured in the same manner:
Install the same operating system service packs and patches.


Install the same version of BusinessObjects Enterprise, including any applicable patches.
Ensure that each CMS connects to the BusinessObjects Enterprise repository database in the same manner; whether you use native or ODBC drivers, ensure that the drivers are the same on each machine.
Check that each CMS uses the same database user account and password to connect to the BusinessObjects Enterprise system database.
If you wish to enable auditing, specify an audit data source for each CMS. Ensure that each CMS connects to the same common audit database, in the same manner.
Run each CMS service under the same network account.
Verify that the current date and time are set correctly on each CMS machine.
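The homogeneity checklist above is mechanical enough to script during a deployment review. The following sketch is purely illustrative (the machine names and setting keys are hypothetical, not part of the product): it compares per-machine settings and reports any that differ across cluster members.

```python
def cluster_mismatches(members):
    """members: dict of machine name -> dict of settings
    (e.g. OS patch level, BOE version, DB driver, service account).
    Returns a list of (setting, {machine: value}) for settings whose
    values are not identical on every cluster member."""
    all_keys = set()
    for conf in members.values():
        all_keys.update(conf)

    mismatches = []
    for key in sorted(all_keys):
        values = {m: conf.get(key) for m, conf in members.items()}
        if len(set(values.values())) > 1:  # differs somewhere in the cluster
            mismatches.append((key, values))
    return mismatches
```

For example, feeding it two members that differ only in patch level returns a single mismatch for that setting, flagging the machine that still needs the fix pack.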

If the database server is multi-homed, add one of its valid IP addresses to the hosts file on each CMS machine. This ensures that each cluster member communicates with the database on the same IP address, without relying upon a WINS server for name resolution.
By default, a CMS cluster name reflects the name of the first CMS that you configured to talk to the system database, and the cluster name is prefixed by the @ symbol. For instance, if your existing CMS is called EnterpriseCMS, then the default cluster name is @ENTERPRISECMS. The cluster name can easily be changed to a more meaningful name if desired; renaming the cluster is optional.
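The hosts file entry mentioned above might look like the following on each CMS machine (the name and address are hypothetical; on Windows the file is typically found under \system32\drivers\etc\hosts):

```
# Pin the multi-homed database server to one of its interfaces
10.0.0.25    dbserver
```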

Clustering CMS machines


Clustering involves pointing more than one CMS to a common system database. In order for each CMS to access the system database, it is necessary to configure the database connectivity on each CMS machine (such as ODBC for SQL Server, or native connectivity for Oracle). The type of connectivity required is based on the database used in the deployment.
Note: If you are using ODBC, it is possible to create this connection while going through the "Specify CMS Data Source" wizard. It is, however, recommended that you create the database connection and define an ODBC data source prior to setting up the cluster.
There are two ways to add a new CMS cluster member to an existing node using the CMC:
Create a new CMS server on a node
Clone an existing CMS server from one node to another
Note: Throughout this lesson, the independent CMS refers to the one that you want to add to a cluster. You will add the independent CMS to your production CMS cluster. By adding an independent CMS to a cluster, you disconnect the independent CMS from its own database and instruct it to communicate with the system database that belongs to your production CMS cluster.
Before you begin to cluster CMSs, ensure that:
You have a database user account with create, delete, and update rights to the database storing the BusinessObjects Enterprise tables.


You can connect to the database from the machine that is running the independent CMS by using your database client software or through ODBC, as appropriate for your configuration.
The CMS you are adding to the cluster meets the requirements outlined in the section titled Guidelines for CMS clustering.
You back up your current CMS database before making any changes.

To copy CMS system database from MySQL to SQL Server


By default, BusinessObjects Enterprise connects to a MySQL database. First you need to copy the BusinessObjects Enterprise system database from MySQL to SQL Server using the Central Configuration Manager (CCM). At the end of this exercise you will have moved the CMS system database from MySQL to SQL Server. Before you do that, you need to create an empty database on the SQL Server.
Note: The following steps assume that you are working with a SQL Server database and will be copying content from MySQL to SQL Server.
1. On the production CMS (CMS1) machine, use the SQL Server Enterprise Manager to create a new database, for example "BOEsystem". This database will host the CMS system database.
2. Using the ODBC Data Source Administrator, create a new System DSN for the database, for example "BOEsystemDSN".
3. Using the CCM, stop the SIA and Tomcat.
4. From the CCM toolbar, click Specify CMS Data Source. The CMS Database Setup wizard appears.
5. Select Copy data from another Data Source. Click OK.
6. Ensure that you specify From: MySQL (original) and To: SQL Server (newly created), using the appropriate configuration parameters.
7. Start the SIA and start Tomcat.
8. You should be able to log into the CMC and InfoView and view the contents as you did before.

To create a Node (SIA)


1. On the second CMS machine, create an ODBC data source (BOEsystemDSN) that points to the production CMS database (BOEsystem, located on the first CMS machine: CMS1).
2. From the CCM, click Add SIA, then click Next.
3. Enter the node name and port number, for example "NodeMachineName2" and "Port: 7410". Click Next.
4. Enter the system name to which the node needs to be added (the CMS1 machine name) and the Administrator credentials for the CMS1 system. Click Next.
5. Click Finish.


6. Start the newly created SIA node.

To add a CMS server to a newly created SIA


1. Log onto the CMC on the primary system (CMS1).
2. Go to Servers > Nodes.
3. Select the newly created node and click Manage > New > New Server.
4. In Select Service, ensure Central Management Service is selected and click Next. You will see the following warning: Before creating the CMS please make sure that the machine hosting the node has a properly configured database connection to the CMS system database.
5. Click Create. You now see the new CMS created on the node.
Note: Rather than creating a brand new CMS server on the Machine 2 node, you could have cloned CMS1 from the original SIA node to the newly created SIA node on Machine 2.

To configure a second CMS to join a cluster


1. Use the CCM to stop the SIA on the independent CMS machine.
2. With the SIA selected, click Specify CMS Data Source on the toolbar. The CMS Database Setup dialog box appears.
3. Click the Update Data Source Settings option. The Select Database Driver dialog box appears.
4. In the Select Database Driver dialog box, ensure the SQL Server (ODBC) option is selected, and then click OK. The Windows Select Data Source dialog box appears.
5. Select the Machine Data Source tab, highlight the connection that corresponds to the production CMS database (the BOEsystemDSN that you created previously), and click OK. If prompted, provide your database credentials and click OK. The CMS server connects to the database server on Machine 1 (production).
6. Start the SIA.
7. It is possible that CMS2 is not configured to start automatically when the SIA starts. Using the CMC, start CMS2 and configure it to start automatically when the SIA starts.
8. CMS1 and CMS2 should now be clustered.

To change the cluster name


By default, a CMS cluster name reflects the name of the first CMS in the cluster, prefixed by the @ symbol. For example, if your existing CMS is called ENTERPRISECMS, then the default cluster name is @ENTERPRISECMS.
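The default naming convention described above can be expressed as a one-liner. This is a sketch of the convention only, not product code:

```python
def default_cluster_name(first_cms_name: str) -> str:
    # The default cluster name is the first CMS's name, upper-cased,
    # prefixed with the @ symbol (e.g. ENTERPRISECMS -> @ENTERPRISECMS).
    return "@" + first_cms_name.upper()
```

For example, `default_cluster_name("EnterpriseCMS")` yields "@ENTERPRISECMS", matching the example above.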


The CCM on any CMS cluster member can be used to change the name of the CMS cluster. To rename the CMS cluster, you must first use the CCM to stop any one node that hosts a CMS in the cluster. Once the name has been changed, all cluster members are automatically informed of the change.
Note: The name change may take a few moments to propagate throughout the cluster.
1. Use the CCM to stop any SIA that hosts a CMS server that is a member of the cluster.
2. With the SIA selected, click Properties on the toolbar.
3. Click the Configuration tab.
4. Select the Change Cluster Name to check box.
5. Type the new name for the cluster.
6. Click OK, and then start the SIA. The CMS cluster name is now changed. All other CMS cluster members are dynamically notified of the new cluster name (although it may take several minutes for your changes to propagate across cluster members).

To remove a CMS from the cluster


1. Log into the CMC and go to Servers > Nodes.
2. Select the node containing the CMS to be removed.
3. Select the Central Management Server and click Manage > Delete.
4. Go to the CCM on the machine on which the CMS server used to reside and delete that SIA (assuming no other servers reside within that SIA).
Note: On the independent and production CMS machines, go to the Windows registry key HKey_Local_Machine\Software\Business Objects\Suite12.0\Enterprise\CMSClusterMembers and delete the cluster member that has been removed.

Activity: Clustering CMS machines


Objective
In this activity you will cluster CMS machines (Machine 3 with Machine 4).

Instructions
1. Complete this activity in groups of four. Setup of the machines:
Machine 1: Apache (Web Server)
Machine 2: Tomcat (Web Application Server)


Machine 3: CMS1, Input FRS, Output FRS, all other servers
Machine 4: CMS2 (clustered with CMS1)

1. Copy the CMS system database from MySQL to SQL Server on the secondary CMS machine.
2. Create a node on the secondary CMS machine.
3. Add a CMS server to the newly created node.
4. Configure the newly added CMS server to join the cluster of CMS1.
In this activity, the independent CMS (the machine having the CMS belonging to the node) refers to the CMS that is not currently included in the production cluster. You will add the independent CMS to the production CMS cluster. By adding an independent CMS to a cluster, you disconnect the independent CMS from its own database and instruct it to share the system database that belongs to your production CMS. The following diagram illustrates how the system database is used once the CMS machines are clustered:



Test the CMS cluster


1. Log into the CMC and access the Settings area. Click the Cluster tab. Here you will see three pieces of information:
CMS Name - the CMS that has authenticated you within the cluster.
Cluster Name - the name of the cluster.
Cluster Members - the members of the cluster.
Navigate in the CMC to ensure that there are no errors or other problems.

2. Using the CMC, stop the CMS that you are connected to within the cluster. Once you refresh the page in the CMC, you will notice that the CMS has stopped.
3. In the CMC or InfoView, move to another page. Notice that the name of the CMS that you are connected to has changed. This seamless transition between CMSs indicates that the CMS machines have been successfully clustered.

Crystal Reports Cache Server


To modify the performance settings for the Crystal Reports Cache Server, access the Servers area in the CMC. The performance settings for the Crystal Reports Cache Server are:

Idle Connection Timeout (minutes)


Alters the length of time the Crystal Reports Cache Server waits for further requests from an idle connection thread. The default value is 20 minutes. Reducing this value can improve resource-limited server performance by flushing the idle threads more often.



Note: Setting this value too low can result in user requests being closed prematurely. Setting this value too high can cause requests to be queued while waiting for idle jobs to be closed.

Oldest On-Demand Data Given to a Client (in seconds)


Determines how long cached report pages are used before new data is requested from the database. The default value is zero seconds. The optimal value is largely dependent upon your reporting requirements. Note: This setting is relevant to view On-Demand requests or view instance requests that require refresh against the database.
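The decision this setting drives can be sketched as follows. This is illustrative only; the actual server logic is internal to the product:

```python
def serve_from_cache(page_age_seconds: float, oldest_allowed_seconds: float) -> bool:
    """Return True if a cached page is still fresh enough to serve."""
    # With the default of 0 seconds, every on-demand view bypasses
    # the cache and requests fresh data from the database.
    if oldest_allowed_seconds <= 0:
        return False
    return page_age_seconds <= oldest_allowed_seconds
```

Raising the setting lets more requests be answered from cache at the cost of potentially stale data.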

Maximum Cache Size Allowed (KBytes)


Limits the amount of hard disk space used to store cached pages. Once this limit has been reached, .EPF files are deleted as required to remain below this maximum value. The Crystal Reports Cache Server deletes the least recently used .EPF files. The default maximum cache size allowed is 256000 kilobytes (KB). The optimal value is largely dependent upon your reporting requirements. If the Crystal Reports Cache Server handles large or complex reports, a larger cache size may be needed.
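The purge behavior described above amounts to least-recently-used (LRU) eviction. A minimal in-memory sketch, with hypothetical file names and a simplified data structure:

```python
def epf_files_to_evict(pages, max_cache_kb):
    """pages: dict of .EPF file name -> (last_used_timestamp, size_kb).
    Returns the least recently used files to delete so that total
    cache size drops back under max_cache_kb."""
    total_kb = sum(size for _, size in pages.values())
    doomed = []
    # Visit files oldest last_used first, deleting until under the limit.
    for name, (last_used, size_kb) in sorted(pages.items(), key=lambda kv: kv[1][0]):
        if total_kb <= max_cache_kb:
            break
        doomed.append(name)
        total_kb -= size_kb
    return doomed
```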

Cache Files Directory


Specifies the absolute path to the directory where the cached report pages (.EPF files) are stored. The default cache file directory is \Program Files\Business Objects\BusinessObjects Enterprise 12.0\Data\crystalReportsCacheServer\temp\computername.cacheserver,

where computername is the name of the BusinessObjects Enterprise Crystal Reports Cache Server. The cache directory should be located on a drive that is local to the server.

Viewer Refresh Always Yields Current Data


When enabled, the Viewer Refresh Always Yields Current Data setting allows users to explicitly refresh a report using the Refresh button in the report viewer. When disabled, the Crystal Reports Processing Server does not connect to the database to retrieve current data when the user clicks Refresh; the report is refreshed based on the Oldest On-Demand Data Given to a Client (in seconds) setting. This setting is enabled by default.
Consider that giving users the ability to refresh their report data at any time can impact the performance of the database server.


Input and Output File Repository Servers (FRS)


You can configure the File Repository Servers in the following ways:
Set root directories of the File Repository Servers
Set idle times of the File Repository Servers

Setting root directories of the File Repository Servers


The Properties tab of the Input and Output File Repository Servers enables you to change the locations of the default root directories. These root directories contain all of the report objects and instances on the system. You may change these settings if you want to use different directories after installing BusinessObjects Enterprise or if you move the file stores to a different drive. When you change the directories, the old directory paths become invalid.
Note: The Input and Output File Repository Servers must not share the same root directory.
You can set the location of the root directory using UNC or physical path names. It is recommended to put the root directory on a drive that is local to the server. Otherwise, the service must be started with a domain account that has local administrator access to where the files are saved.

Setting idle times of the File Repository Servers


You can set the maximum idle time of each File Repository Server. When a file is transferred to or from the File Repository Server, the file is first placed in a temp directory. If the machine transferring the file fails during the file transfer, the original copy of the file will not become corrupt. The Max Idle Time setting specifies how long the File Repository Server waits before it cleans up resources on the File Repository Server if a file transfer is aborted.
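The Max Idle Time cleanup can be pictured like this. The sketch is illustrative; the FRS implementation is internal to the product:

```python
def aborted_transfers(temp_files, max_idle_seconds, now):
    """temp_files: dict of temp file name -> last_activity_timestamp.
    A transfer whose temp file has seen no activity for longer than
    max_idle_seconds is treated as aborted; its resources are cleaned up,
    leaving the original copy of the file untouched."""
    return sorted(
        name for name, last_activity in temp_files.items()
        if now - last_activity > max_idle_seconds
    )
```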

Activity: Configuring an active/passive FRS


The customer, Jade Publishing, requires the following:
They would like to standardize on a single BI framework that is scalable and fault tolerant, supporting the best BI tools.
There is an immediate need for enterprise-wide query and analysis, file storage, and report distribution for Jade Publishing that will allow users to modify documents on the fly and build new queries to get answers.
Your current machine setup within your groups of four is:
Machine 1: Apache (Web Server)
Machine 2: Tomcat (Web Application Server)
Machine 3: CMS1, Input FRS, Output FRS, all other servers
Machine 4: CMS2 (clustered with CMS1)


Ensure that the files (document/report objects and document/report instances) are stored in central directories on Machine 2 (the Web Application Server machine). It will be necessary to move the files stored on CMS1 (Machine 3) from the Input and Output directories to the respective file store directories on Machine 2.
Ensure that you create new nodes on Machine 3 and Machine 4. These nodes will host the Input and Output FRS servers.
Ensure that you configure two Input FRS servers, one on Machine 3 (within the newly created node) and one on Machine 4 (within the newly created node). These two servers need to have the ability to retrieve and deposit documents from the centrally stored file share on Machine 2.

Note: Since the Input and Output servers need to cross the network to reach files, you need to configure the FRS servers to run not under the local system account but as a domain account with read/write privileges to the Machine 2 central file store directories.
Note: The servers run within the SIA node. It is the configuration of the SIA node that determines under which network account a given BOE server (such as an Input FRS server) runs.
Ensure that you configure two Output FRS servers, one on Machine 3 and one on Machine 4. These two servers need to have the ability to retrieve and deposit documents from the centrally stored file share on Machine 2.

Test the file store fault tolerance:


1. Ensure that the Input FRS servers run, one on Machine 3 and one on Machine 4. Both of the Input FRS servers should be pointing to the central file location on Machine 2.
2. Ensure that the Output FRS servers run on Machine 3 and on Machine 4. Both of the Output FRS servers should be pointing to the central file location on Machine 2.
3. Add a new Web Intelligence document to the system (either create a new one from scratch or copy an existing document to a new folder location within the system).


4. Schedule the newly added Web Intelligence document. Verify that you can view the newly created Web Intelligence document and its newly created successful instance.
5. Stop the Input FRS server that has been running the longest.
6. Stop the Output FRS server that has been running the longest.
7. Ensure that you can still view and edit the newly added Web Intelligence document or any other document. This proves that the system is fault tolerant when it comes to handling objects.
8. Ensure that you can still view existing successful Web Intelligence document instances or any other document instance. This proves that the system is fault tolerant when it comes to handling instances.
9. Stop the second Input FRS server and the second Output FRS server. At this point you should not have any running Input or Output FRS servers.
10. Try to preview an existing Web Intelligence document or any other object. You should get an error message indicating that the Input FRS server is not available.
11. Try to preview an existing Web Intelligence document instance or any other object instance. You should get an error message indicating that the Output FRS server is not available.
12. Start the Input FRS servers on Machine 3 and Machine 4.
13. Start the Output FRS servers on Machine 3 and Machine 4.
14. You should now be able to preview documents and instances again.

Adaptive Servers
The services included inside the Adaptive Processing Server and Adaptive Job Server do not have any configuration settings.

Event Server
The Event Server is responsible for monitoring files to trigger file-based events. The most important configuration setting for the Event Server is the file polling interval, which can be set in the Central Management Console. The Properties tab of the Event Server in the Central Management Console allows you to change the frequency with which the Event Server checks for file events. This File Polling Interval setting determines the number of seconds that the server waits between polls. The minimum value is one. Note: The lower the value, the more resources the server requires.
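A single polling pass can be sketched as a set difference between what the server saw on the previous poll and what is present in the monitored directory now. This is illustrative only; the Event Server's own logic is internal:

```python
def new_file_events(previously_seen, currently_present):
    """Files that appeared since the last poll trigger file-based events."""
    return sorted(set(currently_present) - set(previously_seen))
```

A shorter File Polling Interval means this comparison runs more often, firing events sooner but consuming more server resources.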


Processing Tier
The processing tier contains CPU hungry servers that perform various tasks within the BusinessObjects Enterprise system.

Job Servers
Job Servers are responsible for processing scheduled jobs (depending on the type of viewing request). The BusinessObjects Enterprise Job Server components include:
Crystal Reports Job Server
Web Intelligence Job Server
Desktop Intelligence Job Server
Destination Job Server
Publication Job Server
Program Job Server
List of Values Job Server
Although their responsibilities are different, the Properties settings are the same in the CMC. Access the server properties in the CMC to modify the performance settings for the different Job Servers. The performance settings for the Job Servers are:

Maximum Concurrent Jobs


This setting limits the number of concurrent child processes that a Job Server can initiate at one time, essentially controlling the number of scheduled reports that can be processed at one time. The default value is five. This can be modified to 10 on a dual processor computer and 20 on a quad processor computer. The ratio of five reports per processor is appropriate for most reporting environments. The value you set for this setting will vary depending on variables such as the hardware configuration, the database software being used, and your reporting requirements.

Temporary Directory
Job Servers require a temporary directory to store objects during processing. The objects stored on the File Repository Servers are compressed files and can expand up to 15 times their original size. Because of this, it is important to utilize a directory with excess space. If there is not sufficient hard drive space on the drive on which the temporary directory is located, the job will fail.
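The sizing concern above can be turned into a rough worst-case estimate. The 15x expansion factor comes from the text; the helper function itself is hypothetical:

```python
def temp_space_estimate_kb(avg_compressed_kb, max_concurrent_jobs, expansion_factor=15):
    """Rough worst-case temp space for a Job Server: each compressed object
    can expand up to ~15x during processing, and up to Maximum Concurrent
    Jobs objects may be processed at once."""
    return avg_compressed_kb * expansion_factor * max_concurrent_jobs
```

For example, with the default of five concurrent jobs and 1000 KB average compressed objects, the temp directory could need on the order of 75,000 KB.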


Desktop Intelligence Processing Server


The Properties tab of the Desktop Intelligence Processing Server allows you to modify the performance of the server in the CMC.

Location of Temp Files


The temp directory should be located on a drive that is local to the server. For optimal performance, the disk housing the temp folder must have a large amount of available space.

Idle Connection Timeout (minutes)


Idle time before the Report Engine can be reallocated to another user. Reducing this value can improve resource-limited server performance by flushing the idle threads more often. Setting this value too low can result in the user's requests being closed prematurely. Setting this value too high can cause requests to be queued while waiting for idle jobs to be closed. By default, this value is set at 20 minutes. The value can be between 1 and 28800 Minutes.

Idle Job Timeout (minutes)


This refers to idle time before the Report Engine can be closed. By default, this value is set at 20 minutes. The value can be between 1 and 28800 Minutes.

Number of Preloaded Report Jobs


The number of Report Engines preloaded to wait for user requests. This is set to 1 by default. The value can be between 0 and 10.

Maximum Concurrent Jobs (0 for automatic)


This setting controls the maximum number of Report Engines running concurrently. By default, the value is 10, but it can be varied between 1 and 10. This value must be two greater than the value set for the Number of Preloaded Report Jobs option.

Share Report Data Between Clients


Allows the data in the cache to be shared between users.

Viewer Refresh Always Yields Current Data


Retrieves fresh data from the database when a viewer refreshes.


Oldest On-demand Data Given to Clients (seconds)


If the document uses the server parameters for data sharing, this is the expiration time (in seconds) after which data must be retrieved from the database. The default value is 120 seconds. The value can be between 0 and 216000 seconds.

Maximum Operations Before Resetting a Report Job


This refers to the number of actions performed by the Report Engine before it is closed. This parameter ensures that Report Engines are recycled from time to time in order to avoid memory leaks. The default value is 1000 actions. The value can be between 10 and 1000.

Allow Running VBA


True if the Report Engines are allowed to run VBA scripts. This is true by default, but it can be disabled for security reasons.

Desktop Intelligence Cache Server


The Properties tab of the Desktop Intelligence Cache Server allows you to:
Set the location of the cache files
Set the maximum cache size
Set the maximum number of simultaneous processing threads
Set the number of minutes before an idle connection is closed
Set miscellaneous options related to sharing and refresh rates

Location of Cache Files


Specifies the absolute path to the directory where the cached report pages are stored. The default cache file directory is:
$INSTALLDIR\BusinessObjects Enterprise 12.0\Data\<server>.Desktop_IntelligenceCacheServer\

The cache directory should be located on a drive that is local to the server.

Maximum Cache Size Allowed (KBytes)


This setting limits the amount of hard disk space used to store cached pages. Once this limit has been reached, files are deleted as required to remain below this maximum value. The Desktop Intelligence Cache Server deletes the least recently used files first. The maximum cache size can be any value between 0 and 100,000,000 kilobytes (KB).


A cache size value of zero means there is no operating cache. The default cache size allowed is 512000 KB. The optimal value is largely dependent upon your reporting requirements. If the Desktop Intelligence Cache Server handles large or complex reports, a larger cache size may be needed.

Minutes Before an Idle Connection Timeout


This setting controls the amount of time a Report Engine can remain idle before it times out. By default, this value is 20 minutes. The value can be between 1 and 28800.

Share Report Data Between Clients


If the document uses the server parameters for data sharing, this setting determines whether, after a refresh, the data in the cache is shared between users. It is true by default.

Viewer Refresh Always Yields Current Data


When enabled, the Viewer Refresh Always Yields Current Data setting allows users to explicitly refresh a report against the database using the Refresh button in the report viewer. When disabled, the Desktop Intelligence Processing Server does not connect to the database to retrieve current data when the user clicks Refresh; instead, the report is refreshed based on the Oldest On-Demand Data Given to a Client (in seconds) setting. The setting is disabled by default. Consider that giving users the ability to refresh their report data at any time can impact the performance of the database server.

Oldest On-Demand Data Given to a Client (in seconds)


Determines how long cached report pages are used before new data is requested from the database. The default value is 120 seconds. The optimal value is largely dependent upon your reporting requirements. Note: This setting is relevant to view on-demand requests or view instance requests that require refresh against the database.
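The freshness rule above can be sketched as a single age check. The function name is illustrative, not part of the product.

```python
# Sketch of "Oldest On-Demand Data Given to a Client": cached data is reused
# only while its age is below the configured threshold; otherwise a fresh
# database query is required.
def can_reuse_cached_data(generated_at, now, max_age_seconds=120):
    """Return True if data generated at `generated_at` may still be served."""
    return (now - generated_at) < max_age_seconds
```

With the 120-second default, data generated 90 seconds ago is reused, while data that is 200 seconds old triggers a refresh against the database.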

Always Share Report Jobs


When set to true, a report job can be shared among several users. This setting is false by default.


Amount of Cache to Keep When Document Cache Is Full


Sometimes referred to as the cache purge ratio, this parameter can be used to tune how much of the cache is purged when it becomes full. For example, if the cache is loaded weekly but users need most of it to remain available for several days, you would raise this value. The value can be between 0% (cache disabled) and 100%. The default value is 70%.
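A sketch of the purge-ratio behaviour, assuming (as the text implies) that purging stops once the configured fraction of the maximum cache size is retained. Names are illustrative.

```python
# Sketch of "Amount of Cache to Keep When Document Cache Is Full": when the
# cache is full, the oldest entries are dropped only until the retained total
# falls to keep_ratio * max_cache_kb.
def purge(entries_kb, max_cache_kb, keep_ratio=0.70):
    """entries_kb: oldest-first list of entry sizes in KB.
    Returns the entries kept after purging."""
    target = int(max_cache_kb * keep_ratio)
    kept = list(entries_kb)
    while kept and sum(kept) > target:
        kept.pop(0)  # drop the oldest entry
    return kept

# Four 100 KB entries in a full 400 KB cache, 70% ratio: purge down to <= 280 KB.
remaining = purge([100, 100, 100, 100], max_cache_kb=400)
```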

Connection Server
The Connection Server works in much the same manner as the Desktop Intelligence Cache Server. The Properties tab of the Connection Server allows you to:
- Set the number of minutes before an idle transient object is timed out
- Set miscellaneous options related to client connection support

Idle Transient Object Timeout (minutes)


The number of minutes before an idle connection to the Connection Server will be closed.

Enable HTTP Client Support


The Connection Server can be accessed through a CORBA server and called through an HTTP Client. This option is enabled by default to allow HTTP client communication with the Connection Server.

Enable CORBA Client Support


The Connection Server can be accessed through a CORBA server and called through a CORBA Client. This option is enabled by default to allow CORBA client communication with the Connection Server.

Enable Execution Traces


This option allows the Connection Server's processing activities to be tracked and monitored.

Web Intelligence Processing Server


The Properties tab of the Web Intelligence Processing Server allows you to modify the performance of the server in the CMC. The performance settings are:

Maximum Connections
This limits the maximum number of connections that the server allows at one time from sources such as the Adaptive Job Server. The default value is 50. If this limit is reached, the user will receive an error message, unless another server is available to handle the request.


Idle Connection Timeout (minutes)


This refers to the number of minutes before an idle connection to the Web Intelligence Processing Server will close.

List of Values Batch Size


This is the maximum number of values that can be returned per list of values batch. If the number of values in a list of values exceeds this size, the list is returned to the user in several consecutive batches of this size or less.
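The batching behaviour described above can be sketched as a simple list split. This is an illustration of the rule, not the server's implementation.

```python
# Sketch of "List of Values Batch Size": a list larger than the batch size is
# returned in consecutive batches of at most that size.
def batches(values, batch_size):
    """Split `values` into consecutive chunks of at most `batch_size` items."""
    return [values[i:i + batch_size] for i in range(0, len(values), batch_size)]

# Seven values with a batch size of three yield batches of 3, 3, and 1.
result = batches(list(range(7)), batch_size=3)
```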

Maximum Custom Sort Size (entries)


The maximum size in this setting refers to the number of items that will be returned when a List of Values is used as part of a custom sort.

Universe Cache Maximum Size (Universes)


This is the number of universes to be cached on the Web Intelligence Processing Server.

Enable List of Values Caching


This enables or disables caching per user session of List of Values in the Web Intelligence Processing Server. The default is for the feature to be on.

Enable Real-Time Cache


When this parameter is on, the Web Intelligence Processing Server caches Web Intelligence documents when they are viewed. The server also caches the documents when they are run as a scheduled job, provided pre-caching was enabled for the document. When the parameter is off, the server caches the documents neither when they are viewed nor when they are run as a scheduled job. To improve system performance, set Maximum Documents in Cache to zero when this option is selected, and enter a value for Maximum Documents in Cache when this option is cleared.

Document Cache Size (KB)


The size of the document cache.

Document Cache Cleanup Interval (seconds)


This describes the amount of time that elapses between scans of the cache. These scans determine the size of the cache and, based on the settings above, initiate cache cleanup to meet those performance requirements.


Maximum Document Cache Size


The number of Web Intelligence documents that can be stored in cache. If real-time caching is enabled, set this to zero to improve system performance. If real-time caching is disabled, enter a value.

Binary Stream Maximum Size (MB)


This indicates a maximum size in megabytes for all binary files.

Maximum Character File Size


This indicates a maximum size in megabytes for all character files.

Crystal Reports Processing Server


The Properties tab of the Crystal Reports Processing Server allows you to modify the performance of the server in the CMC. The performance settings are:

Location of Temporary Directory


Specifies the absolute path to the directory where temp files are stored. The default temp file directory is:
\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Data\CrystalReportsProcessingServer\temp\computername.CrystalReportsProcessingServer

where computername is the name of the BusinessObjects Enterprise Crystal Reports Processing Server. The temp directory should be located on a drive that is local to the server. For optimum performance, the disk housing the temp folder must have a large amount of available space.

Maximum Concurrent Jobs


Limits the number of concurrent reporting requests processed. The default value is automatically chosen by the system. Optimal value is highly dependent upon your hardware configuration, your database software and your reporting requirements.

Idle Connection Timeout (minutes)


Alters the length of time that the Crystal Reports Processing Server waits for further requests from an idle connection thread. The default value is 20 minutes.


Reducing this value can improve resource-limited server performance by flushing the idle threads more often. Setting this value too low can result in a user's requests being closed prematurely. Setting this value too high can cause requests to be queued while waiting for idle jobs to be closed.

Idle Job Timeout (minutes)


Alters the length of time that the Crystal Reports Processing Server keeps a report job active. Setting this value too low can cause the report temp files to be deleted prematurely. Setting a value that is too high can cause system resources to be consumed for longer than necessary. The default setting is 60. This setting works in conjunction with the Report Job Database Connection setting.

Database Records Read When Previewing or Refreshing (0 for unlimited)


Allows you to limit the number of records that the server retrieves from the database when a user runs a query or report. This setting is useful when you want to prevent users from running on-demand reports containing queries that return excessively large record sets. An error is shown if the number of records to be retrieved from the database exceeds the number indicated in the setting.
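The record-limit behaviour can be sketched as a guard on the retrieved row count. Function and error names are illustrative; the product raises its own error message.

```python
# Sketch of "Database Records Read When Previewing or Refreshing": retrieval
# fails with an error once the database returns more rows than the configured
# maximum, with 0 meaning unlimited.
def fetch_records(rows, max_records):
    """Return the rows, or raise if they exceed the configured limit."""
    if max_records and len(rows) > max_records:
        raise RuntimeError(
            f"Query returned {len(rows)} records, "
            f"exceeding the limit of {max_records}")
    return rows
```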

Oldest On-Demand Data Given to a Client (seconds)


Controls how long the Crystal Reports Processing Server can fulfil new requests using data that was generated to meet a previous request. If the time elapsed since the data was generated is less than the value set here, then the Crystal Reports Processing Server reuses this data to meet subsequent requests. Reusing data improves system performance when multiple users need the same information. Consider how important it is that your users receive up-to-date data.

Viewer Refresh Always Yields Current Data


Ensures that when users explicitly refresh a report, all previously processed data is ignored and new data is retrieved directly from the database. Disabling the setting ensures that the Crystal Reports Processing Server will treat requests generated by a viewer refresh in exactly the same way as it treats new requests.


Allow Report Jobs to Stay Connected to the Database until the Report Job is Closed
Clearing this option limits the amount of time that the Crystal Reports Processing Server stays connected to your database server, and therefore limits the number of database licenses consumed by the Crystal Reports Processing Server. Use this setting to make a trade-off between the number of database licenses you use and the performance you can expect for certain types of reports.

Report Application Server


The Properties section of the RAS in the Central Management Console lets you modify the way the server runs reports against your databases. The RAS performance settings are:

Report Job database connection


These settings can be used to balance between the number of database licenses you use and the performance you can expect for certain types of reports. However, if the RAS needs to reconnect to the database to generate an on-demand subreport or to process a group-by on the server command for that report, performance for these reports will be significantly slower than if you had selected Disconnect when the job is closed. (The latter option ensures that the RAS stays connected to the database server until the report job is closed.)

Number of database records to read when previewing or refreshing a report (-1 for unlimited)
This allows you to limit the number of records that the server retrieves from the database when a user runs a query or report. This setting is particularly useful if you provide users with ad hoc query and reporting tools, and you want to prevent them from running queries that return excessively large record sets. In addition, the RAS must be configured to read at least the same number of database records as the number of recipients in the dynamic recipient source. For instance, to process a dynamic recipient source with data for 100,000 recipients, the RAS must be set to read more than 100,000 database records. Note: For performance reasons, it is not recommended to set the number of database records to unlimited.

Browse Data Size/Batch Size


This setting allows you to specify the number of distinct records that will be returned from the database when browsing through a particular field's values. The data will be retrieved first from the client's cache, if it is available, and then from the server's cache. If the data is not in either cache, it is retrieved from the database.
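The lookup order described above can be sketched as a tiered cache: client cache first, then server cache, then the database. All names are illustrative.

```python
# Sketch of the browse-data lookup order: client cache -> server cache ->
# database, promoting values into the faster tiers as they are found.
def browse_values(field, client_cache, server_cache, query_db):
    """Return (values, tier) where tier records where the data was found."""
    if field in client_cache:
        return client_cache[field], "client"
    if field in server_cache:
        values = server_cache[field]
        client_cache[field] = values     # promote to the client cache
        return values, "server"
    values = query_db(field)             # last resort: hit the database
    server_cache[field] = values
    client_cache[field] = values
    return values, "database"

# First lookup hits the server cache; the repeat is served from the client.
client, server = {}, {"region": ["EMEA", "APJ"]}
first = browse_values("region", client, server, lambda f: [])
second = browse_values("region", client, server, lambda f: [])
```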


Idle Connection Timeout


The "Idle Connection Timeout" setting alters the length of time that the RAS waits for further requests from an idle connection. Before you change this setting, it is important to understand that setting a value too low can cause a user's request to be closed prematurely, and setting a value that is too high can affect the server's scalability (for instance, if the ReportClientDocument object is not closed explicitly, the server will be waiting unnecessarily for an idle job to close).

Allow Report Jobs to Stay Connected to the Database until the Report Job is Closed
This setting is used to balance between the number of database licenses you use and the performance you can expect for certain types of reports.

Maximum Concurrent Report Jobs (0 for unlimited)


This setting limits the number of concurrent reporting requests that a RAS processes. The default value is acceptable for most, if not all, reporting scenarios.

Activity: Configuring processing tier servers


Instructions
1. Set up the machines as follows:
   - Machine 1: Web Server
   - Machine 2: Web Application Server
   - Machine 3: CMS1, WebiPS, CRJS, CRPS, CRCS, WebiJS, DeskiCS, Input1, Output1
   - Machine 4: CMS2, WebiPS, CRJS, CRPS, DeskiPS, Input2, Output2

2. Servers need to be configured as follows.
3. Web Server
   - Relevant information: a web-based BOE environment requires a web server
   - Threshold: 400 concurrent user sessions (user session = 1 logged-on user) or 100 simultaneous requests per processor (a request is an action such as a user clicking on a folder)
   - Actual number of servers required: 1 (for fault tolerance, you could argue for a second, but keep 1)
4. Web Application Server
   - Relevant information: number of simultaneous requests: 150
   - Threshold: 600 simultaneous requests per CPU
   - Actual number of servers required: 1 (for fault tolerance, you could argue for a second, but keep 1)


5. CMS
   - Relevant information: number of simultaneous requests: 150
   - Threshold: 500 simultaneous requests per CPU and/or 600 per CMS service and/or 1 CPU for every estimated 100 simultaneous user requests (this number may vary greatly depending on the type of action)
   - Actual number of servers required: 1, but go with 2 as fault tolerance of this component is critical
6. Web Intelligence Processing Server
   - Relevant information: 40 viewing requests + 5 jobs running at peak time
   - Threshold: 25 simultaneous viewing sessions (1 processing service installed per machine)
   - Actual number of servers required: 2
7. Web Intelligence Job Server
   - Relevant information: 20 jobs running at off-peak time, 5 jobs at peak time
   - Threshold: 100 jobs
   - Actual number of servers required: 1
8. Desktop Intelligence Processing Server
   - Relevant information: 20 simultaneous viewing sessions
   - Threshold: 8-12 per CPU (1 processing service installed per machine)
   - Actual number of servers required: 1
9. Desktop Intelligence Cache Server
   - Relevant information: 20 simultaneous viewing sessions
   - Threshold: 50 per single CPU and 200-400 per cache service
   - Actual number of servers required: 1
10. Desktop Intelligence Job Server
   - Relevant information: no Desktop Intelligence documents are scheduled
   - Threshold: 5 per CPU
   - Actual number of servers required: 0
11. Crystal Reports Processing Server
   - Relevant information: 75 simultaneous viewing sessions
   - Threshold: 25-75 simultaneous viewing sessions per CPU (assume 50)
   - Actual number of servers required: 2
12. Crystal Reports Cache Server


   - Relevant information: 75 simultaneous viewing sessions
   - Threshold: 200 simultaneous viewing sessions per CPU and 400 sessions per cache service
   - Actual number of servers required: 1
13. Crystal Reports Job Server
   - Relevant information: 200 reports at month end, each Crystal Report runs on average for 10 minutes, time to complete is 5 hours = 300 minutes
   - Threshold: 5 jobs per CPU
   - Actual number of servers required: (200 reports x 10 minutes) / (5 reports (because 1 CPU) x 300 minutes) = 1.3 (round up to 2)
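The sizing arithmetic used in this activity can be checked with a short script. The thresholds and workloads below come from the activity text; the functions are simple capacity divisions, not a product sizing tool.

```python
import math

# Number of servers needed so that `requests` fit within a per-server threshold.
def servers_for_requests(requests, threshold_per_server):
    return max(1, math.ceil(requests / threshold_per_server))

# Number of job servers: total report-minutes divided by what one CPU can
# process within the completion window.
def job_servers(reports, avg_minutes, jobs_per_cpu, window_minutes):
    return math.ceil((reports * avg_minutes) / (jobs_per_cpu * window_minutes))

# Web application server: 150 simultaneous requests at 600 per CPU -> 1 server.
web_app_servers = servers_for_requests(150, 600)

# Crystal Reports Job Server: (200 reports x 10 min) / (5 jobs x 300 min)
# = 2000 / 1500 = 1.33, rounded up to 2.
cr_job_servers = job_servers(200, 10, 5, 300)
```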

Sizing discussion for Guangzhou and Shanghai


You could make a case for a distributed environment in which the servers (in fact, only one would be sufficient in each location) are part of a single Beijing CMS cluster deployment. No CMSs would be necessary in the remote offices (Guangzhou and Shanghai). You could designate the reports/documents that refresh against a database in a remote office to execute on the server group in that office. However, Guangzhou and Shanghai are running their own organizations for the most part, and both offices need to communicate with the head office only on rare occasions. It is probably best that the Guangzhou and Shanghai offices have their own CMS clusters. The reports that need to be shared between Beijing and the remote offices could be shared via Federation technology.

Activity: Revert back to a single machine BusinessObjects Enterprise deployment


Overview
This activity is meant for you to practice reverting back to the original setup in case it is required. Also, from a facilitation and educational point of view, it will be most beneficial if you work on your own system for the rest of the course. The Content Management topics that you will be working on in the next lesson are best understood, and give you the best experience, if you work on your own system.

Instructions
1. Make sure that your system reverts back to a default single-machine deployment. Your locally installed Tomcat should forward requests to a locally installed CMS, and the CMS should communicate with all other BusinessObjects Enterprise servers on the same local machine. Your machine should still be able to ping other systems by name/IP, but no other systems should be part of your deployment.
2. Make sure that at the end of this exercise your CMC and CCM point to the locally installed servers. No reference should exist pointing to servers installed on other machines.


3. At the end of this activity, as a group, review the steps that were required on each system and verify that your group members followed them correctly.


Troubleshooting BusinessObjects Enterprise


The purpose of this unit is to provide you with the resources to assist you in resolving issues within BusinessObjects Enterprise. After completing this unit, you will be able to:
- Use best practices for troubleshooting
- Use a strategic troubleshooting method

Using best practices when troubleshooting


There are several strategies you can employ to eliminate unwanted behavior. The following are considered best practices that will facilitate your troubleshooting efforts:
- Ensure that client and server machines are running supported operating systems, database servers, database clients, and appropriate server software. For details, perform a keyword search on the Business Objects support site.
- Verify that the problem can be reproduced, and take note of the exact steps that cause the problem to recur.
- Use the sample reports and sample data included with the product to confirm whether or not the same problem exists by trying to preview or schedule them.
- If the problem relates to connectivity or functionality over the web, check that BusinessObjects Enterprise is integrated properly with your web environment.
- If the problem relates to viewing or processing, verify your database connectivity and functionality from each of the affected machines.
- Use the appropriate Windows client to try to view and refresh the affected document or report on that server. For example, if you cannot view a .rep instance, go to the machine that is running the Desktop Intelligence Cache Server and try to view that document through the Desktop Intelligence application.
- Look for solutions in the documentation and release notes included with your product.
- Refer to the Business Objects Customer Support website for white papers, files and updates, user forums, and Knowledge Base articles.
- When the problem relates to viewing or processing Web Intelligence or Desktop Intelligence documents, validate the SQL statement from the appropriate Report Server.
- Check the connection type for the universe being used. Remember that all connections for viewing via InfoView require a secured connection.
- Make sure no files are missing and all necessary components are installed.
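When verifying connectivity between machines, a minimal port probe can save time. This is a generic sketch, not a product tool; the assumption here is that 6400 is the default CMS request port in XI 3.x, so adjust host and port to your deployment.

```python
import socket

# Probe whether a TCP port (for example, a CMS listening on its request port)
# accepts connections from this machine.
def port_reachable(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:        # refused, unreachable, or timed out
        return False

# Example usage against a hypothetical CMS host:
#   port_reachable("cms-host.example.com", 6400)
```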

Using a strategic troubleshooting method


The strategic troubleshooting method described below is one of many that can be used to identify and resolve issues with BusinessObjects Enterprise. While you may not need to use every item listed below, keeping these items in mind will assist you in the troubleshooting process.


You will also find it essential to document everything you do as part of your troubleshooting process. Document not just the problem, but also the steps used to replicate and resolve the problem including the tests indicated below. Should you need to involve Business Objects technical support, this documentation greatly improves your chances of finding a timely solution and may save you (or a colleague) time in the future should the same or a similar error occur again.

Step 1: Identify the function or process that is not operating properly


Answer questions like those below to determine the function or process that is not operating properly in BusinessObjects Enterprise:
- Are you able to open the BusinessObjects Enterprise Central Management Console or BusinessObjects Enterprise InfoView from the Start Menu?
- Are you able to view InfoView or the CMC?
- Are you able to log onto the CMS?
- Can you navigate through folders and see a listing of the reports in InfoView?
- Are you able to successfully schedule a report?
- Are you able to view successful instances of any report or Web Intelligence document using all possible viewer types?
- Are you able to view objects on demand using all possible viewer types?

Step 2: Enumerate possible sources of error. Identify the process flow that details which servers are involved in the function or process that is not operating
Once you have determined the function that isn't operating properly, find the associated process flow for that function. A collection of the more common process flows can be found in the BusinessObjects Enterprise XI 3.0: Administering Servers - Windows course. Use these process flows to determine which server or component is involved.

Step 3: Narrow down the possible sources of error and analyze.


Follow the process flow to the server where the point of failure occurs. You may be able to temporarily rule out the servers that precede the point of failure. Devise and execute tests to verify which servers are functioning properly. Test another process flow that also uses the suspected servers to eliminate possible sources of error. Target your efforts on the most likely source(s) of error. The following is a partial list of exercises you may want to employ:
- Examine the server settings. Are they correct in the CMC?
- Does the Windows file directory structure appear to have changed? For example, are files renamed, moved, or missing?
- Is the ODBC data source connected to the database?
- Are you able to refresh a Web Intelligence document within the CMC or InfoView?


- Are you able to refresh a report in Crystal Reports Designer?
- Use the Application Event Log, System Event Log, and trace log files to help you identify the cause of the issue.

Activity: Troubleshooting BusinessObjects Enterprise deployments


Objective
Once you have built your systems, it is time to test whether you can correct problems when they arise. Layout of the machines:
- Machine 1: Web Server
- Machine 2: Web Application Server
- Machine 3: CMS1, Web Intelligence Processing Server, CR Job Server, CR Processing Server, CR Cache Server, Web Intelligence Job Server, Desktop Intelligence Cache Server, Input FRS1, Output FRS1
- Machine 4: CMS2, Web Intelligence Processing Server, CR Job Server, CR Processing Server, Desktop Intelligence Processing Server, Input FRS2, Output FRS2

The same problem will be introduced for all the groups within the class. Groups will be asked to leave the room while the instructor breaks the system. Upon return, the students will be presented with an error message and a brief explanation from the instructor (who will act as a troubled customer) as to what is happening on the system (what is not working). Before attempting to solve the problem, the groups should discuss which machine, and which server on that machine, is most likely causing the problem by analyzing the various process flows. Only then should the group start investigating the system in question by changing configuration settings. This is a group activity, so make sure that only one person from the group changes settings, and that all others in the group approve of the suggested correction as a unit. This should lead to further discussions within the group. You will be presented with three different scenarios. It is time for Scenario 1, so everyone please leave the room. The instructor will ask you to come back in a minute or two.

Scenario 1
Symptom: Try to use the URL http://Machine1/CmcApp or http://Machine2/InfoViewApp. You should receive errors signifying that you are unable to log onto the CMC or InfoView. Try to use the Crystal Reports, Desktop Intelligence, and Web Intelligence full clients and open a report/document stored on the BusinessObjects Enterprise system. Try to refresh it. Which server do you expect not to be working?


Scenario 2
Symptom: Can you log onto the CMC? Can you log onto InfoView? Can you navigate through the Enterprise folder structure? Can you view objects and instances?

Scenario 3
Symptom: Can you log onto the system? Can you view Crystal Report objects and instances? Can you view Web Intelligence objects? Can you view Desktop Intelligence objects?


Review: Deploying a system


1. What are the necessary steps to configure a web server (e.g. Apache) to redirect to the locally installed Tomcat web application server?
2. What are the steps that you would perform to cluster two CMS servers?
3. What are the steps that you would perform to configure redundancy for the Input and Output FRS?
4. Why is it sometimes necessary to configure the SIA to run under a domain account rather than the default System account?


Lesson summary
After completing this lesson, you are now able to:
- Deploy BusinessObjects Enterprise
- Use the wdeploy tool to deploy static and dynamic content
- Configure BusinessObjects Enterprise
- Cluster multiple CMS machines
- Configure an Active/Passive FRS
- Configure processing tier servers
- Troubleshoot BusinessObjects Enterprise


Lesson 6

Content Management
Once you have deployed a BusinessObjects Enterprise system you need to design and implement a content management plan. After completing this unit, you will be able to:
- Design a content management plan
- Design an instance management plan
- Manage a system auditing plan
- Understand the algorithm that governs when content is migrated between different deployments
- Manage the Life Cycle Management tool
- Manage Federation services


Designing a secured content management plan


BusinessObjects Enterprise content management provides the ability to control which users have access to objects and what level of access users have to objects. After completing this unit, you will be able to:
- Understand what is involved in designing a content management plan
- Create a content management plan
- Create a logical content plan
- Understand and plan delegated administration
- Manage security across multiple sites

What is a content plan?


When you are designing a content plan for your BusinessObjects Enterprise system, you need to cover certain points that can be thought of as a model. The content management model involves:
- Adding any new BusinessObjects Enterprise users and groups.
- Adding any new third-party system users and groups.
- Setting up folder structures to organize content.
- Adding content to the folders.
- Determining where to publish which objects.
- Setting user and group access levels on folders and objects.
- Setting instance limits on the folders and objects.
- Setting up user and group access to Business Views and universes.
- Creating and assigning categories for objects.
- Configuring security on multiple sites to enable cross-site replication of content.

The content is the focus point of the content management model.

Content management considerations


The primary purpose of your BusinessObjects Enterprise system is to manage, process, and distribute your content in a secure manner. It should come as no surprise, then, that in designing a content management plan, you start with an analysis of your organization's content. The content management process involves:
- Analyzing the content to be stored in BusinessObjects Enterprise.
- Setting up folder structures to organize the content.
- Adding the content to the folders.
- Setting group and user access levels on the folders and objects.

A thorough content management plan will identify:
- The types of objects managed by the system and the folder structure to contain the objects.


- The number of objects that will be managed by the system.
- The number of users accessing the system and the group structure to contain the users.
- Which users require access to each object in the system and the level of access required.
- How long historical instances will be kept in the system.
- Whether your organization has existing Windows NT, Windows Active Directory, or LDAP authentication systems that can be used to import users and groups into BusinessObjects Enterprise.
- The corporate categories used to organize the objects.
- Whether you will be using Business Views to deliver additional row and column security during report processing or viewing.
- The method for documenting the content management plan.

Analyzing stakeholder needs


The first stage of planning is to analyze stakeholder requirements. The results provide the information needed to create the content management plan. Your stakeholders are the people affected by the implementation of your BusinessObjects Enterprise system. Stakeholders may include:
- Users: people who are consumers of reports and information.
- Business managers: people who need to see the business value of the software implementation as it translates to increased employee productivity and return on investment (ROI).
- IT/IS: people who support and maintain the software implementation.

Based on this analysis, you will be able to plan a content management strategy that benefits your stakeholders by:
- Increasing usability: users will be able to easily access the information they require, which will increase their effectiveness using the system.
- Increasing user adoption: increased usability will make it easier for users to commit to using the system, which will help improve ROI.
- Reducing implementation time: proper planning reduces implementation challenges and helps to create a system that requires minimal maintenance over time.
- Reducing user support time: increased usability of the system helps reduce the amount of time needed to support users.
- Increasing ROI: increased usability, increased user adoption, reduced implementation time, and reduced user support time all work together to increase the total ROI of the software implementation.

Creating a content management plan


In creating a content management plan, you will need to develop a blueprint that identifies the logical requirements necessary for your deployment. This content plan model can be used to determine which users have common content needs. Logical requirements often align with business organization requirements. For example, members of a department may require access to the same reports, members of a project team may require access to the same reports, people with common job functions may require access to the same reports, and so on.

The BusinessObjects Enterprise security model uses a series of rights to manage user access to secure content and to manage users' access to folders, categories, and objects. When granted, each right provides a user or group with permission to perform a particular action. Understanding the BusinessObjects Enterprise security model will enable you to map out a content management plan for your organization. This strategy will outline the objects to be published to BusinessObjects Enterprise, the users and groups who have access to the objects, and the level of access that users need.

Creating a logical content plan


A logical content plan involves mapping out who needs access to what content, and then organizing users and content based on those needs. This is accomplished by:
- Creating a logical folder structure.
- Creating logical groups.
- Specifying group access levels for the folders.
- Creating and assigning objects to categories.

Creating a folder structure and organizing objects


The first step in setting up a content management plan is to assess the content that will be added to the system. This ensures you organize the content according to the users accessing it, and this organization will in turn determine the folder structure to create. Some common organizational groupings are:
- Business unit, department, or team.
- Country or region.
- Customer.

For example, if your Marketing department has a number of reports to manage in BusinessObjects Enterprise, you might organize these reports in a Marketing folder. If the Marketing department has reports for its North America region and other reports for its Europe region, you might organize these reports into Marketing North America and Marketing Europe folders.

Creating a group structure and organizing users


Once you have created a folder structure and have logically organized your objects, plan a group structure that will enable you to manage user access to the content most efficiently. The group structure often mirrors the folder structure. For example, if you have users in your Marketing department who require access to the reports in the Marketing folder, you can organize these users into a Marketing group and set access rights for the Marketing group on the Marketing folder.
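The folder-to-group mapping described above can be sketched as a small data model. This is a hypothetical illustration only: the folder paths, group names, and access levels are invented, and nothing here is a BusinessObjects API.

```python
# Hypothetical logical content plan recorded as plain data: each folder
# path maps to the groups granted access there. All names are invented.

CONTENT_PLAN = {
    # folder path             -> {group name: access level granted}
    "Marketing":                {"Marketing": "View"},
    "Marketing/North America":  {"Marketing NA": "Schedule"},
    "Marketing/Europe":         {"Marketing EU": "Schedule"},
}

def groups_with_access(folder):
    """Collect grants on a folder, including grants made on any ancestor
    folder (mirroring how rights inherit down the folder tree)."""
    grants = {}
    parts = folder.split("/")
    for i in range(1, len(parts) + 1):
        grants.update(CONTENT_PLAN.get("/".join(parts[:i]), {}))
    return grants
```

In this sketch, a user in the regional Marketing EU group sees both the department-wide grant inherited from the Marketing folder and the regional grant on the subfolder.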


Designing and deploying a solution: Learners Guide

Creating corporate categories


Once you have defined your folder structure, group structure, and object security for objects in the system, you should define any corporate categories that need to be set up in the system. Objects can belong to multiple categories. Because objects do not inherit access rights from categories, a user always has the same access rights for an object regardless of which category the user is viewing the object from. You may need to create special groups to restrict access to particular categories. To reduce the number of groups that you have to maintain in the system, you should determine if any existing groups can be used for category access before creating groups specifically for this purpose.

Delegated administration
Typically, a company's IT department or its system Administrators are responsible for managing the entire BusinessObjects Enterprise system. This can become a drain on IT resources. Many companies would like to be able to distribute some of the administration responsibilities to the various business units and departments that use the BusinessObjects Enterprise system. Delegated administration in BusinessObjects Enterprise enables system Administrators to grant limited sets of administrative rights to various groups of Administrators while still restricting access to the entire system. Here are some examples of the use of delegated administration:

Example 1
The IT department wants to grant responsibility for managing folders and reports to the individual departments. Each department will have a number of folder Administrators who will require Full Control access to their folder and the objects within their folder. The folder Administrators will require access to grant rights to users in their department without being able to grant rights to users in other departments.

Example 2
Departmental folder Administrators understand the business requirements of their users. The IT department does not want to have to create the calendars and events required by the departments. The IT department wants to grant responsibility for creating and managing calendars and events to the departmental folder Administrators.

Example 3
A BusinessObjects Enterprise deployment has servers in multiple geographic locations. Each location has its own IT support staff. The regional support staff require the ability to start and stop the servers in their local office without having the ability to stop and start servers in other offices.

Content Management: Learners Guide

Using rights to delegate administration


Assuming that your group structure and public folder structure align with your delegated-administration security structure, you should grant your delegated administrator rights to entire user groups, but grant the delegated administrator less than full rights on the users he controls. For example, you might not want the delegated administrator to edit user attributes or reassign them to different groups. The following table summarizes the rights required for delegated administrators to perform common actions:
Action: Create new users.
Rights required: Add right on the top-level Users folder.

Action: Create new groups.
Rights required: Add right on the top-level User Groups folder.

Action: Delete any controlled groups, as well as individual users in those groups.
Rights required: Delete right on relevant groups.

Action: Delete only users that the delegated administrator creates.
Rights required: Owner Delete right on the top-level Users folder.

Action: Delete only users and groups that the delegated administrator creates.
Rights required: Owner Delete right on the top-level User Groups folder.

Action: Manipulate only users that the delegated administrator creates (including adding those users to groups).
Rights required: Owner Edit and Owner Securely Modify Rights rights on the top-level Users folder.

Action: Manipulate only groups that the delegated administrator creates (including adding users to those groups).
Rights required: Owner Edit and Owner Securely Modify Rights rights on the top-level User Groups folder.

Action: Modify passwords for users in their controlled groups.
Rights required: Edit Password right on relevant groups.

Action: Modify passwords only for principals the delegated administrator creates.
Rights required: Owner Edit Password right on the top-level Users folder, or on relevant groups. Note: Setting the Owner Edit Password right on a group takes effect on a user only when you add the user to the relevant group.

Action: Modify user names, descriptions, and other attributes, and reassign users to different groups.
Rights required: Edit right on relevant groups.

Action: Modify user names, descriptions, and other attributes, and reassign users to different groups, but only for users that the delegated administrator creates.
Rights required: Owner Edit right on the top-level Users folder, or on relevant groups. Note: Setting the Owner Edit right on relevant groups takes effect on a user only when you add the user to the relevant group.

Choosing between the "Modify the rights users have to objects" options


When you set up delegated administration, give your delegated administrator rights on the principals he will control. You may want to give him all rights (Full Control); however, it is good practice to use advanced rights settings to withhold the Modify Rights right and give your delegated administrator the Securely Modify Rights right instead. You may also give your administrator the Securely Modify Rights Inheritance Settings right instead of the Modify Rights Inheritance Settings right. The differences between these rights are summarized below.

Modify the rights users have to objects


This right allows a user to modify any right for any user on that object. For example, if user A has the View objects and Modify the rights users have to objects rights on an object, user A can then change the rights for that object so that he or any other user has full control of it.

Securely modify the rights users have to objects


This right allows a user to grant or deny only the rights he is already granted. For example, if user A has the View objects and Securely modify the rights users have to objects rights, user A cannot give himself any more rights, and can grant or deny to other users only these two rights (View and Securely Modify Rights). Additionally, user A can change rights only for users on objects for which he has the Securely Modify Rights right. These are the conditions under which user A can modify the rights for user B on object O:
- User A has the Securely Modify Rights right on object O.
- Each right or access level that user A is changing for user B is granted to user A.
- User A has the Securely Modify Rights right on user B.
- User A has the Assign Access Level right on the access level that is changing for user B.

Scope of rights can further limit the effective rights that a delegated administrator has. For example, a delegated administrator may have the Securely Modify Rights and Edit rights on a folder, but the scope of these rights may be limited to the folder only and not apply to its sub-objects. In that case, the delegated administrator cannot grant these rights to another user on one of the folder's sub-objects. In addition, a delegated administrator is restricted from modifying rights on groups for principals on whom he does not hold the Securely Modify Rights right. This is useful, for example, if you want to have two delegated administrators for a group, but you don't want one to be able to deny the other access to the group. The Securely Modify Rights right ensures this, since delegated administrators generally won't have the Securely Modify Rights right on each other.
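The conditions under which user A may change a right for user B can be modeled with a small function. This is an illustrative sketch only: the `granted` lookup table is invented, it covers individual rights (the Assign Access Level condition for access levels is noted but not modeled), and it is not the real CMS security store.

```python
# Model of the "Securely Modify Rights" conditions described above.
# `granted` maps (principal, target) -> set of rights the principal holds.

SMR = "Securely Modify Rights"

def can_modify_right(granted, user_a, user_b, obj, right):
    """True if user A may grant or deny `right` for user B on `obj`."""
    on_obj = granted.get((user_a, obj), set())
    on_user_b = granted.get((user_a, user_b), set())
    return (
        SMR in on_obj          # condition: SMR on object O
        and right in on_obj    # condition: A holds the right being changed
        and SMR in on_user_b   # condition: SMR on user B
        # (for access levels, A would also need Assign Access Level)
    )

# Invented example grants: A holds SMR and View on object O, and SMR on B.
GRANTS = {
    ("A", "O"): {SMR, "View"},
    ("A", "B"): {SMR},
}
```

With these grants, A can pass on View for B on O, but cannot hand out Full Control, because A does not hold it himself.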

Securely modify rights inheritance settings


This right allows a delegated administrator to modify inheritance settings for other principals on the objects that the delegated administrator has access to. To successfully modify the inheritance settings of other principals, a delegated administrator must have this right on the object and on the user accounts for the principals.

Owner rights
Owner rights are rights that apply only to the owner of the object on which rights are being checked. In BusinessObjects Enterprise, an owner is the principal who created an object; if that principal is ever deleted from the system, ownership reverts to the Administrator. Owner rights are useful in managing owner-based security. For example, you may want to create a folder or hierarchy of folders in which various users can create and view documents, but can modify or delete only their own documents. In addition, owner rights are useful for allowing users to manipulate instances of reports they create, but not others' instances. In the case of the Schedule access level, this permits users to edit, delete, pause, and reschedule only their own instances. Owner rights work similarly to their corresponding regular rights; however, owner rights are effective only when the principal has been granted the owner right and the corresponding regular right is denied or not specified.
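The interaction between an owner right and its regular counterpart can be sketched as a resolution rule. This is a simplified model of the behavior described above, not the product's rights engine; the "granted"/"denied"/None values are an invented encoding.

```python
# Sketch of owner-right resolution: an owner right takes effect only for
# the object's owner, and only when the regular right is denied or unset.

def effective(regular, owner_right, principal_is_owner):
    """Resolve one right for a principal on one object.
    regular / owner_right: "granted", "denied", or None (unspecified)."""
    if regular == "granted":
        return True
    if principal_is_owner and owner_right == "granted":
        return True    # owner right fills in for a denied/unset regular right
    return False       # rights not granted are effectively denied
```

In this model, Owner Edit granted on a folder lets users edit their own instances even though the regular Edit right is unspecified, while non-owners get nothing from the owner right.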

Creating groups for row and column security


When creating groups for row and column security, you may decide to start by creating departmental groups (for example, an HR group and a Finance group) and functional groups (for example, Viewer, Advanced Viewer, Scheduler, Designer). The departmental groups govern access to departmental folders (for example, the HR folder and the Finance folder). The functional groups control what a user can do within the departmental folders once access is granted. In this model a user typically belongs to one departmental group and one functional group. Alternatively, you may opt to create functional groups that have differing rights to the various departmental folders (for example, HR-Viewer, HR-Scheduler, HR-Designer, Finance-Viewer, Finance-Scheduler). In this content management model a user typically belongs to only one group. For an exploration of various security models, refer to the Content Management activity later in this lesson.

BusinessObjects Enterprise enables additional row and column security to be implemented during report processing or viewing through the use of Universes, Business Views, and Processing Extensions. Business Views and Universes are metadata layers that provide a definition of a data source. Within this definition, permissions to access the data are granted to BusinessObjects Enterprise users and groups. Reports that are designed based on Business Views and Universes respect the permissions defined in them. Business Views and Universes may be used to implement row-level and/or column-level security. Processing Extensions are implemented in the form of a custom DLL. A Processing Extension resides on the processing server and applies additional record selection when a report is processed or viewed. Processing Extensions can be used to implement row-level security.

Row-level security is equivalent to additional record selection applied to the report data. Based on the user or group membership of the user who is scheduling or viewing the report, additional record selection is applied to the report data. If you are implementing row-level security, it is a good idea to design your reports to use report bursting indexes on the fields that will be used in the record selection. This improves the speed of the record selection because the data in those fields is indexed within the report.
Column level security controls which fields a user has the ability to see based on their user or group membership. If a user is not able to see a particular field in a report, the field appears with null values. Column level security can only be implemented through Business Views.
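Row-level and column-level security can be pictured as filters applied at view time, keyed on the viewer's group. The toy model below is purely illustrative: the table, group names, and filter rules are invented, and a real deployment expresses these rules in a Universe, Business View, or Processing Extension rather than in report-side code.

```python
# Toy model of row- and column-level security applied at view time.

SALES = [
    {"region": "NA", "amount": 100},
    {"region": "EU", "amount": 200},
]

ROW_FILTERS = {  # row-level security: extra record selection per group
    "Marketing NA": lambda row: row["region"] == "NA",
    "Marketing EU": lambda row: row["region"] == "EU",
}

HIDDEN_COLUMNS = {  # column-level security: restricted fields appear as null
    "Marketing NA": {"amount"},
}

def visible_data(group):
    """Apply the group's record selection, then null out hidden fields."""
    keep = ROW_FILTERS.get(group, lambda row: False)   # deny by default
    hidden = HIDDEN_COLUMNS.get(group, set())
    return [
        {k: (None if k in hidden else v) for k, v in row.items()}
        for row in SALES if keep(row)
    ]
```

Note how a hidden column still appears in the result, but with null values, mirroring the behavior described for column-level security.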

Implementing row and column security using Business Views and Universes
In order to effectively implement row and column security using Business Views or Universes, you need to be able to plan your users' report and data access requirements. You need to determine:
- What information users require from their reports.
- Which users require access to restricted report data at design, processing, or view time.

This information determines:
- The required reports.
- The Business Views or Universes required in order to design the reports.
- The groups that need to be created in BusinessObjects Enterprise to support the data access requirements.

Note: As the implementation of Processing Extensions requires custom application development, Processing Extensions may be implemented in a variety of ways. To learn more about creating and implementing Processing Extensions, refer to the developer documentation included with BusinessObjects Enterprise.

Managing security rights across multiple sites


Security is important in any BusinessObjects Enterprise deployment. However, because Federation replicates content between separate deployments and requires collaboration with other administrators, it is necessary to understand how security behaves before you begin using Federation. Administrators in separate deployments must coordinate with each other before enabling Federation. Once content is replicated, administrators on either site can change, modify, and administer it. For these reasons, it is important that you maintain communication with the other administrators.

Rights required on the Origin site


This section describes the actions to the Origin site and the required rights of the user account connecting to the Origin Central Management Server (CMS). This is the account you enter in the Remote Connection object on the Destination site.

One-way replication
Action: To perform replication only from the Origin site to the Destination site.
Minimum rights required:
- View and Replicate rights on all objects to replicate.
- View right on the Replication List.

Note: View and Replicate rights are required on all objects being replicated, including objects that are automatically replicated by dependency calculations.

Two-way replication
Action: To perform replication from the Origin site to the Destination site, and from the Destination site to the Origin site.
Minimum rights required:
- View and Replicate rights on all objects to replicate.
- View right on the Replication List.
- Modify Rights right on user objects, to replicate any password changes.
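A pre-flight check against the minimum-rights lists above could be sketched as follows. This is a hypothetical helper: the rights lookup and object names are invented, and there is no such Federation API in the product.

```python
# Hypothetical pre-flight check of the connection account's rights on the
# Origin site before running a replication job.

NEEDED = {
    "one-way": {"View", "Replicate"},
    "two-way": {"View", "Replicate"},  # plus Modify Rights on user objects
}

def missing_rights(mode, rights_of, objects):
    """Map each object to the rights the connection account still lacks,
    including the View check on the Replication List itself."""
    gaps = {}
    for obj in objects:
        lack = NEEDED[mode] - rights_of(obj)
        if lack:
            gaps[obj] = lack
    if "View" not in rights_of("Replication List"):
        gaps["Replication List"] = {"View"}
    return gaps

# Invented example: the account lacks Replicate on one document.
EXAMPLE_RIGHTS = {
    "Doc A": {"View", "Replicate"},
    "Doc B": {"View"},
    "Replication List": {"View"},
}
```

Running such a check before scheduling the Replication Job surfaces exactly which objects would fail replication for lack of rights.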


Scheduling
Action: To allow remote scheduling to occur on the Origin site from the Destination site. Minimum rights required: Schedule right for all objects that will be remotely scheduled.

Rights required on the Destination site


This section describes actions to the Destination site and the required rights of the user account that is running the Replication Job. This is the account of the user who created the Replication Job. Note: Like other schedulable objects, you can schedule the Replication Job on behalf of someone else.

All Objects
Action: To replicate objects, regardless of one-way or two-way replication.
Minimum rights required:
- View, Add, Edit, and Modify Rights rights on all objects.
- For user objects, Modify User Password rights in addition to the above.

First Replication
Action: The first time the Replication Job is run. This scenario is different from the following scenarios, as no objects exist on the Destination site yet. Therefore, the user account the Replication Job runs under must have specific rights on all the top-level folders and default objects that will have content added to them.
Minimum rights required:
- View, Add, Edit, and Modify Rights rights on all top-level folders.

Federation specific objects


This section details scenarios that are specific to Federation that you may encounter.

Object Cleanup
Object Cleanup occurs only on the Destination site.
Action: To delete objects on the Destination site.
Minimum rights required:
- Delete right, for the account that the Replication Job runs under, on all objects that may potentially be deleted.

Enabling two-way replication, with no modifications on the Origin site


In certain circumstances you may choose two-way replication but not want some objects on the Origin site to be modified, even if they are changed on the Destination site. Reasons for this include: the object is special and should be changed only by users on the Origin site; or you want to enable Remote Scheduling but do not want changes propagated back.


To safeguard against undesired changes being sent to the Origin site, deny the Edit right for the user account used to connect in the Remote Connection object. Note: For Remote Scheduling, you may create a job that handles only objects for Remote Scheduling. However, in this case ancestor objects are still replicated, including the report, the folder containing the report, and the parent folder of that folder. Any changes made on the Destination site are sent back to the Origin site, and changes made on the Origin site are sent to the Destination site.

Disabling cleanup for certain objects


When certain objects are replicated from the Origin site, you may not want them deleted from the Destination site if they are deleted on the Origin site. You can safeguard against this through rights. For instance, choose this option when users on the Destination site start using an object independently of users on the Origin site. Example: for a replicated universe where users on the Destination site create their own local reports, you may not want to lose the universe on the Destination site if it is deleted from the Origin site. To disable cleanup on certain objects, deny the Delete right for the user account the Replication Job runs under on the objects you wish to keep.

Replicating security on an object


To keep security rights for an object, you must replicate both the object and its user or group at the same time. Otherwise, the user or group must already exist on the site you are replicating to, with identical unique identifiers (CUIDs) on each site. If an object is replicated but the user or group is not replicated and does not already exist on the destination, the rights for that user or group are dropped. For example, Group A and Group B have rights assigned on Object A: Group A has View rights and Group B has Deny View rights. If the Replication Job replicates only Group A and Object A, then on the Destination site Object A will have only the View rights for Group A associated with it.

Note: When you replicate an object, there is a potential security risk if you do not replicate all groups with explicit rights on the object. The previous example highlights this risk. If User A belongs to both Group A and Group B, the user does not have permission to view Object A on the Origin site. However, User A is replicated to the Destination site because he belongs to Group A, which is replicated. Once there, because Group B was not replicated, User A has the right to view Object A on the Destination site, even though he cannot view Object A on the Origin site.

Objects that reference objects not included in a Replication Job, and not already on the Destination site, are recorded in the job's log file, which shows that the object referenced an unreplicated object and that the reference was dropped.
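The rights-drop rule in the Group A / Group B example can be modeled in a few lines. This is an illustrative model of the rule only, not the Federation mechanism itself; the group names and rights are from the example above.

```python
# Model of the rights-drop rule: a rights entry survives replication only
# if its principal travels in the same job, or already exists (with the
# same CUID) on the Destination site.

def replicated_rights(object_rights, in_job, already_on_destination=()):
    """Return the rights entries that survive on the Destination site."""
    keep = set(in_job) | set(already_on_destination)
    return {p: r for p, r in object_rights.items() if p in keep}

# Rights on Object A, as in the example above.
rights_on_a = {"Group A": "View", "Group B": "Deny View"}
```

Replicating only Group A drops Group B's Deny View entry, which is exactly the security gap the note warns about; replicating (or pre-creating) Group B preserves it.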


Security on an object for a particular user or group is only replicated from the Origin site to the Destination site. You may set security on replicated objects on the Destination site, but those settings will not be replicated to the Origin site.

Replicating security on an object using access levels


As in the previous section, rights defined through access levels are kept only if the object, the user or group, and the access level are replicated at the same time, or already exist on the site you are replicating to. Objects that assign explicit rights to a user or group not included in the Replication Job, and not already on the Destination site, are recorded in the job's log file, which shows that the object had rights assigned that were not replicated and that those rights were dropped. In addition, you can choose to automatically replicate access levels that are used on an imported object; this option is available on the Replication List. Note: Default access levels are not replicated, but references to them are maintained.

Documenting your content management plan


When planning your content management strategy, it is a good idea to document your plan so that your system is easy to implement and maintain long-term. A well-documented content management plan helps to ensure that other users who may be administering BusinessObjects Enterprise are able to follow the set of standards that you have identified in your plan. Whether you use a spreadsheet, word processor, or other application to document your plan, a good content management plan should identify:
- Naming conventions for folders, objects, groups, and user accounts.
- Standards for organizing content into folders, or standards for creating folder structures (for example, the use of functional folders for implementing functional security).
- Standards for creating groups and subgroups and for adding users to groups.
- Processes and standards for using mapped third-party authentication groups (Windows NT, Windows Active Directory, LDAP) to bring groups and users into BusinessObjects Enterprise.
- Global access level settings.
- Access levels to be mapped to each folder and/or object for users and groups.
- The use of Business Views or Processing Extensions to implement additional row and column security.
- Who is responsible for implementing, maintaining, and supporting the content management plan.
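One way to keep such a documented plan enforceable is to record the naming standards as data and check candidate names against them. The conventions below are invented examples for illustration, not recommendations from the product documentation.

```python
import re

# Hypothetical naming conventions from a documented content management
# plan; the patterns themselves are made-up examples.
NAMING = {
    "folder": re.compile(r"^[A-Z][A-Za-z ]+$"),                        # e.g. "Marketing Europe"
    "group":  re.compile(r"^[A-Z][A-Za-z ]*-(Viewer|Scheduler|Designer)$"),  # e.g. "HR-Viewer"
}

def check_name(kind, name):
    """True if `name` follows the documented convention for `kind`."""
    return bool(NAMING[kind].fullmatch(name))
```

A short script like this can be run against an export of folder and group names to flag anything that drifts from the documented standards.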

Activity: Content management plan


You will find this activity on the resource CD in the Lesson6_Content_Management\01_Single_Deployment\01_Security folder.


The activity is called Content Management Plan Activity.doc and the access levels are in the Access Levels.xls file.


Designing an instance management plan


Designing an instance management plan allows you to determine how instances of report and program objects will be managed in your system. After completing this unit, you will be able to:
- Identify instance management considerations
- Set limits for instances to be stored in the system

Planning instance management


As scheduled reports and programs are processed in BusinessObjects Enterprise, instances are created and stored in the Output File Repository Server. These instances contain your company's data, so it is important to ensure they are backed up or replicated regularly. There are several benefits to creating an instance management strategy:
- By controlling the number of kept instances, users do not have to search through hundreds of instances when looking through a report's history.
- You can minimize the amount of storage space used to store reports that are no longer required.
- You can reduce the number of reports that you have to include in your system backup.

A thorough instance management plan identifies:
- The number of report instances created on a daily, weekly, monthly, and yearly basis.
- The length of time instances should be kept in the system.
- Legal requirements for keeping instances of particular reports (for example, reports containing company financial data).
- The size of instance files and the amount of disk space required to store instances.
- The backup or replication system and schedule, as well as the owner who is responsible for the system backup.
- The method for documenting the instance management plan.

When planning your instance management strategy, it is a good idea to document your plan so that your system is easy to implement and maintain long-term.

Setting instance limits


On the Limits page, you can set limits for the selected object and its instances. You set limits to automate regular clean-ups of old BusinessObjects Enterprise content. At the object level, you can limit the number of instances that remain on the system for the object or for each user or group; you can also limit the number of days that an instance remains on the system for a user or group. In addition to setting limits for objects from the "Folders" management area, you can also set limits at the folder level. When you set limits at the folder level, these limits are in effect for all objects that reside within the folder (including any objects found within the subfolders). Note: When you set limits at the object level, the object limits override the limits set for the folder; that is, the object does not inherit the limits of the folder.
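The override rule in the note above can be sketched as a small lookup. The folder paths and numbers here are invented for illustration; this is a model of the rule, not the server's logic.

```python
# Sketch of limit resolution: limits set on an object replace (rather
# than combine with) the limits set on its folder.

FOLDER_LIMITS = {"Sales": {"max_instances": 10}}
OBJECT_LIMITS = {"Sales/Annual Report": {"max_instances": 100}}

def limits_for(object_path):
    """Resolve the instance limits that apply to one object."""
    if object_path in OBJECT_LIMITS:             # object-level limits win
        return OBJECT_LIMITS[object_path]
    folder = object_path.rsplit("/", 1)[0]
    return FOLDER_LIMITS.get(folder, {"max_instances": None})  # None = no limit
```

So an object with its own limits keeps them even when its folder is stricter, while every other object in the folder picks up the folder-level setting.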

To set limits for instances


1. In the Folders management area of the CMC, select an object.
2. Click Actions and choose Limits. The "Limits" dialog box appears.
3. Make your settings according to the types of limits you want to set for your instances. The options are as follows:
   - Delete excess instances when there are more than N instances of an object: To limit the number of instances per object, select this check box, then type the maximum number of instances that you want to remain on the system. (The default value is 100.)
   - Delete excess instances for the following users/groups: To limit the number of instances for users or groups, click Add in this area, select from the available users and groups, press > to add them to your list, and click OK. Type the maximum number of instances in the Instance Limit column. (The default value is 100.)
   - Delete instances after N days for the following users/groups: To limit the number of days that instances are saved for users or groups, click Add in this area, select from the available users and groups, press > to add them to your list, and click OK. Type the maximum age of instances in the Maximum Days column. (The default value is 100.)
4. Click Update.
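The clean-up these settings drive can be sketched as a function over an instance history. This is an illustrative model under invented data, not the server's actual clean-up logic.

```python
# Sketch of instance clean-up: drop instances beyond a count ceiling and
# instances older than an age ceiling, as the Limits options describe.

def instances_to_delete(ages_newest_first, max_count=None, max_days=None):
    """ages_newest_first: instance ages in days, newest first.
    Returns the indexes of instances the clean-up would remove."""
    doomed = set()
    if max_count is not None:
        doomed.update(range(max_count, len(ages_newest_first)))  # excess count
    if max_days is not None:
        doomed.update(i for i, age in enumerate(ages_newest_first)
                      if age > max_days)                          # too old
    return sorted(doomed)
```

Either ceiling alone triggers deletions, and an instance that violates both is of course removed only once.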

Activity: Instance management


Objectives
Your analysis of the Jade Publishing business requirements has identified the following instance management requirements:
- All instances of Sales reports and documents must be kept.
- All other reports and documents should keep 10 instances.

Instructions
1. Implement an instance management plan for Jade Publishing.
2. Design a test to prove that your instance management implementation works.


Designing a system auditing plan


Auditing provides you with a detailed historical view of user and object interaction and system usage. Auditing allows you to fine-tune system performance, retire unused reports, and provide business units with a comprehensive snapshot of their usage patterns. After completing this unit, you will be able to: Design and implement a system auditing plan

Designing and implementing system auditing


BusinessObjects Enterprise system auditing enables tracking of system usage. While system auditing can provide valuable information about your system, it adds load to the system. By creating a system auditing plan, you can ensure that the information you require is collected without adding unnecessary load. A thorough system auditing plan identifies:
- The system usage metrics to be monitored.
- The number and specifications of required system audit reports.

System audit data is initially collected by the servers performing the auditable actions and is stored in a local log file on each server. The CMS manages the central audit database: approximately every five minutes, the CMS polls the server audit log files and collects audit data to be stored in the audit database. Consequently, system audit reporting shows historical information, not real-time information. Note: For more information on managing system auditing, refer to the Administration Guide or the 'Administering Servers - Windows' course.
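The collection flow just described can be pictured with a toy polling pass. The server names, record layout, and in-memory "database" are all invented; this only illustrates why audit reporting trails real time.

```python
# Toy model of audit collection: servers append records to local log
# files, and the CMS periodically drains those files into the central
# audit database.

AUDIT_DB = []                     # stands in for the central audit database
SERVER_LOGS = {                   # stands in for per-server local log files
    "crystal_proc": [{"event": "view", "ts": 100}],
    "webi_proc":    [{"event": "refresh", "ts": 160}],
}

def cms_poll_once():
    """One CMS polling pass (roughly every five minutes in the real system)."""
    for server, log in SERVER_LOGS.items():
        while log:
            record = log.pop(0)
            record["server"] = server   # tag each record with its origin
            AUDIT_DB.append(record)

cms_poll_once()
```

Because records only reach AUDIT_DB on a polling pass, any report built on it reflects activity up to the last poll, not the current moment.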

Activity: System auditing


Objective
Implement a system auditing plan. From the case study: Jade Publishing are keen to see the ratio of usage between Desktop Intelligence and Web Intelligence. For this, they have asked if you can audit all activity for these products.

Instructions
1. Implement a system auditing plan for Jade Publishing.
2. Design a test to prove that your system auditing implementation works.


Managing Content in Multiple Deployments


- Define the key terms in content management in multiple deployments
- Explain how to manage content in different deployments
- Describe the different tools to manage content in deployments

Understanding the key terms in content management


CUID
A CUID (cluster unique identifier) is an ID that is assigned to an object at its creation in BusinessObjects Enterprise XI 3.1 and does not change even if the object is moved to an entirely new cluster. When an object is migrated to another system (for example, from a BusinessObjects 5/6.x system to BusinessObjects XI 3.1), it is assigned a CUID based on the properties of the object as they are defined by the source system.

Binary Information
The binary information in a .rep file stores the CUID of the universe on which it is based. The relationship between a .rep file and a universe is also stored in the system database, and this relationship can be linked through the short universe name.

Info-Objects
Info-Objects can be personal inboxes, or any managed components on the BI Platform. They expose a standard set of information and interfaces and encapsulate the specific details of each managed component. All components that are used and managed by the BI Platform are represented as Info-Objects. Info-Objects store information such as ID number, Info-Object type, and scheduling information that allows the BI Platform to manage each component.

Upgrade
The term upgrade, when used in the context of Business Objects software, refers to the movement between versions on the current BusinessObjects Enterprise XI architecture. An example of an upgrade would be from BusinessObjects Enterprise XI to BusinessObjects Enterprise XI R2. An upgrade may involve the replacement of one version with another, or installation alongside other versions with content moving between the two.

Migration
The term migration, when used in the context of Business Objects software, refers to the movement from the legacy Business Objects V5 and V6 architecture to the current BusinessObjects Enterprise XI architecture. An example of a migration would be from BusinessObjects Enterprise 6.5.1 to BusinessObjects Enterprise XI 3.0. A migration always involves the installation of the new architecture alongside the legacy architecture. The migration involves the transfer and/or conversion of legacy content from the legacy deployment to the new deployment.

186

Designing and deploying a solutionLearners Guide

Promotion
Promotion is defined as an activity to move Info-Objects with dependencies from a source live system to a destination live system, where both the environments are on identical versions of the product.

Object ID
An ID that is unique to a given cluster, assigned to a specific object. When an object is moved to a new cluster, a new object ID is assigned. The object ID is visible in the File Repository Server path of each report object. When importing an object from a source system, the imported object is given an object ID, which is always incremented in the CMS_INFOOBJECTS table. The object ID is never repeated within a CMS database (cluster). If the object is deleted from the destination and re-imported from the source, it is issued a new object ID but the same CUID it received during the first migration. Note: For internal queries, it is the object ID that is used, rather than the CUID, to locate the object's properties.
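The delete-and-re-import behaviour can be illustrated with a toy model of the CMS object table. The class, CUID value, and field names below are all illustrative, not the real CMS schema:

```python
import itertools

class FakeCMS:
    """Toy model of the CMS_INFOOBJECTS table: object IDs are cluster-local
    and ever-incrementing, while CUIDs travel with the object."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.objects = {}  # object_id -> cuid

    def import_object(self, cuid):
        object_id = next(self._ids)  # never reused within this cluster
        self.objects[object_id] = cuid
        return object_id

    def delete(self, object_id):
        del self.objects[object_id]

cms = FakeCMS()
first_id = cms.import_object("AcB1deFg")   # hypothetical CUID value
cms.delete(first_id)
second_id = cms.import_object("AcB1deFg")  # re-import: same CUID, new object ID
```

The point of the sketch is the asymmetry: the object ID counter only ever moves forward, while the CUID is supplied by the incoming object and survives the round trip.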

Managing the dependencies


Business Objects full-client and Web Intelligence documents depend on their data objects (the universe(s) and connection) to return data from the database. This is different from a typical Crystal report, where you only need a DSN or the native driver information to make a database connection. These universes and connections are objects in the system, and they must be imported together with the report objects to ensure the report can still connect once migrated. The dependencies are as follows:
Document objects depend on universe objects.
Universe objects depend on connection objects.
The respective details are stored in the objects. For example, the report stores the ID and name of the universe object it is built on, and the universe stores the ID and name of its default universe connection.
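The dependency chain described above can be sketched as a simple walk from document to universe to connection. Lookups here are by name for brevity, whereas the real system links objects by ID/CUID; all names are hypothetical:

```python
def collect_import_set(document, universes, connections):
    """Walk the chain document -> universe -> connection and return
    every object that must travel together in an import."""
    universe = universes[document["universe"]]
    connection = connections[universe["connection"]]
    return [document, universe, connection]

# Hypothetical objects mirroring the Sales/Bonus.rep example used later.
connections = {"SalesDSN": {"name": "SalesDSN"}}
universes = {"Sales": {"name": "Sales", "connection": "SalesDSN"}}
document = {"name": "Bonus.rep", "universe": "Sales"}

import_set = [obj["name"] for obj in collect_import_set(document, universes, connections)]
```

Importing the document without the other two members of this set would leave it unable to connect on the destination system.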


Differentiating BusinessObjects LifeCycle Manager, Import Wizard, and Federation


The Import Wizard
The Import Wizard is a locally installed Windows application that allows you to import most objects in the repository (including user accounts, groups, folders, universes, documents, and objects stored in Inbox and personal folders on a cluster server) to the new BusinessObjects Enterprise system. This tool is focused on migration (for example, from a BusinessObjects 5/6.x system to BusinessObjects XI Release 2, or from BusinessObjects XI Release 2 to BusinessObjects XI 3.0). However, this tool lacks provision for a centralized view and versioning.

BusinessObjects LifeCycle Manager


The LCM is a standalone web application that is integrated as another tab in the CMC. This tool is meant to replace the Import Wizard as the tool for managing promotions, whereas the Import Wizard focuses on migration.

Federation
Federation is a tool for business intelligence (BI) administrators to create a process for copying BI content from one system to another while keeping the systems synchronized. Federation is intended for a specific portion of the BI content, not for a full migration of a complete system to another.


Understanding the Import Wizard


After completing this unit, you will be able to:
Describe the roles of the Import Wizard.
Describe the difference between merging and updating.
Describe the difference between updating objects by name and updating by CUID.

The roles of the Import Wizard


BusinessObjects XI 3.0 imports most of the source environment using a single tool called the Import Wizard. The Import Wizard is a locally installed Windows application that allows you to import most objects in the repository (including user accounts, groups, folders, universes, documents, and objects stored in Inbox and personal folders on a cluster server) to the new BusinessObjects Enterprise system. The Import Wizard acts as a bridge between the source repository and target repository, or the CMS database, and File Repository servers.

Merging and updating


You can use the Import Wizard to indicate whether you prefer to merge or update data being migrated. To have a complete understanding of how merge and update work, you need to understand the source and destination object relationships, as well as how they are migrated using the Import Wizard.

How object dependencies are determined by the Import Wizard


The Import Wizard assigns a new CUID based on information from the source system repository. The Import Wizard uses an algorithm to calculate the CUID; it is not a random ID. The CUID is based on the ID and name properties of the object. The reason for this is that if the same object from BusinessObjects 5/6.x is imported into different BusinessObjects XI 3.0 clusters, or is migrated twice to the same cluster, it ends up with the same CUID. This reference is critical so that the Import Wizard can make accurate comparisons between objects in the source and destination systems during subsequent incremental migrations. The Import Wizard must be able to determine whether the BusinessObjects 5/6.x object it is currently trying to migrate is the same object that was migrated in a previous increment to BusinessObjects XI 3.1:
First, the connection is imported and a new CUID is assigned, based on the algorithm mentioned above.
Next, the universe is imported and given a new CUID based on the source system details. The relationship to the connection is maintained by replacing the source system details with the connection's CUID.
Finally, the document is imported and given a CUID based on the source system details. The binary is then opened to replace the source system universe details with the universe's CUID.
If during the previous step several universes with the same short name (and their connections) were added, during this last step the Import Wizard can determine which universe is the correct one. When the Import Wizard opens the document binaries to update the universe entries, it can find more detail about the correct universe. In addition to the name, the Import Wizard can now find the universe ID and the universe domain ID, and can therefore pick the correct universe to link to the document. Other universes with the same name are imported as well, but they are not linked to the document. The relationships are still part of the binaries and have been updated with the CUIDs. The Import Wizard then adds the new objects to the XI 3.0 system.
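The actual CUID algorithm is internal to the product and not published; the key property described above, that the same source ID and name always produce the same CUID, can be illustrated with an ordinary hash as a stand-in:

```python
import hashlib

def derive_cuid(source_id, name):
    """Deterministic stand-in for the Import Wizard's CUID calculation:
    identical source ID and name always yield an identical CUID.
    (SHA-1 here is purely illustrative, not the real algorithm.)"""
    digest = hashlib.sha1("{}:{}".format(source_id, name).encode("utf-8")).hexdigest()
    return digest[:24].upper()

# Migrating the same source object twice produces the same CUID, which is
# what lets later incremental imports match it up with the earlier copy.
cuid_first_run = derive_cuid(42, "Sales")
cuid_second_run = derive_cuid(42, "Sales")
cuid_other_object = derive_cuid(43, "Sales")
```

Any deterministic function of the two properties would serve the same purpose; what matters is that randomness plays no part.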

How the Import Wizard determines when to update dependent objects


In this scenario, answers to the following questions are examined:
What happens if one of the dependent objects already exists?
What happens if the BusinessObjects XI 3.1 objects have been updated?
In this example, the Sales universe and its connection are already imported. Now you need to migrate a document (Bonus.rep) that is based on the same universe. Note: The universe has been updated in the BusinessObjects XI 3.1 system, so its content is different from the original in the source system. The Import Wizard has already imported the objects and updated the relationships with CUIDs. Since you don't want the universe or connection to be updated, the Do not update objects option during migration is unchecked in the Import Wizard. You have a choice for how to deal with the changes made to the Sales universe in BusinessObjects XI 3.1: you can overwrite the changes with the content (or rights) of the source system, or keep the changes that were made in the destination system.
The Import Wizard compares the connections by CUID: they are the same, so the connection is not updated.
The Import Wizard compares the universes by CUID: they are the same, so the universe is not updated.
Comparing the two reports by CUID shows they are different, so the Bonus.rep report is added to the BusinessObjects XI 3.1 system.
Part of the document properties is the link to the Sales universe by the universe's CUID. This is where the CUID-creation algorithm is important: if the Import Wizard had created a different CUID for the source system's Sales universe and added that CUID to the properties of the Bonus.rep report, the report would not be able to connect to the Sales universe that already exists in XI 3.1.
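The per-object skip-or-add decision in this scenario reduces to a CUID membership test against the destination. A minimal sketch, with hypothetical CUID strings:

```python
def plan_import(source_objects, destination_cuids):
    """Compare source objects against the destination by CUID and decide,
    per object, whether to skip it (already present) or add it."""
    return {obj["name"]: ("skip" if obj["cuid"] in destination_cuids else "add")
            for obj in source_objects}

# Hypothetical CUIDs for the Sales/Bonus.rep example.
source = [
    {"name": "SalesConnection", "cuid": "CONN-1"},
    {"name": "Sales", "cuid": "UNIV-1"},
    {"name": "Bonus.rep", "cuid": "DOC-9"},
]
destination = {"CONN-1", "UNIV-1"}  # connection and universe migrated earlier

plan = plan_import(source, destination)
```

Because the connection and universe CUIDs already exist on the destination, only the report is added, exactly as in the walkthrough above.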

Updating objects
If you are performing an incremental migration and choose to update objects in your destination environment, you will be asked to select which object types to overwrite in the destination system if they already exist. You can choose whether to overwrite any or all of the following:
Documents, dashboards, and analytic content
Universes
Universe connections


Group and user membership
Object rights
Existing objects can be identified either by name or by their unique identifier (CUID).

Identifying objects by name


Use an object's name to determine whether it exists on the destination system when you want to merge two different repositories into a single system. This option can create new, unique CUIDs on the destination system in cases where there is a name conflict. Additionally, destination CUIDs are preserved when an object with the same name as a destination object is copied. The XI 3.1 Import Wizard has an import option to overwrite existing objects when the name and path are matched in the destination system, even though the CUID may differ. When updating objects with the same name where the CUIDs of the source and destination objects differ, the destination object's CUID is kept. If you choose to rename duplicate objects, the name of the source object is updated and the source CUID is used if it does not already exist in the destination system. If the CUID already exists, the renamed report is given a new CUID. Only certain types of objects can be compared by name. Objects that can be compared by name are:
Folders
Folders and objects within personal or public folders
Corporate and personal categories
Universes, Universe Connections, and Overloads
Dashboards
Profiles
Schedules

All other objects will be compared based on CUID. When comparing objects by name, the following rules will apply:
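The rename-on-conflict behaviour described above (and exercised in the CUID activity later, where a clashing document becomes "BeijingDocument(2)") can be sketched as a simple suffixing function. The "(n)" suffix format mirrors what the activity reports; treat it as illustrative:

```python
def unique_name(name, existing_names):
    """Mimic the '(2)', '(3)', ... suffixing applied when an incoming
    object's name clashes with a different-CUID object in the same
    destination folder."""
    if name not in existing_names:
        return name
    n = 2
    while "{}({})".format(name, n) in existing_names:
        n += 1
    return "{}({})".format(name, n)

# Hypothetical destination folder contents.
folder = {"BeijingDocument"}
renamed = unique_name("BeijingDocument", folder)
untouched = unique_name("ShanghaiDocument", folder)
```

The function only fires when names collide; a non-clashing name passes through unchanged, just as a CUID match bypasses renaming entirely.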


Identifying objects by CUID


Use the CUID to determine when an object already exists when you want to update the destination system. This option will never assign a new CUID to objects, preserving their identity across deployments. When comparing objects by CUID, the following rules will apply:


Activity: CUID generation


Instructions
1. On Machine 1, create a new top-level folder named "ChinaFolder". Find its CUID by opening its properties in the CMC. Write down the value of the CUID.
2. On Machine 1, add a new Web Intelligence document to the newly created folder "ChinaFolder". Name that newly created document "BeijingDocument". Investigate its CUID by verifying its properties in the CMC. Write down the value of the CUID. Note: You may copy an existing document, move an existing document, or create a brand new document.
3. Run the Import Wizard from Machine 1 to connect to Machine 2. Select the top option to compare by CUID (in the case of a name conflict, rename it). You should see that the Import Wizard created ChinaFolder and BeijingDocument on Machine 2. Investigate the CUIDs for ChinaFolder and BeijingDocument by checking their properties in the CMC. The CUID values should be the same for those objects on Machine 1 and Machine 2.
4. On Machine 2, rename the folder "ChinaFolder" to "NewFolder". Note that renaming the folder does not change its CUID value. Modify the contents of BeijingDocument by changing the font color of the title of the document, and save the changes. Note that modifications to the contents of the document do not change the CUID.
5. Run the Import Wizard from Machine 1 to connect to Machine 2. Select the top option to compare by CUID (in case there is a name conflict, rename it).
6. Try to predict the outcome. Write down the expected folder and document structure on Machine 2.
7. You should see "NewFolder" renamed back to "ChinaFolder" (both folders had the same CUID). You should also see that the contents of the object "BeijingDocument" have reverted back to match the source document contents from Machine 1 (both documents were located in the same folder). At this point the contents of Machine 1 and Machine 2 should be identical.
8. Discuss in groups how this situation could have happened in real life.

Instructions to create a Name Conflict


1. Rename "BeijingDocument" to "NewDocument". Note that renaming the document does not change its CUID and, as a result, it does not create a name conflict.
2. Add a new document named "BeijingDocument" to the folder "ChinaFolder". Write down the CUID of the newly added document. This CUID should be different from the CUID of the original "BeijingDocument". You should now see two documents in ChinaFolder. At this point you have potentially created a name conflict, where an incoming object (BeijingDocument) might have the same name but a different CUID from the BeijingDocument on Machine 2.


3. Run the Import Wizard again from Machine 1 to Machine 2. Select the top option to compare by CUID (in the event of a name conflict, rename it).
4. Try to predict the outcome. Write down the expected folder and document structure on Machine 2.
5. You should see no changes to "ChinaFolder" on Machine 2. You should see that the object "BeijingDocument" and its contents have not changed. You should see that "NewDocument" has been renamed to "BeijingDocument(2)" because of the name conflict (same name, different CUID) with the existing "BeijingDocument". Remember you cannot have two documents with the same name in the same folder.
6. Discuss in groups how this situation could have happened in real life.

Create another Name Conflict


1. Remove "BeijingDocument(2)" from Machine 2. You should now have only a single "BeijingDocument" in "ChinaFolder" on Machine 2. Note that "BeijingDocument" on Machine 1 has a different CUID to that of "BeijingDocument" on Machine 2. This creates a possible name conflict.
2. Run the Import Wizard again from Machine 1 to Machine 2. Select the top option to compare by CUID (in the event of a name conflict, rename it).
3. Try to predict the outcome. Write down the expected folder and document structure on Machine 2.
4. Machine 2 should have an unchanged "ChinaFolder". You should see that the object "BeijingDocument" and its contents have not changed on Machine 2. You should also see that the new object "BeijingDocument(2)" has been added because of the name conflict (same name, different CUID) with the existing "BeijingDocument". You should now have two documents in "ChinaFolder" on Machine 2.
5. Discuss in groups how this situation could have happened in real life.

Investigate what happens when objects are moved on the destination system
1. On Machine 2, delete ChinaFolder and its contents.
2. On Machine 1, run the Import Wizard from Machine 1 to Machine 2. Select the top option to compare by CUID (in the event of a name conflict, rename it). You should now see "ChinaFolder" and "BeijingDocument" on Machine 2. Investigate the CUIDs for "ChinaFolder" and "BeijingDocument" by viewing their properties in the CMC. The CUID values should be the same for those objects on Machine 1 and Machine 2.
3. On Machine 2, create a new folder and name it "IndiaFolder".
4. On Machine 2, move "BeijingDocument" from "ChinaFolder" to "IndiaFolder". View the properties of "BeijingDocument" in "IndiaFolder". Its CUID should not have changed. At this point "ChinaFolder" should be empty and "IndiaFolder" should contain the object "BeijingDocument".
5. On Machine 1, run the Import Wizard from Machine 1 to Machine 2. Select the top option to compare by CUID (in the event of a name conflict, rename it).


6. Try to predict the outcome. Write down the expected folder and document structure on Machine 2.
7. On Machine 2 you should end up with "ChinaFolder" containing "BeijingDocument". You should see that "BeijingDocument" inside the folder "IndiaFolder" disappeared. Note that you cannot have two documents with the same CUID in the same CMS system; CUIDs are unique across the cluster.
8. Discuss in groups how this situation could have happened in real life.

Using the Import Wizard to import data


The Import Wizard is a locally-installed Windows application that allows you to import existing user accounts, groups, categories, folders, report objects, report instances, calendars, events, repository objects, and server groups.
Note: You can use the Import Wizard to import information from an existing system to a new BusinessObjects Enterprise system that is running on Windows or UNIX.

The Import Wizard imports settings that are specific to each object, rather than global system settings. For instance, a global minimum-number-of-characters password restriction is not imported, but a user-level "must change password at next logon" restriction is imported with the user account. Always import users if you want to bring across the associated rights for an object, even if the user already exists in the destination system. If the user already exists, the Import Wizard maps all rights for the user on the source system to the existing user on the destination system. If the user is not brought across, all rights information for that user is discarded.

Import Example A:
A user has Full Control rights for an object in the source environment, but the user is not imported into the destination environment. The Full Control right for that user is discarded in the destination environment when the object is imported into the destination environment. In the case of objects imported into the destination environment without their owners, the administrator of the destination environment becomes the new owner of the objects.

Import Example B:
User A owns an object and has Full Control rights while User C has View rights on the same object in the source environment. If the administrator runs the Import Wizard and imports the object into the destination environment along with User C, but does not import User A, the object becomes owned by the administrator. User A loses Full Control rights, but User C still has View rights on the object. Note: Before starting this procedure, ensure you have the Enterprise account credentials that provide you with administrative rights to the BusinessObjects Enterprise system for both the source and the destination system. The functionality provided by the Import Wizard varies depending upon the product from which you are importing information.
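Examples A and B above can be condensed into a small sketch of the rights-and-ownership rule: rights of users not included in the import are discarded, and an object arriving without its owner is reassigned to the administrator. The object structure and names below are illustrative:

```python
def import_without_owner(obj, imported_users, admin="Administrator"):
    """Sketch of the behaviour described in Examples A and B: keep only
    rights belonging to imported users, and hand ownership to the
    administrator if the owner was not imported."""
    rights = {user: level for user, level in obj["rights"].items()
              if user in imported_users}
    owner = obj["owner"] if obj["owner"] in imported_users else admin
    return {"name": obj["name"], "owner": owner, "rights": rights}

# Example B: UserA owns the object, UserC has View; only UserC is imported.
source_obj = {"name": "Bonus.rep", "owner": "UserA",
              "rights": {"UserA": "Full Control", "UserC": "View"}}
result = import_without_owner(source_obj, imported_users={"UserC"})
```

Running the same function with UserA included in `imported_users` would preserve both the ownership and the Full Control right, which is why the guide recommends always importing users alongside their objects.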


When importing information using the Import Wizard, first specify the source and destination environments and then select the information that you want to import. The Import Wizard copies the requested information from the source environment to the destination environment.

To specify the source and destination environments


This procedure shows how to specify a source environment and a destination environment using the initial screens of the Import Wizard.
1. From the BusinessObjects Enterprise XI 3.0 program group, click Import Wizard.
2. Click Next. The Specify source environment dialog box appears.

3. In the Source list, select the product from which you want to import information.


4. In the CMS Name field, enter the name of the source environment's CMS. Note: Depending on the source type, you will be asked for different credentials. If you choose Crystal Enterprise 10 or BusinessObjects Enterprise XI, you will be asked for a CMS, User Name, and Password. For earlier versions of Crystal Enterprise, you will be asked for an APS Name, User Name, and Password. Because earlier versions of BusinessObjects do not have a CMS, you will be asked for the User Name, Password, and the location of the domain key file (domain.key).
5. Enter the appropriate credentials for an account with administrative rights to the source environment.
6. Click Next. The Specify destination environment dialog box appears.


7. In the CMS Name field, enter the name of the CMS for the destination environment.
8. Enter the User Name and Password of an Enterprise account that provides you with administrative rights to the BusinessObjects Enterprise system.
9. Click Next. The Choose objects to import dialog box appears.
10. In the Choose objects to import dialog box, select the check box (or check boxes) corresponding to the information you want to import. Note: The options available depend on the version of the source environment. Events and server groups can be imported from Crystal Enterprise 8.5 or later. Repository objects and calendars can be imported from BusinessObjects Enterprise XI. Access levels can be imported from BusinessObjects Enterprise XI 3.0.
11. Click Next.
12. In the Import scenario dialog box, select the import scenario.


These scenario options provide the opportunity to add, overwrite, or reject objects that may have the same name or CUID (cluster unique identifier) in the destination CMS system database. You have two options to identify an object and determine whether it already exists in the destination environment:
1. Use the source object's unique identifier (CUID).
2. Use the source object's name and path.
You must decide how you would like the Import Wizard to handle scenarios where the objects already exist in the destination environment.

If the Import Wizard detects an object in the destination with the same unique identifier:

Action: Update the destination object. In case of name conflict, rename it.
Description: If the Import Wizard finds an object in the destination environment with the same CUID, it updates the destination's object. If it does not find an object with the same CUID, but it finds an object with the same name, it imports the object from the source environment and then renames that object.

Action: Update the destination object. In case of name conflict, do not import it.
Description: If the Import Wizard finds an object in the destination environment with the same CUID, it updates the destination's object. If it finds an object with the same name but a different CUID, it does not import the object from the source environment.

Action: Do not import the object.
Description: If the Import Wizard finds an object in the destination environment with the same CUID, it does not import the object.

If the Import Wizard detects an object in the destination with the same name and path:

Action: Keep the destination object and import a renamed copy of the object.
Description: If the Import Wizard finds that an object already exists in the destination environment with the same name and path, it imports the source's object and renames it. After the import, both the destination's original and the source's version are on the destination.

Action: Update the destination object.
Description: If the Import Wizard finds that an object already exists in the destination environment with the same name and path, it updates the destination environment's version with the source's version.

Action: Do not import the object.
Description: If the Import Wizard finds that an object already exists in the destination environment with the same name and path, it does not import the source's version.
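The two decision tables can be encoded as small lookup functions, one per matching mode. The scenario labels ("update_rename", "keep_and_rename", and so on) are shorthand invented here for the three Import Wizard choices, not names used by the product:

```python
def decide_by_cuid(scenario, same_cuid, same_name):
    """Decision table for 'match by unique identifier (CUID)'.
    scenario: 'update_rename', 'update_skip', or 'no_import'."""
    if same_cuid:
        return "do_not_import" if scenario == "no_import" else "update_destination"
    if same_name:
        return "import_renamed" if scenario == "update_rename" else "do_not_import"
    return "import"  # no match on the destination: plain import

def decide_by_name(scenario, same_name_and_path):
    """Decision table for 'match by name and path'.
    scenario: 'keep_and_rename', 'update', or 'no_import'."""
    if not same_name_and_path:
        return "import"
    return {"keep_and_rename": "import_renamed",
            "update": "update_destination",
            "no_import": "do_not_import"}[scenario]
```

Encoding the tables this way makes it easy to check each cell of the matrix against the prose descriptions above.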

Note: Matching objects by name and path is only supported for the following object types:
Folders and objects under public folders and personal folders
Corporate Categories
Personal Categories
Universes, Overloads and Connections
Dashboards
Profiles
Schedules
All other object types use the matching-by-unique-identifier scheme.
13. Click Next. The Incremental import screen appears. This screen prompts you to specify which types of objects and rights you want to overwrite in the destination environment with objects from the source environment when a match is found.


14.Click Next. A Note On Importing Server Groups appears. This warning reminds you to select the required users and groups that have objects associated with them in order for the rights to be applied. Note: Server groups are also imported without their server members. Once you have finished the importing process, you must manually add the required servers to the imported server group. If no servers are added to the group(s), reports dependent on that group will not run successfully. 15.Click Next. The Select Users and Groups dialog box appears.


In the Groups list, select the groups that you want to import. In the Subgroups and Users list, select specific members of any group. This example imports all of the users and groups. 16.Click Next.


The Select the custom Access Level screen appears. Select the access levels that you want to import. 17.Click Next. The Select Categories dialog box appears. 18.Select the categories you wish to import.


Note: You can also choose to Import all objects that belong to the selected categories. Document domains in BusinessObjects 6.5 come across as folders in BusinessObjects Enterprise XI. 19.Click Next. The Select Folders and Objects dialog box appears. 20.Select the check boxes for the folders and reports that you want to import.


Note: You can also choose to Import all instances of each selected report and object package. 21.Click Next. If you are importing application folders and objects, the Select application folders and objects screen appears. Select the application folders and objects that you want to import. Note: If the selected folders and objects exist on the destination system, they will be updated using the source system as a reference. 22.Click Next. Import options for universe and connection screen appears.


This screen allows you to select one of three options:
Import all universes and all connection objects.
Import all universes and only connection objects used by these universes.
Import the universes and connections that the selected Web Intelligence and Desktop Intelligence documents use directly.
Note: If you select the third option, you can select additional universes to import on the next screen.
23. Click Next. The Import repository objects options dialog box appears.
24. Select an option to import selected reports that use repository objects.
25. Click Next.


The "Import options for publications" screen appears if you are importing profiles or publications.
26. Click Next. If you chose to import remote connections and replication jobs, the Remote Connections and Replication Jobs screen appears. Select the remote connections and replication jobs that you want to import.
27. Click Next. The Preparing for Import dialog box appears.
28. When the Information collection complete dialog box appears, click Finish to begin importing the information. The Import Progress dialog box displays status information and creates an Import Summary while the Import Wizard completes its tasks. If the Import Summary shows that some information was not imported successfully, click View Detail Log for a description of the problem. Otherwise, click Done.
Note: The information that appears in the Detail Log is also written to a text file called ImportWiz.log, found in the directory where the Import Wizard was run. By default, this directory is C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Logging\. The log file includes a system-generated ID number and a title that describes the action taken and the reason why.



Managing BusinessObjects LifeCycle Manager


In this unit you will learn how to promote BusinessObjects Enterprise repository objects from one BusinessObjects Enterprise XI 3.1 system to another in a distributed environment, using the new tool called BusinessObjects LifeCycle Manager XI 3.1. Note: BusinessObjects LifeCycle Manager XI 3.1 is a separate installation on top of BusinessObjects Enterprise XI 3.1. It is available as a free download from SAP Service Marketplace. After completing this unit, you will be able to:
Describe the life-cycle management process
Define life-cycle management and promotion
Identify some of the common challenges in your product life-cycle
Identify the main functionality of BusinessObjects LifeCycle Manager

Understanding Life-Cycle Management


Many deployments of BusinessObjects Enterprise contain different stages, such as development, testing, and production. Reports and other business intelligence objects often require modification or enhancement due to changing information and business requirements. Administrators must control how objects are promoted through these stages: whether the promoted objects are completely new, or whether they overwrite or update objects that already exist in the destination environment. Life-cycle management refers to the set of processes involved in managing information related to a product life cycle. It provides a framework that helps organizations enhance and analyze the life-cycle stages. It also helps organizations identify and establish proactive systems. The product life cycle involves three phases:

Development system
This system is meant to be broken by long-running reports and incorrect queries; your major mistakes should be made here. Speed, responsiveness, and stability of the development system are secondary factors. The data sources that reports run against in the development system should, if possible, be subsets of actual production data sources rather than the production data sources themselves. Reports, universes, and universe connections should be backed up on a nightly basis. The development system can also be used by administrators to create a production-level security framework or modify an existing one.

Test system
The test system is used to ensure that there are no surprises once a report actually moves to the production environment. User acceptance and load testing are done at this point, and the report is run as it would be in the actual production environment.


Designing and deploying a solution Learners Guide


New reports that have not yet made it to the production system are found in the test system. Backups of the test system should be made so that it can be used as an emergency backup system should the production system suffer a system-wide failure, or if no disaster recovery system is implemented.

Production system
The production system is the main entry and use point for the users. The system should be backed up nightly according to best practices, and auditing must be used to proactively ensure that all reports are running as expected. Auditing tools can also be used to ensure that custom or user-created reports are not interfering with system performance.

These phases can occur at the same site or at different geographical locations. To obtain a high-quality and competitive product, the time required to transfer resources from one repository to another must be minimal. You may have experienced that migrating BusinessObjects elements is time-consuming, especially when you manage changes from a development system to a production system.

An important consideration in life-cycle management within the same version is how the objects will exist and behave in the new environment. Will the promoted objects be completely new objects in the new environment? Will they overwrite existing objects? Furthermore, the interdependency of BI objects adds complexity, because these objects have to be moved together without breaking their dependencies. Hence an application is required to carry out this process in an effective manner.

Introducing BusinessObjects LifeCycle Manager


BusinessObjects LifeCycle Manager 3.1 (the LCM tool) is a web-based application that provides a centralized view to monitor the progress of the entire life-cycle process. It enables you to move BI resources from one system to another without affecting the dependencies of these resources. It also enables you to manage different versions of BI resources, map dependencies of BI resources, and roll back a promoted resource to restore the destination system to its previous state. BusinessObjects LifeCycle Manager provides the following features:
- Promotion
- Managing dependencies
- Mapping
- Scheduling
- Security
- Test Promotion
- Air Gap
- Rollback


- Version Management
- Auditing

Installing BusinessObjects LifeCycle Manager


BusinessObjects LifeCycle Manager 3.1 works with BusinessObjects Enterprise XI 3.1 and later. The tool is included in the BusinessObjects Enterprise XI 3.1 install media, on the collateral CD (Add-on folder). It supports installation and deployment on Windows only; however, it can be used to administer Windows or UNIX systems. The supported platforms are the same as those listed in the PAR (Products Availability Report) documentation for BusinessObjects Enterprise XI 3.1 for Windows. Visit http://www.businessobjects.com/support/default.asp for more details.

Authentication and authorization


The LCM tool supports the following authentication types:
- Enterprise authentication: requires a user name and password that are recognized by the BusinessObjects Enterprise system. This is the default authentication method.
- LDAP authentication: requires a user name and password that are recognized by the LDAP directory server.
- Windows AD authentication: requires a user name and password that are recognized by Windows Active Directory.
The three roles in BusinessObjects LifeCycle Manager are administrator, delegated administrator, and user. The administrator has total control over the LCM tool and monitors how BI resources are transferred through the different stages of the product life cycle. In addition, the administrator can delegate administration, so that a delegated administrator is able to promote BI content along with its security settings. A user in the LCM tool has the rights to view the created jobs, and can perform further operations in the LCM tool if granted the rights by the administrator.

Navigating in BusinessObjects LifeCycle Manager


After you log into the LCM application, you are presented with the LCM home page, which is similar to the InfoView home page. There is a left pane that displays folders, and a link named "Administration Options" where you can manage the configuration.


Defining a promotion job


An LCM job can be defined as the process involved in moving info-objects from a source live system to a destination live system, where both environments are on identical versions of the product. In other words, a job is a collection of related and dependent info-objects that need to be promoted from a source CMS to a destination CMS. The benefit of using a job is that you are able to edit and schedule the job, and you have more granular control over the objects it contains. The created job is stored in the CMS repository as an info-object and is displayed in an LCM folder in the application.

When you create an LCM job, you connect to the source CMS and select the desired info-objects to be promoted. You then decide if you want to promote the dependents as well. If you do, the LCM tool performs a dependency analysis and adds all the dependent objects to the job. You are then able to save the defined job, with all its dependent objects, to the LCM repository. You also have the option to deselect any dependents that you do not want to promote.

When you are ready to promote the LCM job, you log into both the source CMS and the destination CMS, explore the LCM folders, select the desired LCM job, and promote the contents of the job. The contents in the destination system are replaced by the newer version of the info-object. If an object already exists in the destination system, it is first backed up by the LCM tool into a predetermined folder before the newer version of the document is promoted. The promotion of objects from the source system to the destination system is based only on CUID matching.
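The CUID-based matching and pre-promotion backup described above can be sketched as a small model. The dictionaries and CUID values here are invented for illustration; the real tool operates on the CMS repository, not plain dictionaries:

```python
# Illustrative model of an LCM promotion. Objects are matched by CUID
# only: when a CUID already exists in the destination, the existing
# version is backed up to a rollback folder before being overwritten.

def promote(job_objects, destination, rollback_folder):
    """All arguments are dicts mapping CUID -> object payload."""
    for cuid, new_version in job_objects.items():
        if cuid in destination:
            # Back up the existing object so the job can be rolled back.
            rollback_folder[cuid] = destination[cuid]
        destination[cuid] = new_version

destination = {"CUID-1": {"title": "Sales Report", "version": 1}}
backups = {}
promote({"CUID-1": {"title": "Sales Report", "version": 2},
         "CUID-2": {"title": "New Report", "version": 1}},
        destination, backups)
```

After the call, the destination holds the newer version of the existing report plus the new report, and the rollback folder holds the replaced version.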

Managing the dependents of a job


A "dependent object" of a job refers to object such as user, group, and report, that is to be promoted together with the job during the promotion. Info-Objects in BusinessObjects Enterprise environment may be dependent on other objects. These dependencies are exported automatically when an info-object is promoted to another


BusinessObjects Enterprise system. There are two types of dependent resources in a BusinessObjects Enterprise system:

Direct dependent resource


A direct dependent resource refers to a resource on which a selected primary resource object directly depends. For example, a Web Intelligence document depends on a universe. In this case, the Web Intelligence document is the primary resource and the universe is the direct dependent resource.

Indirect dependent resource


An indirect dependent resource refers to a resource on which a selected primary resource object indirectly depends. For example, a Web Intelligence document exists in a folder. In this case, the Web Intelligence document is the primary resource and the folder in which the Web Intelligence document resides is the indirect dependent resource.

Before you promote an object, it is important that you select the dependents that you want to promote and permit the promotion of all those dependents to the other system. Otherwise, the dependents are not promoted along with the job. For example: the Crystal report eFashion universe.rpt uses the eFashion universe and the eFashion-Webi connection, and the report Charting.rep uses the same universe and connection. If you deselect the eFashion universe and the eFashion-Webi connection when you promote this job to the destination system, both eFashion universe.rpt and Charting.rep are no longer linked to the eFashion universe in the destination system.
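The dependency analysis can be pictured as a walk over a dependency graph. The sketch below, using a made-up graph mirroring the eFashion example, collects direct and indirect dependents alike:

```python
from collections import deque

# Illustrative dependency walk: starting from the primary resources in
# a job, collect every direct dependent (e.g. report -> universe) and
# indirect dependent (e.g. universe -> connection).

def collect_dependents(primaries, depends_on):
    """depends_on maps an object name to the objects it needs."""
    selected = set(primaries)
    queue = deque(primaries)
    while queue:
        for dep in depends_on.get(queue.popleft(), []):
            if dep not in selected:
                selected.add(dep)
                queue.append(dep)
    return selected

graph = {
    "eFashion universe.rpt": ["eFashion Universe", "Report Folder"],
    "Charting.rep": ["eFashion Universe"],
    "eFashion Universe": ["eFashion-Webi connection"],
}
job = collect_dependents(["eFashion universe.rpt", "Charting.rep"], graph)
```

The walk pulls in the eFashion-Webi connection even though neither report references it directly, which is exactly why deselecting it breaks both reports' links.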

Mapping the dependents


The mapping feature in BusinessObjects LifeCycle Manager enables you to change a universe or business view connection that exists in the source system to a connection that exists in the destination system. For example, a universe called Finance uses a test connection in the source system. While promoting this universe, you can swap the test connection for a live connection that exists in the destination system. In another situation, if you want to promote a report (which has dependents such as a universe and a connection) but you do not want to overwrite the universe or connection in the destination system, you need to map the report to an existing connection in the destination system. BusinessObjects LifeCycle Manager supports the following kinds of mapping:
- Connection Mappings
- Query as a Web Service (QaaWS) Mappings
- Crystal Report Mappings
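Conceptually, a connection mapping is a substitution applied to each promoted object. A minimal sketch, with invented object and connection names:

```python
# Illustrative connection mapping: while promoting, references to a
# source-system connection are rewritten to a connection that already
# exists in the destination system.

def apply_mapping(objects, connection_map):
    promoted = []
    for obj in objects:
        mapped = dict(obj)  # leave the source object untouched
        conn = mapped.get("connection")
        if conn in connection_map:
            mapped["connection"] = connection_map[conn]
        promoted.append(mapped)
    return promoted

universes = [{"name": "Finance", "connection": "Finance-Test"}]
result = apply_mapping(universes, {"Finance-Test": "Finance-Live"})
```

The promoted copy points at the live connection while the source system's universe keeps its test connection.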

Scheduling a job
Scheduling is a process that allows you to run a job automatically at specified times. BusinessObjects LifeCycle Manager enables you to specify when a job is going to be promoted. A saved job can be promoted at any point in time by a delegated administrator or a user with adequate permissions on the destination system. In addition, there may be cases when you want


to schedule a job at fixed intervals. The LCM tool allows you to promote large jobs when the load on the server is at its minimum. When you schedule a job, you choose the recurrence pattern that you want and specify additional parameters to control exactly when and how often the job runs.
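As a simple illustration of a recurrence pattern, the sketch below expands a weekly 2:00 AM schedule into concrete run times. This is purely illustrative; the LCM scheduler offers its own recurrence options:

```python
from datetime import datetime, timedelta

# Illustrative recurrence expansion: compute the next few promotion
# times for a repeating schedule, e.g. weekly runs at night when the
# load on the server is at its minimum.

def next_runs(start, interval, count):
    return [start + interval * i for i in range(count)]

runs = next_runs(datetime(2009, 1, 5, 2, 0), timedelta(days=7), 3)
```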

Promotion with or without Security


The LCM tool supports the following security options:
- Do not Promote Security: jobs are promoted without the associated security rights. This is the default option.
- Promote Security: jobs are promoted along with the associated security rights.
The following table describes the behavior of info-objects under each security option:
If the info-objects do not exist in the destination system:
- Promotion with security: Info-objects are created in the destination system; they have identical rights on both the source and destination systems.
- Promotion without security: Info-objects are copied to the destination system; the newly created info-objects inherit the rights of the destination system.

If the info-objects exist in the destination system:
- Promotion with security: Info-objects are updated; the info-objects have rights identical to the rights of the source system.
- Promotion without security: Info-objects are updated; however, the rights remain unchanged.

If the users or user groups do not exist in the destination system:
- Promotion with security: Users or user groups are created in the destination system; the rights of the source system are carried to the destination system.
- Promotion without security: Users or user groups are not created in the destination system unless they are primary objects.

If the users or user groups exist in the destination system:
- Promotion with security: Users or user groups are mapped to the destination system; the rights of the users or user groups are identical on both the source and destination systems.
- Promotion without security: Users or user groups are mapped to the destination system; the rights of the users or user groups do not change in the destination system.
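The security behaviors described above reduce to a couple of simple rules, sketched here as an illustrative model rather than the tool's actual implementation:

```python
# Illustrative model of the "Promote Security" option.

def rights_after_promotion(promote_security, source_rights, destination_rights):
    # With security, the source system's rights are carried over;
    # without security, the destination system's rights are kept.
    return source_rights if promote_security else destination_rights

def ensure_user(user, destination_users, promote_security, is_primary=False):
    # Without security, missing users are created only when they are
    # primary objects of the job.
    if user not in destination_users and (promote_security or is_primary):
        destination_users.add(user)
    return user in destination_users
```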

Testing promotion
While promoting an object, the following conditions can occur:
- An object with the same CUID exists in the destination: If the CUID is identical and rollback is enabled, the object in the destination is first backed up and stored in a predetermined folder. The new object from the source is then propagated to the destination system.
- An object with the same name but a different CUID exists in the destination: This occurs when an object with the same name already exists in the destination and a new object with an identical name is propagated from the source. If the object is propagated to the same folder as the original, the job promotes the new object but changes its name, appending a number, such as (2), to the end of the name. The log file records this change of details.

Test Promote is the feature of the LCM tool that matches all the CUIDs in the job against the destination system for any potential conflicts with destination objects. A typical use case for this option is to analyze a proposed change to the destination system before it is integrated: you want the tool to report the changes that would be made, but not to store the results in the CMS database. The LCM tool's "Test Promotion" feature displays a list of what would potentially be added or replaced in the destination system. Test Promotion shows the promotion details in a table with two columns: the objects (users, groups, and universes) to be promoted are displayed in the first column, and their status in the second column. Some of the potential messages that can be displayed by the Test option include:
- New object will be added to the destination system.
- There is a name or CUID conflict.
- No dependent objects have been promoted from the source system.
- Some of the dependent objects have not been promoted from the source system.
- Object failed to be updated in the destination system. Reason: FRS is unreachable.
- Object failed to be promoted to the destination system (for example, you have no rights to update the Finance folder).
- Security has not been imported from the source CMS.
- Security has been imported to the destination system successfully.


Rolling back a job


Rollback is a function of the LCM tool that restores the destination system to the identical state that existed before a promotion. Using this feature, you are able to roll back the changes made by a particular job instance to their previous versions. You have the option to roll back all objects within the scope of the job instance, or only a subset of them.

Full / Job Rollback


You remove all the changes applied to the destination system by the job and restore the system to the state that existed before the promotion.

Partial / Resource Rollback


You select only one resource from the job for rollback. Only this instance of the object is rolled back.
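Both rollback modes can be modeled on top of the backups taken at promotion time; in this sketch, plain dictionaries stand in for repository objects:

```python
# Illustrative rollback: restore objects from the backups taken before
# promotion. With resources=None every backed-up object is restored
# (full rollback); passing a list restores only that subset (partial).

def rollback(destination, backups, resources=None):
    for cuid in (backups if resources is None else resources):
        if cuid in backups:
            destination[cuid] = backups[cuid]

dest = {"C1": "v2", "C2": "v2"}
saved = {"C1": "v1", "C2": "v1"}
rollback(dest, saved, resources=["C1"])  # partial: only C1 reverts
```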

Using the Version Management System


BusinessObjects LifeCycle Manager enables you to manage versions of resources that exist in the BusinessObjects Enterprise repository. The LCM tool uses Subversion, an open-source version control system, to version the resources.

Version control (also known as revision control) is the management of multiple versions of the same unit of information. It is most commonly used in engineering and software development to manage the ongoing evolution of digital documents, such as source code, blueprints, electronic models, and other critical information shared by a team of people. Information stored in the BusinessObjects Enterprise repository (for example, documents and their dependents) changes over time. Tracking the changes of documents and their related dependencies is what constitutes a versioning system. The version control system records who made a specific change and allows rolling back to previous states to avoid undesirable changes. Essentially, versioning of content means archiving important document versions so that administrators have control over changes over time.

In the life-cycle management tool, version management is achieved by integration with a third-party open-source version management tool called Subversion (http://downloads.open.collab.net/collabnet-subversion.html). It is responsible for versioning the objects stored in the repository and allows users of the LCM tool to check in and check out their changes.

Using Subversion as the Version Management System


Subversion (SVN) is an open-source version control system maintained by CollabNet Inc. It allows LCM users to keep track of changes that are made to objects stored in the BusinessObjects Enterprise repository. Subversion is bundled with the BusinessObjects LifeCycle Manager 3.1 installation as the default version management tool. It is installed to whatever location you choose at


the time of install. You need to provide Subversion's user name and password to BusinessObjects LifeCycle Manager at installation time. These credentials are stored in the LCM configuration; whenever BusinessObjects LifeCycle Manager attempts any VMS task, it reads the configuration and passes this information to the VMS. The configuration of Subversion is completed after the default LCM installation finishes. To use Subversion within the LCM tool, you must be authenticated by single sign-on and have the rights to check in and check out the objects in the promotion jobs. You are able to perform the following operations against the Version Management System:
- Adding objects to the VMS
- Checking in objects to the VMS
- Checking out objects from the VMS
- Listing the version history
- Getting the latest version of objects
- Getting a particular version from the VMS to the CMS or to the local disk
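The check-in and check-out operations can be illustrated with a tiny in-memory version store, a stand-in for the Subversion repository that the real tool delegates to:

```python
# Illustrative version store mimicking the LCM check-in / check-out
# operations backed by Subversion.

class VersionStore:
    def __init__(self):
        self.history = {}  # object id -> list of revisions

    def check_in(self, obj_id, content, author):
        revs = self.history.setdefault(obj_id, [])
        revs.append({"rev": len(revs) + 1, "content": content,
                     "author": author})
        return revs[-1]["rev"]

    def check_out(self, obj_id, rev=None):
        # No rev -> latest version; otherwise a specific revision.
        revs = self.history[obj_id]
        return revs[-1 if rev is None else rev - 1]["content"]

    def log(self, obj_id):
        # Version history: who checked in each revision.
        return [(r["rev"], r["author"]) for r in self.history[obj_id]]
```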

Understanding "Air Gap" requirements


Recall that promotion is the capability to transfer complete, functioning resources from one repository to another. If the source and the destination system can be accessed simultaneously, the promotion is done over a WAN or LAN network. However, if there is no network connection between the source and destination systems, BusinessObjects LifeCycle Manager employs a technique known as the "Air Gap" feature.

The inter-repository communication in the LCM tool uses the Web Service interface. The underlying servers are grouped in clusters that are geographically separated and connected via the LAN/WAN network. Common Object Request Broker Architecture (CORBA) or other proprietary communication channels serve as the communication bus to handle and deliver client-server interaction. However, this deployment scenario may raise configuration and security problems for firewalls and proxies in some organizations. Some organizations have systems that are highly secured and located in isolation, so that no one can touch them. If there is no connectivity between the source and the destination systems, BIAR files are generated, copied onto a data storage device, and transferred to the destination system. This is known as the "Air Gap" feature.

The business need for the Air Gap feature is usually found in banks and governments that have absolutely no network connectivity between the development, testing, and production environments. In this situation, BIAR files are written to a DVD or tape, then physically moved to the production environment where they can be imported.

Business Intelligence Archive Resource (BIAR) files are a packaging tool for managed content in the BusinessObjects Enterprise system. They can be used to archive folders and objects in the Enterprise repository so that they can be easily transferred to a different location.
This is useful for making backups and for moving BI applications from a development environment to a production environment.


When there is no connectivity between the source and the destination systems, you need to log into the source system and create BIAR files. These files can be stored on a data storage device and used later. The workflow for promoting objects using BIAR files in BusinessObjects LifeCycle Manager can be summarized in the following steps:
1. Select an LCM job for promotion.
2. Enter the source CMS connection parameters.
3. Select the BIAR file option as the destination, and enter the folder path in which to save the BIAR file.
4. Select whether you want to promote security or enable rollback.
5. Promote the job.
6. Verify that the BIAR file was created successfully.
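As a rough illustration of the export half of this workflow, the sketch below packs job objects into a ZIP archive and unpacks them on the other side. The entry names and payloads are invented; a real BIAR file has its own internal layout:

```python
import io
import zipfile

# Illustrative Air Gap transfer: serialize job objects into an archive
# that can be copied to removable media, then read back at the
# disconnected destination. Entry names are made up for the sketch.

def export_job(job_objects):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as archive:
        for cuid, payload in job_objects.items():
            archive.writestr("objects/%s.xml" % cuid, payload)
    return buf.getvalue()

def import_job(data):
    with zipfile.ZipFile(io.BytesIO(data)) as archive:
        return {name[len("objects/"):-len(".xml")]: archive.read(name).decode()
                for name in archive.namelist()}

data = export_job({"C1": "<report/>", "C2": "<universe/>"})
restored = import_job(data)
```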


Managing the Federation Services


Federation is an important cross-site replication tool for working with multiple BusinessObjects Enterprise deployments in a global environment. Content can be created and managed from one BusinessObjects Enterprise deployment and replicated to other BusinessObjects Enterprise deployments across geographical sites on a recurring schedule. You can complete both one-way and two-way replication jobs. After completing this unit, you will be able to:
- Recognize when to use the different types of replication
- Manage replication conflict resolution and object cleanup
- Use web services for replication
- Troubleshoot replication error messages

Reviewing Federation
Federation is a feature available in the CMC in BusinessObjects Enterprise. It is an important cross-site replication tool for working with multiple BusinessObjects Enterprise deployments in a global environment. Content can be created and managed from one BusinessObjects Enterprise deployment and replicated to other BusinessObjects Enterprise deployments across geographical sites on a recurring schedule. You can complete both one-way and two-way replication jobs.


Replication types and mode options


Depending on your selection of Replication Type and Replication Mode, you will create one of four different Replication Job options: one-way replication, two-way replication, refresh from origin, or refresh from destination.

One-way replication
With one-way replication, you can only replicate content in one direction, from the Origin site to a Destination site. Any changes made to objects on the Origin site in the Replication List are sent to the Destination site. However, changes made to objects on a Destination site are not sent back to the Origin site. One-way replication is ideal for deployments with one central BusinessObjects Enterprise deployment where objects are created, modified, and administered, while other BusinessObjects Enterprise deployments use the content of the central deployment. To create one-way replication, select the following options:
- Replication Type = One-way replication
- Replication Mode = Normal replication

Two-way replication
With two-way replication, you can replicate content in both directions between the Origin and Destination sites. Any changes made to objects on the Origin site are sent to the Destination sites, and changes made on a Destination site are sent to the Origin site during replication. Note: To perform remote scheduling and to send locally run instances back to the Origin site, you must select two-way replication mode. If you have multiple BusinessObjects Enterprise deployments where content is created, modified, administered, and used at both locations, two-way replication is the most efficient option. It also helps synchronize the deployments. To create two-way replication, select the following options:
- Replication Type = Two-way replication
- Replication Mode = Normal replication

Refresh from Origin or Refresh from Destination


When you replicate content in one-way or two-way replication modes, the objects on the Replication list are replicated to a Destination site. However, not all of the objects may replicate each time the Replication Job executes. Federation has an optimization engine designed to help finish your replication jobs faster. It uses a combination of the object's version and time stamp to determine if the object was modified since the last replication. This check is done on objects specifically selected in the Replication List and any objects replicated during dependency checking.
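The optimization check can be sketched as follows; this is illustrative only, since the actual engine's bookkeeping is internal to Federation:

```python
# Illustrative change detection: an object is replicated only when its
# version or timestamp differs from what was recorded at the last
# replication, unless a refresh run forces it through regardless.

def needs_replication(obj, last_seen, refresh=False):
    if refresh:
        return True
    return last_seen.get(obj["cuid"]) != (obj["version"], obj["timestamp"])

last_seen = {"C1": (3, "2009-01-05T02:00")}
unchanged = {"cuid": "C1", "version": 3, "timestamp": "2009-01-05T02:00"}
modified = {"cuid": "C1", "version": 4, "timestamp": "2009-01-06T02:00"}
```

This also shows why a refresh run is needed when an object's timestamp never changed (as with Report B in Scenario 1 below): without the refresh flag, the unchanged object is skipped.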


However, in some cases the optimization engine may miss objects, which won't be replicated. That's why Refresh from origin and Refresh from destination force the Replication Job to replicate content, and their dependencies, regardless of the timestamps. "Refresh from origin" only sends content from the Origin to the Destination sites. "Refresh from destination" only sends content from the Destination sites to the Origin site. The following three examples highlight scenarios using Refresh from Origin and Refresh from Destination where certain objects will be missed due to the optimization.

Scenario 1:The addition of the objects that contain other objects into an area that is being replicated.
Folder A is replicated from the Origin site to the Destination site. It now exists on both sites. A user moves or copies Folder B with Report B, into Folder A on the Origin site. During the next replication, Federation will see that Folder B's timestamp has changed and will replicate it to the Destination site. However, Report B's timestamp does not change. Therefore, it will be missed by a regular one-way or two-way Replication Job. To ensure Folder B's content is properly replicated, a Replication Job with Refresh from Origin should be used once. After this, the regular one-way or two-way Replication Job will replicate it properly. If this example is reversed and Folder B is moved or copied on the Destination site, then use Refresh from Destination.

Scenario 2: The addition of new objects using Import Wizard or the BIAR command line.
When you add objects to an area that is being replicated using Import Wizard or BIAR command line, the object may not be picked up by a regular one-way or two-way Replication Job. This occurs because the internal clocks on the source and destination systems may be out of sync when using the Import Wizard or BIAR command line. Note: After importing new objects into an area that is being replicated on the Origin site, it is recommended that you execute a Refresh from Origin Replication Job. After importing new objects into an area that is being replicated on the Destination site, it is recommended that you execute a Refresh from Destination Replication Job.

Scenario 3: In between scheduled replication times.


If you add objects to an area that is being replicated and cannot wait until the next scheduled replication time, you can use Refresh from Origin and Refresh from Destination Replication Jobs. By selecting the area where objects have been added, you can replicate content quickly. Note: This scenario can be costly for large Replication Lists, so it is recommended that you do not use this option often. For example, it is not necessary to create refresh-mode replication jobs on an hourly schedule; these modes should be used on a run-now basis or on infrequent schedules. Note: In some cases, certain conflict resolution options are blocked: with Refresh from Origin, the "Destination site takes precedence" option is blocked, and with Refresh from Destination, the "Origin site takes precedence" option is blocked.


Managing conflict detection and resolution


In Federation, a conflict occurs when the properties of an object are changed on both the Origin site and the Destination site(s). Both top-level and nested properties of an object are checked for conflicts. Two types of object conflicts are described in the following examples:
- Frank modifies the report file on the Origin site, and Simon modifies the replicated version on the Destination site.
- Abdul modifies the name of a report on the Origin site, and Maria modifies the name of the replicated report on the Destination site.
Some changes do not create a conflict. For example, if Lily modifies the name of a report on the Origin site, and Malik modifies the description of the replicated version on the Destination site, the changes merge together.
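The merge-versus-conflict rule can be sketched as a property-level comparison against the last replicated state; an illustrative model, not Federation's actual algorithm:

```python
# Illustrative conflict detection: a conflict arises only when the SAME
# property changed on both sites since the last replication; changes to
# different properties are merged.

def changed(base, current):
    return {k for k in current if current.get(k) != base.get(k)}

def resolve(base, origin, destination):
    conflicts = changed(base, origin) & changed(base, destination)
    if conflicts:
        return None, conflicts
    merged = dict(base)
    merged.update({k: origin[k] for k in changed(base, origin)})
    merged.update({k: destination[k] for k in changed(base, destination)})
    return merged, set()

base = {"name": "Report A", "description": "Q1 sales"}
merged, _ = resolve(base,
                    {"name": "Report B", "description": "Q1 sales"},    # Lily
                    {"name": "Report A", "description": "Q1 revenue"})  # Malik
```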

One-way replication conflict resolution


In one-way replication, you have two choices for conflict resolution:
- Origin site takes precedence
- No automatic conflict resolution

Origin site takes precedence


If a conflict occurs during one-way replication, the Origin site object takes precedence. Any changes to objects on a Destination site are overwritten by the Origin site's information.

Origin site takes precedence


Frank changes the name of a report to Report A. Simon changes the name of the replicated version on the Destination site to Report B. After the next replication job runs, the replicated version on the Destination site will revert to Report A. Because the conflict is automatically resolved, it is not generated in the log file and does not appear in the conflicting object list.

No automatic conflict resolution


If a conflict occurs and you select No automatic conflict resolution, the conflict is not resolved: a log file is generated, and the object appears in the conflicting object list. The administrator can access a list of all replicated objects that are in conflict in the Federation area of the CMC. Objects in conflict are grouped together by the Remote Connection they used to connect to the Origin site. To access these lists, go to the Replication Errors folder in the Federation area of the CMC, and select the desired Remote Connection. All replicated objects on a Destination site are flagged with a replication icon. If there is a conflict, objects are flagged with a conflict icon. Note:


The list is updated when a Replication Job that uses a Remote Connection is completed. It contains all objects in conflict for all of the Replication Jobs that use that specific Remote Connection. All objects in conflict are flagged with a conflict icon, and a warning message also appears in the Properties page. Any user with access to the CMC and the Replication Job instances can access the XML log output to the logfile directory. A Destination site object's icon is flagged to indicate a conflict, and during processing, a conflict log is created.

No automatic conflict resolution


Abdul modifies Report A on the Origin site. Maria modifies the replicated version on the Destination site. The next time the replication job runs, the report is in conflict because it has changed on both sites, and the conflict is not resolved. The Destination report is kept, and changes to the Origin's report are not replicated. Subsequent replication jobs behave the same way until the conflict is resolved: any changes on the Origin site are not replicated until the conflict is manually resolved. Note: In this case, the entire object is not replicated. Other changes that are not themselves in conflict are also not brought over.

Manual conflict resolution


To manually resolve a conflict, you have three options:
1. Create a Replication Job that replicates only the objects in conflict. It must use the same Remote Connection object and Replication List.
To keep the Origin site changes, create a Replication Job, set Replication Mode to Refresh from Origin, and set Automatic Conflict Resolution to Origin site takes precedence.
To keep the Destination site changes, create a Replication Job with Replication Type set to Two-way replication, Replication Mode set to Refresh from Destination, and Automatic Conflict Resolution set to Destination site takes precedence.
Note: Setting Replication Mode to Refresh from Origin or Refresh from Destination selects only the objects in conflict on the Replication List, so other objects are not replicated. Schedule the Replication Job to run; it replicates the selected objects and resolves the conflict as specified.
2. Create a Replication Job that replicates only the objects in conflict. It must use the same Remote Connection object. However, unlike option 1, you may create a new Replication List on the Origin site. Include only the objects in conflict and create a new Replication Job that uses this focused Replication List.
To keep the Origin site changes, set Automatic Conflict Resolution to Origin site takes precedence.


To keep the Destination site changes, set Automatic Conflict Resolution to Destination site takes precedence and Replication Type to Two-way replication.
3. For one-way replication jobs, you may simply delete the object on the Destination site. The next time the Replication Job executes, it replicates the object from the Origin site to the Destination site.
Note: Be careful when deleting an object, because other objects that depend on it may be removed, stop working, or lose security. Options 1 and 2 are recommended.

Two-way replication conflict resolution


In two-way replication, you have three choices for conflict resolution:
Origin site takes precedence
Destination site takes precedence
No automatic conflict resolution

Origin site takes precedence


If a conflict occurs, the Origin site will take precedence and overwrite any changes to the Destination site(s).

Origin site takes precedence


Lily modifies the name of a report to Report A. Malik modifies the name of the replicated version on the Destination site to Report B. After the next replication job runs, the replicated version on the Destination site will revert to Report A. This will not generate a conflict in the log file, and it will not appear in the conflicting object list because the conflict was resolved according to the user's instructions on the Origin site.

Destination site takes precedence


If a conflict occurs, the Destination site keeps its changes and writes them back to the Origin site.

Destination site takes precedence


Kamal modifies the name of a report to Report A. Peter modifies the name of the replicated version on the Destination site to Report B. When the replication job runs, a conflict is detected. The name of the Destination report remains as Report B. In two-way replication, changes are also sent back to the Origin site. In this scenario, the Origin site is updated and its report name is changed to Report B. This does not generate a conflict in the log file and it will not appear in the conflicting object list because the conflict was resolved according to the user's instructions.

No automatic conflict resolution


When No automatic conflict resolution is selected, a conflict will not be resolved. The conflict will be noted in a log file for the administrator, who can manually resolve it.


Note: An object's icon is flagged to indicate that a conflict exists. Although changes are replicated to both the Origin and Destination sites in two-way replication, only the Destination site's versions are flagged with a conflict icon.
Note: Any user with access to the CMC and the Replication Job instances can view the XML log written to the logfile directory. A Destination site object's icon is flagged to indicate a conflict, and a conflict log is created during processing. The administrator can access a list of all replicated objects that are in conflict in the Federation area of the CMC. Objects in conflict are grouped together by the Remote Connection they used to connect to the Origin site. To access these lists, go to the Replication Errors folder in the Federation area of the CMC, and click the desired Remote Connection.
Note: The list is updated when a Replication Job that uses a Remote Connection completes. It contains all objects in conflict for all of the Replication Jobs that use that Remote Connection. All replicated objects on a Destination site are flagged with a replication icon; if there is a conflict, objects are flagged with a conflict icon.

No automatic conflict resolution


Michael modifies Report A on the Origin site. Damien modifies the replicated version on the Destination site. When the next replication job runs, the report is in conflict because it has changed on both sites, and the conflict is not resolved. The Destination report is kept, and changes to the Origin's report are not replicated. Subsequent replication jobs behave the same way until the conflict is resolved: any changes on the Origin site are not replicated until the conflict is manually resolved by the administrator or delegated administrator. Note: In this case, the entire object is not replicated. Other changes that are not in conflict are also not brought over.

Manual conflict resolution


To manually resolve a conflict, you have three options:
1. Create a Replication Job that replicates only the objects in conflict. It must use the same Remote Connection object and Replication List.
To keep the Origin site changes, create a Replication Job, set Replication Mode to Refresh from Origin, and set Automatic Conflict Resolution to Origin site takes precedence.
To keep the Destination site changes, create a Replication Job and set Replication Type to Two-way replication, Replication Mode to Refresh from Destination, and Automatic Conflict Resolution to Destination site takes precedence.
Note: Setting Replication Mode to Refresh from Origin or Refresh from Destination selects only the objects in conflict on the Replication List. This way, other objects are not


replicated. Next, schedule the Replication Job to run; it replicates the selected objects and resolves the conflict as specified.
2. Create a Replication Job that replicates only the objects in conflict. It must use the same Remote Connection object. However, unlike option 1, you may create a new Replication List on the Origin site. Include only the objects in conflict and create a new Replication Job that uses this focused Replication List.
To keep the Origin site changes, set Automatic Conflict Resolution to Origin site takes precedence. To keep the Destination site changes, set Automatic Conflict Resolution to Destination site takes precedence and Replication Type to Two-way replication.
3. You may also delete the object on the site whose changes you do not want to keep.
Note: Be careful when deleting an object, because other objects that depend on it may be removed, stop working, or lose security. Options 1 and 2 are recommended.
To keep the Destination site changes, delete the object on the Origin site. The next time the Replication Job executes, it replicates the object from the Destination site back to the Origin site.
Note: Be careful when deleting an Origin site's copy, because other Destination sites that replicate that object may execute their Replication Jobs before the copy has been replicated back. This causes those Destination sites to delete their copies, which will be unavailable until the copy is returned.
To keep the Origin site changes, delete the object on the Destination site.

Managing Object Cleanup


In Federation, you should perform Object Cleanup throughout the lifecycle of your replication process, to make sure all objects that you delete from the Origin site are also deleted from each Destination site. Object Cleanup involves two elements: a Remote Connection and a Replication Job. A Remote Connection object defines general cleanup options, and a Replication Job performs the clean up when the appropriate interval passes.

How to use Object Cleanup


Separate Replication Jobs that use the same Remote Connection work together during Object Cleanup. This means that your Replication Job will clean up objects within its Replication List, as well as objects within other Replication Lists that use the same Remote Connection. A remote connection is only considered the same if the parent of the Replication Job is the same remote connection object.


How to use Object Cleanup


Replication Jobs A and B replicate Object A and Object B. Both replicate from the same Origin site and use the same Remote Connection. If the Origin site deletes Object B, Replication Job A will see that Object B was deleted. Even though Replication Job B is the one replicating it, Object B is also removed from the Destination site, so when Replication Job B executes, it does not need to run an Object Cleanup. Note: Only objects on the Destination site are deleted during Object Cleanup. If you remove an object from the Origin site that is part of a replication, the object is removed from the Destination site. However, if an object is removed from the Destination site, it is not removed from the Origin site during Object Cleanup, even if the Replication Job is in two-way replication mode. Objects that are deleted or removed from the Replication List are not deleted from the Destination site. To properly remove an object that is specified explicitly on a Replication List, delete it on both the Destination site and the Origin site. Objects that are replicated via dependency calculations are not deleted.

Object Cleanup limits


In the Remote Connection object, you can define the number of objects a Replication Job will clean up at one time. Federation automatically tracks where the clean up job ends. This way, the next time you run a Replication Job, it starts the next clean up job at that point. Tip: To complete a Replication Job faster, limit the number of objects for cleanup.

Object Cleanup limits


Replication Jobs A and B are replicating Object A and Object B. Both objects are replicated from the same Origin site and use the same Remote Connection. If the Origin site deletes Object B and the object limit is set to 1, the next time Replication Job A runs, it only checks whether Object A has been deleted. Object B is not checked and is not deleted. Next, Replication Job B runs and starts the Object Cleanup at the point where Replication Job A ended: it checks whether Object B has been deleted and removes it from the Destination site. You can find this option in the Remote Connection object's Limit the number of clean up objects to property. Note: If you do not select this option, all Replication Jobs that use this Remote Connection check all objects for potential cleanup.

Object Cleanup frequency


You can set how often a Replication Job performs Object Cleanup in the Remote Connection's Cleanup Frequency field.


Note: You must enter a positive whole number, which represents the number of hours to wait between object cleanup processing.

Object Cleanup frequency


Replication Jobs A and B replicate Object A and Object B. Both objects are replicated from the same Origin site and use the same Remote Connection. Suppose Object B is deleted from the Origin site, the object limit is set to 1, and the Cleanup Frequency is set to 150 hours. When Replication Job A runs next, it checks whether Object A has been deleted. Because the object limit is set to 1, Object B is not checked or deleted. The next cleanup occurs 150 hours after Replication Job A performed the initial check. Although Replication Jobs A and B may execute many times before the 150-hour limit, neither attempts to run an Object Cleanup. After 150 hours, the next Replication Job to execute attempts cleanup, determines that Object B was deleted, and then deletes it.

Enabling and disabling options


Each Replication Job can participate in Object Cleanup. Use the Enable Object Cleanup on destination option on a Replication Job to control whether it runs an Object Cleanup. In some cases, you may have high-priority Replication Jobs that you do not want to participate in Object Cleanup so that they execute as quickly as possible. For these jobs, disable Object Cleanup.

Using Web Services in Federation


Federation uses Web Services to send objects and their changes between the Origin and Destination sites. Federation-specific Web Services are automatically installed and deployed in your BusinessObjects Enterprise XI 3.0 installation. However, you may want to modify properties or customize deployments in Web Services to improve functionality, as described in this section. Tip: To improve file management and functionality, it is recommended that you enable file caching in Federation.

Session variable
If you are transferring a large number of content files in one Replication Job, you may want to increase the session timeout period of the Federation Web Services. The property is located in the dsws.properties file:
<App Server Installation Directory>\dswsbobje\Web-Inf\classes

Activating Session variable


C:\Program Files\Business Objects\Tomcat55\webapps\dswsbobje\WEB-INF\classes

To activate the session variable, enter:


session.timeout = x

where x is the desired time in seconds. If not specified, the default value is 1 (one) second.
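As a rough sketch, the property can be appended from a shell on a Unix-style deployment. The 600-second value and the /tmp scratch path below are illustrative assumptions, not product defaults; in a real system you would edit the dsws.properties file under WEB-INF/classes directly.

```shell
# Sketch: raise the Federation Web Services session timeout.
# The real file is <App Server Installation Directory>/dswsbobje/WEB-INF/classes/dsws.properties;
# this works on a scratch copy so the sketch is safe to run anywhere.
PROPS=/tmp/dsws.properties
: > "$PROPS"                                # stand-in for the real properties file
echo "session.timeout = 600" >> "$PROPS"    # timeout in seconds; pick a value for your jobs
grep "session.timeout" "$PROPS"             # confirm the property was written
```

Restart the Java application server after changing the property so the new timeout takes effect.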

File caching
File caching allows Web Services to handle very large attachments without buffering them in memory. If it is not enabled for large transfer sizes, all of the Java Virtual Machine's memory may be used and replication may fail. Note: File caching decreases performance, because the Web Services process writes to files instead of memory. You may use a combination of both options: send large transfers to a file and smaller ones to memory. To enable file caching, modify the axis2.xml file located at:
<App Server Installation Directory>\dswsbobje\Web-Inf\conf

Enabling File caching


C:\Program Files\Business Objects\Tomcat55\webapps\dswsbobje\WEB-INF\conf

Enter the following:


<parameter name="cacheAttachments" locked="false">true</parameter>
<parameter name="attachmentDIR" locked="false">temp directory</parameter>
<parameter name="sizeThreshold" locked="false">4000</parameter>

Note: Threshold size is measured in bytes.
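The three parameters can be staged from a shell as a sketch. The /tmp paths below are illustrative stand-ins for the real conf directory and attachment cache directory; in a real deployment the parameters belong inside the <axisconfig> element of axis2.xml.

```shell
# Sketch: write the attachment-caching parameters to a scratch file.
# In a real deployment they go inside axis2.xml under
# <App Server Installation Directory>/dswsbobje/WEB-INF/conf.
CONF=/tmp/axis2-cache-params.xml
cat > "$CONF" <<'EOF'
<parameter name="cacheAttachments" locked="false">true</parameter>
<parameter name="attachmentDIR" locked="false">/tmp/axis2-attachments</parameter>
<parameter name="sizeThreshold" locked="false">4000</parameter>
EOF
grep -c '<parameter' "$CONF"   # three parameters written
```

With sizeThreshold set to 4000, attachments larger than 4000 bytes are written to the attachmentDIR directory while smaller ones stay in memory.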

Custom deployment
Federation Web Services deploy automatically and require the federator, biplatform, and session services to be active. To disable Federation, or any other Web Service, modify the corresponding services.xml file. BusinessObjects Enterprise Web Services are located in:
<App Server Installation Directory>\dswsbobje\WEB-INF\services

Custom deployment
C:\Program Files\Business Objects\Tomcat55\webapps\dswsbobje\WEBINF\services

To deactivate a Web Service:
Add the activate property to the service name tag of the services.xml file and set it to false.
Restart your Java application server.
To disable Federation, the services.xml file is located in:


C:\Program Files\Business Objects\Tomcat55\webapps\dswsbobje\WEBINF\ services\federator\META-INF

Change service name from:


<service name="Federator">

to:
<service name="Federator" activate="false">
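The same edit can be sketched with sed. The scratch copy below is an assumption so the sketch is runnable anywhere; in a real deployment you would point sed at the services.xml under the federator META-INF directory and restart the Java application server afterwards.

```shell
# Sketch: flip the Federator service to inactive by rewriting its
# service tag. Works on a scratch copy of services.xml; the real file
# lives under .../dswsbobje/WEB-INF/services/federator/META-INF.
SVC=/tmp/services.xml
echo '<service name="Federator">' > "$SVC"
sed -i 's|<service name="Federator">|<service name="Federator" activate="false">|' "$SVC"
cat "$SVC"   # now carries activate="false"
```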

Best Practices
You can optimize the performance of a Replication Job if you follow the configuration steps described in this section. If there is a large number of objects in a single Replication Job, you can take additional steps to ensure success when you run it. Typically, you should be able to replicate up to 32,000 objects in each Replication Job; however, some deployments may need configurations with smaller or larger replication sizes.

Obtain a dedicated Web Services provider. Replicated content is sent via Web Services. In a default installation of BusinessObjects Enterprise XI 3.0, all of the Web Services use the same web service provider. This means that larger Replication Jobs may tie up the web service provider longer and slow its response to other web service requests, as well as to any applications it serves. If you plan to replicate a large number of objects at once, or to run several Replication Jobs in sequence, consider deploying Federation Web Services on its own Java application server with its own web services provider. To do this, use the BusinessObjects installer and install BusinessObjects Enterprise Web Services. You must have a Java application server already running; if you do not, install the entire Web Tier Components, which installs both BusinessObjects Web Services and Tomcat. Launch the installer on the desired machine, select Custom Install, and select either the Web Tier Components option or BusinessObjects Web Services.
Note: You must input an existing CMS hostname, port, and administrator password. Use this new Web Services provider's URI in your Remote Connection's URI field.

Increase the Java Application Server's available memory. Increase the available memory your Java Application Server can use if your single Replication Job replicates many objects, or if you are sharing the Application Server with other applications. If you deployed BusinessObjects Enterprise and Tomcat, the default available memory is 1 GB. To increase the available memory for Tomcat:


In Windows:
1. Open the Tomcat Configuration.
2. Select Java.
3. In the Java Options text box, locate -Xmx1024M.
4. Increase -Xmx1024M to the desired size. For example, to increase the memory to 2 GB, enter: -Xmx2048M

In Unix:
1. In <BOE_Install_Dir>/setup/, open env.sh with your preferred text editor.
2. Locate the following lines:
# if [ -d "$BOBJEDIR"/tomcat ]; then
# set the JAVA_OPTS for Tomcat
JAVA_OPTS="-Dbobj.enterprise.home=${BOBJEDIR}enterprise120 -Djava.awt.headless=true"
if [ "$SOFTWARE" = "AIX" -o "$SOFTWARE" = "SunOS" -o "$SOFTWARE" = "Linux" -o "$SOFTWARE" = "HP-UX" ]; then
JAVA_OPTS="$JAVA_OPTS -Xmx1024m -XX:MaxPermSize=256m"
fi
export JAVA_OPTS
# fi

3. Increase the -Xmx1024m parameter to the desired size. For example, to increase the memory to 2 GB, enter: -Xmx2048M
Tip: For other Java Application Servers, refer to your Java Application Server's documentation to increase the available memory.

Reduce the size of the BIAR files being created. Federation uses Web Services to replicate content between the Origin site and Destination site(s). Objects are grouped together and compressed into BIAR files for more efficient transportation. When replicating a large number of objects, it is suggested that you configure your Java application server to create smaller BIAR files. Federation packages and compresses objects across multiple smaller BIAR files, so the number of objects you want to replicate is not limited. To reduce the size of the BIAR files created, add the following Java parameters to your Java application server: -Dbobj.biar.suggestSplit and -Dbobj.biar.forceSplit


bobj.biar.suggestSplit suggests a target size for the BIAR file, which Federation tries to meet. The suggested new value is 90 MB. bobj.biar.forceSplit forces a BIAR file to stop at a given size. The suggested new value is 100 MB.
Note: You do not need to change the default BIAR file size settings unless your application server is running out of memory and its maximum heap size cannot be increased any further.
For Tomcat on Windows:
1. Open the Tomcat Configuration.
2. Select Java.
3. Under the Java Options text box, add the following lines at the end:
-Dbobj.biar.suggestSplit=90 -Dbobj.biar.forceSplit=100

For Tomcat on Unix/Linux:
1. Open env.sh with your preferred text editor. It is located in <BOE_Install_Dir>/setup/.
2. Locate the following lines:
# if [ -d "$BOBJEDIR"/tomcat ]; then
# set the JAVA_OPTS for tomcat
JAVA_OPTS="-Dbobj.enterprise.home=${BOBJEDIR}enterprise120 -Djava.awt.headless=true"
if [ "$SOFTWARE" = "AIX" -o "$SOFTWARE" = "SunOS" -o "$SOFTWARE" = "Linux" -o "$SOFTWARE" = "HP-UX" ]; then
JAVA_OPTS="$JAVA_OPTS -Xmx1024m -XX:MaxPermSize=256m"
fi
export JAVA_OPTS
# fi

3. Add the desired BIAR file size parameters. For example:


JAVA_OPTS="$JAVA_OPTS -Xmx1024m -XX:MaxPermSize=256m -Dbobj.biar.suggestSplit=90 -Dbobj.biar.forceSplit=100"

For other Java application servers, consult your documentation to add Java system properties.

Increase the Socket Timeout. The Adaptive Job Server is responsible for running the Replication Job. During the execution of the Replication Job, the Adaptive Job Server establishes a connection to the Origin site. When receiving large amounts of information from the Origin site, it is important that the socket the Adaptive Job Server uses to receive information does not time out. The default value is 90 minutes; increase the Socket Timeout to the time you require. To increase the Socket Timeout on the Adaptive Job Server:
1. Open the Central Management Console (CMC).
2. Navigate to the Server management area and select Adaptive Job Server.


3. Click Properties.
4. Add the following to the end of the Command Line Parameters:
For Windows: -javaArgs Xmx1000m,Xincgc,server,Dbobj.federation.WSTimeout=<timeout in minutes>
For Unix/Linux: -javaArgs Xmx512m,Dbobj.federation.WSTimeout=<timeout in minutes>

Limitations
Federation is a new feature in BusinessObjects Enterprise XI 3.0. It is a very flexible tool; however, certain limitations may affect its performance in production. This section highlights areas that you can modify to optimize your Federation operations.

Maximum number of objects. Each Replication Job replicates objects between BusinessObjects Enterprise deployments. It is recommended that you replicate a maximum of 32,000 objects in a single Replication Job. While a Replication Job may function with more than 32,000 objects, in this release Federation only supports replicating up to 32,000 objects.

Rights. In BusinessObjects Enterprise XI 3.0, rights are only replicated from the Origin site to the Destination site. It is recommended that user rights common to both deployments be set on the Origin site and replicated to the Destination sites using two-way replication. User rights specific to a site are administered as usual in a BusinessObjects Enterprise deployment on the site where the users reside.

Business Views and associated objects. BusinessObjects Enterprise may store Business Views, Business Elements, Data Foundations, Data Connections, and Lists of Values (LOVs). These objects are used to enhance the functionality of Crystal reports. If these objects are first created on the Destination site and then replicated to the Origin site using two-way replication, they may not work properly and their data may not appear in Crystal reports. It is recommended that you create the Business Views, Business Elements, Data Foundations, Data Connections, and LOVs on the Origin site and then replicate them to the Destination site. Make updates to the objects on the Destination site or the Origin site (rights permitting) and the changes will replicate back and forth properly.

Universe overloads. BusinessObjects Enterprise may store universe overloads.
If universe overloads are created on the Destination site and then replicated to the Origin site using two-way replication, they may not work properly.


To resolve this, first create the universe overloads on the Origin site and replicate them to the Destination site. Second, set any security on the universe overloads on the Origin site and replicate them to the Destination site.

Object cleanup. Object cleanup deletes objects that have been deleted on the other site. Object cleanup is currently only performed from the Origin site to the Destination site.

To view a log after a Replication Job


Every time you run a Replication Job, Federation automatically produces a log file, which is created on the Destination site.
1. Go to the Federation management area in the CMC.
2. Click the All Replication Jobs folder.
3. Select the desired Replication Job from the list.
4. Click Properties. The Replication Job Properties page opens.
5. Click History.
6. Click the instance time of the log file to view successful Replication Jobs, or click the Failed status to view a log file of failed Replication Jobs.
7. Select the desired instance to view the log file.
The log file is output in XML format and uses an XSL form to format the information into an HTML page for viewing. You can access the XML log from the computer that runs the Server Intelligence Agent that contains the Adaptive Job Server. You can find the log file at:
Windows: <Install Directory>\BusinessObjects Enterprise 12.0\Logging
Unix: <Install Directory>/bobje/logging
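On Unix, the newest XML log in that directory can be picked out by modification time. The LOGDIR value, scratch directory, and file names below are illustrative stand-ins so the sketch is runnable; substitute your real logging path.

```shell
# Sketch: list the most recent replication log on a Unix Destination site.
# LOGDIR stands in for <Install Directory>/bobje/logging; the files below
# exist only to make the sketch self-contained.
LOGDIR=/tmp/bobje-logging
mkdir -p "$LOGDIR"
touch "$LOGDIR/replication_old.xml"
sleep 1
touch "$LOGDIR/replication_new.xml"
ls -t "$LOGDIR"/*.xml | head -n 1   # most recently modified log first
```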

Troubleshooting error messages


This section contains error messages you may encounter in rare circumstances while using Federation. These messages will appear in the Replication Jobs logs or in the functionality area of a report. Invalid GUID. For example:
ERROR 2008-01-10T00:31:08.234Z The GUID ASXOOFyvy0FJnRcD0dZNTZg (found in property SI_PARENT_CUID on object number 1285) is not a valid GUID

This error means that you are replicating an object whose parent is not being replicated with it, and which does not already exist on the Destination site. For example, an object is being


replicated but not the folder that contains it. The parent object may not be replicated because the account replicating the objects does not have sufficient rights on the parent object.
Crystal reports showing no data on the Origin site. This error may occur if the Crystal report uses a Business View, Business Element, Data Foundation, Data Connection, or List of Values (LOV) that was originally created on the Destination site and then replicated to the Origin site.
Universe overloads are not applied correctly. This error may occur if the report uses a universe containing a universe overload that was created on the Destination site and replicated to the Origin site.
Java out of memory. For example:
java.lang.OutOfMemoryError

This may occur if your Java application server has run out of memory while processing a Replication Job. Your Replication Job may be too big, or your Java application server may not have enough memory. Either increase the available memory of your Java application server (for example, by moving Federation Web Services to a dedicated machine) or reduce the number of objects being replicated in one Replication Job. Socket timeout. For example:
Error communicating with origin site. Read timed out.

The information being sent from the Origin site to the Adaptive Job Server on the Destination site is longer than the allotted timeout. Increase the socket timeout on the Adaptive Job Server, or reduce the number of objects you are replicating in your Replication Job. Query Limit. For example:
SDK error occurred at the destination site. Not a valid query. (FWB 00025) ...Query string is larger than query length limit

This error may appear if you are replicating too many objects at one time and Federation submits a query that is too large for the CMS to handle. Objects from the Origin site will be committed to the Destination site. However, any changes that need to be committed to the Origin site will not be committed. Conflicts are resolved as specified, however manual resolution conflict flags on the object will not be set. Objects committed on the Destination site will continue to work properly. To resolve this issue, reduce the number of objects you are replicating in one Replication Job. Replication Job Times Out. For example:


Object could not be scheduled within the specified time interval.

You may receive this message if your Replication Job has timed out while it was waiting for another Replication Job to finish. This may occur if you have multiple Replication Jobs connecting to the same Origin site at the same time. The failed Replication Job will try to run again at its next scheduled time. To resolve this issue, schedule the failed Replication Job at a time that doesn't conflict with other Replication Jobs that connect to the same Origin site. Replication Limit.
SDK error occurred at the destination site. Database access error. . Internal Query Processor Error: The query processor ran out of stack space during query optimization. Error executing query in ExecWithDeadlockHandling.

You may receive this message if you exceed the number of supported objects that can be replicated at one time. To resolve this issue, reduce the number of objects you are replicating in your Replication Job and try to run it again.

Activity: One way replication (instructor led)


Instructions
1. Import Federation.biar content into the Instructor machine (GroupA).
2. Create a Replication List on the Origin (GroupA, Instructor machine) site, based on the Federation folder from Federation.biar.
3. Create a Remote Connection on the Destination (GroupB, Student machine) site pointing to the Origin (GroupA, Instructor machine).
4. Create a Replication Job on the Destination (GroupB, Student machine) site pointing to the Replication List on the Origin (GroupA, Instructor machine). Accept the defaults (one-way replication).
5. Schedule the Replication Job once and verify that the contents are propagated to the Destination (GroupB, Student machine).

Activity: One way and two way replication


Within Jade Publishing, Beijing is the central office. Reports created at the central office serve as report templates for the satellite offices. These reports must be made available so that the satellite offices can replicate them from the central office to their own sites. In some cases there will be content at a satellite office that does not exist at the central office; in these instances the content must be replicated from the satellite office to the central office. During the sizing discussion it became obvious that, when setting up servers across the offices, you might decide to use either server groups or Federation to fulfill the processing requirements for remote offices. In this activity you will set up Federation as a proof of concept
(POC) between two separate groups of four machines. Data needs to travel between two different CMS clusters. Once the data has traveled across CMS clusters, it can be processed locally in the remote location, or it can be processed by the Origin location. Decide which groups of four machines will work together during the Federation workshop. You will federate content from the GroupA system (four machines) to the GroupB system (a different four machines).

One way replication


1. Set up one-way federation. Accept the default parameters: Enable object cleanup, Origin site takes precedence, One-way replication, Normal replication.
   - Import the BIAR file to the GroupA system. You should end up with the folder directory and two files: a Crystal Report and a Web Intelligence document.
   - Create a Replication List on the GroupA system pointing to the newly added folder directory.
   - Create a Remote Connection on the GroupB system pointing back to the GroupA system. Make sure that you add a good description; this will help you track what is happening with the systems.
   - Create a Replication Job on the GroupB system pointing to the Replication List on GroupA. Accept the defaults.
   - Schedule the Replication Job once and ensure that the contents from the GroupA system are propagated to GroupB.

2. In order to test one-way federation:
   - Add a new document to the source federation directory (on the GroupA system).
   - Schedule the newly added document (on the GroupA system) to run now. Give the instance a name that makes it obvious it was scheduled on the GroupA system.
   - Schedule an existing document (on the GroupA system) to run now.
   - Modify an existing document on the GroupB system (for example, change the title of the document).
   - Schedule an existing document (on the GroupB system) to run now. Give the instance a title indicating that it was scheduled on the GroupB system.
   - Schedule the Replication Job (run once) again to run now.
3. In order to verify one-way federation:
   - Verify that the new document appears on the GroupB system.
   - Verify that the new instance of the newly added document appears on the GroupB system.
   - Verify that the new instance of the existing document appears on the GroupB system.
   - Verify that the modifications to the existing document on the GroupB system are still there and that the changes were not lost.
   - Verify that the instance scheduled on the GroupB system is still on the GroupB system, but that it did not propagate back to the GroupA system.
   - Ensure that the Replication Job ran successfully.

4. In order to verify one-way federation when the title of the document is changed in both places (conflict resolution), on GroupA and GroupB:
   - Change the title of the Web Intelligence document (inside the Web Intelligence document) on the GroupA system.
   - Change the title of the Web Intelligence document (inside the document) on the GroupB system.
   - Run the Replication Job again.
5. In order to verify the changes, ensure that the Web Intelligence document has changed on the GroupB system and now reflects the name from the GroupA system. Remember that by default, in case of conflict, the Origin wins.
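The default "Origin site takes precedence" behavior that this exercise demonstrates can be modeled in a few lines. This is a conceptual sketch of three-way conflict detection, not the actual Federation implementation; the function and its arguments are hypothetical.

```python
# Conceptual sketch of Federation's default conflict resolution:
# when the same property changed on both sites since the last sync,
# the Origin site's value takes precedence. Not the real implementation.

def resolve_conflict(origin_value, destination_value, last_synced_value,
                     origin_wins=True):
    """Return the value the Destination should hold after replication."""
    origin_changed = origin_value != last_synced_value
    dest_changed = destination_value != last_synced_value
    if origin_changed and dest_changed:           # true conflict
        return origin_value if origin_wins else destination_value
    if origin_changed:                            # only the Origin changed
        return origin_value
    return destination_value                      # keep the local edit

# Both sites renamed the Web Intelligence document: the Origin title wins.
print(resolve_conflict("Sales (Beijing)", "Sales (Tokyo)", "Sales"))
# -> Sales (Beijing)
```

Note that when only the Destination changed a property (as in step 2 of the one-way test), there is no conflict and the local edit survives, which is exactly what the verification steps above check.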

Two-way Federation
1. Set up two-way federation by modifying the existing Replication Job to convert it to two-way replication. Accept the default parameters: Enable object cleanup, Origin site takes precedence, Two-way replication, Normal replication, Replicate all objects.
2. In order to test two-way federation:
   1. Modify an existing document on the GroupB system.
   2. Schedule an existing document on the GroupB system.
   3. Run the Replication Job.
3. In order to verify two-way federation:
   1. Verify that the document modification on the GroupB system was propagated back to the GroupA system.
   2. Verify that the schedule from the GroupB system was propagated to the GroupA system.
   3. Verify that the previous schedules performed on the GroupB system were also propagated back to the GroupA system.


Review: Content Management


1. What do you need to take into consideration when planning content security?
2. In terms of BusinessObjects Enterprise, what is the definition of a 'Name Conflict'?
3. Under what circumstances would content on the source system be removed from one folder and appear in a different folder after migration?
4. What does it take to set up One-Way Federation?


Lesson summary
After completing this lesson, you are now able to:
- Design a secured content management plan
- Design an instance management plan
- Design a system auditing plan
- Manage content across multiple deployments
- Use the Import Wizard
- Manage content lifecycles using Lifecycle Manager
- Manage replication of content across multiple sites using Federation Services



Answer Key
This section contains the answers to the reviews and/or activities for the applicable lessons.



Review: Reviewing BusinessObjects Enterprise architecture, administration and security


Page 22
1. In the context of BusinessObjects Enterprise, what is the difference between a server and a service?

Answer: A server is an operating system (OS) level process hosting one or more services. For example, the CMS and the Adaptive Processing Server are servers. A server runs under a specific OS account and has its own Process ID (PID). A service is a server subsystem that provides a specific function. The service runs within the memory space of its server, under the PID of the parent container (server). For example, the Web Intelligence Scheduling and Publishing Service is a subsystem running within the Adaptive Job Server.

2. What is the Server Intelligence Agent?

Answer: The Server Intelligence Agent (SIA) is a locally run service managed by the operating system. Its task is to start, stop, and monitor locally run BusinessObjects servers. When one of the managed servers goes down unexpectedly, the SIA restarts it immediately. When you issue a command in the CMC to stop a server, the SIA stops that server.

3. What is an Access Level?

Answer: Access levels are groups of rights that users frequently need. They allow administrators to set common security levels quickly and uniformly, rather than requiring that individual rights be set one by one.

4. List the options available when setting advanced rights.

Answer: Granted, Denied, Not Specified, Apply to Object, Apply to Sub-Objects.

5. Describe the concept of rights override.

Answer: Rights override is a rights behavior in which rights that are set on child objects override the rights set on parent objects. Rights override occurs under the following circumstances:


- In general, the rights that are set on child objects override the rights that are set on parent objects.
- In general, the rights that are set on subgroups or members of groups override the rights that are set on groups.

6. What is 'scope of rights'?

Answer: Scope of rights refers to the ability to limit the extent of rights inheritance. To define the scope of a right, you decide whether the right applies to the object, its sub-objects, or both. By default, the scope of a right extends to both objects and sub-objects.
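The override and scope rules described in answers 5 and 6 can be sketched as a tiny resolver. This is a teaching model only, not the BusinessObjects Enterprise security engine; the function name and arguments are invented for illustration, though the three right states mirror the Granted/Denied/Not Specified options from answer 4.

```python
# Illustrative model of rights inheritance: a right set explicitly on a
# child object overrides whatever was inherited from the parent, and a
# right whose scope excludes sub-objects is not inherited at all.
# This is a teaching sketch, not the actual BOE security engine.

GRANTED, DENIED, NOT_SPECIFIED = "Granted", "Denied", "Not Specified"

def effective_right(parent_right, parent_applies_to_subobjects, child_right):
    if child_right != NOT_SPECIFIED:      # child setting overrides the parent
        return child_right
    if parent_applies_to_subobjects:      # inherited, because scope allows it
        return parent_right
    return NOT_SPECIFIED                  # scope limited: nothing inherited

# Parent grants the right to sub-objects, but the child explicitly denies it.
print(effective_right(GRANTED, True, DENIED))          # Denied
# Parent grants the right with scope "Apply to Object" only: not inherited.
print(effective_right(GRANTED, False, NOT_SPECIFIED))  # Not Specified
```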


Review: Identifying requirements


Page 34
1. You are the administrator. You are told that there will potentially be 100 users viewing the same report at a time. Where would you say this information belongs? A) Content Management B) Sizing C) Deployment D) High Availability

Answer: B) Sizing. However, it could also be argued that sizing information directly impacts C) Deployment.

2. List two examples of customer information that pertains to deployment requirements.

Answer: Network infrastructure; hardware information (although this could also impact sizing); there is a DMZ for external customers; LDAP authentication is used for internal customers.

3. What are the major 'requirement areas' for which you need to collect information?

Answer: A) Content Management, B) Sizing, C) Deployment, D) High Availability.


Review: Ensuring Availability of your Business Intelligence Solution


Page 72
1. What is a disaster?

Answer: A disaster is any unanticipated event that creates a defined problem. Disasters are usually considered in terms of severity.

2. True or false? Fault tolerance is also referred to as graceful degradation.

Answer: True.

3. True or false? BusinessObjects Enterprise XI 3.1 scales vertically but not horizontally.

Answer: False. BusinessObjects Enterprise XI 3.1 scales both vertically and horizontally.

4. Name two benefits of hardware load balancers.

Answer: Monitoring network availability of servers within a cluster; accommodating session state.

5. BusinessObjects Enterprise comes with a tool to ease the deployment of web applications on supported web application servers. What is it called?

Answer: wdeploy.

6. What is a CMS cluster?

Answer: A CMS cluster is a set of two or more CMS servers that function as a single CMS. The CMSs in a cluster work together to maintain the common system database.


Review: Performance, Scalability and Sizing


Page 126
1. What is scalability?

Answer: Scalability is the capacity to address additional system load by adding resources without fundamentally altering the implementation architecture or implementation design.

2. List the four main BusinessObjects Enterprise scalability goals.

Answer: Increase overall system capacity, increase scheduled reporting capacity, increase report viewing capacity, and improve web response speeds.

3. What are the four steps of the sizing process?

Answer:
1. Determine the system load.
2. Determine the number of services required.
3. Determine the configuration of machines.
4. Perform system testing and tuning.

4. List the three estimates necessary when determining system load.

Answer:
- Total number of potential users who will have access to the system
- Total number of concurrent active users that will access the system
- Total number of simultaneous requests that will be made
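As a worked example, the three estimates above can be combined into a rough load calculation. The 10% ratios used here are commonly quoted rules of thumb, not values mandated by the product; always validate them against your own usage patterns before committing to hardware.

```python
# Back-of-the-envelope sizing sketch using the three system-load estimates.
# The 10% ratios are commonly cited rules of thumb, not fixed BOE values;
# measure real usage patterns before finalizing a sizing.

def estimate_load(total_users, concurrency_ratio=0.10, request_ratio=0.10):
    """Derive concurrent active users and simultaneous requests."""
    concurrent_active = int(total_users * concurrency_ratio)
    simultaneous_requests = int(concurrent_active * request_ratio)
    return concurrent_active, simultaneous_requests

active, requests = estimate_load(5000)
print(active)    # 500 concurrent active users
print(requests)  # 50 simultaneous requests
```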


Review: Deploying a system


Page 167
1. What are the necessary steps to configure a web server (e.g. Apache) to redirect to a locally installed Tomcat web application server?

Answer: Build a bridge between Apache and Tomcat.

On the Apache server:
- Modify httpd.conf, located in C:\Program Files\Apache Software Foundation\Apache2.2\conf.
- Open the file in a text editor and add the following line at the end of the file: Include conf/mod_jk.conf
- This modification tells Apache to include a configuration file defining the mod_jk module as well as the workers.properties file. mod_jk is a module that allows Apache to recognize servlet requests and then forward these requests to Tomcat.

On the Tomcat server:
- You must now update the Tomcat web application server to listen on a specific port for requests forwarded from the Apache web server.
- Open server.xml in a text editor. This file is located in C:\Program Files\BusinessObjects\Tomcat55\conf.
- Uncomment the following line in server.xml by removing the enclosing <!-- and -->:
  <Connector enableLookups="false" port="8009" protocol="AJP/1.3" redirectPort="8443"/>

Note: This allows Tomcat to listen on port 8009 for servlet requests using the AJP13 protocol.
Note: Ensure that the strings are not concatenated on the above line. If the strings are concatenated, it can cause errors.

2. What are the steps that you would perform to cluster two CMS servers?

Answer:
- Create a separate SIA on the joining CMS machine.
- Using the CMC, add a CMS service to the newly created SIA.
- Using the CCM, configure the CMS to point to the source CMS system database.
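As a quick sanity check after editing server.xml, a short script can confirm that the AJP connector is no longer wrapped in an XML comment. This helper is hypothetical, not an SAP-supplied tool; the sample strings below stand in for the real file contents.

```python
# Hypothetical helper: report whether an AJP/1.3 Connector appears in
# server.xml outside of XML comments, i.e. whether the uncommenting step
# above was actually completed. The sample strings are illustrative.
import re

def ajp_connector_active(server_xml_text):
    """True if an AJP/1.3 Connector appears outside of XML comments."""
    uncommented = re.sub(r"<!--.*?-->", "", server_xml_text, flags=re.DOTALL)
    return 'protocol="AJP/1.3"' in uncommented

still_commented = '<!-- <Connector port="8009" protocol="AJP/1.3"/> -->'
uncommented = '<Connector enableLookups="false" port="8009" protocol="AJP/1.3" redirectPort="8443"/>'
print(ajp_connector_active(still_commented))  # False
print(ajp_connector_active(uncommented))      # True
```

In practice you would read the text from the real server.xml path rather than a literal string.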


3. What are the steps that you would perform to configure redundancy for the Input and Output FRS?

Answer:
- Create a separate SIA on the Input1 FRS machine.
- Configure the SIA to run under a network account (not the System account) so that it can reach the network storage location (e.g. \\machine\FileStore\Input).
- Using the CMC, add an Input FRS service to the newly created SIA.
- Using the CMC, configure the Input FRS root location to point to the network location (e.g. \\machine\FileStore\Input).
- Perform very similar steps on the Input2 machine.
- Perform very similar steps on the Output1 and Output2 machines (adjusting directory paths accordingly).

4. Why is it sometimes necessary to configure the SIA to run under a domain account rather than the default System account?

Answer: BOE servers contained within an SIA run under the account under which the SIA is running. If a BOE server contained in the SIA needs network resources, for example to read from or write to a network location, it must run under a network domain account rather than the default System account. Examples: the Input FRS and Output FRS pointing to file storage on the network; the Web Intelligence Job Server or Crystal Reports Job Server sending instances in PDF format to a network drive at the end of a schedule.


Review: Content Management


Page 238
1. What do you need to take into consideration when planning content security?

Answer:
- Folder content structure
- Group/user structure
- Which access levels to work with
- How to secure content: where to apply the various access levels, and for which groups

2. In terms of BusinessObjects Enterprise, what is the definition of a 'Name Conflict'?

Answer: Same name, different CUID.

3. Under what circumstances would content on the source system be removed from one folder and appear in a different folder after migration?

Answer: This case is covered in the CUID workshop. After the initial migration, create a new folder on the target system and move the object to the newly created folder. Then run the migration again, matching by CUID and renaming in case of a name clash. You will see that the object in the original folder is overwritten with the incoming object. Since you cannot have two objects with the same CUID in the system, the object that was moved on the target system disappears. You now have only one copy of that document, in the original folder location.

4. What does it take to set up One-Way Federation?

Answer: Create a Replication List on the Origin. Create a Remote Connection on the Destination system pointing to the Origin system. Create a Replication Job under the Remote Connection. Finally, schedule the Replication Job and test the results.
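The behavior in answer 3 follows from matching on CUID: the system holds at most one object per CUID, so the incoming object replaces the moved copy wherever it currently lives. The sketch below is a conceptual model of that merge; the data structures and function are illustrative, not the Import Wizard's actual code.

```python
# Conceptual model of "match by CUID" during migration: the target system
# can hold only one object per CUID, so an incoming object overwrites the
# existing one regardless of which folder it was moved to.
# Illustrative sketch only, not the Import Wizard implementation.

def import_by_cuid(target_objects, incoming):
    """Overwrite any object with the same CUID; otherwise add the object."""
    merged = {obj["cuid"]: obj for obj in target_objects}
    merged[incoming["cuid"]] = incoming   # same CUID: replaced in place
    return list(merged.values())

# The report was moved to /New Folder on the target, then re-imported.
target = [{"cuid": "AX7pQ", "name": "Sales", "folder": "/New Folder"}]
incoming = {"cuid": "AX7pQ", "name": "Sales", "folder": "/Original Folder"}
result = import_by_cuid(target, incoming)
print(len(result))          # 1 -- only one copy remains
print(result[0]["folder"])  # /Original Folder
```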
