CONTRIBUTOR
ADMINISTRATION GUIDE
Product Information
This document applies to Cognos® 8 Planning Version 8.3 and may also apply to subsequent releases. To check for newer versions of this
document, visit the Cognos Global Customer Services Web site (http://support.cognos.com).
Copyright
Copyright © 2007 Cognos Incorporated.
Portions of Cognos® software products are protected by one or more of the following U.S. Patents: 6,609,123 B1; 6,611,838 B1; 6,662,188
B1; 6,728,697 B2; 6,741,982 B2; 6,763,520 B1; 6,768,995 B2; 6,782,378 B2; 6,847,973 B2; 6,907,428 B2; 6,853,375 B2; 6,986,135 B2;
6,995,768 B2; 7,062,479 B2; 7,072,822 B2; 7,111,007 B2; 7,130,822 B1; 7,155,398 B2; 7,171,425 B2; 7,185,016 B1; 7,213,199 B2.
Cognos and the Cognos logo are trademarks of Cognos Incorporated in the United States and/or other countries. All other names are trademarks
or registered trademarks of their respective companies.
While every attempt has been made to ensure that the information in this document is accurate and complete, some typographical errors or
technical inaccuracies may exist. Cognos does not accept responsibility for any kind of loss resulting from the use of information contained
in this document.
This document shows the publication date. The information contained in this document is subject to change without notice. Any improvements
or changes to either the product or the document will be documented in subsequent editions.
U.S. Government Restricted Rights. The software and accompanying materials are provided with Restricted Rights. Use, duplication, or
disclosure by the Government is subject to the restrictions in subparagraph (C)(1)(ii) of the Rights in Technical Data and Computer Software
clause at DFARS 252.227-7013, or subparagraphs (C)(1) and (2) of the Commercial Computer Software - Restricted Rights at 48 CFR 52.227-19,
as applicable. The Contractor is Cognos Corporation, 15 Wayside Road, Burlington, MA 01803.
This software/documentation contains proprietary information of Cognos Incorporated. All rights are reserved. Reverse engineering of this
software is prohibited. No part of this software/documentation may be copied, photocopied, reproduced, stored in a retrieval system, transmitted
in any form or by any means, or translated into another language without the prior written consent of Cognos Incorporated.
Table of Contents
Introduction 13
What’s New? 17
New Features in Version 8.3 17
Extended Language support 17
Microsoft Vista Compliance 17
Microsoft Excel 2007 17
Select Folders in Cognos Connection 17
Select a Framework Manager Package for an Administration Link 18
Chapter 2: Security 27
Cognos Namespace 27
Authentication Providers 28
Deleting or Restoring Unconfigured Namespaces 29
Users, Groups, and Roles 29
Users 30
Groups and Roles 30
Reconciliation 241
The Production Application 241
Model Definition 241
Data Block 241
Production Tasks 242
Cut-down Models and Multiple Languages 242
The Development Application 243
Development Model Definition 243
Import Data Blocks 243
Run Go to Production 244
Go to Production Options Window 244
Show Changes Window 245
Model Changes Window 246
Import Data Details Tab 250
Invalid Owners and Editors Tab 250
e.List Items to be Reconciled Tab 252
Cut-down Models Window 252
Finish Window 252
Chapter 17: Using Cognos 8 Planning - Contributor With Other Cognos Products 301
Client and Admin Extensions 302
Client Extensions 302
Admin Extensions 303
Integrating with Cognos Business Intelligence Products 304
Using Cognos 8 BI with Contributor Unpublished (Real-Time) Data 304
The Generate Framework Manager Model Admin Extension 308
Generate Transformer Model 311
Excel and Contributor 313
Contributor for Excel 313
Print to Excel 315
Export for Excel 315
Financial Planning with Cognos Performance Applications and Cognos 8 Planning 315
Chapter 18: Example of Using Cognos 8 Planning with Other Cognos Products 317
Download and Deploy the Sample 317
Run a Contributor Macro to Import Data 319
Create and Publish a Framework Manager Package 319
Create a Report 321
Create an Event Studio Agent 326
Glossary 397
Index 405
Introduction
This document is intended for use with the Cognos 8 Planning - Contributor Administration Console.
This guide describes how to use the Contributor Administration Console to create and manage
Contributor applications.
Cognos 8 Planning provides the ability to plan, budget, and forecast in a collaborative, secure
manner. The major components are Analyst and Contributor.
Audience
To use this guide, you should have an understanding of Cognos 8 Planning - Analyst. Some
knowledge of security and database systems would also be helpful.
Related Documentation
Our documentation includes user guides, getting started guides, new features guides, readmes, and
other materials to meet the needs of our varied audience. The following documents contain related
information and may be referred to in this document.
Note: For online users of this document, a Web page such as "The page cannot be found" may appear when you click individual links in the following table. Documents are made available for your particular installation and translation configuration. If a link is unavailable, you can access the document on the Cognos Global Customer Services Web site (http://support.cognos.com). Logon credentials are available either from your administrator or by request from support.america@cognos.com.
Contributor Browser User Guide: Using the Cognos 8 Planning - Contributor Web client
Contributor for Microsoft Excel® User Guide: Using Cognos 8 Planning - Contributor for Microsoft Excel®
Cognos Connection User Guide: Using Cognos Connection to publish, find, manage, organize, and view Cognos content, such as scorecards, reports, analyses, and agents
Cognos 8 Administration and Security Guide: Managing servers, security, reports, and portal services; setting up Cognos samples; troubleshooting; and customizing Cognos 8
Framework Manager User Guide: Creating and publishing models using Framework Manager
Guidelines for Modeling Metadata: Recommendations for modeling metadata to use in business reporting and analysis
Event Studio User Guide: Creating and managing agents that monitor data and perform tasks when the data meets predefined thresholds
Finding Information
To find the most current product documentation, including all localized documentation, access the
Cognos Global Customer Services Web site (http://support.cognos.com). Click the Documentation
link to access documentation guides. Click the Knowledge Base link to access all documentation,
technical papers, and multimedia materials.
Product documentation is available in online help from the Help menu or button in Cognos products.
You can also download documentation in PDF format from the Cognos Global Customer Services
Web site.
You can also read PDF versions of the product readme files and installation guides directly from
Cognos product CDs.
Getting Help
For more information about using this product or for technical assistance, visit the Cognos Global
Customer Services Web site (http://support.cognos.com). This site provides product information,
services, user forums, and a knowledge base of documentation and multimedia materials. To create
a case, contact a support person, or to provide feedback, click the Contact Us link. For information
about education and training, click the Training link.
What’s New?
This section contains a list of new features for this release. It will help you plan your upgrade and
application deployment strategies and the training requirements for your users.
For information about upgrading, see the Cognos 8 Planning Installation and Configuration Guide.
To review an up-to-date list of environments supported by Cognos products, such as operating
systems, patches, browsers, Web servers, directory servers, database servers, and application servers,
visit the Cognos Global Customer Services Web site (http://support.cognos.com).
Chapter 1: Cognos 8 Planning - Contributor
Cognos 8 Planning - Contributor is a Web-based planning platform that can involve thousands of people in the planning process, collecting data from managers and others in multiple locations. Complex calculations are performed on the Web client, showing totals as soon as data is entered and preventing unnecessary traffic on the server during busy times. Information is then stored in a data repository, providing a single, accurate pool of planning data.
In addition, users can use Contributor for Excel to view and edit Contributor data using Excel.
Administrators use the Contributor Administration Console to create and configure Contributor
applications, manage access settings, distribute Cognos 8 Planning - Analyst business plans, and
configure the user's view of the business plan.
Cubes
A cube is similar to a spreadsheet. A cube always contains rows and columns and usually at least
one other page, making it multidimensional. It is used to collect data. Cells in cubes can contain
entered data or calculations.
Dimensions
The rows, columns, and pages of a cube are created from dimensions. Dimensions are lists of related
items, such as Profit and Loss items, products, customers, cost centers, and months. Dimensions
also contain all the calculations. One dimension can be used by many cubes.
e.Lists
The structure of an application is based on an e.List. An e.List is a kind of dimension that contains
a hierarchical structure that typically reflects the structure of the organization. For example, it may
include cost centers and profit centers. There is one e.List per application, and the e.List item is
assigned to a user, group, or role. There are two types of user: planners and reviewers. A planner
enters and submits data to be reviewed by a reviewer. There may be several layers of reviewer
depending on the structure of the e.List.
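The structure described above can be pictured as a small tree, with planner items at the leaves and layers of reviewer items above them. The following is an illustrative sketch only, assuming a simple in-memory representation; the class and item names are hypothetical and are not part of Contributor.

```python
# Illustrative sketch of an e.List hierarchy (hypothetical names, not a
# Contributor API). Leaf items are contribution (planner) items; items
# with children are review (reviewer) items.

class EListItem:
    def __init__(self, name, owner=None, children=None):
        self.name = name          # e.g. a cost center or profit center
        self.owner = owner        # user, group, or role assigned to the item
        self.children = children or []

    def is_planner_item(self):
        # An item with no children is entered by a planner;
        # an item with children is reviewed.
        return not self.children

def review_layers(item, depth=0):
    """Count the layers of review above the deepest planner item."""
    if item.is_planner_item():
        return depth
    return max(review_layers(child, depth + 1) for child in item.children)

# A small example organization: one reviewer over two cost centers.
total = EListItem("Total Company", owner="CFO", children=[
    EListItem("Cost Center 100", owner="alice"),
    EListItem("Cost Center 200", owner="bob"),
])

print(review_layers(total))  # prints 1: one layer of review above the planners
```

A deeper organization simply adds intermediate review items, giving several layers of reviewer, as the text describes.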
D-Links
Cubes are linked by a series of D-Links in Analyst. A D-Link copies information in and out of
cubes, and sometimes to and from ASCII or text files.
Multiple Administrators
You can secure individual elements of the Administration Console and therefore allow multiple
administrators to access different parts of the Contributor application at the same time. For example,
you can give rights to a specific user to create and configure applications on a specific datastore.
You can choose to cascade rights to all applications on a datastore, or restrict rights to specific
applications. Contributor administrators have access only to those applications and operations that
they have rights for.
Moving Data
Administrators can use administration links to move data quickly and easily between applications
without having to publish, reducing the need for large applications. You can have several small,
focused applications. Smaller e.List structures provide quicker reconciliation times. Also, the need
for cut-down models and access tables is reduced.
Administration links give you the following benefits:
System Links
Administrators can set up links so that Web client users and Contributor for Excel users can move
data between Contributor cubes in the same or different applications. System links are run from
the target application.
Publishing Data
Three publish layouts are available.
The table-only layout is designed to give users greater flexibility in reporting on Cognos 8 Planning
data, and for use as a data source for other applications. It is used with the Generate Framework
Manager Model extension, and the Generate Transformer Model extension.
The incremental publish layout publishes only e.List items that contain changed data. Users can
schedule an incremental publish using a macro or through Cognos Connection and Event Studio.
You can achieve near real-time publishing by closely scheduling incremental publishes.
The view layout, as supported in Contributor and Analyst version 7.2, is compatible with previous
Cognos Planning data solutions.
Data is always published to a separate publish datastore.
❑ configuring rights
❑ running jobs
Assigning Rights
Rights determine whether users can view, save, submit data, and so on. For example, you may want to allow a planner to view data but not save or submit it, or to make and save changes but not submit them.
The rights a user can have are also affected by the view and review depth, set in the e.List window,
and the Reviewer edit setting in the Application Options window. A user can be directly assigned
rights, or inherited rights.
You can set up rights directly in the Administration Console after you create the Contributor
application, or you can create and maintain the rights in an external system, and import them.
For more information, see "The e.List" (p. 89).
Configuring the application establishes how it appears and behaves in the Web browser. You can restrict what users can see and do by using saved selections and access tables. For example, you may want to hide salary details from some users.
Running Jobs
A job is an administration task that runs on job servers and is monitored by the Administration
Console. You can start the process and monitor its progress. All jobs can be run while the application
is online.
Using the Monitoring Console (p. 59) in the Administration tree, you can manage and monitor the
progress of jobs in Contributor applications.
An example of a job is reconcile, which ensures that the structure of the e.List item data is up to
date, if required. This job is created and runs after Go to Production runs.
For more information, see "Jobs" (p. 47).
The Administrator
Administration can be divided into separate functions depending on your business needs. A Planning
Rights Administrator assigns administrative access to Contributor applications and to functions
within applications. Administrators have access only to those applications and operations that they
have rights for. In addition, multiple administrators can access different parts of the Contributor
application at the same time.
You can restrict administrative access on a per-application basis so that, for example, someone who can only perform database maintenance in application A can create applications in application B.
Administrators see only those applications that they have rights to, and only those functions within
those applications.
Depending on the rights assigned to them, administrators can
● assign functional rights to other administrators
● add job servers and job server clusters to the Planning Store
The Planner
Planners are responsible for entering data into the Contributor application using the Web browser,
or Contributor for Excel. This data is referred to as a contribution. Planners edit data only in the
selection assigned to them by the administrator. They cannot make structural changes to the
application. After data is entered, the planner can either save or submit the data. Submitted data
is forwarded to a reviewer and cannot be edited further by the planner unless the reviewer rejects
it.
A planner can be responsible for more than one e.List item and can view each e.List item individually
or view all e.List items in a single view, if configured by the administrator.
The Reviewer
Reviewers are responsible for approving contributions submitted by one or more planners. Reviewers
can view data and see the status of all submissions they are responsible for managing at any stage
in the planning and review cycle. Reviewers can edit contributions if they have appropriate rights.
After data is submitted, the reviewer has the following options:
● reject the data if they are not satisfied with it
Typically, a reviewer sends an email to the planner to give the reason for rejection.
After the reviewer takes over editorial control of a contribution, the planner is no longer the owner
of the contribution. The reviewer has the right to submit it.
Any user can be both a planner and a reviewer for the same e.List item. When users have both roles,
they can view their review items and contribution items in the same Web page.
In addition, reviewers can annotate any changes they make to a Contribution e.List item (p. 287).
The Toolbar
The following functions are available on the Administration Console toolbar.
Set Online: Makes the application visible in a Web browser. Set Online can be automated.
Set Offline: Prevents the application from being accessed in a Web browser. Set Offline can be automated.
Chapter 2: Security
Cognos 8 security is designed to meet the need for security in various situations. You can use it in everything from a proof-of-concept application, where security is rarely enabled, to a large-scale enterprise deployment.
The security model can be easily integrated with the existing security infrastructure in your
organization. It is built on top of one or more third-party authentication providers. You use the
providers to define and maintain users, groups, and roles, and to control the authentication process.
Each authentication provider known to Cognos 8 is referred to as a namespace.
In addition to the namespaces that represent the third-party authentication providers, Cognos 8
has its own namespace named Cognos. The Cognos namespace makes it easier to manage security
policies and deploy applications.
For more information, see the Cognos 8 Security and Administration Guide.
Cognos Namespace
The Cognos namespace is the Cognos 8 built-in namespace. It contains the Cognos objects, such
as groups, roles, data sources, distribution lists, and contacts.
During the content store initialization, built-in and predefined security entries are created in this
namespace. You must modify the initial security settings for those entries and for the Cognos
namespace immediately after installing and configuring Cognos 8.
You can rename the Cognos namespace using Cognos Configuration, but you cannot delete it. The
namespace is always active.
When you set security in Cognos 8, you may want to use the Cognos namespace to create groups
and roles that are specific to Cognos 8. In this namespace, you can also create security policies that
indirectly reference the third-party security entries so that Cognos 8 can be more easily deployed
from one installation to another.
The Cognos namespace always exists in Cognos 8, but the use of the Cognos groups and roles it
contains is optional. The groups and roles created in the Cognos namespace repackage the users,
groups, and roles that exist in the authentication providers to optimize their use in the Cognos 8
environment. For example, in the Cognos namespace, you can create a group named HR Managers
and add to it specific users and groups from your corporate IT and HR organizations defined in
your authentication provider. Later, you can set access permissions for the HR Managers group to
entries in Cognos 8.
Authentication Providers
User authentication in Cognos 8 is managed by third-party authentication providers. Authentication providers define the users, groups, and roles used for authentication. User names, IDs, passwords, regional settings, and personal preferences are some examples of the information stored in the providers. If you set up authentication for Cognos 8, users must provide valid credentials, such as a user ID and password, at logon time. In the Cognos 8 environment, authentication providers are also referred to as namespaces, and they are represented by namespace entries in the user interface.
Cognos 8 does not replicate the users, groups, and roles defined in your authentication provider.
However, you can reference them in Cognos 8 when you set access permissions to reports and other
content. They can also become members of Cognos groups and roles.
The following authentication providers are supported in this release:
● Active Directory Server
● Cognos Series 7
● eTrust SiteMinder
● LDAP
● NTLM
● SAP
You configure authentication providers using Cognos Configuration. For more information, see
the Installation and Configuration Guide.
Multiple Namespaces
If multiple namespaces are configured for your system, at the start of a session you must select one
namespace that you want to use. However, this does not prevent you from logging on to other
namespaces later in the session. For example, if you set access permissions, you may want to reference
entries from different namespaces. To log on to a different namespace, you do not have to log out
of the namespace you are currently using. You can be logged on to multiple namespaces
simultaneously.
Your primary logon is the namespace and the credentials that you use to log on at the beginning
of the session. The namespaces that you log on to later in the session and the credentials that you
use become your secondary logons.
When you delete one of the namespaces, you can log on using another namespace. If you delete all
namespaces except for the Cognos namespace, you are not prompted to log on. If anonymous access
is enabled, you are automatically logged on as an anonymous user. If anonymous access is not
enabled, you cannot access the Cognos Connection logon page. In this situation, use Cognos
Configuration to enable anonymous access.
Steps
1. In Cognos Connection, in the upper-right corner, click Launch, Cognos Administration.
If the namespace you want to delete does not have a check mark in the Active column, it is
inactive and can be deleted.
The namespace is permanently deleted. To use the namespace again in Cognos 8, you must add it
using Cognos Configuration.
You can use the users, groups, and roles defined in third-party authentication providers, as well as the groups and roles created in Cognos 8. The groups and roles created in Cognos 8 are referred to as Cognos groups and Cognos roles.
Users
A user entry is created and maintained in a third-party authentication provider to uniquely identify each user.
Tip: To ensure that a user or group can run reports from a package, but not open the package in
a Cognos studio, grant the user or group execute and traverse permissions on the package.
Groups and Roles
Groups and roles represent collections of users that perform similar functions or have a similar status in an organization. Examples of groups are Employees, Developers, or Sales Personnel.
Members of groups can be users and other groups. When users log on, they cannot select a group
they want to use for a session. They always log on with all the permissions associated with the
groups to which they belong.
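Conceptually, a user's session permissions are the union of the permissions of every group the user belongs to, whether directly or through nested group membership. The following sketch illustrates that idea only; the function and data names are assumptions for illustration, not Cognos APIs.

```python
# Illustrative sketch: effective permissions as the union of all group
# memberships, including nested groups (hypothetical names, not Cognos APIs).

def effective_permissions(user, group_members, group_permissions):
    """Collect permissions from every group the user belongs to,
    directly or through nested group membership."""
    granted = set()
    # Start with the groups that list the user as a direct member.
    stack = [g for g, members in group_members.items() if user in members]
    seen = set()
    while stack:
        group = stack.pop()
        if group in seen:
            continue
        seen.add(group)
        granted |= group_permissions.get(group, set())
        # A group can itself be a member of another group.
        stack += [g for g, members in group_members.items() if group in members]
    return granted

group_members = {
    "Developers": {"scarter"},
    "Employees": {"Developers"},  # nested: Developers is a member of Employees
}
group_permissions = {
    "Developers": {"execute"},
    "Employees": {"read", "traverse"},
}

print(sorted(effective_permissions("scarter", group_members, group_permissions)))
# prints ['execute', 'read', 'traverse']
```

This mirrors the behavior described above: the user cannot choose a group at logon; all memberships contribute to the session.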
Roles in Cognos 8 have a function similar to groups. Members of roles can be users, groups, and other roles.
The following diagram shows the structure of groups and roles.
● you want to avoid cluttering your organization security systems with information used only in
Cognos 8
Note: You do not have to use these roles; they can be deleted or renamed. If you decide not to use the predefined roles, you must assign the access permissions and capabilities required by Cognos 8 Planning to other groups, roles, or users.
Capabilities
Capabilities are secured functions and features. If you are an administrator, you set access to the
secured functions and features by granting execute permissions for specified users, groups, or roles.
Users must have at least one capability to be accepted through the Cognos Application Firewall.
The Planning Contributor Users role has the Planning Contributor capability by default. If you do
not want to use this role, you can assign the capability to any groups, users, or roles that you create
to replace this role by giving execute permissions to the appropriate members.
The Planning Rights Administrators role has the Planning Rights Administration capability by
default. To assign this capability to groups, users, or roles, you must give execute permissions to
the appropriate members. You must also give members permissions to traverse the Administration
folder.
Tip: You change capabilities through Cognos Administration, by clicking the Security tab. For more information, see "Securing Functions and Features" in the Administration and Security Guide.
● add Contributor application members and Analyst users to the Planning Contributor Users
role
Note: We recommend that you add groups of users as defined in your authentication provider to the roles in Cognos 8 Planning, rather than individual users. This way, changes in group membership are reflected immediately in the roles without requiring changes in Cognos 8.
Note: If you are using the Generate Transformer Model extension, you must add the Cognos Series
7 namespace. Local authentication export (LAE) files cannot be used.
2. In the Explorer window, under Security, right-click Authentication, and then click New resource, Namespace.
3. In the Properties window, ensure that Allow Anonymous Access is set to False.
4. In the Type list, click the appropriate namespace and then click OK.
The new authentication provider resource appears in the Explorer window, under the
Authentication component.
5. In the Properties window, for the Namespace ID property, specify a unique identifier for the
namespace.
6. In the Properties window for Authentication, set the value of the Allow session information to be shared between client applications property to True.
This enables you to have single signon between multiple clients on the same computer. Note that you cannot have single signon between a Windows application and a Web client application, for example, Contributor administration and Cognos 8.
7. Specify the values for all other required properties to ensure that Cognos 8 components can
locate and use your existing authentication provider.
8. Test the connection to a new namespace. In the Explorer window, under Authentication,
right-click the new authentication resource and click Test.
Cognos 8 loads, initializes, and configures the provider libraries for the namespace.
For specific information about configuring each kind of authentication provider, see the Cognos 8
Planning Installation and Configuration Guide.
Steps
1. In Cognos Connection, in the upper-right corner, click Cognos Administration.
4. In the Actions column, click the properties button for the Planning Rights Administrators or
Planning Contributor Users role.
● To search for entries, click the appropriate namespace and then click Search. In the Search
string box, type the phrase you want to search for. For search options, click Edit. Find and
click the entry you want.
● To type the name of entries you want to add, click Type and type the names of groups,
roles, or users using the following format, where a semicolon (;) separates each entry:
namespace/group_name;namespace/role_name;namespace/user_name;
Here is an example:
Cognos/Authors;LDAP/scarter;
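As a sketch of how the entry format above breaks down, the following hypothetical helper (an illustration only, not a Cognos utility) splits such a string into namespace and name pairs:

```python
# Illustrative sketch: splitting the semicolon-separated
# namespace/name entry format shown above.

def parse_entries(text):
    """Split 'namespace/name;namespace/name;' into (namespace, name) pairs."""
    pairs = []
    for entry in text.split(";"):
        entry = entry.strip()
        if not entry:
            continue  # the format allows a trailing semicolon
        namespace, _, name = entry.partition("/")
        if not namespace or not name:
            raise ValueError(f"malformed entry: {entry!r}")
        pairs.append((namespace, name))
    return pairs

print(parse_entries("Cognos/Authors;LDAP/scarter;"))
# prints [('Cognos', 'Authors'), ('LDAP', 'scarter')]
```

Each entry names its namespace explicitly, which is why the same user or group name can be referenced unambiguously from different namespaces.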
7. Click the right-arrow button, and when the entries you want appear in the Selected entries box,
click OK.
Tip: To remove entries from the Selected entries list, select them and click Remove. To select
all entries in a list, click the check box in the upper-left corner of the list. To make the user
entries visible, click Show users in the list.
8. Click OK.
For more information, see the Cognos 8 Administration and Security Guide.
Data Manager (Framework Manager): Only members of the Data Manager Authors group can import from a Framework Manager data source. You must have a member of the Data Manager Authors group perform this task.
For more information about the Everyone group, and System Administrators role, see "Initial
Security" in the Administration and Security Guide.
● system links
● translated applications
Note: Members of roles can be users, groups, and other roles. Groups can contain users and other
groups, but not roles.
Cascade Rights
If you set rights to operations for the Cognos 8 Planning environment, the datastore server, or the
job server, you are prompted to cascade the rights to the lower levels. Regardless of your response,
when you grant rights to a datastore server or job server cluster, the user automatically inherits the
same rights for any applications, publish containers, or job servers that you subsequently add.
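Conceptually, cascaded rights behave like inheritance down a tree: a right granted at a parent level (such as a datastore server) applies to its children, including children added after the grant. The following is an illustrative sketch only, under that assumption; the class and names are hypothetical, not Contributor APIs.

```python
# Illustrative sketch of cascaded rights (hypothetical names, not a
# Contributor API): a right granted at a parent level is inherited by
# child levels, even children added later.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.rights = set()  # rights granted directly at this level

    def effective_rights(self):
        # A node's effective rights are its own plus everything
        # inherited from its ancestors.
        inherited = self.parent.effective_rights() if self.parent else set()
        return inherited | self.rights

datastore = Node("Sales datastore")
datastore.rights.add("create application")

# An application added after the grant still inherits the cascaded right.
app = Node("Expenses application", parent=datastore)
print("create application" in app.effective_rights())  # prints True
```

This matches the behavior described above: granting rights at the datastore server or job server cluster level automatically covers applications, publish containers, or job servers added later.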
Tips: To always cascade rights without being prompted, on the Access Rights window, click Cascade
rights selection.
Rights that are cascaded are indicated by blue text.
Operations are the functions that can be performed in the Contributor Administration Console. Initially, a Planning Rights Administrator grants rights so that other Contributor administrators can perform these operations.
Session Details Access: You can grant the right to view who has write access to the development model.

Links Access: You can secure the ability to create, edit, execute, delete, import, and export administration links. You can also secure previously created administration links (administration link instances). You secure administration link instances individually. To locate them, scroll to the bottom of the Operations tree and look for Link followed by the link name. You can grant the right to select an application and cube as the source and target of a system or administration link.
Datastores Access: You can grant the right to add or remove a datastore server.
● create an application

Publish Container Access: You can grant rights for administering publish containers:
● link to a publish container
Development Access: You can grant the right to perform the following operations in a development application:
● configure the Web client: navigation, orientation, options, planner-only cubes, and Contributor help
● import data
● synchronize

Production Access: You can grant the right to perform the following operations on the production version of the application:
● publish data
● delete annotations
● preview data (this option is important if you want to hide sensitive data)
● manage extensions

Job Server Clusters Access: You can grant the right to add or remove a job server cluster.

Job Server Access: You can grant the right to update job server properties for the Planning environment. Go to a job server cluster:
● to enable and disable job processing of an application
Steps
1. Click Access Rights.
2. Click Add.
3. Click the appropriate Namespace and select the user, group, or role.
Tip: You can filter operations by datastore, application, publish container, job server cluster, and job server.
7. Click Save.
Access rights apply as soon as the changes are saved.
● Prepare Import
● Go To Production
You can secure the rights to create, edit, execute, delete, and transfer macros, and the ability to
create individual macro steps.
By default, when a user with the Create Macro right adds a new macro instance, they are granted all rights to it: edit, execute, delete, and transfer. For other users, access to that instance is determined by their rights.
After a link or macro is created, only a Planning Rights Administrator can change the instance
rights.
For example, consider a user who is granted create, edit, and execute macro access rights. By default,
this user has all access rights to macros they create. However, they only have edit and execute rights
to those created by other users. A Planning Rights Administrator can subsequently grant or revoke
any rights to those macros for any user.
The Execute Command Line macro step is secured by default. This is to minimize the risk of
unauthorized access to resources.
You can also secure the rights to edit, execute, delete and transfer previously created macro instances,
for example, "Import Expenses".
● create macro step rights for all the macro steps that you are transferring
Authentication of Macros
Authentication is based on the security context under which the macros are run. For example, if
the macro contains a Go to Production step, the user specified in the authentication details when
you create a macro must have the rights to run Go to Production. This is separate from the access
rights used to secure the management of macros.
Steps
1. In Cognos Connection, in the upper-right corner, click Cognos Administration.
3. Click the actions button next to the Administration capability and click the Set Properties
button. On the Permissions tab, grant the traverse permission to the required users, groups, and
roles and click OK.
4. Click the Administrator capability to show additional functions. Click the actions button next
to Run activities and schedules and click Set Properties. On the Permissions tab, grant execute
and traverse permissions to the required users, groups, and roles and click OK.
6. Click the Set Properties button on the Administration page. On the Permissions tab, grant
read, execute, and traverse permissions to the required users, groups, and roles.
If required, grant write permission if you want the user, group, or role to be able to modify the
contents of this folder. Grant set policy permission if you want the user, group, or role to be
able to change security permissions on this folder.
7. Click OK.
Members of the required users, groups, or roles can now schedule and run Contributor
macros in Cognos Connection.
● Reconcile
Steps
1. In the System Settings pane, click the Scheduler Credentials tab, and click Update.
2. Click Logon.
3. Enter the User ID and password to be used as the scheduler credentials and click OK.
4. If your logon is successful, the Logon button is disabled, and the Logon as button is enabled.
After you create and configure Contributor applications, the next step is to configure user access.
You do this by assigning users, roles, or groups to e.List items in the Rights window in the
Contributor Administration Console, either by importing a file, or by manually inserting rights.
Chapter 3: Configuring the Administration Console
When you start the Cognos 8 Planning - Contributor Administration Console for the first time,
you must configure it before you can use it.
Before you can configure the Cognos 8 Planning - Contributor Administration Console, you must
be a member of the Planning rights administration capability, which, by default, is granted to
members of the Planning rights administration role. The Planning rights administration capability
can be granted to any user, group, or role (p. 31).
When you have done these tasks, applications can be created, imported, or upgraded (p. 58).
You can either create the tables by selecting Create and populate tables now, or you can create
the tables using a script that is then run by a Database Administrator (DBA). Use this option if
you do not have access rights to create tables in the database. To choose the script option, select
Generate table scripts and data files and enter the location where the script is to be saved. The
script is created automatically, and the DBA can run it to create the tables.
If the filesys.ini was not specified during installation, you must specify the path when you create
Planning tables. You change this setting if the default path is not used.
If you want to work with a FileSys.ini other than the default, and its associated properties and
samples, select the Allow use of non-default FileSys.ini check box.
The filesys.ini file is a control file used by Analyst. It contains file paths for the Libs.tab, Users.tab,
and Groups.tab that control the specific library and user setup. You can edit the filesys.ini path by
selecting Tools, Edit FileSys.Ini path.
Planning tables are typically prefixed with a P_, and hold information about
● datastore servers
● security
● macros
● administration links
● jobs
Steps
1. Right-click Datastores in the Administration tree, and click Add Datastore.
● Tip: You can also modify the existing application datastore connection by clicking the
Configure button on the Datastore server information page (p. 47).
3. Enter the Datastore server name, or click the browse button to list the available servers
(SQL Server only).
Setting Description
Trusted Connection Click to use Windows authentication as the method for logging
on to the datastore. You do not have to specify a separate logon
ID or password. This method is common for SQL Server
datastores and less common, but possible, for Oracle.
Use this account Enter the datastore account that this application will use to
connect. This box is not enabled if you use a trusted
connection.
Password Type the password for the account. This box is not enabled if
you use a trusted connection.
Typically these settings should be left as the default. They may not be supported by all datastore
configurations.
Enter the following information.
Setting Description
Connection Prefix Specify a string to add to the start of the connection string to
customize it for the needs of the datastore.
Connection Suffix Specify a string to add to the end of the connection string to
customize it for the needs of the datastore.
Jobs
A job is an administration task that runs on job servers and is monitored by the Administration
Console.
Additional servers can be added to manage applications, speeding up the processing of jobs. You
can run the job and monitor its progress. All jobs can be run while the application is online to Web
clients. This means that you can make changes to the development version of an application while
the production version is live. It reduces the offline time during the Go to Production process.
Each job is split into job items, one job item for each e.List item. If you are running a Publish job
for eight e.List items, eight job items are created. Contributor applications can be added to more
than one job server, or to a job server cluster. When a job is created, job items are run on the
different job servers, speeding up the processing of a job.
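The splitting behavior described above, one job item per e.List item distributed across the available job servers, can be illustrated with a short sketch. All names here (the `split_job` function, the round-robin assignment) are hypothetical illustrations and do not correspond to Contributor internals.

```python
from itertools import cycle

def split_job(job_name, elist_items, job_servers):
    """Split a job into one job item per e.List item and assign the items
    to job servers round-robin, so that a job such as Publish can be
    processed by several servers at once. Purely illustrative."""
    assignments = {server: [] for server in job_servers}
    server_cycle = cycle(job_servers)
    for item in elist_items:
        server = next(server_cycle)
        assignments[server].append((job_name, item))
    return assignments

# A Publish job for eight e.List items creates eight job items.
elist = ["Europe", "Americas", "Asia", "UK", "France", "Germany", "US", "Canada"]
work = split_job("Publish", elist, ["ServerA", "ServerB"])
```

With two job servers, each server receives four of the eight job items, which is the source of the speed-up the section describes.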
Types of Jobs
The following table describes each type of job:
Commentary tidy Deletes user annotations and audit annotations, and references to
attached documents (p. 289).
Cut-down tidy Removes any cut-down models that are no longer required.
Export queue tidy Removes obsolete items from the export queue.
Import queue tidy Removes from the import queue model import blocks that are no longer
required.
Job test Tests the job subsystem using a configurable job item.
Language tidy Cleans up unwanted languages from the datastore after the Go to
Production process is run. This job is created and runs after Go to
Production is run.
Prepare import Processes import data ready for reconciliation (p. 174).
Reconcile Ensures that the structure of the e.List item data is up to date. This job
is created and runs after Go to Production is run. For more information,
see "Reconciliation" (p. 52).
Validate users Checks to see if the owner or editor of an e.List item has the rights to
access the e.List item. For more information, see "Ownership" (p. 91).
The job checks the rights of users and updates the information in the
Web browser. This job is created only if a change is made to the
Contributor model, and runs after Go to Production is run.
Job servers take any of the queued jobs to work on, in no specific order. If there are enough job
processors, all jobs can be run at the same time.
Because jobs are independent of each other, they do not need to run in a specific order. The exception
is the publish job. The publish job can be started when a reconcile is running, but for it to complete
successfully, all e.List items that are being published must be reconciled.
Changes in the following areas do not cause jobs to be run:
● navigation
● orientation
Securing Jobs
Some jobs run under scheduler credentials. This is because jobs that run in the background cannot
prompt the user for authentication information. Scheduler credentials are associated with an
authenticated session, which can include more than one user logged on to different namespaces.
The following jobs run under scheduler credentials:
● Validate users
When the Validate users job is run, the scheduler credentials must be associated with all the
namespaces that you imported users from. If the credentials are logged on to only one namespace,
users that belong to other namespaces are considered invalid.
● Reconcile
Only members of the Planning rights administrator capability can modify the scheduler credentials.
For information about setting the scheduler credentials, see "Assign Scheduler Credentials" (p. 42).
Managing Jobs
You can manage and monitor the progress of running jobs in applications from the Job Management
branch in the Administration tree, or from the Monitoring Console (p. 59).
You can also monitor the progress of jobs triggered by administration links from the Monitor Links
window.
The Job Monitor shows the following information:
Details Description
State The current state of the job:
● ready to run
● queued. The job is waiting to be run. The job may have to wait
for a job server to become available before it can be run.
● running
Succeeded The number of job items that ran successfully. If all job items
succeeded, All is shown; otherwise, a percentage is shown.
Total Items The total number of job items that the job is split into. Jobs are
broken down into atoms of work known as job items, enabling a
job such as Publish to be run over different threads.
Estimated Completion An estimated date and time of completion for the job in local time.
Average Duration The average interval between the completion of job items.
Start The date and time when the job started, in the local format.
Last Completion The time when the last job item was completed.
Duration (min) The time in minutes the job task has taken to complete.
Details Description
Start The date and time when the job started on that processor in local
format.
End The date and time when the job stopped running on that processor.
Publish Jobs
The publish process is carried out by the reporting publish job for a table-only layout (p. 262), or
the publish job if a view layout (p. 276) is selected.
To monitor publish jobs in the jobs monitor, select the publish container from the box that is
available at the top of the job monitor.
Cancelled Jobs
If a job is cancelled, you can view information about why it was cancelled by double-clicking the
line.
Pausing Jobs
If you want to pause a running job, you must stop the Job Server. To do this, right-click the job
server or job server cluster name and click Disable Job Processing.
Reconciliation
Reconciliation ensures that the copy of the application that the user uses on the Web is up to date.
For example, all data is imported, new cubes are added, and changed cubes are updated.
Reconciliation takes place after Go to Production runs and a new production application is created.
It also takes place when an administration link or an Analyst to Contributor link to the production
application is run. However, in this case, only the imported data is updated. The application is
reconciled on the job server unless a user tries to access an e.List item before it is reconciled. For
more information, see "The Effect of Changes to the e.List on Reconciliation" (p. 101).
All contribution e.List items are reconciled and aggregated if
● the application was synchronized with Analyst
● changes were made to the Access Tables, Saved Selections, or the e.List that resulted in a different
pattern of No Data cells for contribution e.List items that are common to both the development
and production applications
Note: Changing an access setting to No data, saving the application, and then changing the
access setting to what it was previously also results in reconciliation.
Note: If you use the Prepare zero data option, an import data block is created for all e.List
items, so all e.List items are reconciled.
● data is imported
● data is saved
Reconciliation can be performed across multiple processors and job servers. For more information,
see "Manage a Job Server Cluster" (p. 54).
If the Prevent client-side reconciliation check box is selected (p. 79), a user cannot open the e.List
item until the e.List item is reconciled on the server side.
Deleting Jobs
You can delete jobs, but we do not recommend it because it can leave your data in an unstable
state.
Tip: If you want to pause a running job, you can stop the job server. Right-click the name of the
job server or job server cluster and click Disable Job Processing.
Prepare import
If you delete a prepare import job, the next time you run Prepare import, it tidies this up.
Cut-down models
You typically delete a cut-down models job after you cancel the Go to Production process. Clicking
the Go to Production button reruns the cut-down models job. If you delete a cut-down models job
during Go to Production, Go to Production does not run.
Reconciliation
If you delete a reconcile job, all e.List items that were not reconciled stay unreconciled. e.List items
that were already reconciled by the job remain reconciled. When a user attempts to view an
unreconciled e.List item in the grid, if client side reconciliation is allowed, the reconciliation process
takes place on the client. If client side reconciliation is not allowed, users cannot view the e.List
items. Rerun the Go to Production process to trigger a repair reconcile job.
You can add applications and publish containers to multiple job servers. This speeds up the
processing of jobs, such as reconciliation, giving near linear improvements per processor.
The following two scenarios provide examples of how this might work.
Scenario 1
If you publish large amounts of data, you might want to assign the publish container to different
servers than those that are processing the main application. If you assigned both the application
container and the publish container to cluster X, containing servers A, B, C, and D, a large job,
for example, publishing 5000 e.List items, could consume all the resources in cluster X for some
time, preventing other jobs from being processed. In this case, you might assign the publish
container to servers A and B, and the application container to servers C and D, so that other jobs,
such as prepare import and server-side reconciliation, can run at the same time as the publish job.
There is no control at job level.
Scenario 2
If you have some applications that are in production and are live, you might want to have one or
more job clusters with your best hardware to monitor these applications to ensure that they are
stable and available. For applications that are in development, you might want to have a different
job cluster containing your less efficient hardware.
For more information, see "Jobs" (p. 47). You can also automate job server management (p. 197).
Steps
1. Right-click Job Server Clusters and click Add Job Server Cluster.
3. Click Add.
The next step is to add job servers to the job server cluster.
4. Remove a job server cluster by right-clicking it and selecting Delete Job Server Cluster.
5. Disable job processing on a cluster by right-clicking the cluster and selecting Disable Job
Processing.
6. To test communication with the job server, right-click the job server name and click Test. Any
errors are displayed in a message box.
Job servers can exist in only one Planning content store. You can either manually delete the job
server (p. 57) and add it to the new Planning content store, or on the job server, change the content
store that it is associated with.
4. Enter the Maximum concurrent jobs. The default is -1, which allows one concurrent job per
processor. Typing 0 stops job execution.
5. Click Add.
You should now add any applications, publish containers, and other objects to either a job
server cluster, or an individual job server. Jobs such as reconcile are not run until an application
is added to a job server or job server cluster.
Tip: To modify the properties of a job server, right-click the server name and click Properties.
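As a sketch of how the Maximum concurrent jobs values described in the steps above behave (-1 allows one concurrent job per processor, 0 stops job execution), assuming a hypothetical helper function:

```python
def effective_concurrency(max_concurrent_jobs, processor_count):
    """Interpret the Maximum concurrent jobs setting (illustrative only):
    -1 -> one concurrent job per processor (the default)
     0 -> job execution stops
     n -> at most n concurrent jobs"""
    if max_concurrent_jobs == -1:
        return processor_count
    if max_concurrent_jobs == 0:
        return 0
    return max_concurrent_jobs
```

For example, on a four-processor job server the default of -1 yields four concurrent jobs.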
2. Under Data Access, Content Manager, Content Store, change the value for Database server
with port number or instance name.
● an application folder
You need this if you are publishing using the Table-only Layout.
Step
● Click the job server cluster name, and click Add.
The window contains details about the objects monitored by the cluster. It lists only objects
that are directly assigned to the cluster, not those objects that are assigned to individual job
servers.
Tip: Start or stop all job servers in the cluster by right-clicking the cluster name and selecting
Enable Job Processing or Disable Job Processing as required.
Tip: You can see whether a cluster or individual job server is started from the icons in the
administration tree:
Icon Description
Steps
1. Click the name of the job server, and then click Monitored Applications.
2. Click Add.
Three panes of information appear.
Pane Description
Monitored objects held in the Shows the objects monitored by the cluster that the job server is in.
Job Server Cluster
Monitored objects held in Lists the objects monitored by the job server. It contains only details
this Job Server of objects directly assigned to the server.
You can assign an application folder and its contents to a job server
or job server cluster as a monitored object. You can also assign
each application within an application folder to a different job
server. For more information on monitoring application folders,
see "Monitor Application Folders" (p. 57)
Job Tasks being processed by Monitors the jobs that are being processed by the server:
this Job Server ● data source: If you see N/A by an application folder, this
indicates that the folder may have more than one data source
(applications within an application folder can be on more than
one datastore).
● application name
● Thread ID: identifies the thread used to execute the job task.
A thread is created for each job task. Multiple threads can be
created per process. The number of threads per process is set
in the Maximum concurrent jobs option. Used for debugging.
3. Right-click the job server name and select Enable Job Processing or Disable Job Processing as
required.
You can automate this process. See "Job Servers (Macro Steps)" (p. 197), "Disable Job
Processing" (p. 197), or "Enable Job Processing" (p. 197).
Step
● In the Contributor Administration Console, right-click the job server name and click Delete
Job Server.
You can still add individual applications within an application folder to be monitored by different
servers or clusters.
● To add the contents of an application folder to be monitored, select the application folder row
and click Add.
● To add individual applications from within an application folder, select the applications required
and click Add.
● Link to existing Applications (p. 58). This adds Contributor applications that exist on the
datastore server to the Administration Console.
● Upgrade Application (p. 329). You can import and upgrade Contributor applications created in
an earlier version of Contributor.
Step
● In the Administration Console, right-click on the server name, or the application name in the
Administration tree and click Remove Application.
Tip: You can also assign applications to application folders. For more information, see
"Application Folders" (p. 66).
● During application creation, you selected the Generate datastore scripts and data files option.
In this case, before you can add the application, the script must be run by a database
administrator, see "Running the Script.sql file (DBA Only)" (p. 67).
● You want to link to an application that was created in another Planning content store.
To do this, you must first remove the application from the Planning content store table it
currently resides in.
Steps
1. Right-click Applications in the Administration tree and click Link to existing Applications.
2. The Add existing applications window lists the applications that exist for the currently selected
datastore server. Click the application you require.
3. If the application was created in a different Planning content store, add an application ID. This
is used by the Web browser to identify the application. It must be a unique character string.
4. For applications created using the Script.sql file, select the XML package. The location of the
package.xml is the same as the location of the script.
5. Click Add.
Ensure that the application is added to a job server or job server cluster. For more information,
see "Add Objects to a Job Server" (p. 56).
● application (p. 50)
● macros (p. 192)
● deployment (p. 168)
Managing Sessions
Multiple administrators may administer a Cognos 8 Planning - Contributor application at any one
time. However, to prevent data integrity issues, when a change is made to the development model,
it is locked. The lock is dropped when the administrator navigates to a different function or closes
the Administration Console. You can manually remove the lock by clicking the Remove button in
the Session Details area. Use this with caution because it could prevent other users from saving
changes.
You can automate the removal of an application lock. See "Remove Application Lock" (p. 219) for
more information.
The development model can be changed by the following:
● changing navigation
● changing orientation
The production model is updated during the Go to Production process. There are checks in place
to ensure that two jobs do not run concurrently. Additionally, when you create a job using the
Administration Console, you are prompted if a valid job of that type already exists and alerted that
this new job may overwrite the existing job.
Example 1
Administration Console 1: User A clicks Navigation in the Administration tree. At this stage, the
development model is not locked.
Administration Console 2: User B clicks e.List in the Administration tree, and makes a change. The
development model is then locked by User B. A further check is made to ensure that no changes
were made to the development application since it was last read by Administration Console 2. If
this is true, User B can continue to edit the e.List and save the changes.
Administration Console 1: User A edits Navigation. User A is informed that User B has the
application locked and is asked whether to take the lock. User A takes the lock from User B.
Because User B updated the development model after User A opened the Navigation page, the
development model is no longer up to date. User A is prompted, Navigation is reloaded with the
updated development model, and User A can continue to edit and save changes.
Example 2
Administration Console 1: User A opens Application Options and makes a change.
Administration Console 2: User B opens Orientation and makes a change. User B is prompted that
User A has the lock and User B takes the lock.
Administration Console 1: User A clicks Save. User A is prompted that User B now has the lock
and that changes made by User A will not be saved. The controls on Application Options are
disabled.
Example 3
Administration Console 1: User A opens Grid Options and makes a change.
Administration Console 2: User B opens Publish Data and selects the detail for a publish. Because
this does not change the development model, there is no need for a lock to be taken. Both User A
and User B can work on the application concurrently.
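The locking behavior in these examples can be sketched as follows. The class and method names are invented for illustration and do not correspond to any Contributor API.

```python
class DevelopmentModelLock:
    """Illustrative model of the Administration Console lock: the first
    user to change the development model takes the lock; another user may
    take it over, after which the original holder can no longer save."""
    def __init__(self):
        self.holder = None

    def take(self, user):
        # Taking the lock succeeds even if another user currently holds it,
        # as when User B takes the lock from User A in Example 2.
        self.holder = user

    def save(self, user):
        # A save succeeds only for the current lock holder.
        return self.holder == user

lock = DevelopmentModelLock()
lock.take("User A")   # User A makes a change and takes the lock
lock.take("User B")   # User B is prompted and takes the lock
```

After User B takes the lock, a save by User A fails, which matches the prompt in Example 2 that changes made by User A will not be saved.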
Sending Email
You can send email to a user defined in an application using your default email program.
Steps
1. Click in the application containing the user you are sending an email to.
3. Choose to send an email to All Users or All Active Users, as defined in the application.
4. Click the group of users you want to send email to, then click Mail. This opens a new
email message in your standard email tool.
Chapter 4: Creating a Cognos 8 Planning -
Contributor Application
To create a Cognos 8 Planning - Contributor application, you perform the following steps:
❑ If you have DBA rights to the datastore server, create a Contributor application using the
Application Wizard.
If you do not have DBA rights to the datastore server, create a script using the Application
Wizard, then send the script to the DBA who will run the script. After the script is run, add the
application to the Administration Console, "Adding an Existing Application to a Datastore
Server" (p. 58).
These processes create a development application. A development application is not seen by
the users; it is the application that you work on in the Administration Console.
This means you can make changes to the application without having to take it offline, reducing
the amount of time that users are offline to as little as a minute. This is the period of time taken
to integrate the new e.List items into the hierarchy and set the correct workflow states.
❑ Run the Go to Production process, see "The Go to Production Process" (p. 239).
This activates a wizard that creates a production application, making the application
available to end users.
Steps
1. In the Administration tree, under the name of the datastore server where the application is to
be created, right-click Applications.
A check is made to see if you are logged on with appropriate rights. If you are not, you are
prompted to log on. Click Next.
3. Select a library.
This is the Analyst library that your application will be based on. The D-Cube library list is for
information only and tells you which D-Cubes the selected library contains. Click Next.
5. Click Next.
The Administration Console checks the library to ensure that it can create the application. If
there are any errors or warnings, you can view them and save them to a text file for further
investigation. If there are errors, the wizard terminates. You can continue to create the
application if you have only warnings.
6. A window listing the statistics for the application will be displayed. This is for information
only. See "Model Details" (p. 66) for more information.
You can print the details on this window.
Detail Description
Application Display Defaults to the Analyst library name, but you can change this during
Name application creation. After an application is created, this cannot be
changed. There are no character restrictions. The maximum length
is 250 characters.
Datastore Name The name of the datastore application that contains the Contributor
application database tables.
This defaults to the Analyst library name, stripping out special
characters. Only the following characters are allowed: lowercase
letters a to z and numeric characters. No punctuation is allowed
except for underscore. Maximum 30 characters (18 for DB2 OS390).
SQL Server only: Reserved keywords are not allowed; see your
SQL Server documentation for more information.
Application ID This is used by the Web browser to identify the application. Only
the following characters are allowed: lowercase letters a to z and
numeric characters. No punctuation is allowed except for
underscore.
Location of datastore Enter the file path where the SQL Server datastore files will be
files created on the administration server. You can browse the datastore
server file structure for file locations. Oracle and DB2 users do not
see this box; the location of datastore files is determined by your
datastore structure.
Location of datastore Enter a location for the datastore backup files. The file location
backup files must exist before you create an application and should be a different
location from the datastore location.
Create and populate This option creates the Contributor application and adds it to the
datastore now Administration Console tree. You only have this option if you have
appropriate DBA rights.
Generate datastore Select this option if you want to create a datastore script and data
scripts and data files files which can be used to create an application at a later stage. This
option is mandatory if you do not have DBA rights. To create an
application at a later date, the script must be run by a DBA and the
application added to the datastore. After the application is added
and you click any of the branches, you are prompted to select
the package.xml file.
Save scripts in this folder Enter or browse for a file location to save datastore scripts and data
files to.
● Temporary tablespace
DB2 UDB
Specify the tablespace name for data, indexes, and BLOBs. It defaults
to USERSPACE1. Customized tablespace names can be used.
Tablespaces need to be at least 8000 pages.
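The datastore naming rules in the table above (lowercase letters a to z, numeric characters, and underscore only, with a maximum of 30 characters, or 18 for DB2 OS390) can be checked with a regular expression. This sketch is an illustration only, not part of the Application Wizard, and it does not cover the SQL Server reserved-keyword check.

```python
import re

def is_valid_datastore_name(name, db2_os390=False):
    """Check a proposed datastore name against the documented rules:
    lowercase a-z, digits, and underscore only; at most 30 characters
    (18 for DB2 OS390)."""
    max_len = 18 if db2_os390 else 30
    return bool(re.fullmatch(r"[a-z0-9_]{1,%d}" % max_len, name))
```

For example, "sales_plan_2008" is acceptable, while a name containing uppercase letters, spaces, or more than 30 characters is not.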
8. Select the job cluster or job server to run the jobs for this application.
9. Click Next and then Finish. The application creation progress is displayed.
If you generated datastore scripts and data files, send the script to the DBA. After your DBA
has run the script, you can add the application to the Administration Console; see "Creating,
Adding and Upgrading Applications" (p. 58).
Application Folders
You can use application folders to organize your applications into related groups. You can assign
job server clusters (p. 56) and access rights to groups of applications, making it easier to administer
multiple applications at the same time. For example, you can add or remove a group of applications
from a job server or job server cluster. An application folder can contain applications that exist on
more than one datastore.
When you assign an application to an application folder, it moves from under the Applications
branch of the tree to the relevant application folder.
Tips
● To remove an application from an application folder, right-click the application and select
Remove application from folder. The application is moved under Applications. If that was the
only application in the folder, the folder is removed from the Administration Console.
● To move an application between folders, you must first remove it from the original application
folder.
● Selecting Remove application removes the application from the Administration Console.
Steps
1. Before you create an application folder, at least one application must exist.
You must also have the right to assign applications to application folders.
2. Right-click the application name and click Assign Application to an Application Folder.
3. Click Create a new Application Folder and add the Application, or Assign the Application to
an existing Application Folder.
4. If you selected Assign application to an existing folder, select the folder name from the drop-down list.
5. Click Assign.
Model Details
The model details window in the Application Wizard displays information about the application.
Exceeding these numbers may not be a problem for your application, but could slow down the end
user, and a redesign of the model in Analyst could help.
Total Number of Cells in Application (per e.List slice): 500,000
A large number of cells in the application will lead to performance problems unless the model builder is able to use no data settings in access tables to create e.List-specific models that are considerably smaller. Under certain circumstances it is possible to distribute very large models with Contributor, particularly if bandwidth and server capacity are not an issue.

Largest Cube: 200,000
This restriction is similar to the Total Number of Cells in Application. A large single cube can lead to performance problems at runtime; for example, breakback and data entry can become slow, unless the cube is cut down using no data settings in access tables.

Total Number of D-List Items in Application: 2,500
A very large number of dimension items can cause the model definition to be very large. See also below.
SQL Server: open and run the file using SQL Server Query Analyzer.
After you have run the script successfully, add the application to the Contributor Administration
Console. For more information, see "Adding an Existing Application to a Datastore Server" (p. 58).
Application Information
When you click the application name, the following application information is displayed:
Application Display Name
The name of the application as displayed in the Administration Console.

Library Name
The name of the library in Analyst that is used to create the application.

The e.List
The name of the dimension that is used as the e.List placeholder.
● set the order in which the users are asked to go to each cube (p. 69)
● select the cube dimensions that make up the rows, columns, and pages of the cubes (p. 70)
● designate which cubes can be viewed only by the planner (p. 75)
● create text for the users to see in the cube or Web client (p. 76)
Steps
1. In the Administration tree, click System Settings, and Web Client Settings.
2. To enable the Contributor applications to be deployed automatically over the Web, select Allow
automatic Cab downloads and installations.
Cab format is the compressed format in which the Contributor applications are stored.
3. To modify the separator that is used between names in emails sent from Contributor applications,
enter the separator in the Email character separator box.
4. Enter an amount (in megabytes) for the Maximum Document Size (MBs).
5. In the Allowable Attachment Types box, choose to either remove a selected file type by clicking Remove, or click Add to add a new allowable attachment type. At the end of the list of file types, enter a label name and the file type extension in each box. Make sure you append the file type extension with an asterisk (*).
Note: Changes made to the Attached Documents settings take effect almost immediately and
without the need to perform a Go To Production.
Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Navigation.
2. In the Set Cube Order box, click each cube name and move as required using the arrow keys.
3. Click Save.
The changes are visible to users after you run the Go to Production process.
Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Orientation.
2. Click the tab for the cube that you want to modify.
3. Click a dimension and move it using the arrows, to set it as a page, row, or column. Repeat
with the other dimensions if required.
4. If you want to create nested (merged) dimensions, place two dimensions under either Row or
Column.
Planners cannot change nested settings, even if they reslice.
5. Click Save.
The changes are visible to users after you run the Go to Production process.
If cubes that have dimensions defined as pages have dimension items with different access
settings, such as Read and Write, the cube opens with the first writable page selected by default.
If all items have the same access setting, the cube opens with the first selected page as created
in Analyst.
If a user moves from one cube to another with the same dimension, the cube opens to the same
item selected in the previous cube.
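The default page selection described above can be pictured as a small rule (an illustrative sketch only, using the Read and Write access settings named in this guide; it is not the product's internal code):

```python
# Sketch of the default page selection: with mixed access settings the
# first writable page is opened; with uniform access, the first page as
# created in Analyst is opened.

def default_page(pages):
    """pages: list of (name, access) pairs, access being "Read" or "Write"."""
    accesses = {access for _, access in pages}
    if len(accesses) > 1:  # mixed Read/Write: first writable page wins
        for name, access in pages:
            if access == "Write":
                return name
    return pages[0][0]     # uniform access: first page

print(default_page([("Jan", "Read"), ("Feb", "Write")]))  # Feb
print(default_page([("Jan", "Read"), ("Feb", "Read")]))   # Jan
```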
Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then Grid
Options.
2. Set the grid options that you require.
3. Click Save.
Changes will be applied to the production application after running the Go to Production
process.
Option Description
Set Breakback Option
When breakback is enabled and data is entered into a calculated cell, data in other cells is automatically calculated from this data. For example, you can distribute a total annual salary over twelve months to calculate a monthly payment. For more information, see the Analyst User Guide. All cubes have breakback selected by default.

Allow Multi e.List Item Views
If the Allow Multi e.List Item Views option is on, the user can select a multi e.List item view or a single e.List item view. This means they can edit or view all contributions they are responsible for in one window. The default value is off because a large amount of memory may be needed to open a multi-e.List item view.

Allow Slice and Dice
If the Allow Slice and Dice option is on, users can swap a row or column with a page, or swap a page with a column or row heading. The default value is on.

Recalculate After Every Cell Change
If the Recalculate After Every Cell Change option is on and a user types data into the application, the data is recalculated as soon as the focus moves from the cell. The default is Off, meaning that data is calculated when the user presses Enter.

Select Color for Changed Values
You can specify the color of data in the grid for the following situations:
● Saved data: the color of data with no change. The default is black.
● Typed data not entered: the color of data that is typed but not entered. The default is green.
● Data entered but not saved: the color of data entered in the current session but not saved. The default is blue.
The default colors are different in Analyst, where the color of data that is entered but not saved is red, not blue, and where detail/total is blue/black, not normal/bold.
With Breakback On and Recalculate After Every Cell Change Off, press Enter. The total holds at 240,000 and the remaining months show 18,182.
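The arithmetic behind this figure can be sketched as follows. Assuming the example distributes an annual total of 240,000 evenly across twelve months (20,000 each) and a user then types 40,000 into one month, breakback holds the total and redistributes the remainder over the other eleven months: (240,000 - 40,000) / 11 ≈ 18,182. This is an illustrative even redistribution, not the product's actual breakback algorithm:

```python
# Breakback sketch (illustrative, not the product algorithm): hold the
# calculated total fixed and spread the difference evenly across the
# remaining detail cells.

def breakback(values, changed_index, new_value, total):
    """Set one detail cell and rebalance the others so the total holds."""
    values = list(values)
    values[changed_index] = new_value
    others = [i for i in range(len(values)) if i != changed_index]
    share = (total - new_value) / len(others)
    for i in others:
        values[i] = share
    return values

months = [20000.0] * 12  # 240,000 spread evenly over twelve months
months = breakback(months, 0, 40000.0, 240000.0)
print(round(months[1]))        # each remaining month is about 18182
print(round(sum(months)))      # the total still holds at 240000
```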
Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Application Options.
2. Set the application options that you require.
3. Click Save.
Changes will be applied to the production application after running the Go to Production
process.
Option Description
History Tracking
Use the History Tracking option to track actions performed by users. When you select Action time stamps and errors and Full debug information with data, information is recorded in a datastore table named history.
Choose one of the following:
● No History
Does not track changes
Cut-down Models
If cut-down models (p. 134) are used, a different model definition is produced for each e.List item. This can significantly speed up transmission to the client for models with large e.Lists. However, if used inappropriately, it could slow down performance.
Choose one of the following:
● No cut-down models

Allow Reviewer Edit
If the Allow Reviewer Edit option is on, users with review and submit rights to an e.List item can edit an e.List item up to the review depth level, which is assigned in the e.List window.

Allow Bouncing
If the Allow Bouncing option is on, someone with appropriate rights can take ownership of an e.List item by clicking the Edit or Annotate button while the item is being edited or reviewed by another owner.

Prompt to Send Email When User Takes Ownership
When this option is selected, user 1 is prompted to send an email to user 2 when user 1 takes ownership of an e.List item from user 2. The email is copied to other people who have submit or save rights.

Use Client-side Cache
If the Use Client-side Cache option is on, model definitions and data blocks are cached on the client computer so that they do not have to be downloaded repeatedly from the server. This provides a huge reduction in the network bandwidth required and is invisible to the user.
This is not possible on client computers where a security policy prevents saves to the hard disk. When the user requests the data by opening the grid, a mechanism checks whether the data is cached on the client machine and whether that data has changed.
This is not possible on client computers where a security policy prevents
saves to the hard disk. When the user requests the data by opening the
grid, a mechanism checks to see whether the data is cached on the client
machine and whether that data changed.
Prevent Offline Working
If the Prevent Offline Working option is on, users cannot work offline. Offline working is possible only when the cache on the client computer can be used, but the client-side cache does not have to be enabled. For more information, see "Working Offline" (p. 86).

Prompt to Send Email on Reject
Reviewers are prompted to send an email message to the current owners of contribution e.List items when they reject an item. The email is sent to the person who submitted the e.List item, and copied to other people who have submit or save rights and people who have assigned rights for the review e.List item; that is, the rights are not inherited through the hierarchy.

Prompt to Send Email on Save
The user is prompted to email all immediate reviewers and copy (cc) all immediate owners when they save an item.

Prompt to Send Email on Submit
The user is prompted to email all immediate reviewers and copy (cc) all immediate owners when they submit an item.

Web Client Status Refresh Rate (minutes)
You can change the interval of time at which the server is polled to refresh the Web client status. Increasing the refresh interval decreases the amount of Web traffic. This may be desirable if there are a lot of clients, but it also reduces the visibility of the workflow state data that the user sees.

Record Audit Annotations
This records actions taken in the Web client, such as typing, copying and pasting data, and importing files. In addition, system link history is stored as an annotation on the cube that was targeted by the link. When a link is run, an annotation is created in the open e.List item. If the link is rerun, the same annotation is updated. A history dialog box shows all history related to the links that apply to the open e.List items.
If enabled, users can view audit annotations for any cells for which they have at least view access.
This option can greatly increase the size of the application datastore, and should be used with care. It is Off by default.

Annotations Import Threshold
If a user imports a text file into the Web grid, this option determines whether each row imported in a single transaction is recorded separately, or all rows imported are recorded in a single entry. If the threshold is set to 0, all rows imported in a single transaction are recorded as a single entry.
Annotations Paste Threshold
If a user copies and pastes data into the Web grid, this option determines whether each row pasted in a single transaction is recorded separately, or all rows pasted are recorded in a single entry. If the threshold is set to 0, all rows pasted in a single transaction are recorded as a single entry.
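The threshold rule can be pictured as a simple decision (an illustrative sketch, not the product's recording code; it assumes that a transaction touching more rows than a nonzero threshold collapses into one entry, and that rows at or under the threshold are recorded separately):

```python
# Sketch of the annotation threshold rule described above: a threshold of
# 0 always records one entry per transaction; otherwise a transaction
# exceeding the threshold is collapsed into a single entry.

def annotation_entries(rows_in_transaction, threshold):
    """Return how many audit entries a single import or paste produces."""
    if threshold == 0 or rows_in_transaction > threshold:
        return 1                      # recorded as a single entry
    return rows_in_transaction        # each row recorded separately

print(annotation_entries(50, 0))    # 1: threshold 0 is always one entry
print(annotation_entries(5, 10))    # 5: under threshold, one entry per row
```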
Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then Planner
Only Cubes.
Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Contributor Help Text.
2. In the Enter Instructions text box, type instructions, using HTML tags if required.
You can create links to other Web pages, for more information, see "Creating Hypertext
Links" (p. 377) and "Customizing Cognos 8 Planning - Contributor Help" (p. 375).
3. In the Contributor Help Text window, click the cube name tab.
4. In the larger text box, type either HTML-formatted text or plain text, up to 3000 characters.
For more information, see "Customizing Cognos 8 Planning - Contributor Help" (p. 375).
5. Click Save.
Changes are applied to the production application after running the Go to Production process.
● show information about the application, such as the number of cubes, and the number of
D-Links (p. 77)
Steps
1. In the appropriate application, click Development, Application Maintenance, and then
Application XML.
Admin Options
You can configure import and publish options on the Development, Application Maintenance,
Admin Options window.
These options should only be configured by database administrators and are only available to users with DBA rights. They can also be set directly in the datastore.
You do not need to run the Go to Production process for these options to apply; they are applied as soon as you save.
Option Description
Import Block Size
The number of rows that are passed to the calculation engine at a time during import. The default is -1, which means all rows are passed at once.

Import Location
This is a temporary file location. Files are not deleted, but they may be overwritten. When you import files, they are copied to this location on the server.
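The block-size behavior can be sketched as batching rows before they are handed to the calculation engine (an illustrative sketch, not the product's import code; -1 is taken to mean one batch containing every row, as described above):

```python
# Sketch of an import block size setting: split rows into batches no
# larger than block_size; a block size of -1 passes all rows at once.

def batches(rows, block_size):
    """Yield lists of rows, each at most block_size long (-1 = one batch)."""
    if block_size == -1:
        yield list(rows)
        return
    for start in range(0, len(rows), block_size):
        yield rows[start:start + block_size]

rows = list(range(10))
print(len(list(batches(rows, -1))))   # 1: a single batch of all rows
print([len(b) for b in batches(rows, 4)])   # [4, 4, 2]
```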
Import Options
You can specify command-line parameters for BCP (the bulk copy utility for SQL Server applications) or SQL Loader (the import tool for Oracle applications).
A BCP example is:
-B 10000
This parameter sets import text files to be uploaded in batch sizes of 10,000. Other possible parameters are [-h "load hints"] and [-a packet size].
A SQL Loader example is:
DIRECT=TRUE
This parameter affects the number of lines that are uploaded and may speed up the import process considerably, but it may require a lot of memory.
When importing data, ensure that the database code page parameter reflects the underlying data being imported. For example, when importing Western European language data, use the Windows code page for Western European, 1252. The parameter to use is -C=-C1252. For non-Western European data, check your database documentation for the code page to use to ensure that the data imports correctly.
Publish Options
If a foreign locale is used, the CODE PAGE parameter can be set here. You can also set BCP options here.

Generate Scripts
Set this option to Yes to generate a script when any actions are performed that require DDL commands to be run in the datastore, such as publishing data and synchronizing with Analyst.
Table-only Publish Post-GTP
Set this option to Yes if you want a full publish to occur after a model change. The Reporting job detects whether the model changes are incompatible with the publish schema or link definition and performs a full publish to correct the incompatibilities. If this option is set to No and errors are detected, incremental publish is disabled.
Act as System Link Source
Set this option to Yes if you want to allow the use of this application as a source for a System Link.

Display Warning Message on Zero Data
Setting this option to Yes displays a warning message if you select the Zero Data option when importing data; see "Steps to Prepare the Import Data File" (p. 175).
Base Language
This option determines the language in which the Contributor application is displayed if the user has not specified a preference in Cognos Connection. For information about translating applications, see "Translating Applications into Different Languages" (p. 183).

Scripts Creation Path
This option sets the default location for script creation on the Contributor Administration server.
Steps
1. In the appropriate application, click Development, Application Maintenance, and then
Dimensions for Publish.
3. Click Select Dimension and click the dimension name to use as a data dimension.
4. Click Save.
Tip: Click Preview to view the data columns that will be published, either with a data dimension
selected or without.
Option Description
Copy Development e.List Item Publish Setting to Production Application
This option enables you to specify whether the publish setting in the e.List window should overwrite the settings in the Publish view layout e.List window. If you import an e.List file with publish settings, or you edit the publish settings in the e.List window, this option is automatically selected, and any publish settings you made are carried over to the production application, overwriting any settings made in the production application. Clear this option if you do not want to overwrite the settings in the production application.
This option is only applied if changes were made to the Contributor application since the last time Go to Production was run.

Planning Package Settings
When you set Go To Production Options, you must name the planning package. Optionally, you can include a screen tip and a description for the package.
Application access is restricted by the e.List. By default, when you create a package, Overwrite the package access rights at the next Go To Production is selected and the package access rights are based on the e.List. If, in Cognos Connection, you make changes manually to the package access rights and want your modifications to remain after the next Go to Production, you must clear this check box. For information about the Go to Production process, see "The Go to Production Process" (p. 239).
Datastore Options
Datastore options enable you to view the datastore tables that are associated with cubes in the application, and to perform datastore backups.
In Datastore Maintenance, you set the Datastore Backup location and perform ad-hoc datastore
backups. You can also view information about datastore objects, and correct translation problems.
Option Description
Datastore Backup
Datastore backup backs up the entire application datastore, including the production application, immediately. It does not back up the publish datastore or the CM datastore.
If you are using SQL Server, you can browse for a location. Oracle backups can only be made to a location on the administration server, or on machines with access to the administration server. For DB2 UDB applications, we recommend that you back up manually; refer to your provider documentation.
Restoring the application means that any contributions entered since the backup was made will be lost. It is best practice to make backups when there is unlikely to be much activity, and you can stop the Web application from running while the backup is being made.
Publish Container Files
Set the location for datastore container files created during publish.

Datastore Names
This table displays Model Objects. In this instance, Model Objects are cubes and their associated datastore tables.
Datastore Object Name lists the import datastore table names. If you click Display row counts, the number of rows in each datastore table is displayed, enabling you to see, for example, whether there is any data in the export table without having to look in the datastore manually. This may take a few minutes to appear.
Import datastore tables are prefixed with im, and import errors are prefixed with ie.
Translation Maintenance
When a translation is created, a row is created in the Language table in the datastore, and some information is added to the model XML about the translation. If the information in the model XML and the information in the database get out of step, you cannot run Go to Production. This may happen if you have a problem in the Administration Console, or network problems.
The Datastore language tab box lists the rows that exist in the Language table in the application datastore. The Model language table box lists the languages that exist in the model XML. If the languages listed in the two boxes are different, click Synchronize.
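The out-of-step condition amounts to comparing the two language lists (an illustrative sketch only; the console performs this check internally, and the language codes shown are hypothetical examples):

```python
# Sketch of the translation check described above: compare the languages
# in the datastore Language table with those recorded in the model XML.

def languages_out_of_step(datastore_langs, model_langs):
    """Return True if the two language lists differ, ignoring order."""
    return set(datastore_langs) != set(model_langs)

print(languages_out_of_step(["en", "fr"], ["fr", "en"]))  # False: in step
print(languages_out_of_step(["en", "fr"], ["en"]))        # True: synchronize
```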
Tablespace
The Tablespace window displays the tablespace options that were chosen during application creation:
● Data
● Index
● Blobs
It also displays the temporary tablespace for the current Contributor application. This window is only visible if the application runs on Oracle or DB2 UDB.
Chapter 5: The Cognos 8 Planning - Contributor Web Application
To make a Cognos 8 Planning - Contributor application available to users, you must set up a
Cognos 8 Web site. This is described in the Cognos 8 Planning Installation and Configuration
Guide.
Users can access Contributor via the Web, or using Contributor for Excel. See the Contributor for
Microsoft Excel® Installation Guide for more information.
The Tree
The tree on the left side of the page shows the names of the areas that users are responsible for
contributing to (Contributions) and the areas that they are responsible for reviewing (Reviews).
Both appear in a hierarchical form. Depending on their rights, users may see either one of these
branches, or both. When the user clicks an item in the tree, a table with the details for the item
appears on the right side of the window.
Each item in the tree has an icon that indicates the current workflow state. For more information,
see "Workflow State Definition" (p. 297).
The Table
The table gives information such as the workflow state of the item, the current owner, the reviewer,
and when it last changed.
When users open a contribution, they can view or enter data depending on their rights and the state
of the data.
Data that users can edit has a white background. Read-only data has a pale gray background.
Data can be edited only if the icon indicates that it has a workflow state of Not started or Work
in progress.
Users can annotate data (p. 287).
Users can also reject contributions by clicking a reject button in the table.
If Contributor for Excel is installed, users can open their e.List items in Excel from the Contributor
Web application.
● translation assigned by the user-specified preference for Product Language defined in Cognos Configuration
● Resize the worksheet so you can see more or less data on a page.
● Save data as an Excel workbook and work locally without a connection to the network.
Steps
1. Type the following URL in the address bar of the browser:
http://server_name/cognos8
3. You can return to the Cognos Connection portal by clicking the Cognos Connection link on
the top right hand side of the page.
Tip: You can copy the URL of a Contributor cell to the clipboard, ready to be used by other applications. This enables you to link directly to the cell from another application. For the link to work, the Contributor application must be available on the computer and the user must have appropriate rights.
Steps
1. In Internet Explorer, select Tools, Internet Options, Security, Custom Level.
2. Under Reset custom settings, select Medium from the list, and then click Reset.
3. Click OK.
The Contributor Administration Console uses Microsoft Internet Explorer security settings to
communicate with the Web server. This may cause users to be prompted for multiple logons. To
prevent multiple logon prompts, ensure that each user’s browser security settings are set to one of
the automatic logon options.
Steps
1. In Internet Explorer, select Tools, Internet Options, Security, Local intranet, Custom Level.
2. Under User Authentication, select Automatic logon with current username and password, or
Automatic logon only in Intranet zone.
3. Click OK.
Steps
1. Create a new directory and copy the webcontent files to a sub-directory of the new directory,
for example:
\\server\customweb\webcontent\
2. Set up the virtual directory alias to point to the new parent directory.
The virtual directories that you need to create are:
For more information about configuring the Web Server, see the Cognos 8 Planning Installation
and Configuration Guide.
Working Offline
Working offline means that users can continue to work in situations when they are not connected
to the network.
Users can work offline only if the Prevent Offline Working option is cleared (p. 72), and if they are a user, or belong to a group or role, associated with offline users. Offline working is possible only when the cache on the client computer can be used. When an e.List item is taken offline, the data is stored in the offline store on the user's computer (see p. 88). When users want to do some work, they open the offline application and work and save data as normal. The application appears to work just like the grid in the Web application, except that there is no submit button.
Working offline should not be the standard working practice, because reviewers cannot view the current data and planners cannot receive updates when new data is imported. Ideally, users should bring offline data online as soon as possible to keep the data changes visible.
A user can work offline, save their data, and end their session. If a different user then logs on to the same computer, they cannot see the first user's data.
Only a single e.List item or a standard multi-e.List item view can be worked with offline.
Working offline does not give access to everything in the cache and does not provide private e.List
item save.
If there are multiple users with edit rights assigned to the e.List item, another user with appropriate
rights can edit the e.List item. The user who checked out the e.List item cannot check in the changes.
If the administrator makes more than one set of changes to the application before the user attempts to check in the edited e.List item, the user cannot check in the changes; they can, however, save the changes to a .csv file. When running the Go to Production process, the administrator is warned which users will be terminated if they proceed. If the administrator made just one set of changes, the user can check in the changes successfully.
When you bring offline data online, numbers that were changed and saved in the offline application
show as changed in the online grid.
Annotations made in the offline application remain editable, and can be deleted after being brought online, until the e.List item is saved.
When you take an e.List item offline, you cannot work online on other e.List items until you bring
the offline data online.
Example
A user is working offline and changes a cell. The administrator makes the same cell read only and
runs Go to Production. When the user comes back online, they see that the cell is now read only
and are unable to make changes. However, the cell will contain the value they entered in the offline
data.
Steps
1. In the Contributor application, open the e.List item.
Tip: From the front window of the Contributor application, in the Last Changed column, click
the down arrow for the appropriate cell. Status information appears on the lower half of the
window.
Chapter 6: Managing User Access to Applications
You manage access to Contributor applications through the e.List and Rights windows.
The e.List defines the hierarchical structure of an application. It is used to determine who can enter
data, who can submit data, who can read data and so on.
Users are secured by Cognos 8 security (p. 27).
Rights are defined by assigning users, roles, and groups to e.List items, and then giving each pairing of an e.List item and a user, group, or role an access level of Read, Write, Submit, or Review.
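As a mental model, the rights assignment above can be sketched as a mapping from (e.List item, user/group/role) pairs to access levels. This is an illustrative data structure only, with hypothetical item and user names; the actual assignments are made in the Rights window:

```python
# Illustrative model of Contributor rights: each pairing of an e.List
# item with a user, group, or role carries one access level.

ACCESS_LEVELS = ("Read", "Write", "Submit", "Review")

rights = {
    ("Sales", "jsmith"): "Submit",         # hypothetical user
    ("Operations", "Managers"): "Review",  # a group or role can be paired too
}

def access_for(item, principal):
    """Look up the access level for an e.List item / principal pairing."""
    return rights.get((item, principal))

print(access_for("Sales", "jsmith"))   # Submit
print(access_for("Sales", "nobody"))   # None: no rights assigned
```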
The e.List
The e.List is used to determine who can enter data, who can only read data, who data is hidden
from, and so on.
An e.List is a dimension with a hierarchical structure that typically reflects the structure of the
organization. An e.List contains items such as departments in a company, for example, Sales,
Marketing, and Development. Each department may be divided into several teams, for example
Sales may be divided into an Internal Sales and an External Sales team.
Planners are assigned to items in the lowest level in the hierarchy. Reviewers are assigned to items
in the parent levels.
In the example hierarchy, All Departments is at the top, with Operations and Corporate beneath it.
In the example shown, All Departments, Operations, and Corporate are reviewer e.List items. Any
users assigned to these e.List items with rights greater than view are reviewers. Customer Service,
Production & Distribution, Procurement, Sales, Human Resources, Finance, Marketing, and IS&T
are contribution e.List items, and any users assigned to these items with rights higher than view are
planners.
An e.List is created in two steps:
● The dimension that represents the e.List is created in Analyst.
● The file containing e.List data is imported into the Administration Console as a text file (p. 95).
All Departments is the parent of Operations and Corporate. The Analyst calculation for All
Departments is
+Operations+Corporate
Operations is the parent of Customer Service and Production and Distribution. The calculation for
Operations is
+{Customer Service}+{Production and Distribution}
The D-List used in Analyst to represent the e.List does not need to reflect the full hierarchy of the
e.List, but it must contain at least a parent and a child, and we recommend that it contains at least
one review item and two children so that weighted averages and priorities can be tested.
There are some circumstances when it helps to use the full e.List in the Analyst model, bearing in
mind that you still must import the e.List into the Administration Console as a text file or Excel
Worksheet.
For example:
● If you need to bring Contributor data back into Analyst for more analysis.
● If you are using Analyst simply as a tool for staging and tidying up external data.
If you export data from an Analyst D-Cube with the e.List into a Contributor cube, ensure that
the e.List item names in Analyst exactly match the Contributor e.List id.
The e.List should have a valid hierarchy. It is best practice not to have too many levels in the hierarchy, and to avoid assigning more than 20 child items to a parent. This improves performance and aggregation speed.
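The fan-out guideline can be checked mechanically. The following sketch is illustrative only: it assumes the hierarchy has been parsed into child-to-parent pairs, and the item names are hypothetical:

```python
# Flag e.List parents whose fan-out exceeds the recommended limit
# of 20 direct children. Parent/child names here are hypothetical.
MAX_CHILDREN = 20

def wide_parents(parent_of, limit=MAX_CHILDREN):
    """Return parents with more than `limit` direct children."""
    counts = {}
    for child, parent in parent_of.items():
        if parent != child:  # the top item names itself as its parent
            counts[parent] = counts.get(parent, 0) + 1
    return sorted(p for p, n in counts.items() if n > limit)

parent_of = {"Total": "Total", **{f"Dept {i}": "Total" for i in range(25)}}
print(wide_parents(parent_of))  # ['Total']
```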
● the usual e.List item owner is absent, so a substitute user needs to make the submission
You may want to consider creating a group or role to contain multiple users, rather than assigning
multiple users, groups or roles to an e.List item. This allows users to be changed in the authentication
provider without you having to run the Go to Production process for these changes to be reflected
in the Contributor application. For more information about users, groups, and roles, see "Users,
Groups, and Roles" (p. 29).
There can be multiple owners of review e.List items as well as contribution e.List items.
Any users with edit rights to an e.List item may edit the contribution e.List item when
● The e.List item is in a Not started state; that is, no changes have been made to it
● The e.List item is in a Work in progress state; that is, changes have been made but have either not been submitted to the reviewer, or have been submitted and rejected back to the planner
● The e.List item has been taken offline for editing by another user, group, or role
Anyone with appropriate rights can take control of an e.List item from another user who may be editing it. When they do so, they receive a warning.
Ownership
An owner of an e.List item is a user, group, or role that has rights greater than view. These rights
may be directly assigned, or may be inherited.
If more than one user, group, or role is assigned to an e.List item with rights greater than view, the
first one in the import file is the initial owner of the e.List item in the Contributor application. For
more information, see "Reordering Rights" (p. 108).
Unowned Items
If an e.List item has not been opened for edit, it is unowned. After it is opened for edit, the user,
group, or role that opened it is the owner.
Current Owner
The current owner is shown in the Contributor application and is the user, group, or role who is editing, or who last opened, an e.List item for edit. After opening the e.List item, they can then choose whether to edit it, depending on the settings.
Someone can become the current owner by taking ownership of an e.List item from another user.
Note: After subsequent Go to Productions, the current owner is the last user, group, or role to have
edited the e.List item. The current owner is not reset.
Steps
1. Click Development, e.List and Rights and then click either e.List, or Rights.
3. In the appropriate tab, type the name of the source file, or browse for it.
4. Click Import.
5. If your file contains a header row, click the First row contains column headers box. If you browse for files, the header row is automatically detected.
6. Click Delete undefined items to delete existing e.List items or rights that are not included in the file that is to be imported. For more information, see "Delete Undefined Items Option" (p. 94).
7. Click Trim leading and trailing whitespace to remove extra spaces at the beginning and end of text strings on import.
WARNING: Circular reference at row x column y EListItemName. This occurs if e.List item a is the parent of e.List item b and e.List item b is the parent of e.List item a.
ERROR: Duplicate item at row x column y Username. Duplicate items are not allowed. This row will not be imported.
ERROR: Duplicate user logon at row x column y Userlogon. This row is not imported.
ERROR: Empty item name at row x column y. e.List item names are mandatory in the e.List import file and the rights import file.
ERROR: Invalid characters (ASCII control characters not permitted) at row x column y Illegal character z. See "Illegal Characters" (p. 387).
WARNING: Invalid parent name at row x column y. This message appears if an e.List item does not have a valid parent. The e.List item is still imported, but it does not have a parent (and so is not part of the hierarchy).
ERROR: Item name too long (maximum 100 characters) at row x column y. Item names and captions have a limit of 100 characters.
WARNING: Review depth greater than view depth at row x column y. Review depth cannot be greater than view depth. View depth is increased to review depth.
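Several of these conditions can be checked before importing. The following sketch is illustrative only: it assumes rows have already been parsed into (name, parent) pairs and implements simplified versions of the checks above (it detects only two-item circular references):

```python
# Pre-validate e.List import rows before running the import. Checks:
# empty names, duplicates (case-insensitive, as the Administration
# Console treats them), control characters, over-long names, and
# simple two-item circular references.
def validate(rows):
    errors, seen, parent_of = [], set(), {}
    for i, (name, parent) in enumerate(rows, start=1):
        if not name:
            errors.append(f"Empty item name at row {i}")
            continue
        if name.lower() in seen:
            errors.append(f"Duplicate item at row {i}: {name}")
        if any(ord(c) < 32 for c in name):
            errors.append(f"Invalid characters at row {i}: {name!r}")
        if len(name) > 100:
            errors.append(f"Item name too long at row {i}: {name}")
        seen.add(name.lower())
        parent_of[name] = parent
    for name, parent in parent_of.items():
        if name != parent and parent_of.get(parent) == name and name < parent:
            errors.append(f"Circular reference: {name} <-> {parent}")
    return errors

rows = [("Total", "Total"), ("A", "B"), ("B", "A"), ("", "Total"), ("a", "Total")]
for message in validate(rows):
    print(message)
```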
Find matches from the beginning of the string. For example, if you want to find 001 New York, typing York will not find this string. Instead, type 001 New.
Steps
1. Select the column to search in, for example, Item Display Name.
3. Click Find.
If you delete e.List items from the e.List window, any children of these items are also removed.
You can delete the entire e.List and rights by importing files containing only headings and selecting the Delete undefined items check box. However, if you import a file that is completely blank (no headings), you receive warnings that compulsory columns are missing, and the e.List or rights remain unchanged.
EListItemCaption (p. 96) — maps to Item Display Name. Compulsory, but may be left blank.
EListItemName
A unique identifier for each e.List item. This is an editable box and is case sensitive.
The following constraints apply:
● Must not be empty.
● Must not contain control characters, that is, below ASCII code 32, see "Illegal
Characters" (p. 387).
● Must be unique. Although the name is case sensitive, names that differ only in case are still treated as duplicates; the characters themselves must be unique.
EListItemParentName
This column identifies which e.List item is the parent by referring to the e.List item name. The top
reviewer item refers to its own e.List item name as the parent name. This is case sensitive.
EListItemCaption
The name of the e.List item as it appears in the Contributor application.
The following constraints apply:
● May be empty (but will give a warning).
● Must not be more than 100 characters long (will be truncated, with a warning).
EListItemOrder
This is the order in which the e.List items appear in the application. This is optional; the default is the order in the file.
EListItemViewDepth
The View depth column indicates how far down a hierarchy a user can view the submissions of
planners and reviewers.
Defaults to 1.
EListItemReviewDepth
The Review depth column indicates how far down a hierarchy a reviewer can reject, annotate and
edit (if they have appropriate rights) contributions and reject and annotate submissions of reviewers.
The following values may be used:
● -1 indicates all descendant hierarchy levels.
Defaults to 1.
EListItemIsPublished
This indicates whether an e.List item will be published. Possible values are Yes, Y, No, and N (not case sensitive).
Defaults to No.
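A minimal import file using these columns might be produced as follows. This is a hedged sketch: the tab delimiter, file name, and sample values are assumptions for illustration, not a documented format, so check what your Administration Console expects.

```python
import csv

# Write a small, hypothetical e.List import file using the columns
# described above. Tab delimiter and header row are assumptions.
columns = ["EListItemName", "EListItemParentName", "EListItemCaption",
           "EListItemViewDepth", "EListItemReviewDepth", "EListItemIsPublished"]
rows = [
    # The top reviewer item names itself as its parent.
    ("AllDepts", "AllDepts", "All Departments", -1, -1, "No"),
    ("Ops",      "AllDepts", "Operations",       1,  1, "No"),
    ("Sales",    "Ops",      "Sales",            1,  1, "Yes"),
]
with open("elist_import.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(columns)
    writer.writerows(rows)
```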
Steps
1. In the Administration Console tree, click the application name, Development, e.List and Rights
and then either e.List, or Rights.
2. Click Export.
3. Enter or browse for the filename and location under e.List, or Rights, for example: c:\temp\export_rights.txt.
4. Select the Include column headings box if you want the column headings exported.
5. Select the Export box for each file you want to export.
6. Click OK.
Steps
1. In the application tree, click Development, e.List and Rights and then click e.List.
2. Expand the parent e.List item and click in the e.List table where you want to insert the new
item. The new item appears above the item you clicked.
Details Description
Item Display Name The name of the e.List item (typically a business location) as it is
displayed to the user in the Web browser.
Item Id The name of the e.List item when imported from an external resource
such as a general ledger system or datastore. This may be a code or
a name independent of the e.List item name.
You can edit the item Id, but it must be unique within the application.
This name is used when publishing data.
View Depth Indicates how far down a hierarchy a user can view submissions from
planners and reviewers.
To assign view depth:
1. Click the View Depth cell of the appropriate e.List item.
Note: When importing the e.List, All and None are represented by
-1 and 0 respectively.
For more information, see "View Depth Example" (p. 102).
Review Depth Indicates how far down a hierarchy a reviewer can reject (or edit if allowed) submissions from planners and reviewers.
Note that this setting is also influenced by the user's rights and
whether Reviewer Edit is allowed in Application Options (p. 72).
To assign review depth:
1. Click the Review Depth cell of the appropriate e.List item.
Note: When importing the e.List, All and None are represented by
-1 and 0 respectively.
For more information, see "Review Depth Example" (p. 102).
If a contribution e.List item becomes a review e.List item, the rights of a user assigned to that e.List
item are changed to the equivalent review rights as shown in the following table.
Contribution right  Review right
View  View
Edit  Review
Submit  Submit
Steps
1. Click the e.List item.
The up and down arrows change the order of items, and the left and right arrows demote and
promote items in the e.List.
Step
● In the e.List screen, click the e.List item and then click Delete and Save.
Step
● In the e.List window, click an item and then click Preview.
User, group, or role The user, group, or role assigned to the e.List item.
Rights The level of rights that the user, group, or role has to the e.List
item. See "Rights" (p. 103) for more information.
Inherit from If the rights have been directly assigned to the user, group, or role,
this cell will be blank. If the rights have been inherited, this indicates
the name of the e.List item the rights have been inherited from.
You can save the information on this screen to a text file by clicking Save to file and entering a file
name and location.
● If the review depth for Country A is 1, then the owner can only review the regions (the children
of your e.List item).
● If the review depth for Country A is 2, then the owner can only review the regions and the cost
centers.
● If the review depth for Country A is 3, then the owner can review the regions, cost centers, and
divisions.
● If the review depth for Country A is All (-1 in the import file), then the owner can review all descendant e.List items. You can only set the review depth to All (-1) if the view depth is also set to All (-1).
● If the view depth for Country A is 1, then the owner can only view the regions (the children of
your e.List item).
● If the view depth for Country A is 2, then the owner can only view the regions and the cost
centers.
● If the view depth for Country A is 3, then the owner can view the regions, cost centers, and
divisions.
● If the view depth for Country A is All (-1 in the import file), then the owner can view all descendant e.List items.
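The view and review depth rules in these examples amount to a bounded traversal of the hierarchy. The following sketch uses a hypothetical Country A hierarchy, with -1 meaning all descendant levels:

```python
# Collect the e.List items visible from a starting item given a view
# (or review) depth, where depth -1 means all descendant levels.
children = {
    "Country A": ["Region 1", "Region 2"],
    "Region 1": ["Cost Center 1"],
    "Cost Center 1": ["Division 1"],
}

def visible(item, depth):
    """Return descendants of `item` down to `depth` levels (-1 = all)."""
    if depth == 0:
        return []
    out = []
    for child in children.get(item, []):
        out.append(child)
        out.extend(visible(child, depth if depth == -1 else depth - 1))
    return out

print(visible("Country A", 1))   # ['Region 1', 'Region 2']
print(visible("Country A", 2))   # ['Region 1', 'Cost Center 1', 'Region 2']
print(visible("Country A", -1))  # every descendant
```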
Rights by User
Rights by User displays the level of rights for a user to an e.List item. It can be displayed by selecting
items from the Rights screen and clicking Rights Summary.
A table with the following information is displayed:
Details Description
User, group, or role The name of the user, group, or role assigned to the e.List
item.
Rights The level of rights that a user has to the e.List item. For more
information, see "Rights" (p. 103).
Inherit from If the rights have been directly assigned, this cell will be
blank. If the rights have been inherited, this indicates the
name of the e.List item the rights have been inherited from.
You can save the information to a text file by clicking Save to file and entering a file name and
location.
Rights
Rights for planners are determined by the setting in the rights screen. By assigning rights, you can
configure user roles in the Administration Console, determining whether users can view, edit, review,
and submit.
Typically, you import a rights file. But you can also manually insert rights, and modify or delete
existing rights. If you want to make changes to the rights file, we recommend that you export the
file to ensure you have correct information, modify this file using an external tool such as Excel
and then import the file again.
You can assign more than one user, group, or role to an e.List item. For more information, see
"Multiple Owners of e.List Items" (p. 91).
Rights for reviewers are determined by the following settings:
● The rights setting.
● The view and review depth setting. Review depth gives the right to reject (or edit if reviewer
edit is on) to a specified depth. This is set in the e.List screen.
● The Allow Reviewer Edit option in the Application Options screen (p. 72).
● If a reviewer has two different levels of rights for the same e.List item, the higher right applies.
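The last point, that the higher of two resolved rights applies, can be sketched as follows. The numeric ranking is an assumption for illustration only (View lowest, Submit highest, with Edit and Review at the same middle level):

```python
# When a user resolves to two different rights for the same e.List
# item, the higher right applies. The ranking below is an assumed
# illustration: View < Edit/Review < Submit.
RANK = {"View": 0, "Edit": 1, "Review": 1, "Submit": 2}

def effective_right(rights):
    """Return the highest of several resolved rights."""
    return max(rights, key=lambda r: RANK[r])

print(effective_right(["View", "Review"]))   # Review
print(effective_right(["Submit", "View"]))   # Submit
```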
Rights may be assigned directly or inherited. See the following example:
If the reviewer is assigned with submit rights to a parent e.List item which has a review depth of 1, and reviewer edit is allowed, the reviewer has the right to view, edit, reject, and submit the child e.List item. These are inherited rights.
The reviewer is also directly assigned to the child item with view rights. These are declared, or
directly assigned rights.
You can only directly assign one set of rights to a user for a specific e.List item. If you insert a
duplicate record you receive a warning, and the rights that appear lower down in the rights table
are deleted.
Tip: If you specify more than one reviewer for an e.List item, in the workflow page for the
Contributor application, an email link named email all is displayed. If you specify one reviewer,
the name chosen in the User, Group, or Role column is displayed, and you should ensure that a
descriptive name for the user, group or role is chosen.
Submit Rights
An e.List item can have no user with submit rights (that is, no user is assigned or has resolved rights
through Reviewer depth to the e.List item). If this is the case, contributions are not submitted and
the item and its parents in the hierarchy cannot be locked.
We recommend that the administrator reviews the rights screen to ensure every e.List item has at
least one user with submit rights (who may be a reviewer with appropriate rights).
Inherited Rights
If the reviewer is assigned with submit rights to a parent e.List item which has a review depth of 1,
reviewer edit is allowed, the reviewer will have the right to view, edit (contribution e.List item only)
reject and submit the child e.List item. These are inherited rights.
The following tables explain what rights mean when they are assigned to planners and to reviewers. They also explain how the rights can be affected by different settings.
Rights Reviewers
● submit or reject child review e.List items if the e.List item they are assigned to
has sufficient review depth
● annotate their own review e.List item and children to review depth
● annotate their own review e.List item and children to review depth
Note: when Reviewer edit is off, reviewers cannot edit contribution items
● submit and reject child e.List items if the e.List item they are assigned to has
sufficient review depth
● annotate their own review e.List item and children to review depth
With reviewer edit off, the reviewer cannot edit or submit any e.List items, but can
● reject child e.List items if the e.List item they are assigned to has sufficient
review depth
● annotate their own review e.List item and children to review depth
View View assigned e.List items and children to view depth. Cannot annotate, reject, edit,
or submit.
Rights  Planner
Submit  View, edit and save, submit, and annotate assigned e.List items.
Edit  View and edit assigned contribution e.List items. Can annotate. Cannot submit.
EListItemName (Item ID)
Identifies the e.List item that you are setting rights for. This must match an e.List item id in the e.List import file and is case sensitive.
CamObjectName (User, Group, Role)
The display name of the user, group, or role as it appears in Cognos 8.
EListItemUserRights
The following rights can be used. These are not case sensitive.
● Edit: view and edit, but cannot submit. Applies to contribution e.List items.
● Submit: view, save changes, and submit. Applies to contribution and review e.List items.
If the import file specifies Review for a contribution e.List item, or Edit for a review e.List item, on import the Administration Console changes the settings so that Review becomes Edit for a planner and Edit becomes Review for a reviewer.
For more detail, see "Rights by User" (p. 103).
You can assign more than one user to an e.List item. If more than one user is assigned to an e.List
item with rights higher than View, the user that is first in the import file is the initial owner of the
e.List item in the Contributor Web application. When you insert rights manually, they are appended
to the bottom of the Rights table and it is not possible to reorder the rights at e.List level. The only
way to reorder the rights at e.List level is to export the file, delete the existing rights, modify the
import file and import the new file.
Note: If you have imported a rights file containing three columns, and you then import a new rights file containing only the first two columns, any new rights added take the default value of Submit, and any rights that existed in both the old and the new files remain unchanged. If the new rights file contains three columns, any rights existing in both the old and the new files are overwritten with the new rights.
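The merge behavior in the note above can be sketched as follows. This is a hedged illustration: the keys and in-memory representation are hypothetical, since the real import works on files:

```python
# Sketch of the rights-import merge rules described above: a file
# without the rights column leaves existing rights unchanged and
# defaults new entries to Submit; a file with the rights column
# overwrites rights for entries present in both.
def merge_rights(existing, imported, has_rights_column):
    merged = dict(existing)
    for key, right in imported.items():
        if key in existing and not has_rights_column:
            continue  # existing right is kept
        merged[key] = right if has_rights_column else "Submit"
    return merged

existing = {("Sales", "jdoe"): "Edit"}
two_col = {("Sales", "jdoe"): None, ("Marketing", "asmith"): None}
print(merge_rights(existing, two_col, has_rights_column=False))
# {('Sales', 'jdoe'): 'Edit', ('Marketing', 'asmith'): 'Submit'}
```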
For more information, see "Multiple Owners of e.List Items" (p. 91) and "Rights by e.List
Item" (p. 101).
Steps
1. In the tree for the application, click Development, e.List and Rights and then Rights.
2. Click Insert.
3. Select the e.List item by clicking in the Item Display Name cell.
Tip: You can choose to display the complete list of users, groups, or roles by selecting Show
all descendants. Depending on the number of items in the list, this may take a while to
display. If this box is not selected, only the direct members of the group or role are shown.
● Select the group or role. Any users who are members of the group or role that you select are added to the list.
5. Select the rights by clicking the Rights cell. If the e.List item is a Review item, you can choose
View, Review, or Submit. If the e.List item is a contribution item, the rights you can select are
View, Edit, or Submit. For more information, see "Actions Allowed for Contribution e.List
items" (p. 105).
The Item Id is the external identifier for the e.List item and is the key for importing.
Reordering Rights
If more than one user, group, or role is assigned to an e.List item with rights higher than View, the
user, group, or role that is first in the import file is the initial owner of the e.List item in the
Contributor application. When you insert rights manually, they are appended to the bottom of the
Rights table and it is not possible to reorder the rights at e.List level. The only way to reorder the
rights at e.List level is to export the file, modify the import file and import the new file.
Viewing Rights
You can view rights listed by e.List item and rights listed by user by selecting one or more lines in
the Rights table and clicking Rights Summary.
You can print and save to file the rights by e.List (p. 101) and rights by user (p. 103).
Validating Users, Groups and Roles in the Application Model and Database
You can validate users, groups, and roles that are used by the application model and database
against the Cognos 8 namespace.
The validate function checks name information used by the Contributor Administration Console
against the Cognos 8 namespace. If any names have been changed or removed, you can update the
information used by the Contributor Administration Console to match the namespace.
If there are no invalid items, only changed items, then only the database table is updated, and no cut-down models job runs during Go to Production. If there are invalid items as well as changes, the model is also updated, and a cut-down models job runs during the next Go to Production.
Steps
1. In the tree for the application, click Development, e.List and Rights and then Rights.
2. Click Validate.
3. If there are invalid or out-of-date users, groups, or roles, and you want to update them, click
Update or click Cancel.
Chapter 7: Managing User Access to Data
You control access to cells in cubes, whole cubes, and assumption cubes using access tables. Saved
selections are groups of dimension items that support and simplify access tables.
For example, in an Overheads dimension, you might want to show only those items relating to
travel expenses. This allows you to show users only those items that are relevant to them.
You define access for contribution e.List items, but access is automatically derived for review e.List
items.
The key difference between using saved selections and defining access directly in access tables is
that saved selections created on dimensions within an application are dynamic. That is, they change
when definitions in the dimension upon which they are made are changed (when an application is
synchronized following changes to the Cognos 8 Planning - Analyst model).
Imagine the following scenario:
You have a dimension that contains:
● Product 1
● Product 2
A saved selection is made that is the enlargement of the "Total Products" subtotal.
If a change is made to the Analyst model which modifies the dimension to now contain:
● Product 1
● Product 2
● Product 3
The saved selection, which is the enlargement of the "Total Products" subtotal, now includes all
three products without any change being made to it. In other words, it is dynamic and changes as
the definitions in the application change following synchronization.
Saved Selections
When you create saved selections, you name and save selections of items from a dimension. A
selection is a collection of dimension items, and could be lists of:
● Products sold by a particular outlet
● Product/Customer combinations
● Channel/Market combinations
● Employee lists
Once you have created a saved selection, you can set levels of access to this item. For more
information, see "Creating Access Tables" (p. 119).
You cannot explicitly define an access table on a review e.List item. If you create a saved selection
on the dimension selected as the e.List, you cannot select any review e.List items.
Steps
1. In the application tree, click Development, Access Tables and Selections, and Saved Selections.
2. Click New in the Saved Selections form, and enter details as shown below:
Dimension Click this box to show a list of dimensions, then click one.
3. To edit the selection rules, click in the Selection Name or Dimension column of the saved
selection, then click Edit. This opens the Dimension Selection Editor, see "Editing Saved
Selections" (p. 112) for more information.
● Choose the items you want to show from the Show list box.
Steps
1. Open the Edit window.
In Saved Selections, click on the selection you are going to edit, then click Edit.
The results are displayed under Show. A check mark indicates either a first or second selection,
or a result (=).
Detail Only Detail items are displayed, that is, all dimension items except
for calculations.
Select  = to select items that equal the criteria, or <> to select items that do not equal the criteria.
Results Selection Displays the results of the first and second selections.
3. To make a selection, click one of the following options from the First Selection list box:
All Selects all items in the dimension. This is useful when used in
combination with Second Selections, for example:
● First Selection: All
Detail Selects all detail items (all dimension items that are not calculations). The benefit of using this item is that if the list of detail items changes, the saved selection is updated automatically.
Calculate Selects all calculated items. If the list of calculated items changes, the saved selection is updated automatically.
List of items Click the items to be selected then move them to the First
Selection list box by clicking the right arrow.
If the list changes, the saved selection must be updated.
Enlarge Includes all items that make up a calculated item, either directly
or indirectly.
Click one or more calculated items and then move them to the
First Selection list box by clicking the right arrow.
If this is a simple saved selection, click OK to close the Edit Saved Selection window and then
click Save to save the selection.
Except All items selected in the first selection, except those selected in the
second selection.
Intersect Items that are the same in both the first selection and second
selections.
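The combination rules above behave like set operations on the underlying dimension items. A small sketch with hypothetical product names:

```python
# Saved-selection combinations behave like set operations on
# dimension items (item names here are hypothetical).
first = {"Product 1", "Product 2", "Product 3"}
second = {"Product 2"}

except_result = first - second     # Except: in first but not second
intersect_result = first & second  # Intersect: in both selections

print(sorted(except_result))     # ['Product 1', 'Product 3']
print(sorted(intersect_result))  # ['Product 2']
```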
● Click Save.
● In the saved selections window, click the selection and then click Delete.
Access Tables
Create access tables to determine the level of access users have to cubes, saved selections, and
dimension items. Access tables can reduce the volume of data a user has to download, especially
when used in conjunction with cut-down models and the No Data setting. For more information,
see "Cut-down Models" (p. 134).
You can set access levels for an entire cube (contribution or assumption cubes) or for specific
selections of cells in a cube (contribution cubes only).
For entire cubes, you can choose Write, Hidden, or Read for contribution cubes and Hidden or
Read for assumption cubes (p. 116). Access set at cube level applies to all planners.
Access to specific selections of cells is controlled using access tables. Do this by choosing one or
more dimensions, and defining access to sets of items in these dimensions.
If you need cube-level access to vary by planner, select the Include e.List option. You must also
include one of the other dimensions of the cube (preferably the smallest), and select All items for
this dimension when creating the access table.
Access tables using more than two dimensions (this includes the e.List) should be avoided where
possible. This is because when you perform an action in the Administration Console that makes
use of the access tables, the system needs to resolve the access tables in order to determine what
access level applies to each cell, and which cells have data. If an access table is very large, this can
slow down the system considerably. For more information, see "Large Access Tables" (p. 125).
It is not possible to create planner-specific views of assumption cubes. If this is required, you should
convert the assumption cube to a contribution cube in Analyst by adding the placeholder e.List.
Then you should move any assumption data present in the Analyst D-Cube into Contributor using
Analyst<>Contributor links (p. 347).
You cannot explicitly define an access table on a review e.List item. If you create a saved selection
on the dimension selected as the e.List, you cannot select any review e.List items.
Assumption Cubes
An assumption cube contains data that is moved into the Contributor application on application creation and on synchronize.
● They do not contain the e.List, therefore data applies to all e.List items.
Other Cubes
These are all other cubes used in Contributor.
● They must contain the e.List.
● Are writeable by default, but can also be set to be read-only, contain no data, or be hidden.
● You can set access to selections of cubes, whole cubes, and to dimension items.
● Can be set to planner-only cubes (p. 75) which hides the cube from the reviewer.
When you create an access table, you select one or more dimensions, and define access to sets of
items in these dimensions. By default, access tables include the e.List, so you can vary access setting
by planner. You can opt not to include an e.List in an access table, in which case the setting applies
to all planners, for example, you might want to make a budget version read-only for everyone. You
cannot have planner specific access settings for assumption cubes. This is because they do not
contain the e.List.
● Read
● Hidden
● No Data
Access rules are resolved in the order in which they appear in the table. If more than one rule is
applied to an item, the last access rule assigned is given priority. For example, you might want to
set all items to No Data, and then subsequently set individual items to Read, Write, or Hidden.
If you have defined more than one access table for a cube, the access setting that will apply is the
lowest level of access amongst all the access tables, for example, a hidden access setting has priority
over write.
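These resolution rules can be sketched as follows. This is an illustration only: the relative ordering of Hidden and No Data is an assumption here, since the text above only states that the lowest access level among the tables applies (for example, Hidden over Write):

```python
# Resolve the access level for one item: within a table, rules are
# applied in order and the last matching rule wins; across tables,
# the lowest resulting level wins. The restrictiveness ordering
# (Write > Read > Hidden > No Data) is an assumption for this sketch.
LEVEL = {"Write": 3, "Read": 2, "Hidden": 1, "No Data": 0}

def resolve_table(rules, item):
    """Apply one access table's rules in order; the last match wins."""
    access = "Write"  # contribution cubes default to Write
    for items, level in rules:
        if item in items:
            access = level
    return access

def resolve_item(tables, item):
    """Across tables, the lowest access level applies."""
    return min((resolve_table(rules, item) for rules in tables), key=LEVEL.get)

table1 = [({"Jan", "Feb"}, "No Data"), ({"Jan"}, "Read")]  # last rule wins
table2 = [({"Jan"}, "Hidden")]
print(resolve_table(table1, "Jan"))           # Read
print(resolve_item([table1, table2], "Jan"))  # Hidden
```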
No two access tables that control the same dimension can be applied to the same cube.
You receive a warning if you create an access table that contains more than two dimensions as this
can slow down the Administration machine if you import large amounts of data.
If no access levels are set, the following defaults apply:
● All cubes apart from assumptions cubes have a global access level of Write.
● Assumption cubes (cubes used to bring data into an application) have a global access level of
Read.
Selecting access levels of Read, Write, and Hidden has no effect on the way links or importing data work. The access level No Data does affect links and importing data.
Write
This is the default for all cells in a planner’s model. Write access means that users with appropriate
rights can write to this item, provided the e.List item is not locked (the locked state occurs when
data is submitted to a reviewer).
This option cannot be set on assumption cubes.
You can breakback from a writable calculation, unless all detail items used by the calculation are
set to read or hidden.
If a calculation uses other calculated items, and these calculated items are set to read or hidden, breakback is possible unless all the items used by the calculation, detail or calculated, are set to read.
Read
Cells marked as read are visible but cannot be changed by the planner. For example, a planner
cannot type into any read-only cells, paste will miss out read-only cells, and breakback will treat
read-only detail cells as held. However, planners can change values in read-only calculated cells;
read-only calculated items will still be recalculated (they are not treated as held). This extends to
breakback: if a writeable calculation, for example, Grand Total, uses some read-only calculated
items such as Total Group A, a planner can breakback from the writable calculation (Grand Total)
through the read-only calculations (Total Group A). This is only possible when at least some of the
items feeding a calculation are writeable. It is never possible to change a cell that is a D-Link target.
Detail cells targeted by D-Links are read-only as normal and will be treated as held by breakback.
Calculated cells targeted by D-Links are also Read-only, but these cells cannot be changed by
forward calculation or breakback due to planner entry.
Read is the default value for the following:
● All cells in assumption cubes.
● All cells targeted by D-Links. These can never be changed directly by the planner.
● Calculated items are read-only when none of their precedent items are writable. For example,
a subtotal will automatically be Read-only if all items summed by the subtotal are read-only
(whether they are read-only due to access tables or D-Link targets, or due to submission of
contribution e.List items).
● Calculated items are set to read-only when breakback is not possible because of the type of calculation: in particular, the results and outputs of BiFs and constant calculations are always read-only.
Note that a planner can never change read-only detail cells, but read-only totals are not held.
Hidden
Hidden cells are not visible to a planner, but otherwise they are treated in the same way as read-only
cells. For example, breakback does not target hidden detail cells, but goes through hidden calculated
cells if at least some of the detail cells the calculation uses are writeable. Hidden calculated cells are
recalculated, and so on. This means that intermediate calculations, for example, in a cube, or even
entire intermediate calculation cubes, can be hidden without affecting model calculation integrity.
If all cells in a cube are hidden for a particular planner, the cube is removed entirely from the Web browser view, but the data is still downloaded to planners.
No Data
No Data cells do not contain any data. When used by calculations they are assumed to contain
zero.
No Data access settings, regardless of whether cut-down models are used, can affect:
● The volume of data processed in memory, which in turn affects calculation speed. The calculation and link engines do not process No Data cells where possible, so No Data areas generally reduce memory requirements and speed up recalculation.
● The data block size, which in turn affects download and upload speed (when opening the grid and when saving or submitting) and can reduce network traffic. Smaller blocks also speed up aggregation when data is saved, and hence reduce the load on the run time server components.
No Data access settings used in conjunction with cut-down models can affect the model definition size. This can improve download speed on opening and reduce network traffic, but it also increases the time it takes to run the Go to Production process, because during that process:
● Items that are entirely No Data for all cubes in the model are identified.
● Items are removed from dimensions where possible (see restrictions below).
As a rough guide, each e.List item adds approximately 1 KB (where you have roughly one user per e.List item), and each dimension item adds between 100 and 250 bytes. The e.List item is larger because it contains extra information.
Reducing the data block size affects download and upload speed (when opening the grid and when saving or submitting) and can reduce network traffic. It also speeds up aggregation up the e.List when contribution data is saved, and hence reduces the load on the run time server components.
Cubes with access tables: assign access levels to dimensions and saved selections, either using the Administration Console or by importing simple access tables.
Cubes without access tables: assign an access level to the whole cube, if it has no individual access tables.
Assumption cubes: set access levels for the whole cube (you cannot create individual access tables for assumption cubes).
Note: If you create an access table after you have imported data, the entire import queue is deleted.
Making changes to access tables, e.List items, or saved selections that affect the pattern of no data in a cube can also result in data loss; see "Changes to Access Tables That Cause a Reconcile Job to Be Run" (p. 133) for more information.
Before you can set access levels, you must first have imported an e.List.
Steps
1. In the application's tree, click Development, Access Tables and Selections, and Access Tables.
● Assign an access level to a whole cube, if it has no individual access tables. For more
information, see "Cubes Without Access Tables" (p. 122).
● Set access levels for the whole cube (you cannot create individual access tables for
assumption cubes). For more information, see "Assumption Cubes" (p. 122).
You can choose to make the access table applicable to all or part of the e.List. If you click Include
e.List, you can select which parts of the e.List the access table is applicable to. If you do not include
this option, it will apply to all parts of the e.List.
The default value for cubes with access tables is Write.
You can also import access tables, see "Importing Access Tables" (p. 122).
3. Select one or more cubes from the Candidate Cubes list. Normally you will apply an access
table to all candidate cubes.
4. Ensure that Create rule based access table is selected (this is the default).
5. Choose whether to include the e.List. The default is for the e.List not to be included, meaning that the access settings apply across the whole e.List.
Note: If you create access level rules with the e.List included, clear the Include e.List option
and save, then subsequently decide to include the e.List again, you must reenter any e.List
specific access settings.
6. Click Add. This adds your selection to the list of access tables.
Tip: In the access tables list, you can edit the name of the access table.
7. Select one or more of the rows that you have just added to the access tables list and click Edit.
The next step is to assign access to dimension items, or to saved selections. For more information,
see Exporting Access Tables.
After you have edited the access table, you should save. If there is data in the import data queue,
you will receive a warning that the import data queue will be deleted. You may also receive a
warning such as:
Saving these changes will require a reconcile job to run next time you Go to Production. Do
you want to continue?
Reconciliation ensures that the copy of the application that the user accesses on the Web is up
to date. If you click Yes, the changes are saved and when you run Go to Production, a
reconciliation job is created. If you click No, the changes to the access table window are
discarded.
Steps
1. Select the access table that you want to change.
3. Check those cubes you want the access table to apply to.
Steps
1. Select the access level, and then click the saved selection, or dimension items from each list,
and e.List items (if included).
2. In any one list of saved selections and dimension items, you can click either one saved selection,
or a combination of dimension items. An <<ALL>> selection applies a rule to all items in the
dimension or e.List.
4. Repeat until you have created all the rules for the access table.
These access rules are resolved in the order in which they were assigned. If more than one rule
is applied to an item, the last access rule assigned is given priority and will apply. Use the arrows
to change the order in which access rules apply. If no rules are set, an access level of Write
applies.
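The last-rule-wins resolution described above can be sketched in a few lines (a hypothetical illustration only, not Contributor's actual engine; the rule and cell structures are invented for the example):

```python
# Sketch of last-rule-wins access resolution (hypothetical; not the
# actual Contributor implementation). Each rule maps dimension-item
# selections to an access level; later rules override earlier ones.

DEFAULT_LEVEL = "Write"  # applies when no rule matches

def resolve_access(rules, cell):
    """Return the access level for a cell, honoring rule order.

    rules -- list of (selection, level) pairs, in the order assigned;
             a selection maps dimension -> set of items, where the
             string "<<ALL>>" matches every item in that dimension.
    cell  -- dict of dimension -> item for one cell.
    """
    level = DEFAULT_LEVEL
    for selection, rule_level in rules:
        if all(items == "<<ALL>>" or cell[dim] in items
               for dim, items in selection.items()):
            level = rule_level  # a later matching rule wins
    return level

rules = [
    ({"Versions": "<<ALL>>"}, "Read"),
    ({"Versions": {"Budget version 1"}}, "Write"),
]
print(resolve_access(rules, {"Versions": "Budget version 1"}))  # Write
print(resolve_access(rules, {"Versions": "Budget version 2"}))  # Read
```

Reordering the two rules would reverse the result for Budget version 1, which is why the arrow buttons for changing rule order matter.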
Warning
Once you have created access level rules for an access table, if you decide to remove the e.List or include it again, you will lose any rules that you have set for this table and will have to reset them. See Rules for Access Tables for more information.
Note that the Reset Development to Production button removes all changes made since the last time Go to Production was run.
Steps
1. Check the log file. Click the Tools, Show Local Log File menu to see if it shows an "Out of
memory" message.
2. If this happens, you can work around it by clicking the Reset Development to Production button in the toolbar and reapplying any changes made.
Steps
1. In Access Tables, click in the appropriate Access Level cell.
Assumption Cubes
Any assumption cubes in the application are listed in the Assumption Cubes section. The default
access value for assumption cubes is Read, and the other available value is Hidden. You can change
the default access value.
Steps
1. Click in the appropriate Access Level cell.
2. Click either Read or Hidden from the list. The default level is Read.
Assumption cubes contain data that is moved into the Contributor application when you run
the Go to Production process and when you synchronize. They do not contain the e.List.
You can also automate the import of Access Tables. See "Import Access Table" (p. 204) for more
information.
Steps
1. Open the Access Tables window by clicking Development, Access Tables and Selections, and
Access Tables.
2. In the Access Table window, click one or more dimensions in Available Dimensions. The
Candidate Cubes list shows which cubes contain all the selected dimensions with no conflicting
access tables. If you have selected more than one dimension, only those cubes that contain all
these dimensions are selectable.
3. Select one or more cubes from the Candidate Cubes list. Normally, you apply an access table
to all candidate cubes.
4. Select the Import Access Table option, and Include e.List if required.
5. Click Add. After you click Add, you cannot change whether a table is rule-based or imported. A check mark in the access table grid indicates that Import access table is selected.
Note: You can use this dialog box to set the base access level without importing an access table.
7. Enter the file name and location for the access table import file.
9. Select the file format. If the file format is Excel Worksheet, enter the name of the worksheet
containing the access table.
11. Delete undefined settings: if an access table file was previously imported for this access table and you are importing a new one, existing settings are updated with the new settings specified. Previous settings that do not exist in the new file are kept, unless Delete undefined settings is checked, in which case they are deleted.
12. Select the Base access level. This is the default level that is applied to any undefined items. No
Data is the default access level. See "Access Level Definitions" (p. 116) for more information.
14. To view the access table, click View. You can print the access table or save it to a file. You can also export the access table; see "Exporting Access Tables" (p. 125).
● A column for every dimension that the access table applies to (mandatory). The names of dimension items must be identical in spelling and case to those in the Analyst model.
● A column containing e.List items. If omitted, the access level applies to the whole e.List.
● A column containing access levels (optional). The following access levels can be set: Hidden, Read, Write, and No Data (these are not case sensitive). If omitted, a default of Write applies.
Note that you cannot import saved selections or rule-based access tables. Each line of the access table (excluding the headings, if used) contains the following information:
dimension item name (from dimension a) [tab] dimension item name (from dimension b) [tab] e.List item (if e.List included) [tab] AccessLevel
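A line in this format could be parsed as sketched below (a hypothetical illustration; the dimension names and the sample row are invented, and Contributor performs its own validation on import):

```python
# Sketch of parsing one tab-separated access table import line
# (hypothetical; not Contributor's import code).

VALID_LEVELS = {"hidden", "read", "write", "no data"}

def parse_line(line, dimensions, elist_included):
    """Split an import line into dimension items, e.List item, level."""
    fields = line.rstrip("\n").split("\t")
    n = len(dimensions)
    items = dict(zip(dimensions, fields[:n]))
    rest = fields[n:]
    elist_item = rest.pop(0) if elist_included else None
    # The access level column is optional; Write is the default.
    level = rest[0] if rest else "Write"
    if level.lower() not in VALID_LEVELS:
        raise ValueError("unknown access level: " + level)
    return items, elist_item, level

row = parse_line("Budget version 1\tRetail\tStore 001\tread",
                 ["Versions", "Channels"], elist_included=True)
print(row)
```

Note that the sketch, like the import itself, accepts access levels in any case but rejects values outside the four valid levels.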
Column Order
The required order for the dimension columns in an access table import file with no column headers
is the same as the dimension order of the access table. You can see this in the Access table setup
window:
This shows the order of dimensions as: Versions and Channels. In this case, the third column would
be the e.List, and the fourth column would contain the access levels.
If the import file contains column headings, the columns can be in any order.
Column Headings
The use of column headings in the import file is optional. If column headings are used, they should be:
● For each dimension column: the dimension name, with the same spelling and case as in the Analyst model.
● For the e.List column: the e.List name, with the same spelling and case as in the Analyst model.
● For the access level column: as shown above. The default is Write.
You cannot edit an imported access table within the Administration Console. To make changes,
you should edit the source file and import again.
Steps
1. Click Development, Access Tables and Selections, and Access Tables to open the access tables window.
2. Click the access table that you want to view and click View.
Note: The View button is not enabled for access tables created in the Administration Console (rule-based tables). To view rule-based access tables, click Edit.
Steps
1. Open the access tables window (click Development, Access Tables and Selections and then
Access Table) and click the access table to be exported.
2. Click Export.
3. Enter or browse for a file location and file name. You can export to a tab-separated text file.
5. Click OK to create the file and Close to close the dialog box.
● Access tables can increase the physical size of the cut-down model so that the benefit of using cut-down models is lost. This is because the access table needs to be resolved.
● If cut-down models are not used, resolving access tables on client machines can cause performance problems.
Examples
These examples set access to the following dimensions in a Revenue Plan cube:
The e.List (named Stores) contains these saved selections:
● High street
● Superstores
● Telesales Centers
These saved selections are subsets of the total (All). There are 1200 items in Stores.
Channels contains these saved selections:
● Discount
● Mail Order
These saved selections are subsets of the total (All). There are 12 items in Channels.
Products contains these saved selections:
● Sanders
● Drills
Note that not all items are included in these saved selections. There are 400 items in Products.
Example 1
This example shows two different ways of setting access where available Channels vary by store
and product selection is the same for all stores and channels. Tables A and B achieve this in the
most efficient way. See the following calculations:
The size of Access Table A is:
● 12 (Channels) x 1200 (Stores) x 4 (bytes) = 57.6 KB
This means that Access table C takes 22.98 MB more memory to resolve than tables A and B
together.
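The arithmetic behind these figures (4 bytes per resolved cell, multiplied across the dimensions each table uses) can be reproduced as follows; that table B covers only the 400-item Products dimension is an assumption inferred from the example's totals:

```python
# Reproduce the access table memory estimates from Example 1
# (4 bytes per resolved cell; table B covering Products only is
# an assumption inferred from the totals quoted in the text).

BYTES_PER_CELL = 4
channels, stores, products = 12, 1200, 400

table_a = channels * stores * BYTES_PER_CELL             # Channels x Stores
table_b = products * BYTES_PER_CELL                      # Products only
table_c = channels * stores * products * BYTES_PER_CELL  # all three combined

print(table_a / 1e3)                        # 57.6 (KB)
print(table_c / 1e6)                        # 23.04 (MB)
print((table_c - table_a - table_b) / 1e6)  # 22.9808, i.e. ~22.98 MB saved
```

The saving grows with the product of the dimension sizes, which is why splitting one multi-dimensional table into per-dimension tables pays off so quickly.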
Using two separate access tables is also easier to maintain. For example, if High Street stores started
selling through the Mail order channel, using two tables, you just add one line to Access table A.
But for access table C, you must add three lines.
[Access table listings not reproduced: column headings Channel and Store; Channel, Store, and Product; example rules Write (All), Read (Sanders), Hidden (Drills).]
Example 2
This shows two different ways of setting access where products vary by store and channels vary by
store.
In this case, the e.List (Store) must be included in both access tables. Superstores can write to Drills. Note that it is not necessary to put the line Write, Drills (Retail/Discount), Superstores into the access table; you can leave this out. It was added for illustrative purposes.
The size of Access Table D is:
● 12 (Channels) x 1200 (Stores) x 4 (bytes) = 57.6 KB
So even with the additional dimension in Access Table E, the combined total of Tables D and E of
1.98 MB is still 21.06 MB less than Access Table F.
Note that although tables D and E are separate, they interact. The channels that are shown are
dependent on which product is viewed, and the e.List item that is selected.
[Access table listings not reproduced: column headings Channel and Store; Products and Store.]
Superstores: Writeable Access to Drills for the Retail and Discount channels
You do not need to include the Write, Drills, Discount, Superstores line. This is included for
illustrative purposes.
Example
In the Versions dimension, item Budget version 1 is writable for the planner and item Budget version
2 is read-only. In the Expenses dimension, the item Telephone is writable and the item Donations
is hidden. The planner will get the following resolved access:
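The resolved access in this example is consistent with taking, for each cell, the most restrictive of the levels set by the two tables. The sketch below assumes that combination rule (an assumption of the illustration, not a statement of the engine's implementation):

```python
# Sketch of combining per-dimension access tables into resolved cell
# access (assumes the most restrictive level wins where tables
# overlap; hypothetical, not the Contributor engine itself).

RESTRICTIVENESS = {"Write": 0, "Read": 1, "Hidden": 2, "No Data": 3}

versions = {"Budget version 1": "Write", "Budget version 2": "Read"}
expenses = {"Telephone": "Write", "Donations": "Hidden"}

def combine(levels):
    # Pick the most restrictive of the applicable levels.
    return max(levels, key=lambda lvl: RESTRICTIVENESS[lvl])

for v, v_level in versions.items():
    for e, e_level in expenses.items():
        print(v, "/", e, "->", combine([v_level, e_level]))
```

Under that assumption, Budget version 1 / Telephone resolves to Write, Budget version 2 / Telephone to Read, and any cell on Donations to Hidden.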
When access to the cells of a cube needs to be controlled using more than one dimension (as in the example above), you must decide whether to use multiple access tables (one for each dimension) or one access table using all the dimensions. You should choose multiple access tables wherever possible; in general this is much easier to understand and maintain. You should only use a multi-dimensional access table when access to items in one dimension cannot be defined without reference to the items of another dimension.
Conflicting access tables are not allowed; that is, you cannot apply multiple access tables to one cube using the same dimension. For example, if you have applied an access table for the dimension Months to a cube, you cannot apply another access table using Months, nor one that uses Months and Versions, and so on. After choosing the dimensions for an access table, the cubes to which the access table can be applied are limited to those that contain the chosen dimensions.
Changes to Access Tables and Saved Selections and the Effect on Reconciliation
When access tables are changed, the system determines whether there is any impact on the pattern
of No Data cells in a model.
If an impact is determined, all e.List items are reconciled. Access tables can be changed indirectly
by changing a saved selection used by an access table, or by making certain changes to the e.List if
the e.List is used in a saved selection used by an access table. Note that changes made to other dimensions may impact access tables via saved selections, but these changes are introduced via synchronize, which always requires full reconciliation.
The cases where changes to the e.List affect saved selections are:
● A saved selection on the e.List uses Filter and existing contribution items are renamed in the e.List.
● A saved selection on the e.List uses Enlarge (of a review e.List item) and existing contribution
items are moved between review items in the e.List.
This only applies to e.List items shared by the development application and the current production
application. So, the addition or deletion of e.List items in itself will not require reconciliation of all
e.List items.
● Import may target Hidden or Read-only cells. All valid data present in an ASCII file is imported into a cube. To limit the selection of cells targeted, you should cut down the ASCII file to contain only the required data. Data in the source ASCII file that does not match an item in a cube is not imported and is reported as an error. Import cannot target formula cells in a cube. Import data should not target No Data cells.
● Published data can include cells that are marked as Hidden, as well as Read-only or Writeable.
Force to Zero
In Analyst, the calculation option Force to Zero forces calculations in other dimensions to return
zero. Contributor interprets this option differently, effectively as Force to No Data. This can cause
items to disappear from the Contributor grid.
If you do not want such items to disappear in Contributor, you should remove the Force to Zero
setting in Analyst.
Cut-down Models
Cut-down models are customized copies of the master Contributor model definition that have been
cut-down to include only the specific elements required for a particular e.List item. Note that the
e.List is also cut-down.
Cut-down models can reduce substantially the size of the model that the Web client has to download
when there are large dimensions containing hundreds or thousands of items, of which only a few
are required for each planner.
However, the cut-down model process significantly increases the amount of time it takes to run the
Go to Production process.
The process of creating the cut-down model for a particular e.List item is as follows:
● Items which are entirely No Data for all cubes in the model are identified.
● Items are removed from dimensions where possible. For more information, see "Restrictions to Cutting Down Dimensions" (p. 137).
This happens when changes have been saved to the Contributor model, one of the cut-down model options has been selected, and Go to Production is run.
Cut-down models are not created if no changes have been saved to the Contributor model. A change
is any action where you press the Save button. A notable exception to this rule is changes to existing
translations. If the only change you make is to an existing translation, the cut-down model process
will not be triggered.
Importing data does not change the Contributor model. This means that you can import data and
run the Go to Production process without causing the cut-down model process to be triggered.
When changes are saved to the Contributor model, the package GUID in the model (a unique
identifier that is used to reference objects) is also changed, causing cut-down models jobs to be
created. If no changes have been made, the GUID does not change so there is no need for cut-down
models to be run.
Limitations
The cut-down model process can cause the runtime load on the server to be adversely affected.
Without cut-down models there is a single model definition which can be cached in memory on the
server, reducing the number of calls to the datastore. When cut-down model definitions are used
there are too many of them to cache in memory on the server. As a result, the particular model
definition must be retrieved from the datastore every time.
Even if cut-down models are not being used, the same process of cutting down the dimensions
happens anyway when the model definition is loaded. The benefits of using No Data access settings
to reduce memory requirements and decrease block size apply regardless of whether cut-down
models are being used. See "Restrictions to Cutting Down Dimensions" (p. 137) for more information.
Create Cut-down Model Definition for Each Aggregate e.List Item (Review Level Model Definition)
In review-level model definitions, separate model definitions are produced for each review e.List
item and its immediate children. In this case all contribution e.List items below a particular review
item use the same model definition.
Because review-level model definitions require considerably fewer model definitions, they take less
time to produce or recreate. They should be used when the selections are not small subsets of those
required at parent level, or when it would take too long to produce or recreate the planner-specific
model definitions - typically with e.Lists with thousands of items. This option is a compromise
between no cut-down models and fully cut-down models.
The following model definitions are produced:
● One for each review e.List item with its immediate children (extra model definitions for the individual review e.List items are not required).
individual review e.List items are not required).
● One for each multi-e.List item "my contributions" view (where a planner has responsibility for
multiple e.List items). Model definitions are not produced where a planner owns all the children
of a particular review e.List item. The review and children model definition will be used instead.
The benefit of creating a cut-down model definition for every e.List item is that performance is
optimized for each planner. But it may take some time to produce or recreate the model definitions.
This option should be used when the appropriate selections for the children of one review e.List
item are small subsets of the selections required for the parent review e.List item.
Access tables must be carefully considered when setting up cut-down models. Potentially, the
overhead in terms of model size and memory usage for using access tables can be higher than the
benefit gained from using cut-down models.
When models are large, you should use access tables along with cut-down models so that the size
of the model to be downloaded to each client is reduced.
Cut-down model options are set in the Application Options window, see "Change Application
Options" (p. 72).
The following items are not removed when dimensions are cut down:
● The data dimension of the source cube for an accumulation link (that is, the dimension which contains D-List format items that are treated as if they were dimensions of the source cube).
● The data dimension of the target cube for a lookup link (that is, the dimension which contains D-List format items that are treated as dimensions of the target cube).
● Items that are the weighting for a weighted average, unless the average is also removed.
The level of cut-down applied per dimension is the resolved level across all cubes. This is why it is
impossible to cut down a dimension that is used in both an assumption cube and a contribution
cube, because the entire dimension is required for the assumption cube. Where the same dimension
occurs in two or more contribution cubes with different access tables, it will only be cut-down to
remove items that are not required in any cube. As a result, there are cases where dimensions are
not cut-down as much as might be expected, resulting in greater memory usage. However, there
are ways in which to structure the model to avoid this situation:
Example 1
If you have a dimension that cannot be cut down because it is used by an assumption cube, you could create an identical dimension to substitute into the assumption cube, leaving the dimension in the other contribution cubes free to be cut down.
Example 2
If the assumption cube is causing the problem, an alternative is to add the e.List to the assumption cube and apply access settings to this cube so that the dimension can be cut down.
The model is an Employee Plan cube with an Employees dimension. Each cost center (CC) has 100
of its own employees with no access to other employees. You would use an access table to give each
CC write access to the appropriate 100 employees, with no data access to the rest.
With no cut-down models, each planner will receive a model definition including a 10,000-item
employee list, which is large in size (approximately 2.5 MB). Only one model definition needs to
be produced and updated.
With planner-specific model definitions, each planner’s model definition will contain only the
required 100 items from the Employees dimension (approximately 25KB). Each dimension item is
around 250 bytes. One hundred and eleven model definitions must be produced and updated.
With review-level model definitions, the dimension definition downloaded to each planner will contain 1,000 items. This element of the model definition is ten times larger than it needs to be, but still ten times smaller than the full version (approximately 250 KB). Eleven model definitions must be produced and updated.
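The model-definition sizes quoted for this example can be approximated with the figure of roughly 250 bytes per dimension item given earlier (a back-of-envelope sketch; the definition counts are taken from the example):

```python
# Back-of-envelope model definition sizes for the Employee Plan
# example, using ~250 bytes per dimension item (figure from the text).

BYTES_PER_ITEM = 250

options = {
    # option: (items in the Employees dimension per definition,
    #          number of model definitions to produce and update)
    "no cut-down":      (10_000, 1),
    "planner-specific": (100, 111),
    "review-level":     (1_000, 11),
}

for name, (items, definitions) in options.items():
    size_kb = items * BYTES_PER_ITEM / 1000
    print(f"{name}: {size_kb:g} KB per definition, {definitions} definitions")
```

The trade-off is visible in the numbers: smaller downloads per planner cost more definitions to produce and keep up to date.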
To decide which cut-down method to use, consider these factors:
● The application structure itself.
● The e.List hierarchy. For example, with review-level model definitions it may be sensible to reduce the number of contribution e.List items per review e.List item by introducing dummy review e.List items to reduce the size of the model definitions.
Chapter 8: Managing Data
The following types of data can be imported into and exported from Cognos 8 Planning - Contributor:
● Data in other Contributor applications and cubes: use administration links (p. 145) in the Administration Console, targeting the production or development applications.
● Text files and Contributor cubes: use local links in the Web client, targeting the production application. See the Cognos 8 Planning - Contributor Browser User Guide.
If you are moving data between Contributor cubes and not making model changes, use an administration link to move data into the Production application. The data is processed using an activate process, so you do not have to run Go to Production. Note that there is no option to back up the datastore when targeting the Production application. You can target only the development application if you are importing data from Cognos 8 Packages.
Administrators can also set up links that are run from a Web client session enabling Web client
users to move data between a cube in a source application and a cube in a target application. For
more information, see "Administration and System Links" (p. 142).
When you import data into a Contributor application, the data is first put into an import queue.
There are two import queues, one for the development version of an application and one for the
production version of an application. The import queues are independent of each other and contain
the data in import blocks that are applied to an e.List item during a reconcile job.
For each e.List item in an application, there is a model import block. The data from data imports, administration links, Analyst>Contributor links, or Contributor>Contributor links is placed there, ready to be moved into the cube by a reconcile job (p. 52). For links that target the development application, the reconcile job is created during the Go to Production process. For links that target the production application, an activate process creates the reconcile job.
Important: Be aware that if two reconcile jobs are run while users are working offline, the users
will be unable to bring the data online. See "Editor Lagging " (p. 251) for more information.
Because you can have multiple cube import blocks per cube, you can run administration links and
Analyst>Contributor links as well as import data concurrently.
Note that a model import block is represented by a row in the import queue table in the application
datastore. An individual cube import block cannot be seen in the datastore.
Note: The Get Data extension must be configured before you can create a system link. For more
information about configuring the Get Data extensions, see "Configure Client Extensions" (p. 303).
The differences between administration and system links are as follows:
● Administration links are run in the Contributor Administration Console, and via macros, by the administrator; system links are run on the Contributor Web client by the application user (but are created by the administrator in the Contributor Administration Console).
● Administration links are designed to move large amounts of data and can be scheduled; system links are designed to move small amounts of data on an ad hoc basis.
● Administration links are stored in the Content Manager datastore; system links are stored with the target application.
● When moving data between Contributor applications, administration links can contain multiple elements (sub-links), enabling a single link to have many cubes as the source and target; system links can contain only one element, and as a result only one source and target cube.
● Administration links can map an e.List dimension to a non-e.List dimension, enabling you to move data between applications that do not share an e.List; system links can only map an e.List to an e.List dimension.
● Administration links can run a link to a locked e.List item; system links cannot.
● When moving data between Contributor applications, administration links can be tuned for optimal performance; system links cannot.
● Administration links can be sourced from Cognos Packages; system links cannot.
Local Links
Local links allow Web client users to load data into the Contributor application from external data sources, and from the active Contributor grid. You create and run local links in the Web client using the Get Data client extension. For best performance, we recommend that users import into one e.List item at a time from external sources.
Local links are similar to system links, except for the following differences:
● Local links are created in the Web client, and not the Contributor Administration Console.
● Local links can be used to import data from external data sources.
● In local links, users can only import data from tabs in the active Contributor grid (system links
can import data from source cubes to which the user has no access rights).
Note: The Get Data extension "Configure Client Extensions" (p. 303) must be configured before
users can create a local link.
Web client users can also move data between cubes for one e.List item at a time, using system links
created by the administrator.
Administrators move data from one production application to another, or to development
applications, by using administration links. The administration link process uses the job system,
so it enables you to move large amounts of data. It is quicker to move data into the production
application if you have no model changes to make, because if you move data into the development
application, you must run Go to Production before the data is available to the Web client.
If you run more than one link to the same application, and the same cell is targeted, the most recent
value is returned.
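This most-recent-value behavior can be sketched as follows (Python is used only for illustration; the cell addresses and values are hypothetical and do not correspond to any Contributor API):

```python
# Illustrative sketch only: how overlapping link runs resolve in the target.
def apply_link(target_cells, link_values):
    """Write a link's values into the target cells. Cells the link does
    not touch are left alone, so where two links overlap, the most
    recently run link supplies the final value."""
    target_cells.update(link_values)
    return target_cells

cells = {}
apply_link(cells, {("Sales", "Jan"): 100, ("Sales", "Feb"): 200})  # first link
apply_link(cells, {("Sales", "Feb"): 250})                         # second link overlaps Feb
# ("Sales", "Feb") now holds 250, the value from the most recent link;
# ("Sales", "Jan") keeps 100.
```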
Administrators can also move smaller amounts of information from Analyst to Contributor using
Analyst > Contributor links. This process does not use the job system. We recommend that you use
the @SliceUpdate macro to split one large link (across the entire e.List) into smaller links that deal
with smaller numbers of e.List items at a time. A slice update sample is available on the Cognos
Global Customer Services Web site.
For more information, see the Cognos 8 Planning - Analyst User Guide.
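The slicing idea behind this recommendation can be sketched as follows (Python for illustration only; the actual @SliceUpdate macro syntax is not shown, and the e.List item names are hypothetical):

```python
def slice_elist(elist_items, slice_size):
    """Split a long e.List into smaller slices so that each link run
    deals with only a limited number of e.List items at a time."""
    return [elist_items[i:i + slice_size]
            for i in range(0, len(elist_items), slice_size)]

items = ["Region%d" % n for n in range(1, 11)]  # ten hypothetical e.List items
slices = slice_elist(items, 4)
# → three slices of 4, 4, and 2 items
```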
Cascaded Models
Using administration links, you can create several small models that contain a high level of detail,
targeted at regional managers, and roll them up into a larger application with less detail so that the
top executives see only the numbers that they are interested in.
For example, you can have America, Asia, and Europe models rolling up into a Corporate model.
144 Contributor
Chapter 8: Managing Data
Matrix Management
Using administration links, you can create models that allow data to roll up both on a regional and
departmental basis, with approvals from both organization structures.
For example, you can have a Company model where Human Resources reports into Country, and
this can be linked into a Corporate model where Country reports into Human Resources.
Enhanced Security
Administration links allow you to separate cubes into applications by purpose. For example, you
can have a sales forecasting application, a travel planning application, and a salary planning
application. This separation of duties can improve security maintenance. An application containing
a salary plan model may require many access tables to specify who can view the cube; you can
simplify maintenance by separating the access tables from the cube.
Administration Links
If you are an administrator, you can use administration links to copy data between Contributor
applications without having to publish data first. You can also use administration links to import
data from Cognos 8 data sources such as Oracle data stores, SQL Server data stores, or SAP BW.
If you are importing from Cognos 8 data sources, you must first create a Framework Manager model
and publish it as a package to Cognos Connection. For more information, see "Importing Data from
Cognos 8 Data Sources" (p. 161).
If you are importing data from a Contributor application, you must have sufficient access rights to
select applications as the source and target of a link. If you are importing from Cognos 8 data
sources, you must have the rights to select applications as the target of a link, and be able to access
the source package in Cognos Connection. For more information, see "Configuring Access to the
Contributor Administration Console" (p. 36). If you have appropriate rights, you can secure the
ability to create, edit, execute, delete, import, and export administration links. You can also secure
previously created administration links (administration link instances).
Because data can be moved around easily, you can create smaller applications. Smaller applications
can improve performance because shorter e.Lists have quicker reconciliation times. Additionally,
smaller applications usually do not need as many access tables and cut-down models, so the time
taken to run the Go to Production is reduced. You can tune administration links for optimal
performance. For more information, see "Tuning Administration Links" (p. 154).
For more information see "Cut-down Models and Access Tables" (p. 136).
The source application must be a production application, which means that Go to Production must
have been run at least once. Also, all e.List items in the source application must be reconciled;
otherwise, the link will not run. The target application can be either the production application or
the development application. An e.List must be defined. You can map an e.List dimension to a
non-e.List dimension to move data between applications that do not share an e.List. This is not
possible in a system link.
Administration links are similar to D-Links defined between Analyst D-Cubes, except that look-up
links and Fill, Add, and Subtract modes are not supported. Administration links are also similar
to system links. Administrators set up system links between applications that can be run on the
Web client by end users using the Get Data extension. System links must also have a source e.List
mapped to a target e.List. Unlike administration links, a single system link can run from only
one source cube in one application to one target cube in one application. You can, however, set up
multiple system links.
Administrators set up a series of elements that define sub-links from the production versions of
applications to either the development or production versions of target applications.
If the elements are grouped together into a single link so they can be run at the same time, you can
move data simultaneously between multiple applications. For example, you may want to move data
between the following applications:
● Sales > Profit and Loss
Administration links do not run unless one or more Job servers are monitoring the Planning Content
Store. For information about adding the Planning Content Store to a Job server, see "Add
Applications and Other Objects to a Job Server Cluster" (p. 55).
● When link element two is run, it targets the same e.List items, and overlaps some of the cells
targeted by link element one. A further two cube import blocks are created.
● When link element three is run, it also targets the same e.List items, and two cube import blocks
are created.
Any cells updated by link element three that overlap elements two and one take the value
from element three. Each e.List item in the target application can have only one Model Import
Block (MIB) per application state type; there can be one MIB for development and one for production.
The MIBs are stored in the import queue. Each MIB can hold many cube import blocks (CIBs).
CIBs are inserted into the MIB in chronological order; there is no specific precedence related to
the source of the data. Each link element has its own CIB, as does each Analyst link, plus another
CIB for the relational import.
Note: Where multiple link elements exist in a link, the CIBs for those links are in the order that
the link elements are defined in the link.
Link Mode
Administration links run in Substitute mode. This means that data in cells in the target area of the
D-Cube are replaced by the transferred data. If no data is found in the source for a particular cell,
the data in that cell is left unchanged.
If data is imported into a read-only cell that is a target of a D-Link, the D-Link will override the
current import value.
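Substitute mode can be sketched as follows (an illustrative Python sketch, not the product implementation; the cell names are hypothetical):

```python
def substitute(target, source):
    """Substitute mode: every cell present in the source replaces the
    corresponding target cell, while target cells with no source data
    are left unchanged."""
    result = dict(target)
    result.update(source)
    return result

target = {"Jan": 10, "Feb": 20, "Mar": 30}
source = {"Jan": 99}                 # no source data for Feb or Mar
result = substitute(target, source)
# → {"Jan": 99, "Feb": 20, "Mar": 30}
```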
Link Order
The order in which you run links is important. For example, if you run the Analyst>Contributor
link before the administration link, the Analyst>Contributor link is applied to the cube first.
However, if you run the same Analyst > Contributor Link again after the administration link, the
first Analyst > Contributor link is overwritten, and the second Analyst > Contributor link is run
after the administration link.
Analyst>Contributor links are activated automatically after every administration link if run using
the Analyst user interface rather than macros.
Making Changes to the Development Application After a Link or Import Data Process
Some changes to a development application may affect a link or import data in the following ways:
● If the cubes that you are importing data into have changed, you may have to re-create the source
files and go through the complete import data process, or re-create and re-execute links. If the
changes do not affect the cube that you are importing data into, you need only rerun Prepare
Import.
● Any changes made to the Access Tables, Saved Selections, or the e.List that result in a different
pattern of No Data cells for contribution e.List items that are common to both the development
and production applications result in the import queue being deleted.
● Creating access tables after prepare import has run causes the import queue to be deleted.
● If the items are mapped manually, the link must be updated or recreated after any changes to
dimension items.
Note: Any changes to the source file are not reflected in the target unless the administration link is
rerun.
2. Under the Administration Links pane, choose whether to add a link or edit an existing one:
If the link definition specifies an application that no longer exists, the Select Link Source/Target
dialog box appears. Select a different source application, target application, or both, and then
click OK.
If you chose an application with an incompatible model structure, a message appears indicating
that the selected application is invalid and that the editor is empty. Close the editor, click Edit,
and then select a different application. Type a brief description of the source and target of the
link element.
5. To tune administration link performance, click the Advanced button to adjust the number of
e.List items that load in a single batch for both the source and target.
The source application must be a production application. You can preview the dimensions of
a cube in the right pane.
9. Click Map to map source dimensions to a target dimension manually (p. 152), or click Map All
to map dimensions with the same name. You need at least one set of matching dimensions in
order to use the Map All feature.
The mapped dimension pairs now appear in the lower set of Map source to target dimensions
lists. A single line connects paired dimensions.
Tips:
● Double-click the connecting line (or either dimension) to confirm that the items in the
dimensions are mapped correctly.
● To edit the properties of a mapped dimension, click the source, target, or line between the
source and target dimension names, and then click Edit.
● To remove a map, click the map and click Clear. Clear all created maps by clicking Clear
All.
10. In the Additional Options window of the Administration Link-Element dialog box, you can
choose to include annotations or attached documents. Do one of the following:
11. Click Finish when you are done configuring the link element.
12. If you want to add a new element, click Yes. To return to the main Administration Links
window, click No.
Note: If you add a new element, it must match the source used in the original link.
Both actions save the current element. You can change the order in which the elements are run
using the arrow buttons. For more information, see "Order of Link Elements" (p. 146).
If you want to monitor the progress of an administration link, under Administration links, click
Monitor Links. For more information, see "Jobs" (p. 47).
Tip: If you receive an error message stating that the batch sizes are too large to load data, you
need to adjust the batch sizes. For more information, see "Tuning Administration Links" (p. 154).
To automate this process, see "Execute Administration Link" (p. 216).
Note: Applications defined in a link may no longer be available since the administrator last
created or modified the link. An application becomes invalid when the application ID is changed
2. Under the Administration Links pane, choose whether to add a link or edit an existing one:
If the link definition specifies an application or package that no longer exists, the Select Link
Source/Target dialog box appears. Select a different source package, target application, or both,
and then click OK.
If you chose a package with an incompatible model structure, a message appears indicating
that the selected package is invalid and that the editor is empty. Close the editor, click Edit,
and then select a different application. Type a brief description of the source and target of the
link element.
3. In the Administration Link Properties dialog box, enter or edit the name and description of the
link.
Both can have up to 250 characters. Link names must be unique and must not be empty, or
consist only of spaces.
4. In the Select a Cognos Package as the Link Source dialog box, browse for a Cognos Package
in Cognos Connection by clicking the ellipsis button.
If you select a package that was not published from Framework Manager, an error message
states that the package cannot be used as a source for an administration link because it was
not published from Framework Manager.
6. Select the available Query Items in the Query Subject and move them to the Selected Query
Items pane.
Select the Display preview of selected query item check box to preview the Query Items. The
preview option works only with Query Items that have not been selected, and helps you select
the correct Query Items.
8. In the Administration Link-Element dialog box, select the target application and a target cube.
The target application must be a development application.
9. Click Map to map source dimensions to a target dimension manually (p. 152), or click Map All
to map dimensions with the same name. You need at least one set of matching dimensions in
order to use the Map All feature.
The mapped dimension pairs now appear in the lower set of Map source to target dimensions
lists. A single line connects paired dimensions.
Tips:
● Double-click the connecting line (or either dimension) to confirm that the items in the
dimensions are mapped correctly.
● To edit the properties of a mapped dimension, click the source, target, or line between the
source and target dimension names, and then click Edit.
● To remove a map, click the map and click Clear. Clear all created maps by clicking Clear
All.
10. If you want to select the columns containing the data, click Mark Data.
Note: Mark Data is not available once you have mapped your data.
11. In the Administration Link - Element dialog box, click Next to pick unmapped source Query
Items and unmapped target dimension items.
12. In the Additional Options window of the Administration Link-Element dialog box, you can
choose to include annotations or attached documents. Do one of the following:
13. Click Finish when you are done configuring the link element.
14. If you want to add a new element, click Yes. To return to the main Administration Links
window, click No.
Note: If you add a new element, it must match the source used in the original link.
Both actions save the current element. You can change the order in which the elements are run
using the arrow buttons. For more information, see "Order of Link Elements" (p. 146).
If you want to monitor the progress of an administration link, under Administration links, click
Monitor Links. For more information, see "Jobs" (p. 47).
To automate this process, see "Execute Administration Link" (p. 216).
Note: Applications defined in a link may no longer be available since the administrator last
created or modified the link. An application becomes invalid when the following occurs:
● The application ID is changed because the application was transferred from a development
environment to a production environment.
● When changing the package or target application, you chose a package with an incompatible
model structure.
Steps
1. Click Map.
The Map Items dialog box appears. Any matching dimension items are highlighted.
If a source dimension does not map to any target dimension, it can be treated as an extra source
dimension. If the items in the source and target dimensions do not match, a manual map is
required. For example, if the source item is Jan-03 and the target item is 1-03, a manual map
is required.
If items are added to the source or target of a manually mapped link, you must manually update
the link to account for the new items so that the load runs correctly.
3. If you want to include calculated items (shown in bold), click Calculated items.
The Map Items dialog box closes and returns you to the Map Source to Target dialog box.
5. If some unmatched items remain in the Map Items dialog box, click Manually Map, select a
source dimension and target dimension, click Add, and click OK.
Note: This filter applies only to items that appear in the Dimension Items list. It does not affect
what is loaded into the target.
6. In the Filter box, type the character you want to filter with.
Only the items that begin with that character appear.
Tip: To remove the filter, delete the character in the Filter box.
The Select Substring dialog box appears with the longest item name in the dimension list.
When you use a substring, all the items that match the substring are rolled up into one item.
For example, if you have dimension items named Budget 1, Budget 2, and Budget 3, and you
apply the substring BUD, all three items are rolled into one dimension item to be loaded into
the target dimension.
Note: Unlike filtering by characters, using a substring applies to what is included in the load
as well as what is viewed in the Dimension Items list. You can use a substring when mapping
dimensions manually or automatically.
8. Click in the Substring box to place bars at the beginning and end of the substring. If the substring
appears at the front of the string, place a single bar at the end of the substring.
To remove the bar, right-click it.
9. Click OK. The dimension items are now filtered by the number of characters you selected.
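The substring roll-up described above can be sketched as follows (Python for illustration only; this sketch assumes rolled-up values are summed, and the item names are taken from the example in the text):

```python
def roll_up_by_substring(values_by_item, start, end):
    """Roll all items whose names share the same substring (selected by
    character positions, like the bars placed in the Substring box) into
    one item. This sketch assumes the rolled-up values are summed."""
    rolled = {}
    for item, value in values_by_item.items():
        key = item[start:end]
        rolled[key] = rolled.get(key, 0) + value
    return rolled

data = {"Budget 1": 100, "Budget 2": 200, "Budget 3": 300}
rolled = roll_up_by_substring(data, 0, 3)
# → {"Bud": 600}: all three items roll into a single item for the load
```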
Steps
1. Select either a source or target dimension.
Remove a Dimension
You can remove a selected dimension.
Steps
1. In the Map Source to Target dialog box, click the source dimension you want to remove.
❑ Back up and remove the Contributor applications from the current Planning Content Store and
add them to the new Planning Content Store.
❑ Import the administration links into the new Planning Content Store.
Steps
1. Click Administration Links, and Manage Links.
3. Select the name and location, and click Save. The administration link is saved with a .cal
extension.
4. To import an administration link, click the Import button, and select the administration link
file.
It is given an edit status of UNKNOWN. Check the administration link to ensure that the
source and target cubes are available, and all dimensions are mapped.
Note: When importing an administration link created using Cognos Planning version 7.3 SP3
or earlier, the source and target batch size setting is 1, which loads one target or source e.List
item into a batch. This was the default behavior of previous versions of Cognos Planning. For
more information, see "Tuning Administration Links" (p. 154).
Note: You cannot tune administration links that use Cognos Packages as their source. This is because
Cognos Packages do not load from e.List items.
A batch is a set of data to be transferred and can include data from more than one e.List item.
A batch can also target multiple e.List items.
Important: When changes occur to a model, evaluate whether you need to retune the
administration link.
● incorporating that data into the target application via a reconcile job
To determine whether tuning the administration link will be beneficial, compare the time it takes
to move data with the time spent on reconciliation.
You can see the inter_app_links job in the Monitor links window, and the reconcile job in the Job
Management window of the target application.
Tip: If you are running multiple administration links that target the same application, consider
targeting the development application and running Go to Production. This means that reconciliation
is run once instead of multiple times. Alternatively, instead of having multiple links, you can have
multiple link elements from different applications in the same link targeting the production
application. In this case, reconciliation is run only once.
Types of Link
Where the source and target applications share the same e.List, and each source e.List item is mapped
to its matching target e.List item, the link has a one-to-one relationship. The amount of effort
required to run this link is determined by the number of mappings between e.List items.
Links that have a single source e.List item targeting multiple e.List items have a one-to-many
relationship. The effort required to run this link is determined by the number of target e.List items.
Links where many source e.List items target a single e.List item have a many-to-one relationship.
The effort required to run this kind of link is determined by the number of source e.List items.
Links where multiple source e.List items are mapped to multiple target e.List items have a
many-to-many relationship. These links typically need a lot of processing power, because the effort
needed to run them is calculated as the number of source e.List items multiplied by the number of
target e.List items. You may get the most benefit from tuning these links.
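The relative effort of each link type can be sketched as follows (an illustrative Python sketch based on the rules above; the unit of effort is arbitrary):

```python
def link_effort(link_type, n_sources, n_targets):
    """Rough relative effort of running a link, following the rules in
    the text: one-to-one scales with the number of mappings, one-to-many
    with the targets, many-to-one with the sources, and many-to-many
    with sources multiplied by targets. Units are arbitrary."""
    if link_type == "one-to-one":
        return min(n_sources, n_targets)   # one mapping per e.List item pair
    if link_type == "one-to-many":
        return n_targets
    if link_type == "many-to-one":
        return n_sources
    if link_type == "many-to-many":
        return n_sources * n_targets
    raise ValueError("unknown link type: %s" % link_type)

# A many-to-many link over 50 source and 50 target e.List items costs
# 2500 units in this sketch, which is why such links benefit most from tuning.
```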
Number of Processors
The number of processors and the amount of available RAM directly affect performance.
If any of the servers in the job server cluster have more than 4 CPUs available, we recommend
that you increase the Job Item Count multiplier per machine setting in the
epInterAppLinkResources.xml file (<install_location>\cognos\c8\bin). The default setting is 4 CPUs
per job server. However, having fewer than 4 CPUs does not negatively affect performance.
The file is installed as read-only. We recommend that you back up the file and then reset the
read-only flag to writable so that you can change the CPU number. You must make the same
change to the file on all servers in the cluster.
Model Changes
Changes to the model affect how the administration link performs. If you tune an administration
link and it shows improved performance and then a change occurs in the model, the optimization
may become invalid. This is because the change can affect the overall shape of the administration
link (e.List length, cube size, and so on) that the tuning was based on.
Steps
1. While the administration link runs, monitor the memory utilization on the least powerful server
in the job server cluster on which the administration links run.
2. Adjust the batch size for both the source and target and rerun the administration link. We
suggest that you increase the source batch size where possible before increasing the target batch
size.
● For the source, if there are 150 source e.List items, try entering 75. If that does not work,
try 50 and so on.
● For the target, divide the number of e.List items by the number of physical processors
multiplied by 2.
The values for Limit To must be whole numbers greater than zero for the tuning settings to be
valid.
3. Monitor the memory utilization on the same server to see if it has improved.
4. If not, adjust the numbers and run the administration link again.
Note: When importing an administration link created using Cognos Planning version 7.3 SP3 or
earlier, the source and target batch size setting is 1, which loads only one source e.List item at a
time. This was the behavior of previous versions of Cognos Planning.
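The batch-size starting points suggested in the steps above can be sketched as follows (Python for illustration only; the halving rule and the processors-times-two rule come from the text, and the results are only starting points for further tuning):

```python
def suggested_batch_sizes(n_source_items, n_target_items, n_processors):
    """Starting points for batch-size tuning: try half the number of
    source e.List items (150 -> 75, then 50, and so on), and divide the
    number of target e.List items by (physical processors x 2).
    Limit To values must be whole numbers greater than zero."""
    source = max(1, n_source_items // 2)
    target = max(1, n_target_items // (n_processors * 2))
    return source, target

src, tgt = suggested_batch_sizes(150, 200, 4)
# → source batch size 75, target batch size 25
```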
Steps
1. In the Create New Link dialog box, click Advanced.
2. If you want to load all e.List items at once, ensure that No Limit is selected.
3. If you want to divide your loads into batches, type a number into the Limit To box.
Note: The values in the Limit To box must be whole numbers greater than zero for the tuning
settings to be valid.
4. If the performance is acceptable, leave the Source Batch Size as no limit. If you get errors, reduce
the size.
Steps
1. In the Create New Link dialog box, click Advanced.
3. If you want to divide your loads into batches, enter a number into the Limit to box.
4. Click OK.
You now need to configure the link element (p. 148).
When a previously created administration link is imported, the source and target batch size is set
to 1, which loads one source and target item into a batch for processing. We recommend that you
change the source batch size to No Limit, which is the default value for any newly created
administration link. By adjusting this setting you should see performance gains. You can then try
to adjust the batch size settings to further improve performance.
We recommend that you increase the Remote Call Time-out in Seconds setting to 7200 seconds in
the epAdminLinksResources.xml file, located at <install_location>\cognos\c8\bin. The file is installed
as read-only. We recommend that you back up the file and reset the read-only flag to writable.
After changing this setting, the Planning service needs to be stopped and restarted on the machine
that is building or modifying the link.
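Backing up the file and clearing its read-only flag can be sketched as follows (an illustrative Python sketch; the path is an example, and the Remote Call Time-out value itself must still be edited in the XML by hand):

```python
import os
import shutil
import stat

def make_config_writable(path):
    """Back up a read-only configuration file, such as
    epAdminLinksResources.xml, and clear its read-only flag so that the
    setting inside it can be edited. The Planning service still needs to
    be stopped and restarted after the XML is edited."""
    shutil.copy2(path, path + ".bak")                      # keep a backup first
    os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)  # clear read-only

# Example (hypothetical install location):
# make_config_writable(r"C:\cognos\c8\bin\epAdminLinksResources.xml")
```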
System Links
Administrators can set up links that are run from a Web client session so that Web client users can
move data from a cube in a source application to a cube in a target application. A system link
is a pull link, rather than a push link.
A system link can target hidden, read-only, and writable cells.
System links move data from one source cube in one application to one target cube in one application.
System links are stored with the application, whereas administration links are stored in a separate
datastore. The target for system links must be in the production version of the application, whereas
the target for administration links can be in the production or development version of the application.
You cannot map an e.List dimension to an ordinary dimension in a system link, unlike in an
administration link. This is for performance reasons: if many e.List items must be loaded for a link,
this can take a lot of resources. An administration link can run across job servers and is scalable,
so resources are usually not a problem. A system link, however, runs on the Web client computer
and is not scalable. If you must map an e.List dimension to an ordinary dimension, use an
administration link. A target e.List item can have only one source e.List item mapped to it, but one
source e.List item can be mapped to many target e.List items.
To create a link, administrators must be granted the access rights System link as source and System
link as target for the relevant applications. In addition, the Admin options setting Act as system
link source must be set to Yes for source applications; otherwise, you can still create links using
this source, but the Web user cannot run the link. For more information, see "Admin
Options" (p. 77). You assign the link to an e.List item in the target application.
To run the link, the user must have write access to the e.List item that the link is assigned to. They
do not require rights to the source cube. Links are executed on the client computer through the Get
Data client extension. For more information, see "Configure Client Extensions" (p. 303). They can
be run only from the target application. Client users cannot edit system links.
If the Get Data extension is configured and enabled for a user, group, or role, that user, group, or
role has rights to run system links and local links.
The Go to Production process does not have to be run after you set up a system link.
The history of system link actions is stored as an annotation for cubes and targeted e.List items, if
enabled. When a system link is run, a new annotation is created for that link in the open e.List item.
If the link is executed again by the same user or another user, the same annotation is updated. In
addition, a separate history dialog box shows all history related to the links that apply to the open
e.List items.
● When changing the source or target application, you chose an application with an incompatible
model structure.
To use an application as a source for a System Link, you must first set Act as system link source in
Admin Options to Yes. For more information, see "Admin Options" (p. 77).
You can copy commentary, such as file attachments or user annotations, between applications using
system links.
Steps
1. Click Production, System links.
3. If the link definition specifies an application that no longer exists, the Select Link Source/Target
dialog box appears. If this happens, select a different source application, target application, or
both, and then click OK.
If you chose an application with an incompatible model structure, a message appears indicating
that the selected application is invalid and that the editor is empty. Close the editor, click Edit,
and then select a different application.
7. Map the source dimensions to the target dimensions manually (p. 152), or click Map All to map
dimensions with the same name.
You must have at least one set of matching dimensions to use Map All.
The mapped dimension pairs move to the fields below, and a line connects the two. This line
signifies that these dimensions are a matched pair.
Note: The Substring option is unavailable for system links on the e.List dimensions. A system link
cannot have multiple sources or multiple targets because of the potentially large number of nodes
that would have to be downloaded to the client in order to execute the system link.
8. In the Additional Options window of the Administration Link-Element dialog box, you can
choose to include annotations or attached documents. Do one of the following:
Steps
1. Open the e.List item and click the take ownership button if necessary.
3. Click the System Links tab, select the link, and click Run.
Tip: If the Get Data extension is reset, all settings and data are lost and all System Links created
for the application are deleted.
❑ In Framework Manager, create a new project and import the metadata into the project (p. 164).
❑ In Framework Manager, model the source. See the Framework Manager User Guide for more
information.
❑ Create and publish the Cognos package to Cognos Connection (p. 165).
Tip: You can create and schedule macros that run administration links.
You can also automate the import of Cognos packages using the @DListItemImportCognosPackage
macro.
You can include authentication information for the database in the data source connection by
creating a signon. Users need not enter database authentication information each time the connection
is used because the authentication information is encrypted and stored on the server. The signon
produced when you create a data source is available to the Everyone group. Later, you can modify
who can use the signon or create more signons.
Physical Connections
A data source defines the physical connection to a database. A data source connection specifies the
parameters needed to connect to a database, such as the location of the database and the timeout
duration.
Note: The schema name in the connection string for an Oracle database is case-sensitive. If the
schema name is typed incorrectly, you cannot run queries.
Required Permissions
Before creating data sources, you must have write permissions to the folder where you want to save
the data source and to the Cognos namespace. You must also have execute permissions for the Data
Source Connections secured feature.
5. On the name and description page, type a unique name for the connection and, if you want, a
description and screen tip, and then click Next.
6. In the connection page, click the type of database to which you want to connect, select an
isolation level, and then click Next.
Note: For Cognos Planning - Contributor 7.3 data sources, select Cognos Planning - Series 7.
For Cognos 8 Planning - Contributor data sources, select Cognos Planning - Contributor.
7. Enter any parameters that make up the connection string, and specify any other settings, such
as a signon or a timeout.
One of the following options may apply depending on the data source to which you are
connecting:
● If you are connecting to a Cognos cube, you must enter the full path and file name for the
cube. An example for a local cube is C:\cubes\Great Outdoors Company.mdc. An example
for a cube on your network is \\servername\cubes\Great Outdoors Company.mdc.
● If you are connecting to a password-protected PowerCube, click Cube Password, and then
type the password in the Password and Confirm Password boxes.
● If you are connecting to an ODBC data source, the connection string is generated from the
name you enter in the ODBC data source box and any signon information. The data source
name is an ODBC DSN that has already been set up. You can include additional connection
string parameters in the ODBC connect string box. These parameters are appended to the
generated connection string.
● If you are connecting to a Microsoft Analysis Services data source, select an option in the
Language box. If you selected Microsoft Analysis Services 2005, you must specify an instance
name in the Named instance box because you can have more than one instance on each server.
● If you use a Microsoft Active Directory namespace and you want to support single signon
with Microsoft SQL Server or Microsoft Analysis Server, select An External Namespace,
and select the Active Directory namespace. For more information about configuring an
Active Directory namespace, see the Cognos 8 Planning Installation and Configuration
Guide.
● If you selected Cognos Planning - Series 7, you must specify the Planning Administration
Domain ID and the namespace.
● If you selected Other Type as the data source type, you must build the connection string
manually.
Tip: To test whether parameters are correct, click Test. If prompted, type a user ID and password
or select a signon, and then click OK. If you are testing an ODBC connection to a User DSN,
you must be logged on as the creator of the DSN for the test to succeed.
8. Click Finish.
The data source appears in Data Source Connections on the Configuration tab, and can be
selected when using the Import wizard in Framework Manager.
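The ODBC case described above can be sketched as follows. This is a minimal illustration, not the product's actual code: the function and parameter names are invented, and the standard ODBC UID/PWD keywords stand in for whatever the signon supplies. Parameters from the ODBC connect string box are appended to the generated string.

```python
def build_odbc_connect_string(dsn, user=None, password=None, extra=""):
    """Assemble an ODBC connect string from a preconfigured DSN,
    optional signon credentials, and appended extra parameters.
    Illustrative sketch only; names are not part of the product."""
    parts = ["DSN=" + dsn]
    if user:
        parts.append("UID=" + user)
    if password:
        parts.append("PWD=" + password)
    s = ";".join(parts)
    if extra:
        # Extra parameters are appended to the generated string.
        s = s + ";" + extra.strip(";")
    return s

print(build_odbc_connect_string("GreatOutdoors", "admin", "secret",
                                "APP=Planning"))
# DSN=GreatOutdoors;UID=admin;PWD=secret;APP=Planning
```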
Steps
1. From the Windows Start menu, click Programs, Cognos 8, Framework Manager.
2. In the Framework Manager Welcome page, click Create a new project, and specify a name and
location.
You can add the new project to a source control repository. See the Framework Manager Help
for more information.
3. In the Select Language page, click the design language for the project.
You cannot change the language after you click OK, but you can add other languages.
Note: If an SAP BW server does not support the selected language, it uses the content locale
mapping in Cognos Configuration. If a mapping is not defined, Framework Manager uses the
default language of the SAP BW server.
6. Select the check boxes for the tables and query subjects you want to import.
Tip: For usability, create a package that exposes only what is required.
8. If you want to import system objects, select the Show System Objects check box, and then select
the system objects that you want to import.
Note: You save the project file (.cpf) and all related XML files in a single folder. When you
save a project with a different name or format, ensure that you save the project in a separate
folder.
2. In the Provide Name page, type the name for the package and, if you want, a description and
screen tip, and click Next.
3. Specify whether you are including objects from existing packages or from the project and then
specify which objects you want to include.
4. Choose whether to use the default access permissions for the package:
● To set the access permissions, click Next, specify who has access to the package, and click
Next.
You can add users, groups, or roles. See the Framework Manager User Guide for more
information.
5. Move the language to be included in the package to the Selected Languages box, and click
Next.
6. Move the sets of data source functions you want available in the package to the Selected function
sets box.
If the function set for your data source vendor is not available, make sure that it was added to
the project.
● To publish the package to the report server, click Cognos 8 Content Store. Click Public
Folders to publish the package to the public folder. You can create folders in the public
folder also. Click My Folders to create your own folder and publish the package to it.
4. To enable model versioning when publishing to the Cognos 8 Content Store, select the Enable
model versioning check box and type the number of model versions of the package to retain.
Tip: To delete all but the most recently published version on the server, select the Delete all
previous model versions check box.
5. If you want to externalize query subjects, select the Generate the files for externalized query
subjects check box.
6. By default, the package is verified for errors before it is published. If you do not want to verify
your model prior to publishing, clear the Verify the package before publishing check box.
7. Click Publish.
If you chose to externalize query subjects, Framework Manager lists which files were created.
8. Click Finish.
Tip: If you have OpenHub, you can use it to generate a text file or database table from SAP BW.
You can then manually create a Framework Manager model and Cognos Package from the tables
and then import the package into Planning using an Administration Link, D-Link, or D-List import.
For Cognos products to be able to access SAP BW as a data source, the user accounts used to connect
to SAP must have specific permissions. These permissions are required for the OLAP interface to
SAP BW and are therefore relevant to both reporting and planning activities.
For more information about guidelines for working with SAP BW data, see the Framework Manager
User Guide.
For more information about access permissions for modelling and reporting access, see the Cognos
8 Planning Installation and Configuration Guide.
For information about setting up your environment to work with SAP BW and Planning, see the
Cognos 8 Planning Installation and Configuration Guide.
Steps
1. In Framework Manager, click the Key Figures dimension.
2. From the Tools menu, click Create Detailed Fact Query Subject.
3. In the metadata wizard, select the data source you want to use.
You can create a new data source by clicking the New button and specifying SAP BW for
Planning as the type.
4. Click OK.
Framework Manager creates a model query subject named Detailed_Key_Figures and a separate
folder containing references to the relational objects. The references to the relational objects
are the physical layer.
Note: Packages that contain the Detailed_Key_Figures query subject are accessible to, and
supported by, the report authoring tools, such as Query Studio and Report Studio, only if the
query subject is hidden by doing the following:
● In the Define Objects screen, click the down arrow and choose Hide Component and
Children.
Recommendation - Hierarchy
These recommendations help improve performance when working with the SAP BW import
process.
● Use manageably sized dimensions when importing SAP BW data. This is because Planning relies
on lookups against the SAP BW hierarchies during the import process, so larger hierarchies
slow down the import process. This may require modelling in SAP BW since it is at a higher
level of detail than the Planning process requires.
● Where possible, take data from the lowest level in the BW hierarchies. This is because data is
taken from the fact table level and aggregated to the level selected in the Planning link. The
further up the hierarchy that members are mapped into Planning, the more aggregations must
be recreated during the import process. This may require modelling in SAP BW since
it is at a higher level of detail than the Planning process requires.
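The cost described in the second recommendation can be illustrated with a small sketch: fact-level rows are summed up to whichever hierarchy level the link maps to, so the higher the mapping, the more leaf rows must be folded into each target member. The member names and values here are invented for illustration.

```python
from collections import defaultdict

# Leaf-level fact rows: (leaf member, value). The rollup mapping says
# which higher-level member each leaf aggregates to. Both are made up.
facts = [("Store01", 100), ("Store02", 150), ("Store03", 80)]
rollup = {"Store01": "North", "Store02": "North", "Store03": "South"}

def aggregate(facts, rollup):
    """Sum leaf-level fact values up to the mapped hierarchy level."""
    totals = defaultdict(float)
    for leaf, value in facts:
        totals[rollup[leaf]] += value
    return dict(totals)

print(aggregate(facts, rollup))  # {'North': 250.0, 'South': 80.0}
```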
Make at least 2 MB of memory available on the installation location’s drive. If you still receive the
error, then make more memory available.
Export a Model
You can export a model structure, with or without data, to move between development, test, and
production environments or to send a model with or without data to Cognos Customer Support.
When you export a model, Cognos 8 Reports, Events, or Framework Manager Models associated
with the Planning Network are not exported.
The model structure and data are exported to the deployment directory location set in Cognos
Configuration.
You can back up an application by exporting it, but we do not recommend this as a substitute for
database backup.
Steps
1. From the Tools menu, click Deployment and then click one of the following:
● Export Model
4. Type a new name for the export, or choose a name from existing deployments and click Finish.
5. Click OK.
You can view the progress of the export in the Monitoring Console on the Planning Network
Transfer tab.
To transfer the deployment to a new environment, move the export folder from the source
deployment directory location to the deployment directory location for the target environment.
Compress the export folder to transfer your export to Cognos support.
Import a Model
You can import a model or object to move an application into a test or production environment.
Models for import must be in the deployment directory location set in Cognos Configuration.
You can import macros, administration links, applications, Analyst libraries, and security rights
from the source Planning Content Store that were exported during a previous deployment. You
can select exported objects for import or import an entire model. If a model was exported with
data, then the data will be used during an import.
Administration links and macros can be imported even if they reference an application that is not
in the target destination. If imported with a related application, macros and administration links
are automatically mapped to the target application.
Through the import process, you can change the target datastore and security for your model. The
Deployment wizard will attempt to map security settings for users, groups, and roles. If you are
using different namespaces or changing user, group, or role mappings, you may have to complete
some of the mapping manually.
The security settings for the source will be applied to the user, group, or role you map to. Source
users, groups, and roles can be mapped together or individually to any single target user, group, or
role. When mapping a number of users, groups, or roles, the target maintains the greatest level of
security privileges. Any unmapped items are mapped to Planning Rights Administrator and do not
appear individually as users, groups, or roles in the target.
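The greatest-level-of-privilege rule described above can be sketched as follows. The privilege ranking here is hypothetical (the actual Planning rights differ); the point is only that when several source users, groups, or roles map to one target, the target keeps the highest-ranked right among them.

```python
# Hypothetical privilege ranking, lowest to highest. The real
# Planning rights are named differently; this only shows the rule.
RANK = {"none": 0, "read": 1, "write": 2, "admin": 3}

def merged_privilege(source_rights):
    """When several source users, groups, or roles map to one
    target, the target keeps the greatest privilege among them."""
    return max(source_rights, key=lambda r: RANK[r])

print(merged_privilege(["read", "write", "none"]))  # write
```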
Application IDs and object names must be unique within a planning network. During the import
processes, if duplicate names or IDs are found, you are warned. If you proceed with the import
without changing names and IDs, then any existing applications or objects with common names
or IDs will be overwritten.
To import Contributor applications, you must have at least one configured datastore and the
Planning content store must be added to a job server. A datastore is not required to import Analyst
libraries, macros, or administration links.
Steps
1. From the Tools menu, click Deployment and then click Import.
3. In the Deployment Archive Name page, select a deployment to import and click Next.
4. In the Import Object Selection page, select the objects for import and click Next.
Selecting a top level object selects all the children of that object.
5. In the Namespace Mapping page, select the target namespace for each source namespace, and
click Next.
6. The User Group Role Mapping page contains a tab for each namespace mapping. For each
mapping, assign the correct target user, group, or role to each source by clicking the ellipsis
(…) button.
7. On the Select entries (Navigate) page, in the available entries directory, click the namespace
that contains the target user, group, or role.
8. From the selected entries, select the target user, group, or role and click OK.
9. Complete the user, group, or role mapping for each Namespace mapping. Once you have
completed mapping each source user, group, or role to the target, click Next.
10. For each application or library with a warning next to it in the Object Mapping page, click the
ellipsis (…) button to change the configuration settings.
11. On the Configuration settings page, type new names, IDs and locations of files, and click OK.
For an Oracle or DB2 datastore, you must identify tablespaces for data, indexes, blobs, and a
temporary tablespace.
12. To avoid overwriting macros or administration links, for each object with a warning next to
it in the Object Mapping page, type a new name for the target object directly into the target
column.
13. Optionally, if you are importing a model without data, select the option to automatically go
to production with all imported applications during the import process.
14. If you are overwriting objects, you will be prompted to confirm the import. To continue, click
Yes.
You can view the progress of the import in the Monitoring Console on the Deployments tab.
Once the transfer is complete, refresh the console and add any newly created applications to a job
server cluster; see "Add Applications and Other Objects to a Job Server Cluster" (p. 55).
Tip: During the import process, some application options are excluded from the transfer because
they do not apply to the new application location; for example, display names, backup location,
and publish options are excluded. If these options are required, they can be included by modifying
the AdminOptions to exclude during Limited transfer or AdminOptions to exclude
during Full transfer resource values in the <install_location>\bin\epPNHelperResource.xml
file.
Steps
1. From the Tools menu, click Deployment and then click View Status of Previous Exports and
Imports.
2. In the Welcome to the View Existing Deployment Wizard page, click Next.
The log of the deployment request appears. Errors and warnings are shown for failed requests.
Original
<Resource ID="Java command-line options"><![CDATA[ -Xms1024m -Xmx1024m ]]>
Modified
<Resource ID="Java command-line options"><![CDATA[ -Xms256m -Xmx256m ]]>
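A change like the one above can also be scripted. This sketch does a plain-text substitution on the resource value, assuming the -Xms and -Xmx options appear exactly as shown in the example; it is illustrative only, and you should back up the file before editing it.

```python
import re

def set_java_heap(text, xms, xmx):
    """Replace the -Xms/-Xmx values inside the Java command-line
    options resource with new sizes, for example '256m'. Assumes
    each option appears as a single whitespace-delimited token."""
    text = re.sub(r"-Xms\S+", "-Xms" + xms, text)
    return re.sub(r"-Xmx\S+", "-Xmx" + xmx, text)

original = ('<Resource ID="Java command-line options">'
            '<![CDATA[ -Xms1024m -Xmx1024m ]]>')
print(set_java_heap(original, "256m", "256m"))
```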
❑ Select the cube and the text file to load into the cube (p. 173).
One million rows per e.List item per cube is a good size limit.
Budget  Version 1  Asia Pacific Sales  Nov  613300  Communications: mobile phone         670
Budget  Version 1  Asia Pacific Sales  Nov  613500  Communications: telephone equipment  370
Budget  Version 1  Asia Pacific Sales  Nov  615100  Supplies: computer supplies          680
Budget  Version 1  Asia Pacific Sales  Nov  615300  Supplies: office supplies            300
Budget  Version 1  Asia Pacific Sales  Nov  615400  Supplies: fax & photocopier          350
Budget  Version 1  Asia Pacific Sales  Nov  615500  Supplies: catering                   1280
Use the preview in the Import Data Copy tab to check that you have the source data in the correct
format.
Select the Cube and Text File to Load into the Cube
The copy process copies the import data file to a file location on the administration server and
specifies the cube that the data is to be loaded into. You can also check that your source file is in
the format expected by the datastore. The administration server destination is specified in Admin
Options (p. 77), but should be modified only by a database administrator.
If your import files are large, it is quicker to manually copy the files to the administration server
destination. If you do this, you must follow the steps described below, but do not click Copy. As
soon as you have specified a valid file and location, the Administration Console registers which file
is to be loaded into a particular cube. You can do this for only one import file at a time.
Steps
1. Click Development, Import Data for the appropriate application.
2. On the Copy tab, in the Select cube box, click the cube to import into.
3. In the Select text file to load box, enter the text file and its location.
4. In preview, check that the order of columns in the text file matches the order expected by the
datastore.
The header row gives the names of the dimensions taken from the cube, and the final column
(importvalue) contains the value. The rows below the heading contain the data from the text
file.
5. If the data appears to be in the wrong columns, you should rearrange the column order in the
text file and repeat steps 1-5.
6. Unless you want to manually copy the files, click Copy and then repeat steps 2 to 5 until you
have selected all the required cube and text file pairings.
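The check in step 4 amounts to comparing the file's header row against the dimension order the datastore expects. A minimal sketch, assuming a tab-delimited file; the dimension names are invented, and the final importvalue column follows the layout described above.

```python
def check_column_order(header_row, expected_dimensions):
    """Compare an import file's header row against the dimension
    order the datastore expects; the final column holds the value.
    Assumes tab-delimited text (the real delimiter may differ)."""
    expected = expected_dimensions + ["importvalue"]
    columns = header_row.rstrip("\n").split("\t")
    return columns == expected

# Hypothetical cube dimensions, for illustration only.
header = "Budget\tVersions\te.List\tMonths\tAccounts\timportvalue\n"
cube_dims = ["Budget", "Versions", "e.List", "Months", "Accounts"]
print(check_column_order(header, cube_dims))  # True
```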
Steps
1. In the Import Data window for the appropriate application, click the Load tab.
2. Select the Load check box for each cube that you want to load data for.
Row Count indicates how many rows are currently in the import table from previously loaded
data.
3. Click Delete existing rows if you want previously loaded data to be removed. Otherwise,
where the names of previously loaded data match the newly loaded data, the old data is
replaced by the new data, and previously loaded data that is not matched remains in the
staging table.
4. Click Load.
The next step is to prepare the data (p. 174).
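The load behavior in step 3 amounts to a keyed merge into the staging table: rows whose names match are replaced by the new data, unmatched old rows remain, and Delete existing rows clears the table first. A sketch, with dictionaries standing in for the im_cubename staging table:

```python
def load_into_staging(staging, new_rows, delete_existing=False):
    """Merge newly loaded rows into the staging table. Rows are
    keyed by their dimension-name tuple (an illustrative stand-in
    for the real table key)."""
    if delete_existing:
        staging = {}
    else:
        staging = dict(staging)  # work on a copy
    staging.update(new_rows)     # matching keys replaced, rest kept
    return staging

old = {("Nov", "613300"): 670, ("Nov", "613500"): 370}
new = {("Nov", "613300"): 700}
print(load_into_staging(old, new))
# {('Nov', '613300'): 700, ('Nov', '613500'): 370}
print(load_into_staging(old, new, delete_existing=True))
# {('Nov', '613300'): 700}
```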
If you are importing a large file, you can run a test to check that the import file is valid and avoid
time-consuming problems. When you test, a prepare job is created for only the first e.List item for the
selected cube in the import table. Any errors are listed, such as extra dimensions, columns in the
wrong order, and invalid e.List items.
If you have more than one Planning Administration Console service, you must load data into Import
Tables (im_cubename) prior to running a Prepare Import job.
You can also automate the preparing of import files (p. 203).
Go to Production Process
If the Prepare Import process was not run, no data is imported when Go to Production is run. To
prepare import data blocks, you must cancel Go to Production and return to the Import window.
2. In the Prepare column, click the cubes that you want to test.
3. Click Test.
Any errors are listed in the Import errors pane and a prepare import job is created. You can
view the progress of this job in the Job Management window, or in the Monitoring
Console (p. 50). If the test is successful, you can run prepare for all the import data. This
overwrites test data.
2. In the Prepare column, click the cubes you are importing data into.
3. If you want import blocks created for all e.List items, and not just the e.List items that you are
importing data into, click the Zero Data option.
This zeros existing data in the cube before import takes place.
4. Click Display Row Counts to show the number of rows in the text file being imported.
5. Click Prepare.
If the Admin Option Display warning message on Zero Data is set to yes (p. 77), a warning
message is displayed if the Zero Data option is selected. This is to prevent accidental setting of
this option.
A prepare import job is created. You can view the progress of this job in the Job Management
window.
When you perform a Go To Production, the prepared data will be imported.
Chapter 9: Synchronizing an Application
You use synchronize to update all cubes in an application when the underlying objects in Cognos
8 Planning - Analyst change. Changes include renaming dimensions, and adding, deleting, or
renaming dimension items. When you synchronize an application, you re-import the definition of
the cubes in the application from Analyst. Synchronize also brings in new data for assumption
cubes (that is, cubes without the e.List).
Before making changes to an Analyst model, you should back up the library.
Synchronizing an application means that all e.List items will be reconciled (see
"Reconciliation" (p. 52)) after the Go to Production process is run. We recommend that you back
up the datastore before synchronizing.
See "How to Avoid Loss of Data" (p. 178) for more information.
If you are automating the synchronize process, when a destructive synchronize is detected, you can
choose to stop the process, or continue.
Synchronizing an Application
You can synchronize an application with Analyst to make sure all cubes that are shared with Analyst
are updated.
Steps
1. Select Development, Synchronize with Analyst for the appropriate application. A check is made
to see whether you are logged on with appropriate rights. If you are not, you are prompted to
log on via the Cognos Common Logon.
2. If the library or library name has changed, enter or browse for the new name in Library.
The Administration Console checks to see if the e.List selected when creating the application
still exists in the library. If it does not, a warning message appears.
Generate Scripts
If the Generate Scripts option is set to Yes in Admin Options ("Generate Scripts" (p. 178)), a check
is made to see if the datastore needs to be restructured, for example, if columns in tables must be
added or deleted. If it does, synchronize generates a script, Synchronize script.sql, when synchronize
is run. This script must be run by a database administrator; it changes the columns in the
ie_cubename table in the same way that synchronize does.
An alternative method is to run a Contributor > Contributor link using Analyst. This is a simpler
process, but should only be used on applications with a small e.List.
Data is moved from the Production version of a Contributor source into the Development version
of the Contributor target via Analyst. Once this is complete, you must run the Go to Production
process.
5. Transfer the data from the ev_cubename view, using a tool such as DTS, depending on your
datastore. Remember to reorder/change the columns as required.
Example Synchronization
In the example shown below, in the first item under Restructured Cubes, the order of the dimensions
has changed. The e.List dimension has moved from fourth to second. The preview shows the new
order of the dimensions and the old order.
If you look at the expanded Product Price and Cost cube, you can see that the dimension e.List was
added to it, and in Compensation Assumptions, the dimension e.List was removed.
Under Dimensions, you can see that a dimension item (18V Trim Saw Drill/Driver Kit) was deleted
from Indoor and Outdoor Products.
Click Advanced to see more detailed information. This provides a detailed description of the
differences between the previous Analyst model and the current model. It lists the cubes and
dimensions that have changed and when you click an item, a detailed breakdown of the changes is
provided. Typically, this information is used for debugging purposes.
Steps
1. Click the Advanced button on the Synchronize window, or during Go to Production, by clicking
the Advanced button on the Model Changes window.
2. In the empty box at the bottom of the window, enter a file location and name and click Save.
If you compare the information that you see in the Synchronize preview with the information
that is displayed by clicking Advanced, you will see that in the Model Changes window one extra
cube (Expenses) is listed, along with an extra section named Common Links. Common Links
contains details of a D-Link that was changed as a result of the changes to the Compensation
Assumptions cube. The Expenses cube is listed under common cubes because it is the target of
the changed link.
Synchronize Preview
Advanced--Model Changes
Chapter 10: Translating Applications into Different
Languages
The Translation branch enables you to translate, from an existing Cognos 8 Planning - Contributor
application, the strings that will appear in the Web client. The translated strings are held in the
Contributor datastore along with the rest of the application and are streamed to the Web client
when the users connect to the application. In addition to creating new language translations, you
can create custom text translations.
There are three roles involved in the translation cycle:
● The model builder who creates the Cognos 8 Planning - Analyst model using Analyst.
● The administrator who uses the Administration Console to create and manage the Contributor
application.
If a translation is upgraded from Cognos Planning version 7.3 or earlier, there may be
additional product strings or incompatible strings that require translation. New product strings
and incompatible strings introduced during an upgrade are not automatically filled with the base
language.
Tip: Client extensions must be configured and tested before starting a translation.
When changes, including renaming dimensions or adding dimension links, are made to the Analyst
model, the Contributor application must be synchronized and go to production must be run to
incorporate the changes into the application. Any changes to the Analyst model that the Contributor
application is based on may require changes to the translation.
When the translation is opened in the Translation branch, changes to the base strings are displayed;
however, you cannot see which strings have changed. To see which base strings have changed,
export the translation from the Content Language tab both before and after synchronization of
the application, giving the translations different names. You can then compare the translations
and see what has changed.
If a user selects a preferred product language that is not a Cognos 8 Planning tier 1 language
and no translation exists, the Contributor Web client will use the model base language for
content strings and the application base language for product strings.
The application base language is configured in the Contributor Administration Console, Admin
Option, for each application.
Steps
1. In the Contributor Administration Console, expand the application to be translated so that
you can see the Translations branch.
The Create New Translation dialog box appears. The system locale tells you which bitmap
fonts and code pages are defaults for the system that the Administration Console is running
on. This should be the same as the Translation Locale; otherwise, the translation may not display
properly.
4. Select By user specified preference for Product Language, or By User, Group or Role.
Use By user specified preference for Product Language if you are creating a translation that
uses one of the supported locales. Users who have this language specified in their properties
get the translated version of the application, unless they are members of a group or role that
is assigned to a different translation.
Use By User, Group or Role to create a translation in a language that applies only to a specified
user, group, or role.
6. Select the required namespace and then the necessary users, groups, or roles and click the add
button .
7. Click OK.
9. Select the translation base language from English, French, German, Japanese, and Swedish.
11. Open the new translation. To do this, click the name of the translation under Translations.
You can now begin translation.
Note: When you create a translation with Japanese as the base language, the content strings
are not translated. Analyst does not support Japanese characters. To use Japanese in the
Contributor Web Client, you must translate the content strings.
● Product language relates to generic strings such as button names, status bar text, error messages,
and menu and menu item names. Product Language base strings for a language will be the
same for all models.
The Content Language and Product Language tabs separate the translation items into categories.
The total category on the Product Language tab is used when a multi-e.List view is displayed for
all contributions. There are a number of categories that appear on the Product Language tab if
client extensions are installed. These allow you to translate the buttons, wizards and any messages
that the user might see.
The strings are color coded to indicate the status of the string. The colored squares in the Category
column have the following meanings:
● Blue - the translated string cell has not changed in this session.
If strings on the Content Language tab are left blank, they default to the base string in the Web
application. If strings on the Product Language tab are left blank, they appear blank in the Web
application.
If you do not have the correct system locale set on your computer, we recommend that you export
the file in .xls format, and use Excel to translate the strings. This ensures that the fonts appear
correctly when you are translating. For more information, see "Export Files for Translation" (p. 188).
In Product Language, some of the translatable strings contain parameters that must not be changed,
for example:
%1:currently annotating user% has been annotating %2:e.List
item% since %3:edit start time%.\r\nIf you continue and annotate
then %1% will be unable to save any changes.\r\nDo you still wish
to annotate %2%?
● %2:e.List item%
You cannot add, remove, or edit parameters. However, they can be moved or repeated within the
translation string.
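The parameter rule above can be checked mechanically by extracting the %n:description% tokens from the base and translated strings and comparing the sets. The regular expression below is an assumption based on the token syntax shown in the example, covering both the long form %1:name% and the short form %1%.

```python
import re

def parameter_set(s):
    """Collect the parameter numbers used in a string; a parameter
    looks like %1:description% or the short form %1%."""
    return set(re.findall(r"%(\d+)(?::[^%]*)?%", s))

def parameters_ok(base, translated):
    """Translations may move or repeat parameters but must not
    add, remove, or renumber them."""
    return parameter_set(base) == parameter_set(translated)

base = "%1:currently annotating user% has been annotating %2:e.List item%"
print(parameters_ok(base, "%2% is being annotated by %1%"))  # True
print(parameters_ok(base, "annotated by %1%"))               # False
```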
These are some of the formatting codes that are used:
\r carriage return
\n new line
\t tab
For example:
Unable to create email :-\r\n\tTo: %1:to%\r\n\tCc: %2:cc%\r\n\tSubject:
%3:subject%\r\n\tBody:
%4:body%
Text in message boxes wraps automatically so it is not always necessary to use formatting codes.
Steps
1. In the Translations branch, click the name of the translation.
3. Enter the new strings directly in the Translated string column or into the Edit window. You
can choose to:
● Populate the empty column with the text in the Base string column and then edit the text.
To do this, right-click in the column and click Copy base to translated.
● Click on the first base string that you are going to translate. This will appear in the left
pane of the Edit window. Enter or edit the translated text in the right pane. Populate any
remaining blank cells with base string text if this is needed. To do this, right-click in the
column and click Fill empty with base.
You can clear strings from the Translated string column by right-clicking and selecting Clear
translated.
● xls
Define an Excel worksheet (import only).
● xml
● other
Define a custom format.
Files that you import must match the format expected by the Administration Console. The best
way to ensure this is to first export a file in the format that you will be editing in. After you have
completed your translation, import the changed file.
Where:
Context  Groups the parts of the application into sections, for example, Model,
         D-Cube, Hierarchy. This should not be changed.
Base     The string to be translated, in the language in which the model was first
         created. This should not be changed.
Steps
1. Open the translation and click either the Content Language tab or the Product Language tab.
Only one tab can be exported at a time.
2. Click Export.
4. Select the required file format. If you click Other, you must enter a custom delimiter, for example ";".
5. Select Include column headers if you want to include a header row. This is useful for files that
you may be opening in a tool such as Excel. It lets you see which column contains the translated
strings.
If you are modifying the export file to import it into the Administration Console, you should
edit only the text in the Translation column. If you edit the text in the String ID, Category, or
Base columns, you will have problems importing the file.
6. Click OK.
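The round trip described above can be sketched with a short Python fragment that edits only the Translation column of a small in-memory stand-in for an export file. The ";" delimiter, the header row, and the column names (String ID, Category, Base, Translation) follow the export format described above; the row values and translations are invented for the example.

```python
import csv
import io

# A tiny stand-in for an exported file (Other format, delimiter ";",
# Include column headers selected).
exported = io.StringIO(
    "String ID;Category;Base;Translation\n"
    "1001;Model;Save;\n"
    "1002;Model;Cancel;Annuler\n"
)

glossary = {"Save": "Enregistrer"}  # illustrative translations

reader = csv.DictReader(exported, delimiter=";")
rows = list(reader)
for row in rows:
    # Edit only the Translation column; changing String ID, Category,
    # or Base would cause problems when the file is imported.
    if not row["Translation"]:
        row["Translation"] = glossary.get(row["Base"], row["Base"])

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=reader.fieldnames, delimiter=";")
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

Working on a real export file is the same pattern with `open()` in place of the `StringIO` buffers.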
Steps
1. Open the translation that you are going to import the translated file into.
3. Click Import.
6. Enter the file name and location in the File name box and click OK.
7. Click Save.
Steps
1. Click Find.
Find what: Enter the text string. Previous searches are saved and you can select them from the list.
Search: Select whether you want to search All Categories or Selected Rows.
Direction: Select the direction for the search, either Up or Down (default).
Translating Help
You can translate Contributor help text in the same way that you translate other strings.
Cube help is located on the Content Language tab under Cube help for <cube name>. The first row
contains simple cube help; the second row contains detailed cube help, which may include HTML
tags. These tags can be modified.
Instructions are also located on the Content Language tab under Application Headline and
Instructions. The top row contains the headline and the bottom row contains the instructions.
From the Contributor Administration Console, it is not possible to translate the default help supplied
with the Contributor Web site.
About Fonts
A font has both a name and a set of charsets that it supports. Some fonts support a variety of
charsets from the Unicode range and some do not. Some, such as MS Sans Serif, are not Unicode
fonts and support only the Western code pages.
Japanese has limited support from the set of fonts that are installed by default on a US/English
version of Windows 2000/XP. None of the standard fonts supplied with Windows 2000 support
the Japanese charset.
Once you install the Japanese language pack, however, standard Japanese fonts are installed on
your system (Japanese fonts all have unique names).
For Korean or Japanese translations to appear correctly in the Contributor Web client, you must
set them as the default character set for the operating system.
Tip: You can install fonts for East Asian languages through the Regional and Language Options
in the Windows Control Panel.
Chapter 11: Automating Tasks Using Macros
You can automate common tasks that are performed in Contributor Administration Console by
using macros. Formerly known as Automation Scripts and created by the Contributor Automation
tool, the automation of tasks is now integrated with Contributor Administration Console using
macros and macro steps. This makes it easier to maintain and use automated tasks.
Macros, which are composed of macro steps, can be run in the following ways:
● interactively within Contributor Administration Console
● using Windows scripts and batch files via a batch scheduling tool
Macros are stored in the Planning content store. Macros and macro steps can be exported and
imported again as an XML file.
❑ Create a new macro, see (p. 192).
❑ Run a macro from the Administration Console (p. 221), from Cognos Connection (p. 221), as a
Cognos 8 Event (p. 222), from the command line (p. 224), or as a batch file (p. 224).
The following are examples of tasks in the Administration Console that are performed frequently
and are often automated using either macros or a batch scheduling tool.
Creating a Macro
Create macros using the Macros tool in Contributor Administration Console. Macros are used to
perform tasks automatically. Macros are stored in the Planning content store.
❑ Create a new macro (p. 192)
❑ Create a macro step (p. 193) or transfer macro steps from another macro (p. 196)
Steps
1. In the Contributor Administration tree, click the Macros icon .
2. Click New.
3. In the Macro Name box, type the name of the new macro. Select the Publish to Cognos
Connection check box to make the macro accessible in Cognos Connection, and click OK.
The new macro appears in the Macros box. The Edit State is set to Incomplete because no steps
are added yet, and the Run State is set to Ready.
Tip: Rename or delete a macro by selecting the macro in the Macros list and clicking Rename
or Delete.
Note: The Automation scripts created in version 7.2 and earlier cannot be used as macro steps.
Job Servers
● Stop a job server at a scheduled time: Disable Job Processing (p. 197); example file StopApplicationServer.xml
● Start a job server at a scheduled time: Enable Job Processing (p. 197); example file StartApplicationServer.xml
● Remove a container from a job server: Remove Monitored Job Object (p. 198); example file RemoveMonitoredApplications.xml
● Control the maximum number of jobs that can run on a job server: Set Max Concurrent Job Tasks (p. 199); example file SetMaxConcurrentJobTasksForApplicationServer.xml
● Set how often a job server checks for jobs: Set Polling Interval for Job Server (p. 199); example file SetPollingIntervalForApplicationServer.xml
● Schedule other macro steps to allow any jobs to finish before other jobs start: Wait for Any Jobs (p. 199); example file WaitForAnyJobs.xml
Development
● Move data from import staging tables: Prepare Import (p. 203); example file AutomatedPrepareImport.xml
● Load data into the datastore staging table: Upload Import File (p. 202); example file UploadImportFile.xml
● Import Access Tables into a Contributor application: Import Access Table (p. 204)
Production
● Publish data collected by Contributor to a datastore (using default parameters): Publish - View Layout (p. 208)
● Publish only changed data for e.List items: Publish - Incremental Publish (p. 213)
Administrator Links
Macros
● Run a program from the command line: Execute Command Line (p. 218)
Session
Steps
1. In the Macro Steps area, click New.
3. Click OK.
A dialog box appears with the parameters relevant to the type of macro step you selected.
4. Review each parameter and change it as required. For information about the parameters, see
the topic for the type of macro step you are adding in step 2.
6. If you want to add other steps to the macro, click New and repeat steps 2 to 5 for each macro
step.
The macro steps are added to the list of macro steps contained in the macro.
Tips: To edit or delete a macro step, click the macro step and click Edit or Delete. To reorder
macro steps, click the macro step and then click the Move Up or Move Down button.
Steps
1. Click the Macros icon in the tree.
Property Description
Direction
Delete Source: Deletes the macro step from the original macro.
Copy From
New Macro: Creates a new macro that includes the selected macro step.
(self): Makes a copy of the selected macro step in the existing macro.
Files in Folder: Copies the selected macro step to a folder, which is used for backup purposes.
Publish to Cognos Connection: Publishes the macro to Cognos Connection for use in Event Studio.
Select Steps
4. Click OK.
Parameter Task
Job Server or Cluster: Browse to the Job Server or Cluster that you want to start by clicking the Browse button and selecting the correct server name.
Parameter Description
Job Server or Cluster The job server or cluster that you want to stop.
Parameter Description
Job Server or Cluster The job server or cluster that you want to start.
Job Doctor
Use this macro step to generate a report on jobs in a container. The report is in XHTML format.
It is typically used on the advice of Technical Support and can be used to help debug problems with
Contributor jobs.
Tip: Adding a Wait for Any Jobs macro step before the Job Doctor macro step ensures that all jobs
are complete before moving on to this macro step. For more information on the Wait for Any Jobs
macro step, see "Wait for Any Jobs" (p. 199).
The following table describes the relevant parameters.
Parameter Description
Include contents of Admin History: Whether to include the adminhistory table information in the report. Although this is useful information, it can slow down report generation and make the output quite large.
Report file name (Enter Local Application Server Path): A path and file name for the XHTML report.
Parameter Task
Job Server or Cluster: Browse to the Job Server or Cluster that you want to stop by clicking the Browse button and selecting the correct server name.
Parameter Description
Job Server or Cluster: The job server or cluster that you want to limit job tasks on.
Maximum number of Job Tasks: The maximum number of job tasks allowed. This should be no more than the number of physical processors on the computer.
Parameter Description
Job Server or Cluster: The job server or cluster that you want to set the polling interval for.
Polling Interval: Set the frequency with which a job server checks whether there are any jobs to do. This is measured in seconds. The default is 15 seconds.
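Conceptually, the polling interval drives a loop like the following sketch. This is purely illustrative, not the product's implementation; `fetch_jobs` and `run_job` are placeholder callables.

```python
import time

def poll_for_jobs(fetch_jobs, run_job, polling_interval=15, cycles=3):
    # Every polling_interval seconds (default 15, as described above),
    # the server checks whether there are any jobs to do and runs them.
    for _ in range(cycles):
        for job in fetch_jobs():
            run_job(job)
        time.sleep(polling_interval)

# Example with a zero-second interval so the sketch finishes instantly.
ran = []
poll_for_jobs(lambda: ["reconcile"], ran.append, polling_interval=0, cycles=2)
```

A longer interval lowers the load on the job server at the cost of jobs waiting longer before they are picked up.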
Parameter Task
Job Timeout (in minutes): A time period after which the macro terminates with an error if the jobs are not completed. The default is 1440 minutes, which is one day.
Go to Production
The Go to Production process takes the development Contributor application and creates the
production application, making it available to users on the Web client. A new development
application is established. Use this macro step to automate the Go to Production process.
Before you can run Go to Production, the application must at least have an e.List and users defined.
You can run Go to Production without setting any rights, but no one can view the application on
the Web client. However, you can preview the application by selecting Production, Preview in the
Administration Console.
When you start Go to Production, job status is checked. If jobs are running or queued, the Go to
Production macro step waits for them to complete. During the automated Go to Production
process, the following checks are performed.
● A check is made to see if there are any jobs running.
● If necessary, a job is created to ensure that all e.List items are reconciled if they are not already.
● A check is made to see if a Cut-down models job is required. If it is required, the job is created
and run.
For more information on Go To Production, see "The Go to Production Process" (p. 239).
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Parameter Description
Minimum e.List item state: One of the following minimum workflow states:
● Not Started
● Work in Progress
● Locked
The state must be greater than or the same as the minimum e.List item state.
If you set Work in Progress, and e.List items are Locked, the e.List items are reset from Locked to Work in Progress.
The default is Locked, which means no change takes place.
Skip Top Level e.List Items Does not reset top level e.List items.
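One way to read the reset rule above is as a comparison over the ordered workflow states. The following sketch encodes that reading; it is illustrative only, with the state names taken from the list above.

```python
# Workflow states in ascending order, as listed above.
ORDER = ["Not Started", "Work in Progress", "Locked"]

def reset_state(current, minimum):
    # An e.List item whose state is above the chosen minimum is reset
    # down to that minimum; items at or below it are left unchanged.
    # With the default minimum of Locked, no change ever takes place.
    if ORDER.index(current) > ORDER.index(minimum):
        return minimum
    return current

print(reset_state("Locked", "Work in Progress"))  # → Work in Progress
```

With Skip Top Level e.List Items selected, top-level items would simply be excluded from this comparison.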
Jobs
Job Timeout (in minutes): A time period after which the macro step terminates if the preexisting jobs are not completed. If jobs are running, Go to Production waits for them to finish. The default is 1440 minutes, which is one day.
Wait for jobs after Go to Production: After the Go to Production process, complete all jobs before moving on to the next macro step.
Import Data
Importing data into cubes requires the following process.
● Select the cube and text file to load into the cube.
● Load data from the text files into the datastore staging tables.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
File to Upload (Enter Local Application Server Path): The name and location of the file to be loaded. This can be a UNC location such as \\server\share\file.txt.
Target Cube Name: The name of the cube that the data is to be imported into.
Remove existing data in import table: Whether existing rows are to be deleted. Selecting this item removes previously loaded data. Clearing this option when the names of previously loaded data match the newly loaded data causes the new data to replace the old. Previously loaded data that is not matched remains in the staging table.
Prepare Import
Use this macro step to take the data from the import staging tables per cube, per e.List item. The
calculation engine validates the data and converts it into import blocks. Errors are written to
ie_cubename.
The import data block contains only the data required for an individual e.List item. Data targeting
No Data cells or formula items and data not matching any items is removed.
The process of converting data into import blocks uses the Job architecture to run on multiple
computers and processes. It will not conflict with other online jobs for the application.
The following table describes the relevant parameters.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Cube Details
Cubes to Prepare: The names of the cubes that you are going to prepare import data for.
Cubes to Zero: The names of the cubes that you are going to zero.
Job Timeout (in minutes): A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes.
Synchronize
Use this macro step to automate the synchronize function for the application.
For more information on synchronizing an application, see "Synchronizing an Application" (p. 177).
The following table describes the relevant parameters.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Analyst Library Name: The name of the Analyst Library to synchronize. Either use the library name already specified for the application or select another library name.
Parameter Description
Save Changes if Destructive: A destructive synchronize removes dimensional items or cubes and results in data loss. When you run the synchronize process from the Contributor Administration Console, you can preview the changes and decide whether to save the synchronization. When running synchronize using macros, it is not possible to preview the changes before saving. Instead, select the Save Changes if Destructive option to save the synchronization even when data will be lost; if the option is cleared, a destructive synchronization is canceled.
A synchronize is considered destructive in the following circumstances:
● cube dimensions added
For more information, see "Changes that Result in Loss of Data" (p. 177).
Parameter Description
Analyst Macro The Analyst Library containing the macro you want to run and the
specific macro.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Access Table File Path (Enter Local Application Server Path): The path for the Access Table file.
Trim leading and trailing whitespace: Whether you want to remove leading and trailing whitespace in the file.
First row is heading: Whether the first row is used as the header row.
Delete Undefined items: If an access table file was previously imported for the access table, and you are importing a new one, existing settings are updated with the new specified settings. Select this check box to delete settings that do not exist in the new file. If the check box is cleared, previous settings are retained.
File Type:
Excel File: Whether you are importing an Excel file. Enter the Worksheet location.
Import e.List
Use this macro step to import an e.List into your application. For more information on the e.List, see "Managing User Access to Applications" (p. 89).
The following table describes the relevant parameters.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Trim leading and trailing whitespace: Whether you want to remove leading and trailing whitespace in the file.
First row is heading: Whether the first row is used as the header row.
Delete Undefined items: If an e.List was previously imported and you are importing a new e.List, existing settings are updated with the new specified settings. Select this check box to delete settings that do not exist in the new file. If the check box is cleared, previous settings are retained.
File Type:
Excel File: Whether you are importing an Excel file. Enter the Worksheet location.
Import Rights
Use this macro step to import rights into your application.
The following table describes the relevant parameters.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Rights File Path (Enter Local Application Server Path): The location of the Rights file.
Trim leading and trailing whitespace: Whether you want to remove leading and trailing whitespace in the file.
First row is heading: Whether the first row is used as the header row.
File Type:
Excel File: Whether you are importing an Excel file. Enter the Worksheet location.
Parameter Description
Application: Enter the name of the Contributor application datastore that the macro step is being run against.
Parameter Description
Model Definition File Path (Enter Local Application Server Path): The name and location of the model definition file. The model definition file is a description of the entire Contributor application and is in XML format.
Save any generated datastore scripts to file: A location for generated datastore scripts to be saved. When Generate Scripts is set to Yes in Admin Options, a check is made to see whether the datastore must be restructured; for example, if tables must be added or deleted, a script is generated. This datastore update script typically must be run by a database administrator (DBA).
Publish
Use the Automated Publish process when you need to perform a Contributor publish as a scheduled
task or from the command line as part of a script. A Publish can no longer target the Contributor
transactional datastore.
During the publish process, the published data is exported to a temp directory on the job servers.
A file is created for each e.List item for each cube. After the files are created, they are typically
loaded to the datastore using a bulk load utility (BCP or SQLLDR) and then the temp files are
deleted.
Using the Publish - View Layout - Advanced macro step, you can do an interruptible publish if you
want to use different mechanisms to bulk load data into the target datastore or an external
application. Interruptible publish prevents the temp files from being loaded into the datastore and
deleted. They remain in the temp directory, or you can collate them into a large file per cube. For
collation, each job server that may be involved in the publish job must expose a share that the
computer running the macro step can access. That share must expose the TEMP folder for the user
context of the Planning Service.
Tip: To use interruptible publish, you must select the User-managed option from the How should
the data be managed option group.
For more information on publishing, see "Publishing Data" (p. 255).
Parameter Description
e.List items to Publish: Whether to publish all e.List items, use the selection from Contributor Administration Console, or select individual e.List items.
Parameter Description
Use Plain Number Formats: Whether numeric formatting is removed or retained. Selecting this option removes any numeric formatting. It publishes data to as many decimal places as needed, up to the limit stored on the computer. Negative numbers are prefixed by a minus sign. No thousand separators, percent signs, currency symbols, or other numeric formats applied on the dimension or D-Cube are used. Plain Number Format uses the decimal point (.) as the decimal separator.
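The effect of plain number formats can be shown in a couple of lines (illustrative values only; Python's formatting stands in for the display formatting a dimension or D-Cube might apply):

```python
value = 1234567.5

# With display formatting: thousand separators, a currency symbol,
# and a fixed number of decimal places.
formatted = "${:,.2f}".format(value)   # "$1,234,567.50"

# Plain number format: no separators or symbols, "." as the decimal
# separator, a minus sign prefix for negatives, and only as many
# decimal places as needed.
plain = repr(value)                    # "1234567.5"
negative = repr(-0.25)                 # "-0.25"
```

Publishing plain numbers keeps the output loadable by downstream tools that cannot parse locale-specific formatting.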
Data Filters
Parameter Description
● Hidden (default)
● Read
● Write
This option is additive, so if you select Hidden, data set to Read and
Write is also published, and if you select Read, data set to Write is
also published.
Automatically upload data to datastore: Whether to load data automatically into the datastore.
Remove existing data: Selecting this option ensures that a consistent set of data is published. It publishes data for all the selected cubes, and removes all other published data in the datastore. Clear this option if you want to leave existing data. If an e.List item is being republished, its data is replaced with the new data.
Where should the data be published to?: Whether to publish to either a default container or an alternate publish container.
Publish GUIDs not Names (to upload to export tables): Select this item if you are doing a standard publish, because the GUIDs are used to load the data into the publish tables. You may want to use names rather than GUIDs if the data is to be exported to Analyst or external systems.
Should files be collated: If Yes, enter the location for the Local Application Server and enter the share name to retrieve files from. This share must exist on all machines that process the job. The same share name is used for all machines.
Parameter Description
Stop execution if specified Cubes not found: Selecting this option halts execution and processing when no cubes can be identified from the selected settings.
Stop execution if no e.List items result from settings: Selecting this option halts execution and processing when no e.List items can be identified from the selected settings.
e.List items to Publish: Whether to publish all e.List items, use the selection from Contributor Administration Console, or select individual e.List items.
Stop execution if specified e.List items not found: Selecting this option halts execution and processing when no e.List items can be identified from the selected settings.
Publish e.List items changed since: Enter or select a date and time to filter for e.List items that have changed since then.
e.List item state: Whether to publish e.List items at any state or specify a particular state to publish.
Job Timeout (in minutes): A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes, or one day.
Parameter Description
Use persisted parameters for all settings: Whether to use the same parameters for all settings each time the macro step is run.
Create columns with data types based on the 'dimension for publish': Determines the column data types from the model using the selected 'dimension for publish'. This option can minimize the default number of columns, although non-uniform data will not be published. For example, row data is filtered when a value is inconsistent with the model and column data type.
Only create the following columns: This option always publishes the selected columns from the 'dimension for publish'. Use this option to publish only the required data of the selected types. Selecting the numeric, date, and text options ensures that all data (uniform and non-uniform) is published.
Include rollups: Selecting this check box includes all items, including calculated items. Clearing this option publishes only leaf items, and therefore fewer rows. You can recreate the calculation in your reporting tools by linking the et and sy tables.
Include zero or blank values: Whether to include zero or blank values in the publish. Clearing this option suppresses rows containing all zeros or blanks, which can substantially speed up the process of publishing data, depending on the number of zero or blank cells.
Prefix column names with data type: Whether to prefix column names with the data type. Select this option if you want the column names to be prefixed with the data type to avoid reserved name conflicts.
Table options
Parameter Description
Dimensions for publish: Whether to use the default dimension for publish or specify a particular dimension.
e.List items to Publish: Whether to publish all e.List items, use the selection from Contributor Administration Console, or select individual e.List items.
Job Monitoring
Timeout (in minutes): A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes.
Parameter Description
Reporting Publish Container: Choose either to publish data to the default container or to specify an alternate publish container.
e.List Item Filter: Select the check box if you want to publish only submitted e.List items.
Job Monitoring
Parameter Description
Timeout (in minutes): A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes.
Delete Commentary
Use this macro step to delete user or audit annotations and attached documents in a Contributor
application, filtering by date and time, character string, and e.List item name.
Contributor applications can be annotated by users in the Web application. There are user and
audit annotations.
User annotations consist of comments per cell, cube (tab in the Web client), and model.
Audit annotations are records of user actions in the Web client, such as typing data, importing files,
and copying and pasting data. They can be enabled or disabled. For more information, see "Delete
Commentary" (p. 289).
When you delete commentary, the following process occurs:
❑ The macro step fetches and unpacks the model definition (a description of the entire
Contributor application).
❑ The commentary for each e.List item is processed in turn, and the specified comments are deleted.
Tip: Adding a Wait for Any Jobs macro step before the Delete Commentary macro step ensures
that all jobs are complete before moving on to this macro step. For more information on the Wait
for Any Jobs macro step, see "Wait for Any Jobs" (p. 199).
The following table describes the relevant parameters.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Annotation Filters
Include user annotations in the operation: Whether user annotations are deleted. This option is selected by default.
Parameter Description
Apply content filter: Whether to delete commentary using a content filter. This is off by default. If selected, you must enter a character string as a filter.
Job Timeout (in minutes): A time period after which the macro step terminates if the job is not completed. The default is 1440 minutes, or one day.
Note: You cannot automate the running of the Generate Framework Manager Model extension.
To automate the Generate Transformer Model Extension, you must first run the extension using
Contributor Administration Console. This creates valid settings in the application datastore. The
macro then uses these settings.
The following table describes the relevant parameters.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Parameter Description
Application: The name of the Contributor application datastore that the macro step is being run against.
Enable Web Client Barrier: Whether to prevent users from accessing the Contributor application on the Web. Select this option to take the application offline; clear it to bring the application back online.
Note: You must first create a valid Administration Link for the application. For more information,
see "Administration Links" (p. 145).
Tip: Adding a Wait for Any Jobs macro step before and after the Execute Administration Link
macro step ensures that all jobs are complete before moving on to this macro step and after running
this macro step. For more information on the Wait for Any Jobs macro step, see "Wait for Any
Jobs" (p. 199).
The following table describes the relevant parameters.
Parameter Description
Job Timeout (in minutes): A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes, or one day.
Macro Doctor
Use this macro step to generate a report on macros for debugging purposes. In the event of problems
with macros, customer support may ask you to create and run the Macro Doctor macro step.
The Macro Doctor captures information about the macros. It also lets you see more detail
about the execution steps and write them to files so that they can be inspected. Those macro step
definitions can be imported into another system, if required.
The following table describes the relevant parameters.
Parameter Description
Folder for report and Macro Steps (Enter Local Application Server Path): The location where the report and macro step information is created.
Include detailed progress for Macro Steps: Whether to include detailed progress information for each macro step. This extra information can often aid the debugging process.
Macro Test
Use this macro step to test if the macro components are running correctly. When successfully run,
it logs a user-specified Windows Application Event Log message that can be modified.
The following table describes the relevant parameters.
Parameter Description
Execute Macro
Use this macro step to run another macro automatically. This macro step is useful because you
can nest many macros inside one macro. For example, if you have weekly or monthly processes
that share macro steps, such as import and publish, you can use the Execute Macro step to run
them both.
The following table describes the relevant parameters.
Parameter Description
Macro: The macro to run. Do not select the same macro that you are adding this macro step to.
Number of times: How many times the macro should run. The default is 1.
Important: Appropriate Access Rights need to be granted in order to use this macro step. For more
information, see "Access Rights for Macros" (p. 40).
The following table describes the relevant parameters.
Parameter Description
Tip: It is possible to modify an exported macro step’s XML file with a third-party editor and then
import it again using this macro step.
The following table describes the relevant parameters.
Parameter Description
● Append
Parameter Description
Root folder (Enter Local Application Server Path): The location where the macro is exported to.
● Remove
● Leave
Parameter Description
Running a Macro
There are a number of scheduled and ad hoc methods that you can use to run a macro. You can
run a macro in the following ways.
● "Run a Macro from Administration Console" (p. 221)
For more information about the execution location and credentials for Contributor macros, see the
following table.
Steps
1. In the Contributor Administration tree, click the Macros icon.
2. In the Macros list, select which macro you want to run and click Execute.
A dialog box appears informing you that the macro is running.
Tips: You can monitor the progress of the macro in the Macro Steps list. You can view any
error messages by clicking Error Details. You can stop a macro by clicking Stop. The macro
will stop before the next macro step begins.
Steps
1. In Cognos Connection, in the upper-right corner, click Cognos Administration.
4. Set the general properties and permissions, see the Cognos 8 Administration and Security Guide.
5. To run the macro immediately or schedule it to run at a specified time, click Run with options
and select to run now or later. If you select later, choose a day and time to execute the
macro and click OK.
7. Under Frequency, select how often you want the schedule to run.
The Frequency section is dynamic and changes with your selection. Wait until the page is
updated before selecting the frequency.
8. Under Start, select the date and time when you want the schedule to start.
Tip: If you want to create the schedule but not apply it right away, select the Disable the schedule
check box. To later enable the schedule, clear the check box.
Steps
1. In Event Studio, click the Actions menu and then click Specify Event Condition.
● If you want part of the event condition to apply to aggregate values, click the Summary
tab and follow step 3.
● Type text or drag operators, summaries, and other mathematical functions from the
functions tab.
Tip: To see the meaning of an icon on the functions tab, click the icon and read the
description in the Information box.
4. If you want to check the event list to ensure that you specified the event condition correctly,
from the Actions menu, click Preview.
5. If you want to know how many event instances there are, from the Actions menu, click Count
Events.
7. Specify a name and location for the agent and click OK.
9. Click Advanced.
11. In the Select the planning macro dialog box, specify the task to include in the agent by searching
the folders to find the task you want and clicking the entry.
12. Under Run the planning macro task for the events, review the event status that will cause the
task to be run.
14. In the I want to area, click Manage the task execution rules.
15. On the source tab, click one or more data items that uniquely define an event and drag them
to the Specify the event key box.
17. On the Select when to perform each task page, do the following:
● In the Tasks box, click the task that the agent will perform for the event statuses you specify.
● Under Perform the selected task for, select one or more event status values.
18. If you want to manage the execution rules for another task, repeat step 4.
Tip: If you want to reset the execution rules for every task in the agent to the default values,
from the Actions menu, click Remove Task Execution Rules. Each task is reset to be performed
for new instances of events and all ongoing instances of events.
Important: The Macro Executor must be installed on the computer from which you run it.
You cannot execute it remotely.
Important: If the macro name has spaces in it, you must enclose it in quotes.
Important: If jobs are scheduled to start while Go to Production is running for an application,
those jobs will fail.
The following is an example of a batch file (.bat) that can be used to run the macro:
"C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe"
MacroName
IF ERRORLEVEL 1 GOTO ExceptionDetectedSettingUpLabel
IF ERRORLEVEL 2 GOTO ExceptionDetectedExecutingLabel
ECHO Succeeded
GOTO EndLabel
:ExceptionDetectedSettingUpLabel
ECHO Exception detected in setting up macro
GOTO EndLabel
:ExceptionDetectedExecutingLabel
ECHO Exception detected in executing macro
GOTO EndLabel
:EndLabel
PAUSE
If you want to enable the calendar graphic for date selection, the calendar control
(MSCAL.OCX) must be installed and registered on the computer on which Administration Console
runs. This control is available in the Microsoft Office suite or from Microsoft Visual Studio.
Date Formats
The format of dates used in macro steps is as follows:
yyyy-mm-ddThh:nn:ss.ttt+00:00
This is the ISO 8601 format, where:
● yyyy = year
● mm = month
● dd = day
● T = the time designator
● hh = hours
● nn = minutes
● ss = seconds
● ttt = milliseconds
● +00:00 = the time zone offset in hours and minutes, relative to GMT (Greenwich Mean Time),
also known as UTC (Coordinated Universal Time)
Here is an example:
2002-11-13T15:10:31.663+00:00
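As an illustration only (Python is not part of the product, and the function name is invented), a timestamp in this layout can be produced as follows:

```python
from datetime import datetime, timezone

def planning_timestamp(dt):
    """Format a timezone-aware datetime as yyyy-mm-ddThh:nn:ss.ttt+00:00."""
    # %f would give six digits (microseconds); keep three for milliseconds.
    base = dt.strftime("%Y-%m-%dT%H:%M:%S")
    millis = f"{dt.microsecond // 1000:03d}"
    offset = dt.strftime("%z")            # e.g. +0000
    return f"{base}.{millis}{offset[:3]}:{offset[3:]}"

stamp = planning_timestamp(
    datetime(2002, 11, 13, 15, 10, 31, 663000, tzinfo=timezone.utc))
print(stamp)  # 2002-11-13T15:10:31.663+00:00
```

For aware datetimes, Python's built-in `isoformat(timespec="milliseconds")` produces the same layout.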
Troubleshooting Macros
You can monitor the status of macros within the Contributor Administration Console. If a macro
fails, the Error Details button is enabled. Click this button to display information about why the
macro failed. You can also find error messages in the error log.
● Check that the correct filename is used in the actual macro step. If the file is open when the
batch command is run, the command fails and returns an error code of 1.
● Check that the correct case is used for the macro name. The macro name is case sensitive.
● Look in the TEMP folder of the Planning Service user context on the machine running the
Planning Service where the macro was executed. If you see the message "Permission denied"
in the Error Description column, your security may not be set up correctly. For more
information about security, see "Security" (p. 27).
Note: You can run macros from the command line only on a computer with a server installation.
Chapter 12: Data Validations
Data validation is the process of aligning plans with targets by enforcing business rules and policies.
Use the data validation feature in Cognos 8 Planning to define rules that ensure that incoming data
in a Contributor application is in the right format and conforms to existing business rules. Building
data validations involves defining a business rule that specifies the criteria that user input must meet
for the entry to be accepted.
A validation rule represents a single data entry requirement imposed on a range of cells in a single
cube of a model. This requirement is expressed as a rule or boolean formula (true or false) that
identifies invalid data entries when contributors or reviewers attempt to save or submit a plan. A
rule set is a collection of rules that can be associated with e.Lists and fail actions.
Data validation in Cognos 8 Planning has the following benefits:
● You can apply different rules to different e.List items.
Validation Methods
Cognos 8 Planning provides these methods for validating data:
● presence check
Validates input into empty numeric or text cells. This method checks that critical data is present
and was not omitted from the plan. For example, contributors must enter forecast data for
product sales or provide an explanatory note for variances.
● dependencies
Validates a text cell based on values in other cells, using single or compound conditions. For
example, contributors must enter an explanation into a text cell for any capital request that
exceeds $25,000 or for a capital request in the Other category greater than $25,000.
Validation Triggers
Validation rules are run on the Contributor Web client or on Contributor for Excel. A rule is
evaluated under one or more of the following conditions:
● automatically, when a contributor saves a plan or when a reviewer submits a plan
● manually, when a contributor or reviewer selects the Validate Data option from the File menu
or clicks the Validate Data toolbar button in the Web client
● manually, when a contributor or reviewer selects the Validate Data option from the Contributor
menu in Excel
When one of these triggers occurs, the rule formula is evaluated to either pass or fail. If any of the
rules in the rule set detects a failure during the evaluation, the rule set is considered to have failed
and the fail action specified in the rule set is performed. The contributor or reviewer may be prevented
from saving or submitting the plan.
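This pass/fail behavior can be sketched as follows. The code is a hypothetical illustration in Python; the function name, rule representation, and data layout are invented for clarity and are not the product's internals:

```python
# Hypothetical sketch: a rule set fails if any of its rules evaluates
# to False, and the rule set's fail action is then performed.

def evaluate_rule_set(rules, cells, fail_action):
    """Return (passed, outcome). outcome carries the fail action and the
    rule messages shown to the contributor or reviewer."""
    failed_messages = [rule["message"] for rule in rules
                       if not rule["formula"](cells)]
    if failed_messages:
        return False, {"fail_action": fail_action, "messages": failed_messages}
    return True, None

rules = [
    {"formula": lambda c: c["Margin"] >= 0.15,
     "message": "Margin must be at least 15%"},
    {"formula": lambda c: c["Margin"] <= 0.18,
     "message": "Margin must not exceed 18%"},
]

# A margin of 20% fails the second rule, so the whole rule set fails.
passed, outcome = evaluate_rule_set(rules, {"Margin": 0.20}, "Restrict Submit")
print(passed)               # False
print(outcome["messages"])  # ['Margin must not exceed 18%']
```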
Important: Rules that were built in versions prior to Cognos Planning 8.2 are incompatible with
the current product version and must be redefined as follows.
The process flow for defining validations is as follows:
❑ Plan how the rule applies to a reviewer, contributor, or both a reviewer and contributor (p. 229)
For reviewers, define validation rules against post-aggregate D-Lists. Because reviewers are
managing the aggregate of their contributors, the post-aggregate calculations, which are results
from all e.List items, are applicable only to the reviewer.
For contributors, define validation rules against pre-aggregation D-Lists (before the e.List).
For contributors and reviewers, define validation rules against post-aggregate calculations.
In the Data Validations, Rules folder, use the Validation Rule wizard to define the validation
rules. Specify the rule message that appears when data fails validation so that contributors or
reviewers can react to the failed entry. You also specify the D-Cube to which the validation
applies, the measures dimension whose items are used to define the rule formula, and the scope
or target range for validation.
In the Data Validations, Rule Sets folder, you can create rule sets by adding one or more rules,
and assigning fail actions. A rule set applies to a single data validation process.
❑ Associate the rule sets with groups of e.List items (p. 237)
In the Data Validations, Rule Set e.List Items folder, associate the rule sets with the groups of
e.List items. Specify the roles, such as contributors, reviewers, or contributors and reviewers
and their subordinates, to which the rule set applies.
In the following example, the first dimension, RollupTest Slots, holds data values. The second
dimension, RollupTest FirstDim, occurs prior to the e.List item, so its calculations are
pre-aggregation. The next dimension is the e.List. The final dimension, RollupTest LastDim, is
post-aggregation because it occurs after the e.List.
As shown next, the RollupTest FirstDim D-List includes the Conditional item, which is a test for
data input greater than 50,000, with a default calculation option of Force to Zero for the Flag
Value. That means it will not calculate a value for this aggregate.
The next example shows the RollupTest LastDim D-List that includes Conditional1 items that test
for LinkedValue greater than 50,000.
The following graphic shows the RollupTest LastDim D-List that includes Conditional2 items that
test for InputValue greater than 50,000.
The D-Cube that is built with RollupTest FirstDim and RollupTest LastDim shows how the values
for cells are calculated.
When the underlying calculations are defined in Analyst, you can view the outcome in the
Contributor Web client. Suppose that the Contributor Web client shows three e.List items:
A1 Profit-center and A2 Profit-center, which roll up into A Region.
For A1 Profit-center, because the pre-aggregation test is based on conditional values of 50,000,
only one input cell passes the test. The Conditional cell tests against the Input Value item, so the
Conditional values are 0 and 1. Conditional1 tests LinkedValue and Conditional2 tests InputValue.
The post-aggregation items are Text1 and Text2.
For A2 Profit-center, you can see how the pre-aggregation (LinkedValue and InputValue) and
post-aggregation (Text1 and Text2) tests change.
The Reviewer e.List item shows that the Flag Value is not present. The Force to Zero option in
Analyst suppressed this from the reviewer e.List item because no data was present.
The Conditional values are sums of the values from the Contributor e.List items. For A1 Profit-center,
the two values are 0 and 1. For A2 Profit-center, the values are 1 and 1. The aggregation adds the
values of these flags, giving the reviewer values of 1 and 2.
Conditional1 and Conditional2 do not have their values added because the calculation is recalculated
against the aggregation total. Note that the Conditional1 value shows the test failing in the first
row for the reviewer. Text1 and Text2 reveal that the formatted D-List items appear based on the
recalculated Conditional1 and Conditional2 fields.
When creating a rule, you must also specify the cell range (scope) that is subject to validation.
A validation rule contains a formula or expression that evaluates the data in one or more cells in
the grid and returns a value of true or false. If you require complex expressions that are
cross-dimensional or deeply nested, we recommend that you first construct them in Cognos 8
Planning - Analyst.
● Do not create contradictory rules for the same target range because they will prevent contributors
or reviewers from saving the plan.
● If an entry does not conform to a rule, ensure that you provide explicit instructions in your
message. For example, instead of stating invalid entry, state the message as Capital costs
greater than $25,000 must be pre-approved.
● Consider which items are visible and editable for the contributor. Cells that are readable can
cause a validation error, but hidden or no data cells do not impact validation rules and cannot
cause a validation error.
You can use saved selections to specify the data that you want validated. You can name and save
a collection of items from a dimension using the Access Tables and Selections node under the
Development branch in the Administration Console. Saved selections are dynamic in that the items
in the selection change when an application is synchronized following changes to the Analyst model.
Hidden and empty cells are not validated when the rule set is run.
Steps
1. Open the Administration Console.
4. Click New.
● In the Rule Name box, type a unique name that distinguishes the validation rule from
others.
No blanks or special characters, such as the apostrophe (’), colon (:), question mark (?), and
quotation marks (") are allowed.
● In the Rule Message box, type the error text message that you want the contributor or
reviewer to see if the validation fails.
We recommend that you include a message in the rule to facilitate data entry. The message
should contain meaningful information that helps the contributor or reviewer enter the
correct data.
7. Click Next.
8. On the Validation Rule Cubes page, select the D-Cube against which the rule is applied, and
click Next.
Assumptions cubes do not appear in the list of available D-Cubes.
9. On the Validation Rule Dimension Selection page, select a measures dimension in the D-Cube
whose items are used to create the boolean formula for the rule, and click Next.
A rule expression is defined against a specific dimension in the selected D-Cube. All dimensions
of the cube, except the e.List, are listed.
10. On the Validation Rule Expression page, build the business logic by defining a rule formula
that evaluates to either true or false, and click Next.
11. Under Available components select items from the specified dimension in the D-Cube that you
want to use to define your rule expression and then click the arrow to move them to the
Expression definition box. Use the IF statement, AND/OR boolean operators, or logical
comparison operators, such as =, <>, and <=. It is not necessary to begin the validation rule
formula with an IF function. You can use any boolean condition expression.
For example, one of your divisions is facing competitive pricing and has a minimum and
maximum margin target. Corporate marketing wants their average forecast margin to be greater
than or equal to 15% and less than or equal to 18%. In this case, the rule expression is defined
as (Margin >= 0.15) AND (Margin <= 0.18).
12. On the Validation Rule Scope page, select the cell range or D-Cube slice that you want to
validate, and click Next.
You can specify the range by selecting items from each dimension in the cube. Items can also
include saved selections.
All dimensions in the selected D-Cube are available, with the exception of the measures and
e.List dimensions. Note that because the <ALL> item includes aggregates as well as details, it is
not an optimal data item to include in a rule.
13. Click Finish. If you want to change or review your settings, click Back.
The rule is automatically saved and associated with the model. It is now available for inclusion in
a rule set.
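The example rule expression from step 11, (Margin >= 0.15) AND (Margin <= 0.18), can be checked outside the product with a short sketch (hypothetical Python, not the rule engine):

```python
# The rule passes (True) when the entry is valid, so the forecast margin
# must be between 15% and 18% inclusive.
def margin_rule(margin):
    return (margin >= 0.15) and (margin <= 0.18)

print(margin_rule(0.16))  # True: the forecast margin passes validation
print(margin_rule(0.20))  # False: the entry fails and the rule message appears
```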
Steps
1. Open the Administration Console.
3. Click the Rule Sets folder, and choose whether to create a new rule set or edit an existing one:
● To edit an existing rule set, click the rule set that you want to change, and then click Edit.
4. In the Rule Set Name box, type a unique name for the rule set that distinguishes it from the
others.
5. In the Fail Action box, specify one of the following types of action to be triggered when one
or more rules in the rule set fails validation:
● To show only the rule message and take no action, click Message Only.
● To show the rule message and restrict contributors or reviewers from submitting the plan,
click Restrict Submit.
● To show the rule message and prevent contributors or reviewers from either saving or
submitting the plan, click Restrict Save and Submit.
Important: Use caution when applying this setting; it is not considered a best practice. If one
or more rules fail, contributors or reviewers cannot save the plan. To close the plan, the rules
must be resolved to a value of true, which may not be possible for the user to achieve.
6. In the upper rule grid, select the rule or rules that you want to include in the rule set, and click
Add.
The selected rules appear in the lower grid.
Tip: You can remove a rule from the rule set using the Remove button.
7. Click OK.
You can now associate the rule set to e.List items (p. 237).
Steps
1. Open the Administration Console.
5. Choose how you want to modify the rule, and click OK.
6. If you want to rename the rule to something more obvious, in the Name box, type the new
name.
7. If you want to change the message to reflect the new constraints or limitations, in the Message
box, type the new message.
● In the Edit Validation Rule Expression dialog box, define the new boolean expression used
to evaluate data entry and click OK.
● Under Available components, select items from the specified dimension in the D-Cube that
you want to use to define your rule expression, and then click the arrow to move them to
the Expression definition box. Use the IF statement, AND/OR boolean operators, or logical
comparison operators, such as =, <>, and <=. It is not necessary to begin the validation rule
formula with an IF function. You can use any boolean condition expression.
9. If you want to change the slice in the D-Cube that is subject to validation, under Scope, click
the new items under each dimension in the D-Cube.
Steps
1. Open the Administration Console.
The Validation Rule Set grid shows all the available rule sets.
● Under E.List Item Name, click the e.List item to which you want to apply the rules, and
click Add. You can also select ALL, All DETAIL, ALL AGGREGATE, or any e.List saved
selections.
The By Rule Set tab is filtered by rule sets and is sorted by all the rule sets and their
associated e.List items. The By e.List Item tab is filtered by e.List items that are associated
with the current rule sets.
Chapter 13: The Go to Production Process
Use Go to Production to formally commit a set of changes. Any issues, such as invalid editors, an
invalid e.List, or destructive model changes, are reported by the Go to Production process.
The Go to Production process can be automated (p. 200) so that you can schedule it to run during
slow periods.
Go to Production consists of the following stages:
● pre-production
● go to production
● post-production
The crucial stage is the go to production stage where the old production application is replaced by
the incoming development application.
Until the go to production stage, all Web client users can use the old production application as
normal. Immediately after the go to production stage, a new production application exists that Web
client users then use. After the go to production stage, Web client users attempting to open a model
have access only to the new production application. However, if users are already viewing or editing
a model from the old production application at the time of the go to production stage, client-side
reconciliation is required.
In the go to production stage, the old production model definition is replaced by a new production
model definition (the incoming development model definition). Many development changes have
no effect on the structure or content of production data blocks and affect only the production model
definitions. An example is everything appearing within the Web-Client Configuration branch in
the Contributor Administration Console. If these are the only changes made, the production
application is fully updated when the go to production stage is complete.
Other development changes require a new production model definition and require the production
data blocks to be updated. For example, synchronizing with the Analyst model can change the
structure and content of data blocks in many ways. If changes that affect data blocks have been
made, the go to production stage fully updates the production model definitions as normal, but the
data blocks are updated by a subsequent reconciliation process.
If an import is performed, an Analyst to Contributor D-Link is run, or if an administration link is
run in the development application, then import data blocks are created. If there are import data
blocks at the point of the go to production stage, these import data blocks are moved into the new
production application. After the go to production stage, the import data blocks are combined with
the production data blocks; this combining is also handled by the reconciliation process. After this, the
import data blocks are removed from the development application.
In summary, the go to production stage replaces the old production model definition with a new
one, and moves any import data blocks into production. If import data blocks or changes that affect
production data blocks are made, the production data blocks are updated by a reconciliation process
that follows Go to Production.
During the go to production stage the application is taken offline temporarily to ensure data integrity.
The new e.List item workflow states are determined to correctly process any e.List hierarchy changes.
As soon as those changes are applied the application goes online again and the post-production
processes are started. This offline period is typically so short that it is transparent to users, but it
can sometimes exceed one minute.
Planning Packages
In Cognos 8, a package is a folder in Cognos Connection. You can open the package in a studio
to view its content. A Planning package is a lightweight package that contains only the connection
information to the cubes in the Planning application. The D-List and D-List item metadata are
extracted from the Planning application at run time.
To access Contributor applications, you must select the option to create the package when you run
Go to Production. This option also gives users access to Cognos 8 studios from the Contributor
application if they have the studios installed and enables users to report against live Contributor
data using the Planning Data Service (p. 304).
You may choose not to create a package if you just want to publish the data and create a PowerCube
or a Framework Manager model using the extensions. This saves time because Go to Production
finishes more quickly.
To create a Planning Package, you must have the Directory capability. This is not part of the Planning
Rights Administrator role, but it is part of the Security Administrator role. For more information,
see "Capabilities Needed to Create Cognos 8 Planning Packages" (p. 32).
The Planning Package is created with the same display name as the Contributor application by
default, and a data source connection named Cognos 8 Planning - Contributor is created in
Framework Manager. You can configure the name of the Planning Package, and add a screen tip
and description. For more information, see "Set Go to Production Options" (p. 79).
The security on the Planning Package is as follows:
● The Planning Rights Administrator role is granted administrative access to the package.
● All the users who have access to the application are added as users of this package.
● The user who is logged on to the console when performing the Go to Production is the user
who creates the package. Therefore, that user is given administrative access to the package.
This user is not necessarily a planning administrator because they could have been granted only
Go to Production permission by a planning administrator.
If you remove an application from the console, any corresponding planning package in Cognos
Connection is disabled. The package will be hidden from the users and will appear with a locked
icon to administrators. This allows administrators to maintain an application while making it appear
offline to users. When the application is re-added in the console, the corresponding planning package
is re-enabled.
Reconciliation
The reconciliation process ensures that the copy of the application used on the Web is up to
date. For example, all data is imported, new cubes are added, and changed cubes are updated. For
more information, see "Reconciliation" (p. 52).
The first time Go to Production is run for an application, all e.List items are reconciled. Subsequently,
only some changes result in e.List items being reconciled. Reconciliation can take some
time, depending on the size of the e.List. If you are making changes that require reconciliation,
check that you made all required changes before running Go to Production.
Model Definition
A model definition is a self-contained definition of the model. It holds definitions of the dimensions,
cubes, and D-Links of the model, as set up in Cognos 8 Planning - Analyst. It also holds details of
modifications applied in the Contributor Administration Console. This includes configuration
details, such as navigation order for cubes, options such as whether reviewer edit or slice and dice
are allowed, and Planning Instructions. A model definition also includes e.List details and access
table definitions. It also contains all assumptions cube data, because this does not vary by e.List
item.
Data Block
The data block for an e.List item contains all data relevant to an individual e.List item, except
assumptions cube data (p. 115). It contains one element of data for every cell in the model, except
for any cell identified as No Data by Contributor access tables (p. 118). No Data cells are generally
treated as if they did not exist. This reduces the volume of data that must be downloaded to and
uploaded from clients, speeding up client-side recalculation and server-side aggregation.
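As a rough illustration of why excluding No Data cells shrinks a data block (the cell representation here is invented and is not the product's storage format):

```python
# Invented representation: a data block holds one element per cell, except
# cells marked No Data by access tables, which are simply absent.
NO_DATA = object()  # marker for cells excluded by access tables

model_cells = {
    ("Revenue", "Jan"): 1200.0,
    ("Revenue", "Feb"): 1350.0,
    ("Notes", "Jan"): NO_DATA,   # excluded: never downloaded or uploaded
}

data_block = {cell: value for cell, value in model_cells.items()
              if value is not NO_DATA}
print(len(model_cells), len(data_block))  # 3 2
```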
When a Web client user opens an e.List item by clicking its name in the status table, a model
definition is opened, and then the appropriate data block is loaded into it. If a multi e.List item
view is opened, more than one data block is loaded. Wherever possible, the model definition and
data block are loaded from the client-side cache if enabled. If the client-side cache does not contain
an up-to-date version of the model definition or the data block, they are downloaded from the
server. Note that data in the data block is not compressed, although compression and decompression
take place on transmission to and from the client.
In addition to a data block, each e.List item also has an annotation block. Various translation tables
exist if multiple languages are used.
Production Tasks
You can do the following to the production version of a Contributor application:
● publish data
You publish the production version of the application. However, you must set dimensions for
publish in the development application and then run Go to Production to apply the changes to
the production version. This is because setting dimensions for publish requires datastore tables
to be restructured (p. 79).
● configure extensions
Extensions allow you to extend the functionality of Contributor in ways that fulfill business
requirements. For example, an extension can use the existing data in Contributor and export
it to create reports. An extension can also extend printing to different types of formatting.
Run Go to Production
You must run Go to Production to commit any changes made to the development application, such
as setting configuration options, importing data, and synchronizing with Analyst.
You must wait for all jobs to stop running before running Go to Production. This includes the
reconcile job.
Before running Go to Production, ensure that
● an e.List was imported and rights were set
● the Copy Development e.List item publish setting to production application and Prevent
client-side reconciliation options are set as required
For information about these options, see "Go to Production Options Window" (p. 244).
Steps
1. Select the Contributor application.
2. Click the Go to Production button. If this button is not enabled, check that the application has
an e.List. If it does and the button is still not enabled, you do not have access rights to run
Go to Production.
Back-up Datastore
This option creates backups of the development and production applications and stores them in
the location specified during application creation or in the Datastore Maintenance window (p. 80).
We recommend that you set this option. If you clear this option, a warning advises you to make a
backup in case of problems. Note that when you automate the Go to Production process, there is
no backup option and you should schedule a backup to be made before running the Go to Production
process.
Workflow States
Reset resets the state of the e.List items in the Contributor Application.
If required, select one of the following options:
● Not Started
This sets every e.List item back to the state of Not Started.
● Work in Progress
This sets all e.List items to a state of Work in Progress, meaning that changes were saved but
not submitted.
● Locked
This locks all e.List items. No changes can be made to locked e.List items, but the data can be
viewed.
Skip top e.List items enables you to reset all but the top e.List items.
Subsequent Go to Productions
When you run Go to Production more than once, depending on the changes you have made, you
may see the following information:
● model changes
Changes to Cubes
When you expand the cubes listed under Common Cubes, the following branches are listed under
each cube: New, Old, and Differences.
New and Old contain the same categories of information and list what was in the old model and
what is in the new model.
Name Description
UpdateLinks: The update links that are associated with the cube.
The following window shows the differences for the D-Cube Revenue Plan that result from adding
a dimension:
Name Description
AggregationDimension [New]: A positive number indicates that this cube contains the e.List.
AggregationDimension [Old]: -1 indicates that there was no e.List in this cube in the old model.
Changes to Dimensions
When a dimension is changed, three branches are listed under each dimension: New, Old, and
Differences. When you click one of these branches, something similar to the following appears:
This table lists the following details for each dimension item in the New, Old, and Differences
windows.
Name Description
Item Guid: A unique internal reference for items in a model. When you add a dimension item,
it is assigned a GUID.
Parent Index: An internal reference that indicates the parent of each item in the hierarchy.
Changes to Links
When a link changes, three branches are shown, New, Old, and Differences.
Name Description
CaseSensitive 1 = on
0 = off
When you click the Difference branch, you see an overview of the changes.
The Editor column lists the editors who are currently editing the e.List item.
Invalid Editors
An invalid editor is a user who was editing or annotating an e.List item when Go to Production
was run, and, due to a change, can no longer edit or annotate the e.List item. These changes can
be one of the following:
● The e.List item that was being edited or annotated was deleted.
● The review depths of an e.List item that is being edited by a reviewer or annotated were changed
so that the user no longer has access.
Editor Lagging
Editor lagging lists those Web client users who are editing at the time, either online or offline.
Steps
1. Go to the Production branch of the application, and then click Preview.
● An e.List item that was being edited or annotated when Go to Production was run was deleted.
● The rights of a user who was editing or annotating an e.List item when Go to Production was
run have been changed to View.
● Reviewer edit was prevented and the reviewer was editing an e.List item when Go to Production
was run.
● The review depths of an e.List item that is being edited by a reviewer (with reviewer edit allowed)
or annotated have been changed so that the user no longer has access.
● A reconcile job (p. 52) is run for the e.List item that is being edited or annotated and Client
side reconciliation is prevented.
● Two reconcile jobs have been run for the e.List item while someone is editing or annotating it
(on or offline). Prevent client side reconciliation can be on or off. Note that running an
administration link or an Analyst<>Contributor link causes a reconcile job to run.
● Another user takes ownership of the e.List item while the current user is editing it or annotating.
● Users will receive a warning message and the buttons in the grid will disappear. Users will be
able to right-click in the data and save to file.
Finish Window
During the final stage of Go to Production, the following processes occur.
Datastore Backup
If this option was selected, a datastore backup is made after the cut-down models process.
Preproduction Processes
● A master model definition per language is produced.
● Error trapping takes place, for example, if there are no e.List items, Go to Production does not
take place.
Go to Production
● The development and production model definitions are unpacked and loaded into memory.
● Two e.List items are reconciled as a test. Most errors in reconciliation occur when the first
e.List items are reconciled.
● The switch-over from the development to the production application is performed. During this
stage, the system takes the application offline temporarily to ensure data integrity. The new
e.List item workflow states are determined during this time to correctly process any e.List
hierarchy changes. As soon as those changes are applied, the application comes back online and
the post-production processes are started. This offline period is typically so short that it is
transparent to users, but it can sometimes exceed one minute.
● Completed jobs that are no longer relevant are removed from the job list, such as a publish of
a previous production application.
A message tells you that you have successfully put the development application into production.
Then the following operations are performed:
● Obsolete cut-down models are removed by the cut-down tidy job.
● A validate_users job is run to check that the current owner or editor of an e.List item can still
access the e.List item.
● Redundant copies of translations from the previous production application are removed by the
language_tidy job.
● If reconciliation is required, it is queued and run as soon as job servers are started and set up
to monitor the application.
Note: If you set the production application offline before running the Go to Production process, it
is offline when the Go to Production finishes running. If the production application is online before
running Go to Production, it is online when Go to Production finishes.
Chapter 14: Publishing Data
You can publish the data collected by Cognos 8 Planning - Contributor to a datastore, either from
the Administration Console, or using the publish macros (p. 208). The data can then be used either
as a source for a data mart or warehouse, or with Cognos 8 studios. The publish process creates a
datastore containing tables and views based on the publish layout and options that you select.
Publish Layouts
Choose from these types of publish layouts: table-only, incremental, and view.
● The table-only layout gives users greater flexibility in reporting on Planning data. The table-only
layout can also be used as a data source for other applications. This layout is required by the
Generate Framework Manager Model Admin extension (p. 308) and the Generate Transformer
Model Admin extension (p. 311).
● The incremental publish layout publishes only the e.List items that contain changed data. Users
can schedule an incremental publish using a macro (p. 213) or through Cognos Connection and
Event Studio. You can achieve near real-time publishing by closely scheduling incremental
publishes.
● The view layout generates views in addition to the export tables. This layout is for historical
purposes.
● An accurate snapshot is taken of the data at the time a publish is run to ensure a consistent
read.
● A publish job runs. This creates tables in the datastore, depending on the layout and options
selected. For more information, see "Jobs" (p. 47).
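One way to picture the incremental layout's change detection is a per-e.List-item fingerprint compared between publishes, so that only items whose data changed are republished. This sketch is illustrative only; the real mechanism is internal to Contributor:

```python
import hashlib
import json

def changed_elist_items(current_data, published_digests):
    """Return the e.List items whose data changed since the last publish.
    current_data maps item name -> cell values; published_digests holds
    the fingerprint recorded at the previous publish (hypothetical)."""
    changed = []
    for item, cells in sorted(current_data.items()):
        digest = hashlib.sha256(
            json.dumps(cells, sort_keys=True).encode()
        ).hexdigest()
        if published_digests.get(item) != digest:
            changed.append(item)
            published_digests[item] = digest  # remember for the next run
    return changed

digests = {}
first = changed_elist_items({"North": [100], "South": [200]}, digests)
second = changed_elist_items({"North": [100], "South": [250]}, digests)
```

On the first run every item is new, so everything is published; on the second run only the item whose data changed is picked up, which is what makes closely scheduled incremental publishes cheap.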
You can run a publish at the same time as an administration link (p. 145).
Publish Scripts
You may need to create publish scripts before you can publish data if you do not have DBA rights
to your datastore server.
To generate publish scripts, the Generate Scripts option must be set to Yes in the Admin Options
table (p. 178).
If you attempt to publish but a publish container does not exist, a script is generated. A DBA must
then run the script to create the container. A message indicating the location of the script is shown.
If the publish container does exist, a check is run to see if there are any datastore incompatibilities.
If there are incompatibilities, another script is generated. Incompatibilities occur if you republish
a datastore, and the format of the metadata has changed between publishes. For example, a cube
was added, data dimensions changed, items were added to the Analyst model. There are always
incompatibilities on the first publish, since the metadata tables are not present. You cannot publish
until this script is run to update the datastore.
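Conceptually, the compatibility check compares the metadata shape recorded at the last publish with the new model. The following is a hedged sketch of that decision; the real check is internal to the Administration Console:

```python
def needs_sync_script(published_meta, new_meta):
    """Decide whether a synchronization script is needed before publishing.
    published_meta maps cube name -> list of dimensions as recorded in the
    publish container, or None on the first publish (no metadata tables)."""
    if published_meta is None:
        return True                      # first publish: always incompatible
    if set(published_meta) != set(new_meta):
        return True                      # a cube was added or removed
    # a cube's dimensions changed between publishes
    return any(published_meta[c] != new_meta[c] for c in new_meta)
```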
You can generate a synchronization script manually by clicking the Generate synchronization script
for datastore button.
Warnings
You may receive a warning similar to the following when running a script generated by Table-only
layout publish, when the Generate Scripts option is selected:
"Warning: The table 'annotationobject' has been created but its maximum row size (8658) exceeds
the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail
if the resulting row length exceeds 8060 bytes."
The table definition allows for a large amount of data to be stored per row. SQL Server generates
a warning to let you know that there is a limit on how much data you can have on a row. If your
annotation data exceeds this limit then your publish will fail. You can reduce the amount of data
by selecting a smaller data dimension or by reducing the amount of data in the system, for example
by using Delete Commentary.
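You can estimate in advance whether a table will trigger this warning by summing the maximum byte widths of its columns against SQL Server's 8060-byte in-row limit. This is a simplified sketch that ignores per-row overhead and off-row storage:

```python
SQL_SERVER_MAX_ROW_BYTES = 8060  # in-row limit quoted in the warning

def row_fits(column_max_bytes):
    """Check whether a worst-case row stays within the in-row limit.
    column_max_bytes lists each column's maximum width in bytes."""
    return sum(column_max_bytes) <= SQL_SERVER_MAX_ROW_BYTES
```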
❑ For a Table-only layout, you must set the publish e.List item settings on the e.List items tab.
Normally, this is desirable behavior, but the Scenario and Version dimensions that are often used
in planning applications are not suited for aggregation. One technique to handle this is to set up a
mandatory filter on your cube tables in Framework Manager, forcing the reporting environment
to either prompt for values whenever the fact table is used, or to have separate filtered query subjects
for each version.
Precalculated Summaries
Be aware of precalculated summary levels in the published tables when using the Table-only publish
layout. You may find that they complicate your data model. You can disable them by clearing the
Include Roll Ups publish option.
If you do not do this, then the data for precalculated summary levels is published into the same
tables as the detail items. If you are using item tables (named it_D-List_name and containing an
unstructured flat list of all items in the hierarchy) this is acceptable. If not, you may have reporting
issues as your queries need dimensional context in order to avoid double counting.
Note also that the publish takes longer to run (there are more data points to write). If you are
not using the item tables, the reporting environment could confuse users because there are separate
hierarchy table aliases for each level in Framework Manager.
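The double-counting risk is easy to see in miniature: when precalculated summary rows land in the same table as leaf rows, a naive SUM counts every value twice.

```python
# Fact rows as (item, value); "Total" is a precalculated rollup of the leaves.
leaf_rows = [("Jan", 100), ("Feb", 200)]
summary_rows = [("Total", 300)]

# Summaries published into the same table: a SUM over all rows double counts.
naive_sum = sum(value for _, value in leaf_rows + summary_rows)

# Include Roll Ups cleared: only leaf rows are published, so SUM is correct.
leaf_only_sum = sum(value for _, value in leaf_rows)
```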
Changes to Dimensions
Add items: None, as long as the number of levels in the hierarchy remains the same. New columns
are added; existing SQL still works.
Delete items: None, as long as the number of levels in the hierarchy remains the same. Columns
are deleted; processing referring to these columns must be modified.
Rename items: None, unless name filters are used in the BI application. D-List formatted items
are stored in the fact columns as text rather than as a foreign key. As a result, text exported
from previously published data may not match this text. A full publish resets the text in the
publish tables, but review external datastores where these items have not been normalized.
Rename the D-List: The dimension table name changes. None, as long as the D-List is not used in
a D-Cube where it is not the dimension for publish.
Changes to D-Cubes
Change Effect
Reorder dimension None. The column sequence in the datastore may change but this does
not impact reports.
Change Effect
Add dimension Assuming that the new dimension is not the dimension for publish, data
for all items in the new D-List are automatically summed if no action is
taken.
For most lists this is desirable, but care needs to be taken if the dimension
contains scenarios or versions.
Delete dimension Links to the dimension table are removed from the fact table. Reports
referring to items in that dimension are affected.
In the table-only layout, where nonuniform data exists and must be preserved, if you select Create
columns with data types based on the "dimensions for publish", it automatically creates enough
columns so that no data is excluded. However, if you manually choose the columns to create, only
the data in the format selected is published. For example, selecting the numeric and date/time options
guarantees that only numeric and date/time data are written to the corresponding numeric and date/
time columns; text is excluded. As a result, if the first row of an item is a numeric value, it is stored
in the corresponding numeric column. The remaining data type columns for that item are populated
with null values.
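The routing of each cell into its matching typed column, with nulls elsewhere, can be sketched as follows. The column names follow the text_/date_/float_ convention used later in this chapter; the logic itself is illustrative:

```python
from datetime import datetime

def split_by_type(value, publish_text=False):
    """Route a cell value into per-type columns, leaving the others null.
    With only numeric and date/time selected (publish_text=False), text
    values are excluded from the publish entirely."""
    row = {"float_": None, "date_": None, "text_": None}
    if isinstance(value, bool):
        pass                                  # treated as unpublished here
    elif isinstance(value, (int, float)):
        row["float_"] = float(value)
    elif isinstance(value, datetime):
        row["date_"] = value
    elif publish_text:
        row["text_"] = str(value)
    return row
```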
In the view layout, data type uniformity is handled by storing all values in text columns. An
associated fact view (fv) is created using the sum hierarchy to view only numerical information.
● It contains numeric items with different display formats such as ##% and #,###.##.
● You need to treat some of the D-List fields separately for reporting purposes.
● The dimension for publish impacts the time the Publish takes to run. Even though there are
fewer columns to create, more rows are written to the datastore, and this takes time to write.
Tip: In some circumstances you may not want a dimension for publish. In this case your publish
table has one row for every combination of dimension and you would leave all the processing
and formatting intelligence to the reporting tool. Using the Table-only layout, you must select
a dimension for publish, so to achieve equivalent functionality, add a D-List to the cube
containing one item, and use this D-List as the dimension for publish.
● Select whether to prefix the dimension for publish column names with their data type to avoid
reserved name conflicts.
When using the Generate Framework Manager Model Admin extension (p. 308), the table-only
publish layout must be used.
The following types of tables are created when you publish using the table-only layout.
Attached Documents: Contains metadata about the attached documents. Prefixes: ad_ for cell
attached documents; documentobject for tab (cube) and model attached documents.
Hierarchy (p. 264): Contains the hierarchy information derived from the D-List, which is published
to two associated tables. Prefixes: sy_ for the simple hierarchy; cy_ for the calculated hierarchy.
Annotation (p. 267): Contains annotations, if the option to publish annotations is selected.
Prefixes: an_ for cell and audit annotations; annotationobject for tab (cube) and model annotations.
Maximum column name lengths are 128, 30, and 30 characters, depending on the datastore.
Names cannot begin with a number or underscore (_), and can include the following characters:
● a through z
● 0 through 9
● _ (underscore)
Column Description
itemiid D-List integer identifier for the item, which is used as the primary
key
Column Description
Simple hierarchy tables are created by the publish table-only layout. They are intended to be used
when there are simple parent-child relationships between D-List items that can be summed. The
purpose of this is to allow a reporting tool to automatically generate summaries for each hierarchy
level, or for use with applications that do not require precalculated data, such as a staging source
for a data warehouse.
In the following examples, D-List items are represented by letters, and the relationships between
items are drawn as lines.
Parent D-List items are calculated from child D-List item dependencies. Leaf D-List items do not
have child D-List item dependencies.
All D-List items have their values shown in parentheses and, in addition, leaf D-List item
codes are shown in curly braces.
2 The leaf item is part of a sub-hierarchy that has been moved to the
root (no parent).
The left pane is an example of simple hierarchies with values. The right pane is an example of simple
hierarchies with values and leaf D-List item codes.
In the left pane, [E] has more than one parent, so parentage is assigned to the first parent in
the IID order. In the right pane, [D] becomes a leaf D-List item, and [F] becomes orphaned and
is moved to the root.
In the left pane, [P] is the product of [S] and [T]. Leaf D-List items of non-simple summaries
are moved to the root. In the right pane, [P] becomes a leaf D-List item, and [S] and [T] are
orphaned and moved to the root.
In the left pane, [B] is the product of [C] and [E]. [C] has its own simple summary hierarchy.
Because non-simple sums are not included in the hierarchy, in the right pane, [B] becomes a leaf,
[E] and [C] become orphaned and are moved to the root, and [C] keeps its sub-hierarchy because
it is a simple sum.
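The parent-assignment rule in these examples (the first simple-sum parent in IID order wins, and everything no sum references stays at the root) can be sketched as follows. Names and structures are illustrative:

```python
def assign_parents(item_iids, simple_sums):
    """Assign each item at most one parent: the first simple sum, in IID
    order, that references it. Items no sum references stay at the root
    (parent None), matching how orphans are moved to the root above."""
    parent = {iid: None for iid in item_iids}
    for sum_iid in sorted(simple_sums):        # IID order
        for child in simple_sums[sum_iid]:
            if parent[child] is None:          # first parent wins
                parent[child] = sum_iid
    return parent

# Item iid 5 is referenced by two simple sums, iid 2 and iid 4:
# parentage goes to iid 2, the first parent in IID order.
parents = assign_parents([1, 2, 4, 5], {2: [5], 4: [5]})
```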
If you select the Include Roll Ups option, the export tables contain all the data, including calculated
data.
If you do not select this item, the export tables contain only non-calculated fact data.
Users who report against published data that contains only fact data use the reporting tool to
aggregate the calculated items when grouping with the hierarchical dimensions.
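Recreating the rollups from leaf-only fact data amounts to walking each leaf up the published hierarchy and accumulating. This is a sketch of what the reporting tool effectively does with the sy_ parent-child pairs:

```python
def rollup(leaf_facts, parent_of):
    """Aggregate leaf values into every ancestor of a simple hierarchy.
    leaf_facts maps leaf item -> value; parent_of maps item -> parent
    (None at the root), as in a published sy_ table."""
    totals = dict(leaf_facts)
    for leaf, value in leaf_facts.items():
        node = parent_of.get(leaf)
        while node is not None:            # add the leaf into each ancestor
            totals[node] = totals.get(node, 0) + value
            node = parent_of.get(node)
    return totals

totals = rollup(
    {"Jan": 100, "Feb": 200},
    {"Jan": "Q1", "Feb": "Q1", "Q1": "Year"},
)
```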
You can control how the export tables (prefix et_) are generated as follows.
● Publish only uniform cube data. If you select the Create Columns With Data Types Based on the
Dimension for Publish option, the data type of each item of the dimension for publish is used
for the columns of the export tables. If individual cell types differ from that of the corresponding
columns, the corresponding cell data is not published and an informative message appears.
● Include the original formatted numeric and date values, which are stored in the text column.
This is useful when the original format cannot be easily reproduced in the reporting tool
application.
● Publish entire cubes, or publish only leaf data and let the reporting engine perform the rollups.
In this way, you control the level of detail of the information to publish.
The summary hierarchy as specified in the sy_ tables must be used to perform the rollups. Leaf
cells are those that correspond to leaf items of the simple summary hierarchies.
The prefixes text_, date_, and float_ are used to identify the data types of columns in tables, and
the suffix _[count] is used to guarantee name uniqueness.
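Combining the type prefix with a counting suffix for uniqueness might look like the following sketch; the exact counting scheme is an assumption, since it is not detailed here:

```python
def typed_column_name(item, data_type, used):
    """Build a column name '<type>_<item>' and append '_<count>' when the
    name is already taken, so every generated column name is unique."""
    base = f"{data_type}_{item}"
    name, count = base, 1
    while name in used:
        count += 1
        name = f"{base}_{count}"
    used.add(name)
    return name

used_names = set()
first = typed_column_name("Revenue", "float", used_names)
second = typed_column_name("Revenue", "float", used_names)
```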
Column Description
HierarchyDimensionName The unique identifier (p. 263) of the e.List items for
the coordinates of the cell annotations.
Column Description
annotation_date The date and time the annotation was made. They are stored as UTC
+ 00:00.
Column Description
HierarchyDimensionName The unique identifier (p. 193) of the e.List items for the
coordinates of the cell attached documents.
Dimension_DimensionName The unique identifier of the D-List items for the coordinates
of the cell attached documents.
MeasureDimensionItemName_user_id The last user who updated the attachment of the document.
DimensionItemName_date The last date the attachment of the document was updated.
DimensionItemName_filesize The file size at the time the document was attached.
DimensionItemName_comment A comment that was entered at the time the document was
attached.
DimensionItemName_value The cell value at the time the document was attached.
Column Description
Column Description
document_size The file size at the time the document was attached.
document_comment A comment that was entered at the time the document was
attached.
Metadata Tables
Metadata about the publish tables is maintained in several tables.
The description of each database object created during a publish operation is maintained in
applicationobject.
The columns of the P_APPLICATIONOBJECT table are as follows.
Column Description
Column Description
columntypeid -
Column Description
columntypeid -
Column Description
itemguid Globally unique identifier of the item of the dimension for publish.
Common Tables
Common tables are created so that you can track the history of events in the publish container.
The P_ADMINHISTORY table stores information about when major events occurred to the publish
container.
The P_ADMINEVENTS table contains the IDs and descriptions of the event types used in the
P_ADMINHISTORY table.
The P_CONTAINEROPTION table is used for Oracle and DB2 to store tablespace information
for blob, data, and index.
Job Tables
The following tables are created to support jobs (p. 47).
Table Description
P_JOB Information about the jobs that are running or ran in the application.
This information is used in the Job Management window.
P_JOBITEM Each Job Item is represented by a row in the jobitem table. The state
of the Job Item is also stored. If a problem occurred while running
the Job Item, descriptive text is stored in the failurenote column and
is appended to the failurenote column for the job.
P_JOBSTATETYPE Job status types: canceled, complete, creating, queued, ready, running.
P_JOBTASK Where and when the job items ran and the security context it used.
For more information about the table-only publish layout, see "The Table-Only Publish
Layout" (p. 262).
Steps
1. In the application's tree, click Production, Publish, Table-only Layout.
● The Dimension column indicates the data dimension for publish that is selected.
● The Annotation Rows column shows the number of annotation rows for a cube when you
click Display row counts.
● The Export Rows column shows the number of rows that are published when you click
Display row counts.
4. To set the Publish options and configure the Publish datastore connection, click the Options
tab (p. 274).
5. Click Publish.
6. If you are asked if you want to create a publish container, click OK.
7. Select the job server or job server cluster to monitor the publish container and click Close.
You need the rights to add an application to the job server or job server cluster.
A reporting publish job is queued. You can monitor the progress of the job. For more information,
see "Jobs" (p. 47).
Option Description
Creating a New Publish Container: The first time you attempt to publish data, you can either
create the default publish container, by clicking Publish, or create a new publish container
(p. 282).
Configuring the Publish Datastore Connection: To configure the publish datastore connection,
click the Configure button (p. 283).
Create columns with data types based on the 'dimension for publish': To use the item types from
the dimension for publish as the table columns.
Only create the following columns: To manually select the data types that are part of the publish
process for each measure. You can choose to publish Numeric, Text, and Date columns. Within the
Text column, you can also choose whether to include formatted numeric and date values.
Option Description
Include Roll Ups Selecting this check box includes all items, including calculated
items. Clearing this option only publishes leaf items, and therefore
fewer rows. You can recreate the calculation in your reporting tools
by linking the et and sy tables.
Include Zero or Blank Values Clearing this check box means that empty cells are not populated
with zeros or blanks. This can speed up the process of publishing
data substantially, depending on the number of zero or blank cells.
Prefix column names with Select this option if you wish the column name to be prefixed with
data types the data type to avoid reserved name conflicts.
Include User Annotations Selecting this check box publishes cell level user annotations in a
table named an_cubename.
Include Audit Annotations Selecting this check box publishes audit annotations in a table
named an_cubename, in the column annotation_is_edit.
Include Attached Documents Selecting this check box includes information about attached
documents. Information about the attached document such as the
filename, location, and file size are published with the data.
Steps
1. In the application's tree, click Production, Publish, Incremental Publish.
Note: If you use this option without changing data in an e.List, the e.Lists without changes are
not included in the publish.
4. Click Publish.
A message indicates whether an Incremental Publish job was initiated or no changes were detected.
Annotation (p. 278) Contains annotations, if the option an_ for cell and audit annotations
to publish annotations is selected. annotationobject for tab (cube) and
model annotations
Column Description
Two types of hierarchies are currently supported: complete hierarchies and simple summary
hierarchies.
Complete hierarchies are used to produce reports on the entire contents of cubes. Complete
hierarchies are used to organize cube data and are not used to perform rollups and calculations in
the reporting engine. The rules that govern the generation of complete hierarchies in the cy_ tables
are as follows:
● The parent of a given item is the first simple sum that references the item.
● If this sum does not exist, it is the first non-sum calculation that references the item.
Simple summary hierarchies are used when only detail items are published and rollups are performed
from the reporting engine. The rules that govern the generation of these hierarchies are as follows:
● The parent of a given item is the first simple sum that references it.
● If there are multiple candidates for the parent of an item, it is assigned to the first
parent in iid order and the other candidate parents are considered to be detail items in the
hierarchy.
● In the case where a parent cannot be identified that way and the item is not a simple sum, it is
considered to be a root item.
Simple summary hierarchies are not necessarily complete because all items that are part of a D-List
may not necessarily be part of the hierarchy.
The starting point for the production of these hierarchies is the graph of items dependencies produced
when equations are parsed. This graph specifies all parent/child relationships between items. Because
the simple summary hierarchy is limited to simple sums, sub-hierarchies can be detached from the
main hierarchy and moved to the top.
Column Description
Dimension_DimensionName The unique identifier of the D-List items for the coordinates
of the cell annotations.
annotation_user_gu The globally unique identifier of the last user who updated the
annotation.
annotation_date The date and time the annotation was made. They are stored
as UTC + 00:00.
Column Description
Views
An ev_ view is created to provide more user-friendly access to its associated export table
(et_ table), which contains cube data. In this view, GUIDs are simply replaced by the display
name associated with the D-List items, and export values are cast to varchar when published as blobs.
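The GUID-to-display-name substitution an ev_ view performs can be pictured as a simple lookup against the item tables. A sketch, with illustrative row shapes:

```python
def friendly_rows(export_rows, display_names):
    """Replace item GUIDs in export-table rows with D-List display names,
    the way an ev_ view presents its associated et_ table; values with no
    matching GUID pass through unchanged."""
    return [
        {column: display_names.get(value, value) for column, value in row.items()}
        for row in export_rows
    ]

rows = friendly_rows(
    [{"item": "guid-123", "value": 42}],
    {"guid-123": "January"},
)
```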
A fact view (with the fv_ prefix) is created for each cube being published and is limited to
numeric values by joining the export values from the et_ table to the items in the sy_ tables
for the cube. The rules for deriving this hierarchy are explained earlier.
A complete view (with the cv_ prefix) is created for each cube being published and is built by
joining the export values from the et_ table to the items in the cy_ tables for the cube.
The following views are created in a view publish layout.
fv_cubename A view on the cell data for a cube that resolves the star schema linking
to the flattened out hierarchy for a dimension.
ev_cubename A view on the cell data for a cube that resolves the star schema linking
to the items in a dimension.
av_cubename A view on the cell annotations table for a cube that resolves the star
schema.
● Go to Production was run (p. 239) after the data dimensions for publish were selected
For more information about the view layout, see "The View Publish Layout" (p. 276).
Steps
1. In the application tree, click Production, Publish, View Layout.
2. On the Cubes tab, check the cubes you want to publish data from.
● The Dimension column indicates the dimension that is selected.
● The Annotation Rows column shows the number of annotation rows for a cube when you
click Display row counts. Note that only cell annotations are published.
● The Export Rows column shows the number of rows that are published when you click
Display row counts.
3. Click the e.List Items tab to select the e.List items to publish. You cannot publish before
doing this.
4. Click the Options tab. This step is optional and enables you to set the Publish options and
configure the Publish datastore connection. For more information, see "Options for View
Layout" (p. 281).
5. Click Publish.
6. If you are asked if you want to create a publish container, click OK.
7. Select the job server or job server cluster to monitor the publish container and click Close.
You need the rights to add an application to the job server or job server cluster.
A publish job is queued. You can monitor the progress of the jobs.
Option Description
Creating a New Publish Container: The first time you attempt to publish data, you can either
create the default publish container, by clicking Publish, or create a new publish container
(p. 282).
Configure the Publish Datastore Connection: To configure the publish datastore connection, click
the Configure button (p. 283).
Do Not Populate Zero/Null/Empty Data: Ensures that empty cells are not populated with zeros.
Selecting this option can substantially speed up the process of publishing data, depending on
the number of blank cells.
Publish Only Cells With Selecting this check box publishes only rows that include at least
Writeable Access one cell with write access; rows for which all cells are read-only or
hidden are not included. Clearing this check box publishes all cells,
including hidden cells, regardless of access levels.
Use Plain Number Formats Selecting this check box removes any numeric formatting for the
purposes of export. It exports to as many decimal places as are
needed, up to the limit stored on the computer. Negative numbers
are prefixed by a minus sign. No thousand separator, percent signs,
currency symbols, or other numeric formats that were applied on
the dimension or D-Cube are used. Plain Number Format uses the
decimal point (.) as the decimal separator.
Remove all data before publishing new data: Selected by default, this option ensures that a
consistent set of data is published. It publishes data for all the selected cubes, and removes
all other published data in the datastore.
If this check box is cleared, existing data is left in place, unless the e.List item is being
republished. In this case, the existing data for that e.List item is removed and replaced with
the new data.
Option Description
Include User Annotations Selecting this check box publishes cell level user annotations in a
table named an_cubename.
Include Audit Annotations Publishes audit annotations to a table named an_cubename, in the
column annotation_is_edit.
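The Use Plain Number Formats option described above strips display formatting down to digits, a leading minus sign, and a "." decimal separator. A simplified illustration follows; real formats are richer, and percent scaling is deliberately not modeled here:

```python
def to_plain_number(formatted):
    """Strip common display formatting (thousand separators, currency
    symbols, accounting-style parentheses) to export a plain number."""
    s = formatted.strip().replace(",", "").replace("$", "")
    if s.startswith("(") and s.endswith(")"):   # (42) means negative
        s = "-" + s[1:-1]
    return float(s)
```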
Steps
1. In the Production application, click Publish, and either Table-only Layout or View Layout
depending on the type of publish container you require.
2. Click the Options tab, and then click the Configure button.
3. Select the datastore server where you want the publish container to be created.
Location of datastore files Enter an existing location for the datastore files on
the datastore server. Required only by SQL Server
applications.
7. If you have an Oracle or DB2 UDB application, click Tablespace and then specify the following
configuration options:
● Tablespace used for data
8. Click Create.
If you are prompted to create a script, this must be run by a DBA to create the publish container.
If you are not prompted to create a script, the container is created.
The publish container must be added to a job server or job server cluster so that the publish
jobs are processed.
Note: Tablespace settings can be configured only when creating a new publish container.
Steps
1. In the Production application, click Publish, and either Table-only Layout or View Layout
depending on the type of publish container you require.
2. Click the Options tab, and then click the Configure button.
If you are prompted to create and generate a script, do the following:
● Name and save the script, and pass it to the DBA to run.
Option Action
Trusted Connection Click to use Windows authentication for the logon method to
the datastore. You do not have to specify a separate logon ID
or password. This method is common for SQL Server
datastores and less common, but possible, for Oracle.
Use this account Enter the datastore account that this application will use to
connect. This box is not enabled if you use a trusted
connection.
Password Type the password for the account. This box is not enabled if
you use a trusted connection.
Test Connection Click to check the validity of the connection to the datastore
server. This is mandatory.
5. If you want to configure advanced settings, click Advanced, and enter the following information.
Typically these settings should be left as the default. They may not be supported by all datastore
configurations.
Connection Prefix: Specify to customize the connection strings for the needs of the datastore.
Connection Suffix: Specify to customize the connection strings for the needs of the datastore.
6. Click OK.
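To make the interplay of these options concrete, the following Python sketch shows how a trusted connection, an explicit account, and the advanced prefix and suffix typically combine into one connection string. The function name, keyword grammar, and defaults are assumptions for illustration, not Contributor's internal format.

```python
def build_connection_string(server, trusted, account=None, password=None,
                            prefix="", suffix=""):
    """Illustrative sketch: combine the logon options and the advanced
    Connection Prefix/Suffix into one connection string. The keyword
    names and grammar here are assumptions, not Contributor's format."""
    if trusted:
        # Trusted Connection: Windows authentication, no separate logon.
        core = f"Server={server};Trusted_Connection=yes"
    else:
        if not (account and password):
            raise ValueError("an account and password are required "
                             "without a trusted connection")
        core = f"Server={server};Uid={account};Pwd={password}"
    # The prefix and suffix wrap the generated string verbatim,
    # customizing it for the needs of the datastore.
    return prefix + core + suffix
```

For example, `build_connection_string("PLANDB01", trusted=True)` yields a string that requests Windows authentication with no separate credentials.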
Steps
1. From the Tools menu, click Maintenance, Validate Publish Containers. A list of publish
containers is displayed.
Chapter 15: Commentary
Attached documents and user annotations that are linked to a plan are grouped together and are
named Commentary. The user can view an attached document by browsing the Commentary of
an application.
The Maintenance branch of the production application enables the user to delete user annotations
(p. 287), audit annotations (p. 287), and attached documents (p. 290) from the application datastore.
The user can also filter what to delete based on a date or on the text that an item contains.
Annotations
The two types of annotations are user annotations and audit annotations.
User Annotations
User Annotations consist of the following:
● Annotations per cell.
Any owner of an e.List item, that is, a user with directly assigned or inherited rights greater
than View, can annotate. Users with View rights cannot annotate, but can view annotations.
Annotations include the date and time, the user name, and text.
Annotations in cells are indicated by a red dot and are shown as tips when the mouse moves
over the dot.
Users can browse and print annotations.
Audit Annotations
Administrators can choose to record user actions. These records are called audit annotations. Audit
annotations can be made visible in the Web application to the user, and can be published.
Audit annotations record user actions in the Web client, such as typing data, importing files, and
copying and pasting data. Tracking changes through the system is useful for auditing purposes,
and for seeing who made changes when an e.List item has multiple owners.
To control its impact on the size of the application datastore, this feature is configured in Application
Settings.
Users can view audit annotations using the Annotation Browser, if they are visible in the Web client.
For more information, see "Change Application Options" (p. 72).
Steps
1. The user right-clicks the cell, tab, or model to be annotated.
2. The user selects Annotate, then Annotate cell, tab, or model, and then selects Add.
3. The user types the note and closes it by clicking the button in the top right corner.
Users can annotate a particular cell, cube, or the model only once in a session, but can annotate
more than one cell or cube in a session. They can edit annotations in that session. An annotation
session is ended by saving.
e.List items can be annotated in any workflow state, for example locked. Making an annotation
does not affect the workflow state.
A user who wants to annotate a contribution e.List item never bounces the current editor of
the e.List item, irrespective of the status of the Allow Bouncing option (p. 72).
Assumption cubes cannot be annotated. This is because assumption cube data is stored in the
model definition, not in the data block, so it cannot be made user-specific.
2. Type the HTML link using the file: command in the following format: file:\\uncdrivename\docs\
expenses.xls
where uncdrivename is the UNC (universal naming convention) name of the drive. Use the
UNC name instead of a fixed drive letter such as f:\, because a fixed drive letter may not be
the same for the people viewing the annotation.
Use this method of linking to a file only if you expect the file to be viewed by a small number
of people (such as 2 or 3). If you expect more people to view the file, it is better to make it
accessible from a Web site.
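As a rough illustration of the link format above, this hypothetical Python helper assembles a file: link from a UNC name and individual path components; the helper and its argument names are invented for this sketch and are not part of the product.

```python
def make_file_link(unc_name, *path_parts):
    r"""Build a link of the form file:\\uncdrivename\docs\expenses.xls.
    A UNC name stays valid for every viewer, unlike a mapped drive
    letter such as f:\. Hypothetical helper for illustration only."""
    if any("\\" in part or "/" in part for part in path_parts):
        raise ValueError("pass each path component separately")
    # "file:" plus the double backslash that introduces a UNC name.
    return "file:\\\\" + unc_name + "".join("\\" + part for part in path_parts)
```

For example, `make_file_link("bigserver", "docs", "expenses.xls")` produces `file:\\bigserver\docs\expenses.xls`.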
Delete Commentary
Administrators can delete commentary in a Contributor application using date and time, character
string, and e.List item name filters. See "Deleting Commentary" (p. 289) for more information. You
can also automate the deletion of annotations by using the Delete Commentary macro (p. 214).
Users can also delete annotations. See the Contributor Browser User Guide for more information.
Deleting Commentary
You can delete all commentary in a Contributor application using date and time, character string
and e.List item name filters.
After you specify the filters and the e.List items for the annotations to be deleted, and click Delete
commentary, a COMMENTARY_TIDY job is run. The deletion is not seen by Web clients until a
reconcile is run. This enables the commentary to be deleted while the Contributor application is
online. The deletion may also be run as a macro (p. 214).
Annotations, including saved annotations, can be deleted by the creator until the annotation is submitted.
Steps
1. In the Production branch of the application, click Maintenance, Delete Commentary.
● Apply date filter. Select this option to delete commentary by date. If you select it, you must
select a date from the Delete commentaries before date box. It defaults to today's date at
midnight, local time.
● Apply annotation content filter. Select this option to delete commentary by content. For
example, if you want to delete annotations containing the word banana, any annotations
containing this word are deleted, provided that they also conform to the other filters.
3. Click the e.List items tab and select the e.List items from which the annotations will be deleted.
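To make the combined effect of the filters concrete, here is a small Python model of the deletion test: an item is deleted only if it passes every filter that was specified. The function and the dictionary keys ("created", "text", "elist_item") are assumptions for illustration, not the product's schema.

```python
from datetime import datetime

def matches_delete_filters(item, before=None, contains=None, elist_items=None):
    """Return True if a commentary item passes every specified filter
    and would therefore be deleted. Illustrative model only."""
    if before is not None and not item["created"] < before:
        return False                      # date filter: only older items
    if contains is not None and contains not in item["text"]:
        return False                      # content filter: must contain text
    if elist_items is not None and item["elist_item"] not in elist_items:
        return False                      # e.List item filter
    return True
```

Note that, as in the banana example above, an annotation matching the content filter is still kept unless it also conforms to the other filters.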
Attach Documents
The user can attach many types of files to a cell, cube, or model to help support their planning
process. The types of files that can be attached are configured by the administrator in the Contributor
Administration Console. The attachments are stored in a Planning Application database.
The following default file types are allowed:
● Microsoft Word (.doc)
The user can add or remove any required file type from the defaults provided. Executable files (.exe)
are not included in the default list for security reasons, but can be added by the
administrator.
Steps
1. In the Administration tree, click System Settings, and Web Client Settings.
2. In the Attached Documents area, click Limit Document Size if you want to restrict the
size of the attached files.
3. Enter an amount (in megabytes) for the Maximum Document Size (MBs).
4. In Allowable Attachment Types, either remove a selected file type by clicking Remove,
or click Add to add a new allowable attachment type.
5. At the end of the list of file types, enter a label name and the file type extension. Make sure
that the file type extension is preceded by an asterisk (*).
Note: Changes made to the Attached Documents settings take effect almost immediately and
without the need to perform a Go To Production.
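The type and size checks configured above can be modelled as follows. This sketch assumes *.ext-style patterns and a megabyte limit; the function, the pattern list, and the limit values in the test are invented for illustration, not the product's defaults.

```python
import fnmatch

def can_attach(filename, size_bytes, allowed_types, max_mb):
    """Illustrative check of the Web Client Settings above: the file must
    match one of the allowable *.ext patterns and fit the size limit."""
    if size_bytes > max_mb * 1024 * 1024:
        return False                      # exceeds Maximum Document Size
    name = filename.lower()               # extensions compared case-insensitively
    return any(fnmatch.fnmatch(name, pattern) for pattern in allowed_types)
```

Under these assumptions, an .exe file is rejected simply because no *.exe pattern is in the allowed list, matching the default behavior described above.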
Attaching a Document
The user can attach a document to a cell, tab, or model in the Contributor Web application.
Note: The user can also do this in Contributor for Excel.
Steps
1. In the Contributor workflow window, the user clicks on an available e.List item that they want
to open.
2. In the Contributor grid, the user either clicks the Attached Documents button, or right-clicks
in a cell, selects cell, tab, or model, and clicks Add. The Attach a new document dialog
box appears.
3. In the Source file location box, the user enters the location and file name, or clicks the browse
button and browses to the file.
4. The user enters comments into the Comments box. This box has a maximum limit of 50 characters.
A red triangle appears in the corner of the cell to which the document is attached. A copy of the
document is attached to the application, not the original file. This is similar to attaching a file to
an email and is not meant to perform as a document management system.
Note: Attached documents are not available when working offline and the user cannot attach a
document while working offline. However, it is possible to see if a document is attached to a cell
while offline.
Steps
1. In the Contributor grid, the user clicks the Browse Commentary button, or right-clicks a cell
and selects Browse Commentary. An icon also appears in the Contributor workflow window
notifying the user that one or more documents are attached to an e.List item. However,
attached documents cannot be opened from the workflow window.
2. In the Commentary Browser dialog box, the user selects the commentary item to view
and then clicks View Document to open the file. The user can filter the items to show
only user annotations or only attached documents, and can also choose whether to view
Commentary for the current page in the grid or Commentary for all pages.
3. To edit commentary, the user selects the commentary item and clicks Edit Document. The item
opens, allowing the user to make changes and save the new version along with the application.
The user is prompted to update the repository if changes were made to the file.
4. To delete commentary, the user selects the check box for the item to delete and clicks Delete.
Note: Only the owner or the Contributor administrator can delete an attached document.
5. The user can print an annotation by selecting the file and clicking Print. To print a document,
open it and print from the associated viewer.
Copy Commentary
Attached documents and user annotations that are linked to a plan are grouped together to form
Commentary. The user can copy commentary between Contributor cubes and applications using
administration, system, and local links.
You can create a link that moves data from multiple sources. If the multiple sources contain
commentary, then after the link is run, the target contains all the commentary available from the
sources.
For more information, see "Managing Data" (p. 141).
Note: The user can only copy Commentary using links that contain data.
Note: Identical documents from different sources are treated as separate documents.
Chapter 16: Previewing the Production Workflow
The Preview window gives you a preview of the production e.List and workflow state, and allows
you to view properties of the e.List items. The icons indicate the current status of the data in the
production application. Clicking Refresh enables you to keep track of the status of the icons. For
example, when you have put a development application into production, you can see when e.List
items have been reconciled; see "Reconciliation" (p. 52). In this case, the icons change from
Not started, out of date to Not started, reconciled.
See "Workflow State Definition" (p. 297) for more information.
To preview the data in the production application in the Preview window, expand the Preview tree,
right-click the e.List item and click Preview.
When you preview an e.List item, it behaves as it does in the Web. For example, you
can right-click in the grid and click Annotate cell, Add, and an annotations window appears. You
can type in the annotations window, and when you close the window, you can view the annotation
by moving your mouse over the red square. However, after you have closed the Preview, these
changes are not saved.
Note: Any action you perform in Preview has no bearing on the Production application.
● Owners
● Editors
● Reviewers
● Rights
e.List item state: The workflow state. See "Workflow State Definition" (p. 297) for more information.
Date state changed: The date and time that the workflow state changed, in the following format: yyyy-mm-dd hh:mm:ss.
User who last changed the state: The name of the user who last changed the state.
Number of children: The number of items in the next level below this e.List item.
Number of locked children: The number of child items that are locked, indicating that data was submitted.
Number of saved children: The number of child items where work has started and been saved.
Owner name: The name of an owner of the e.List item. An owner is a user assigned to an e.List item with greater than View rights.
Current Owner: This is checked if the owner is the current owner of the e.List item. The current owner is the last user to have opened an e.List item for editing.
● Annotator: the name of the last or current annotator, and the time they started creating an annotation.
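The three child-count properties can be illustrated with a short Python sketch. The state strings come from the workflow states described in this chapter; counting locked items as saved is an assumption (a submitted item has necessarily been saved).

```python
def child_counts(child_states):
    """Compute the Number of children, locked children, and saved children
    properties from a list of child workflow states. Illustrative only;
    counting "locked" items as saved is an assumption."""
    return {
        "children": len(child_states),
        "locked": sum(s == "locked" for s in child_states),
        "saved": sum(s in ("work in progress", "locked") for s in child_states),
    }
```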
For information about editing while offline, see "Working Offline" (p. 86).
The Data Reviewed indicator shows whether the e.List item was reviewed, and the Data Viewed
indicator shows whether the data was viewed.
User, Group, Role: The user, group, or role assigned to the e.List item. More than one user, group, or role can be assigned to an e.List item.
Rights: The level of rights that a user has to the e.List item.
Inherit from: If the rights have been directly assigned to the user, this cell is blank. If the rights have been inherited, this cell indicates the name of the e.List item from which the rights have been inherited.
Not started: For a contribution e.List item, the item has not been edited and saved (it may have been edited, but the changes not saved). For a review e.List item, none of the items that make up the e.List item have been edited and saved.
Work in progress: For a contribution e.List item, the item was edited and saved, but not submitted. For a review e.List item, all items that make up the e.List item have been edited and saved, and at least one item has not yet been submitted.
Locked: For a contribution e.List item, the item was submitted and can no longer be edited. For a review e.List item, the item was submitted.
Has a current editor/annotator: The e.List item was opened for editing or annotating. An edit session is ended by the user closing the grid, or by submitting the e.List item.
Has a current editor/annotator and is out of date: There is a current editor or annotator, and the data is out of date.
These additional states only appear to the user in the front window of the Contributor application,
not in the grid.
Each icon represents the state of the e.List item. The lowest level e.List items (for example, labeled
A1 Profit Center) are contribution e.List items, that is items that you enter data into. The higher
level e.List items are review e.List items, and the state of a review e.List item is affected by the states
of the contribution e.List items that feed into it.
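As a deliberately simplified sketch of that dependency, the following Python function derives a review item's state from the states of the contribution items that feed into it. The product's actual aggregation rules (see the workflow state definitions) are richer, so treat this as illustrative only.

```python
def review_state(contribution_states):
    """Simplified sketch: derive a review e.List item's state from the
    states of the contribution items that feed into it. Illustrative
    only; the product's actual rules are richer."""
    if all(s == "locked" for s in contribution_states):
        return "locked"           # every contributing item was submitted
    if all(s == "not started" for s in contribution_states):
        return "not started"      # nothing edited and saved yet
    return "work in progress"     # partially saved or submitted
```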
Other Icons
Each of the Workflow state icons can have additional indicators that tell you whether the e.List
item is being edited, is out of date, or both. They are a grid, a box or both a grid and a box.
The following show examples of these indicators, but note that they can apply to all workflow
states:
● Has a current editor/annotator. The e.List item was opened for editing/annotating. An edit
session is ended by the user closing the grid, or by submitting the e.List item.
● Is out of date. This indicates that the e.List item needs updating.
Chapter 17: Using Cognos 8 Planning - Contributor
With Other Cognos Products
Client and admin extensions help Cognos 8 Planning - Contributor to work with other Cognos
products (p. 302).
You can analyze and report on published Contributor data in Cognos 8 Business Intelligence using
the Generate Framework Manager Model admin extension (p. 308). Additionally, the Planning Data
Service provides access to unpublished Contributor data for Cognos 8 Business Intelligence users.
You can use Excel with Contributor, benefiting from the formatting capabilities of Excel (p. 313).
You can take actuals from an Enterprise Resource Planning (ERP) system and combine them with
planning information to perform comparative analysis using Cognos Performance Applications
(p. 315).
The following diagram illustrates the various integration points between Cognos 8 Planning and
other Cognos products.
[Diagram: integration points between Cognos 8 Planning - Contributor and Cognos 8 Planning - Analyst and the other Cognos products, including Cognos 8 Business Intelligence, Cognos 8 Metrics Manager, and Transformer 7.4, by way of published data.]
All extensions are installed as part of the main Planning installation. For more information, see the
Cognos 8 Planning Installation Guide.
Note: Ensure that both the Administration Console computer and the client computers meet all of
the software and hardware requirements before configuring and running Contributor client and
administration extensions.
For a current list of the software environments supported by Cognos products, see the Cognos
Global Customer Services Web site (http://support.cognos.com).
Client Extensions
Web client users can use client extensions to take advantage of the functionality of Excel (p. 313).
Client extensions are activated through the menu bar in the Contributor grid.
You can control when an extension is available for Web client users by enabling and disabling it
in Contributor Administration Console.
Tip: On the Configure Extensions tab, right-click the extension and click Enable or Disable.
Steps
1. In the Contributor Administration Console application tree, click Production, Extensions, Client
Extensions, and then click the Extension Groups tab.
3. Click OK.
The name of the new extension group appears in the Extension Group list.
Tips:
● You can rename an extension group by clicking Edit in the Extension Group dialog box.
● You can reorder extension groups by using the arrow buttons on the Extension Group tab.
Tip: You can reset a client extension back to its original settings by clicking the Reset button. This
resets the configuration back to its original, unconfigured state and all settings and data are lost.
Steps
1. In the Contributor Administration Console application tree, click Production, Extensions, Client
Extensions, and then click the Configure Extensions tab.
3. In the Display Name box, type a name for the extension or leave the default name.
4. If the Activation Mode box shows that Manual activation mode is selected, in the Extension
Group box, click the appropriate Extension Group.
5. In the Extension Properties-Users dialog box, click All Users or Selected Users.
6. If you clicked Selected Users, select the check box next to each user who should have access.
7. If you are configuring the Export for Excel extension, in the Location on client for saved
selections box, type the full path of the saved selections folder.
9. Click Finish.
Admin Extensions
Administrators use admin extensions to generate Framework Manager models, Transformer Models,
and Cognos PowerCubes from Contributor applications. This enables you to report on Contributor
data in Cognos 8 studios, and view data in PowerPlay Series 7.
Step
● In the Contributor Administration Console application tree, click Production, Extensions,
Admin Extensions, select the extension, and click Run.
queries that access more than one cube. However, this may result in large numbers of packages in
Cognos Connection which could be difficult to manage.
Note: Cross tab reports in any of the Business Intelligence studios do not support text or date-based
measures, including annotations, if configured for display. If a text or date-based measure is selected,
it appears as "--" in the report.
● Other than the e.List, if no dimensions have defined formats, the first dimension is used.
● If more than one dimension has defined formats, the dimension with the lowest-priority
calculations is used.
The default measures dimension can be overridden when publishing, by selecting the Dimension in
the Cubes screen. If you republish the data and change the dimension at a later date, be aware that
this may break some saved reports.
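The selection rules above can be sketched as follows. The dictionary fields are invented for illustration, the e.List is assumed to have been excluded before the call, and "lowest priority" is interpreted here as the smallest priority number, which is an assumption.

```python
def default_measures_dimension(dimensions):
    """Sketch of the default measures dimension rules above. Each entry
    is a dict such as {"name": ..., "has_formats": ..., "calc_priority": ...};
    the field names and smaller-number-wins priority are assumptions."""
    formatted = [d for d in dimensions if d["has_formats"]]
    if not formatted:
        return dimensions[0]["name"]          # first dimension is used
    # one or more formatted dimensions: lowest-priority calculations win
    return min(formatted, key=lambda d: d["calc_priority"])["name"]
```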
Steps
1. From the Windows Start menu, click Programs, Cognos 8, Framework Manager.
3. In the New Project page, specify a name and location for the project.
4. Optionally, you can add the new project to a source control repository by doing the following:
● Click Repository, and then select the Add to repository check box.
● In the Location in Repository box, browse to a location to add the project and then click
Select.
6. In the Select Language page, click the design language for the project.
You cannot change the language after you click OK, but you can add other languages.
8. If the data source connection you want is not listed, you must create it (p. 307).
If the Planning Data Service is configured, a data source named Cognos Planning - Contributor
is available. This gives you access to cube (OLAP) data only. If you want to access table data,
you must create a data source that points to these tables.
10. Add security to the package if required. See the Cognos 8 Framework Manager User Guide for
more information.
Note: You save the project file (.cpf) and all related XML files in a single folder. When you
save a project with a different name or format, ensure that you save the project in a separate
folder.
12. When prompted to open the Publish Wizard, click Yes. This enables you to publish the new
package to Cognos Connection.
14. To enable model versioning when publishing to the Cognos 8 Content Store, select the Enable
model versioning check box.
15. In the Number of model versions to retain box, select the number of model versions of the
package to retain.
Tip: To delete all but the most recently published version on the server, select the Delete all
previous model versions check box.
16. If you want to externalize query subjects, select the Generate the files for externalized query
subjects check box.
17. By default, the package is verified for errors before it is published. If you do not want to verify
your model prior to publishing, clear the Verify the package before publishing check box.
Note: You can run the Framework Manager Metadata wizard repeatedly to import multiple
cubes into the same Framework Manager project. For more information about creating
Framework Manager projects, see the Framework Manager User Guide.
Steps
1. In Framework Manager, click the Run Metadata Wizard command from the Action menu.
4. In the name and description page, type a unique name for the data source and, if you want, a
description and screen tip. Select the folder where you want to save it.
6. Under External namespace, select the namespace set up previously in Cognos Configuration.
Tip: To test whether parameters are correct, click Test the connection. If prompted, type a user
ID and password or select a signon, and click OK.
7. Click Finish.
The data source appears in the Directory tool in the portal or in the list of data sources in the
Metadata Wizard in Framework Manager.
Tip: To test a data source connection, right-click the data source in the Data Sources folder and
click Test Data Source.
Base Model
The base model contains the definitions of objects required to access Cognos Planning data published
in a table-only layout. The objects include table definitions (query subjects), dimension information,
security filters, and model query subjects.
User Model
The user model provides a buffer to contain the modifications made by the Framework Manager
modeler. When modifications are made to the Contributor application, or to the Analyst model,
the base model can be updated using Generate Framework Manager Model. Then, the user model
can be synchronized using the synchronize option in Framework Manager.
The synchronization process makes all the modifications to the base model appear in the user model.
This is done by synchronizing the user model with the base model and by reapplying any changes
made to the user model by the modeler to the synchronized user model.
The package published to Cognos Connection is published from the User Model.
Note: It is recommended that you install the Administration components (Analyst and Contributor
Administration Console) on the same machine as the Planning Server components.
❑ Ensure that you can access Cognos Connection
For example, in the address bar of your Web browser, type http://computer_name/cognos8/.
❑ Ensure that you can publish the Cognos Planning data in a table-only layout.
❑ Configure the Publish datastore to use the logon and password of the datastore server, not
Trusted Connection.
● publishes packages
Folders
The Framework Manager model contains a series of folders, each containing objects of the same
type. These folders are created in two top-level folders: Physical View and Business View. The
Physical View folder contains all the database query subjects, and the Business View folder contains
all the dimension and star schema objects.
● annotation tables
Joins
Joins are created between related tables, such as the cube export data tables and derived hierarchy
tables.
Column Usage
The usage attribute of the query items contained in the database query subjects is set to the correct
value: fact, identifier, or attribute.
Security Filters
If the models are generated from a Contributor application, security filters are created for each cube
export data query subject. The filters grant users access to the same e.List items as in the Contributor
application. A security filter is created for every user on every cube export data query subject.
If the models are generated from Analyst, no security filters are created.
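Because one filter is created per user per cube export data query subject, the number of generated filters grows multiplicatively with users and cubes. A trivial sketch of the count (the function is illustrative, not a product API):

```python
def security_filter_count(num_users, num_cube_export_tables, from_analyst=False):
    """One security filter per user per cube export data query subject;
    models generated from Analyst get none. Illustrative estimate only."""
    if from_analyst:
        return 0
    return num_users * num_cube_export_tables
```

For example, 50 users and 4 cube export tables imply 200 filters, which is worth keeping in mind for large applications.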
Regular Dimensions
For each derived hierarchy and complete hierarchy query subject, a regular dimension object is
created and saved to the Derived Dimensions and Complete Dimensions folders respectively. These
folders are located in the Business View folder.
Measure Dimensions
For each cube export table, a measure dimension object is created. It is stored in a folder that has
the same name as the cube. These folders are located in the Business View folder.
Data Source
Data source refers to the data source created in the Cognos Connection Portal.
Package
A package contains all the objects in the Framework Manager model. The administrator of the
package is the user generating the model.
In Contributor, the users who have access to the package are the users of the Contributor application.
In Analyst, the only user to have access to the package is the user generating the model.
Steps
1. Click Extensions, Admin Extensions, and double-click Generate Framework Manager Model.
● Package name
The name of the package to be published to the portal. The Package name must not already
exist on the portal.
● Package location
Where the package is stored in Cognos Connection.
● Package screentip
● Package description
5. Specify the type of data source query subjects to include in the model.
Steps
1. In the appropriate Contributor application, click Extensions, Admin Extensions, and double-click
Generate Framework Manager Model.
3. Enter the Project Location where the model you want to update is stored.
6. From the Project menu, click Synchronize and then click Run the script from the starting point.
7, or publish the PowerCube to a package in Cognos Connection and view its content using any
of the Cognos 8 studios.
Because the PowerCube is based on published data, the Generate Transformer Model extension
generates a single Transformer model for each Contributor model. The extension automatically
extracts the necessary information about your Contributor model from the publish tables and the
application model, and then creates the equivalent model in Transformer. After the Transformer
model is created, you can modify it using the Transformer interface and optionally, publish the
cubes to a Cognos Portal. Generate Transformer Model uses the last publish data source.
With the Generate Transformer Model, you can:
● generate a Transformer model
Before you can use the Generate Transformer Model Wizard, you must configure your environment.
Do the following:
● If you create PowerCubes, ensure that you can access Cognos Connection.
For example, in the address bar of your Web browser, type http://computer_name/cognos8/.
● To create PowerCubes, you must have Transformer installed and configured
on the computer where the Planning Server components are installed.
Security Considerations
The Transformer model and PowerCubes generated can only be secured against a Series 7 Namespace.
The name of the namespace in Cognos Configuration must match the name of the Series 7 namespace.
We recommend that the user class for the administrator creating the Transformer model has the
property: Members can view all users and/or user classes in the User Class permissions tab. This
property is set in the administration console of Access Manager Series 7.
Automation
This extension can be automated. It must first be configured. For more information, see "Execute
an Admin Extension " (p. 215).
Steps
1. In the Contributor Administration Console application tree, click Production, Extensions,
Admin Extensions and double-click the Generate Transformer Model extension.
3. Specify the locations for the Transformer model and the PowerCubes.
This location can contain only one model. The paths must be located on the Planning server,
and can be UNC paths.
5. You can choose to include security information. To do this you must specify a Series 7
Namespace.
7. If you selected Create PowerCube, you can choose to create a Planning Package, enabling you
to view its content using any of the Cognos 8 studios.
8. Click Finish.
● Resize the worksheet so you can see more or less data on a page.
● Save data as an Excel workbook and work locally without a connection to the network.
To use Contributor for Excel, administrators must create a Contributor Web site, and client users
must install Contributor for Excel on their computers. For more information about installation,
see Contributor for Excel Installation Guide.
e.List slice). A two-dimensional window footprint is the number of rows in a cube multiplied by
the number of columns that can currently be viewed on a worksheet.
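The footprint calculation defined above can be written directly:

```python
def window_footprint(rows_in_cube, visible_columns):
    """Two-dimensional window footprint: rows in the cube multiplied by
    the columns currently visible on the worksheet."""
    return rows_in_cube * visible_columns
```

For example, a cube with 500 rows showing 12 columns has a footprint of 6000 cells; reducing either factor shrinks the footprint proportionally.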
Performance is affected by the size and complexity of Contributor e.List models. Larger and more
complicated models take longer to download than smaller models. Contributor for Excel does not
impose any new limits on size and complexity.
The performance of Contributor for Excel is also affected by cubes containing large numbers of
Contributor cells visible on worksheets at one time. The following actions are affected:
● breakback
● entering a value
As a result, you may want to use the most relevant data and not all possible data. There are several
ways to limit the two-dimensional window footprints of cubes. You can design compact,
multidimensional cubes. If the model requires a long D-List, consider using access tables to send
only the items needed to different e.List items. Finally, consider using cut-down models as another
way of restricting portions of long D-Lists to some e.List items.
● Permit users to decide their own level of granularity and build their own incoming formulas.
Print to Excel
Contributor Web client users can print data using the print formatting options available from Excel.
Print to Excel is the default print functionality for the Web client; its Activation
Mode is set to custom. If the Web client computer does not have Excel installed, the standard print
capability is provided.
To configure the Print to Excel extension, see "Configure Client Extensions" (p. 303).
● return completed planning data to the data warehouse using an ETL tool such as Data Manager
for comparative analysis
● monitor live or published planning data during the planning cycle against current operational
data in the Performance Applications warehouse
The data warehouse extracts, and changes that occur during the planning cycle, are managed using
the Import from IQD wizard. Monitoring is done directly against Contributor data using the
appropriate extensions.
Steps
Note: If you create a D-List using the Import from IQD wizard, you should not add any items
manually. If you do add items manually, these items will be removed every time you refresh
the D-List.
After planning models are designed and sourcing is identified, you can integrate the actuals
information with the planning information using either the mapping table that is generated
during the IQD import or, if mapping tables are not required, a Cognos package as a source
to populate D-Lists in Analyst.
For more information about financial planning with Cognos performance applications and Cognos
8 Planning, see the Analyst User Guide.
Chapter 18: Example of Using Cognos 8 Planning
with Other Cognos Products
Cognos 8 Planning integrates with all other Cognos 8 Business Intelligence products. For example,
you can create reports on planning data and you can create macros with administration links that
are triggered by events in planning data.
The example in this section shows you some of the ways that Cognos 8 Planning works with Cognos
8 Business Intelligence. It demonstrates just a few of the many ways that you can view and use your
planning data.
The Central Europe region of the Great Outdoors Corporation plans to increase sales in its new
stores by holding promotions. The regional manager, Sébastien Pascal, wants a report delivered to
him the first day of every month that shows the current Central Europe promotions plans compared
to projected costs for the promotions and average monthly store revenue.
To complete this task, you need to create a report on the contributions for Central Europe and use
that report in an event that delivers a scheduled news item to Sébastien Pascal. You require Cognos
8 Business Intelligence products: Framework Manager, Report Studio, and Event Studio.
The example uses the Great Outdoors New Stores sample available on the Cognos Global Customer
Service Web site http://support.cognos.com.
The items created in this example, including the report, event, and PowerCube are included with
the sample download for your reference.
Tip: Search the Global Customer Support Web Site for the document type Utility.
Tip: The default location for the samples folder is C:\Program Files\cognos\c8\samples.
Tip: On the Configuration tab, click Content Administration and select New Import .
2. Complete the Import wizard to import the package and data source connection.
3. On the Configuration tab, click Data Source Connections, select the properties for the new
data source connection, new_stores_power_cube, and update the location of the PowerCube
on the Connection tab to <install location>\samples\Planning\Contributor\en\Data\
Data_go_new_stores_contributor\store_cost.mdc.
Tip: Click Tools, Refresh Console after the deployment to display the application, administration
link, and macro.
3. If you saved the store_cost.mdc PowerCube to a location other than the default location, edit
the Administration link data source and target application to map to the data source,
new_stores_power_cube, and the imported planning application, go_new_stores_contributor.
● "Add Applications and Other Objects to a Job Server Cluster" (p. 55)
Steps
1. From the Content Administration page on the Configuration tab in Cognos Administration,
click Planning and then click Macros.
2. Click Run with options for the Import Average Monthly Revenue macro, and select to
run now.
Tip: You can view the progress of the macro in the Monitoring Console on the Macros tab.
Steps
1. Publish the go_new_stores_contributor application using Table-only Layout publish.
Include all cubes and e.List items in the publish and configure a publish datastore. Name the
datastore go_new_stores_table, and add it to the job server cluster.
Tip: Clear Prefix column names with data type on the Options tab.
You can view the progress of the publish in the Monitoring Console on the Job Server Clusters
tab.
2. Run the Generate Framework Manager Model admin extension. Name the package
new_stores_FM_model and store the Framework Manager Model in <install location>\samples\
Planning\Contributor\en\Data\Data_go_new_stores_contributor.
Select all cubes and the data source query subjects Unformatted lists and Complete hierarchy
lists for the model.
3. In Framework Manager, open the model created by the Framework Manager extension.
4. Create a filter on Measure Dimension Promotions Plan to exclude the ALL RETAILERS D-List
item.
Tip: Double-click on the Measure Dimension Promotions Plan in the Business View and click
the Filters tab. Use Retailer Type and the not like operator.
Tip: Planning levels are numbered in Framework Manager. To make them easier to use in
Report Studio, rename the levels to reflect the content.
6. Select the new_stores_FM_model package and publish the package to make it available in
Cognos Connection.
Create a Report
You are now able to create a report on Planning data to compare the cost of promotions against
the planned promotion value. This report will use a crosstab report to compare information that
uses one or more criteria and a chart to reveal trends and relationships.
Your final report for budget version 1 will look like this.
2. Create a table (2 columns by 4 rows) to be used as the template for the report.
Tip: Use the tool box to drag a table into the report area.
3. Using a text item, create headings for Budget Version 1 (Central Europe) and Budget Version
2 (Central Europe).
Tip: Drag a text item to the first and the third rows of the first column.
4. Drag a crosstab to the cell in the second row of the first column.
● Franchise/Corporate
● Month of Promotion
● Retailer Type
● Promotion Costs
8. In the Query Explorer, add Average Monthly Revenue to Data Items from New Store Plan and
use the Budget version 1 dimension in the Slicer.
9. Copy and paste the crosstab into the cell in the fourth row of the first column.
Tip: Select the crosstab in the properties pane to create the copy.
10. Create a second query, a copy of the first, and apply it to the second crosstab. Change the Slicer
so that Query 2 applies to Budget version 2.
2. From the Data Items Insertable Objects tab for Query 1, drag the following data items into
the Category (x-axis):
● Central Europe
● Franchise/Corporate
● Month of Promotion
3. Drag the following data items from Query 1 into the Series:
● Promotion Costs
6. Change the Line Styles to dotted red line and rename Statistical Maximum to Average Monthly
Revenue.
7. Copy and paste the combination chart into the cell in the fourth row of the second column and
apply Query 2 to the second chart.
Tip: Create a signon to the data source connection so that users don’t have to enter database
credentials when they run reports or reports are run by an event. For more information, see the
section Create or Modify a Data Source Signon in the Cognos 8 Administration and Security Guide.
Steps
1. Open Event Studio using the package named new_stores_FM_model.
2. Create an agent with an event condition expression for Planned Promotion Value greater than
20,000. The agent is scheduled to run once a month; when the event condition expression
returns a result, the event is triggered.
Validate the expression, and then preview the results to check that the expression returns
a result.
3. Include the Run a report task and select the Central Europe Promotions Report.
In the Headline box, type Central Europe New Store Promotions Available.
Under Link to, click Select an entry and select the Central Europe Promotions Report.
5. Select My Folders as the News list locations. The news item will be published to this location.
6. Click Schedule the agent and select the By Month tab. Schedule the agent for the first day
of every month.
7. Save the event as New Stores Event. In Cognos Connection, click Run with options for
the New Stores Event to run the event now.
View the news item, containing Central Europe Promotions Report, in Cognos Connection on
the My Folders tab. This news item will be created the first day of every month.
Chapter 19: Upgrading Cognos 8 Planning -
Contributor
You can upgrade users, user classes, groups, libraries, and applications from previous Cognos
Planning versions.
The upgrade process involves the following tasks:
❑ Plan the upgrade.
For more information, see the Cognos 8 Planning Installation and Configuration Guide.
● If you want to upgrade only existing libraries, start Analyst and from the File menu, select
Administration, Upgrade, Existing Libraries, browse to find your existing Libs.Tab file,
and click Open.
● If you want to upgrade only existing user classes, start Analyst and from the File menu,
select Administration, Upgrade, Existing User Classes, browse to find your existing
usersclasses.Tab file and click Open.
● If you want to upgrade existing native security to Access Manager, start Analyst and from
the File menu, select Administration, Upgrade, Existing Native Users and Groups, browse
to find your existing existingusers.Tab file and click Open.
Files are converted automatically to 8.3 format, after which they can no longer be opened in
earlier versions of Analyst.
Administration Domain. This is done separately from the Contributor application upgrade.
For more information, see "Upgrade the Planning Administration Domain" (p. 330).
● You can also use the wizard to upgrade and migrate Contributor applications from one
datastore provider to another. For example, you can upgrade a 7.2 application to an 8.3
application and migrate from Oracle to SQL Server.
When Contributor is upgraded to the current version and tested, and you no longer need the old
namespace, you can use the Deployment wizard to migrate objects from the older namespace to
the Cognos 8 namespace. For more information, see (p. 168).
Steps
1. In the Contributor Administration Console, click Tools, Upgrade Planning Administration
Domain.
2. Click Next.
3. Configure the datastore server connection for the datastore server that contains the Planning
Administration Domain.
5. Enter the Datastore server name, or click the browse button to list the available servers
(SQL Server only).
● Trusted Connection: Click to use Windows authentication as the method for logging on to
the datastore. You do not have to specify a separate logon ID or password. This method is
common for SQL Server datastores and less common, but possible, for Oracle.
● Use this account: Enter the datastore account that this application will use to connect. This
box is not enabled if you use a trusted connection.
● Password: Type the password for the account. This box is not enabled if you use a trusted
connection.
● Connection Prefix: Specify to customize the connection strings for the needs of the datastore.
● Connection Suffix: Specify to customize the connection strings for the needs of the datastore.
8. Select the Planning Administration Domain that you want to upgrade, test the connection, and
click Next.
9. Click the namespace to secure the objects against and click Next.
10. If you want to upgrade existing Planning Administration Domain objects, select the Replace
objects if they exist in the current Planning Store check box.
If you do not select this option and the objects exist, they are not upgraded.
12. In the Map Planning Administration Domain Objects page, in the Target column, click the
options that you want.
You must map the job servers and job clusters that are configured in the source Planning
Administration Domain to the upgraded job server and job server clusters so that macros are
upgraded correctly.
13. If you want to choose defaults that apply to all applications, click Map All Applications and
complete the fields.
A log located in the %Temp%epUpgrade directory notifies you of any warnings that occur while
upgrading the Planning Administration Domain.
● shows you whether any users are working off-line, because off-line data cannot be upgraded
due to a new caching file that is used in the current version of Contributor
● upgrades the following client extensions: Excel Export (Export for Excel), Client Loader (Get
Data), Excel Print (Print to Excel)
● Admin extensions
Cognos Planning 7.1 or 7.2 Admin extensions are removed because substantial changes were
made to the extensions for the current version.
● audit information
History table data and job metadata are not upgraded. When the Go to Production process is
run, cut-down model information is automatically generated after upgrading.
● scripts
In Contributor 7.2, you automated Contributor functionality using scripts. This functionality
has been replaced by macros, and you cannot migrate your 7.2 scripts to macros. For more
information, see "Automating Tasks Using Macros" (p. 191).
● published data
Publish datastores are not upgraded. Prior publish datastores can be retained and are compatible
with 8.3. However, if you need to recreate your publish datastore as part of your 8.3 deployment,
Cognos recommends rebuilding it as UTF-16 to better conform to global business standards
and to ensure easier compatibility with future Cognos releases.
You cannot publish to the Contributor application container. You must publish to a separate
container. We recommend that you compare the results of publishing from earlier versions of
Contributor with publishing in the current version to ensure that the publishing is performing
as required. Any publish scripts must be re-created using the new macro functionality.
● Analyst>Contributor links
When you upgrade applications that contain Analyst>Contributor links, you must open the
link in Analyst and reselect the source and target of the link. For more information, see "Update
a Link from a Computer That Cannot Access the Original Datastore" (p. 352), and the Analyst
User Guide.
Analyst and Contributor macros that use Analyst>Contributor links will fail if you do not
update the source and target of the link.
You should not run multiple versions of Contributor on the same computer.
To upgrade an application, you must have the Planning Rights Administration capability. By default,
this capability is granted to the Planning Rights Administrators role.
Before upgrading an earlier Contributor version to the current version, we recommend that you
install the current version on a separate server and then upgrade each application.
We recommend that you back up the data stores that you intend to upgrade.
For more information, see "Security" (p. 27) and the Cognos 8 Administration and Security Guide.
Steps
1. Under Datastores, click the required datastore, right-click Applications, and click Upgrade
Application.
2. Click Add.
4. Enter the Datastore server name, or click the browse button to list the available servers
(SQL Server only).
● Trusted Connection: Click to use Windows authentication as the method for logging on to
the datastore. You do not have to specify a separate logon ID or password. This method is
common for SQL Server datastores and less common, but possible, for Oracle.
● Use this account: Enter the datastore account that this application will use to connect. This
box is not enabled if you use a trusted connection.
● Password: Type the password for the account. This box is not enabled if you use a trusted
connection.
● Connection Prefix: Specify to customize the connection strings for the needs of the datastore.
● Connection Suffix: Specify to customize the connection strings for the needs of the datastore.
7. Select the application that you want to upgrade, test the connection, and click Next.
8. Choose whether to create the datastore now and continue upgrading the application (Create
and populate datastore now) or to exit the wizard and create and populate the datastore using
scripts (Generate datastore scripts and data files). In either case, click Next.
9. If you chose to use a script, give the script to your DBA to have the datastore created.
You can later link to the new datastore using the Contributor Administration Console, which
resumes the upgrade wizard.
● Click the namespace which will secure the upgraded application and click Next.
● Click Finish.
The Upgrade Application(s) page appears with the application that you specified added to
the list of applications that can be upgraded.
● Click Upgrade.
The results of the upgrade show in the Upgrade log for application(s) page.
You must add the application to a job server or job server cluster (p. 50), run Go to Production
(p. 239), and set up the Contributor Web site to enable users to access Contributor applications
(p. 76).
Upgrade Security
If your version 7.2 Planning application was secured using Contributor native security, you can
upgrade directly to a Cognos 8 namespace.
If your Planning application or Planning Administration Domain was secured by a Series 7 namespace
that was administered by Access Manager, you can upgrade your security to a Cognos 8 namespace
using the Contributor Administration Console deployment wizard.
To upgrade your security, you must configure Cognos 8 Planning to use both the Series 7 namespace
that was originally used as well as the namespace to which you are upgrading. In the Contributor
Administration deployment wizard, you must first export the Planning application or the Planning
Administration Domain, and then import the application or domain again. During the import, you
can map the security to your new namespace.
After you have upgraded the security for all of your applications or your Planning Administration
Domain, you can remove the Series 7 namespace from your configuration.
For more information, see "Deploying the Planning Environment and Viewing the Status of
Deployments" (p. 168) and the Installation and Configuration Guide.
Chapter 20: Cognos 8 Planning - Analyst Model
Design Considerations
You can design a Cognos 8 Planning - Analyst model to be used for a Contributor application and
create links between Analyst and Contributor (p. 347).
● Update links must target the specific cubes in the main library.
● The Contributor administrator must have write access to all objects used in the Analyst model.
● If a cube consists of D-Lists that are all from the common library, it is an assumption cube.
Because of this, such assumption cubes do not appear when selecting the e.List in the
Administration Console during application creation.
You do not explicitly select the other library. This is the first library other than the template library
that is referenced when looking for dependencies on other objects. If there are references to more
than one other library, errors are reported. It may be necessary to trace dependencies in Analyst to
establish where the reference occurred.
D-Cube Restrictions
D-Cube options can cause problems in Contributor applications.
The following options are not supported in Contributor but do not stop a Contributor application
from working:
● All settings in the D-Cube, Options menu: Widths, Lines, Zeros, Break-back, and Stored Copy
● Integer break-back
This setting is ignored, which can produce different results if break-back is switched on in
Contributor
● D-Cube Sort
The following D-Cube cell options are ignored: holds, locks, protects, and annotations.
Example
In the following example, Sales is calculated as Total Units * Price and Price is weighted by Total
Units.
The weighted average in Price, Q1 is calculated correctly by Contributor if the weightings in cells
Total Units, Jan to Total Units, Mar are already calculated. In other words, Q1 must be a higher
priority calculation than Total Units or, if the priorities are equal, the time dimension must come
later in the cube than the Sales calculation dimension.
If Total Units is a higher priority than Q1, the cell Price, Q1 is calculated before the weightings
in cells Total Units, Jan to Total Units, Mar are calculated. As a result, the weighted average is
not calculated correctly.
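The calculation-order dependency can be illustrated with a small sketch in plain Python (not Cognos formula syntax; the monthly figures are invented):

```python
# Monthly detail values for the example cube (invented figures).
units = {"Jan": 100, "Feb": 200, "Mar": 100}   # Total Units detail
price = {"Jan": 10.0, "Feb": 12.0, "Mar": 8.0}

# Sales is calculated as Total Units * Price for each month.
sales = {m: units[m] * price[m] for m in units}

# When the monthly Total Units weightings are available before the Q1
# aggregation runs, the weighted average for Price, Q1 is correct:
total_units_q1 = sum(units.values())              # 400
price_q1 = sum(sales.values()) / total_units_q1   # 4200 / 400 = 10.5
```

If the Q1 aggregation ran before the monthly weightings existed, the denominator would be wrong, which is the failure the text describes.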
D-Links
There are several D-Link considerations when designing a Contributor model in Analyst.
Target areas of D-Links are automatically protected in Contributor to prevent a D-Link from
overwriting data provided by a planner. You do not have to protect D-Link target areas using access
tables.
In Contributor, update links are used in the Contributor model only if the Execute check box for
the D-Link is selected.
Special D-Links
Lookup D-Links, internal D-Links, and some break-back D-Links run automatically as relevant
data is changed on a particular tab.
For example, with lookup D-Links, if a planner changes a D-List formatted value in the lookup
target cube and presses Enter, lookup D-Links are run into the cube.
Automatically executing internal D-Links can be useful for solving problems, such as expressing
values as a percentage of a total. An internal D-Link can change its own source data so that the
same D-Link needs to be run again. In Contributor, such internal D-Links run until the source
data stops changing, up to a maximum of 100 times.
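The re-run behavior can be sketched as a fixed-point loop; this is a hypothetical illustration, and run_link is a stand-in function, not a Cognos API.

```python
# An internal D-Link runs repeatedly until its source data stops
# changing, capped at 100 runs, as described above.
def run_until_stable(values, run_link, max_runs=100):
    for _ in range(max_runs):
        new_values = run_link(values)
        if new_values == values:   # source data stopped changing
            return new_values
        values = new_values
    return values

# A link that rounds its own source is stable after one effective run:
# run_until_stable([1.234, 5.678], lambda v: [round(x, 2) for x in v])
# -> [1.23, 5.68]
```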
A D-Link that targets a subtotal to perform a break-back allocation to detail items is named a
break-back D-Link. The detail items can be writable by a planner, although normally another
D-Link supplies the weightings for these detail items. The planner can change values in these detail
items if they are not supplied by another D-Link. The break-back D-Link runs automatically when
they press Enter.
● multiply dates
All operations on mixed data types are considered invalid by the Contributor link engine with one
exception. You can add numbers to or subtract numbers from dates.
The results obtained in Contributor when performing invalid data type operations depend on the
D-Link mode. Fill puts zeros into the relevant target cells, whereas Substitute leaves the relevant
target cells alone. Add and Subtract effectively behave like Substitute, adding or subtracting zero.
In Analyst, all operations on data types are permitted. The Analyst D-Link engine simply operates
on the underlying values.
Invalid D-Links
Invalid D-Links prevent Contributor applications from being created, and also prevent
synchronization.
D-Links that use the e.List in any way other than the following are invalid:
● Link between an assumption cube, which does not have an e.List, and the target, a cube with
an e.List, where nothing is selected and it is unmatched.
● Where the e.List is present in both the source and the target, and the matching is done using
match descriptions with the default options: Case Sensitive On, Match Calculated Target Items
Off.
● D-Cube allocation tables that are not assumption cubes, which means they contain the e.List.
● Allocation tables that contain deleted dimension items but which have not had these items
removed or edited in Analyst.
Dimensions
A dimension is also referred to as a D-List in Analyst. Consider the following when designing
dimensions:
● "Dimension Order" (p. 341)
Dimension Order
As a general rule, in an Analyst model, choose dimensions in the following order:
1. Calculation D-Lists, such as P&L and balance sheet D-Lists.
2. The e.List.
We recommend that the e.List is second in the dimension order, because this affects the size of the
data blocks. The data blocks store all detail cells for each cube, together with any calculated cells
for which the calculation comes earlier in the calculation sequence than the aggregations up the
e.List. These calculations are referred to as pre-aggregation calculations. The dimension order is the
primary method for controlling the calculation sequence. As a result, the position of the e.List in
the dimension order affects the number of cells stored in the blocks, and therefore the block size.
In many cases you can choose a different dimension order without affecting the calculations, and
this can be used to minimize the block size.
Example
For example, in the cube Revenue Plan, the dimensions are
● Product Gross Margin
● Channels
● e.List
● Months
● Versions
With this order, the calculated items on the dimensions Indoor and Outdoor Products and Channels
are stored on the data blocks.
The dimensions can be reordered as follows without changing the calculation results:
● Product Gross Margin
● e.List
● Channels
● Months
● Versions
The calculated totals on the products and channels dimensions are no longer stored on the data
blocks. They are recalculated when the data is loaded on the client or during publish. In general,
the e.List is not the first dimension because there is typically one dimension of the cube for which
the calculations must be pre-aggregation. However, in many cubes there are other hierarchical
dimensions in addition to the e.List (products and channels in the example), and the order of these
can be switched without affecting the calculations.
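The effect on block size can be sketched with invented item counts; this illustrates the rule stated above, not figures measured from the product.

```python
# Hypothetical item counts for the Revenue Plan example.
detail = {"products": 10, "channels": 5, "months": 12, "versions": 2}
calc = {"products": 2, "channels": 1}  # calculated totals per hierarchy

# Ordering 1 (e.List third): product and channel totals are
# pre-aggregation, so their calculated cells are stored in the block.
cells_v1 = (detail["products"] + calc["products"]) \
    * (detail["channels"] + calc["channels"]) \
    * detail["months"] * detail["versions"]   # 12 * 6 * 12 * 2 = 1728

# Ordering 2 (e.List second): channel totals become post-aggregation
# and are recalculated on load, so only channel detail cells are stored.
cells_v2 = (detail["products"] + calc["products"]) \
    * detail["channels"] * detail["months"] * detail["versions"]
# 12 * 5 * 12 * 2 = 1440 -- a smaller block, identical results
```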
Low priority calculations are pre-aggregation and are always stored on the data blocks regardless
of dimension order. High priority calculations are post-aggregation and are never stored on the
data blocks.
Supported BiFs
When creating dimensions that are used in cubes in the Contributor application, the following BiFs
(built-in functions) are available:
● @Cumul
● @Days
● @DaysOutstanding
● @Decum
● @Delay
● @DepnAnnual
● @Deytd
● @Differ
● @Feed
● @Feedparam
● @Forecast
● @Funds
● @Grow
● @IRR
● @Lag
● @Last
● @Linavg
● @Mix
● @NPV
● @Repeat
● @Time
● @Timesum
● @TMin
● @TMax
● @TRound
● @Ytd
@Last Differences
@Last looks back along the series of data in the input row and returns the most recent non-zero
value.
In Analyst, any positive number greater than 1E-13 is non-zero. Negative numbers must be greater
than -1E-12.
In Contributor, any positive number greater than 1E-15 is non-zero. Negative numbers must be
greater than -1E-14.
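The lookup behavior can be sketched as follows; this is an illustration, not the Cognos implementation. The thresholds are the Contributor values quoted above, and treating them as cut-offs in this way is an assumption.

```python
# Look back along the series of data in the input row and return the
# most recent value that counts as non-zero under the thresholds.
def last_nonzero(row, pos_eps=1e-15, neg_eps=-1e-14):
    for value in reversed(row):
        if value > pos_eps or value < neg_eps:
            return value
    return 0.0

# last_nonzero([5.0, 3.0, 1e-16, 0.0]) -> 3.0
# (1e-16 is below the positive threshold, so it is treated as zero)
```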
@Time Restrictions
The implementation of @Time in Contributor is identical to the implementation in Analyst except
for the following restrictions:
When using Method 1, the calculation gives different results depending on whether the dimension
on which the calculation is defined comes before or after the e.List in the D-Cube's dimension
sequence. In other words, its results depend on when it is executed.
@Time is not supported in Contributor in the circumstances listed below. In all these cases the
function returns zero.
● Method 2 (date last saved) is not supported. If you use @Time(2) in a D-List, a warning appears
while you create or synchronize the application, and the result is always 0.
● Methods 9 and 15 return a result of 0 with a generic timescale, or if the switchover date is not
set.
For information about using these built in functions, see the Cognos 8 Planning - Analyst User
Guide.
Date Formats
The following date formats are supported. Using any others prevents a Contributor application
from being created and prevents synchronization:
● DD/MM/YY
● DD.MM.YY
● MM/DD/YY
● MM.DD.YY
● DD-Mon-YY
● DD Mon YY
● Day DD-Mon-YY
● YY-MM-DD
● YYYY-MM-DD
● YYMMDD
● YYYYMMDD
● DD-MM-YYYY
● MM-DD-YYYY
● DDMMYYYY
● MMDDYYYY
● DD/MM/YYYY
● MM/DD/YYYY
● DD.MM.YYYY
● MM.DD.YYYY
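A small sketch (not Cognos code) can check a value against a representative subset of the supported formats above, expressed as Python strptime patterns:

```python
from datetime import datetime

# A representative subset of the supported formats listed above.
SUPPORTED = ["%d/%m/%y", "%d.%m.%y", "%m/%d/%y", "%d-%b-%y",
             "%y-%m-%d", "%Y-%m-%d", "%Y%m%d", "%d-%m-%Y",
             "%d/%m/%Y", "%m/%d/%Y", "%d.%m.%Y"]

def _parses(text, fmt):
    try:
        datetime.strptime(text, fmt)
        return True
    except ValueError:
        return False

def is_supported_date(text):
    # True if the value matches at least one supported format.
    return any(_parses(text, fmt) for fmt in SUPPORTED)
```

Note that some values are ambiguous between formats (for example, 01/02/07 parses as both DD/MM/YY and MM/DD/YY); the product presumably resolves this by locale, which this sketch does not attempt.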
● In Contributor, the numbers are entered as shown. If the cell shows 1000 for an underlying
value of 10, and you type in 1200, the new value shows as 1200 with the underlying value now
being 12.
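The scaled-entry behavior in this bullet can be sketched as follows, assuming a display scale factor of 100 (the cell shows 1000 for an underlying value of 10); the factor and names are illustrative.

```python
SCALE = 100  # assumed display scale: shown value = underlying * 100

def underlying_from_entry(entered):
    # Typing 1200 into the scaled cell stores 12 as the underlying value.
    return entered / SCALE

def displayed_value(underlying):
    return underlying * SCALE

# displayed_value(10) -> 1000; underlying_from_entry(1200) -> 12.0
```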
Example
You can make small changes to such cubes so that the application can be created. For example,
this dimension would be opened in full:
A
B
C
...
Z
Total (A to Z) = A+B+C+...+Z
You can include additional calculated items to the hierarchy so that these items are opened instead
of the total of all detail items.
For example, you can add this extra dummy total.
Total A = +A
Then, when the check for forward-referenced weighted averages is performed, only items A and
Total A are used, reducing the memory requirements by a factor of more than 10. Such a dummy
total can be excluded from the Contributor application by setting it to no-data in an access table.
It is also good practice in such cases to keep the Analyst cube as small as possible by including only
a single-item e.List.
Analyst<>Contributor Links
You can transfer data between Analyst and Contributor using the Analyst D-Link function. All the
standard features of a D-Link are available, such as the use of A-tables, D-Cube allocations, local
allocation tables, match descriptions, and matching on codes.
These types of links are available:
● Analyst to Contributor
● Contributor to Analyst
● Contributor to Contributor
For small amounts of data, an Analyst<>Contributor link can be a quick and effective method of
transferring data. However, for large amounts of data, it is more effective to use Administration
links, see (p. 145).
Analyst<>Contributor links work in the same way as a standard Analyst D-Link. They treat
Contributor as a single large cube, which means that with large models, you can quickly run into
memory problems. We recommend that Analyst<>Contributor links be used only for ad-hoc transfers
of small amounts of data of no more than 5 to 10 e.List items.
You can avoid memory problems for links that target an entire e.List in Contributor by using the
@SliceUpdate macro. This macro processes the link in slices of the e.List, making it a much more
scalable solution.
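The slicing idea behind @SliceUpdate can be sketched like this; run_dlink is a hypothetical stand-in, not a Cognos API.

```python
# Run the link once per small slice of e.List items instead of
# loading the whole e.List as one block.
def run_in_slices(elist_items, run_dlink, slice_size=5):
    for start in range(0, len(elist_items), slice_size):
        run_dlink(elist_items[start:start + slice_size])

# With 12 e.List items and a slice size of 5, the link runs three
# times: over items 0-4, 5-9, and 10-11.
```

Because each run touches only a handful of e.List items, peak memory is bounded by the slice size rather than by the size of the whole model.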
Most D-Links that have Contributor as a source or target behave the same as standard Analyst
D-Links. The few exceptions are as follows:
● Only cubes that contain the e.List are available as a source or target for Analyst<>Contributor
links.
This includes totals on the e.List dimension as well as any total in other D-Lists.
● Match descriptions in Analyst D-Links to or from Contributor treat the pipe symbol as a blank.
The pipe symbol is used in Analyst as a line-feed for column headers. It is stripped out when
you create a Contributor application from an Analyst model.
Otherwise, most D-Link types are permitted. You can use Match Descriptions, local allocation
tables, A-tables, and D-Cube allocations. You can cut subcolumns, so that you can match on codes.
You can run accumulation links both ways, but lookup links run from Contributor to Analyst only.
If you use a saved allocation table and rename D-List items in the Contributor application when
using Contributor as a source or target in a D-Link, the allocation table must be manually updated
for the link to work.
Analyst users who do not have the Contributor Administration Console installed are not able to
run Analyst<>Contributor D-Links.
When you install the Client tools onto a workstation, they are installed only for the user doing
the installation.
To run a Contributor<>Analyst link, users must have Analyst and the Contributor Administration
Console installed. They must also have rights to Analyst and the appropriate Contributor
applications.
In addition, organizations may restrict access to the database or the Web server by IP address,
which limits who can run these D-Links.
D-Links from ASCII or ODBC sources directly into Contributor are not allowed. Use Contributor
Import instead.
Steps
1. In the Analyst D-Link editor, choose Contributor Data as the source or target.
348 Contributor
Chapter 20: Cognos 8 Planning - Analyst Model Design Considerations
Tip: You may need to click the Refresh button to the right of the Name list to view the list of
available Contributor applications.
6. Pair dimensions against the target (or source) cube as you would for a standard D-Link.
Analyst>Contributor D-Links
These links can target either the production or development version of Contributor. If they target
the development version, the data appears on the Contributor screens only after the Go to Production
process is completed.
Important: If targeting the production application, the link changes the data, even if the user has
submitted data.
When you run an Analyst>Contributor link that targets a development application, the data is read
out of Analyst when you run the D-Link. When you run the Go to Production process in the
Contributor Administration Console, or through Automation, the prepared data is written directly
to the import queue in the data store for the Contributor application as a prepared data block,
e.List item by e.List item.
There may be a delay between the Go to Production process and the data being reconciled in the
Web client. If, in the meantime, a planner edits one of the cells targeted by the link, that cell is
overwritten during reconciliation. This behavior is very similar to the reconciliation that takes place
when you import data into Contributor from text files or using DTS.
When you run an Analyst to Contributor link that targets the production application, the data is
read out of Analyst when you run the D-Link. An automatic activate process is run that applies the
data to a cube. If running the link using macros, you must run the @DLinkActivateQueue macro.
Contributor>Analyst Links
When you run a Contributor>Analyst link, the following occurs:
● A snapshot is taken of the production version of the Contributor Application.
To ensure a consistent read if you are using the @SliceUpdate macro, take the Contributor
application offline, or use the @DLinkExecuteList macro.
● A Contributor session is loaded and the entire data block is loaded for each e.List item.
If the link is set up for more than one e.List item, it is equivalent to loading a multi-e.List item
view, which is very memory intensive.
● The data is written directly to the Analyst cube data file (H2D file).
Contributor>Contributor links
These links go from the production version of a Contributor source to either the development or
production version of the Contributor target.
They are typically used between separate applications. If the applications are small,
Contributor>Contributor links can be fast. However, if you transfer data between larger applications
this way, you may run into memory use and scalability problems. You can avoid these issues by
using the @SliceUpdate macro. It can be more effective to use administration links
in the Contributor Administration Console, which copy data between Contributor cubes and
applications. This process is scalable and can move large volumes of data into either the development
or production version of the Contributor application.
Save As Method
This method results in a copy which refers to the original Contributor application(s) and/or Analyst
D-Cubes.
Steps
1. In Analyst, open the link.
5. Click OK.
Library Method
This method lets you select the link with or without other objects and choose either to copy or
move the link. This results in a link which refers to the original Contributor application(s) although
the source or target Analyst D-Cube (if it is not a Contributor > Contributor link) could be changed
by this method if certain reference options are chosen when copying.
Steps
1. In Analyst, from the File menu, click Library, Objects.
2. Select the link with or without other objects and move it down.
4. In the Copy dialog box, enter a new name for the link, select a target library in which to copy
the link, and select how to remap references.
5. Click OK.
● A link can be pointed only to an application where it will refer to template cubes that were
copied at the same time. You cannot copy a link into a library that already contains suitable
template cubes and then refer the link to an application based on that library.
● If you make a copy of a link using this method and copy the link and its associated objects at
the same time, then you will not be able to refer the link back to the original application. You
will have to make a new application based on the copied library and refer the link to this new
application.
● If you copy macros which refer to Contributor applications using the Library Copy wizard,
then the macros will continue to refer to the original application. You must open the copied
macros and manually edit them to refer to any new applications based on copied libraries.
Steps
1. Use the Library Copy wizard to copy the link and any related Analyst template cubes at the
same time.
3. Point the link to your new application in one of two ways:
● Open the link and then select your new Contributor application when prompted.
● From the File menu, click Library, Objects. Double-click the link to move it down, and then
right-click the link and click Change Contributor Source on D-Links.
● the use of multi-e.List item views with access tables, the sizes of which are not decreased as
much as those of single e.List item views
This is the amount of space reserved for Analyst. As a general rule, it should not be more than
half the available RAM. If you set this option too high, the Analyst process can use so much memory
that not enough is left for the Contributor process.
Update a Link from a Computer That Cannot Access the Original Datastore
If a Contributor cube is used as a source or target, and the link is opened from a computer that
cannot access the original datastore, you are prompted to reselect the connection and application
so that they point to the data store and application that hold the cube the link was built on. All
matching is then preserved. Save the link so that it will run in the future.
Multiple data sources can be used. If two applications are built from the same Analyst library, the
GUIDs match when pointing the link to the original data store.
To run a link from a workstation that does not have access to the original datastore you must
manually open the link and reselect the connection. You can also update the connection for several
links at once.
Steps
1. From the File menu, click Library, Objects. Select one or more links that you want to update
and move them to the bottom pane.
2. In the bottom pane, right-click and click Change Contributor Source on D-Links.
inserted a new product as part of a model change, you cannot import data into the new product
until the Contributor model is synchronized. You must then run Go to Production to activate the
model and data changes.
Example
You can use Contributor>Contributor links to preserve data during cube dimensional restructuring,
like when adding a dimension.
Steps
1. Take the Contributor application offline.
2. Run a link from the production version of the Contributor application to the import queue of
the development application.
3. Run Go to Production.
Appendix A: DB2 UDB Supplementary Information
This section provides an introduction to Cognos 8 Planning - Contributor for administrators
responsible for DB2 Universal Database (UDB) databases within the large enterprise, specifically
IBM DB2® Universal Database (UDB) version 8.1 for UNIX/Windows/Linux.
It assumes a familiarity with the tools provided by the database and a knowledge of security and
backup practices within the enterprise.
● Connections are made to the Cognos 8 Planning DB2 UDB database, and all SQL statements
are fully qualified as SCHEMA.TABLENAME.
For more information on installation requirements and procedures, see the Cognos Global Customer
Services Web site.
Alternatively, the application generates DDL scripts to do these things. You can then review the
scripts and execute them yourself. Additionally, Contributor needs to be able to bulk load data
using the DB2 UDB import utility, as well as carry out a non-logged delete of all data in a table.
When determining whether to generate DDL scripts, you may need to consider whether Contributor
should execute DDL against your enterprise database without review. You should also consider
whether local policy allows the Contributor security context sufficient privileges to be able to create
and remove Contributor datastores.
You may want to generate DDL scripts to comply with your own storage standards or to customize
storage clauses to take advantage of sophisticated enterprise storage. You may also want to amend
or add sizing clauses to the Contributor default DDL.
Naming Conventions
Static object names are the same across all applications and do not change during the life cycle of
the Contributor application. Examples of static objects include the applicationstate table, which
contains the Contributor application definitions, and the history table, which logs events and data
changes.
Dynamic objects, primarily the import tables and publish tables and views, are named after objects
within the Analyst model. Object names correspond to Contributor model object names with a
subsystem prefix. Examples of dynamic objects include im_cubename, the import staging table for
a cube.
During application creation, Contributor forces dynamic object names and Contributor application
datastore names to conform to the following conventions:
● only lowercase letters a to z and numeric characters are allowed
The application datastore name defaults to the name of the Analyst library that is used to create
the application. Dynamic object names are based on the Contributor object name to which they
correspond; for example, publish data for a cube is stored in et_cubename.
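As an illustrative sketch (the cube name and the normalization code are assumptions; only the lowercase-letters-and-digits rule and the im_/et_ prefixes come from this guide), the naming convention can be mimicked in a shell script:

```shell
# Sketch only: mimic the dynamic-object naming rule described in this guide.
# "SalesPlan 2007!" is a hypothetical cube name.
cube="SalesPlan 2007!"
# Keep only lowercase letters a-z and digits, as the convention requires.
normalized=$(printf '%s' "$cube" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9')
echo "import table:  im_${normalized}"
echo "publish table: et_${normalized}"
```

Running the sketch prints im_salesplan2007 and et_salesplan2007, matching the im_cubename and et_cubename patterns described above.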
Metadata
Every Contributor datastore contains the metadata subsystem. The content of the metadata tables
is critical to the functioning of the Contributor application. The metadata provides a mapping from
internal Contributor model identifiers to the external database objects.
DDL scripts may be amended to conform to local storage conventions. You must not amend database
object names within the DDL script or allow the information contained within the metadata tables
to become out of sync with the underlying database objects.
Backup
Contributor does not back up data stored in the DB2 UDB database. You must back up the
Contributor datastore using tools supplied by other vendors. We do not anticipate problems restoring
from backups that use these tools, provided that
● the backup is taken of the whole datastore application (and the CM datastore?)
● no attempt is made to restore individual tables from backups taken at different times
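For example, a full datastore backup might be taken with the standard DB2 BACKUP command (the database name PLANDB and the target path are hypothetical; adapt them to your environment and backup procedures):

```
db2 backup database PLANDB to /dbbackup
```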
Standards
All SQL is standard ANSI SQL and is executed via ADO / OLEDB.
The design of the datastore objects removes the need for complex table joins (the only place JOINs
are used is within the Reporting Views), and the few SORTs are typically on small result sets.
Data for transmission over HTTP (to and from the users entering the numbers into the model) is
compressed and stored as XML documents.
Steps
1. Set locks to default to row-level locking and try to avoid upgrading the locks to table-level
locking.
2. To prevent lock escalation, ensure that the LOCKLIST and MAXLOCKS settings are not too
small.
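The two steps above can be sketched as DB2 command line processor commands (the database name PLANDB and the LOCKLIST and MAXLOCKS values are illustrative assumptions, not recommended sizes):

```
db2 update db cfg for PLANDB using LOCKLIST 4096
db2 update db cfg for PLANDB using MAXLOCKS 60
```

Row-level locking is the DB2 UDB default; escalation to table-level locks occurs when the lock list fills, which is why these two parameters matter.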
Notes: Currently, DB2 UDB does not use the buffer pool to manage LOBs. The tablespaces chosen
for tables containing LOBs should be placed in file containers that are buffered by the operating
system.
For more information, you may want to refer to the IBM DB2 documentation on performance
considerations for LOBs.
Job Architecture
Contributor operates a consistent and proven code stream across multiple database providers; that
is, a large proportion of the code (excepting database administration and DDL functions) is common
across different databases. The code is distributed within a classic n-tier architecture.
Data processing is carried out by job servers via the job architecture.
A job may contain multiple job items which represent atomic items of work. Jobs are queued for
execution and picked up automatically.
A single-processor machine normally executes a single job item at a time. Job items are executed
by job servers.
Members of the job server cluster identify items of work by polling the job subsystem within the
Contributor datastore at regular intervals. Each job server continues to execute job items until no
more work exists. An individual job server may be asked to monitor one or more Contributor
applications. A job server may therefore be polling one or more Contributor datastores that
contain job subsystems within the Contributor environment.
The job architecture enables database administrators to limit the number of DML operations carried
out against the enterprise database by adding and removing job servers from currently executing
jobs.
Concurrency
UDB configuration parameters related to applications are dependent on the expected concurrency
on the database.
For Cognos 8 Planning, database concurrency is a function of the number of job servers and the
number of threads per server. The maximum number of concurrent connections can be determined
by adding up all the active job tasks for all applications, plus the epjobexec job itself, plus active
connections for the Administration Console, plus any run-time server-side components.
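As a hedged worked example of the sizing rule above (all of the counts are hypothetical):

```shell
# Hypothetical counts: 8 active job tasks across all applications,
# the epjobexec job itself, 2 Administration Console connections,
# and 2 run-time server-side components.
job_tasks=8
epjobexec=1
admin_console=2
runtime_components=2
max_concurrent=$((job_tasks + epjobexec + admin_console + runtime_components))
echo "$max_concurrent"   # prints 13
```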
Capacity Planning
Capacity planning and system sizing is dependent on model size and the number of Contributor
applications. Data volumes may grow during Publish. For more information, see "Reporting Data:
Understanding the Publish Job Process" (p. 360).
Alternatively, you may choose a tool, such as Cognos Data Manager, to populate the tables directly.
Import Data Process
❑ Whichever method you choose, the import data is processed and compressed by the Prepare
Import job, and the data is made available to Web client users by the Reconcile job.
❑ The Prepare Import job retrieves data from the datastore, processes it, and reinserts it into the
application datastore in XML format.
● Using import load replace truncates potentially large tables, or issues a large number of simple
SQL insert statements at the start of the job, followed by a bulk load of data per node per cube,
plus annotations.
Data Loading
Publish is broken up into units of work and processed via the job cluster. Data is uploaded using
the DB2 import utility.
Contributor supports options to accumulate all the data into large text files before uploading them
to the target tables in the publish datastore. This is an interrupted publish. It does not reduce the
size of the publish data, but it may fit more easily into enterprise procedures.
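As an illustration, loading an accumulated text file with the DB2 import utility might look like the following (the file name, the DEL format, and the schema-qualified table name are assumptions; et_cubename follows the naming pattern described earlier in this appendix):

```
db2 import from et_cubename.del of del insert into planschema.et_cubename
```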
Job Failure
If an attempt to load data fails because of inadequate disk space, the Publish job is canceled. After
you have allocated more tablespace, the Contributor Administration Console user should run the
job again from the beginning.
If Cognos 8 Planning fails to create a table during Publish, the application attempts to create the
table again the next time Publish is run.
Appendix B: Troubleshooting the Generate
Framework Manager Model Extension
Use troubleshooting information to help solve problems you may encounter generating Framework
Manager Models.
Note: Make sure you specify the Oracle driver and not the Microsoft ODBC Driver for Oracle.
● You are using Oracle or DB2 in a multi-computer environment and the configuration to access
the datastore is not configured in the same way on all computers.
● at the very end of the Generate Framework Manager Model process when the Finish button is
pressed and the system is trying to generate the Framework Manager model
● when testing the Gateway URL from Generate Framework Manager Model using a distributed
environment when the Cognos 8 BI Server is on the same computer as Cognos 8 Planning
To resolve this issue, delete the directory data source, and publish to a new container.
Steps
1. Stop and restart the services.
3. On the Configuration tab, click Data Source Connection. Delete any directory data sources.
Appendix C: Limitations and Troubleshooting when
Importing Cognos Packages
Use the following limitations and troubleshooting information to help solve problems you may
encounter when importing a Cognos Package into Cognos Planning.
Tip:
● Extract the measures via the OLAP interface in a separate Administration link.
● Cognos Planning supports a wide range of aggregation types, for example, weighted averages.
You can load the leaf-level values from SAP BW into Cognos Planning for aggregation. This
requires that Cognos Planning be at the same level of aggregation as SAP BW, which might
require a change to Cognos Planning or SAP BW.
● If you have Data Manager installed, and have a good working knowledge of it, you can bypass
the Administration link and achieve the desired result for most aggregation types by moving
the data directly into the Cognos Planning import tables.
Aggregation Support
SAP BW aggregation types that are not supported by Framework Manager, but that are supported
in Cognos 8 queries by pushing the aggregation to the SAP BW system, are not supported by the
new Cognos Planning access method for SAP BW.
Tip: You can load the leaf-level values and do the aggregation in Cognos Planning where more
complex aggregations can be achieved, but there are some aggregations that cannot be replicated.
This also requires that Cognos Planning be at the same level of aggregation as SAP BW, which
might require a change to Cognos Planning or SAP BW. You can alternatively use the OLAP interface.
Tip: Switch the flags for writeContributorXML and writeDmsSpecOnSuccess from False to True
in the \Cognos\c8\bin\dataimportserviceconfig.xml file. A generated file named
dmresult_<adminlink name>_<timestamp>.xml can then be used to see which records are inserted,
updated, or rejected.
Tip: Use Macros to run Administration links and add a Go to Production step to move the data
into the Production application automatically.
Tip: Remodel the data in the source or in Framework Manager to avoid this scenario.
Tip: Create separate links for the numeric and non-numeric items. If non-numeric items need to
be mapped as 1 to many, adapt the Cognos Planning model to bring the value in once and perform
the 1-to-many mapping using D-Links. You can create multiple Administration links if the model
cannot be changed.
Tip: There is currently no workaround for this except to use the SAP BW OLAP interface.
cannot include references to both the relational and the OLAP parts of the Framework Manager
model.
Tips:
● Use the InfoCubes that underpin the multi-provider
Tip: To improve performance, you can create separate link elements within a single link so that,
when the link executes, the link elements are executed in parallel. Or you can create separate links
and run them in parallel.
Tip: The workaround is to delete the detail fact query subject and recreate it.
SAP BW Hierarchies
Members in SAP BW hierarchies must have unique business keys, as assigned to the Business Key
role in Framework Manager, across all levels.
When working with SAP BW data, the Dimension Key field for any dimension should be hidden
in the model (not the package), both for the OLAP and the Detailed Fact Query Subject access,
before the package is published. It is not intended for direct use from within Cognos Planning.
Query Prompts
Query Prompts defined in Framework Manager are not supported in any Cognos Planning links.
Tip: Do not use the SAP variables when the package will be consumed by Cognos Planning.
2. Switch the flag from False to True for the following tags:
● writeContributorXML
● writeDmsSpecOnSuccess
Setting the value to True causes the file to be written when the import succeeds.
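The exact layout of dataimportserviceconfig.xml is not shown in this guide, but the edited tags might look like the following sketch (the element form is an assumption; only the tag names and the True/False values come from this guide):

```xml
<!-- Sketch only: tag names from this guide; surrounding structure assumed -->
<writeContributorXML>True</writeContributorXML>
<writeDmsSpecOnSuccess>True</writeDmsSpecOnSuccess>
```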
Generated Files
The following files are generated automatically to help you troubleshoot the import functionality.
dmrejected_<adminlink name>_<timestamp>.xml
This file is a raw output of rows that were rejected in processing the link within Data Manager.
These rows come directly from the datasource that the link reads. Rows are rejected when data
from the query items does not match the expected target allocations in the target cube. For most data
sources, the rejected rows will contain data from the query items the link references, making
troubleshooting easier because, for example, the value in the rejected file would contain the
descriptions. However, for SAP Administration links where the Detailed Key Figures performance
enhancement is being used, the rejected rows will contain key values, not descriptions.
Example: dmrejected_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml
<adminlink name>.cmd
Used with <adminlink name>_ImportFile.xml, this file can be used to rerun the import outside of
Contributor. This can be useful if it is unclear whether the Modeled Data Import is actually being
executed; problems might be uncovered by running the import outside of Contributor. Double-clicking
the file executes it. If you add a pause command after the Java command inside the file, the
command file window stays open until a key is pressed.
Example: testAnalystFileWrite1.cmd
This file is created by Contributor and Analyst, and is deleted after the Modeled Data Import
completes successfully. If the Modeled Data Import fails, the file remains. The file contains the
commands and parameters to run the Modeled Data Import. It also contains a valid passport that
is good only until it expires. If the file is copied along with <adminlink name>.xml while the import
is occurring, the copy can be used later. If the passport in a cmd file has expired, the expired
passport can be replaced with a valid one taken from a recently created cmd file.
<adminlink name>_Result.xml
This file is used by Contributor to determine the success or failure of the import.
Example: TestCaseSap2_Result.xml
If the link executed successfully, this file contains a subset of the results contained in the
dmresult_<adminlink name>_<timestamp>.xml file. Also, if the dmrejected file (described above)
is created, this file will contain a message that lets you know where the dmrejected file can be found.
Contributor reads this file to get the resulting status of the import. This file name doesn't contain
a timestamp. If the link failed, this file will contain a portion of the exception message that can be
found in the Planning error log.
<adminlink name>_ImportFile.xml
Used with <adminlink name>.cmd, this file can be used to rerun the import outside of Contributor.
This file is created by Contributor and Analyst, and is deleted after the Modeled Data Import
completes successfully. If the Modeled Data Import fails, the file remains.
This file contains the commands and parameters to run the Modeled Data Import, information
about the matched dimensions, unmatched dimensions, data dimensions, and import table connection
info and column names.
contribXml_<timestamp>.xml
This file can be used to check if Contributor or Analyst is producing a valid file for Modeled Data
Import. You can use this file to step through the Modeled Data Import code if the model and cube
can be reproduced, or if access to the model and cube is provided.
Example: contribXml__Wed Jan 10 12_29_13 CST 2007.xml
This file contains the adminlink xml that the Contributor Application or Analyst model has sent
to the Modeled Data Import. It also contains information about the matched dimensions, unmatched
dimensions, data dimensions, and import table connection info and column names.
dmspec_<adminlink name>_<timestamp>.xml
This file can be validated against Data Manager's Data Movement Service schema to check it for
well-formedness and proper content. It can also be used to create a Data Manager package. Packages
can be imported into the Data Manager user interface to be inspected and executed. Problems with
the spec can be discovered when creating the package and executing the package within the Data
Manager user interface.
Example: dmspec_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml. This file contains the Data
Manager Data Movement Service spec: the commands sent to Data Manager to import the data
from a source into the target.
dmresult_<adminlink name>_<timestamp>.xml
This result information, before it's written out, is used to create the <adminlink name>_Result.xml
file. The <adminlink name>_Result.xml file is a subset of the information in this file.
Example: dmresult_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml
This file contains the result of executing the spec in the Data Movement Service. If the import was
successful, the dmresult file contains 'T' for the componentSuccess element, the number of rows
read from the datasource, the number of rows rejected, and the number of rows inserted into the
import table or written to the Analyst output file.
If unsuccessful, the dmresult file contains either 'F' for the componentSuccess element and useful
error message information, or no information at all.
Description: Query Items added to a Framework Manager model under a Query Subject that use
the concatenation functions ( '||' or '+') in the Query Item's expression are not supported in this
release. The query engine used to retrieve data when processing the link is not able to handle these
query items. It does not matter if the link is SAP or not.
Fix: Added Query Items with concatenation in the expression can be used within Analyst to build
D-Lists that can be included in a cube. However, these same query items cannot be used as a source
query item within a link in Analyst or Contributor. To get the appropriate mapping to occur when
choosing the source query items for the link, pick one of the query items used in the concatenation
expression. Then, when mapping to the target dimension that includes the concatenated values,
map the single source query item to the target dimension and use a sub-string on the target dimension
to achieve the appropriate mapping.
Description: ConformanceRef is a hidden Framework Manager attribute used to link the OLAP
query items to the relational query items and exists when the Detailed Fact Query Subject is created
for a SAP model. When processing a link with a SAP model that has the Detailed Fact Query Subject
created and a query item is discovered in the link that cannot be linked to the Detailed Fact Query
Subject, an exception occurs. Examples of this are a query item in a dimension or in the Key
Figures that has been added to the model since the Detailed Fact Query Subject was created, or
a query item added under a query subject folder.
Query items like this have an expression that pulls values from one or more dimension or Key
Figures values. These can never be linked to the Detailed Fact Query Subject. If the first query item
of the link can't be referenced to the Detailed Fact Query Subject, then the Detailed Fact Query
Subject won't be used for the entire link, and the link should run successfully. But, if a query item
that can't be linked to the Detailed Fact Query Subject is processed after one or more query items
that can be linked, then the link will fail.
Fix: Deleting and regenerating the Detailed Fact Query Subject and republishing the package will
fix query items added to the Key Figures or a dimension.
When dealing with query items added to a query subject, deleting and not regenerating the Detailed
Fact Query Subject, then republishing the package will allow the link to run. However, the added
query item can't contain an expression using concatenation.
Note: The above error message occurred when the expansion limit was set to 64,000. The error
message would refer to 200,000 if a client were to encounter the error with the Cognos default
setting.
Description: Framework Manager import uses Java to process the link information from Analyst
or Contributor. Java XML parsers have a built-in default limitation on
how large an XML document can be. Exceeding this limitation will cause an exception to be thrown
and the link will fail. The Framework Manager import code provides a configurable override to
this limit. The Java limit of 64,000 has been increased to 200,000 by the override value. However,
it might be possible to build a link that exceeds this increased limit.
Fix: Open the dataimportserviceconfig.xml file located in the bin folder of the Cognos installation
directory. Find the parameter with a name of "EntityExpansionLimit" and increase the value. A
suggested increase is to make the value 300,000. Increasing the expansion limit may mean that
more internal memory is needed, causing an out-of-memory problem if the maximum heap size
is not also increased. Find the JavaMaximumHeapSize parameter and increase it as well. Doubling
the value to 256M should be safe for 300,000, but memory limitations on the machine may still
cause out-of-memory issues.
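The edited entries might look like the following sketch (the element form is an assumption; the parameter names and the suggested values come from the fix above):

```xml
<!-- Sketch only: parameter names from this guide; layout assumed -->
<EntityExpansionLimit>300000</EntityExpansionLimit>
<JavaMaximumHeapSize>256M</JavaMaximumHeapSize>
```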
Description: There is a limitation on the number of filter values in the Cognos query engine, and
exceeding that limit by building a query with a very large number of filter values causes the link to
fail. This can happen in Analyst when building a D-Link with one or more matched dimensions
mapping to large cube dimensions. It happens in Analyst or Contributor when the link contains
one or more unmatched source dimensions that are filtered with a large number of values. It happens
in Contributor when the link contains one or more matched dimensions manually mapped with a
large number of values. Look at the link itself to determine whether the quantity of filters might be
causing the problem. Problems occur at somewhere around 300 total filter values. If this error
message is encountered, and the link deals with a large number of filter values, the fix described
below is the best way to get the link to run.
Fix: Open the qfs_config.xml file in the configuration folder under the Cognos 8 installation
directory. In the provider element with the name "OlapQueryProvider", add the following:
<!-- Allow use of the optimization for IN operator --><parameter
name="ApplyExtensiveComparisonOptimization" value="true"/>
Note: Turning this parameter on affects all queries, not just queries in Framework Manager links.
While performance may suffer when this is on, it can be turned off or removed from the configuration
file after the link has executed.
Description: In the SAP models, each level in a hierarchy contains a query item that has a role of
_businessKey. This query item is not intended for use in links, and therefore should not be used.
This query item is a special field that contains specific key values. If the query item is compared to
values that are not in the field's domain of values, an exception is thrown.
Fix: Because these query items are not intended for use in links, they should be hidden from view
in the model (not the package) for both the OLAP and Detailed Fact query subjects before the
package is published.
Steps
1. Create a (.bat) file to convert the dmspec_<adminlink name>_<datastamp>.xml into a package
file
● The command in the .bat file is:
"Cognos Installation Directory\bin\CatAdapterTest" -x "D:\DMSpec\generatePkg\TestCaseSap4.xml" -p "D:\DMSpec\generatePkg\TestCaseSap4.pkg"
where "D:\DMSpec\generatePkg\TestCaseSap4.xml" is the dmspec file to process and
"D:\DMSpec\generatePkg\TestCaseSap4.pkg" is the resulting package file. This example shows
files that have been copied or renamed to names that are easier to type. The paths to the xml
and pkg files must match the computer's directory structure.
● Following the above command with a pause command on the second line will leave the
command window open to view the package creation. This is useful if the package creation
fails so you can see messages.
2. Run the bat file. A (.pkg) file is created if successful. If unsuccessful, an error message explaining
why the package could not be created is displayed.
3. Open the Data Manager user interface, then open an existing catalog or create a new catalog.
4. Import the package file to create a build. From the File menu, click Import Package and navigate
to the package file that you just created. It is not necessary to backup the catalog.
From the package file, a new build will appear under the Builds and JobStreams folder. You
can click the new build to see a graphical representation of it.
5. Fix the Connection. The build will not run until the Framework Manager package has been
associated with the build's source connection and the target connection is correct.
● Click the new build and note the number of the source connection that appears at the far
left. Also note the name of the target connection at the far right of the build. This is
usually CON1, or something similar.
● Right-click the source connection and click Properties. Click the Connection Details tab.
● Connection types are listed on the left. Click Published Framework Manager Package.
On the right, the Package box will be empty.
● Click the … button. A Cognos 8 logon window appears, followed by a list of published
Framework Manager packages. Select the appropriate package and click OK.
● From the Connection Properties window, click Test Connection to verify that the connection
is now good. Click OK.
● Right-click the target connection and click Properties, and then click Connection Details.
● Verify that the Connection Details are correct. Click Test Connection. Click OK to save
any changes.
● Click Save.
6. Highlight the build and click Execute. Even if the build is highlighted, it won't execute unless
it was the last thing clicked.
● A command window will open showing the status and results of the build execution.
7. To determine problems with the v5 query, right-click the datasource icon in the build and click
Properties.
Select one row. Click the Query tab and click Run. Problems with the V5 query are displayed as
it tries to run.
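As a side note, the CatAdapterTest command line in step 1 is easy to get wrong because of the quoting. It can be assembled by a small script; this is a sketch using an illustrative installation path, and build_convert_command is our own helper, not a Cognos tool:

```python
from pathlib import PureWindowsPath

def build_convert_command(install_dir: str, dmspec_xml: str, package_file: str) -> str:
    """Assemble the dmspec-to-package conversion command from step 1.

    install_dir is the Cognos 8 installation directory; every path is
    quoted so that spaces (for example in "Program Files") survive.
    """
    exe = PureWindowsPath(install_dir) / "bin" / "CatAdapterTest"
    return f'"{exe}" -x "{dmspec_xml}" -p "{package_file}"'

cmd = build_convert_command(
    r"C:\Program Files\cognos\c8",              # illustrative install path
    r"D:\DMSpec\generatePkg\TestCaseSap4.xml",  # dmspec file to process
    r"D:\DMSpec\generatePkg\TestCaseSap4.pkg",  # resulting package file
)
print(cmd)
```

Writing cmd to a .bat file, followed by a pause line, reproduces the file described in step 1.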
Note: Check the status of an INTER_APP_LINKS job in the Administration Console. If the cmd file has
been deleted, its contents can be copied from the Failure Information dialog box.
Steps
1. Edit the cmd file and add a line at the end of the file that contains pause. This will keep the
command window open after the import finishes.
Another option is to redirect the cmd file output to a text file. Add >> c:\windows\temp\
linkoutput.txt to the end of the first line. This redirects the output to the linkoutput.txt
file, which can be viewed after the cmd file execution completes.
3. Locate and double-click the cmd file to view the output from the import. The dmspec, dmresult,
and <adminlink name>_Result files are also created, as when running within Contributor.
Job Timeouts
An Administration link that executes for more than 30 minutes may appear to have timed out,
showing up as Cancelled in the Monitor Links section of the Administration Console.
Even though the link is marked as failed or cancelled, it may still be executing. Look in Task
Manager for dmrunspec instances; there will be one for each link element in the link.
To increase the timeout, edit epJobExecutorResources.xml, located in <install_location>\
cognos\c8\bin, and increase the value for Wait this long to see if RUNNING Job Items complete.
The default setting is 1800 seconds (30 minutes). The file is installed as read-only, so we
recommend that you back up the file and clear the read-only flag. After changing this setting,
the Planning service must be stopped and restarted on the machine that is executing the link.
Appendix D: Customizing Cognos 8 Planning -
Contributor Help
This section provides additional information about creating help for planners and reviewers.
● Detailed cube help. This appears as a separate web page when the user clicks the Help button
at the top of the grid. This is described in the following sections.
● A start tag consists of a left and right angle bracket, with a tag name in between. For example,
<b>.
● An end tag consists of a left and right angle bracket, a forward slash, and a tag name. For
example, </b>.
In the example <h1>Sample Help Text</h1>:
● <h1> indicates the start of text that is displayed in heading 1 style.
● Sample Help Text is the text that is displayed in heading 1 style.
● </h1> indicates the end of heading 1.
Steps
1. Create a folder for images in the same directory that you have used for the web site.
3. You must use the full path to reference the image; otherwise, it will not display for all users.
Example
To link to a file named File.html located in the subdirectory Path found on the server
www.cognos.com, you enter the following:
<A HREF="http://www.cognos.com/path/file.html" TARGET="_blank">text or image</A>
Example
To add a link to your technical support contact, you could use:
<A HREF="mailto:support@mycompany.com">E-mail technical support</A>
This will appear in a manner similar to this in the Web browser:
E-mail technical support
Appendix E: Error Handling
● How to use the epLogfetcher to locate and retrieve different error logging files.
● Timing and logging registry - timing and logging provides timing information for processes.
This is written to a file named PlanningTraceLog.csv, which is in the same location as the
PlanningErrorLog.csv file and is useful for troubleshooting.
History tracking - Tracks actions performed by users. Located in a database table named history.
File name: not applicable.
JCE error logs - Errors with the calculation engine. Located on the job server, administration
machine, or client machine in the local Temp folder (%TEMP%). File name: jce*.tmp.
Timeout Errors
If you are experiencing timeout errors on long-running server calls, change the default remote
service call timeout value (480 minutes) to allow for longer calls.
Steps
1. On the System Settings page, click the System Settings tab.
For information about the Maximum e.List items to display as hierarchy options, see "Import
e.List and Rights" (p. 92).
History Tracking
The history tracking feature in Application Options (p. 72) tracks the actions performed by users.
When you have Actions timestamps and errors, or Full debug information with data selected,
information is recorded in the database in a table named history.
You can use history tracking if you have problems with, for example:
● Workflow - in which case you set history tracking to Actions timestamps and errors.
● Aggregation (if it appears that you have incorrect aggregation), you should set it to Full debug
information with data.
The actionid contained in the history table is made up of two codes: a result code and an action
code. The first 2 digits are the result code and the rest make up the action code.
The following table shows the result codes and their meanings.
00 Success
01 Not Owner
02 Being Edited
03 Data Changed
04 Annotation Changed
05 Locked
06 Not Locked
0A Grantor Locked
0B Already Reconciled
0C Not Initialized
The following table shows the Actionid from the history table and the action that it refers to. Note
that sometimes actionids may be combined. For example, if a user has made a change and submitted
it, you might get the Actionid 0x04A0.
0x0000 None
0x0008 Annotate
0x0010 Edit
0x0020 Save
0x0040 Start
0x0080 Submit
0x0100 Reject
0x0200 Reconcile
0x0400 Release
0x4000 Update
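The composition of the actionid can be sketched in Python. The storage format is an assumption here (a hexadecimal string whose first two digits are the result code), and decode_actionid is our own illustrative helper, not a Cognos API:

```python
# Result codes and action flags taken from the two tables above.
RESULT_CODES = {
    0x00: "Success", 0x01: "Not Owner", 0x02: "Being Edited",
    0x03: "Data Changed", 0x04: "Annotation Changed", 0x05: "Locked",
    0x06: "Not Locked", 0x0A: "Grantor Locked", 0x0B: "Already Reconciled",
    0x0C: "Not Initialized",
}
ACTION_FLAGS = {
    0x0008: "Annotate", 0x0010: "Edit", 0x0020: "Save", 0x0040: "Start",
    0x0080: "Submit", 0x0100: "Reject", 0x0200: "Reconcile",
    0x0400: "Release", 0x4000: "Update",
}

def decode_actionid(actionid: str):
    """Split an actionid into (result name, list of combined action names)."""
    result = RESULT_CODES.get(int(actionid[:2], 16), "Unknown")
    action = int(actionid[2:], 16)
    # Action codes are bit flags, so combined actions are recovered by masking.
    names = [name for flag, name in sorted(ACTION_FLAGS.items()) if action & flag]
    return result, names

print(decode_actionid("0004A0"))  # ('Success', ['Save', 'Submit', 'Release'])
```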
This is not an exhaustive list, and you may get an error message telling you that a log was created.
To search for a JCE error log, search for JCE*.tmp.
These log files are stored in hidden folders.
GUID - Each distinct error has a unique GUID (a unique identifier) by which it is identified. By
looking at entries in the log with matching GUIDs it is possible to group log entries by particular
errors. It is possible to cross-reference errors between different log files if the error stack
spans server-side and client-side components.
Stack - Used in conjunction with the GUID to determine the source of the error, and the call
stack that the error was passed through before being reported. A value of 1 indicates the source
of the error, and the highest value is the point at which it was reported to the user. Again,
these sequences can span log files.
Date Time - The date and time at which the error occurred. The time is taken from the machine
where the component represented in the current log entry is running.
Component - The name of the component represented in the current log entry.
Version Information - The version of the component represented in the current log entry.
Procedure - The procedure within the file where the error occurred.
Line Number - The line number within the procedure where the error occurred. This enables a
developer to trace exactly which call caused the error and, in conjunction with the error number
and error message, gives a high degree of detail about the problem.
Source - The origin of the error. May or may not be within Cognos components.
User Domain/User Name - The user domain and user name under which the component represented in
the current log entry was executing.
Previous User Domain/Previous User Name - The domain on which the user was logged in and the
previous user name.
Previous Machine Domain\Previous Machine Name - The machine from which the call was made to the
current component. This is the indicator to look for error logs on this machine, where it may be
possible to find corresponding entries (matched on GUID) lower down the call stack. It may also
provide clues to other errors that occurred prior to the issue being investigated.
It is imperative that these logs are provided to development when reporting problems.
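For example, a small analysis script can group log rows by GUID and sort them by Stack to find each error's source; the row layout and the component names below are invented for illustration:

```python
from collections import defaultdict

# Parsed log rows as (GUID, Stack, Component, message) tuples; in practice
# these would be read from the csv log files, possibly from several machines.
rows = [
    ("ab12", 2, "epAdminConsole", "Call failed"),
    ("ab12", 1, "epJobServer", "Table or view does not exist"),
    ("ff09", 1, "AnalystLib", "Divide by zero"),
]

errors = defaultdict(list)
for guid, stack, component, message in rows:
    errors[guid].append((stack, component, message))

for guid, entries in errors.items():
    entries.sort()  # Stack value 1 first: the source of the error
    stack, component, message = entries[0]
    print(f"{guid}: source is {component}: {message}")
```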
● PlanningTimer--for timer files. These files are present if timing has been enabled (see below).
● AnalystLog--Analyst errors.
● JLog--J server errors. It contains errors relating to data, links, and calculations.
Steps
1. Run epLogFetcher.exe from installation_location\Cognos\c8\bin\
Machine to Search (or IP Address) - Enter the machine name or IP address of the machine with the
log files. To search the local machine, enter localhost.
Select Protocol - Select HTTP if you are looking for components on the Administration server.
Select COM if your Administration Console is on a separate machine from the Administration
server and you are searching locally.
Working Folder - Enter or browse for a folder on your local machine to retrieve the files to.
4. Click Add. This adds the search criteria to the top panel. Repeat steps 2 to 4 until you have
added all the log files you need.
5. To start the search, select the lines containing the search criteria and click View File(s).
Tip: You can do this one line at a time, or you can select multiple lines by holding down CTRL
and clicking. The results of the search are displayed in the lower panel.
6. In the lower panel, select the files you want to bring into the working folder and click Get File(s).
If you select two or more files with the same name into the same working directory, a number is
appended to the file name.
This tool only finds IIS logs if they are in the default path. It is not capable of retrieving logs from
remote client machines.
Appendix F: Illegal Characters
The following ASCII characters are not allowed in e.List item and user names, e.List item and user
captions, user logons, and user email addresses.
They are also not allowed in dimension names or in namespace names.
Note that these are non-printing characters below ASCII code 32.
0 NUL Null
5 ENQ Enquiry
6 ACK Acknowledge
7 BEL Bell
8 BS Backspace
11 VT Vertical tab
13 CR Carriage return
14 SO Shift out
15 SI Shift in
24 CAN Cancel
25 EM End of medium
26 SUB Substitute
27 ESC Escape
28 FS File Separator
29 GS Group Separator
30 RS Record Separator
31 US Unit Separator
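A pre-import script could screen proposed names against this list before the e.List or rights file is built. The helper below is illustrative only, not part of Contributor:

```python
# ASCII codes listed above as illegal in names, captions, logons, and emails.
ILLEGAL_CODES = {0, 5, 6, 7, 8, 11, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31}

def find_illegal_chars(name: str) -> list:
    """Return the ASCII codes of any illegal characters found in name."""
    return [ord(c) for c in name if ord(c) in ILLEGAL_CODES]

print(find_illegal_chars("Cost Centre 100"))   # [] - name is acceptable
print(find_illegal_chars("Bad\x1bName\x00"))   # [27, 0] - ESC and NUL present
```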
Appendix G: Default Options
The following sections describe the default options for a Contributor application.
Grid Options
In the Grid Options, you can set the following:
Application Options
In Application Options you can set the following:
Allow bouncing: On
Admin Options
You can configure the import and publish actions using the following options:
Base Language: EN
Note that Admin Options are not visible to users when the DBAuthority key is not set to DBA in
the registry.
Go to Production Options
You can set the following options prior to creating the production application:
Planning Package
Description: Blank
Back-up Datastore: On
Include Rollups: On
e.List
When importing the e.List with just the compulsory columns in the file, you get the following
defaults:
Publish: No
Rights
When importing rights with just the compulsory columns in the file, you get a default of Submit.
Access Tables
If no access levels are set, the following defaults apply:
● All cubes apart from assumption cubes have a global access level of Write.
● Assumption cubes (cubes used to bring data into an application) have a global access level of
Read.
AccessLevel: No Data.
The base access level for rule-based access tables is Write.
Delete Commentary
You can set the following options:
Appendix H: Data Entry Input Limits
The data entry limits for Cognos 8 Planning - Analyst and Cognos 8 Planning - Contributor are
affected by a number of different factors, such as the operating system, datastore provider, and
computer hardware.
Note: The limits described here are guidelines, and are not hard and fast rules.
View Publish
● SQL Server = unlimited
● UDB = unlimited
Note that the publish views cast down to a varchar: SQL Server = 8000, UDB = 1500 (that is, you
see only the first 8000 characters in the SQL Server view).
Table-only Publish
The table-only publish limits vary by format.
Text fields (epReportingText)
● SQL Server = 8000
● UDB = unlimited
● Oracle = 4000
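A publish-preparation script can check planned text values against these limits before loading. The mapping and the fits helper below are illustrative only; None marks providers with no documented limit:

```python
# Table-only publish text-field (epReportingText) limits quoted above.
TEXT_LIMITS = {"sqlserver": 8000, "udb": None, "oracle": 4000}

def fits(provider: str, text: str) -> bool:
    """Return True if text fits within the provider's text-field limit."""
    limit = TEXT_LIMITS[provider]
    return limit is None or len(text) <= limit

print(fits("oracle", "x" * 4000))   # True
print(fits("oracle", "x" * 4001))   # False
print(fits("udb", "x" * 100000))    # True (no documented limit)
```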
Glossary
access tables
In Contributor, controls access to cells in cubes, whole cubes, and assumption cubes.
accumulation D-links
D-links that consolidate data from a source D-cube to a D-cube based on text data.
administration job
An administration task that runs on job servers and is monitored by the Contributor Administration
Console. These tasks are commonly referred to as jobs. Some examples of jobs are reconcile, publish,
cut-down models, and links.
administration link
A link that enables an administrator to move data between Contributor applications. An
administration link can contain multiple applications and cubes as the sources and targets of the
link. A link can contain multiple elements which target either the development or the production
application. Administration links run using the job architecture and so are scalable.
administration machine
In Cognos Planning, the computer that is used to operate Contributor Administration.
administration server
In Cognos Planning, the server that contains the planning components package (COM+ package)
and where control of the online application is maintained. You connect to this machine when you
first run Contributor Administration.
application
In Cognos Planning, a Contributor application. Contributor applications are used for the collection
and review of data from hundreds or thousands of Web users. One application can be used by
many users in different locations at the same time.
Application server
See Job Server.
assumption cube
In Cognos Planning, a cube that contains data that is moved into the Contributor application when
the application is created or synchronized. It does not contain the e.List. Therefore, data applies to
all e.List items, and is not writable. The data it contains is often named "assumption data."
A-table
In Analyst, an allocation table that shows how two lists correspond. It is useful for transferring
data when no character matches are possible between lists of items.
BiF
Built in Function. In Cognos Planning a BiF is a special calculation formula that was set up
specifically for planning. For example, depreciation, discounted cashflow, forecasting using different
drivers, and stock purchase prediction based on future sales.
bounce
In Cognos Planning, a term used to refer to the removal of the currently editing owner of an e.List
item in the Contributor Web client. A planner or reviewer may "bounce" the owner.
commentary
In Cognos Planning, commentary represents any additional information attached to Contributor
cells, tabs, or e.List items, including both user annotations and attached files. You can use
administration links, system links and local links to copy commentary.
contribution
In Cognos Planning, data that is entered into an e.List in the Contributor application.
Contributor Administration
A tool which enables administrators to publish an Analyst business model to the Web, manage
access settings and model distribution, and configure the user's view of the model.
cube
A physical data source containing a multidimensional representation of data. A cube contains
information organized into dimensions and optimized to provide faster retrieval and navigation in
reports. In Cognos Planning, a cube (see also D-Cube) corresponds to a tab on Contributor client
user interface.
current owner
In Contributor, the person who is editing or last opened an e.List item for edit.
cut-down models
In Cognos Planning, customized copies of the master model definition that have been cut down to
include only the specific elements required for a particular e.List item.
datastore
In Cognos Planning, the location where one or more Contributor applications are stored. A datastore
contains the information needed to connect to a database supporting the Contributor applications.
D-cube
In Cognos Planning, a multi-page spreadsheet made up of two or more dimensions. A D-cube must
contain at least two dimensions. In Contributor, a D-cube is referred to as a cube.
dimension
In Cognos Planning, the rows, columns, and pages of a cube are created from dimensions. Dimensions
are lists of related items such as Profit and Loss items, months, products, customers, and cost centers.
Dimensions also contain all the calculations. One dimension can be used by many cubes.
In Cognos 8 BI a dimension is a broad grouping of descriptive data about a major aspect of a
business, such as products, dates, or markets. Each dimension includes different levels of members
in one or more hierarchies and an optional set of calculated members.
D-link
In Analyst, a link that copies information in and out of cubes, and sometimes to and from text or
ASCII files.
D-list
An alternative term for dimension.
D-list format
Lets you enter text from another D-List in a row or a column. The format may be used in
database-type functions to consolidate data in a similar manner to query-style reports.
drill down
In Cognos Planning, drill down is a technique used to analyze D-Cube data that was imported by
a D-Link. You can drill down on any single cell in a D-Cube. If the cell contains data transferred
by a D-Link, drill down opens a view of the source data. If the data was imported from another
D-Cube, drill down opens the appropriate selection from the source D-Cube. If the data was imported
from an external source (a mapped ASCII file or an ODBC database), drill down extracts the relevant
data from the source file and displays it in a special drill-down results dialog box.
In Cognos 8 BI, drill down refers to the act of navigating from one level of data to a more detailed
level. The levels are set by the structure of the data. See also drill up.
e.List
The basis for the structure of a Contributor application. An e.List is a hierarchical dimension which
typically reflects the structure of the organization (for example, cost centers and profit centers).
editor
In Cognos Planning, a planner or reviewer who is editing a contribution.
extensions
In Cognos Planning, extends the functionality of Contributor Administration and Web Client. There
are two types of extensions: Admin Extensions and Client Extensions. Admin Extensions run in
the Administration Console. Client Extensions are activated from the tool options on the Contributor
Grid.
file map
In Analyst, a file map tells the program how to split an ASCII or text file into columns of data. A
file map puts in the divisions, or breaks, between one column of numbers and another. It defines
the start point and width of each column of data within an ASCII file, and denotes whether the
column is a numeric, text, or date field. If there is only one column, a file map is superfluous. File
maps are always necessary when using an ASCII file as the source for a D-Link.
Get Data
In Cognos Planning, a command in the Web client that loads the screen that displays local links
and system links.
go to production
In Cognos Planning, a process in the Contributor Administration Console that takes the development
application and creates the live production application.
grid
In Cognos Planning, a tabular form for viewing and entering data.
GUID
Global Unique Identifier. A unique internal reference for items in a model. For example, when you
add a dimension item, this item is assigned a GUID.
hold
In Cognos Planning, a function that protects a cell against breakback.
import block
In Cognos Planning, a package of data from Analyst or an external system that is validated and
prepared for import into a Contributor application. The import block is imported into the
Contributor application datastore via a reconcile job.
import link
A function used in Analyst to update the items in a dimension on a regular basis from a source file
or database.
job server
In Cognos Planning, a machine that runs the administration jobs. There may be multiple job servers.
A job server is sometimes referred to as an application server.
library
In Cognos Planning, the storage location of the model. The library includes a group of connected
Analyst objects: macros, reports, D-Links, selections, D-Cubes, maps, A-Tables, D-Lists, and formats.
A library is similar to a Windows directory.
local links
In Cognos Planning, a link defined and run by a user in the Web client.
lock
In Cognos Planning, a function that prevents data being entered into cells whether by typing or via
a D-Link.
lookup d-links
In Cognos Planning, D-Links that look up data from a source D-Cube based on text data. They use
a database D-Cube as a target.
macros
In Cognos Planning, a single object defined by an administrator to automate a series of
Administration tasks in Contributor. Each task is known as a step. In Analyst, a set of commands
that have been recorded and grouped together as a single command, which is used to automatically
complete a list of instructions in one step.
match descriptions
In Cognos Planning, used to automatically match source and target dimension items with the same
name. In addition, match descriptions can be used to perform an allocation by date.
maximum workspace
(MAXWS) The amount of memory reserved for Analyst. May be changed to allow larger models
to run more effectively.
model
A physical or business representation of the structure of the data from one or more data sources.
A model describes data objects, structure, and grouping, as well as relationships and security.
In Cognos 8 BI, a design model is created and maintained in Framework Manager. The design
model or a subset of the design model must be published to the Cognos 8 server as a package for
users to create and run reports.
In Cognos Planning, a model is a group of D-Cubes, D-Lists, D-Links, and other objects stored in
a library. A model may reside in one or more libraries, with a maximum of two for Contributor.
namespace
For authentication and access control, a configured instance of an authentication provider. Allows
access to user and group information.
In XML, a collection of names, identified by a URI reference, which are used in XML documents
as element types and attribute names.
In Framework Manager, namespaces uniquely identify query items, query subjects, and so on. You
import different databases into separate namespaces to avoid duplicate names.
offline grid
In Cognos Planning, the application that is used to access a section of an offline Contributor
application. The purpose is to enable users to enter or view data while there is no network
connection.
owner
In Contributor, a user who is assigned to an e.List item through the Rights screen and is permitted
to edit or review it. These rights may be directly assigned, or may be inherited.
planner
In Cognos Planning, a person who enters data in the Contributor application in the Web client.
production application
In Cognos Planning, the version of the Contributor application seen by the Web-client user. The
version of the Contributor application that is seen in the Contributor Administration Console is
the development application.
protect
In Cognos Planning, a function that is used to prevent data from being typed into a cell. However,
data can still be transferred into a protected cell via a D-Link.
publish
In Cognos 8 BI, refers to the creation of a package that makes metadata available to the Cognos 8
server. Information in the package is used to create reports and other content.
In Cognos Planning, refers to a function that is used to copy the data from Contributor or Analyst
to a datastore, typically so it can be used for reporting purposes.
publish container
In Cognos Planning, a datastore container created specifically to publish data to.
reconciliation
In Cognos Planning, a process that ensures that the copy of the Contributor application that the
user accesses on the Web is up to date, for example, all data is imported. Reconciliation takes place
after Go to Production has run and a new production application is created.
reviewer
In Cognos Planning, a person who reviews the submissions of reviewers or planners.
rights
In Contributor, assigning rights enables administrators to determine what users can do in a
Contributor application. Rights determine whether a user can view, edit, review, and submit data.
saved selections
In Contributor, dynamic groups of items from a dimension or e.List. When used in conjunction
with access tables, they provide a high level of control over access to cells.
In Extensions, sets of data configured during an export or refresh. A user can choose a saved selection
and update just the data without reconfiguring the report or export criteria.
In Analyst, sets of data used to save a specific D-Cube orientation, including a selection of rows,
columns, and pages for later use. The selected items, sort order, and slice of the D-Cube are all
saved in a named selection.
synchronize
In Contributor, a function used to update all cubes, links, and so on in an application when the
underlying objects in Analyst change. Changes include renaming dimensions, adding, deleting, or
renaming dimension items.
system links
In Contributor, a link that is defined by the Contributor administrator and run by a user in the
Web client. This is part of the Get Data functionality in the Web client.
table-only layout
In Cognos Planning, a publish schema that consists of a table-only layout, and is particularly suitable
for the Generate Framework Manager Model extension.
view layout
In Cognos Planning, a publish schema that consists of a layout of views over text values.
Index
C
cab downloads
  allowing, 69
caching
  Contributor data for Cognos 8, 308
Calculation Engine (JCE)
  error logs, 382
capabilities, 32
capacity planning, 359
cascaded models, 144
cascade rights, 37
changing
  applications and translations, 183
  e.List, 133
character large objects, See large objects (LOBs, BLOBs, CLOBs)
client-executed links, 142
client extensions, 302
  configuring, 303
  extension groups, 302
client-side cache, 72
client-side reconciliation, 52
CLOBs, See large objects (LOBs, BLOBs, CLOBs)
CM-REQ-4159, 362
code pages, 190
Cognos 8, 304
  caching Contributor unpublished data, 308
Cognos 8 Business Intelligence studios
  connecting to data sources, 308
Cognos Connection
  create a data source connection, 161
Cognos namespace, 27
Cognos Performance Applications, 315
Cognos Series 7 namespace, 28, 335
color
  selecting for changed values, 70
column headings
  EListItemCaption, 96
  EListItemIsPublished, 97
  EListItemName, 96
  EListItemOrder, 96
  EListItemParentName, 96
  EListItemReviewDepth, 97
  EListItemViewDepth, 96
commentaries
  deleting, 289
commentary
  breakback considerations, 293
  copy, 292
  cumulative, 292
  definition, 398
  deleting, 289
  moving with administration links, 148
  moving with system links, 148
  viewing and editing, 291
components tabs
  translation, 185
concurrency, 359
condition
  specifying for event, 222
configure application, 68
configuring attached document properties, 290
configuring the Web client rights, 37
contribution e.List items, 105
contributions, 24, 83
  definition, 398
Contributor add-ins, 84
  Microsoft Excel, 313
Contributor Administration
  definition, 398
contributor-only cubes, 75
copy
  import, 173
copy commentary, 292
copy development e.List item publish setting to production application, 79
copying
  Analyst Contributor links, 350
copyright material
  printing, 15
create a data source connection, 161
creating
  application, 63
  applications, 37, 58
  applications using a script, 37
  connections from Cognos 8 BI products, 308
  cube help, 375
  datasource connections, 305
  detailed fact query subject, 166
  Framework Manager projects, 164, 305
T
table-level locking, 358
table-only layouts
  definition, 403
table-only publish layout, 262
table-only publish post GTP, 77
take ownership
  send email, 72
test environment, 168
text formatted cells
  data entry limits, 395
timeout, 380
translating
  help, 189
  searches, 189
  strings, 185
translation
  application tabs, 185
  assigning to users, 183
  changes, 183
  cycles, 183
  exporting files, 188
  importing and exporting files, 187
  importing files, 188
  rights, 37
trees, 83
troubleshooting
  Generate Framework Manager Model extension, 361
  importing Cognos packages, 363
  importing from Cognos package, 366
  macros, 225
  modeled data import, 366
  unable to change model design language, 362
  unable to connect to Oracle database, 361
  unable to create framework manager model, 361
  unable to retrieve session namespace, 362

U
underlying values, 314
unowned items, 91
unpublished Contributor data, 308
unregistering
  namespaces, 29
upgrading
  Admin extensions, 329
  administration links and macros, 330
  Analyst - Contributor links, 329
  applications, 58, 329
  Contributor web site, 336
  planning administration domain, 330
  rights, 37
  Web sites, 336
  what is not upgraded in Contributor, 329
  wizards, 332
use client-side cache, 72
user annotations, 287
  behavior, 288
  restrictions, 287
user models, 308
users, 19, 29, 98
  annotations, 287
  classes and permissions, 30
  loss of access to e.List items, 251
  validate, 109

V
validate
  users, 109
validation methods, 227
version dimensions, 257
view application details, 77
view depth, 98, 102
viewing
  imported access tables, 124
view layout
  definition, 403
view rights, 108

W
warning messages
  importing the e.List and rights, 93
Web clients
  settings, 85
web client settings, 69
web client status refresh rate, 72
Web sites, 83
  creating, 23
  upgrading, 336
wizards
  Import from IQD wizard, 315
workflow state definition, 297
write access, 116

X
XML
  default locations and filenames, 390