INTERNAL & CONFIDENTIAL

SAP HANA XS – Architecture Overview


This document describes the architecture of SAP HANA XS (extended
application services), its components, and their interaction for application
programming of native HANA apps.

Stefan Bäuerle

The architecture described in this document has been defined and implemented by the XS team.
Thanks to all colleagues who supported the creation of this document with their knowledge and
patient explanations.

Version 1.01 – 2013-02-21

Status: released

Change History

Version 1.0  (2012-11-27): Released Version 1.0
Version 1.01 (2013-02-21): Minor changes


1 Contents
2 Context
3 Goal of this Document
4 Programming Model Philosophy
5 SAP HANA – Native XS Integration
  5.1 Overview
  5.2 XS Engine as Extended Index Server
6 HANA XS in a Nutshell
  6.1 Basic data access processing (HTTP requests)
  6.2 Resource Handlers ("Containers")
  6.3 Additional Components
  6.4 XS Applications
  6.5 User Management, Authentication & Authorization
7 Repository
  7.1 Basic components
  7.2 Type Specific Plug-ins – Resource Parser/Compiler
  7.3 Authorization Checks
  7.4 Extensibility
8 CDS & RDL – Supporting native app development on HANA
  8.1 CDS – Core Data Services
  8.2 RDL – River Definition Language
9 Appendix
  9.1 Sources


2 Context
The SAP HANA Platform serves as the technology backbone: on the one hand it provides a new
high-performance database for existing SAP solutions, and on the other hand Extended Application
Services (XS) that enable a new class of native, high-performance HANA applications.

This document and the architecture described are based on the current state of HANA SP5
development.

3 Goal of this Document


This document describes the architecture of the HANA Application Services (HANA XS) and their
components (such as the XS Engine and the repository), including their main building blocks and
the interactions between them.

It also covers related architecture aspects such as CDS (Core Data Services) and RDL, the
development environments, and their relation to the native artifacts of SAP HANA, to convey the
inner workings of native HANA applications.


4 Programming Model Philosophy


The XS Engine, its basic principles, and the provided services are built around a programming model
philosophy that is fundamentally different from the programming models used so far in well-known
application servers such as ABAP or Java.

The programming model philosophy is based on:

 Resources containing the development artifacts. Resources can be static resources such as
HTML, JPG, client-side JS, or XML files (which are managed by the repository but not
interpreted or processed), or resources that describe native HANA artifacts. The latter are
stored (and managed) in the repository and compiled into runtime artifacts that are used by
the XS Engine (e.g. xsapp, xsjs, xsodata) or the HANA DB and its engines (e.g. tables, views,
procedures). It shall be possible to define resources with simple text-based tools (down to
plain Notepad), without an additional tool lock-in enforced by the infrastructure.
 Stateless requests via HTTP / OData are the fundamental paradigm for clients accessing the
XS Engine for data processing (read / write).
 Read access is kept as similar and as close to SQL as possible, avoiding unnecessary data
processing between the client and the core HANA engines.
 Write access is handled via application logic running in the application container(s) of the
XS Engine (currently server-side JavaScript) to process the requests. Data manipulations and
data-intensive calculations are delegated to the HANA database.
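As an illustration of this resource-centric philosophy, a (purely hypothetical) application package in the repository could contain a mix of static resources and compiled artifacts:

```
sap.hello/                 -- repository package
    .xsapp                 -- marks the package as an application root
    hello.xsjs             -- server-side JavaScript, executed by the XS Engine
    hello_odata.xsodata    -- declarative OData service definition
    index.html             -- static resource, served as-is
    mytable.hdbtable       -- native HANA artifact, compiled into a DB table
```

All of these are plain text files and can be created and edited with any text editor.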

5 SAP HANA – Native XS Integration


This section describes various aspects of the SAP HANA XS Engine. After providing an overview on
how the XS Engine itself is embedded into SAP HANA, different aspects are explained in detail to
understand the inner construct of the XS Engine, related components and their usage for native
HANA applications.

5.1 Overview
As part of SAP HANA, and in addition to the in-memory database and the inner data processing parts of
SAP HANA, the XS Engine provides access to the SAP HANA database using a consumption model
exposed via HTTP. In addition to providing application-specific consumption models, the XS Engine
also hosts system services, e.g. a built-in web server to access static content stored in the repository.

In the XS Engine, the persistence model with tables, views, and stored procedures is mapped to the
consumption model that is exposed to clients. This can be done in a declarative way or by writing
application-specific code that runs in the XS Engine. The application-specific code running in the XS
Engine is primarily used to provide the consumption model for client applications. Data-intensive
calculations are preferably done close to the data in the index server using SQLScript, modeled views,
L procedures, or C++.


5.2 XS Engine as Extended Index Server


The XS Engine can be seen as a special index server¹ that, on the one hand, does not use the data
store and data processing itself and, on the other hand, is extended by additional services that enable
application development and the mapping to the consumption model.

[Diagram: a browser/client connects via HTTP / OData to the HTTP server (ICM), which forwards requests to the XS Engine – an index server hosting the XS services (request handler, application container, DB catalog cache, repository); via SQL and distributed execution (TCP/IP), the XS Engine reaches the index server "core" with SQL session & connection manager, SQL processor, stored-procedure processor, calc engine, internal low-level API, repository, DB catalog, and the data store / persistence; HANA Studio connects via SQL.]

Figure 1: Overview XS Engine as extended Index Server

The HANA application services provided by XS consist of various components. HTTP-based
consumption requests are handled by the request handler which – depending on the type of request –
either directly provides the response or delegates the call to the corresponding resource handler. The
application containers are used to process application-specific content such as logic implemented in
server-side JavaScript. Further details can be found in section 6.

The HANA repository is used to manage the application content. The repository itself is used by the
application services (XS) but belongs to the index server in general, since it is also used for other
HANA scenarios. HANA native applications persist their content in the repository and – depending on
the content type – compile artifacts into the runtime catalogs. Section 7 provides more details on the
repository.

¹ Starting with SP5, there is also the option to run the XS Engine in embedded mode within a "standard index
server". In this setup, there is no additional XS Engine process; instead, one of the index servers is chosen to
host the XS Engine.

As soon as the XS Engine needs read/write access to database tables and views (including repository
content) or needs to call stored procedures, a local database connection is opened and SQL
statements are sent to the local SQL processor. Since the distributed executors in the SQL processor
determine the location of the data and take care of delegating the call to the corresponding index
server, the same mechanism is used by the XS Engine. The SQL statements in that case are always
delegated to the index server that contains the data, similar to distributed processing across multiple
index servers. A detailed view of this low-level communication across index servers is shown in
Figure 2.

[Diagram: two index servers connected via TCP/IP; on the side processing the client SQL statement, the SQL layer (parsing, plan generation, …) delegates plan execution to the index server that contains the required data store (distributed execution graph); the plan executors on both sides use the low-level APIs (SQL processor, calc engine / OLAP engine / join engine, column store DML for table updates) for distributed processing, with direct usage of column store DML for data changes.]

Figure 2: cross index-server communication in detail

The above-mentioned mode, running the XS services as a separate XS Engine process (index server
variant), is currently the default mode. In addition, the XS Engine can also be configured to run in
embedded mode, using one index server as the host process for the XS Engine. In this case, access to
data on this index server is not performed via cross-index-server communication but remains local to
this index server.


[Diagram: same setup as Figure 1, but with the XS services (request handler, application container) hosted inside one of the index servers – the XS Engine configured in embedded mode to run within one of the index servers. Data access on this index server stays local; other index servers are reached via distributed execution (TCP/IP); browser/client connects via HTTP / OData through the HTTP server (ICM), HANA Studio via SQL.]

Figure 3: XS Engine embedded mode

The embedded mode is beneficial for single-node installations running XS applications that operate
primarily on the database with only little logic on the XS layer. For distributed setups, or for
applications that require larger parts of their logic to run on the XS layer, separating the processes
should be considered in order to balance performance between user interaction (consumption
processing) and data processing on the database. Especially with a growing number of users and
increased consumption processing, this separation might be necessary to reach the required
scalability. When separating the processes, the cross-index-server communication needs to be
optimized to reach comparable performance.


6 HANA XS in a Nutshell
This section describes the XS Engine and its components in more detail. Figure 4 shows an overview
of the XS Engine and the various components within it that realize native applications on HANA.

[Diagram: client → HTTP / OData → HTTP server (ICM) → XS Engine (index server). Inside the XS Engine: the request handler with URL mapper and session manager (session state); the resource handlers – C++ app container, JavaScript app container (application), OData service (declarative), XMLA, search service; supporting components – configuration, application registry, outbound protocols (HTTP, SMTP), background job / scheduler, timer thread, authorization, authentication, and the repository; below, the index server "core" with session & connection manager and distributed execution (TCP/IP) to the repository index server.]

Figure 4: XS Engine components

6.1 Basic data access processing (HTTP requests)


Basic data processing touches some of the components shown in Figure 4 to handle incoming
HTTP(S) requests. Within one round trip, an HTTP request is forwarded to the request handler,
which calls the other components to process the request.

The session manager keeps track of already-known sessions (by handling cookies attached to the
requests) to make sure that a user is not forced to log on between different stateless requests. For
the first request, the authentication component is called to authenticate the user (based on e.g.
username / password)². Whether authentication is required for a certain request/path is defined for
the application/package in the .xsaccess file, which specifies the authentication method (e.g.
{ "method" : "Basic" }).

² Additional authentication means like SAML2 (using the SAP ID service as IdP) or X.509 support will be added
A session ID is created for the user session and kept in the session state (basically linking the session
ID with the user information). A session cookie (containing the session ID) is returned with the first
response to the client (as an HTTP-only cookie) and used to identify the session – and thereby the
user – for subsequent requests.
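For illustration, a minimal .xsaccess file could look as follows (the exact set of supported keys is version-dependent; the "exposed" flag is an assumption here, only the "method" key is taken from the text above):

```
{
    "exposed" : true,
    "authentication" : { "method" : "Basic" }
}
```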

The URL mapper resolves the URL of the HTTP call and identifies the repository resource that is to
be executed. Once the mapping to the concrete resource is done, the URL is resolved and the file
extension of the resource is known (e.g. xsjs, odata); the corresponding resource handler³ (e.g. the
JavaScript VM for xsjs) is called after checking with the authorization component whether execution
is allowed for the user.

The application registry is a local cache in the XS Engine containing application information that is
described in repository resources (e.g. .xsapp, .xsprivileges files). Using this cache, the XS
Engine has ultra-fast access to application information (the object tree), avoiding access to resources
in the repository.

6.2 Resource Handlers ("Containers")


Depending on the URL and the file extension, different resource handlers are available to process
requests: each resource type has a dedicated resource handler assigned to it. For some resource
types, the corresponding handlers can be seen as containers processing the request, e.g. to execute
application logic in JavaScript.

Requests for static resources (such as images or HTML pages) are answered directly, without being
handled by a resource handler.

6.2.1 C++ Container


The C++ container is used to execute C++ functionality within the XS Engine. It is only used for
internal components that are shipped as part of HANA and cannot be used for application
development itself.

Components implemented using the C++ container are, e.g., the RepoBrowser, the ReplicatorApp,
Planning (RSR), and the TrustManager.

The native OData container explained below could also have been realized as a specific C++ app, but
it was decided to make it a separate native container.

6.2.2 JavaScript Container


The JavaScript container is currently the primary container to execute request/response-based
application logic on HANA. Server-side JavaScript is currently the means of native development for
applications built directly on HANA, including by developers outside SAP using XS, starting with
HANA SP5.
As an outlook, server-side JavaScript will be supplemented by RDL (see 8.2) as the primary
development-language frontend for developers, compiling into native HANA artifacts (e.g. SQLScript,
server-side JavaScript).

³ Handlers are e.g.: XSJS, XSCFUNC, XSODATA, REPO (static content)

The JavaScript container makes use of a pool of JavaScript VMs using Mozilla SpiderMonkey as the
runtime environment. To execute sources, JavaScript files (xsjs) are compiled into bytecode that is
passed to a JavaScript VM for execution. This is shown in Figure 5.

[Diagram: JavaScript bytecode (compiled XSJS sources) is routed for execution to one of the JavaScript VMs (Mozilla SpiderMonkey) from the pool inside the JavaScript app container.]

Figure 5: JavaScript VM – execution of JS logic in HANA

Within the JavaScript code, the JavaScript syntax and the common functions available in JavaScript
and the SpiderMonkey runtime can be used. In addition, application developers want and need to
access specific contexts, components, and services of HANA and the XS Engine. To enable this, a
specific JavaScript object "$" is introduced by the XS Engine; it contains further components
(JavaScript objects) such as "db", "request", "response", "http", "config", "repo", "util", etc. Each of
them has specific functions and sub-objects for the corresponding component.

An example of JavaScript code using the JavaScript objects provided by the XS Engine is shown in
Figure 6.

Example JavaScript code accessing the XS context for db and response:

var conn = $.db.getConnection();
var pstmt = conn.prepareStatement("select * from DUMMY");
var rs = pstmt.executeQuery();

$.response.contentType = "text/plain";
if (!rs.next()) {
    $.response.setBody("Failed to retrieve data");
    $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
} else {
    $.response.setBody("Response: " + rs.getString(1));
}

rs.close();
pstmt.close();
conn.close();

Figure 6: JavaScript example

Behind the scenes, the XS Engine creates the JavaScript objects for the Mozilla SpiderMonkey
runtime and links them to corresponding C++ objects of the XS Engine and index server runtime. The
same happens with the JavaScript functions attached to the different objects (like
$.db.getConnection()). These functions are mapped to corresponding C++ methods that execute the
required functionality. In addition, JavaScript privates are attached to the corresponding objects;
they keep the context needed to transport required objects within the corresponding session
between the different JavaScript and C++ functionalities. Figure 7 shows this relation of JavaScript
and C++ artifacts.

[Diagram: a JavaScript call such as $.db.getConnection() resolves against the "$" JavaScript object tree (db, request, response, http, config, repo, util, …), which is provided by a C++ container object; each JavaScript function is a wrapper that returns a JavaScript object and maps to a C++ method (here: get the PTIME connection); JavaScript privates keep the execution context (pointers to request, response, session, …) to map from the JavaScript runtime (SpiderMonkey) back to the C++ objects for execution of the C++ methods behind the JS wrappers.]

Figure 7: JavaScript Objects and binding to C++ Objects/methods

In addition, the JavaScript functions in turn return JavaScript objects (again bound to C++ objects and
methods) that have further components and functions (like conn.close() in the example above).

Server-side JavaScript logic used as an entry point from e.g. HTML pages is implemented in *.xsjs
files. It is also possible to implement libraries in JavaScript that can be used in other xsjs files. These
libraries are contained in *.xsjslib files containing the corresponding functions. When using such
a library, the corresponding lib file needs to be imported using $.import(). This function reads the
corresponding library file from the repository (or the cache) and makes it available in the
SpiderMonkey runtime. After importing the file, the library functions can be used.

function someFunction() {
    $.import("sap.reuse.logic", "reuseLib");
    // execute a function from the library
    var result = $.sap.reuse.logic.reuseLib.doReuseLogic();
}
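The library side of this example is a file with plain top-level functions; after $.import(), the XS Engine exposes them under the package path (here $.sap.reuse.logic.reuseLib). A minimal sketch of the (hypothetical) library file, with an invented return value for illustration:

```javascript
// File reuseLib.xsjslib in package sap.reuse.logic (hypothetical names,
// matching the $.import("sap.reuse.logic", "reuseLib") call above).

// Top-level functions of an xsjslib become callable as
// $.sap.reuse.logic.reuseLib.<function>() after the import.
function doReuseLogic() {
    return "reuse logic executed";
}
```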


6.2.3 OData (Native Support – Declarative Service Definitions)


The XS Engine provides native support for OData. OData is used as a protocol (on top of HTTP) to
connect consumption clients (e.g. browsers or mobile applications). OData exposes resources (data)
and operations on these resources.

hello_odata.xsodata
service {
    "hello.odata/otable" as "Entries";
}

The example shown above exposes a table otable in package⁴ hello.odata as an OData service for
external consumption. The OData service thereby contains all table columns as corresponding data
elements (which means the table is exposed as an OData resource to read all its content). With the
addition as "Entries", the OData result table is renamed accordingly.

In addition to tables, views (in the different variants supported by HANA) can be exposed as OData
resources in the same way.
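Following the same pattern, a single service file can expose several objects, for example a table and a view side by side (the object and alias names here are hypothetical):

```
service {
    "hello.odata/otable" as "Entries";
    "hello.odata/oview"  as "EntryView";
}
```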

Any HTTP client can consume the OData service using a URL constructed from host, port, package
name, and service name, e.g.:

http://hanaxs:1234/hello/odata/hello_odata.xsodata

Following the REST / OData principle, the content of the corresponding data can be accessed via the
URL:

http://hanaxs:1234/hello/odata/hello_odata.xsodata/Entries

While the OData infrastructure is currently based on OData version 2.0, the current implementation
does not yet cover all features specified in the standard. The OData resource handler currently
supports a set of resource types (e.g. service documents, entity sets, $metadata, etc.) and query
parameters (e.g. $format, $filter, $orderby, …)⁵. Additional OData capabilities like $expand
for association/link navigation will be added in the next development phases.

⁴ Content definitions are stored in the repository, structured in packages reflecting a virtual folder hierarchy
(see also section 7, Repository)

At the current stage, only read-only OData services can be exposed; they are defined entirely
declaratively by a corresponding model (an *.xsodata file in the repository). Other demands need
to be implemented individually using JavaScript.
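Building on the example service above, the supported query options are appended to the entity-set URL in the usual OData way; for instance (the column names NAME and ID are hypothetical):

```
http://hanaxs:1234/hello/odata/hello_odata.xsodata/Entries?$format=json
http://hanaxs:1234/hello/odata/hello_odata.xsodata/Entries?$filter=NAME eq 'foo'&$orderby=ID
```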

For the next development waves, write scenarios shall also be supported via OData. This includes
exits (hooks) to add application implementation logic to the OData services. This shall be possible
for read and write scenarios (for read-only scenarios, this can already be achieved by using calculated
views).

Current Limitations (SP5)

Currently, only read-only OData services are natively supported. Write scenarios need to be
supported (planned for the next development cycle), including the ability to hook in application
implementation logic to process data.
Besides hooks for write scenarios, additional application-level services shall be exposable via
OData. This links OData to the programming-language layer, e.g. exposing services defined as
JavaScript functions, SQLScript procedures, or RDL actions (at runtime, the corresponding compiled
artifacts).
Note: hooks for implementations are also required for read-only services – in this case, this can
currently be done using calculated views. Hooks shall enable linking application logic (e.g. SQLScript
procedures, JavaScript functions, RDL actions) into the processing of OData service requests.

6.2.4 XMLA
XML for Analysis (XMLA) is the XML representation (protocol) for MDX (MultiDimensional
eXpressions – a query language for OLAP databases⁶). MDX is a query language like SQL, adding a
calculation language and covering multi-dimensional aspects. XMLA provides the benefit that no
specific client component (like JDBC, ODBC, etc.) is required⁷.

MDX is natively supported by HANA SQL connections using the MDX engine. Incoming MDX requests
are handled by the MDX engine, which parses the statement and translates it into a calculation
scenario for execution.

XMLA uses web services to enable platform-independent access to XMLA-compliant data sources for
Online Analytical Processing (OLAP). XMLA enables the exchange of analytical data between a client
application and a multi-dimensional data provider working over the web, using a Simple Object
Access Protocol (SOAP)-based XML communication application programming interface (API)⁸.

The specification defined in XML for Analysis Version 1.1 from Microsoft forms the basis for the
implementation of XMLA. The XMLA specification defines two web service operations:

 Discover
This operation is used to query metadata.

⁵ Further examples and available OData abilities can be found in the documentation:
http://hanaxsdev.wdf.sap.corp:8025/sap/hana/xs/docs/index1.html
⁶ Wikipedia: http://en.wikipedia.org/wiki/Multidimensional_Expressions
⁷ See also: http://msdn.microsoft.com/de-de/library/ms187178(v=sql.90).aspx
⁸ See also: http://hanaxsdev.wdf.sap.corp:8025/sap/hana/xs/docs/index1.html

 Execute
This operation is used to execute MDX commands and receive the corresponding result set.
Options can be set to specify any required XMLA properties, e.g. to define the format of the
returned result set.
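The wire format of these operations is defined by the XMLA 1.1 specification: the operation is wrapped in a SOAP envelope and carries the MDX statement plus XMLA properties. A sketch of an Execute request (the MDX statement and cube name are invented for illustration):

```xml
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <!-- hypothetical MDX statement -->
        <Statement>SELECT [Measures].MEMBERS ON COLUMNS FROM [MyCube]</Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```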

In the context of the XS Engine, XMLA is – similar to OData – added as a natively supported resource
handler. This handler basically exposes the available MDX abilities of HANA via XMLA, using the
web-server and HTTP connectivity abilities of the XS Engine.

To enable XMLA processing via the XS Engine, the XMLA service needs to be exposed. This happens
by adding a file with the suffix .xsxmla to the XS application.

myXMLAservice.xsxmla
service {*}

Currently, this file exposes all authorized data and content via XMLA requests. This is not sufficient,
since it does not allow any isolation or controlled exposure of services. Further exposure
declarations and authorization definitions need to be added to this service definition file.

By registering the *.xsxmla file above, XMLA service requests can be submitted to the XS Engine
via HTTP(S), passing the XMLA as the XML body of the request. The *.xsxmla file thereby serves
two purposes: on the one hand it defines part of the URL path for the service
(<package path>/<file name>.xsxmla), and on the other hand the request handler of the XS Engine
can identify by the file extension that the call is to be forwarded to the XMLA resource handler.

The XML body of the request is forwarded without further processing to the XMLA resource handler,
which translates the XMLA format to MDX and directly calls the HANA MDX engine. The MDX engine
uses the calc engine for data processing. Results of the query are converted back from MDX format
into the XMLA representation and returned as the response body of the HTTP(S) request.

[Diagram: an HTTP(S) call containing an XMLA body reaches the request handler in the XS Engine, which forwards it to the XMLA resource handler; the handler calls the HANA MDX engine, which uses the HANA calc engine and data store for processing.]

Figure 8: XMLA handling


6.3 Additional Components


6.3.1 Configuration
Application developers define the configuration files containing the configuration data structure and
the settings. The files are stored in the repository as resources (like other resources). Using the
extension mechanism of the repository, partners and customers can adjust the configuration and
thereby apply changes to the application accordingly (e.g. default values or value-help settings).

Basically, two files are required to define a configuration. One file with the extension .xscfgm
contains the configuration structure definition, and another file with the extension .xscfgd defines
the actual configuration values for the defined elements. Examples of both files:

configFileName.xscfgm⁹

int32 maxCount;
list<string> names;
struct myStructType {
    string name;
    int32 counter;
};
myStructType complexVar;

The definition of the configuration structure follows the principle <type> <elementname>; to
express the configuration structure. Element names have to be unique within a configuration file. The
configuration model is defined as part of the application and is not to be changed by partners or
customers (besides possible extensions that might be applied similarly to other structure extensions).

The corresponding definition of configuration parameters looks like:

configFileName.xscfgd
implements package/configFileName.xscfgm;
maxCount = 9;
names = ["one", "two", "three"];
complexVar.name = "foo";
complexVar.counter = 3;

The configuration component provides functions (in JavaScript) that allow applications to access
configuration settings. This API can e.g. be called from JavaScript code; in this case, a JavaScript
object is returned that corresponds to the structure of the configuration elements defined in the
file (see below).

var myconfig = $.config.getObject("package", "configFileName");

if (myconfig.maxCount === 9) {
    // do something
}

⁹ Examples can also be found in the documentation:
http://hanaxsdev.wdf.sap.corp:8025/sap/hana/xs/docs/index.html

The configuration files are stored in the repository together with the application. Access to the
configuration settings – as shown in the example above – is done via an API, providing the package
and file name of the configuration file (*.xscfgd). The structure definition of the configuration file
(*.xscfgm) may reside in a different package than the actual data file; this is resolved by the data
file pointing to the corresponding definition using the implements statement.

Note and recommendation for adjustment: as shown in section 8, CDS and the CDS DDL format are
introduced to express application structures (including e.g. default values). The format for
configuration files shown above expresses similar aspects (data structure and the settings/values).
Since CDS DDL is used elsewhere for application structure definitions, the same format should also
be used to express the structures of configuration files, including their value settings. The change to
the CDS format is planned for the next development cycle.

In addition to the configuration settings described above, it is also possible to import table
content, i.e. to fill tables with default content (e.g. shipment of master data or of configuration
that is not statically bound to a model but based on a DB table and its content). To import table
content, several files10 are defined:

 A DB table is created in some schema (using the schema definition via a *.schema file
and the table definition via a *.hdbtable file11).
 A *.csv file is added to the repository, containing the table content to be imported as a
comma-separated list.
 A table import model (*.hdbtim file) defines the path to the DB table (as defined in the
files above) and a placeholder for the path of the table content file (csv).
 In addition to the table import model, a table import data file (*.hdbtid) is defined that
binds to the table import model definition on the one hand and, on the other, defines the
concrete paths (package + file) in the repository for the content files that are to be
imported.

The indirection of table import model and table import data allows binding to another data content
(csv) file using the repository extension mechanism.
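As an illustration, the content file itself is plain CSV. A minimal sketch for a hypothetical two-column currency table (the column order must match the table definition):

```csv
EUR,Euro
USD,US Dollar
JPY,Japanese Yen
```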

On activation of the files, the csv file is imported into the defined table. Access to the content is
done via standard table access using regular SELECT statements, e.g. from JavaScript via
$.db.getConnection() and conn.prepareStatement( <select statement> ).

6.3.2 Background (Scheduled) Jobs


Note: the enablement of background jobs in XS-Engine is currently being developed and still subject
to change.

Beside the option to run logic on the XS Engine initiated via HTTP (web) requests, there is also the
option to run logic via background jobs. At the current state, only JavaScript functions can be
executed on a timer-based trigger.

10 Example can be found at: http://hanaxsdev.wdf.sap.corp:8025/sap/hana/xs/docs/index1.html?path=5.9
11 To be switched to CDS entity definition

Each XS session carries in its context the information whether it is a web session (which also allows
access to request/response objects) or a background session, where these objects are not available
(since the session was not started via HTTP).
[Figure 9 shows the central timer thread (IndexServer) triggering the background manager and the
XS background threads via registerTimer( ms ) registrations and timeoutReached() callbacks; the
background manager manages the pool of XSBackground threads, and each thread executes compiled
xsjs batch jobs in a JavaScript VM (Mozilla SpiderMonkey) within the JavaScript app container.]

Figure 9: Scheduling of Background jobs and execution of JavaScript logic

The background job component consists of a background manager and a set of background threads12
that are scheduled per defined job. The background manager itself is a thread that is triggered via
the central timer thread infrastructure (from HANA basis). The background manager registers itself
for a certain interval13 via the method registerTimer( <interval> ). Once the timeout is reached,
the timer thread calls the method timeoutReached() on the background manager.

When triggered, the background manager retrieves the list of active XS background threads (those it
has scheduled) and reads the background job registry table on the HANA DB to determine which jobs
shall be scheduled or stopped.

Comparing both lists, there can be several situations:

 A registered job is already scheduled and a thread is available: the background manager
keeps this job unchanged and continues with the next registered job.
 A registered job is scheduled and a thread is available, but the job parameters have changed:
the background manager changes the thread parameters to the ones read from the registry
and thereby adjusts the scheduled thread.
 A registered job is found but no thread is registered: a new background thread is created and
registered with the timer component using registerTimer( <interval> ). At the same time,
the job parameters are passed to the background thread.
 A thread is found but the corresponding registry entry is missing: this indicates a job that has
been removed from the registry but is still scheduled in the system. The background manager
terminates the thread.
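The four cases can be sketched as a small reconciliation function (an illustrative model of the described behavior, not the actual C++ implementation; all names are made up):

```javascript
// Illustrative sketch of the background manager's reconciliation between the
// job registry table and the set of scheduled background threads.
function reconcileJobs(registry, scheduled) {
  const actions = { start: [], update: [], stop: [] };
  for (const [job, params] of Object.entries(registry)) {
    if (!(job in scheduled)) {
      actions.start.push(job);            // registered, but no thread yet
    } else if (JSON.stringify(scheduled[job]) !== JSON.stringify(params)) {
      actions.update.push(job);           // thread exists, parameters changed
    }                                     // else: unchanged, nothing to do
  }
  for (const job of Object.keys(scheduled)) {
    if (!(job in registry)) actions.stop.push(job); // removed from registry
  }
  return actions;
}
```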

12 The background manager creates/requests as many background threads as required for the defined set of
jobs. There is no limit assumed, but there might be a limit imposed by the general HANA thread handling.
13 Currently 10 seconds – could be changed to any other interval.

The individual XS background threads are likewise timer threads that are registered via
registerTimer( <interval> ) and are called via timeoutReached() once the timeout is
reached.

The handling of each scheduled job happens within its XS background thread. In the current
implementation, the user is taken from the HANA configuration settings (the *.ini file at
/hdb/custom/config that contains the settings for background handling). The default user – if no
specific user is configured – is XSBACKGROUND. The user needs to have the authorizations required to
perform the different jobs. The central entry point for job execution is handleBackgroundRequest(),
which receives the package name, the file name (xsjs), the user, and a JSON parameter string. Package
name and file name (xsjs) are used to retrieve the compiled JavaScript file from the repository. The
JSON parameter string is stored in the job registry table (static parameters) and is passed to the
JavaScript function.

Within the handling, a background session context is set, the URI tree is parsed, and the repository is
called to get the compiled XSJS file. A JavaScript VM is requested (as a one-time container) per job,
and the JavaScript code, context, and parameters are passed for execution.

The convention for background jobs is currently that the function signature (and name) in the XSJS
file must be xsJob( param ).

function xsJob( param ) {
    // ...
    if ( param.someElement == foo ) {
        // ...
    }
}

Note: the fixed name of the JavaScript function (here: xsJob( param )) should become more flexible
in future evolutions of the background job handling by allowing the function name to be specified in
the configuration.

Exceptions that are thrown in the JavaScript function are handled by the XS background job in such a
way that they are written to the trace. The same applies to values returned from the function. At a
later point, the returned JSON could be added to a background log14.

At the current stage, the maintenance of the background jobs (the DB-table registry) is done manually
via the HANA Studio by adding job definitions to the table _sys_xs/batch_jobs. One could imagine an
HTML5 UI (as an XS Engine app) for maintaining the job definitions in the registry and for viewing the
job logs. Native HANA Studio UIs for administration/configuration, similar to other HANA Studio admin
functionalities, are also conceivable.

14 Adding the return values to the background log is planned; the background log will be accessible via an SQL
monitoring view.

6.4 XS Applications
XS applications consist of a set of resources within a package (or a package hierarchy) in the
repository. Beside the application content (e.g. xsjs, html, … ), there are resources that define
metadata for the XS engine around the application:

.xsapp
Each application must have an application descriptor file (a file without a name, with the extension
.xsapp). The application descriptor is the core file used to describe an application's availability
within SAP HANA Application Services. The package that contains the application descriptor file
becomes the root path of the resources exposed by the application. If the package sap.test contains
the file .xsapp, the application is available under the URL http://<host>:<port>/sap/test/.

For compatibility reasons, the file content must be valid JSON (e.g. {}), but the content itself is not
used for processing.

.xsaccess
The application access file (a file without a name, with the extension .xsaccess) defines access
rules for the application. Like the .xsapp file, the .xsaccess file uses the JSON format.
.xsaccess files can be located in the root package of the application or in any sub-package. When
the application is accessed via a URL, the closest .xsaccess file is used, searching from the
addressed sub-package up the package hierarchy to the root package.

The .xsaccess file defines the exposure of data (content) to clients, the authentication method,
the application privileges required to access the content of the package and its sub-packages,
URL rewrite rules, and whether SSL is required.

The URL rewrite rules allow exposing different URL paths to clients, which are mapped to other
packages of the XS application. This avoids exposing the inner application structure in external
URLs. However, complete URL aliases are not possible; only a rewrite from one (sub-)package within
the application to another is supported.
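As an illustration, a .xsaccess file might look as follows. This is a hedged sketch: the exact key names and value formats depend on the release, and the rewrite rule shown is hypothetical:

```json
{
    "exposed": true,
    "authentication": { "method": "Form" },
    "force_ssl": false,
    "rewrite_rules": [
        { "source": "/api/(.*)", "target": "/internal/impl/$1" }
    ]
}
```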

.xsprivileges
The .xsprivileges file allows defining application privileges that can be used for access
authorization in the .xsaccess file or checked programmatically to secure parts of an application.
Multiple .xsprivileges files are allowed, but only at different levels in the package hierarchy. The
privileges defined in a .xsprivileges file are bound to the package the file belongs to and
can only be used in this package and its sub-packages.

A privilege is defined simply by specifying an entry name with an optional description. This entry
name is then automatically prefixed with the package name to form the unique privilege name.
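The naming convention can be illustrated with a small helper (hypothetical; XS performs this prefixing automatically on activation, there is no such public API):

```javascript
// Illustrates how an entry name from .xsprivileges is combined with the
// package name to form the unique application privilege name.
function fullPrivilegeName(packageName, entryName) {
  return packageName + "::" + entryName;
}
```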

Since a general authorization concept that also covers application privileges, data context filters,
roles, and role assignments to users is currently being worked out for HANA, the privileges
mentioned above will transition into that concept over time.


6.4.1 Accessing the Database from XS applications


XS applications written in server-side JavaScript are assumed to primarily delegate data
processing to the HANA database. Therefore, the built-in application services expose JavaScript
functions to perform data processing calls via SQL or to call stored procedures (SQLScript).

SQL on the HANA DB is called by obtaining a DB connection and passing the SQL statement as a string
to the connection's prepareStatement() function. The prepared statement can be executed to run
the SQL command on the database and to receive the result (e.g. for queries). After processing, the
statements and connections are closed.

Example JavaScript code calling SQL on the HANA DB.

var conn = $.db.getConnection();
var pstmt = conn.prepareStatement( "select * from DUMMY" );
var rs = pstmt.executeQuery();
// ... iterate over the result set, e.g. while ( rs.next() ) { ... }
rs.close();
pstmt.close();
conn.close();

As an alternative to pstmt.executeQuery(), it is also possible to run pstmt.execute(). For data
changes on the database, a separate call to conn.commit() is required before the connection is
closed.

For execution of stored procedures, there is the possibility to use the function prepareCall().

Example JavaScript code calling a SQLScript procedure on the HANA DB.

var conn = $.db.getConnection();
var pc = conn.prepareCall( "{CALL <procName>(params) WITH OVERVIEW}" );
pc.execute(); // execute the procedure
var result = pc.getInt(2); // e.g. get the result integer value
pc.close();
conn.close();

6.5 User Management, Authentication & Authorization


When building native HANA applications using application services (XS), one also needs to consider
user management, authentication, and authorization. In general, these are HANA-wide topics and not
bound to XS. Nevertheless, there are some aspects of how to deal with user management and
authentication/authorization when writing XS applications.

6.5.1 HANA User Management


SAP HANA contains basic user management that allows creating users on HANA (e.g. via the HANA
Studio), providing a password in addition to the username. There are currently no further user
settings (e.g. preferred language), nor is there a distinction between different types of users (e.g.
system/technical users, application users, …). For technical users – even if they are not classified as
such – it is possible to suppress regular password changes.

In the context of HANA native applications, the user management might need enhancements, e.g.
supporting e-mail addresses15 as user names (to enable the usual web experience for application
logons), introducing user aliases (which could be changed without changing the internal user
identifier), and enabling user creation embedded within an application (without using the HANA
Studio)16. The latter is especially relevant for self-contained applications where the creation of
users shall not require an administrative step (which might not even be possible in HANA cloud
environments).

6.5.2 Authentication
Authentication is in general the process of validating the identity of a user. HANA supports
different types of authentication:

 Basic authentication using username/password
 Kerberos authentication via Kerberos ticket and a Kerberos authentication provider (a
Kerberos server)
 SAML authentication (“bearer” assertion)

The authentication methods mentioned above can be used when authenticating a user at a HANA
system, e.g. for database clients.

In the context of XS, basic authentication, form-based authentication, and SAP logon tickets are
supported. The required authentication method for an XS application can be defined in the
.xsaccess file mentioned above in section 6.4.

Additional authentication methods like X.509 certificates and SAML2 are planned to be added for
SPS6. In addition, SPNego (Kerberos over HTTP) and OAuth are planned.

6.5.3 Authorization
Authorization mechanisms ensure that users can only perform the actions they are allowed to
perform. HANA authorization mechanisms (SQL privileges, analytic privileges) specify who is allowed
to access which data for which activities.

In general, privileges can be granted to (or revoked from) users based on database objects like
tables, views, and procedures. This is done via the GRANT (or REVOKE) command defined in the SQL
standard.

GRANT SELECT on <table> TO <user> [with GRANT OPTION];

Using this mechanism, it is possible to grant users access to defined database objects or execution
privileges for stored procedures, but it is not possible (without workarounds) to grant more
fine-granular privileges on e.g. table content (e.g. a user is only allowed to see orders from the US).

15 E-mail addresses are currently not enabled as user names due to limitations in the allowed character set for
user names.
16 Admin users could also use the SQL command CREATE USER … from within an application, but this still has
some limitations from an application point of view for reaching fully embedded user creation and management.

For analytics, HANA provides specific analytic privileges that allow selective access control
(e.g. on value ranges of a dimension of an OLAP cube) for read-only cases. These selective privileges
are generated when modeled views are activated. Technically, these privileges are not handled via
the database privileges but are converted during activation and plan generation into dedicated
WHERE clauses filtering the query results.
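Conceptually (illustrative only, with a hypothetical view and attribute), restricting a user to COUNTRY = 'US' takes effect as if the query were rewritten during plan generation:

```sql
-- what the user submits
SELECT "COUNTRY", SUM("AMOUNT") FROM "SALES_VIEW" GROUP BY "COUNTRY";

-- what is effectively executed after the analytic privilege is applied
SELECT "COUNTRY", SUM("AMOUNT") FROM "SALES_VIEW"
  WHERE "COUNTRY" = 'US' GROUP BY "COUNTRY";
```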

Beside these basic authorization checks, XS applications define additional privileges via the
.xsprivileges file (see 6.4) and check them in their application code. A definition of privileges
looks like:

Package: sap.example.source, file: .xsprivileges

{ "privileges" :
  [
    { "name" : "Execute", "description" : "basic execute permission" },
    { "name" : "Admin", "description" : "admin permission" }
  ]
}

With this, the created privileges are sap.example.source::Execute and
sap.example.source::Admin.

The privileges can be assigned to users via

CALL "_SYS_REPO"."GRANT_APPLICATION_PRIVILEGE"( '"sap.example.source::Execute"', '<user>' );

For XS applications, a "start authorization" is checked automatically by the system. Within the XS
application, it is currently required to explicitly check the permission of a given user if certain
application logic shall be bound to a given application privilege. The corresponding JavaScript code
looks like this:

Package: sap.example.source, some XSJS code checking the privilege

if ( $.session.hasAppPrivilege( "sap.example.source::Admin" ) ) {
    // logic requiring the Admin privilege
}

To allow authorization definitions on data records (value filters) and to support such application
privileges, an authorization concept is currently being worked out and is about to be implemented
in HANA. It adds the possibility to define privileges and roles and to assign these privileges/roles
to users. The checks are then performed by the engines, avoiding explicit code for authorization
checks. The same applies to value filters for database records, which can be defined as part of the
roles within this concept and are checked by HANA when accessing the data (basically using a similar
approach to the analytic privileges mentioned above).

This means the current privilege mechanism needs to be extended to support the requirements of
future XS-based applications. Major topics that have to be addressed are:

 enabling more advanced support for write/change authorizations
 better support for authorizations on the application level (e.g. to support “workflow”-type
authorizations) without the need for programming

 awareness of the context of the application or request for authorization decisions

The goal is to have one consistent authorization concept across database and XS with a clear security
model to cover all types of access to the data. The authorization model will also be aligned with CDS
definitions to ensure a consistent and manageable security model.


7 Repository
To realize native apps on HANA, the HANA repository is used to manage the different resources
(static content, libraries like the SAPUI5 library, or the defined application resources containing
stored procedures, table (entity) definitions, JavaScript code, RDL files, and the like) that
altogether define the behavior and appearance of the application. Figure 10 shows an overview of
the repository, which is explained in this chapter.

[Figure 10 shows developer tools – the HANA Studio TeamProvider, the command-line tool Regi, and
other text editors working on the file system – accessing the repository via its design time API.
Within SAP HANA, the repository core offers package and delivery unit management, export/import,
authorization, and object/dependency management; type-specific plug-ins and the activator
deploy/compile resources and determine references/texts; activation writes to the DB catalog and
the CDS catalog extensions (catalog++). The repository persists object versions and metadata in
generic runtime tables plus type-specific runtime objects (e.g. compiled XSJS bytecode, procedures,
tables/views); content can be exchanged as repository archives (ZIP).]

Figure 10: HANA Repository

7.1 Basic components


The repository manages source files (resources) of different artifacts that are used in the HANA
context. The files are organized in a folder-like hierarchy (via packages, e.g. sap.crm.sales) where
files (filename + extension) are managed. The files basically contain content whose type is indicated
by the file extension (like xsjs for JavaScript code executed by the XS Engine). For content like CDS,
there is the convention that the file name is the same as the top-level artifact within the file, to
allow omitting the file name and its extension in references.

The repository keeps versions of the different resources, separating inactive, active, and history
versions into different areas (tables) of the repository. Files that are saved to the repository are
inactive. Inactive versions are only visible to the user who created them (via an inactive
workspace). There can also be multiple inactive versions of the same file in the same repository;
multiple inactive versions can be created by one user or by multiple users. To distinguish multiple
inactive versions of one user, inactive versions are uniquely assigned to a workspace, which
developers can use for different parallel tasks. Once a file is activated, it is compiled into the
runtime artifacts (if required) and its status turns active. The previously active version is moved
to the history to keep track of previous versions. There can be multiple history versions but only
one active version at a time.
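The lifecycle described above can be sketched as a toy model (illustrative only, not a repository API):

```javascript
// Toy model of the version lifecycle: saving creates an inactive version in a
// workspace; activating makes it the single active version and moves the
// previously active version to the history.
function createResource() {
  return { active: null, inactive: [], history: [] };
}

function save(resource, workspace, content) {
  resource.inactive.push({ workspace, content });
}

function activate(resource, workspace) {
  const idx = resource.inactive.findIndex(v => v.workspace === workspace);
  if (idx < 0) throw new Error("no inactive version in workspace " + workspace);
  if (resource.active !== null) {
    resource.history.push(resource.active); // previous active version → history
  }
  resource.active = resource.inactive.splice(idx, 1)[0]; // one active at a time
}
```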

The repository is able to manage delivery units to organize content for shipments of solutions
(Figure 11 shows the relationships of the different concepts). The import/export functionality is
used along the lifecycle to export resources and delivery units for shipment (as archives) and to
import the archives into target systems. The software logistics around those archives happens
outside the repository and HANA. A transport management application (on XS) is planned.

[Figure 11 shows the repository content structure: a delivery unit contains packages (1:*), and a
package contains resources (repository objects, 1:*). Each resource has one active version, multiple
inactive versions, and multiple history versions; inactive versions belong to a workspace, which is
owned by a user and used within an inactive session.]

Figure 11: Relationships of Repository Concepts

During development, access to the repository happens e.g. via Regi – a client tool that interacts
with the repository design time API to read and write files from/to the repository and to activate
the files. The repository API is resource-based. Technically, Regi uses a SQL connection and the
procedure REPOSITORY_REST to access the repository. This can be seen as a REST-like paradigm, even
if it does not use the typical REST access via HTTP.
Regi is embedded into the HANA Studio via the TeamProvider (which uses Regi behind the scenes), or
Regi can be used manually via the command line if development happens natively with any editor,
using Regi to synchronize, commit (save), and activate files.

Dependencies between resources are determined by the developer tools, which know the type-specific
content of the resources. The repository itself treats the resources in a generic way. The
dependencies are reported to the repository via the corresponding runtime plug-in, which is called
to determine them. This enables the repository to manage dependencies17 between resources, which
is especially important for the activation of resources that typically requires a certain order
respecting the dependencies. For activation, the different content types have specific plug-ins
that contain the logic for parsing the file contents and generating/compiling the corresponding
runtime artifacts.

17 Dependency types known to the repository are currently: “regular dependency”, “extension” and
“compiledFrom”.

Dependencies of resources depend on the version of the source file, since other versions might add
or remove dependencies. On the target side, the dependencies refer to a repository object (not a
concrete version). The version for the target object is based on the visible version of a certain
session. Repository sessions are specified when connecting to the repository; the type of the
session (active, inactive, historical) determines which version is retrieved when accessing a
repository object.

7.2 Type Specific Plug-ins – Resource Parser/Compiler


The repository manages different kinds of content. Static content (e.g. images, HTML files) is not
compiled into runtime artifacts; the files are used directly as stored in the repository and
interpreted by some other environment (e.g. HTML in browsers). As soon as extensions to such
objects need to be supported, corresponding runtime plug-ins and a runtime persistency need to be
available to manage the extended content.

Other artifacts like procedures, tables, views, xsjs files, etc. are translated into corresponding
runtime artifacts. For this purpose, each file extension has a corresponding type-specific plug-in
that takes care of checks (syntactical correctness)18 on the one hand and of compilation into a
runtime artifact on the other.

For database artifacts like tables, views, procedures and the like, the corresponding plug-ins parse
the file content and generate the corresponding artifacts in the HANA catalog.

JavaScript files (xsjs) are also checked and compiled, but they are stored not in the HANA catalog
but in a runtime object persistency area within the repository19. The JavaScript sources are compiled
into byte code that is executable in the JavaScript container (Mozilla SpiderMonkey).

CDS artifacts (the DDL part) enrich native constructs like tables and views with advanced elements
like associations and type definitions. The CDS plug-in generates the native HANA catalog entries
(tables, views) from the CDS artifacts and, in addition, persists the additional information (like
association information) in a catalog extension (called catalog++). The usage of a catalog extension
is, in the short term, a risk mitigation to avoid impacting the HANA catalog and attached runtimes
at this early stage. The integration of these extensions into the HANA catalog is planned.

18 Checks are done via the generate() method followed by a rollback, to ensure that check and compile are
consistent.
19 The runtime persistency should not be part of the repository but should be a type-specific runtime
persistency similar to the HANA catalog. This is planned.

Figure 12: CDS compiler / Plug-ins within HANA

The CDS runtime plug-in implements the repository runtime plug-in interface. The repository
provides the IDs of the resources to the plug-in; based on this information, the plug-in can decide
whether the corresponding runtime artifacts need to be created or deleted. The actual handling is
delegated to the CDS activator, which basically calls the CDS compiler (retrieve references; compile
the source content into catalog artifacts). The CDS compiler follows the common sense of compiler
construction: the frontend consists of the scanner/lexer (transforming the source file into a token
stream), the parser (building an AST from the token stream), and the analyzer (traversing the AST
into a symbol table); the backend consists of a generator that produces the output format to write
the HANA DB catalog and the CDS catalog.
The CDS catalog20 is independent of the CDS compiler and responsible for consistent CDS artifacts
and their persistency, including different read and write APIs.
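As a generic illustration of the frontend stages (this is not SAP's CDS compiler, just a minimal scanner sketch), the scanner step turns source text into a flat token stream:

```javascript
// Minimal scanner/lexer sketch: recognizes identifiers, integer literals,
// and a few punctuation characters, producing the token stream a parser
// would consume to build an AST.
function scan(source) {
  const tokens = [];
  const re = /\s*([A-Za-z_]\w*|\d+|[{}();:,])/gy; // sticky: match token by token
  let m;
  while ((m = re.exec(source)) !== null) {
    const text = m[1];
    let type;
    if (/^\d/.test(text)) type = "number";
    else if (/^[A-Za-z_]/.test(text)) type = "ident";
    else type = "punct";
    tokens.push({ type, value: text });
  }
  return tokens;
}
```

A parser stage would then build the AST from this stream, and an analyzer would fill the symbol table.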

CDS DDL files can also contain CDS QL parts e.g. for view definitions. These QL artifacts are expanded
by the CDS compiler into native SQL views and forwarded to the PTime infrastructure and the DB

20 Currently, the CDS catalog is an extension to the HANA DB catalog. Over time, it will be merged into one
catalog.

catalog (see Figure 13). CDS artifacts in RDL files are extracted and forwarded to the CDS plug-in.
CDS QL used in RDL logic is also parsed using a QL parser and expanded into standard SQL, which is
compiled into the corresponding SQLScript procedures covering the application logic for runtime
execution.

[Figure 13 shows the activation flow: the RDL plug-in parses RDL files; CDS QL within the logic is
parsed by a QL parser and expanded to standard SQL, which the procedure plug-in compiles into
SQLScript procedures (incl. SQL). The CDS plug-in parses CDS DDL files; CDS QL view definitions are
expanded into standard SQL views and created in the HANA catalog via PTime.]

Figure 13: CDS compiler – handling of QL components

Beside the CDS QL artifacts contained in the CDS data definition parts (DDL) or the RDL application
logic that is expanded to standard SQL by the repository plug-ins, it will also be possible to use
CDS QL for ad-hoc queries from any consumer, similar to the current usage of SQL. With this, the
query execution needs to be able to handle CDS QL extensions of standard SQL as well. To achieve
this, the SQL statement execution will contain a hook for the CDS QL parser (see Figure 14) that
expands the CDS QL statement inline at runtime to SQL within the PTime component; the expanded SQL
is then used for processing21 (including statement optimization and plan execution). From a consumer
point of view, this enables the usage of CDS QL as a query language extension to SQL.

With the CDS QL parser and translator as part of the query execution engine, some query expansions
that are done today within the repository plug-ins at file activation could be avoided (e.g.
expanding CDS QL into SQL for stored procedures resulting from RDL logic). The resulting artifacts
(like stored procedures) could contain CDS QL queries that are expanded and executed at runtime.
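To illustrate the kind of expansion (hypothetical entities and association; the actual join conditions are derived from the CDS association definitions), a CDS QL query following an association could be expanded roughly as follows:

```sql
-- CDS QL: path expression along the association "customer"
--   SELECT o.id, o.customer.name FROM Orders AS o;

-- expanded standard SQL (sketch):
SELECT o."ID", c."NAME"
FROM "Orders" AS o
  LEFT OUTER JOIN "Customer" AS c
    ON o."CUSTOMER_ID" = c."ID";
```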

21 As an option, the execution is intercepted and the expanded standard SQL created by the CDS QL parser and
translator is submitted as a query again. The concrete implementation depends on the integration options of
CDS QL into the PTime environment.

[Figure 14 shows a CDS QL statement entering the PTime SQL statement execution; a CDS QL
interception (parser & translator) expands it to standard SQL, which then passes through the
analyzer, the optimizer, and the plan executer against the data store.]

Figure 14: CDS QL execution at runtime – embedded into PTime query execution

RDL files (see below) are also persisted in the repository and handled in a similar way to CDS or
JavaScript files; there is a type-specific plug-in for RDL sources as well. One difference for RDL
files is that they contain several artifacts, since they define CDS entities (i.e. tables) and the
attached application logic, which translates into JavaScript or SQLScript procedures. Therefore, the
RDL plug-in checks and parses the RDL file and extracts the different artifacts. For each artifact,
the corresponding plug-in (CDS, xsjs, procedure, …) is called internally to compile the fragments
accordingly (Figure 15). This delegation to other plug-ins is currently hard-coded22.

[Figure 15 shows the activation of an RDL source: the type-specific RDL plug-in extracts the
contained CDS, XSJS, and procedure fragments and delegates them to the corresponding type-specific
plug-ins, which write to the DB catalog / catalog++ (runtime objects) and to the generic runtime
objects.]

Figure 15: RDL activation sequence (schema)

The compilation of RDL code into server-side JavaScript and SQLScript reflects the current state of
optimization. Over time, the compilation should generate more and more low-level logic (like native
execution plans or functions in L). For execution reasons in HTTP consumption, it might be necessary
to have a wrapper in JavaScript that handles the request and delegates to the generated low-level
logic. This also requires that the corresponding components currently used from JavaScript (like
outbound HTTP) are made available as functions that can be used at the low level.

7.3 Authorization Checks


Access to repository content is controlled via authorization checks. Users accessing repository
content need to have privileges assigned, and the repository checks against these privileges. The

22 The hard-coded delegation from one plug-in to other plug-ins could be avoided using a contract and factory
pattern to call other plug-ins. For a file extension, a dedicated plug-in is always registered.

assignment of repository privileges is done via the GRANT statement, like the assignment of
privileges for SQL access. Nevertheless, the authorization checks are not done on the SQL level via
the SQL engine but via specific calls from the repository runtime (C++ code) to the authorization
engine. The repository defines its own privileges for repository access, like REPO.READ (all
starting with the prefix REPO).

Privileges can be defined on repository packages (e.g. REPO.READ, REPO.EDIT_NATIVE_OBJECTS,
REPO.ACTIVATE_NATIVE_OBJECTS, REPO.MAINTAIN_NATIVE_PACKAGES) that check if users are
allowed to access certain packages within the repository. If a user has privileges on one package,
this immediately also means the user is allowed to access the sub-packages below (including all
resources that are part of those packages).
Beside the privileges mentioned above (which are typically assigned to developers), there are
additional privileges REPO.EDIT_IMPORTED_OBJECTS, REPO.ACTIVATE_IMPORTED_OBJECTS and
REPO.MAINTAIN_IMPORTED_PACKAGES that are required in some cases. These privileges shall only
be used in exceptional cases, since changes to imported objects will be overwritten by the next
import of these files. Imported objects are identified by comparing the source system information
of each package.
There are also privileges that are independent of packages and protect certain actions in the
repository: import, export, maintenance of delivery units, and accessing foreign workspaces.23

By default, a user is not able to access any package or content without having the corresponding
privilege granted. This means developers require the corresponding privileges before they can read
from the repository or create their own packages and content.

GRANT REPO.READ on "<package>" to <user> [with GRANT OPTION];
GRANT REPO.EDIT_NATIVE_OBJECTS on "<package>" to <user> [with GRANT OPTION];
GRANT REPO.ACTIVATE_NATIVE_OBJECTS on "<package>" to <user> [with GRANT OPTION];
GRANT REPO.MAINTAIN_NATIVE_PACKAGES on "<package>" to <user> [with GRANT OPTION];

Users accessing the repository via Eclipse (HANA Studio) or Regi (using the Repository-REST procedure) additionally need the EXECUTE permission on this procedure.

GRANT EXECUTE on SYS.REPOSITORY_REST to <user> [with GRANT OPTION];

Note: User SYSTEM does not require an explicit assignment of the privileges, since SYSTEM is populated with all privileges during repository setup.

7.4 Extensibility
The dependencies of resources within the repository already foresee extensibility24. The repository defines the “extends”-relation to indicate that some file defines extension aspects of another file/resource. The concrete syntax for the extensions within the resources is domain specific, but the repository knows about the dependency between a resource and its extensions. It can utilize this information at compile time to provide the resources for basic objects and their extensions to the runtime plugins, which merge a resource and all its extensions into one runtime artifact representing the extended set.

23 See also: http://trexweb.wdf.sap.corp:1080/wiki/index.php/Repository_Architecture:_Authorizations
24 Extensibility is already in use for configuration and destinations.

The extension mechanism is, for example, used to handle configuration settings, allowing partners and customers to add their own configuration files (using extension relations); the compilation step ensures the proper configuration settings for the runtime environment.
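
The compile-time merge of a base resource with its extensions can be sketched as follows. This is a deliberately simplified model (dictionaries standing in for configuration files; the function name and merge policy are assumptions, not the actual repository plug-in interface):

```python
# Simplified sketch of merging a base configuration resource with its
# extensions at compile time (illustrative only, not the actual repository
# plug-in interface). Extensions are applied in order; later values win.

def merge_extensions(base, extensions):
    """Merge a base config dict with a list of extension dicts into one
    runtime artifact. Nested dicts are merged recursively."""
    result = dict(base)
    for ext in extensions:
        for key, value in ext.items():
            if isinstance(value, dict) and isinstance(result.get(key), dict):
                result[key] = merge_extensions(result[key], [value])
            else:
                result[key] = value
    return result

base = {"destination": {"host": "localhost", "port": 8000}, "trace": "off"}
customer_ext = {"destination": {"host": "prod.example.com"}, "trace": "info"}

runtime_artifact = merge_extensions(base, [customer_ext])
print(runtime_artifact)
# {'destination': {'host': 'prod.example.com', 'port': 8000}, 'trace': 'info'}
```

A customer extension overrides only the keys it declares; everything else from the base resource survives into the merged runtime artifact, which mirrors the intent of the extension relation described above.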

8 CDS & RDL – Supporting native app development on HANA


Native application development on HANA can already be done via the HANA database capabilities and the extended services of the XS Engine. To support application developers, additional concepts are introduced to simplify application development on HANA.

Figure 16: Overview on CDS and RDL

8.1 CDS – Core Data Services


Like most databases, HANA supports SQL as the standard means to define, read and manipulate data. On top of that, pretty much every consumer technology or toolset introduces higher-level models to add crucial semantics and ease consumption (e.g. OData EDM models, the semantic layer in the BI Platform). The River programming model and RDL also follow the same pattern.

Even though those higher-level models share many commonalities, the individual information cannot be shared across stacks. This leads to a fragmented environment and a higher degree of redundancy and overhead for application developers and customers. To address that, a common set of domain-specific languages (DSLs) and services for defining and consuming semantically rich data models is specified as an integral part of HANA – called Core Data Services (CDS) – that can hence be leveraged in any consuming stack variant as depicted above.

The Core Data Services comprise a family of domain-specific languages which serve as a common
core model for all stacks on top.

 The Data Definition Language (DDL) for defining semantically rich domain data models, which can be further enriched through annotations or higher-level programming language constructs (e.g. actions, event handlers, constraints, …) when embedded into a programming language environment. DDL uses QL e.g. for view building and EL e.g. for calculated fields.
 The Query Language (QL) for conveniently and efficiently reading data based on data models as well as defining views within the data models. QL is an extension to SQL, introducing the use of e.g. associations as defined by DDL within queries.
 The Expression Language (EL) for specifying calculated fields, default values, constraints, etc. within queries as well as for elements in data models.


Beyond the above-mentioned DSLs, the Core Data Services also comprise means for writing data, expressing application-semantic constraints, transactional handling, etc.; however, these services are either not yet specified or still in the process of being specified.

The key concept of Core Data Services is to pull data modeling as well as retrieval and processing of
data to a higher semantic level close to the conceptual thinking of domain experts. This is done by
carefully extending the relational model and language of standard SQL by the following central
concepts:

 Entities with structured types – instead of flat tables only – containing elements with…
 Custom-defined/semantic types instead of being limited to primitive types only.
 Associations in the data model – replacing JOINs by simple path expressions in queries.
 Annotations to enrich the data models with additional metadata – e.g. for OData.
 Expressions used for calculations, etc. – not only in queries but also in data models.

Semantic types can be scalar or structured, as shown in the examples below. Entities, in addition to being types, also relate to a corresponding DB table that is created from the entity definition. Associations are resolved into foreign key attributes on DB level.

type Amount {
  element value : Decimal(10,2);
  element currency : String(3);
}

entity Address {
  key streetAddress : String(77);
  key zipCode : String(11);
  city : String(44);
  kind : addressType;
}

entity Employee {
  key element ID : UUID;
  element name : String(77);
  element salary : Amount; // Amount is a structured type
  element addresses : association to Address [0..*];
  element homeAddress = addresses[kind=home];
}
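
The flattening described above (structured types expanded into scalar columns, associations resolved on DB level) can be sketched concretely. The column naming and table layout below are assumptions for illustration only, not the actual CDS name mangling; SQLite is used merely to demonstrate the resulting relational shape:

```python
# Conceptual sketch of how an entity with a structured type could be
# flattened into a relational table (column naming is illustrative; the
# actual CDS name mangling may differ). A to-many association like
# 'addresses' would be resolved via foreign keys outside the entity's own
# table -- modeled here as a separate link table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Employee (
        ID                TEXT PRIMARY KEY,   -- key element ID : UUID
        name              TEXT,               -- String(77)
        "salary.value"    NUMERIC,            -- structured type Amount,
        "salary.currency" TEXT                -- flattened into scalar columns
    )
""")
conn.execute("""
    CREATE TABLE Employee2Address (
        employee TEXT REFERENCES Employee(ID),  -- resolved association
        address  TEXT
    )
""")
cols = [row[1] for row in conn.execute("PRAGMA table_info(Employee)")]
print(cols)  # ['ID', 'name', 'salary.value', 'salary.currency']
```

The key observation is that the structured element salary becomes two scalar columns, while the 0..* association contributes no column to the Employee table itself.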

As known from databases, it is also possible to define views based on entities (or other views). View definitions look like:

define view EmployeesView as SELECT from Employee {
  ID,
  name
}

Based on entities or views defined via CDS DDL (as shown above), queries can be defined via CDS QL, which as an extension of SQL allows making use of e.g. association information. This allows developers to formulate complex queries in a significantly shorter form, expressing the intent and reducing the potential for errors.

SELECT id, name, homeAddress.zipCode FROM Employee WHERE ...


Note that the semantics are straightforward to understand. The projection makes use of the association to navigate to the address entity and gets the element zipCode as an attribute in the result set. The query is equivalent to the following statement in standard SQL.

SELECT e.id, e.name, a.zipCode FROM Employee e
  LEFT OUTER JOIN Employee2Address e2a ON e2a.employee = e.id
  LEFT OUTER JOIN Address a ON e2a.address = a.id AND a.type='homeAddr'
WHERE …

Note: currently, CDS QL is expanded by the compilers into the standard SQL statement for execution. This intermediate step is to be removed as soon as the HANA SQL engine natively supports the SQL extensions introduced by CDS QL (see also section 7.2).
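
How a compiler might expand an association path expression into a standard SQL join can be sketched as follows. This is a deliberately naive, hypothetical translator hard-wired to the Employee example (the real CDS compiler handles arbitrary models, filters and nesting):

```python
# Naive sketch of expanding a single association path expression such as
# "homeAddress.zipCode" into LEFT OUTER JOINs (illustrative only; the real
# CDS compiler is considerably more general).

# Hypothetical association metadata: name -> (link table, target table, filter)
ASSOCIATIONS = {
    "homeAddress": ("Employee2Address", "Address", "a.type='homeAddr'"),
}

def expand_query(columns, entity):
    """Rewrite columns like 'homeAddress.zipCode' into joined columns."""
    select, joins = [], []
    for col in columns:
        if "." in col:
            assoc, attr = col.split(".", 1)
            link, target, flt = ASSOCIATIONS[assoc]
            select.append(f"a.{attr}")
            joins.append(
                f"LEFT OUTER JOIN {link} e2a ON e2a.employee = e.id "
                f"LEFT OUTER JOIN {target} a ON e2a.address = a.id AND {flt}"
            )
        else:
            select.append(f"e.{col}")
    sql = f"SELECT {', '.join(select)} FROM {entity} e"
    if joins:
        sql += " " + " ".join(joins)
    return sql

print(expand_query(["id", "name", "homeAddress.zipCode"], "Employee"))
```

Running this reproduces the shape of the equivalent standard SQL shown above: the path expression disappears from the projection, and the association metadata supplies the join conditions.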

More details on CDS including the current version of the language specification can be found at the
wiki page: https://wiki.wdf.sap.corp/wiki/display/ngdb/HANA+Core+Data+Services

8.2 RDL – River Definition Language


RDL defines a development model and language for native HANA applications. RDL is a design-time development language that is intended to be container- and runtime-independent and that compiles optimally into specific runtime containers. For HANA, RDL compiles into native HANA artifacts like CDS, XSJS, SQLScript and others. The compilation is briefly mentioned in chapter 7.2.

Goals and Boundary Conditions of RDL are:

 Increased productivity and maintainability for enterprise application developers
 HANA focus: a language to allow leveraging HANA to build high-performing business applications, and a model that ensures optimal execution performance and scalability of native HANA applications
 Distill application assets (“Timeless Software”): provide a language that will focus application developers on their intent rather than the application’s optimization and technical setup

Targeting these goals is supported by the following design principles:

 Readability: expressive power for application semantics/operations
 Modularization: enabling separation of concerns at any level required – data model, core business logic, error/event handling, access control, …
 Declarative logic specification: focus on capturing intent
 Coherency: language constructs that allow intuitive combination of different aspects of the application
 Open: easily interoperate with existing assets

RDL leverages HANA CDS (DDL/QL/EL) and adds more expressive syntax on top of it – especially the PL (programming language) part – to express application logic and write scenarios. The different DSLs (RDL, CDS) form a DSL family to be used to create native HANA applications.

The example below shows some application logic implemented in RDL with some CDS DDL and CDS
QL artifacts embedded. CDS DDL is used to declare types and entities (basically the data structures),
CDS QL is used to query data from the entity persistency and RDL PL is used to add application logic
(e.g. actions) on top.

Example RDL snippet containing CDS DDL and CDS QL artifacts:

type Amount {
  value : Decimal(15,2) default 0.0;
  currency : Association to Currency;
}

entity PurchaseOrderItem {
  key element purchaseOrderItemId : ID;
  key element purchaseOrder : Association to PurchaseOrder;
  @required element product : Association to Product;
  element note : LargeString;
  element quantity : Quantity;
  element shipments : Association to PurchaseOrderSchedule[0..*]
                        via backlink purchaseOrderItem;
  element grossAmount : Amount;
  element netAmount : Amount;
  element taxAmount : Amount;
}

entity PurchaseOrder {
  element orderItems : Association[0..*] to PurchaseOrderItem
                         via backlink purchaseOrder;

  action updateOrderStatus() {
    let delivered = SELECT SUM(quantity.value)
                      FROM orderItems.shipments
                      WHERE deliveryDate <= UTCDateTime.now();
    let requested = SELECT SUM(quantity.value) FROM orderItems;
    if (delivered == requested) {
      this.orderingStatus = DELIVERED;
      this.save();
    }
  }
}

entity BusinessPartner {
  action processAllOpenOrders() {
    apply updateOrderStatus to SELECT * FROM OpenOrders
                                 WHERE partner = this;
  }
}

More details on the RDL language specification can be found at the Wiki-page:
https://wiki.wdf.sap.corp/wiki/display/GlubyPedia/RDL+Business+Logic+Specification


9 Appendix

9.1 Sources
HANA DB Bluebook by Wolfram Kleis

https://portal.wdf.sap.corp/irj/go/km/docs/corporate_portal/WS%20PTG/Product%20Architecture/Knowledge%20Transfer/Bluebook/SAP%20HANA.pdf

HANA XS Engine

XS Community (information about XS, documentation, and the forum for questions): https://community.wdf.sap.corp/sbs/community/xs

Stable link to the current documentation:
http://hanaxscons.wdf.sap.corp:8015/sap/hana/xs/docs/

HANA Repository

Documentation on the HANA repository:

Overview of the project: http://trexweb.wdf.sap.corp:1080/wiki/index.php/Repository

Architecture: http://trexweb.wdf.sap.corp:1080/wiki/index.php/Repository_Architecture

Providing own object types:
http://trexweb.wdf.sap.corp:1080/wiki/index.php/Repository_Architecture#Extending_the_Repository_with_new_Object_Types
http://trexweb.wdf.sap.corp:1080/wiki/index.php/Repository_Architecture:_Runtime_Plugins

LCM capabilities:
http://trexweb.wdf.sap.corp:1080/wiki/index.php/Repository_Architecture:_Transport

Authorizations:
http://trexweb.wdf.sap.corp:1080/wiki/index.php/Repository_Architecture:_Authorizations

Core Data Services (CDS):

CDS Wiki: https://wiki.wdf.sap.corp/wiki/display/ngdb/HANA+Core+Data+Services

CDS Architecture: https://wiki.wdf.sap.corp/wiki/display/ngdb/CDS+Architecture

CDS Architecture Documentation:
https://wiki.wdf.sap.corp/wiki/display/ngdb/CDS+High+Level+Architecture+Documentation

RDL:

Wiki (language specification):
https://wiki.wdf.sap.corp/wiki/display/GlubyPedia/RDL+Business+Logic+Specification


Blog introducing RDL:
http://www.saphana.com/community/blogs/blog/2012/11/15/introducing-rdl-the-river-definition-language

HANA XS Documentation:

Documentation:
http://hanaxsdev.wdf.sap.corp:8025/sap/hana/xs/docs/index1.html
http://hanaxsdev.wdf.sap.corp:8025/sap/hana/xs/docs/index.html

Developer Guide: \\trextest\trex_test\huester\docu
