
openSAP

Modern Data Warehousing with SAP BW/4HANA


Week 2 Unit 1

00:00:07 Hello and welcome to week two, Integrating SAP HANA Platform Capabilities,
00:00:12 unit one, Open Access to SAP BW/4HANA Data. In this unit, we will cover
00:00:20 and introduce the mixed architecture as one of the best practice architectures
00:00:26 for data warehouses. We will show the different architectural types
00:00:31 as well as the SAP HANA view generation demo. Alright, let's have a look at the idea
00:00:39 of mixed scenarios and mixed architectures. What's the point about it?
00:00:44 You know that SAP BW/4HANA is an application running on top of the SAP HANA database,
00:00:50 and, of course, the SAP HANA database itself, and you remember this from week zero probably,
00:00:55 can also be used to build data warehouses. So, the key idea here is to basically combine assets
00:01:01 from both sides, manage data on both sides, reuse data from one side on the other side,
00:01:08 basically all kinds of combinations of processes, data, whatever you can think of
00:01:14 in a single data warehouse architecture to basically pick and choose what makes life easiest
00:01:21 in a certain situation. We have tools in place which help with the integration
00:01:28 of such scenarios to basically cross the border between the SQL world and the BW/4HANA world.
00:01:34 We have tools in place which help with transporting such scenarios
00:01:39 because that's also something which happens on different levels.
00:01:43 But the key is that SAP HANA, underneath all the artifacts, is basically the central repository
00:01:49 and runtime for all the data warehouse scenarios. What are the motivations and the benefits
00:01:57 of running such a scenario and the usage of the best of both worlds?
00:02:02 With that approach, it's of course possible to use SAP BW/4HANA skills as well as native SQL skills.
00:02:13 So, the idea is to combine the skills and use the best of both worlds here.
00:02:20 The next point is that it's quite easy with this kind of architecture to seamlessly integrate
00:02:26 BW data into the native world. As well as vice versa.
00:02:31 So this means native-built data models can be integrated in BW/4HANA and used as a source or as one layer
00:02:45 for virtualization here. It's quite important when you, let's say, create a
00:02:54 calculation scenario out of a BW/4HANA object, that you can use all the HANA features and functions.
00:03:05 Like predictive, spatial, or some other HANA platform features.
00:03:10 All the HANA platform features which are available can be used on BW data.
00:03:17 That's basically one of the aspects of openness of BW/4HANA in general,
00:03:21 that we basically said we open access of data which we store in BW/4HANA to any consumer
00:03:26 who's able to kind of speak SQL, which contains, of course, all the engines
00:03:31 which you have inside HANA, but also things beyond, like BI clients,
00:03:35 which are not even provided by SAP, but by partners. All that basically is possible
00:03:44 because we have this SQL interface on top of BW data. Yeah. That SQL interface is,
00:03:50 of course, needed for accessing the data via SQL tools. So, let's come to the first example
00:03:59 of an architecture type, which we see here, and it's what we call a virtual data mart.
00:04:04 So, the idea in this scenario would basically be that the customer is using SAP BW/4HANA
00:04:10 to mainly do the data warehouse stuff, store the data, transform the data, manage the data,
00:04:16 using all the stuff which we saw in week one. You'll remember all the aDSO stuff, the request handling,
00:04:22 all these nice features, use that to manage the data. Maybe also do the partitioning, whatever,
00:04:27 all these features, but then, basically, do the consumption of data and maybe
00:04:31 some additional logic on the SQL side by basically using generated views out of
00:04:37 BW objects, and that's possible in any layer of BW/4HANA. So you can basically use pretty much any object
00:04:44 in BW/4HANA to generate a calculation view out of that, that's what you see on the left-hand side,
00:04:49 as generated view. And on top, you can build your own logic, right.
00:04:53 You can build calculations on top, you can build projections on top, you can join data,
00:04:58 you can basically do whatever you want and then even consume the data via SQL
00:05:03 by basically any type of client or by any other tool, right?
00:05:06 Could also be one of the HANA engines, could be pretty much anything that's able
00:05:12 to kind of speak SQL. Important to mention here as well is
00:05:17 that these generated HANA views are part of the SAP BW/4HANA lifecycle management.
00:05:23 Exactly. So when you change a BW/4HANA object, the generated view gets adapted automatically.
00:05:28 Yeah. And the authorization, the security concept
00:05:31 of BW/4HANA, I was almost forgetting this, but it's a very important point, the security concept
00:05:37 of BW/4HANA is also incorporated in these views, so we make sure that you can't see data using the SQL
00:05:43 access, which you're not supposed to see, via the BW/4HANA access already.
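As a small sketch of what this SQL consumption looks like from a client's side, the snippet below composes a query against a generated view and shows how it could be run with SAP's hdbcli Python client. The BW2HANA prefix is the one that appears in the demo later in this unit; whether it shows up as a schema or as part of the view name can vary by release, and the host, credentials, and aDSO name here are purely illustrative.

```python
# Sketch: consuming a BW/4HANA-generated calculation view over plain SQL.
# All object names, hosts, and credentials are illustrative assumptions.

def view_query(adso_name: str, columns: str = "*", prefix: str = "BW2HANA") -> str:
    """Compose a SELECT against the externally generated HANA view."""
    return f'SELECT {columns} FROM "{prefix}"."{adso_name}"'

# Against a live system you would execute this via SAP's Python client
# (pip install hdbcli); connection details below are placeholders:
#
#   from hdbcli import dbapi
#   conn = dbapi.connect(address="<hana-host>", port=30015,
#                        user="<user>", password="<password>")
#   cur = conn.cursor()
#   cur.execute(view_query("ZSALARY"))
#   rows = cur.fetchall()

print(view_query("ZSALARY"))
```

Any client that "speaks SQL" could issue the same statement; the BW/4HANA analytic authorizations still apply underneath, as described above.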
00:05:51 So, the next architecture type is the pure SQL consumption, or the SQL consumption.
00:05:57 And therefore, it is possible to create a so-called HANA calculation scenario out of a BW query.
00:06:05 And then you can create your own view built on top of the generated view, and consume that via SAP clients
00:06:16 as well as non-SAP clients or third-party tools. But you can use, or you can access, the BW queries as well.
00:06:24 That's the idea here. So that you combine the best of both worlds
00:06:28 here, accessing the BW queries to use the analytic engine for certain scenarios and for certain access patterns,
00:06:37 but on the other hand, you can access the data fully native by creating a calculation scenario
00:06:47 and access this calculation scenario via third-party tools or our SAP BI clients.
00:06:57 Here, it's the same, the generated views are part of the HANA lifecycle management
00:07:01 and we are using the SAP BW/4HANA authorization concept to access the right data.
00:07:07 All right. Another aspect which is also very interesting and which is also very commonly used
00:07:12 by our customers is that they basically use SAP HANA technologies for certain data warehouse operations.
00:07:21 Basically to do, for example, certain complex transformations or transformations which are
00:07:28 on mass data, not record by record, as the typical BW transformation works.
00:07:35 And this can also be done by again taking, for example, an aDSO using the generated calculation view
00:07:41 out of this aDSO, then putting your transformation logic in, for example, a SQL view, or a calculation view on top,
00:07:49 and again consuming this calculation view in a data flow via a data source
00:07:53 and then loading to the next layer. That's a schema which we see quite frequently,
00:07:58 how customers are using the modeling capabilities of especially calculation views to build certain
00:08:05 transformation logic which would not be so easy to build using the standard framework of BW/4HANA.
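As an illustration of this pattern, the snippet below holds the kind of set-based SQL view one might layer on top of a generated aDSO view. All object and column names (ZSALES_ADSO, REGION, AMOUNT, and so on) are invented for the example; in a real system this logic would live in a HANA SQL view or calculation view rather than a Python string.

```python
# Sketch of the set-based transformation pattern described above: a SQL
# view on top of a BW-generated calculation view, processing all rows at
# once instead of record by record. All names are invented.

TRANSFORM_VIEW_DDL = """
CREATE VIEW "ZSALES_BY_REGION" AS
SELECT "REGION",
       "CALMONTH",
       SUM("AMOUNT") AS "AMOUNT",
       COUNT(*)      AS "ORDER_CNT"
FROM "BW2HANA"."ZSALES_ADSO"   -- the view generated from the aDSO
GROUP BY "REGION", "CALMONTH"
""".strip()

print(TRANSFORM_VIEW_DDL)
```

A DataSource pointing at such a view could then feed the next BW/4HANA layer, which is the loop back into BW described in the transcript.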
00:08:12 Yeah, and last, but not least it's of course possible to consume natively built SQL models
00:08:20 created via views into BW/4HANA. So the idea here with that pattern is
00:08:27 that the customer is loading the data into a HANA table, building the logic in a view, and then consuming that HANA
00:08:37 calculation view in a CompositeProvider and/or Open ODS view into BW/4HANA.
00:08:43 But this is part of one of the other units later on. Exactly. But it's basically something which you would
00:08:49 also see, the picture here looks very simple, it's something which you would also see if you basically
00:08:53 have both an SQL-based data warehouse and a BW/4HANA-based data warehouse
00:08:58 on the same platform, right? Then you would like to share data from one side
00:09:01 with the other, and we basically here described up to now, both directions opening, or making,
00:09:08 the data in BW/4HANA available via the generated calculation views
00:09:12 and consuming the data from the SQL side on the BW/4HANA side using CompositeProviders or Open ODS views
00:09:20 or even data sources when it comes to loading. So this means that you store the data once
00:09:24 and you can consume the data many times. So it's a very flexible way
00:09:28 of using both the technologies on one platform and minimizing redundancy in data movement in between.
00:09:35 Now let's come to a system demo. And we picked a very interesting topic, which is,
00:09:41 especially right now, very interesting. Because customers are asking a lot about data privacy.
00:09:46 Data privacy is one of the hot topics these days, and we basically picked an example
00:09:51 which is called differential data privacy. It's a functionality which comes with HANA 2.0,
00:09:59 and it's basically, if you look at the example which you see here, if you look at the upper table,
00:10:04 it's basically some sort of information about employees potentially, so you have data
00:10:11 about an employee, some information like the birthday, I don't know why the weight is in here, but never mind.
00:10:18 I hope that's not stored in our HR system, to be honest. It's probably not the case.
00:10:25 But salaries are very confidential and very sensitive information.
00:10:30 Now the point about differential privacy is that, of course, for many analyses, it's very important
00:10:35 to allow people access to that kind of data, but to allow this access in a way that makes it impossible
00:10:43 to identify persons, for example, based on the salary, or get information about individual salaries, while,
00:10:50 on a higher level, you still want to be able to make conclusions about the data.
00:10:55 So, basically, the idea is to change the key figures of this table, so change the salary, in a way
00:11:01 that an individual line would not make any sense anymore. So the salaries for each individual line
00:11:07 will be totally different, but, if you sum up a sufficiently large number of employees, then, that's
00:11:13 what we'll see in the demo, then the numbers are quite close to the original numbers again.
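The behavior described here, useless single lines but trustworthy aggregates, can be sketched in a few lines of Python. Laplace noise is a common mechanism in differential privacy; this is a conceptual toy, not SAP HANA's actual anonymization algorithm, and the noise scale and salary values are made up for the illustration.

```python
# Toy illustration of the differential-privacy idea: add Laplace noise to
# every salary so that each individual line is heavily distorted, while
# the sum over many employees stays close to the truth.
import math
import random

def laplace_noise(rng: random.Random, scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF trick."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def scramble(values, scale=30_000.0, seed=7):
    """Return a noisy copy of the values (seeded for reproducibility)."""
    rng = random.Random(seed)
    return [v + laplace_noise(rng, scale) for v in values]

# 10,000 illustrative salaries between 40,000 and 89,000.
salaries = [40_000 + (i % 50) * 1_000 for i in range(10_000)]
noisy = scramble(salaries)

# Individual lines deviate enormously ...
worst = max(abs(a - b) for a, b in zip(salaries, noisy))
# ... but the grand total is off by well under a few percent.
rel_err = abs(sum(noisy) - sum(salaries)) / sum(salaries)
print(f"worst single-line distortion: {worst:,.0f}")
print(f"relative error of the total:  {rel_err:.4%}")
```

The effect matches the demo: the finer you drill down, the less the scrambled numbers mean, while highly aggregated figures remain usable for analysis.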
00:11:20 Now, we have to add to this, that this is a HANA 2.0 feature which only works
00:11:24 with the HANA deployment infrastructure, so the newer repository of HANA 2.0,
00:11:32 and we're currently in a POC state here. So, this is not officially released, the way
00:11:36 that we did it here, so generating calculation views from BW/4HANA into the HANA deployment infrastructure
00:11:43 is not officially released yet. There's a note out there, note 2463312,
00:11:50 which you should check for the current status. We hope that at some point in time in the future,
00:11:54 this will be officially available and then such scenarios can be built by any customer.
00:11:59 We thought the example is so interesting that it makes sense to show it to you anyway.
00:12:04 So, let's jump into the demo here. Here's the aDSO which contains the data
00:12:09 which we just looked at, right? So, what's important to note here is the checkbox
00:12:15 which says that we want to generate an external SAP HANA view, a calculation view, out of this aDSO.
00:12:22 When we look at the details of the aDSO, you see a list of attributes which is similar
00:12:28 to what we saw in the table before. Fortunately, there's no weight information here,
00:12:35 but we see some information about gender. For example, we see the important
00:12:40 and sensitive key figure, which is the salary. We see geographic information like region, zip code.
00:12:46 And so on. Now, let's look at the HANA side,
00:12:52 at the corresponding generated calculation view. And we see that here it's basically with a prefix BW2HANA
00:13:00 and then the name of the aDSO. Here you see the structure of this generated HANA view.
00:13:06 It's basically the same structure as we have for the aDSO's A (active data) table itself with two additional technical columns,
00:13:13 the one row count column and the request transaction sequence number, in addition.
00:13:19 But that's basically the interface which BW/4HANA generates for SQL consumption.
00:13:27 Now let's look at logic and then this anonymization, this differential privacy logic,
00:13:33 how this is put on top and how you can basically leverage this.
00:13:36 So here we are, as I said, in the XSA environment, XS Advanced, which is basically the modeling environment
00:13:46 for HANA 2.0. And we have a calculation view here. Let's look at that one.
00:13:52 Which basically takes the view generated by BW/4HANA, adds a node which basically contains the configuration
00:14:00 for this data anonymization feature, for the differential privacy,
00:14:04 and then just does a projection, and the typical stuff which you do, on top in a calculation view
in general.
00:14:10 So, maybe let's have a brief look at the details of this node.
00:14:14 I don't think the mappings are very interesting, but what you see is... maybe let's look at the columns first,
00:14:21 so it's again the same list of columns, plus an additional column.
00:14:25 Namely, the salary here. So, here we have the original salary, that's the unscrambled
00:14:30 salary which contains the original data, and here's the kind of scrambled salary,
00:14:35 which, to which some additions or subtractions are applied, to basically make the data, yeah, useless
00:14:43 on an individual line level, as I just described. There are some parameters which this algorithm
needs,
00:14:51 it doesn't make sense to go into detail here, we'll see the results in a second when we look
00:14:56 at the data which comes out of this view. We'll basically now do a preview on this calculation view,
00:15:03 and see how the original key figure, the original salary as it's stored in the aDSO,
00:15:11 and the scrambled one, how they compare. Now maybe let's start with a comparison on level
00:15:20 of individual lines. When you look at this, you see, well in total,
00:15:28 there's no big difference. We could, we could also look at the tabular view,
00:15:32 if you read carefully, you see there's a minimal difference from a percentage perspective,
00:15:37 between these two numbers. So, when you aggregate this to a very, very high level,
00:15:42 then you basically have an error which you can neglect. You would see the same, for example, if you drill
00:15:51 a little bit for example, by T level, which is kind of the salary level in SAP terms.
00:15:58 We have five salary levels if you do the comparison, maybe let's do this, graphically again,
00:16:05 you would see the numbers of the original salary and the scrambled salary.
00:16:09 They are pretty much the same. And the same applies, of course, if you, for example,
00:16:13 if you drill down by region, or let's maybe take the zip code here, we have quite a number
00:16:18 of different zip codes, so it's a much finer granularity already, but you see some deviations here,
00:16:25 but if you really look at the numbers and do the math here, you would see that it's always
00:16:30 in the range of probably below one percent. So, if you look at these numbers, they would really still
00:16:35 make sense for a business person, right, if you have to do certain analyses
00:16:41 on this low granularity, on this highly aggregated data, you still get to the right conclusions
00:16:48 because you almost see the same data, but what happens if you really drill down, say,
00:16:53 to the individual employee, is very surprising. That's the cool feature here.
00:17:00 Now, if you compare the original salary here and the scrambled salary on individual line level,
00:17:08 you will see that it doesn't seem to make any sense, right?
00:17:11 Here you have a salary of roughly 120,000 whatever it is, euros, dollars, doesn't matter.
00:17:17 Over here, it's 31,000. Down here we have a very funny example where someone is
00:17:23 obviously willing to pay money to actually go to work. Which is not even the case for us.
00:17:28 Even though we like SAP a lot and we like SAP BW/4HANA a lot, and we like this feature a lot.
00:17:33 But I would not work for a negative salary, of course. So, that's a very interesting feature,
00:17:40 a functionality which the HANA platform provides, which is very useful and, as we said,
00:17:45 a very hot topic these days. And using this mechanism of exposing the BW data
00:17:53 to the SQL side, using HANA calculation views, you can actually very easily leverage this,
00:17:58 on top of data stored in BW. And, as Gordon described, we could actually
00:18:03 also do the same scenario the way back and integrate the results of this scrambled data
00:18:10 into BW via CompositeProvider, and then have a closed scenario within BW,
00:18:15 you use a BW query on top of scrambled data which is stored in an unscrambled fashion inside BW/4HANA.
00:18:23 And I guess with that, we are finished for today. So what are the key takeaways, Gordon?
00:18:28 The key takeaways: mixed scenarios are the best practice for modern data warehouse architectures,
00:18:36 using best of both worlds, using BW/4HANA scenarios, as well as SQL data warehouse scenarios,
00:18:45 SAP BW/4HANA logic can be exposed to SAP HANA and then we can leverage all the HANA platform
00:18:53 features and functions, and generated HANA views are part
00:18:58 of the lifecycle management, and it's possible to use the BW authorization concept.
00:19:04 And in the demo, we showed that it is quite easy to consume BW data in the HANA view
00:19:14 and build some quite interesting logic on top of it. Yeah.
00:19:19 And that's it for today. And please don't forget your self-test.

Week 2 Unit 2

00:00:08 Hello and welcome to week two, unit two: Using SAP HANA for Data Integration.
00:00:15 In the previous unit, we talked about the integration of SAP BW/4HANA objects
00:00:21 into the native HANA world. And in this unit, we will talk about
00:00:25 the integration of the HANA capabilities into BW/4HANA. This means we will give you an overview
00:00:31 of the data integration capabilities we have in BW/4HANA followed by an overview
00:00:37 of SAP HANA Enterprise Information Management. The abbreviation here is SAP EIM.
00:00:44 And we will give you the complete picture or a picture of the integration scenarios
00:00:49 we have in SAP BW/4HANA, followed by a system demo, of course.
00:00:54 All right, so let's have a look at the overall picture of data integration with SAP BW/4HANA.
00:00:59 There are basically two main pillars here. One is built-in capabilities of BW/4HANA
00:01:05 for data integration. The other pillar is basically
00:01:08 what the HANA platform provides us. Let's look at the built-in capabilities first.
00:01:13 Those are mainly based on Operational Data Provisioning.
00:01:16 That's a framework which we built a while ago for integration of data from classic SAP source systems,
00:01:23 like SAP ECC systems, S/4HANA systems, Business ByDesign, and so on.
00:01:29 The second option which the BW/4HANA application comes with is the possibility to do file uploads.
00:01:36 Here we support stuff like CSV files, Excel sheets, and so on.
00:01:41 And when you look at the capabilities which the HANA platform brings into play,
00:01:46 then that's mainly access to non-SAP data, of course, because that's what we need
00:01:51 in addition to the capabilities of the application. So reaching out to any type of relational database.
00:01:58 For example, reaching out to social media data. That's possible with HANA Enterprise Information Management.
00:02:06 Also reaching out to big data systems like Hadoop, Hive, Spark, Vora.
00:02:10 That's also part of that framework. And together we have a pretty comprehensive set
00:02:16 of data integration capabilities. Now that's not all we have.
00:02:21 On top of the functionalities which come with the bundle of BW/4HANA and the HANA platform,
00:02:27 the HANA Data Management Suite also provides additional data integration capabilities.
00:02:33 For example, SAP Data Services. And then there's, for example, SLT,
00:02:36 SAP Landscape Transformation Replication Server, such capabilities you can use on top of
00:02:41 the actual BW/4HANA system. And even that is not all that we can offer.
00:02:49 You can include, of course, into your system landscape and into your scenarios, any kind
00:02:54 of standard ETL tool which is certified for SAP HANA. So now we'll give an overview of the capabilities
00:03:03 we have in the integration part. As you know, we have the HANA database as a platform,
00:03:11 and on top of that we have the application server, and now the question is how we can
00:03:15 bring in the data from the database into the application server.
00:03:20 And therefore we have SAP HANA Enterprise Information Management, SAP HANA EIM, with the functionalities of
00:03:26 smart data integration and smart data quality. We have smart data access for accessing the data virtually.
00:03:34 And we have the SAP Landscape Transformation Replication Server
00:03:38 to replicate data from the source into a target. Target in that case is, of course, the SAP HANA database.
00:03:45 And EIM is for persisting the data rapidly. Exactly.
00:03:53 So, what do we have in Smart Data Access and Smart Data Integration?
00:03:58 And what are the differences between smart data access and smart data integration?
00:04:03 SDA and SDI? SDI means smart data integration.
00:04:09 Here we integrate data via real-time or via batch.
00:04:14 In the end, this is done in the so-called data provisioning server.
00:04:18 From an architectural point of view, it's what you see on the right-hand side.
00:04:21 Exactly, that's on the right side. We have a data provisioning server,
00:04:25 and on the other side, we have the smart data access for accessing the data more or less virtually.
00:04:31 Here we create virtual tables, and then we directly consume these virtual tables into BW/4HANA.
00:04:39 And this is running on the index server of SAP HANA. From an architecture perspective, it's more or less the same,
00:04:46 but we have of course differences here. We have a variety of adapters you can use here.
00:04:55 We have built-in adapters. We have, I think, 30 or 40 adapters,
00:04:59 it's documented in the EIM Help. As well as custom adapters.
00:05:05 Custom adapters means we deliver a so-called software development kit, an SDK,
00:05:10 where the customer can build their own adapter based on the business requirements here.
00:05:17 And the connectors between the HANA world and the BW world here are the so-called virtual tables.
00:05:23 And the virtual tables are created via the BW layer. Right, so let's have a look at
00:05:33 the way you consume these functionalities from a BW perspective.
00:05:37 What we described so far was basically the functionality which is contained in the HANA platform.
00:05:43 So how does BW help you, and how does BW integrate this and help you leverage these capabilities?
00:05:50 To consume data from any of these sources using HANA EIM, like smart data integration
00:05:56 or also smart data access, we have a dedicated source system type in SAP BW/4HANA,
00:06:02 which is the so-called HANA source system. So the HANA source system supports, as I just said,
00:06:07 both smart data integration and smart data access. It also allows you to access local HANA schemas,
00:06:13 so if you have a couple of additional schemas running on the same database
00:06:18 as your BW/4HANA installation, then of course you can access data from there.
00:06:23 And it also allows access to other tenants in the same HANA database.
00:06:26 If you have a multi-tenant installation, then you can also reach out to these other tenants.
00:06:31 That's basically the connectivity and how BW/4HANA leverages the connectivity
00:06:36 which you get from SDA or SDI. That's done on the source system level.
00:06:41 On the DataSource level, we do the actual connection to a table, for example,
00:06:45 or any kind of source object on the remote server, database or whatever it is,
00:06:51 it could be a web server, whatever. So basically when you create a HANA DataSource
00:06:56 based on a table in a remote database, BW/4HANA will create the corresponding virtual table
00:07:02 as an adapter to the foreign object, and manage the lifecycle of all of this.
00:07:07 So basically, all the work which is done on HANA platform level is kind of hidden from you,
00:07:13 because BW does this automatically once you create the DataSource.

00:07:18 And, of course, the DataSource manages the data transfer; it's kind of the adapter on the BW side,
00:07:23 to connect a data transfer process, to create an open ODS view, all that kind of stuff.
00:07:28 One remark. We saw in one of the first slides that we have two options to access non-SAP data here.
00:07:36 We had the more standard stuff like the relational databases and social media stuff.
00:07:43 And we also had the big data systems. We have a dedicated source system type
00:07:47 in BW/4HANA as well for big data systems, it's called the Big Data source system.
00:07:52 From a technology perspective, it's actually the same as the HANA source system.
00:07:57 It's just encapsulated in a different folder basically in your source system tree
00:08:03 because the world of big data is getting so important that it makes sense to put this structure into
00:08:08 your BW/4HANA data sources right away so you don't mix up access to HANA systems
00:08:14 with access to Hadoop systems, for example. Right, and here in the picture, you see how this works.
00:08:19 You can actually either, using such a HANA source system, access tables or views in a local HANA schema.
00:08:28 Or you can access remote sources via a virtual table, which as I said,
00:08:33 will be generated by BW/4HANA for you. All right, and with that, we're ready for the demo.
00:08:39 What are we going to show, Gordon? In the demo, yeah of course,
00:08:42 we will configure SAP HANA smart data integration, SDI. We will connect a source system
00:08:48 to our BW/4HANA system. And then we will create an
00:08:53 SAP HANA source system in our system. We will create the corresponding data source here.
00:08:58 And we will show you the tables which are created from BW/4HANA.
00:09:02 So that's pretty much end to end. We will basically take our BW/4HANA system
00:09:05 and the underlying HANA platform, connect this to, in this case, another HANA database,
00:09:11 and then create the source system on the BW and even reach out to one of the tables
00:09:16 in the remote source. So that you get an impression of how this feels,
00:09:19 what you have to do on the HANA back-end side, and what the BW/4HANA application offers on top of that.
00:09:24 So okay, let's jump into the system.
00:09:32 Here we have the studio. Now we have to go to the system layer.
00:09:41 So that's the back-end side of the HANA database right now, right?
00:09:44 So we're not looking at the BW modeling tools, but actually at HANA Studio.
00:09:47 So we have to go to provisioning here. Provisioning encapsulates all the connectivity
00:09:52 from the HANA database to other sources. Here we have the so-called remote sources,
00:09:55 which are the connections to other databases. And basically we're creating a new remote source now
00:10:01 to a different database. Yeah, we have to give them a name.
00:10:05 Let's say, M29 OPENSAP. The adapter...
00:10:12 Here you see the list of adapters which Gordon mentioned earlier.
00:10:14 This is basically the list of adapters for smart data access.
00:10:19 If you switch the source location from index server to data provisioning server,
00:10:25 which we haven't configured here, you would have additional adapters provided by
00:10:29 smart data integration. So here, since we're reaching out
00:10:32 to another HANA database, we choose HANA ODBC here. And we basically have to provide the connectivity now.

00:10:38 So we need the server name, that's a quick cut and paste, I guess.
00:10:46 Yeah. Because that's kind of difficult to remember.
00:10:50 Port is 30015. That's, I think,
00:10:55 the general ODBC port for the remote HANA database. Then you have to provide credentials,
00:11:02 user name and a password. And once you've done that,
00:11:05 you can actually start working with the remote data, as we'll see in a second.
00:11:09 So unless we have done a typo. We'll see, yeah, creation of the remote connection,
00:11:16 or the remote source was successful. We see the connection now on the left-hand side here.
00:11:22 There's M29_OPENSAP, and you can actually even start browsing the source objects here.
00:11:27 But that's something... From now, once we've done this kind of connectivity,
00:11:33 once this is established, we can actually switch back to the BW side.
00:11:37 And then really start working, creating a HANA source system on top of this connection.
00:11:42 And then move on to really reaching out to the remote objects.
00:11:45 And we'll see how that works. Now we create a new source system.
00:11:49 This is new, source system. We give it a name, ZOPENSAP.
00:12:00 Now we have the different connection types here. We have to choose one.
00:12:04 In our case, of course, this is as we explained, this smart data access.
00:12:08 Smart data integration would basically be the same bullet point by the way.
00:12:11 But you see the options which we just mentioned. The Local HANA Schema, Smart Data Access
00:12:15 or Smart Data Integration, another tenant, or the Big Data. All these four would basically be based
00:12:20 on the same technology. The other ones are the ones which I described initially,
00:12:23 which are provided by the BW/4HANA application. All right, so now we select a remote source.
00:12:31 Remember the remote source which we just created, or the list which was already available.
00:12:35 Here's the one. Yeah, here's one we already created: M29_OPENSAP.
00:12:39 Of course, use that. Now the point is in this database,
00:12:45 we have of course multiple schemas which we could potentially access.
00:12:47 And we choose the schema from which we want to read our data.
00:12:52 That one. This is the schema here.
00:12:54 Okay, so now we basically have the possibility to access any kind of database artifact
00:12:59 from this schema once the source system is activated. And that, of course, accessing remote
object,
00:13:06 in BW is done via data source, right? A data source is always the connector
00:13:10 to any source object in the outside world, whether it's an SAP system, an ECC system,
00:13:15 whether you have the classic extractors or in S/4HANA with the CDS views.
00:13:20 Here it's the source objects, or the database objects in the remote database.
00:13:29 So, here you can see, this is the already created source system.
00:13:33 And now we will create a new data source based on the remote source we created previously.
00:13:41 So here we are. Right, it's in the source system
00:13:45 which we just created. So it's tied to this connection.
00:13:47 Exactly. And now we should actually...
00:13:50 Now we have the possibility to choose a source object, a proposal from a HANA table or view,
00:13:57 or from a native DSO which we've already mentioned. So, here we are.

00:14:04 Okay, now we should get a list of the source objects in this remote schema.
00:14:09 And we see a group of tables from which maybe we choose the sales orders tables here.
00:14:20 The data source gets a name. We choose an application component.
00:14:27 openSAP, of course. I tend to choose "No it's not connected"
00:14:35 Finish. So, and here we are.
00:14:40 Here you can see the name of the data source, the source system, and the extraction field.
00:14:46 You can see the details about the connectivity. So some source system details,
00:14:49 it's based on smart data access. It's the HANA ODBC.
00:14:53 Connectivity, you see the name of the remote source, all that is in here.
00:14:58 Maybe let's look at the fields. That should now be the list of fields
00:15:02 which we have in the source object. And once we activate this.
00:15:09 Yeah, we have a couple of objects here. Now I will activate the data.
00:15:13 And then we should actually also be able to see the virtual table which was being created
00:15:17 by the BW data source. So the connectivity, the adapter object,
00:15:21 on the HANA data platform side. Therefore we have to go to the properties level.
00:15:26 Now the data source is activated. So, and here on the technical attributes,
00:15:34 for extraction you can see the virtual table for accessing, or the view for the query access.
00:15:40 And this can be used as well. So those objects are things
00:15:45 which are actually generated on database level, so when you go back to the database
explorer,
00:15:50 on the HANA studio side, you should actually be able to find those objects
00:15:53 somewhere in the schema on which the BW/4HANA server runs. Yeah, I mean, that's it so far.
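For readers who want a feel for what this generated connectivity looks like on the database level, here is a rough sketch in HANA SQL of the equivalent manual steps. All names (remote source, schemas, tables, credentials) are hypothetical placeholders; BW/4HANA generates its own technical names, so this is not the exact DDL the system emits.

```sql
-- Hypothetical sketch of Smart Data Access connectivity, roughly what
-- BW/4HANA sets up behind the scenes; all names are placeholders.
CREATE REMOTE SOURCE "OPENSAP_REMOTE"
  ADAPTER "hanaodbc"
  CONFIGURATION 'ServerNode=remotehost:30015'
  WITH CREDENTIAL TYPE 'PASSWORD'
  USING 'user=REMOTE_USER;password=secret';

-- A virtual table pointing at a table in the remote schema
-- (for a remote HANA source, the database part is "<NULL>"):
CREATE VIRTUAL TABLE "SAPBW"."SALESORDERS_VT"
  AT "OPENSAP_REMOTE"."<NULL>"."SALES_SCHEMA"."SALESORDERS";

-- Queries against the virtual table are federated to the remote database:
SELECT COUNT(*) FROM "SAPBW"."SALESORDERS_VT";
```

When you create the data source in BW/4HANA, it takes care of the virtual table step for you; only the remote source itself has to exist.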

00:16:00 Now we've created a source system, we've created a data source. And now we're ready to
load the data into BW/4...
00:16:06 Now it's the same game as always, as if it were, say, an ECC system
00:16:11 with a data source as an adapter you can basically start building.
00:16:15 So what are the key takeaways? First of all, BW/4HANA has
00:16:18 very comprehensive data integration capabilities. Partly delivered by the application itself,
00:16:26 partly they are coming with the HANA platform. And then, of course, there's a lot of additional
stuff,
00:16:34 with the HANA Data Management Suite, with third-party tools,
00:16:38 so all of that can add to the connectivity as well. So we are very, very open in that respect.
00:16:44 And you've also seen how BW basically helps to leverage and orchestrate these capabilities.
00:16:50 So you don't really have to work a lot on the database level to, for example,
00:16:54 create the virtual tables and establish the connectivity. All of that is done by BW/4HANA.
00:17:00 Once you create the standard BW/4HANA objects, which you also know from the old world,
00:17:06 for example from a BW system with the DB Connect source system and so on, right?
00:17:11 Creating a data source does all the stuff which you actually need.
00:17:14 The only thing you need to do on the back-end side is actually establishing the connectivity.
00:17:19 Yeah, and with that, I guess it's time for the self-test.

Week 2 Unit 3

00:00:08 Hello, and welcome to week two, unit three, Working With External Data.
00:00:15 In the last unit, we already covered the possibilities of how to integrate external data,
00:00:22 and now the question is how we can use the data in BW/4.
00:00:26 Use it here means directly consume the data via queries, or directly access the data
00:00:31 via an Open ODS view, via virtual access, or persist the data in an advanced DataStore
Object.
00:00:40 Here we have the possibilities to use InfoObjects, or we have the possibilities to use fields
00:00:46 in the advanced DataStore Object. As you might know, a typical or a usual way
00:00:53 to model a data flow in BW is first, you have to create InfoObjects.
00:01:00 You have to model InfoObjects. This could be painful,
00:01:03 and this could be a long-running process. Usually it's a significant effort here
00:01:12 to model all the needed InfoObjects. Of course, in our business content,
00:01:18 we deliver InfoObjects as a jumpstart, but in the end,
00:01:24 you have to think about the design of all the InfoObjects, all the related attributes, these are
InfoObjects as well,
00:01:31 and then you can start creating your data model. Once you are done with the InfoObject
modeling,
00:01:37 then you can start building, or modeling your advanced DataStore Object.
00:01:41 Once this is done, then you can start loading the data. Besides that, you of course have to
00:01:47 first load the data into the InfoObjects to see which data you have,
00:01:54 and then you can combine those InfoObjects in the advanced DataStore Objects.
00:01:59 This, of course, has some advantages. So this means when you have InfoObjects already in
place
00:02:07 delivered via business content or former projects, then this way of modeling could be quite fast
as well.
00:02:13 It's extremely efficient. I mean, you model the object once,
00:02:17 and wherever you use it, you have it right away with all its properties, with all the attributes,
00:02:22 with the text, with all the hierarchies, with the type definition, which is very consistent,
00:02:27 so you don't have to remember what type you used for your customer ID.
00:02:31 Was it a character 10, a character 15, or a numerical character?
00:02:35 You just use the InfoObject wherever you need, and it's guaranteed that the system,
00:02:39 for example in all the aDSOs where you have some custom information,
00:02:43 that the system uses exactly the right data type. Yeah. So this consistency
00:02:47 is one of the strong points about InfoObjects. As Gordon said, the kind of downside
00:02:56 when it comes to non-SAP data is that you don't get them for free.
00:02:59 You have to build them, and you have to build a significant number of InfoObjects
00:03:02 before you can actually start working. Now, let's see what the other option here is.
00:03:08 Well, with BW/4HANA, we actually support a second modeling paradigm,
00:03:13 which is more directed to having a quick look at data, and then start working, and start maybe
transforming data
00:03:20 according to our needs. And that's basically based on fields.
00:03:23 So the idea is, as you see on the right-hand side, you basically have access to the data,
00:03:28 for example via data source, as we showed in the last unit.
00:03:33 Then you think, well actually, I would like to persist this data in an aDSO
00:03:39 without a lot of modeling. In fact, using this field-based modeling approach

00:03:43 for advanced DataStore Objects, it's possible to generate an aDSO
00:03:47 out of a data source, for example. So you don't have to model anything.
00:03:50 It's basically just taking over the definition of the data source,
00:03:54 and modeling an advanced DataStore Object or generating an advanced DataStore Object
00:03:58 or DataStore Object right out of this with all the properties,
00:04:00 all the field names from the data source, and the data types.
00:04:03 From a functional perspective, it's really like a complete aDSO,
00:04:06 it has all the functionalities with parallel loads, a separate activation step, the rollback
mechanisms.
00:04:12 All that is still in place, but it has much, much slimmer metadata.
00:04:18 Basically, it's much faster to model. And you can bring in your data from the external source
00:04:25 very, very quickly into BW, and then start continuing your work.
00:04:28 Of course, when you generate such an aDSO from a data source, that's just a template.
00:04:32 You use the data source as a template, and you can adjust it afterwards.
00:04:36 We'll see this in the demo that, for example, if you realize that the customer ID,
00:04:39 which comes from the external source, actually matches one of the InfoObjects,
00:04:42 you can use the aDSO generated from the data source, and then start replacing those fields
00:04:48 like the customer ID by InfoObjects which you already have.
00:04:51 So you can also create a mix of InfoObjects and fields, and basically combine the best of both
ways here.
00:04:58 So what's the key idea here? It's really about quick consumption of the data.
00:05:04 On the other hand, if you don't want to bring the data into BW at all
00:05:08 but want to virtually access it, there's an even faster way,
00:05:12 which allows you to basically just take the data source as it is,
00:05:15 enrich it with a little bit of, kind of, BW semantics. Basically, describe whether it's master data
or facts.
00:05:23 If it's facts, also define what, for example, characteristics and what fields are key figures.
00:05:29 Then you can basically start working with the external data right away
00:05:32 without any data movement, and without any additional effort.
00:05:36 So just basically adding this kind of metadata in the so-called Open ODS view,
00:05:41 and then you can right away, start working with the data, and then refine the solution on the
way.
00:05:47 Introduce transformation, maybe decide to actually load the data, and then continue to work.
00:05:53 It's really driven by the data, looking at the data, and evolving the scenario
00:05:58 step by step towards an actual solution. So, Uli already mentioned it.
00:06:07 The DataStore Object can contain fields, and/or InfoObjects.
00:06:14 Here on the screenshot, you can see that this DataStore Object contains purely fields.
00:06:20 You can see it on the right side. Here, we have a special symbol.
00:06:25 This DataStore Object only contains... Only contains fields, yeah.
00:06:27 You can see that on the special symbol. We don't have an InfoObject symbol here.
00:06:33 You can see we have different... we have key fields, we have data fields.
00:06:36 In the end, this is a, yeah, a typical DataStore Object, but in that case, it contains only fields.
00:06:45 It can be modeled with the fields. The aDSO can be created using a data source as a
template.
00:06:51 We will show that later on in the demo as well. It's interesting to know that it is possible
00:06:57 to combine fields and InfoObjects in one advanced DSO.
00:07:03 So in the end, you have the same features and functions in the DataStore Object with fields

00:07:08 as in the DataStore Object with InfoObjects. Exactly, now coming to the Open ODS view,
00:07:13 if you really want to consume the data right away, make it available wherever it is at this
moment,
00:07:19 whether it's in a local HANA schema, whether it's outside your database,
00:07:22 and you reach out to it via Smart Data Access, for example. Open ODS views are a way to do
that.
00:07:29 What you basically do is you use the source with the structure it has,
00:07:33 and you add a little bit of semantics. So you basically define whether it's facts,
00:07:37 master data, or texts, all that's possible. Hierarchies are not supported here.
00:07:43 The metadata is pretty much defined on field definitions again.
00:07:48 So basically, we have a list of field names and data types. It's possible to reuse InfoObjects as
well,
00:07:55 so if one of the fields which you have, just as in the case of the advanced DataStore Object,
00:08:01 turns out to actually be an InfoObject, which you already have in place,
00:08:05 you can associate this InfoObject, and basically create a link between a field
00:08:09 and the information which comes from the InfoObject. You can also create such links,
00:08:15 or so-called associations, between multiple Open ODS views. So if you have an Open ODS
view, for example for a fact,
00:08:21 and another one for master data, you can create a link from, say, as you see
00:08:25 in this example here, from the CustomerID field of your facts to the customer Open ODS view,

00:08:30 and then combine these two things to basically use, for example,
00:08:33 attributes of the customer for drilldown in the sales orders.
00:08:38 Of course, all that is also built into the authorization framework of SAP BW/4HANA
00:08:44 so it's also secure from that perspective. So yeah.
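Conceptually, the association between the facts and the customer master data behaves like a join that is resolved at query time. The following sketch (with purely hypothetical table and field names) illustrates what a drilldown by customer attributes amounts to:

```sql
-- Illustrative only: associating a customer Open ODS view to the CustomerID
-- field of the facts roughly corresponds to this join; all names are
-- hypothetical, not the objects BW/4HANA generates.
SELECT c."COUNTRY", c."CITY",
       SUM(f."SUBTOTAL") AS "SUBTOTAL"
FROM   "SALES_SCHEMA"."SALESORDERS" AS f
LEFT OUTER JOIN "SALES_SCHEMA"."CUSTOMERS" AS c
       ON f."CUSTOMERID" = c."CUSTOMERID"
GROUP BY c."COUNTRY", c."CITY";
```

The Open ODS view spares you from writing such joins by hand: once the association is modeled, the attributes of the customer become available for drilldown in the query.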
00:08:50 Preparation of demo. This is now the preparation of the demo.
00:08:53 As we already showed in week one, we will use exactly the same data model here
00:08:58 as we used in week one. We have a sales order table, and in that sales order table,
00:09:04 we have already created a sales order InfoObject. In that InfoObject, we have different
InfoObjects.
00:09:12 Additionally, we have OrderDate, DueDate. Besides that, we have the texts,
00:09:18 and we have the attributes here. For some of them, we have hierarchies as well, yeah.
00:09:23 Right, but here the point is, we basically don't want to use all the InfoObjects
00:09:27 which we created earlier, which we prepared for this openSAP class basically.
00:09:32 Remember, when we showed this demo in week one, we basically had all the InfoObjects prepared.

00:09:37 We had built them ahead of time, and we're basically just showing you the result
00:09:42 or maybe assembling some of the InfoObjects into the advanced DataStore Object.
00:09:45 Now, the demo is really about looking at these data structures as they are
00:09:50 and making them available in BW/4HANA as quickly as possible.
00:09:54 All right, yeah. As we said, we will basically show both parts.
00:09:56 We will show you how to quickly load data into an aDSO from the sales order source, I think,
here.
00:10:02 Then we'll show you how to directly consume this data without even moving it to BW/4HANA.
00:10:08 Yep, then let's jump into the demo part. Here we are.
00:10:13 It's actually the same data source which we used in the previous unit, right?
00:10:17 It's the one, which we created based on smart data access, can you maybe zoom in a little bit
or...?

00:10:25 Sure, zoom. That's the one. That's the sales orders
00:10:31 pointing to the remote HANA database. Now, let's create a DataStore Object
00:10:38 directly derived from the definition of the data source. So we basically create a DataStore
Object.
00:10:45 We connect it with the data source. And the system knows right away how the metadata
00:10:50 of the DataStore Object should look. Assign a name, and as you will see,
00:10:54 that's basically all you have to do. Then you have the full-fledged capabilities
00:10:59 of BW/4HANA for data management available. So all the functionalities of the DataStore
Object
00:11:05 are in place. You see, you can actually choose what kind of,
00:11:09 what flavor of DataStore Object you want to use. When you go to the details,
00:11:13 you'll see the field list of the source object. So all that is in place.
00:11:18 I think we have to define a key field? No, we actually have a key field...
00:11:20 We have a key field, yeah. ...in this case.
00:11:22 So we can probably activate this right away, and we are done with modeling a persistency in
BW/4HANA
00:11:27 without any upfront effort. Remember, last week, we basically had
00:11:32 the InfoObjects in place before we could actually do that. What you can do as well is we can
change the type here.
00:11:38 Exactly, so what we could do, for example, is see we see here, we have the ProductID and the
CustomerID.
00:11:43 If, for example, for one of them, we know we have a suitable InfoObject,
00:11:47 we can replace the field with the InfoObject, and leverage all the information
00:11:53 which comes from that, all the metadata. And eventually, on the CompositeProvider level,
00:11:57 also the data coming from that InfoObject. Right, and now we could potentially activate it,
00:12:02 but I guess we don't want to go into more detail here. If you go back to the, oh...
00:12:07 That's activated. Let's activate the object. See how fast it is.
00:12:10 How fast. It's fast, yeah. Here you can see the special symbols
00:12:14 for InfoObjects and for fields. That's fine. Of course, you also see we have all the capabilities
00:12:20 which we know from data flows in BW/4HANA. We can build a transformation.
00:12:26 Also that works, of course, from a data source to any advanced DataStore Object,
00:12:30 whether it's based on fields or InfoObjects or any mix, doesn't matter.
00:12:34 We have DTPs available. So all the toolset which you know, is available here.
00:12:40 All right. That's it for that part.
00:12:44 Now, basically let's see how we can directly consume this data without even moving it, right.
00:12:50 So we basically want to leave the data where it is, in the remote database,
00:12:56 and first, have a look at it from the BW side to then potentially
00:12:59 decide what we want to do, right. Just to get a first impression.
00:13:02 Maybe create a quick query, which a business user can actually start working with,
00:13:06 before we decide what the next steps are to make this really a kind of industrialized solution,
00:13:11 which also has to, for example, guarantee certain availability SLAs or performance SLAs,
00:13:19 all that kind of stuff. It's really about getting a very,
00:13:21 very quick look. And for that, you create an Open ODS view
00:13:25 on the same data source again. You will see that we can right away
00:13:31 run a query on that object. So again, we assign a name to the Open ODS view here.
00:13:39 In this case, it's facts. Yeah, it's facts.
00:13:41 It's sales order data. It's based on the data source,

00:13:45 which we just selected. So the system proposes that already.
00:13:49 It still gives you the option to change because there might be multiple data sources
00:13:53 with the same name in different source systems. I guess that's it.
00:13:57 Just like in the case of the aDSO, you now see the field list on the left-hand side.
00:14:01 In the middle, if you open the characteristics and key figures folder, you actually see,
00:14:08 see all the attributes which are available and the key figures and so on.
00:14:13 Right, so we could activate this? Yeah, sure.
00:14:17 Right now, there's one thing which is actually missing here.
00:14:21 If you look at the key figures, they don't have a currency assigned.
00:14:24 We do have a currency field, but the system has not detected that this is a currency.
00:14:28 That's something which we can, for example, model on top of the source data.
00:14:32 So, Gordon, if you take the currency field, and put it, drag it down to the Currency folder,
00:14:38 then we can actually maybe use the sub total, one of the key figures, and assign the currency
field
00:14:43 as the Currency element here. Maybe let's just do it for that one.
00:14:48 Then we'll see that this already has the same effect as if you are modeling an InfoObject
00:14:53 with a currency on the BW/4HANA side. So now, we're actually ready to run a query
00:14:59 on this table, right? Without any movement?
00:15:01 I mean we can start with a simple data preview. Let's just do a data preview,
00:15:05 let's not do anything too complex here. Bring this up a little bit.
00:15:12 You see that the system has started summing all the numbers. The subtotal displays a star.
00:15:19 This star comes from the fact that for the sub total, we have defined a currency.
00:15:23 So the system knows apparently there are multiple currencies,
00:15:26 so I can't just add them up. If you drill down by currency,
00:15:28 you should see the effect here. For the sub total, we actually have a currency assigned.
00:15:34 For the other key figures, we don't. That's why the system just interpreted them as numbers
00:15:38 and added up euros and US dollars. So you see that there's some semantics
00:15:43 which the source itself doesn't come with. It doesn't know the connection between the key
figure
00:15:47 and the currency in this case. But you can define this additional semantics
00:15:52 in the Open ODS view and then basically use all the BW functionalities,
00:15:56 like all the formatting for example, which you get in the query,
00:15:59 which puts the dollar sign in front of the number, and the euro symbol at the end of the
number.
00:16:03 All this formatting is basically out of the box there. It's interesting to know that we're now
accessing the data
00:16:08 fully virtually, without persisting the data. So we're reaching out to the external database.
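The star displayed for the subtotal can be illustrated with plain SQL: amounts in different currencies must not simply be added up, so once a currency is assigned to a key figure, a meaningful total requires grouping by the currency field. Table and column names in this sketch are hypothetical:

```sql
-- Without currency semantics, this naive sum would mix euros and dollars:
--   SELECT SUM("SUBTOTAL") FROM "SALES_SCHEMA"."SALESORDERS";
-- With a currency assigned, the aggregation is only meaningful per currency:
SELECT "CURRENCYCODE", SUM("SUBTOTAL") AS "SUBTOTAL"
FROM   "SALES_SCHEMA"."SALESORDERS"
GROUP  BY "CURRENCYCODE";
```

This per-currency behavior is exactly what the query runtime enforces once the currency element is modeled in the Open ODS view.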
00:16:14 Next step, and that's probably the conclusion of the demo, would be to add master data to this.

00:16:19 Now, again we have two options. We could, for example use an InfoObject here as well.
00:16:24 If you go to maybe product, or CustomerID here, on the right-hand side, we have the
possibility
00:16:30 to create an association to an InfoObject. So we could just, as in the case of the advanced
00:16:36 a few minutes ago, associate an InfoObject here, and then leverage all the capabilities of the
InfoObject.
00:16:41 For example, if it has a hierarchy, this would be consumable on the query side.
00:16:47 If we have navigation attributes, we could use them and so on.

00:16:50 But we are actually going for a different way. We are basically adding a second Open ODS view
00:16:56 for the customer master data coming from the remote system. Right, so we're basically
combining
00:17:01 two data sets from the remote system and accessing them from our local BW/4HANA system.

00:17:10 So now, we're talking about master data. That's master data, right.
00:17:14 It's again, now in this case, it's based on a virtual table using smart data access.
00:17:18 We have our source system assigned already. That's the source system we created in the last
unit.
00:17:23 This is this openSAP source system. Exactly. Here we are. Now, let's choose
00:17:29 the right table. So we're even skipping the step
00:17:31 of creating a data source here, we could also create a data source,
00:17:34 and then the Open ODS view on top. But for...
00:17:38 Take this products here. Let's take a customer.
00:17:41 Yeah, let's take the customer. All right and finish.
00:17:46 Right, so this is master data. Looks a little bit different.
00:17:49 Maybe, let's have a look at this here. Okay, that looks good.
00:17:52 Can you open the characteristics? Also looks good, okay, that's almost fine.
00:17:57 Now, let's add a little bit more complexity here. For the customer, we want a text
representation as well.
00:18:04 So we can actually add, just as for the InfoObject, we can add texts.
00:18:09 Of course, these texts need another source. In this case, we have created,
00:18:15 we find in the source system a table, which creates the language-dependent text
00:18:20 for the company or for the customer. Customers, maybe not such a good example.
00:18:26 In that case, for products that would be, of course, very relevant because bikes
00:18:29 have a different name in German than in English, for example. So we have a customer text here as
well.
00:18:35 So let's finish that. Exactly, and then you see there's an additional folder
00:18:41 called Text here. You probably have to do this small modification here,
00:18:47 removing the language. Remove it, yep.
00:18:52 Reassign it. Just drag it over to the Language Code here.
00:18:57 And now, we're basically done. Let's activate this Open ODS view.
00:19:00 Now, based on the external data, it basically behaves like an InfoObject with attributes and texts.
00:19:06 No hierarchies but attributes and language- dependent text. We could even start defining
authorizations
00:19:11 on top of this by the way. So now, let's go back to the facts,
00:19:18 which we created earlier. Here we are, these are the facts.
00:19:21 All right, and associate this, go to the CustomerID.
00:19:28 One below the product. Yeah, here we are.
00:19:32 Exactly and associate this Open ODS view for the customer. Right, and let's even use some
navigation attributes.
00:19:41 So we want to drill down by some of the stuff just as we did in week one if you...
00:19:50 ...enlarge this a little bit. Let's take some of the...
00:19:52 Postal Code maybe. Exactly, Country, Region,
00:19:54 State, Province, and City maybe, the lower four ones, OK.
00:19:59 That's fine. So those are ones we want to use.
00:20:01 You can also switch the display here to Text. The system should actually offer this.

00:20:06 Maybe let's do Text. Key and Text or Text? Text and Key is fine or Key and Text is also good.
00:20:11 Either way, and let's activate this now. If we run the preview now,
00:20:15 we should actually see multiple things. First of all, we should have all this address information,
00:20:22 the geographic stuff available for drilldown in the preview.
00:20:27 If we drill down by customer, we should actually see the text representation
00:20:31 next to the CustomerID. So maybe let's have a look at the attributes.
00:20:37 For example, we can drill down by city now. Let's use city, just for...
00:20:44 demo purposes. You see a list of cities here,
00:20:46 which actually come from the customers. So it's really a join with navigation attributes
00:20:50 as you know for InfoObjects. To conclude the demo, maybe let's drill down by CustomerID.
00:20:58 And check if we really see the text representation next to the CustomerID.
00:21:03 So it offers basic functionality, which you know from InfoObjects.
00:21:08 Not the full flexibility, not the full capabilities like hierarchies are missing,
00:21:12 some formatting things like conversion exits are also missing.
00:21:16 But it's a very quick way to start looking at data, and start working with data,
00:21:21 and determine what the next steps are. For example, if you see that,
00:21:24 well, that's all good and nice, but actually, we need to tweak the customer numbers
00:21:30 a little bit to really match because potentially, the master data
00:21:34 and the transaction data comes from different systems, and they don't really match.
00:21:38 Then you can work out what you have to do. Maybe introduce a transformation in between
somewhere,
00:21:43 store parts of the data and transform them, and bring it together with the other data then
00:21:48 on this transformed data. So you can basically start working up your way
00:21:51 towards a solution, which business users can work with. Of course, you have the possibility to
give something
00:21:58 to the business user very, very quickly without having this upfront design of a long
00:22:06 list of InfoObjects, and the long loading processes
00:22:09 before you can actually start. So this is really a second modeling paradigm
00:22:13 next to the kind of standard InfoObject-based EDW-styled modeling paradigm,
00:22:18 which is very, very useful in many projects. So let's summarize what we saw, Gordon.
00:22:26 Yeah, we saw that field-based modeling speeds up data integration dramatically

00:22:31 for non-SAP data. You can use BW/4HANA for agile
00:22:38 and data-driven modeling approaches. So this is what we called agile data modeling.
00:22:44 But you can use, let's say, classic BW modeling as well. I mean, when InfoObjects are already
in place,
00:22:51 then we are quite fast as well. But this kind of modeling, this field-based modeling,
00:22:57 speeds up modeling scenarios, and, of course, can be used as a prototyping mechanism.
00:23:03 That's also very useful here. To show the business the data,
00:23:05 what is in the data, and so on and so forth. And actually determine where the kind of
00:23:10 deficits are, what you have to work on and so on. Yeah, and I think the beauty of BW/4HANA
00:23:14 is that you can combine both approaches, that you can combine this field-based approach
00:23:20 with the classic InfoObject-based approach. It's not about right and wrong.
00:23:23 It's about what's right for a given purpose. Okay.
00:23:27 With that, we'll hand over to the self-test.

Week 2 Unit 4

00:00:08 Hello and welcome to week two, unit four, SAP SQL Data Warehousing - Overview.
00:00:14 So in this unit, we are basically going to introduce the second pillar of SAP's data warehousing
strategy,
00:00:19 which is SAP SQL Data Warehousing. And for that, I have Axel with me
00:00:22 who is from our SQL Data Warehousing team. Thanks for having me.
00:00:25 You're most welcome, Axel. So what are we going to discuss in this unit?
00:00:29 We'll first give an introduction of SAP SQL Data Warehousing,
00:00:32 which is based on open standards. We'll show you the core components
00:00:36 of SAP SQL Data Warehousing, and we will basically also do a little bit of a positioning,
00:00:41 and show you what are the driving principles behind the development of this SAP SQL Data
Warehousing,
00:00:47 and our big point here is this Agile Data Warehouse development.
00:00:52 So, Axel, maybe can you describe the big picture a little bit for us?
00:00:55 Yeah, so the SAP SQL Data Warehousing approach is based on the HANA Data Management
Suite.
00:01:00 You can see this in the middle of the screen. So the HANA Data Management Suite is based
00:01:06 on certain components, which are SAP HANA, of course, including the XSA or the Cloud
Foundry stack,
00:01:11 also the supporting tools available on the stack. We have the integration with the SAP Data
Hub,
00:01:17 also focusing on the SAP Enterprise Architecture Designer, which is a modeling tool.
00:01:23 And then we also leverage SAP Big Data services to leverage the Hadoop space in this
environment.
00:01:31 So, having said that, this is our baseline. We have the HANA Data Management Suite
00:01:35 with the different products and tools available, and on the next screen, we'll show you how this

00:01:42 fits into the SQL data warehouse approach. So that's basically now the next level of detail,
00:01:48 where you show what tools and what editors you're using in the area of SQL data
warehousing.
00:01:54 So for us it's important to really highlight that we have a bunch of editors available,
00:01:59 but let's take a quick look at the further details. So in the middle of the screen you can see
00:02:05 the different components also available on the HANA Data Management Suite,
00:02:09 which is the SAP Enterprise Architecture Designer. I tried to highlight that one already before,

00:02:14 so this is the tool where you can really focus on data modeling. It's not an implementation tool, it's a
real modeling tool.
00:02:22 Sorry, it's the one on the left-hand side right, called SAP EAD...?
00:02:25 Correct. SAP Enterprise Architecture Designer, it's on the left side of the screen.
00:02:31 And we are actually focusing on two modeling concepts in EAD for data warehousing,
00:02:36 which is the conceptual data model. So this is a model relevant for the business users
00:02:41 where you are able to provide information on an entity level.
00:02:46 You specify the different attributes and the measures, and also how the entities relate to each
other.
00:02:51 And within the same tool, you can migrate a conceptual model into a physical model.
00:02:57 So this would be related to the physical implementation, so you can focus on
00:03:03 how you want to model the data warehouse, which data modeling techniques.

00:03:07 And you also integrate the different entities on the table level,
00:03:10 you apply primary keys and all of that stuff. So you can do all of that
00:03:14 in Enterprise Architecture Designer. Once you're done, you can export the models
00:03:19 into an XSA-compatible structure. And as XSA is based on a file-based approach,
00:03:25 so every object you want to build on the database, every object you want to build on the
application layer
00:03:31 can be exported in a separate file. Due to the fact that you get a bunch of files,
00:03:35 we are offering either a local export, like a zip file export, a project export,
00:03:41 or you can bring them into a Git repository or any other repository management system
00:03:46 to keep track of the number of files related to your application which also relates to versioning.

00:03:52 So that's the topic on Enterprise Architecture Designer. Once you model your system
00:03:57 and you want to reuse that, you switch to Web IDE,
00:04:01 which is the web-based integrated development environment. And within the Web IDE,
00:04:06 we have a number of editors available, you can see in the middle of the screen.
00:04:10 So there's interaction from the EAD models, you can bring them into a local zip file
00:04:16 or into a Git repository. And the same repository with the same objects
00:04:20 can be reused in the Web IDE. So you clone the repository into your project and Web IDE,
00:04:26 and due to the file extension, this will bring up the built-in editors in the Web IDE.
00:04:31 So Web IDE is installed on the HANA platform, there's no separate tool installation required.
00:04:36 Everything is installed on the platform. The related topics for data warehousing are, of course, SQL.
00:04:42 So if you want to focus on your scripts to create the tables and the different entities,
00:04:48 you can do that. What we're focusing on is really a Git-based approach,
00:04:53 a workspace approach, where we have a number of files created,
00:04:58 then you'll use the Calc View editor, a graphical editor, to edit
00:05:03 how you want to integrate the data. You can bring in procedures.
00:05:07 For persistency management we have, of course, tables in there, but we're also offering something we call
00:05:13 the native DataStore object, and we'll go into further detail in the next section.
00:05:18 We're able to reach out to connected systems using virtual tables.
00:05:24 We have the flow graph integration, which is how you want to process the data
00:05:28 from one persistency to another... Similar to transformation capabilities which...
00:05:31 Correct. And then we're offering also something we call Task Chain.
00:05:36 So this is if you want to sequence activities like a flow graph,
00:05:42 an NDSO data activation, or stored procedures,
00:05:46 and you want to put them in a sequence, we'll give you the Task Chain editor,
00:05:49 which is also a graphical editor, where you can put the sequence in a row.
00:05:54 So this is again, as you mentioned, so this is related to BW process chains,
00:05:59 but it's fully embedded in the Web IDE. So this is maybe a little bit or maybe a lot
00:06:03 of theory here and a lot of different components. As Axel said in the next unit,
00:06:07 we are basically going through a very detailed demo where you can see
00:06:10 all these components at work and where you get a feeling of what this looks like
00:06:14 in the Web IDE with all the individual components. And you'll get a much better hands-on feeling about it.
00:06:19 There's one topic I want to mention. So the integration to SAP Data Hub,

00:06:22 which is also part of the HANA Data Management Suite. We have a data tiering solution in place,
00:06:27 which is called Data Lifecycle Manager, so you can see that also in the middle of the screen.
00:06:32 So this is, you know, how you can offload the data into the SAP Data Hub core data tiering solution,

00:06:38 which also offers the option, using DLM, to offload the data into the Big Data services environment.
00:06:45 So these are the components we highlighted before, related to the HANA Data Management Suite.
00:06:50 You can find them all, and we have the graphical editors available
00:06:54 to make the implementation and the modeling experience a real success.
00:06:59 All right, so let's move on to the main drivers for this kind of development.
00:07:03 What are the key ideas behind, or driving, the development of SAP SQL Data Warehousing?

00:07:09 Okay. So, of course, it's time to market. So I highlighted before the
00:07:13 Enterprise Architecture Designer as a modeling tool. So this is our offering,
00:07:16 where we bring in industry data models. So this will give you a jumpstart
00:07:21 on the capabilities we offer. And, of course, as we offer conceptual models
00:07:26 in Enterprise Architecture Designer, we'll have the ability to talk to the business users
00:07:31 to verify our offering, and then the business users can adjust based on their needs,
00:07:36 and then you can bring that into the physical environment. So this is in terms of time to market.
00:07:42 Due to the fact that we are open standard here, what we're offering is also methods
00:07:47 and standards to run complex systems. So continuous integration, continuous testing
00:07:52 is a topic you might be aware of. So every time you develop a new feature,
00:07:57 we'll offer the option to bring in automated tests, and you can kind of run them
00:08:02 before you check in the changes to be aware that the changes are consistent,
00:08:08 but also with respect to performance, so that will provide a stable system
00:08:13 and also a good performance response once you bring the changes to the system.
00:08:20 This solution is attracting IT professionals, people out there really interested
00:08:26 in using the bits and pieces to build a data warehouse system,
00:08:30 also using open-standard environments like continuous integration and continuous testing.
00:08:36 We are attracting those types of IT professionals with this solution.
00:08:41 Of course, every time HANA brings in new features, they're immediately available for the solution,
00:08:47 as we're fully based on the Web IDE, and all the editors we build are available in the Web IDE.

00:08:52 So once we bring in new features on the HANA level, they're immediately available,
00:08:56 so you stay competitive, no matter if you're interested in new levels of machine learning, as an example.
00:09:03 So this becomes available. So all the predictive algorithms, of course,
00:09:07 the geospatial capabilities of HANA, all this is built in and readily available for solutions.
00:09:14 It also integrates the complexities so if you want to bring in data from different sources,
00:09:19 of course you can do that. And you can also, as I mentioned before,
00:09:22 in terms of data tiering, you can offload some data to some other storage locations
00:09:27 like the SAP Data Hub core data tiering solution or Big Data services, still accessible
00:09:32 from the central HANA model. So you don't need to change the model if you tier data;
00:09:37 it's all going to be accessible from the single HANA model. Maybe coming back to the point of data integration:

00:09:41 those capabilities are actually also the same that we use from the BW/4HANA side
00:09:45 when it comes to integrating non-SAP data. That's basically the topics of previous units
00:09:50 of this week, which we already saw there. Right, so you already described it's not only
00:09:55 the set of tools which we have in the HANA platform, it's also the modeling paradigm
00:09:59 and development paradigm, which is a key component of this whole concept.
00:10:04 Can you maybe elaborate a little bit on this? Of course, so we're heavily focusing on DevOps.

00:10:09 So we'll give you the ability to work on the modeling, on the planning, and on the implementation part,
00:10:15 which is kind of the development approach. But then once you bring that to an operational level,
00:10:20 you need to be sure that everything you implemented has been fully tested before you bring that
00:10:25 into an operational approach. And if you follow the slide on the screen,
00:10:31 you can see we have different sections in here, and they all nicely interact with each other.
00:10:36 The tools are associated with the different phases. You know, we talked about Enterprise Architecture Designer,
00:10:42 we talked about Git, we talked about the Web IDE,
00:10:45 we talked about continuous integration and continuous testing tools.
00:10:50 We have data warehousing foundation available for data tiering solutions,
00:10:54 and this all nicely integrates. So it's a tool-based development process,
00:11:00 and we'll offer this option in terms of faster time to market. So, basically you do not have to follow this approach,
00:11:08 of course, as a customer, but I mean, for those people who are familiar
00:11:12 with kind of modern standard software development, that's, of course, the approach
00:11:16 which is typically applied in the cloud world. And we basically have used this as a template
00:11:24 for all the development of data warehouse solutions as well, right?
00:11:28 It's an option, but we have prepared the tools in a way that they actually support
00:11:32 this modeling methodology. In this class, we're talking on-premise, but of course
00:11:36 we have them available also in Cloud Foundry environments like SAP Cloud Platform.
00:11:40 So you can easily adapt then also for the cloud environment. All right, so that's basically the key idea
00:11:46 of agile development here, right Axel? Yeah, so we learned in the past,
00:11:49 so if you're coming from an SQL approach, and you probably have two systems,
00:11:53 like a dev system and a production system, in most cases, those systems are out of sync.
00:11:59 So if you look at the left side of the screen, you probably end up in a situation,
00:12:05 we'd better not change a running system, because we're not prepared for the changes,
00:12:09 we have no continuous integration, no testing in there.
00:12:12 So therefore you'd better change to the DevOps approach I just walked you through.
00:12:18 We're kind of pushing you in a way you might want to change as frequently as required.
00:12:24 Because every time you change something, we're offering the option to bring in an automated test.
00:12:29 So you're kind of ensured that the modification you applied fits the system requirements.
00:12:36 So you can change as frequently as required, which, of course, is a major culture change
00:12:43 compared to "we only do less frequent updates to the system" versus "we change as frequently as we like".

00:12:51 That's basically encouraged or enabled by integrating the Ops part in the DevOps cycle, correct?
00:12:57 Correct, yes. So also from a development perspective,
00:13:02 working with SAP SQL Data Warehousing is a little bit different from,
00:13:06 for example, working with BW/4HANA or with an ABAP-based system.
00:13:10 You don't have a central development system on which everybody works,
00:13:13 but you have a slightly different approach here. Yes, so our approach here is,
00:13:17 of course, we have a central repository. You can see that in the middle of the screen on the left.
00:13:21 So in this case, it's Git, but you can use other repository management systems.
00:13:26 So in these terms, you have a central repository, but every developer clones the repository
00:13:33 into his local workspace. And every developer can work on the full set
00:13:38 of the objects available in the Git repository, so everybody works in his own isolation level.
00:13:45 The check-in process, of course, needs to be managed. As one developer completes the development of a feature,
00:13:51 it has to go back to the central Git repository, and this is a check-in process.
00:13:56 And for the check-in process, we have the automated tests to verify that the modification doesn't break the system.
00:14:02 So the check-in process is a thing you need to discuss within a team. You can sometimes automate it,
00:14:09 because you're implementing something new, or something that nobody else makes changes to.
00:14:15 But in general, it's about checking the local changes back into the central Git repository,
00:14:21 and then the central repository will be deployed once to your QA system or production system.

00:14:27 One nice feature is also that you can hold multiple versions of your data warehouse development
00:14:32 in the same Git repository, right? So you can basically have last week's version,
00:14:35 this week's version, versions from a long time back, and all this can be kept here and then actually checked out
00:14:41 to do corrections or enhancements of even older versions which are currently still deployed
00:14:46 in the productive system, while the development side has already progressed to a much, much newer release, right?
00:14:52 Yeah, so that's the slide on the right. So the specific thing about XSA and Cloud Foundry
00:14:58 is that we're specifying every object on the application level or on the database level by file.
00:15:04 So within that file, we specify the to-be structure. So we don't specify the delta,
00:15:10 like an Alter Table statement, as an example; we only specify the new to-be structure.
00:15:15 And once you build it on the database, the corresponding Alter Table statements
00:15:21 for persistency management will be generated and applied to the database.
00:15:26 So it's agnostic; that's why you specify the new to-be structure,
00:15:31 because you're using the different editors
00:15:36 to specify how the objects should be created. You check that in to a Git repository,
00:15:42 and then you run a build step, and the build step will adjust the structures
00:15:47 on the database. It will apply the changes to the application layer.
00:15:52 So it's kind of disconnected, but you're clear on, I have a version of files,
00:15:58 and this will also come back to your point, we're able to revert to the previous version

00:16:03 because in Git you have different branches. You can bring in different versions of branches,
00:16:09 and once you want to switch back to the previous branch, as the branch consists of the full set of objects,
00:16:15 you can reset to the previous branch. And due to the capabilities
00:16:19 of Core Data Services, we're comparing the structure on the database
00:16:23 with the new to-be structure, and we're dynamically generating
00:16:26 the corresponding Alter Table statements to be applied on the database.
00:16:30 So data will be kept as long as the data types are compatible
00:16:33 once you're switching versions. And virtual objects like the calculation views
00:16:39 or the flow graphs, as an example, will always be deleted and recreated.
00:16:44 So we'll keep track of the versioning here. We have the different layers
00:16:48 for specifying what the application should look like, and then we have the build phase,
00:16:53 where you're building the objects on a database, and the adjustments will be dynamically
generated.
00:17:01 So basically each of the tool areas which we saw earlier, whether it's Enterprise Architecture Designer,
00:17:06 whether it's artifacts built in the Web IDE, or even the stuff coming from smart data integration.
00:17:11 All this produces files, and the XSA build process turns these files into runtime artifacts on the database.
00:17:19 Correct, yes. And in some cases, as you said,
00:17:22 if it requires changes to physical objects like tables, it actually determines
00:17:26 whether it's an alter statement, whether it's an add column statement,
00:17:29 or whatever, what kind of changes are required. So maybe let's briefly summarize:
00:17:35 On the left-hand side, we see a little bit of the positioning of SAP SQL Data Warehousing.
00:17:41 What it's geared to, what the customer group is which we want to attract with it.
00:17:46 Right, so the customers we want to attract are the customers who are interested
00:17:52 in running an SQL-based data warehouse today. So we want HANA as a platform to attract those customers
00:18:00 to onboard the solutions they have today on a non-SAP environment.
00:18:04 So onboard those applications to the same platform, right next to BW, as an example.
00:18:11 So you can reuse all the competencies you've built in your career,
00:18:18 and you can reuse them in the SQL Data Warehousing approach.
00:18:23 We're focusing, of course, on third-party data warehouse replacements,
00:18:27 so this is the topic on, well, let's bring a Teradata system and the model into a HANA world.
00:18:37 We are offering development agility, as we tried to highlight before.
00:18:40 So we'll give you the tools and the techniques in an open-standard environment, where you can make
00:18:47 the systems and the applications you're building more secure, because you're able to pre-run tests before you
00:18:54 bring the systems into your production environment. And I guess we have already discussed
00:18:59 all the characteristics here on the right-hand side. So from a degree-of-freedom perspective,
00:19:04 you basically have the full SQL capability. So you're completely free in what modeling paradigm
00:19:10 or methodology you want to follow. Whether that's multi-dimensional,

00:19:12 whether it's third normal form, whether it's data vault, anything is fine right there.
00:19:17 None of these approaches is preferred, everything is possible here with this toolset.
00:19:22 Agility we basically mentioned, think of the DevOps picture, that's really key to understanding
00:19:31 how we want these components to interact and how we basically encourage customers to use them.
00:19:36 Flexibility, customizability, of course. That's one of the key components as well,
00:19:41 and I think we have well described this. What's maybe interesting or important to consider
00:19:46 is that, of course, we have a variety of tools here, which are provided by SAP.
00:19:51 But we're still open to integrating tools from third-party vendors as well.
00:19:55 So you can use the full-fledged tool set, which the HANA platform provides.
00:19:59 And basically these tools cover all the aspects of data warehousing.
00:20:02 If you think back to the slide where Axel showed you all the components,
00:20:07 you have everything in place you need to build a data warehouse.
00:20:09 And you can use these components in an integrated way, as we describe here,
00:20:13 but you can also replace individual components of this picture
00:20:18 with your own best-of-breed solutions and bring those in. Yeah, correct.
00:20:22 That's the open standards approach, I would say. Okay, so let's summarize: What have we learned in this unit?
00:20:28 We've basically gotten our first impression of SAP SQL Data Warehousing.
00:20:32 You have understood that it's an open approach, and that we basically want to address customers
00:20:40 with deep SQL and database knowledge with it. So it's not so much about a conflict with
BW/4HANA,
00:20:48 it's actually much more about extending our footprint in the data warehousing area
00:20:52 and also about collaboration with BW/4HANA, which we'll see in the next unit.
00:20:57 You've also seen that we basically have a tool set in the HANA platform,
00:21:00 with Enterprise Architecture Designer, with the Web IDE, with the features
00:21:05 which smart data integration brings, which allows you to build an end-to-end data warehouse
00:21:10 completely out of the box with this tool set. And last but not least, also from a methodology perspective,
00:21:17 we actually have kind of one preferred approach, which is DevOps, which of course is optional;
00:21:22 you don't have to follow it. But we strongly encourage it, and we make sure that our tools
00:21:27 support this modeling and this working paradigm, development paradigm as well as possible.
00:21:33 And I guess with that I say thanks to Axel and encourage you to do your self-test.
00:21:39 Thank you.

Week 2 Unit 5

00:00:08 Hello, and welcome to week two, unit five: SAP SQL Data Warehousing integration with SAP BW/4HANA.
00:00:14 With this unit we are basically going to cover a number of scenarios showing the integration capabilities
00:00:21 between SAP SQL Data Warehousing and SAP BW/4HANA, and we will give you a demo which gives you an impression
00:00:27 of the look and feel of SAP SQL Data Warehousing, as well as the integration with BW/4HANA on the other side.
00:00:34 So, let's jump into the scenarios. So, from a use case perspective, we'll try
00:00:40 to highlight the onboarding of SQL data marts to your data warehouse system.
00:00:44 We'll also want to highlight the combination of SQL data warehousing and BW/4 on the same
platform.
00:00:51 And, of course, we also want to focus a little bit on building SQL-based data integration layers,
00:00:58 which could be in third normal form or use the data vault modeling technique.
00:01:02 Basically, that's the scenario which you see on the right, in the image.
00:01:06 We basically have, pictured here, an integration layer consisting of a couple of tables on the SQL side,
00:01:12 we have a flow graph which joins and transforms the data into a usable format for the BW/4HANA side,
00:01:22 stores it in a table or maybe in an NDSO, which we'll also see in the demo.
00:01:27 And then on the BW/4HANA side, you can use the techniques which you learned
00:01:30 in the previous units of this week, like the HANA Source System and HANA DataSources
00:01:35 to continue working with this data, either in a virtual way, accessing it right
00:01:39 on the SQL side, or to load it further into SAP BW/4HANA data targets.
00:01:45 I guess with that, we are ready for the demo. So as we said, the first part of the demo, which will be done by Axel,
00:01:52 basically gives you an impression of the look and feel of the Web IDE and how you work with the tools of SAP SQL Data Warehousing.
00:01:58 And later on, I will take over and show you how to integrate those assets, those artifacts,
00:02:05 with the SAP BW/4HANA side. So we're logged on to the Web IDE for SAP HANA;
00:02:12 it's a browser-based application. Within that, we have a workspace,
00:02:17 and we also prepared a project to do that. There are different modules within a project; we're going to focus
00:02:23 on the database module at the beginning. So, there's a source folder, and we created a set
00:02:29 of subfolders to do that. One folder is about the source data,
00:02:34 so those are the tables, like third normal form, as we highlighted before, and you can have a look
00:02:39 at the definition of the objects by right-clicking on the file with the .HDBCDS file extension,
00:02:46 which is pretty much the definition of the entity, like a table.
00:02:50 We have different editors, like a code editor, and also a graphical editor.
00:02:55 So this is the definition of the entity address. It consists of a number of attributes
00:03:02 and also holds the key. So this is the definition of how we want
00:03:05 to create a table later on, using the build process. Maybe I can briefly jump in here.
00:03:10 So you see this is not a Create Table statement, but it's a description of the end state of the table
00:03:15 as you want to have it in the target system, or as a result of the build.
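To make that concrete, a minimal sketch of such a design-time entity and the runtime table the build would roughly derive from it might look like this (entity and column names are illustrative, not the ones from the demo):

```sql
-- Design-time .hdbcds file (CDS-style, simplified) describing the end state:
--   entity Address {
--     key AddressID : Integer;
--         City      : String(40);
--         Country   : String(3);
--   };
-- The build creates a table in the container that corresponds roughly to:
CREATE COLUMN TABLE "ADDRESS" (
  "ADDRESSID" INTEGER NOT NULL,
  "CITY"      NVARCHAR(40),
  "COUNTRY"   NVARCHAR(3),
  PRIMARY KEY ("ADDRESSID")
);
```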

00:03:20 So we are actually on the file level, on top of the actual database where we do the modeling
00:03:25 and the description of the artifacts. The build process will basically take those files
00:03:31 and turn them into runtime artifacts in the database, right?
00:03:33 Exactly, so this is the to-be structure, the way you want to have the table created at the end,
00:03:39 versus the way the table exists right now in the database, or doesn't exist yet.
00:03:43 So this is the part of the build process. So the build process kind of compares the new
00:03:48 to-be structure with the existing structure on the database, and it will identify
00:03:53 the relevant Alter Table statements and also execute the statements
00:03:57 to adjust the definition, as you can see it right now, with the structure on the database.
00:04:03 So it's implicit Alter Table statements which are going to be executed. The data will be kept, as long as it's possible,
00:04:09 like, for data type conversions, with all the supported conversions, you know, available.
00:04:15 Of course, if you do some inconsistent data type conversions, you know, you need to manually adjust that.
00:04:22 But, that's the way you can do that. And next to the code editor,
00:04:28 we also have a graphical table editor. So that's the table editor using the graphical form.
00:04:34 If you double-click on it, you can see the same elements available also on the .HDBCDS file,
00:04:41 and actually, the graphical table editor is leveraging the same .HDBCDS file I highlighted before
00:04:48 on the code editor level. So there are no inconsistencies; it's two views on the same thing, but there's really one thing underneath, which is the file definition.
00:04:57 Exactly. If you want to apply some changes in the graphical editor, like adding a column, you can do that in here.
00:05:03 So what will happen once you save is that the definition will be adjusted in the .HDBCDS file.
00:05:09 And you can really decide whether you prefer to work with the file-based editor approach,
00:05:13 and to cut and paste out of maybe some template objects, or whether you prefer this graphical way of editing.
00:05:18 Exactly. It's quite convenient.
00:05:20 And those files are also the ones which are being generated out of Enterprise Architecture Designer, or any other modeling tool.
00:05:26 So you can pre-generate the files, and then you can reuse them in the Web IDE build and editor concept.
00:05:32 And for the detailed implementation, if you apply changes to the files, you save them, and you bring them back
00:05:38 into your Git repository, which is the external repository. So you keep the repositories in sync
00:05:44 no matter where you change the file. Okay, so this is one entity,
00:05:48 then we also have a customer entity, and I'll just walk you through what this looks like.
00:05:54 So, it's pretty much about master data, right? So this is the customer entity,
00:05:58 and then we also have a customer address entity. So, it's pretty much straightforward.
00:06:04 Having said that, we have those tables. We're loading some data to those tables.
00:06:09 And then, we created two more folders. One folder, we called it Dimensions.
00:06:15 So this is a folder where we're creating a customer dimension table.
00:06:20 We're integrating data in here. The way of integration is we're using an SDI flow graph,
00:06:26 a Smart Data Integration flow graph, which runs on the HANA platform.
00:06:31 And it processes data from the source table into a target table. The target table is this dimension table, right?
00:06:37 Right. So that will store the result of the transformation in the flow graph.

00:06:41 Exactly. We're using the flow graph editor to visualize how the flow graph is configured.
00:06:49 Once we open the flow graph, it's pre-configured, in this sense, to use the source data "address" table,
00:06:55 also to integrate the data from the source data "customer address" table, and, as a third pillar,
00:07:02 to use the source data "customer" table. So then we have a join operation to integrate the data.
00:07:08 You can also add, like, data quality processes into this flow graph.
00:07:13 And then we'll push the data into the target table, which is the table we predefined here.
00:07:18 You can also push the data into a generated table. So once this is configured and you saved it,

00:07:25 and you build it on the database, which has been done already,
00:07:30 you can click the execute button to process the data from the source table into the target table.

00:07:37 So what's really happening here is, and that's why we show this example,
00:07:40 we have data coming in third normal form, a highly normalized data structure, these three tables.
00:07:47 We basically join them to build a denormalized dimension on the right-hand side for the customer,
00:07:53 which is something we can easily use on the BW/4HANA side.
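In plain SQL terms, the flow graph's work is roughly equivalent to the following join (all table and column names are invented for illustration, not the ones in the demo):

```sql
-- Join three normalized source tables into a denormalized customer dimension.
INSERT INTO "DIM_CUSTOMER" ("CUSTOMER_ID", "NAME", "CITY", "COUNTRY")
SELECT c."CUSTOMER_ID", c."NAME", a."CITY", a."COUNTRY"
FROM "CUSTOMER" c
JOIN "CUSTOMER_ADDRESS" ca ON ca."CUSTOMER_ID" = c."CUSTOMER_ID"
JOIN "ADDRESS" a           ON a."ADDRESS_ID"  = ca."ADDRESS_ID";
```

What the flow graph adds on top of this plain join are the graphical editor, optional data quality steps, and the execution and monitoring infrastructure.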
00:07:55 Correct. So you can double check if the data
00:07:58 has been processed, and the quality of the data by clicking on the grid.
00:08:02 So this will now connect to the Database Explorer module and display the data processed into the target table.
00:08:11 As Ulrich mentioned, this will be one interface you can integrate or access from the BW/4HANA perspective,
00:08:18 which is probably almost sufficient. We also prepared another level; we call that a data mart.
00:08:27 Of course, we're reusing the same table and information, but it could be a way more complex scenario
00:08:33 in terms of transaction data, if you wanted to leverage the capabilities of a Native DataStore Object,
00:08:39 including request management, like rollback capabilities, and also delta data processing capabilities.
00:08:46 So it's something you know from the BW/4HANA advanced DataStore object.
00:08:52 So the NDSO is kind of the environment and the object we've created for SQL Data Warehousing.
00:08:58 So you're able to process master data or transaction data into the NDSO, leveraging the recordMode capabilities
00:09:07 or change data capture capabilities to process the data. So, it's also based on the same file extension, .HDBCDS.
00:09:18 Once you right-click on it, and open it with the graphical editor, you'll find out
00:09:23 that the graphical editor component is the exact same as we used before on the table level.
00:09:29 So, it comes up with the same UI. Once you double-click on it
00:09:34 you see, like, the same elements and the same tabs, but there's one more tab called DSO detail.
00:09:39 So within the DSO detail you can, you know, either check or uncheck the changelog capabilities.
00:09:46 You have the settings for how the measures should be aggregated.

00:09:50 You have the information on the inbound queue deltas and also what inbound queues we have.
00:09:55 So that's very, very similar to what you have in the advanced DataStore object.
00:09:58 Even some of the terminology is actually the same, and that's of course one of the features which helps you to integrate data from one side to the other,
00:10:05 because it basically relieves you of taking care of delta mechanisms,
00:10:09 because the NDSO has this built in. Right.
00:10:12 Of course, this is an unknown topic to the SQL world, to the pure SQL world,
00:10:17 because this is an SAP proprietary object, but we make this available
00:10:21 with the SQL Data Warehousing approach. Of course, we also have a flow graph in here
00:10:26 to push the data to the inbound queue. And then, there are two steps required,
00:10:31 like pushing the data into the inbound queue of the NDSO, and then running the NDSO data activation task.
00:10:37 Due to the fact that this is a sequence of activities, we offer the option of the Data Warehousing Foundation
00:10:43 task chain capabilities. So we've prepared a task chain, and in this task chain, what we pretty much do
00:10:49 is we'll of course execute the flow graph, so this is the exact same flow graph, you know,
00:10:54 we prepared to push the data to the NDSO inbound queue. Then we'll run a second step,
00:11:00 which is the activation of the NDSO. So this is the way you process data from the inbound queue
00:11:06 into the active data table, and also in parallel into the changelog table, to provide the capabilities for rollback
00:11:12 and also delta data processing. And for those of you who are a little bit confused now,
00:11:16 this is in fact very, very similar technology to what we're using with the BW/4HANA process chains
00:11:22 in the BW/4HANA cockpit. Right. So, as the administrators
00:11:28 should not log on to the Web IDE, we also have a Data Warehousing Foundation monitor.
00:11:33 Once we bring up the monitor, we'll log on to the Data Warehousing Foundation monitor,
00:11:39 and from the look and feel, you see the same task chain as you prepared in the Web IDE before.
00:11:44 So this will be the UI for the administrators to verify if a task chain has been executed,
00:11:50 also the status of the execution. And you can also schedule the execution.
00:11:55 So what we've done in the past, we've already prepared the run,
00:11:59 so the run's completed successfully. Once you click on it, you get all the information,
00:12:04 like the status and also the details: what has been processed.
00:12:08 So, in case of error, you'll also find the error messages in here.
00:12:11 So this is the way the administrator can keep track of which task chains have been executed.
00:12:18 The monitor will also provide you a view on the NDSO status. So if you have a number of NDSOs in the system,
00:12:24 you can verify what the processing status of the individual NDSOs looks like.
00:12:29 So the monitor gives you the information which is relevant for the administrators.
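Conceptually, the activation step that such a task chain triggers can be sketched in a few lines. The following Python snippet is purely illustrative — all names are hypothetical, and the real Data Warehousing Foundation implementation is table- and request-based inside SAP HANA — but it shows the mechanics described above: records move from the inbound queue into the active data table as an upsert by key, while before and after images go to the changelog, which is what enables rollback and delta processing.

```python
# Conceptual sketch of NDSO-style activation (illustrative only; the real
# SAP implementation works on generated HANA tables, not Python dicts).

def activate(inbound_queue, active_table, changelog, request_id):
    """Move records from the inbound queue into the active table,
    writing before/after images to the changelog for rollback/delta."""
    for record in inbound_queue:
        key = record["id"]
        before = active_table.get(key)          # current active image, if any
        if before is not None:
            changelog.append({"request": request_id, "mode": "before", **before})
        active_table[key] = record              # upsert the new image
        changelog.append({"request": request_id, "mode": "after", **record})
    inbound_queue.clear()                       # the queue is consumed by activation

# Tiny usage example: an initial load followed by a delta request.
active = {}
changelog = []
queue = [{"id": 1, "amount": 100}, {"id": 2, "amount": 50}]
activate(queue, active, changelog, request_id=1)
queue = [{"id": 1, "amount": 120}]              # delta: record 1 changes
activate(queue, active, changelog, request_id=2)
print(active[1]["amount"])                      # 120
```

Reversing the "after" images of a request against the stored "before" images is, conceptually, what a request rollback does.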
00:12:36 So, let's switch back to the Web IDE. There's another component in here;
00:12:42 it's called Database Explorer. So this is the view we provide you
00:12:46 to access the container we previously built, including all the related database objects.
00:12:52 So in this sense you'll find your DataStore object, and we'll have an admin UI in here.
00:12:58 If you click on "Manage", you'll find the UI where you see the request ID, the load status, and
also

00:13:05 if the request is available for reporting, and also if the request has been activated.
00:13:11 Maybe one step back, Axel? Just to make sure I have the right understanding.
00:13:15 So what you see here is really the result of the build process, right?
00:13:18 Before the first build, even if you start modeling
00:13:21 on the Web IDE side, but don't start the build process, all this doesn't exist?
00:13:24 Exactly. After the build process, this container exists
00:13:28 and all that stuff inside it? Yep. Exactly.
00:13:30 So this is the result of the build of the full project within the development perspective.
00:13:38 So this is the administration UI for the NDSOs, and you can run all the housekeeping capabilities,
00:13:44 you can roll back activities in here. So everything you're pretty much aware
00:13:48 of from the BW/4HANA environment. But this is a new environment for the SQL people.
00:13:53 But this is the UI we'll provide. Of course, as this is all based on tables,
00:13:58 you can also have a look at the tables available within this container.
00:14:02 So you'll see all the different container objects in here, of course also the tables
00:14:09 from the source data environment, the dimension table, and also from the data mart.
00:14:16 And you also see a couple of tables generated by the NDSO. Exactly.
00:14:19 So, because of request management, of course there are some hidden tables in there,
00:14:23 but they are all visible from the Database Explorer. So this is how the table looks,
00:14:28 and this is also the schema you need to leverage once you want to access this information
00:14:35 from a BW/4HANA perspective. Having said that, so, back to you, Ulrich.
00:14:40 So, let's switch over and see how we can consume this, these artifacts which Axel created
from the BW/4HANA side.
00:14:47 So I'm switching back to the BW/4HANA modeling tools, and I have prepared a data flow
which basically leverages
00:14:54 or loads classic SAP data on the left hand side, so that's the transaction part with sales order
data.
00:15:01 And we want to enhance the sales order data with customer master data coming
00:15:05 from the SQL Data Warehousing side. So what I've done is basically I've created
00:15:10 a source system here, based on a local HANA schema. We've seen this in a previous unit.
00:15:21 I chose this schema which Axel just showed, the schema which is actually generated
00:15:25 during the build process, when this container is actually turned into database artifacts.
00:15:33 So that's the source system from which we're going to load data, or from which
00:15:38 we're going to access data. On top of the source system,
00:15:41 I have created a data source here. This data source just picks one of the artifacts
00:15:45 out of the schema, here you see the schema name again, and I pick the NDSO, so that would
basically be an access
00:15:52 to the active table of the NDSO. We can actually also try to do a data preview here.
00:15:58 If we're lucky, we see some data, right. So that's now directly read from the NDSO active table.

00:16:05 It should actually be exactly the same data which Axel just showed on the Database Explorer side.
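What this data preview does essentially boils down to a plain SELECT against the NDSO's active data table inside the HDI-generated schema. As a hedged sketch of that access pattern — the table and column names are invented for illustration, and sqlite3 stands in here for SAP HANA, where you would use a HANA SQL client instead:

```python
import sqlite3

# Stand-in for the HDI-generated container schema; in HANA the active
# data table of an NDSO has a generated, system-specific name.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer_active (          -- hypothetical NDSO active table
        customer_id INTEGER PRIMARY KEY,
        name        TEXT,
        country     TEXT
    )
""")
conn.executemany(
    "INSERT INTO customer_active VALUES (?, ?, ?)",
    [(1, "ACME Corp", "DE"), (2, "Globex", "US")],
)

# The DataSource's "data preview" is essentially a plain SELECT
# against the active table -- no replication involved.
rows = conn.execute(
    "SELECT customer_id, name, country FROM customer_active ORDER BY customer_id"
).fetchall()
print(rows)  # [(1, 'ACME Corp', 'DE'), (2, 'Globex', 'US')]
```

Because the DataSource reads this table directly, the data does not have to be replicated into BW first; a virtual consumption via an Open ODS view ultimately relies on roughly this access pattern.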
00:16:10 And of course we can continue to build on top of this data source.
00:16:14 For example, putting the data flow on top. If we want to load data, we could actually load this
00:16:23 into an InfoObject, we could load this into an NDSO with a transformation and a DTP.
00:16:27 Or we could just put an Open ODS view on top, which I've done here.

00:16:31 If you look at that, we've put an Open ODS view on top of that data source, and thereby
00:16:40 can basically access all the data of the native datastore object on the SQL side.
00:16:48 And we can now of course bring this together with the sales orders in the CompositeProvider
00:16:52 by creating an appropriate association. And then we have all the master data
00:16:58 from the SQL side available in BW without actually having to replicate it again, right?
00:17:03 Loading it would be another option, but in this case if we have this tight integration, all the data

00:17:08 is ready for consumption on the SQL side, why would we load it again;
00:17:11 we can just directly consume it. Yep, that's it from my part of the demo,
00:17:17 so let's switch back to the presentation. What have we learned in this unit?
00:17:22 We've basically given you an impression of the interoperability between SAP's SQL Data
Warehousing
00:17:29 and SAP BW/4HANA, and how these two things can be combined on the same platform.
00:17:35 We've shown you a demo where you got an impression of the capabilities and the look and
feel
00:17:40 of the SQL Data Warehousing side, with the Web IDE, and the other tools.
00:17:46 And we've shown you how you can integrate this from the BW/4HANA side.
00:17:50 And with that, you're ready for the self test.

Week 2 Unit 6

00:00:08 Hello, and welcome to week two, unit six, Remarks on Architecture.
00:00:14 So, basically, over the last two weeks, we've seen a lot of different technologies,
00:00:19 we've started with the kind of classic modeling way of BW/4HANA which is derived from the
old way in BW,
00:00:27 based on InfoObjects, and the modernized world of advanced DataStore objects and CompositeProviders.
00:00:32 We've seen all the HANA technologies and how you can incorporate them into BW/4HANA,
00:00:36 we've seen how to work with fields, how to enrich and access data, and how to use a lot of virtualization.
00:00:41 In this unit, we basically want to show you how to put these technologies to use,
00:00:45 how to make sense out of this, and then how to put an architecture framework around this
00:00:51 to get an understanding of how to use these technologies in the right way and then what the
appropriate combinations
00:00:57 of technologies are that make sense. A second question which we'll deal with,
00:01:02 is what is an appropriate reference architecture for SAP BW/4HANA?
00:01:06 You know probably from the past, that we have always propagated a certain architecture
00:01:11 called "layered scalable architecture" for classic BW. What's the corresponding blueprint in
case of BW/4HANA?
00:01:21 And we'll also want to focus a little bit on a topic which we also dealt with in the units
00:01:28 about SAP SQL Data Warehousing, namely that technology and architecture are one thing,
00:01:33 but the development processes are an important aspect of an overall solution as well.
00:01:39 Let's maybe do the review of the classic architecture and then see how we move on.
00:01:45 I will start with the classic LSA architecture. LSA stands for "layered scalable architecture",
00:01:51 and it's more or less the classic way to model an enterprise data warehouse approach.
00:01:56 In the end, this is a top-down modeling approach to fulfill all the business requirements.
00:02:01 The characteristics of this classic layered scalable architecture approach
00:02:06 are that we have stacked layers, we have a high level of data persistency in the system,
00:02:13 and we load data from one layer to the other layer. It's top-down driven,
00:02:19 it's a company-wide approach to, in the end, combine different source systems
00:02:24 into one enterprise data warehouse. Usually, this kind of classic layered scalable architecture
architecture
00:02:32 provides a high level of governance for the company, and in the end it's like a blueprint,
00:02:40 you can copy that blueprint for other scenarios as well. Basically, all scenarios follow the same
standard,
00:02:44 the same approach, and each of the layers, of course, has certain clear service levels or
requirements
00:02:50 associated with it. Now the question is do we have an evolution of the LSA?
00:02:55 Right, and of course, that's also something which is not completely new, we've propagated
this,
00:02:59 for a couple of years already, we have something which we call LSA++
00:03:03 the next step of evolution in LSA. Besides the top-down modeling approach,
00:03:09 which we take over from LSA, we also added the bottom-up modeling approach here,
00:03:14 that's basically when you think of what we did in previous units this week,
00:03:20 what you get when you think of starting to work with data, looking at the sources and then
evolve the data model,
00:03:26 right, such approaches are also possible with BW/4HANA, so we put this bottom-up approach
for agile development,

00:03:33 next to the top-down approach which is for scenarios where high governance is needed
00:03:39 or big scenarios where you need this sort of governance. LSA++ puts a strong focus on
virtualization,
00:03:45 so basically tries to avoid persisting data, just for its own sake, maybe just because
00:03:51 a blueprint says so, but it really puts an effort on only materializing and persisting data
00:03:56 when there's really a good business justification for it, a requirement which really needs
persisted data
00:04:03 for certain things. That also means that LSA++ really propagates
00:04:08 using all these HANA technologies which we've seen in this week, right, using calculation
views
00:04:15 as part of your architecture, all that kind of stuff. The only layer if you look at what we have
here,
00:04:21 the only layer that's really mandatory are the source layer, of course,
00:04:25 you need source data somewhere, and the virtualization layer, so the Virtual Data Marts, the
composite provider basically,
00:04:30 and to a certain extent Open ODS views, those are the layers which are really, really
mandatory,
00:04:37 which we always need, then you probably put your queries on top,
00:04:41 but everything else is to a certain extent optional, right. You only build it and load it,
00:04:46 if you really have a corresponding requirement. So what LSA++ basically means is,
00:04:54 depending on what kind of solution we want to build, you actually pick and choose the layers,
00:04:59 which are relevant for the solution. Now, if you look at this from an EDW perspective,
00:05:06 from a classic EDW perspective with an Enterprise-wide Data Model,
00:05:10 then of course, many of these optional layers will be relevant because you have to harmonize
data,
00:05:14 you probably need to keep the history of data, so a raw data warehouse or Open ODS layer will be relevant,
00:05:24 an integrated data warehouse layer will be relevant, if you have to harmonize data from
various sources.
00:05:29 That basically means in this situation, an LSA++ implementation,
00:05:33 will look pretty much like a classic LSA implementation. You will probably not
00:05:37 have the architected data marts anymore, because the speed of HANA makes these obsolete
00:05:41 in many cases, but other than that, you will probably have very similar layers.
00:05:47 The question is what's the benefit of using BW/4HANA, as opposed to classic BW in these
cases?
00:05:53 Of course, there are benefits, and I want to mention two main benefits here,
00:05:58 the first thing is you've seen how we worked through the data modeling process with
BW/4HANA,
00:06:04 we've seen quite a lot of demos, and that's why we put this strong focus on demos in week
one and week two as well.
00:06:11 You've seen that we have a completely new set of tools, which allows you to work much faster and much more efficiently
and much more efficient
00:06:16 in a much more streamlined way than in the past, so building data models with BW/4HANA
00:06:22 is a much more streamlined and much more efficient process than it used to be in the past with
classic BW.
00:06:27 That's one aspect so it's the tool aspect. The other aspect is that we have this clear separation

00:06:33 between the persistence and the virtual data modeling of your composite provider where you
actually assemble,

00:06:38 remember that from week one, where you actually assemble the star schema out of facts
00:06:43 and the master data, right, and this also gives you much more flexibility
00:06:47 in combining facts with master data, or combining facts with facts,
00:06:50 remember the join and union capabilities which you have in the composite provider,
00:06:54 so that's also a step ahead, because it basically means that what you create
00:06:58 on top as a virtual data mart which you expose to the end user is something which is much
easier to change than in the past,
00:07:04 because it's not a persistent object but a virtual object, so here you see that even in these kind

00:07:09 of classic EDW scenarios, you benefit from BW/4HANA technologies.


00:07:15 On the other hand, if you don't have a classic EDW project, but maybe just a smaller project for one group
00:07:25 of business users, for one department, something which is not as big,
00:07:29 then of course, you can also benefit
00:07:31 from the bottom-up modeling approaches which we've seen. When you don't have data harmonization issues
00:07:36 because you just have a couple of smaller data sets, where harmonization is not such a big
deal,
00:07:42 then you can really start working in completely different ways,
00:07:44 and build your solutions much, much faster, and that's the point about LSA++
00:07:48 from my perspective, it's not so much a blueprint, as LSA was, but it's actually guidance,
00:07:54 a guidance to the right solution, right, it basically tells you how to think
00:07:59 or how to organize your solution, think about what your needs are,
00:08:03 and then basically come up with a blueprint for this individual use case, right,
00:08:07 so EDW use cases would look very much like LSA in the past, if you have something which is
much, much simpler,
00:08:13 where you just need a data snapshot once in a while, and you associate some master data,
00:08:17 then you can basically work with, say, a staging layer or corporate memory, the same for
master data,
00:08:22 assemble things and you're done, or you access the data virtually,
00:08:26 so that's the real point about it, it's basically a framework which guides you to pick
00:08:32 and choose the right layers for your solution. Okay, and now the question is what is the right
approach
00:08:40 of bringing technology, architecture, and processes together?
00:08:43 So you heard over the last couple of weeks that we have more than one ingredient here.
00:08:48 In the end, we identified three ingredients
00:08:53 for that kind of architecture. The first is technology. What is technology?
00:08:58 Technology in our case is BW/4HANA, or the native HANA approach and of course,
00:09:06 the mixed modeling to combine both worlds, we have the technology approach here,
00:09:11 and we have the architecture approach here, this means is it more an Enterprise Data
Warehouse approach
00:09:16 or is it more an agile approach driven by the business users? And last but not least, we have the process approach here:
00:09:27 is it IT driven, or is it business driven? Are the requirements
00:09:36 coming more and more from the business, or does IT more or less own the requirements?
00:09:42 And you can also say is it something which requires a lot of governance then you probably
00:09:46 need a different process than if it's something, which is on a smaller level, on a smaller scope,

00:09:51 for one department, then you can probably live with a much, much more lightweight
00:09:55 and agile process than in other cases. All these three ingredients together
00:10:00 are very important, and have to be adjusted accordingly to basically
00:10:04 be able to build a good solution. We have some examples or some ideas on the left-hand side,
00:10:09 what things work together and what things don't. And the idea is to find the best ingredients
00:10:15 for your company, so this means when you have an agile architecture then you of course,
00:10:20 need an agile development cycle, otherwise, this architecture makes definitely no sense.
00:10:25 Then technology and architecture wouldn't help you, even if you get everything perfect here,
00:10:29 if you have delivery cycles of half a year, then agility is dead, right. That's part of the idea here.

00:10:35 Now let's have a look at a categorization of what we see in our customer base
00:10:42 in BW/4HANA: what kinds of solutions customers build using our technologies.
00:10:48 The left one, the simplified EDW, is basically what I described
00:10:52 when we had the LSA++ picture on the screen. Remember, I basically described how you

00:11:00 benefit from BW/4HANA technology, and the new object world if you
00:11:07 want to build a classic data warehouse, so, that's of course, the kind of mainstream
00:11:12 of what our customers are doing, the vast majority of our customers are still building,
00:11:17 using a top-down approach, they are basically using InfoObjects in a heavy way, from business content
00:11:24 or custom built, and they basically build classic enterprise data warehouses,
00:11:32 where the main benefits are basically in the two areas which I described,
00:11:36 it's the streamlined way of working with objects and the higher flexibility which you have
00:11:41 from virtualization. Then we have a group of customers,
00:11:45 and what you see on the lower blue bar at the bottom here
00:11:54 is that this can basically be achieved mainly by using the technologies which you get out
00:12:00 of the box with BW/4HANA, using BW/4HANA modeling objects,
00:12:05 advanced DataStore objects, InfoObjects, CompositeProviders, transformations, DTPs,
00:12:09 that's basically all you need for that, and you will have a much, much better data warehouse
solution, than you did in the past.
00:12:16 We have also a very big group of customers, who build something which is still close
00:12:23 to an Enterprise Data Warehouse, but much more flexible in the sense
00:12:27 that they start working in more virtualized ways, so they, for example, incorporate
00:12:34 more technologies from the HANA side, they bring in calculation views for certain areas,
00:12:37 we had these examples and categorizations of mixed scenarios in one of the units,
00:12:43 where we said, for example, that doing transformations on the fly can be done
00:12:47 using a calculation view, then you put a CompositeProvider on top,
00:12:51 all that kind of stuff or using such calculation views for transformations, all that basically
00:12:57 gives you additional possibilities, to make your solution more flexible,
00:13:01 and maybe remove one of the other layers in your persistence architecture,
00:13:04 and then you'll basically do the next step, using more HANA technology,
00:13:08 in addition to the BW/4HANA technology, so we move a little bit to the right,
00:13:12 from a technology perspective as well, maybe you use a little bit of bottom-up modeling
00:13:17 next to top-down modeling, but it's still pretty much a waterfall project,
00:13:22 where you do an upfront design, you gather the requirements,

00:13:28 and then you start the implementation. Now, let's go to the far right here,
00:13:32 we actually have a couple of customers, who do something completely different with
BW/4HANA,
00:13:36 and that's very, very interesting, it's... I would say a niche, it's not a big group of customers,
00:13:42 but the interesting thing is, it's something which you would never have thought possible,
00:13:46 and you would not have associated BW in the past, with such a solution, you would never
00:13:51 have come to the idea to use BW technology for such solutions, you would've
00:13:54 used something completely different, with BW/4HANA, this is possible,
00:13:59 with the same technology which you have for the kind of classic Enterprise Data Warehouse
00:14:04 solutions which we can also build, and that's something which is,
00:14:08 we took here the term relational data lake, it's basically the idea that these customers,
00:14:14 just take data from all the sources wherever it comes from, for example, using SLT to replicate
data
00:14:20 from an S/4 system, using OData to get data from SuccessFactors,
00:14:25 whatever you have using an ETL tool to bring data in, put all the data into some HANA
schemas, right,
00:14:31 maybe just a single HANA schema, maybe one HANA schema per source, whatever,
00:14:35 and then they start building from the bottom-up to an actual solution, for example,
00:14:41 combining the source tables which are potentially highly normalized,
00:14:46 using HANA calculation views, building some logic into these calculation views,
00:14:53 and once they have reached the layer which they can consume, they actually use the
technologies of BW/4HANA on top,
00:14:59 so they use CompositeProviders and Open ODS views: they basically
00:15:04 start to model the individual dimensions, for example, using Open ODS views,
00:15:09 they use CompositeProviders to assemble star schemas, and then they put queries,
authorizations,
00:15:15 and all that stuff on top, and basically consume the data via an analytic front end,
00:15:20 that of course, uses HANA technologies to a much, much heavier extent, right,
00:15:26 basically all the data is only stored in HANA, consumption is done on BW side,
00:15:31 a couple of these customers basically tell us, what they like about this solution is that they
have all the capabilities
00:15:36 of the query designer, basically using all the modeling capabilities you have there,
00:15:41 they have the authorizations modeled on the BW/4HANA side,
00:15:45 which is also something they are used to, and which is very relevant and easy,
00:15:51 so they basically combine these two worlds, but that's not the only reason it's in many cases,
00:15:56 when you talk to these customers, they know that this is only the first step for them,
00:16:01 so the reason why they are using BW/4HANA for these technologies is not because that's the
end state,
00:16:05 but it's actually just the first step toward something which might potentially evolve over time,
00:16:11 more and more into the real data warehouse, so they realized that this was a solution,
00:16:15 which they could quickly build, and hand over to business user,
00:16:20 maybe they even built this together with a business user, right, looking at the data,
00:16:23 adjusting the calculation views, consuming on the BW side, adjusting the query, and having a look at the data,
00:16:29 and maybe giving an 80 percent solution to the business user right away,
00:16:32 after a couple of hours, maybe a couple of days, but they also realize that at some point in
time maybe,
00:16:39 if data volume is growing or if they have additional requirements, if they

00:16:42 need to take snapshots, then they have all the, kind of more heavyweight data warehouse
00:16:48 or EDW functionality from BW already in place, and can start leveraging this, right,
00:16:53 so they can, for example, use the results of the artifacts that they built
00:17:01 on the HANA side, and really store them into aDSOs, and then continue
00:17:05 to build a CompositeProvider on top later on, right, so that's a way, for example, to improve
performance,
00:17:09 at a certain point in time, maybe to, yeah, performance is one aspect,
00:17:18 maybe to take snapshots of data, all that kind of stuff, and then gradually evolve the solution
00:17:23 from something which is very simple, kind of close to real-time,
00:17:27 into something which is more of a data warehouse, so this is really a very nice example
00:17:31 of customers who are starting to work bottom-up, looking at the data, and then
00:17:36 evolving the solution over time. And you see that from a process perspective, of course,
00:17:42 that's basically what Gordon described in the last slide, you also take a different approach
here,
00:17:47 if you want to use these technologies and combine these technologies in the way I just
described,
00:17:51 you would never use a waterfall project, right, you would basically, ideally, sit down with an
expert
00:17:58 from business, look at the data, and discuss what's good, what parts of the data are good,
00:18:03 what do I have to change, and then do a next iteration, and the next iteration, and already after
each iteration,
00:18:08 have something to hand over to business, so that they can at least start working and don't
have
00:18:12 to wait until the final solution's actually ready. Yeah, so, well, let's summarize.
00:18:19 Let's summarize why the question of finding the right architecture
00:18:25 for your BW/4HANA is not so easy to answer. The point here is that we have more than one ingredient
00:18:33 to build the right or the best combination of architecture, technology, and processes,
00:18:38 you described it already, what are the benefits of that kind of architecture
00:18:43 or that kind of technology, for some customers, the classic BW approach
00:18:49 with the LSA architecture or the LSA++ architecture, with different layers, persisting all the
data,
00:18:56 may fit best, but for others may not, that's why we have these different ingredients,
00:19:02 and now the question is, what is the best mix, what fits best?
00:19:06 But the point is exactly for that kind of different customer needs,
00:19:11 and different customers, and different situations, we offer a variety of technologies, processes,

00:19:21 and architecture types, and that's from my point of view, the beauty of BW/4HANA, you can
combine different things
00:19:28 and find the right and best architecture for your business. And with this, we hand over to the
weekly assignment.

www.sap.com/contactsap

© 2018 SAP SE or an SAP affiliate company. All rights reserved.


No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP SE or an SAP affiliate company.

The information contained herein may be changed without prior notice. Some software products marketed by SAP SE and its distributors contain proprietary software components of other software vendors.
National product specifications may vary.

These materials are provided by SAP SE or an SAP affiliate company for informational purposes only, without representation or warranty of any kind, and SAP or its affiliated companies shall not be liable
for errors or omissions with respect to the materials. The only warranties for SAP or SAP affiliate company products and services are those that are set forth in the express warranty statements
accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.

In particular, SAP SE or its affiliated companies have no obligation to pursue any course of business outlined in this document or any related presentation, or to develop or release any functionality
mentioned therein. This document, or any related presentation, and SAP SE’s or its affiliated companies’ strategy and possible future developments, products, and/or platform directions and functionality are
all subject to change and may be changed by SAP SE or its affiliated companies at any time for any reason without notice. The information in this document is not a commitment, promise, or legal obligation
to deliver any material, code, or functionality. All forward-looking statements are subject to various risks and uncertainties that could cause actual results to differ materially from e xpectations. Readers are
cautioned not to place undue reliance on these forward-looking statements, and they should not be relied upon in making purchasing decisions.

SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE (or an SAP affiliate company) in Germany and other
countries. All other product and service names mentioned are the trademarks of their respective companies. See http://www.sap.com/corporate-en/legal/copyright/index.epx for additional trademark
information and notices.
