
1

As we start this session, please keep handy your 3-ring binder, which should have the
checklists at the front of it and the presentations after. Please keep good notes in it.

2
My name is Jeremiah Curtis and I've been with Infor for about 13 years. I work in the
Engineering Services group and we install and upgrade SXE and other associated
products. Engineering Services is part of the Infor Consulting Service team.

3
As we go through the system admin class, I encourage you to ask questions. If you
want to hold them until the end, that is fine, too.

We'll first go over some concepts regarding how the database works. After we get
through some of that theoretical discussion, we'll get into the specific details of how
to work in and manage your environment. We'll review your servers and database.
We'll also try to crash it! And do some crash recovery. Of course, if you are live with
SXE, we'll skip the crashing part of it.

4
5
It is important to understand what Infor expects from the Systems Administrator.
Infor offers classes like this to teach Sys Admins how to administer the SX.enterprise
database and associated environment.
The following is to help you understand what types of things Infor expects the Sys
Admin to know how to do.
Obviously Infor is here to help, and if you are having trouble with any of the following
procedures, simply log a call with support for assistance.

6
Disconnecting users - This should be done by way of the shutuser script. This is a
perfect example of something that Infor expects the Sys Admin to know how to do,
and to be able to perform without having to call support every time a
user needs to be disconnected from the database.
Verify Backup Logs - You should check the backup logs every morning (a quick
morning-check sketch follows this list).
Monitor/maintain extent structure (dbstats.log) - We will look more at database
extents later in this session.
Purging Database log files - If you have more than 200 users on your system, you
should look at purging your database log files once in a while. This will also be done
when you do the dump and load on your database, which is done once a year.
Promon - The Progress Monitor, or promon, is a very useful tool, and you should
know how to at least get into and move around in promon.
Maintaining Scripts - If you edit the standard Infor scripts, then you need to support
and maintain those scripts. Infor only supports our standard scripts.
Maintain .pf files - Sometimes you will be asked to modify Progress parameters; this
is something you need to know how to do.
Maintain library file - You may be called upon to compile and library a piece of code;
this is well documented, and the document is available from our web site.
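
To make a couple of these concrete, a quick morning check might look like the
following (a minimal sketch only; the /db/nxt and /backup paths are assumptions
drawn from examples later in this session, and your locations may differ):
# ls -l /backup
# tail -100 /db/nxt.lg
The first command confirms last night's backup files are present and freshly dated;
the second scans the most recent database log entries for errors from the broker.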

7
Brokers start & stop - Starting and stopping the database broker in the event of a
crash is something a Sys Admin should know how to do.
Removing .lk files - As a part of crash recovery, you may be required to remove the
database lock file. It is important to know when it is safe to do so, and how to do this
if necessary.
Clearing shared memory - As a part of crash recovery, you may be required to clear
the shared memory. It is important to know when it is safe to do so, and how to do
this if necessary (see the sketch after this list).
Disconnecting users - If you have a power outage and your database server is still
running on battery power, it is important to know how to clean up all of the user
processes that are still on the system even though the users are no longer connected,
due to the power failure.
Truncating BI files - It is important to know how to truncate your before-image file
after a crash.
Error messages in log files - It is important to know where and how to look at the
errors in the various log files associated with the SX.enterprise application.
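
On Unix, the lock file and shared memory cleanup often look something like this (a
minimal sketch, assuming a database at /db/nxt; as the crash recovery section
explains, only do this once you are certain the broker is truly down):
# ps -ef | grep mpro
# rm /db/nxt.lk
# ipcs -m
# ipcrm -m <shmid>
The ps command confirms no _mprosrv process is left; ipcs -m lists the shared
memory segments, and ipcrm removes an orphaned segment by the id shown (fill in
the <shmid> placeholder from the ipcs output).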

8
Some work, like upgrades, applying patches, and dump/loads, is considered
schedulable and billable work.
Infor recommends that you call ahead when doing these types of procedures and
schedule a tech to be available to assist you should anything go wrong with the
procedure.
Infor does offer 7x24 support, but the 7x24 or after-hours support is not required to
assist with these types of procedures, and will simply help you restore back to where
you started.
Scheduling a tech for a prearranged procedure can save your company a lot of
time and money should something go wrong during that procedure.

9
Infor has a partnership with Progress. Infor is one of Progress's largest
customers.
You'll often hear Progress and OpenEdge used interchangeably. To be clear,
OpenEdge is the name of the product and Progress is the name of the company.

Infor recommends that you call Infor Support for assistance with your Progress
database, instead of calling Progress Support.

10
It is important to note that OpenEdge is not an operating system.
OpenEdge is application software that runs on top of various different operating
system platforms.
OpenEdge is both a database engine and a programming language.
SX.enterprise is written in the Progress 4GL.
4GLs are designed to reduce programming effort, the time it takes to develop
software, and the cost of software development.

As we use both the Progress OpenEdge database engine and the OpenEdge code
engine, compatibility issues between the code and the database are non-existent.

11
The database consists of 4 primary physical file types.
It is important to understand that when we say physical elements of the database,
we mean the files located on disk. This is because the Progress database engine
utilizes a lot of processes and memory, and in the event of a crash, the files on disk
will be the only elements that are left. (More on this in the crash recovery section.)
The four elements include (a sample disk listing follows this list):
.db - the database file that contains the data; this includes all data files for the
database that might not have a .db extension.
.bi - the before-image file; this file is used for transaction logging and is a critical piece
of the crash recovery mechanism.
.lg - the database log file; this file is very important for troubleshooting.
.lk - the database lock file; this is the only physical piece of the database broker, and
should only exist when the database broker is up and running.
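
On disk, for a database named nxt in /db (names assumed from the examples used
throughout this session), these pieces might list out like this while the broker is up:
# ls /db
nxt.bi  nxt.db  nxt.lg  nxt.lk
A multi-volume database will also show additional data extents that belong to the
.db element but carry other extensions.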

12
This is our database diagram, and we will be adding to this screen as we go along.
The .db, .bi, and .lg are represented by a cylindrical icon. This icon will be utilized to
represent physical elements of the database. Representing these elements as
separate objects does not imply that they are to be located on separate
disks/spindles. In fact the .db and .lg will always be located in the same directory.

13
The primary goal of a database broker is to allow multi-user access to a database.
The database broker consists of 3 primary elements:
The database broker process - _mprosrv
The Shared memory segment
The database lock file - .lk (note the database lock file should only exist when the
database broker is running)

14
15
When a database broker is started against the Progress database, the first thing that
happens is a database broker process gets spawned.
That database broker process is _mprosrv
Note: The process may be seen by running the following command:
Unix:
# ps -ef | grep mpro
Windows:
By looking in Task Manager (on newer Windows versions, tasklist | findstr mprosrv
at a command prompt also works).
This process attempts to allocate a pool of shared memory.

16
Once the _mprosrv process creates the shared memory area, it then allocates
portions of this shared memory into various structures or memory tables.
One of the shared memory structures that we will be looking at is the User Table,
which contains the User ID, TTY, and Process ID of the users that are logged into the
database.
One of the other shared memory structures that we will be discussing is the Lock
Table.

17
After the _mprosrv process has allocated the shared memory and built within it the
necessary shared memory structures, it creates a .lk or database lock file.
Note: This is the only physical part of the database broker, and should only exist if
the database broker is up and running.

The three parts of the database broker are:
1.) Database broker process
2.) Shared memory
3.) Database lock file

18
Now that the database broker is running, users may start to log into the database.
For this discussion we will be looking at a Direct Connect Client. This discussion does,
however, fully apply to both a Direct Connect Client and a Remote Client. The only
difference between a Direct Connect Client and a Remote Client is the actual
connection to the shared memory. After this discussion we will take a look at how
the Remote Clients connect to the database.

A direct connect client is a CHUI user, a Report Manager or Report Scheduler process,
or any process that connects directly from the DB server.

19
When the client User 1 attempts to connect to the database, the process that gets
created is the _progres process.
This can be seen on your system by running the following command:
Unix:
# ps -ef | grep _progres
Windows:
By using Task Manager (or, on newer Windows versions, tasklist | findstr _progres).

That _progres process talks to the database broker process (_mprosrv).

20
The database broker acknowledges the user and logs them into the user table in
shared memory.

21
Here we can see that the database broker has logged User 1 into the user table in
memory.
You can also see that the user table is storing the User ID, TTY, and PID for each user
that logs into the database.

22
The last thing that happens during the user login process is that an entry for the user
gets written to the database log file.
Example:
07:19:23 Usr 10: Login by User1 on /dev/pts/16. (452)

The database log file is a very useful file for troubleshooting. It keeps track of when
users log into and out of the database, and any errors that the database broker
receives.
Note: If your company has more than 200 users you may have to keep an eye on the
growth of your database log file. An entry is written to this file each time a user logs
into or out of the database, so it may grow quite large.
Let's review where this file is and put it in our checklist.
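
A quick way to keep an eye on this file (a sketch, assuming the database log is
/db/nxt.lg):
# ls -lh /db/nxt.lg
# tail -f /db/nxt.lg
The first command shows how large the file has grown; the second watches logins,
logouts, and errors as they are written.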

23
The following data flow discussion will help you understand what is going on behind
the scenes. With this understanding, you will better know what potential data loss, if
any, may result in the event of a crash.
What you are about to see is quite a complex picture. There are a lot of things
happening behind the scenes.
All of these pieces are in place for 2 reasons:
1.) speed
2.) data integrity
The bulk of this process happens in memory, so it is very fast. This process is also
designed so that should anything go wrong, your data in the database is not
compromised.

Note: If nothing else is learned from the following data flow discussion, it is
important to at least know the following:
1.) Users never get to see, change, delete, or work with the data in the database in
any way.
2.) Users only ever get to work with data in memory.

24
As we can see here the record ABC is located in the database.

25
When a user process wants to work with a particular record, it first scans memory to
find out if the record is already in memory. If that record is not found in memory
then it sends a request to the database broker to pull the record from the database
on disk to the buffer pool in shared memory.

26
Once the record is in memory all of the users have access to that data and can begin
reading and possibly changing this data or record.

27
Now that we have our User1 logged in to the database, and the record has been
pulled into memory, let's see how this record gets updated.
During this discussion we will also be talking about the lock table in memory.

28
The first thing that happens when a user wants to update a record is that an entry is
logged into the lock table in memory. This is done so that no two users can be
updating the same record at the exact same time. This is just one of the ways that
Progress provides data integrity for the data in the database.

29
The changes made to the record occur in memory, which again is very fast. Then
once the change is complete the lock goes away, and other users are now able to
change/update the records once again.

30
Everything to this point has occurred in memory, so it is very fast.
However, since memory is volatile, we need to get this information to disk. This
information however does not get updated to the database.
This is where the before-image file comes into play.
When we talk about the before-image file, there are 2 parts to consider.
1.) the before-image or .bi file itself
2.) the before-image writer or BIW

31
Here we can see the .bi file expanded out. The .bi file is made up of clusters which
are made up of blocks.

32
The process that is responsible for writing the information from memory to the .bi file
is the BIW or before-image writer.
Since there was a change in memory, we need that information written to the .bi file.
So the biw wakes up and writes out the changed memory blocks to the .bi file.

33
However, the biw does not simply write the new value. The biw first writes a begin
transaction note or bt note.

34
Then the biw writes a note describing a single change. (shown here as c -> d).

35
Then finally the biw writes an end transaction note or et note.
This gives us the full scope of the transaction. The scope of a transaction is a
begin note, all of the notes describing the changes in the transaction, and then an
end note. This is so that Progress knows all of the changes necessary to reproduce a
transaction. This is important when we look at crash recovery a bit later.
Note: The .bi file is a sequentially written file. All of the changes that occur to the
database are written out sequentially to the .bi file or transaction log.

36
Let's review what we have so far...

37
The data started in the database.
The user scanned memory for the record and did not find it.
The user then sent a request to the database broker or _mprosrv process.
The database broker wrote the record from the database on disk into memory.
The user could then see the record and begin working with that record.
The user changed that record and those changes occurred in memory.
The BIW then woke up and wrote the changes to the before-image file.
The changes were recorded in the .bi file by the biw in the form of begin-transaction,
a note describing the change, and an end transaction. This is to maintain the full
scope of the transaction.

38
This is where the after-image file comes into play.
The after-image file is an optional piece to the Progress database. You do not
necessarily have after-imaging running on your system.

39
Here we have the after-image or .ai file. It is shown here as a cylinder because it too
is a physical part of the database. I have also shown it over on the right-hand side,
because the .ai file must be located on a separate spindle (disk) from the database and
before-image files.

40
The AIW process is responsible for writing the information from memory to the .ai
file on disk.
The information that gets written to the .ai file is identical to the information that was
written to the .bi file. The difference is that once the information has been
eventually written to the database from memory, the space in the .bi file is marked
for reuse. The .ai file is not marked for reuse and is a consistent log of all of the
transactions that have occurred to the database since the last backup.
So, if we lose the database or .bi file, we simply restore the database and, using
Progress utilities, roll forward all of the changes that have occurred to the database
since that last backup. This gets the database back to the state it was in just before
the crash.
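
The restore and roll-forward use the prorest and rfutil utilities; a minimal sketch,
assuming /db/nxt, a backup at /backup/nxt.bak, and an after-image extent at an
assumed path:
# prorest /db/nxt /backup/nxt.bak
# rfutil /db/nxt -C roll forward -a /ai/nxt.a1
With multiple after-image extents, rfutil is run once per extent, in the order the
extents were filled.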

41
The Asynchronous Page Writer or APW is the process that eventually writes the
information from memory back to the database.

42
So as we see here, it is the APW process that gets the job of writing the information
from memory back to the database on disk. We can also see that the database finally
has the new value for the record that was changed by User1.
Up to this point I kept saying that the information eventually gets written to the
database. This is absolutely correct. This is because, once the information has been
written out to the before-image file, it is preserved on disk. We do not care when the
information gets written out of memory back to the database, because if anything
happens to memory, the information is already on disk. Thus it is not important to
write the information from memory back to the database right away.
So when is "eventually"? We usually tune the database so that it synchronizes memory
to disk approximately every 5 minutes. We'll take a look at this in more detail when
we visit the performance tuning section of the class.

43
So, let's take a look at the whole picture in a final review...

44
The data started in the database.
The user scanned memory for the record and did not find it.
The user then sent a request to the database broker or _mprosrv process.
The database broker wrote the record from the database on disk into memory.
The user could then see the record and begin working with that record.
The user changed that record and those changes occurred in memory.
The BIW then woke up and wrote the changes to the before-image file.
The changes were recorded in the .bi file by the biw in the form of begin-transaction,
a note describing the change, and an end transaction. This is to maintain the full
scope of the transaction.
Then the AIW process writes the same information that was written to the .bi file to
the after-image file.
And finally the changes that occurred in memory get written out to the database by
the APW process.

45
46
Everything we have looked at to this point has dealt with a direct connect client. As I
noted earlier, the only difference between a Direct Connect Client and a Remote
Client is how they connect to the database.
Now we will look at how a Remote Client connects to the database.
We will also look at the Progress appserver.

47
The client in this slide is a remote client. The prowin32 process is the PC
equivalent of the _progres client process.
When User1 double-clicks on the SX.enterprise icon on their desktop, it launches
the prowin32 process. This process then connects through the network to the
database broker. The database broker then logs the user into the virtual system
tables in memory.

48
Once the broker has logged the user into the virtual tables in memory, it then spawns
an auto-server for that user. The database broker then tells the user to disconnect
from the database broker and reconnect, through the network, to the auto-server
that was spawned for them.
The auto-server layer is important to the graphical client because the prowin32
process runs on a PC and cannot talk to Unix memory directly. Therefore the prowin32
process sends its requests to the auto-server. The auto-server then talks to the Unix
shared memory and relays the information back, across the network, to the
connected user.

49
The users are also connected to shared memory through a Progress appserver.
This gives each graphical client 2 connections to the database.
Note: The auto-servers have a one-to-many relationship with clients, while the
appserver has a one-to-one relationship with its client.

50
This usually brings up the question of what the Progress appserver is and why we
have it.
Let's take a look at how the Progress appserver works...

51
Typically the appserver is running on the system where the database is located. For
the purpose of this discussion we have moved the appserver to its own box separate
from the database server. The client in this example is an SX.enterprise client
connected to the appserver and connected to an auto-server on the database system.

52
53
Example 1: User1 requests an address for a specific customer. Since this is a small
and static amount of data, this request goes to the auto-server on the Database
Server. This is a small amount of data going across your network.

54
55
Example 2: User1 requests a specific price, for a specific product, for a specific
customer. Since this request requires business logic to determine what the price
for this product will be, this type of request goes to the appserver.

56
The appserver then takes over the request and pulls across all of the necessary data
needed to determine the price for the appropriate product. Then once the appserver
has determined the price,

57
it sends just the price back to User1 across the network. This again is a small amount
of data going across your network.
As you can see, if the appserver is running on a system other than the database
system, you need a large network pipe between the two systems, and it should
be on a dedicated segment of the network, separate from the rest of your
network.

58
As we saw, each graphical user has 2 connections to the database: one through the
auto-server, and one through an appserver.
Smaller requests get routed to the auto-server.
Larger requests that require business logic go to the appserver.
Infor has written the client code to determine which requests use the appserver and
which requests do not.
Infor requires all users to be running on the appserver.
The appserver system must be located on the same Local Area Network or LAN as the
database server.
You will see a significant performance degradation if users are running SX.enterprise
graphical across a Wide Area Network or WAN and not using the Progress appserver.
If you choose to move the appserver to a system other than the database server, you
must have the largest network connection possible between the two systems, and
they should be on a dedicated segment of the network. We typically do this for
customers with 1500 users or more.

59
The architecture at first glance can be a bit complex; let's try to simplify it a bit.

Before users can start connecting to the database, a lot of things have to happen.
There are several pieces to the architecture, and this section will explain how they all
work together.

60
We start off with the database.

61
Admin Server: Before any Progress brokers or servers may be started, we have to
start the Admin Server. The admin server is the key to the OpenEdge architecture. It
is similar to the services on a Windows NT system. It is a service that you start up,
and it handles the starting and stopping of the other Progress brokers and servers.
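
On Unix, the Admin Server is typically controlled with the proadsv utility; a minimal
sketch:
# proadsv -start
# proadsv -query
# proadsv -stop
The -query option is a handy way to confirm the Admin Server is running before you
try to start anything through it.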

62
Name Server: The Progress nameserver is similar to a network name server or DNS
that you may be familiar with. It is responsible for directing Progress clients to the
available application brokers. The nameserver is started automatically by the Admin
Server.

63
Database Broker: The database broker can then be started. A request gets sent to
the Admin Server to start the database broker against the specified database. Once
that is done, we can see the process running, the lock file gets created, and the
shared memory gets allocated.
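
From the command line, a broker that has been configured in conmgr.properties can
be started through the Admin Server with the dbman utility; a sketch, assuming a
configured database named nxt:
# dbman -db nxt -start
# dbman -db nxt -query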

64
Application Broker: The application broker can then be started. A request gets sent
to the Admin Server to start the specified application broker.
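
Application brokers defined in ubroker.properties can likewise be started through the
Admin Server with the asbman utility; the broker name asbroker1 here is an
assumption, so substitute the name configured on your system:
# asbman -name asbroker1 -start
# asbman -name asbroker1 -query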

65
Once the appbroker is running it registers itself with the nameserver that the admin
server started.

66
Now that the admin server, name server, database broker and application broker are
all running, we can begin connecting users to the database.

67
Here User1 double-clicks the SX.enterprise icon on the desktop. A prowin32 session
begins and communicates through the network to the database broker and requests
to log in.

68
Auto Server: The database broker acknowledges the connection for User1 and
spawns a child process called an Auto Server. The auto server makes a connection to
the database. Note that this connection will remain throughout the entire life of the
database broker.

69
The database broker then sends a message back to User1 instructing the prowin32
process to disconnect from the database broker and

70
reconnect to the auto server. Note that this connection will remain active throughout
the entire life of the prowin32 client session.
On the client, the SX.enterprise login box is now displayed.

71
User1 logs in and hits Enter. The prowin32 process then sends a message across the
network to the Progress nameserver requesting the location of the registered
application broker.
The nameserver then responds with the host name and port number for the
application broker.

72
The prowin32 process then disconnects from the nameserver and

73
connects to the application broker at the host/port provided by the nameserver.

74
The application broker acknowledges the login of User1 and spawns an application
server. The application server connects to the database and this connection is
maintained throughout the life of the application server.

75
The application broker then tells the prowin32 process to disconnect and

76
reconnect to the application server designated for User1. This connection will be
maintained throughout the entire client connection.

77
User1 now has two connections to the database: one via an auto server and one
via the application server.

This architecture model allows us to scale your environment horizontally. That is,
by adding an additional server or two, we can increase the performance of the
appbroker/appservers. The nameserver allows load balancing between the
appbrokers.

78
The admin server is the key to everything. Nothing else may be started without the
admin server running.
The nameserver directs clients to the application brokers.
Application brokers spawn application servers.
Database Brokers spawn auto servers.

Summary:
Brokers handle logins.
Servers serve data.
The Auto Server connections are a one-to-many relationship, so one auto server will
typically be servicing multiple clients.
The Application Server connections are a one-to-one relationship, so one application
server is only ever servicing one client at a time.

79
80
Next we discuss how to properly disconnect users from the Progress database.
It is important to use the Infor shutuser script and never use the Unix kill command or
the Windows end task to disconnect a user.
There are also risks involved in shutting out a user from the database.

81
When attempting to disconnect a Direct Connect Client connection to the database,
the process that we want to clean up is the _progres process.
Example: A user has run a huge report without any options; therefore the report is
going to be huge, and probably not what the user wanted.
If we simply used the kill command or end task to kill the _progres process, then we
definitely clean up the client process. However, since the database broker process is
responsible for transferring data from the database into memory, we would not stop
the reporting process. The database broker would continue to transfer all of the data
necessary to create the report, attempt to deliver that report to the user, and only
then realize the user is no longer connected, using lots of disk space, memory, and
system resources along the way.
If we use the shutuser script to disconnect a user from the database, it uses
Progress commands to disconnect the user. The shutuser script will send a message
to the database broker, telling the process that this user needs to be disconnected.
The database broker will log the user out of the user table in memory and stop
processing anything for that user.
Also, if a user has a lock in memory at the time you execute a Unix kill or Windows
end task on the _progres process, Progress will panic and bring the database broker
down. This is to provide data integrity to the database.

82
For a Remote Client connection, the process we want to clean up is the prowin32
process. This connection can be jeopardized in many ways: through loss of network
connectivity, loss of power, the PC becoming locked up, etc.
Since this is the case, the auto-server acts as a buffer to shared memory. If the user
gets disconnected, the auto-server will recognize this and send a signal to the
database broker to log this user out of the system tables in memory and stop
processing anything for that user.
There will still be times that you will need to use the shutuser script on graphical
processes.
The following is the correct syntax to use with the shutuser script:
# shutuser 0001usr

Where 0001 is the company number that the user is logged into for SX.enterprise.
The company number must be padded to 4 characters with zeros.
And, usr is the SX.enterprise login for the specific user.
So user jab logged into company 500 would be as follows:
# shutuser 0500jab
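
As noted above, shutuser uses Progress commands under the covers; the underlying
utility is proshut. For reference, the raw calls look something like this (a sketch,
assuming the database is /db/nxt): list the connected users, note the user number
(Usr) of the one you want, then disconnect it.
# proshut /db/nxt -C list
# proshut /db/nxt -C disconnect 10
The Infor script remains the recommended route; the user number 10 here is just an
example taken from the login message shown earlier.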

83
84
Let's try to crash our database, as long as users aren't in it. There are several ways to
purposefully crash an OpenEdge database. We could remove the lock file, which is
the only physical part of the database broker. We could change the hostname of the
server, or we could delete a data file.

Before we crash it, let's back it up first. That way, if we mess it up too badly, we can
just restore from backup. This backup command is one I like to run when I'm feeling
extra paranoid. We'll talk about backups later on in more detail, but for now, let's run
backup.online /db/nxt and see that it creates backup files in the backup directory.
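
backup.online is Infor's wrapper script; the underlying Progress utility is probkup,
which can back up a running database. A sketch of the raw command, assuming
/db/nxt and an assumed target of /backup/nxt.bak:
# probkup online /db/nxt /backup/nxt.bak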

85
Next we will take a look at Crash Recovery.
Recovering from a crash is a 5-step process. The last 3 steps are easy and
straightforward.

When we talk about crash recovery, it is nice to use a power outage as an example.
This is because it is a very clean way of crashing the database, and we know that we
will lose all processes and anything in memory. This does not mean that the
following examples will only apply to a power outage situation. In any crash, you will
lose anything that was in memory at the time, and any associated processes that
were connected to the database. So, keep in mind that the following applies to all
crashes.

86
Figuring out why we crashed is essential. If we crashed due to a power outage,
nothing will be written to the log file. For some reason, hard drives need electricity in
order to write data.
In most other scenarios, something should be written to the database log file.

If we don't figure out why the database is down, then when we start it back up, it
could come down again.

87
Now that we've figured out why it crashed, we need to fix it.
Did our disks fill up? Then add more disk, or delete some files.
Did the power go out? Write a check to the power company and get them to turn it
back on.

These first 2 steps, "figure out why" and "fix it," could take a while to finish. Happily,
the next 3 steps are simple and straightforward.

88
We'll discuss 3 scenarios to discover why the last 3 steps are necessary.

89
In our first power outage scenario we see that the information started in the
database, then the user could access and change that data, but the biw did not get a
chance to wake up and write that information from memory to the .bi file.

90
91
When we lose power, everything in memory is lost, along with any processes that
were running.

92
93
In this scenario, when we restored power, the only place that the change was ever
recorded was in memory, so we lost the change.
This is why we tell our users to check their last 3 minutes of work after a crash.
Now, 3 minutes is a really long time in computer terms, but it is a nice safety net to
give the users so that they can comfortably know what data they may lose.

94
We'll discuss 3 scenarios to discover why the last 3 steps are necessary.

95
In our second power outage scenario we see that the information started in the
database, then the user could access and change that data, the biw was awakened
and started writing the changes from memory to the .bi file, but it did not get a
chance to write the end-transaction note.

96
97
When we lose power, everything in memory is lost, along with any processes that
were running.

98
99
In this scenario, when we restored power, the change was recorded in the before-
image file by the BIW, and the only thing that did not get written was the end-
transaction note.
When we start the database broker, it reads through the .bi file and processes all of
the transactions logged there.

WE DO NOT WANT THIS TO HAPPEN AUTOMATICALLY!

Since Progress does not have the full scope of the transaction, signified by the begin
and end transaction notes, it must eliminate the transaction.

Infor recommends the use of the truncate.bi script to process the transactions in the
.bi file before bringing the database broker up. This is so that you know everything is
ready to go before you start the database, and you don't have to find out at startup
that the database will not start for some reason.

100
We'll discuss 3 scenarios to discover why the last 3 steps are necessary.

101
In our last power outage scenario we see that the information started in the
database, then the user could access and change that data, the biw was awakened
and wrote the changes from memory to the .bi file, but the APW did not get a chance
to write the change back to the database.

102
103
When we lose power, everything in memory is lost, along with any processes that
were running.

104
105
In this scenario, when we restored power, the change was recorded in the before-
image file by the BIW; the only thing that did not get done was the APW writing the
change back to the database.

When we truncate the bi file, it rolls all of the completed transactions into the
database, and discards the incomplete transactions.

Then, we can start the database without additional delay and the users can begin
checking their last 3 minutes worth of work.

106
Infor recommends the following five things after a crash:
1.) Check the database log file. Remember that anytime the database broker comes
down, it is for a good reason, and Progress will record why it came down in the
database log file.
2.) Fix it. Fixing it may include restoring from backup, building a new server, or simply
getting the power back on.
3.) Use the truncate.bi script to truncate the .bi file (see the sketch after this list).
This will ensure that the database is in a known state before you bring the database
broker back up. When you bring the database broker back up after truncating the .bi
file, you know it will be ready to go.
4.) Bring the database back up.
5.) Have your users check their last 3 minutes of work.
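
Putting steps 3 and 4 together, the raw Progress commands behind the scripts look
something like this (a sketch only; /db/nxt is assumed, and the -bi and -biblocksize
values match Infor's recommendations from the parameters section of this session):
# proutil /db/nxt -C truncate bi -bi 1024 -biblocksize 8
# proserve /db/nxt
On a standard installation you would use the Infor truncate.bi script and then start
the broker through the Admin Server instead.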

107
108
When we talk about parameters, there are actually several different types of
parameters. We will now take a look at three different types:
1.) database parameters - The parameters that the database broker uses to start up.
2.) client parameters - The parameters a client process uses at startup to configure
the client environment.
3.) .bi file parameters - The parameters we use when truncating the before-image
file.

We'll just discuss the ones that can affect database performance and access.

109
For the database broker, we should use the OpenEdge Explorer tool to modify these
files. Alternately, if we wanted to be bold and daring, we could modify the
conmgr.properties or ubroker.properties file with a text editor.

Let's get into your OpenEdge Explorer and take a look at these parameters. We'll also
take advantage of this time to make some changes that might speed things up a bit.

110
BlocksInDatabaseBuffers or -B - This is the Database Buffer parameter. It tells the
database how many shared memory blocks to allocate to the database buffer pool
when we start the database broker. The size of each block is dependent on the
database blocksize. Infor's installation value for -B is 3750. The recommended size is
10% of database size.
(3750 blocks x 8k database blocksize) = 30 MB memory
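As a hypothetical worked example, at 10% of a 24 GB database the buffer pool would
be about 2.4 GB, which at an 8k blocksize comes to roughly 300,000 blocks for -B.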

111
112
LockTableEntries or -L - This is the Lock Table parameter. It tells the database how
many entries to allocate when it creates the lock table in memory. The size of each
entry is 18 bytes. Infor's standard value for -L is 50000.
(50000 x 18 bytes) = 900 KB

113
114
MaxUsers or -n - This is the User Table parameter. It tells the database how many
entries to allocate when it creates the user table in memory. This parameter does
not use a lot of memory; however, all of the other memory calculations are driven off
this parameter, so do not over-allocate it. The correct values are as follows:
Character only - SASA number of licensed users + 20% = -n value
Graphical only - (SASA number of licensed users x 2) + 20% = -n value
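For example, a hypothetical graphical-only site licensed in SASA for 100 users would
use (100 x 2) + 20% = 240 as its -n value.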

115
116
BeforeImageBuffers or -bibufs - This is the BI Buffer parameter. It tells the database
how many memory blocks to allocate to the before-image buffer when we start the
database broker. The size of each block is dependent on the before-image blocksize.
Infor's standard value for -bibufs is 30.

117
118
Directio or -directio - This is the Direct I/O parameter. This tells the database broker
to bypass the OS buffers and write the Progress buffers directly to disk, instead of
writing the Progress buffers to the OS buffers and then to disk.
Note: there is no value for -directio; it is either on or off. Infor recommends running
with direct I/O turned on.

119
120
121
SpinLockReTries or -spin - This is the Spin Lock parameter. This tells the client
process to continue checking x number of times to see if a lock is free before
napping.
This is the only current parameter that is tunable with the database running, but it is
only tunable if it is turned on. Infor's recommended value for the -spin parameter is
1, increased by 20000 for each additional processor.

122
123
124
125
DatabaseName or -db - This is the Database parameter. This tells the database
broker which database to start against.

Host or -H - This is the Hostname parameter. This should be set to the hostname of
the system you are starting the database broker on. This can be found by typing
hostname at the Unix prompt.
Port or -S - This is the Service Port parameter. This is the service port that the
database broker will be listening on.
MinDynamicPort & MaxDynamicPort or -minport & -maxport - These are the
Firewall parameters. These tell the database broker to spawn the auto-servers on
the specified range of ports.
Example: -minport 7150 -maxport 7200 would cause the auto-servers to only spawn
on ports 7150 through 7200.
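
Pulling the broker parameters from this section together, a manual startup command
might look like the following (a sketch only; in practice these values live in
conmgr.properties and the broker is started through the Admin Server; /db/nxt, the
host dixie, and the service name nxt come from examples elsewhere in these notes,
and -n 240 follows the worked example above):
# proserve /db/nxt -H dixie -S nxt -B 3750 -L 50000 -n 240 -bibufs 30 \
    -directio -spin 1 -minport 7150 -maxport 7200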

126
-l - This is the Local Buffer Size parameter. This is a soft limit and tells the
client how much memory to allocate for workspace. This value will be
increased as necessary.
-q - This is the Quick Load parameter. This tells the client to reread the code
from memory, if that code is already in memory.
-T - This is the Work Directory parameter. This tells the client where to
create the workspace files. If this is not specified, the default is the current
working directory, which for the users would be /home. Infor
recommends setting this parameter to /db/sort.
-t - This is the Save Temp Files parameter. This parameter keeps Progress from
unlinking the filename of the temp files after they are created. In other words,
if you do not use this parameter, the workspace files are hidden.
-db - This is the Database parameter. This tells the client which database to
connect to.
-U & -P - These are the username and password parameters for database
security. If you disable the blank user id, then these parameters will need to
be configured.
-yy 1950 - This is the century marker parameter, needed for Y2K.

For the character client, the parameters are kept in /rd/opsys/client.pf.

The only difference between the parameter file for the SX.enterprise client and
the SX.enterprise editor client is that editor.pf does not use the -q parameter.
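
As an illustration, a character client.pf built from the parameters above might contain
something like this (a sketch; only values that appear in these notes are used, and
your actual file will differ):
-db nxt
-T /db/sort
-q
-yy 1950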

127
For the graphical client the above parameters are kept in several different
parameter files:
The Command line - This contains the initial call to the .pf files and sets the
location for the Progress client workspace files. The following parameters are
found here:
-pf login.pf
-T "%TEMP%"

C:\nxt\code\login.pf - This contains the calls to the other parameter files in
the appropriate order and then the call to the li.p program for logging into
SX.enterprise. The following parameters are found here:
-pf ..\Local\netpath.pf
-pf startup.pf
-p li.p

C:\nxt\local\netpath.pf - This contains the necessary information to connect
the SX.enterprise client to the Progress Appserver. The following parameters
are found here:
-param "START, PROPATH=server-propath.ini ConnectParms='-H <hostname> -
AppService appsrv -S 7182'"

C:\nxt\code\startup.pf - This contains global parameters, and this file gets
overwritten during the push process. The following parameters are found
here:
-pf ..\Local\local.pf
-rereadnolock -D 500 -devevent -filterocxevents -mmax 6000 -nochkttnames

C:\nxt\local\local.pf - This contains parameters that are local to this client only. This
file does not get overwritten during a push process. The following parameters are
found here:
-TB 8
-pls

C:\nxt\code\connect.pf - This contains database connection parameters. This
parameter file must be called connect.pf, and is called from the code. The client uses
this parameter file to connect to the database across the network. The following
parameters are found here:
-db nxt
-N tcp
-H dixie
-S nxt
-cache nxt.csh

128
-bi - This is the Cluster Size parameter. This parameter tells the truncate.bi script
what size to set the before-image cluster size to. Infor's recommended value for this
parameter is -bi 1024, which sets the cluster size to 1 MB.
-biblocksize - This is the Blocksize parameter. This parameter tells the truncate.bi
script what size to set the before-image block size to. Infor's recommended value for
this parameter is the same as your database blocksize.

By changing the bi cluster size, we can tune the database to be a little faster.

Larger cluster sizes generally increase performance. However, they also have
significant drawbacks, including increased disk space usage for the BI file and
longer crash recovery periods.

Cluster sizes from 512 to 4096 KB are the most beneficial, and sizes as high as 16 MB
may speed things up even more.

We can use promon to see if we need to increase this. We'll examine that later when
we get to promon and the performance tuning section.

129
130
131
There are 2 different types of database extents:
Variable extents - These extents will grow as needed to accommodate more data.
Fixed extents - These extents are pre-formatted when the database is created, and
never grow.

132
If we imagine our database is like a 5-gallon bucket and the data in it are like M&Ms,
where the green M&Ms are OE records, the blue ones are Inventory records, etc.

As we add data, we are pouring M&Ms into that bucket. We are just adding data to
that file. This results in data being written quickly. But it does have a downside.

133
134
With fixed extents, the database was already formatted with disk space when the
database was created.
When Progress needs to write the same record to a database using fixed extents, it
simply writes that data directly to the disk.
This we consider to be a 1-step write process.

However, when we get to the end of that fixed extent, we can't add more data to the
database, and it crashes to protect the integrity of the database and

135
We end up with M&Ms all over the floor when we overflow our bucket.

136
With variable extents, the OS will format a little more space and then Progress may
continue writing data to the database.
If there still is not enough room, then Progress will need to get the OS involved again,
to format a little more space, and then Progress may continue writing the data to the
database.

137
138
139
This must continue to happen until all the data has been written to the database.
Therefore, this can be looked at as a 2-step write process. So, it is slower than fixed
extents.

140
141
This is like having our 5-gallon bucket's footprint, but no height. Then when we add
M&Ms, we need to duct-tape a 1/8-inch-high layer onto our bucket.

142
143
Inodes are like doors into Fenway Park, but for files on a Unix system. If 100,000
people tried to go see the Red Sox play and they only had one door to go in, we'd see
a big line outside the door.

Likewise, when we have just one large file and 100,000,000 writes trying to happen,
this can result in poor system performance.

144
When the OS needs to write to a file on disk, the inode for the file being updated
must be locked. This is because the OS cannot allow multiple processes to write to
the same file at the same time.
Adding more doors to our stadium lets more people in at once.
With a multi-volume database, there are several extents, each with a different inode
that may be locked. Therefore, a multi-volume database may have more than one
update at a time, while a single-volume database may not.

145
Having a combination of fixed and variable extents gives us both the speed and
flexibility we need. Database extents give us the flexibility to add more space to the
database in a variety of ways.

146
We've split up the database into various storage areas. Each storage area can
be considered a kind of mini-database.

147
These 12 tables are the tables we consider the busiest and/or that hold the most
data. By examining their function, we can see why that might be. Hopefully we are
adding lots of orders! That's the OEEH and OEEL tables. We recommend that you
analyze your database periodically to see if we should isolate any other tables.

148
We have 3 standard areas, called default, transient, and custom. These areas contain
all the rest of the tables and the fields that weren't part of our previous slide.

149
Each area has its own set of fixed and variable extents. In the example shown, the
last extent, extent 9, is variable. Let's take a look at the various files of the database
on your system.

150
151
Now we will take a look at how data gets added to the database at a database block
level. This will offer you an understanding of what is going on inside the database
when you add or delete records.
For this discussion we will look at the following terms:
High Water Mark - any blocks above the high water mark have never had data in
them.
RM Blocks - RM or Record Management blocks are blocks that are considered to be
full.
RM Chain Blocks - RM or Record Management chain blocks are blocks that are
considered to be partially full, or now empty but having had data in them at one point.
Empty Blocks - Empty blocks are blocks above the high water mark that have never
had data in them before.

152
RM Blocks - These are full blocks; they are below the high water mark.

153
RM Chain Blocks - These are blocks that are partially empty or empty but have had
data in them at one point. These blocks are indexed for speed in a chain. Each block
has a pointer in it to the next block on the chain. This is so that no matter where
these blocks are in the database, Progress can find the next one quickly.

154
Empty Blocks - These are empty blocks; they are above the high water mark and
have never had data in them.

155
We are going to look at 2 different scenarios of adding a new record to the database.

156
157
158
159
160
Let's add a record to our database and see what happens.
You can see our New Record waiting to be added to our database.
The first thing that happens is that Progress checks the first 3 blocks of the RM chain
to see if any of them has enough room to hold the new record.
If Progress finds enough room for the new record, it writes the new record to the
block.
In this example, Progress fills the block, so the block gets taken off the RM chain. The
data does not move anywhere in the database; the block simply gets the flag removed
that tells Progress it is part of the RM/Free chain.

161
How full is full? Progress considers any block that is 93% or more full to
be a full block. Any blocks that are considered full get removed from the RM Chain
and become simply RM Blocks.
Progress uses this almost-full concept to allow a record to grow within the same
block without having to be fragmented.
It is important to note that during the dump and load procedure, Progress will
reestablish this 7% buffer on the database blocks. This is why your database may
actually be larger after a dump and load than it was beforehand.

162
In this second scenario the new record will not fit in the first 3 blocks of the RM/Free
chain.

163
Let's add another record to our database and see what happens in this scenario.
You can see our New Record waiting to be added to our database.

164
The first thing that happens is that Progress checks the first 3 blocks to see if any of
them has enough room to hold the new record.

165
In this case Progress does not find enough room to write the new record to
any of the first 3 blocks of the RM/Free chain.
Progress then makes a decision and pulls the next Empty Block and adds it to the
RM/Free Chain.

166
Progress then adds the new record to the newly added block.
Since in this case, the block does not fill up, Progress leaves the block on the RM/Free
chain.

167
Progress then moves the first 3 blocks of the RM/Free chain to the end of the chain.
This is so that the same 3 blocks are not being checked over and over again.

168
This process is built for speed. If there were 10,000 blocks on the RM chain, you
would not want to wait for potentially 10,000 blocks to be read every time Progress
needed to add a record to the database.

169
Any new record that is going to be added to the database will have a maximum of 3
blocks checked to see if there is room for the new record. After the first 3 blocks
have been checked, Progress makes a decision and grows the database as necessary,
even if the fourth block that it would have checked could have held the record.
The first 3 blocks of the RM chain always get moved to the end; this rotation keeps
Progress from always hitting the same 3 blocks over and over.
With this new information, we can see why, if you delete or purge data from your
database, you cannot guarantee that Progress will reuse that space. The only way to
ensure that Progress reuses that space is to do a dump and load of the database.
Progress considers blocks to be full when they are approximately 93% full. Progress
will reestablish this 7% buffer when we do a dump and load of the database. If a
dump and load of the database has not occurred in a while, the database may grow
after a dump and load.

170
We have been talking all through this session about dump and loads.
Let's finish this session by taking a deeper look at what a dump and load is all
about.

171
A dump and load is a process to defragment the Progress database.
When we took a look at Record Management, and we added a new record to the
database, the procedure was designed for speed. The record management process
was not designed to keep the data streamlined. The dump and load process is the
process used to streamline and defragment the records of your database.
Infor recommends doing a dump and load of the Progress database once a year.
The dump and load is a safe procedure, but it may take a long time. It is basically a
weekend project. The dump and load process will dump all of the data out of the
database, the database will be deleted and then recreated, and then all of the data is
loaded back into the database. The dump and load procedure takes approximately 4
hours per gigabyte of data. This process does go much faster on higher performance
systems.

172
173
As data gets added to the fixed extents in a database, it is like pouring M&Ms into
buckets. As each fixed extent fills up, the data simply starts spilling into the next
extent. It does not matter if the record is an OE record or a PO record; they are all
treated the same.

174
When we do a dump and load, we export all of the data from the database, then
delete the database and create a new one, then we load all of the data back into the
database. The data being loaded back into the new database is loaded table by table,
each using the primary index. Therefore, all of your AP records get put together in
the database, then your AR records, the OE records, and so on.
Since all of the records of the same type are together in the database after a dump
and load, reporting performance can be significantly increased. It is recommended
that a dump and load be performed on the database at least once a year.

175
How do I know if we need a dump and load?

Using the tabanalys tool, we can determine if a dump and load is required. It will
generate a report that tells us the scatter factor for each table. Generally, 1.0 is good
and over 4 is high. If we have a lot of tables, or a few bigger tables, whose scatter
factor is high, then a dump and load is recommended.
Let's review your database now by running the tabanalys command.
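
tabanalys is a qualifier of the proutil utility; a sketch, assuming /db/nxt and an
assumed output location:
# proutil /db/nxt -C tabanalys > /tmp/nxt.tabanalys
The scatter factor for each table appears in the resulting report; focus on your
largest and busiest tables when reading it.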

Also, an online tablemove is an option that Progress recommends. Please review the
Progress Knowledgebase for directions on that.

176
The Progress Knowledgebase is a great tool. You can access it from
www.progress.com/support.

Right now, the URL is knowledgebase.progress.com, but that changes periodically.

177
178
179
180
181
According to Progress, promon reports APW writes as "buffers flushed" in several
places. You want that number to be less than 10 for almost every checkpoint.

182
183
184
185
186
187
188
According to Progress: Durations for the last 8 checkpoints are displayed in promon's
Checkpoints display. If they are at least a minute long, you are fine.
Longer is OK but not needed. If they are shorter than a minute, you should increase
the cluster size.
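
To check this on your own system (a sketch; the exact option numbers vary by
OpenEdge release):
# promon /db/nxt
From promon's main menu, enter R&D and navigate to the Checkpoints display to see
the durations, along with the buffers flushed counts mentioned earlier.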

189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
