
BladeLogic Server Automation Database Sizing

There is no simple or certain way to predict exactly how much a customer's database will grow over a given period. It comes down to usage, and the number of usage variables is very large. For example, a customer with only 50 targets that executes compliance jobs with hundreds of rules against those targets daily could generate the same amount of data as a customer with 10,000 servers that runs only a light compliance job against all targets quarterly. Use the spreadsheet below to get a general idea of how much a customer's database will grow over time.

This document covers:
- Core DB Considerations
- Table Sizes
- How to roughly evaluate the size of the JOB_RUN_EVENT table
- Reports DB
- Sizing Spreadsheet
- Sizing Spreadsheet FAQs: Why does the BSA database need so much space each day for the archivelog?

Core DB Considerations
The size of the database depends largely (80 to 90%) on the size of one particular table, JOB_RUN_EVENT, which holds Job log records. The size you need to estimate therefore depends almost entirely on several job-related factors:

- the number of agents
- the number of jobs and how frequently these jobs will be run
- the average number of targets in a job
- the type and complexity of the jobs
- how many Job execution logs will be kept
- how often a DB cleanup will be run (engineering has developed routines to help properly manage Job log retention; make sure to get that information - I believe the feature is going to be introduced in 7.4 or 7.5)

Each line of job log that is generated creates a new record in the JOB_RUN_EVENT table.

Table Sizes
I looked at several customer databases we have in-house. Table sizes varied from database to database, but for customers that ran snaps/audits/deploys as well as compliance, the tables below were the largest 20. Their exact positions varied, but job_run_event, component, bl_acl, bl_ace, property_set_instance, and prop_set_instance_prop_val were always in the top 10. By size:

JOB_RUN_EVENT
PROP_SET_INSTANCE_PROP_VAL
COMPONENT
PROPERTY_SET_INSTANCE
BL_ACL
BL_ACE
DISCOVERY_RESULT_PART
BL_ACL_CHECKSUM
BL_VALUE
JOB_RESULT_DEVICE
DISCOVERY_RESULT_COMPONENT
PRIMITIVE_BL_VALUE
DEPLOY_JOB_RUN_EVENT
AGG_READ_ACCESS
AUDIT_TRAIL
AUDIT_OBJECT_PART
AUDIT_OBJECT_PART_COUNT
AUDIT_OBJECT
JOB_OPTION_VALUE
BLGROUP

How to roughly evaluate the size of the JOB_RUN_EVENT table


Variables:
- N_runs: number of Job runs kept online
- N_templates: number of ComponentTemplates discovered in a DiscoveryJob
- N_instances: number of PropertyInstances in a ComponentTemplate
- N_servers: number of targets in a Job
- N_lines: number of lines a script sends to STDOUT and STDERR per server

Impact of Job log on the JOB_RUN_EVENT table, per Job Run:

DiscoveryJob
  Impact: generates 4 lines of log data per ComponentTemplate discovery, per instance, per server
  Formula: N_runs x N_templates x N_servers x N_instances x 4

ACLpushJob
  Impact: generates 1 line of log per server the ACLs are pushed to
  Formula: N_runs x N_servers

UpdatePropertiesJob
  Impact: generates 1 line of log per server the Properties are updated on
  Formula: N_runs x N_servers

NSHScriptJob
  Impact: generates as many lines per server as are sent to STDOUT and STDERR by the Script the Job is based on
  Formula: N_lines x N_runs x N_servers

AuditJob / SnapShotJob / ComplianceJob / BatchJob / DeployJob
  Impact and formula not given.

Once the number of lines is calculated, the size of the data contained in the JOB_RUN_EVENT table can be estimated by multiplying by the average size of a line (usually 150 to 200 characters). The database itself should be sized at least 50% larger than this estimate.
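The per-job-type formulas above can be turned into a rough calculator. The following is a minimal sketch, not a supported tool: the formulas and the 150-to-200-character average line size come from the table above, while the function names, parameter names, and the example figures are invented for illustration.

```python
# Rough JOB_RUN_EVENT sizing sketch based on the per-job-run formulas above.
# All names here are illustrative, not part of any BladeLogic API.

AVG_LINE_BYTES = 175  # midpoint of the 150-200 character average line size


def discovery_job_lines(n_runs, n_templates, n_servers, n_instances):
    """4 log lines per ComponentTemplate discovery, per instance, per server."""
    return n_runs * n_templates * n_servers * n_instances * 4


def acl_push_job_lines(n_runs, n_servers):
    """1 log line per server the ACLs are pushed to."""
    return n_runs * n_servers


def update_properties_job_lines(n_runs, n_servers):
    """1 log line per server the Properties are updated on."""
    return n_runs * n_servers


def nsh_script_job_lines(n_lines, n_runs, n_servers):
    """One log line per STDOUT/STDERR line the script emits, per server."""
    return n_lines * n_runs * n_servers


def table_size_bytes(total_lines, avg_line_bytes=AVG_LINE_BYTES):
    """Estimated JOB_RUN_EVENT data size; size the DB itself >= 50% larger."""
    return total_lines * avg_line_bytes


if __name__ == "__main__":
    # Hypothetical example: 10 retained runs of a DiscoveryJob with 5
    # templates of 20 property instances each, against 500 servers.
    lines = discovery_job_lines(n_runs=10, n_templates=5,
                                n_servers=500, n_instances=20)
    data_mb = table_size_bytes(lines) / 1024 ** 2
    print(f"{lines:,} log lines, ~{data_mb:.0f} MB of table data, "
          f"DB sized for at least ~{1.5 * data_mb:.0f} MB")
```

Multiplying the four formulas out and summing across a customer's job mix gives the rough JOB_RUN_EVENT footprint the section describes.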

Reports DB
The size of the reports database depends largely on the amount of agent logs you will store in it, and therefore on:

- the number of agents
- whether agent logs are collected at all
- the log level of the agents
- the activity on the agents
- the retention period you want for these logs (engineering has also developed routines to manage this)

If you do collect the agent logs, you will also have to plan for extra temporary space in the REPORTS schema itself (besides the TMP tablespace). This size depends on the number of agents, the size of the agent logs, and the frequency at which you (a) collect the logs and (b) load them into the database with populate_reports.

Finally, some architectural considerations: do you plan to run a single DB server for both the CORE and REPORTS databases, or separate DB servers? In the first case you will have to size the TMP and UNDO tablespaces to manage both schemas at the same time; in the second case you will probably be able to use smaller, individually tailored TMP and UNDO tablespaces.

Sizing Spreadsheet
BladeLogic_Database_Sizing_Worksheet.xls - the latest BL Sizing Worksheet for the CORE DB. It outputs the following:

- Application Server sizing information (# of servers per server type)
- Database sizing information (data & index space after 1 year)
- DB Operational Guidelines

Sizing Spreadsheet FAQs


Why does the BSA database need so much space each day for the archivelog?
The Archive item in the Database Operational Guidelines section of the summary suggests archiving 40 GB/day. Why is this? This basic guideline was provided in connection with job activity. It was determined from the experience of midsize-to-large customers and from internal measurements. The relatively high archive log size per day is due to how frequently BSA writes to the database - for example, to the job_run_event table. With average job intensity you can extrapolate from this example: ~40 GB/day is sized for a target count of 35K-50K. As always, provide some cushion.
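That extrapolation can be sketched in a few lines. This is only an illustration: the ~40 GB/day for 35K-50K targets figure comes from the answer above, while the assumption of linear scaling with target count, the 20% cushion, and the function name are all invented here.

```python
# Linear extrapolation of daily archivelog space from target count.
# Assumes the ~40 GB/day guideline for 35K-50K targets scales roughly
# linearly with target count; the 20% cushion is an illustrative choice.

GUIDELINE_GB_PER_DAY = 40.0
GUIDELINE_TARGETS = 42_500  # midpoint of the 35K-50K range


def archivelog_gb_per_day(targets, cushion=1.2):
    """Rough daily archivelog estimate for average job intensity."""
    return GUIDELINE_GB_PER_DAY * (targets / GUIDELINE_TARGETS) * cushion


if __name__ == "__main__":
    for n in (5_000, 20_000, 50_000):
        print(f"{n:>6} targets: ~{archivelog_gb_per_day(n):.0f} GB/day")
```

A customer with an unusually heavy or light job mix should scale the baseline up or down accordingly rather than rely on target count alone.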
