
The Data Warehouse Data Quality Audit

A data quality audit is a survey and/or sampling of data tables to determine the accuracy and completeness of the data.
An audit can help you discover the source of data warehouse quality problems. Data quality problems are often widespread and originate in your source systems, their applications, and operational processes. And, sadly, many are the direct result of inadequate warehouse management. To combat these problems, data warehouse architects must understand 1) their source data and 2) the target data that has been loaded into the warehouse. This understanding can best come from data profiling.
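As a minimal sketch of what that profiling might look like, the following assumes a pandas DataFrame loaded from a staging extract; the file name and columns are placeholders for your own source or target tables:

import pandas as pd

def profile_table(df: pd.DataFrame) -> pd.DataFrame:
    # Column-level profile: data type, non-null count, null percentage, and distinct-value count
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "non_null": df.notna().sum(),
        "null_pct": (df.isna().mean() * 100).round(2),
        "distinct": df.nunique(),
    })

# Hypothetical staging extract; substitute your own table
source_df = pd.read_csv("customer_extract.csv")
print(profile_table(source_df))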

Understanding the Source and Target Data


Within your source systems are two principal areas of concern: insufficient process controls to ensure data quality and consistency, and limited or no basic edits. Source applications that create or update your source data have been known to provide little or no edit checking to ensure that the data being entered meets even the most basic standards. Without measures to monitor and control the most fundamental standards in the source systems, the warehouse team will struggle against a constant flow of poor-quality data.

The target warehouse adds to the data quality problems, which can typically be attributed to two factors: inadequate source data analysis and limited warehouse (target data) management. Guidelines for sourcing data into the warehouse must be established in order to avoid sourcing data simply because it is part of an existing record; this practice leads to a high percentage of data of questionable value being warehoused. And a fundamental function of the warehouse team is to actively monitor and tune warehoused data in pursuit of quality assurance and data relevance.

So, how good (or bad) is your warehoused data? A data quality audit is designed and implemented in order to determine the answer. There are three high-level objectives of a data quality audit:

o Produce statistically valid data quality measurements of the data warehouse.
o Investigate, identify, and document leading data quality causes.
o Create an assessment and recommendation report.

This audit approach is straightforward. It is designed to be implemented by any warehouse team, on demand.

High-quality data needs to pass a set of quality criteria. Those include:

Validity: The degree to which measures conform to defined business rules or constraints. When modern database technology is used to design data-capture systems, validity is fairly easy to ensure: invalid data arises mainly in legacy contexts (where constraints were not implemented in software) or where inappropriate data-capture technology was used (e.g., spreadsheets, where it is very hard to limit what a user chooses to enter into a cell). Data constraints fall into the following categories (several of these checks are sketched in code after the list):
o Data-type constraints: Values in a particular column must be of a particular data type, e.g., Boolean, numeric (integer or real), date, etc.
o Range constraints: Typically, numbers or dates should fall within a certain range; that is, they have minimum and/or maximum permissible values.
o Mandatory constraints: Certain columns cannot be empty.
o Unique constraints: A field, or a combination of fields, must be unique across a dataset. For example, no two persons can have the same social security number.
o Set-membership constraints: The values for a column come from a set of discrete values or codes. For example, a person's gender may be Female, Male, or Unknown (not recorded).
o Foreign-key constraints: The more general case of set membership, in which the set of values in a column is defined in a column of another table that contains unique values. For example, in a US taxpayer database, the "state" column is required to belong to one of the US's defined states or territories; the set of permissible states/territories is recorded in a separate States table. The term foreign key is borrowed from relational database terminology.
o Regular expression patterns: Occasionally, text fields have to be validated this way. For example, phone numbers may be required to have the pattern (999) 999-9999.
o Cross-field validation: Certain conditions that involve multiple fields must hold. For example, in laboratory medicine, the sum of the components of the differential white blood cell count must equal 100 (since they are all percentages), and in a hospital database, a patient's date of discharge cannot be earlier than the date of admission.
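To make these categories concrete, here is a minimal sketch of how several of them might be expressed as executable checks. Pandas is assumed, and the column names and specific rules (age, customer_id, ssn, gender, phone, admission/discharge dates) are purely illustrative:

import pandas as pd

def run_base_checks(df: pd.DataFrame) -> dict:
    # Counts of violations for a handful of illustrative constraint categories
    violations = {}

    # Data-type / range constraint: age must be numeric and between 0 and 120
    age = pd.to_numeric(df["age"], errors="coerce")
    violations["age_out_of_range"] = int((age.isna() | (age < 0) | (age > 120)).sum())

    # Mandatory constraint: customer_id cannot be empty
    violations["customer_id_missing"] = int(df["customer_id"].isna().sum())

    # Unique constraint: no duplicate social security numbers
    violations["ssn_duplicates"] = int(df["ssn"].duplicated().sum())

    # Set-membership constraint: gender must be one of the (illustrative) coded values
    violations["gender_invalid"] = int((~df["gender"].isin(["F", "M", "U"])).sum())

    # Regular-expression constraint: phone must match the pattern (999) 999-9999
    phone_ok = df["phone"].astype(str).str.match(r"^\(\d{3}\) \d{3}-\d{4}$")
    violations["phone_format"] = int((~phone_ok).sum())

    # Cross-field constraint: discharge date cannot precede admission date
    discharge = pd.to_datetime(df["discharge_date"], errors="coerce")
    admission = pd.to_datetime(df["admission_date"], errors="coerce")
    violations["discharge_before_admission"] = int((discharge < admission).sum())

    return violations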

Accuracy: The degree of conformity of a measure to a standard or a true value (see also accuracy and precision). Accuracy is very hard to achieve through data cleansing in the general case, because it requires accessing an external source of data that contains the true value: such "gold standard" data is often unavailable. Accuracy has been achieved in some cleansing contexts, notably customer contact data, by using external databases that match zip codes to geographical locations (city and state) and also help verify that street addresses within those zip codes actually exist.

Completeness: The degree to which all required measures are known. Incompleteness is almost impossible to fix with data cleansing methodology: one cannot infer facts that were not captured when the data in question was initially recorded. (In some contexts, e.g., interview data, it may be possible to fix incompleteness by going back to the original source of the data, i.e., re-interviewing the subject, but even this does not guarantee success because of problems of recall; in an interview to gather data on food consumption, for example, no one is likely to remember exactly what they ate six months ago.) In the case of systems that insist certain columns should not be empty, one may work around the problem by designating a value that indicates "unknown" or "missing", but supplying default values does not imply that the data has been made complete.
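A small sketch of how completeness might be measured in that situation, counting both real nulls and designated placeholder values as missing; the sentinel codes listed here are an assumption and should be adjusted to whatever markers your systems actually use:

import pandas as pd

# Assumed placeholder codes
SENTINELS = {"unknown", "missing", "n/a", ""}

def completeness_pct(df: pd.DataFrame) -> pd.Series:
    # Per-column percentage of rows carrying a real value; nulls and sentinel codes both count as missing
    as_text = df.apply(lambda col: col.astype(str).str.strip().str.lower())
    missing = df.isna() | as_text.isin(SENTINELS)
    return ((1 - missing.mean()) * 100).round(2)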

Consistency: The degree to which a set of measures is equivalent across systems. Inconsistency occurs when two data items in the data set contradict each other: e.g., a customer is recorded in two different systems as having two different current addresses, and only one of them can be correct. Fixing inconsistency is not always possible: it requires a variety of strategies, e.g., deciding which data were recorded more recently, which data source is likely to be most reliable (the latter knowledge may be specific to a given organization), or simply trying to find the truth by testing both data items (e.g., calling up the customer).

Uniformity: The degree to which a set of data measures is specified using the same units of measure in all systems. In datasets pooled from different locales, weight may be recorded either in pounds or kilograms and must be converted to a single measure using an arithmetic transformation.
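A unit-normalization step for the weight case might look like the following sketch; the 'weight' and 'weight_unit' columns are assumptions:

import pandas as pd

LB_TO_KG = 0.45359237  # pounds-to-kilograms conversion factor

def weight_in_kg(df: pd.DataFrame) -> pd.Series:
    # Assumes a numeric 'weight' column and a 'weight_unit' column containing 'lb' or 'kg'
    weight = pd.to_numeric(df["weight"], errors="coerce")
    return weight.where(df["weight_unit"].eq("kg"), weight * LB_TO_KG)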

An Approach to the Data Quality Audit


The data quality audit is a business-rules-based approach that incorporates standard deviation to identify variability in sample test results. You examine the likelihood (at confidence levels of 95 to 99 percent) of invalid data values occurring in columns or fields after business rules are applied against sample test data.

Data warehouse tables can be the center of an audit because they provide the foundation for information delivery to your business users. These tables give the data management assessment team a starting point for examining critical activities in the warehouse process instead of concentrating on the source or feeder systems. Using the warehoused data as a baseline, the emphasis is on the effect, followed by the cause.

The following data quality audit approach is iterative. Using the spiral method, subject matter areas are identified and tasks are defined, enabling the team to set specific milestones and associated deliverables unique to each iteration. By keeping the process simple and focusing on key checkpoints, you'll be able to make significant progress. The six steps of this approach are conducted iteratively for each subject area of your audit.
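Before walking through the six steps, here is a rough sketch of the confidence-level calculation referred to above, using a normal-approximation confidence interval on the violation rate observed in a sample; the counts are made up, and z = 2.58 would give roughly 99 percent confidence instead of 95:

import math

def violation_rate_interval(violations: int, sample_size: int, z: float = 1.96) -> tuple:
    # Normal-approximation confidence interval for the true violation rate (z = 1.96 for 95 percent)
    p = violations / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return max(0.0, p - margin), min(1.0, p + margin)

# Example: 180 rule violations observed in a 10,000-record sample
low, high = violation_rate_interval(180, 10_000)
print(f"Estimated violation rate: {low:.2%} to {high:.2%} at 95% confidence")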

1. Deliver DDL: This is the first step in the data quality audit process cycle. It provides the initial scope of tables and their attributes relevant to the subject area being audited.
2. Get sample data: A critical success factor for calculating confidence levels based on standard deviation is to have random samples of target data. Use a random sample function to select the audit data; a sample size of 10,000 records for each target table is often sufficient (see the sketch below).
3. Establish base rule set: For each of the tables selected, a set of base rules must be established to test the conformity of the attributes' data values against basic constraints of the attributes, such as data type. For example, if an attribute has a DATE data type, then that attribute should store only date information.
4. Apply data management subject matter expert (DM SME) rules: After the base rules are tested, the DM SME rules identify and document the known business rules that affect each table or attribute relevant to the subject area.
5. Apply user community (UC) SME rules: The intent of this step is to have the UC review all the rules collected by data management as well as contribute any additional rules they deem necessary.
6. Calculate final results: Final data quality audit results are calculated and a report is published once all rules have been identified, formalized, reviewed, and accepted per subject area.

This six-step approach is intentionally kept at a high level in order to minimize confusion regarding the iterations. However, when executed, each of the six steps will follow a process to ensure success: Plan, Do, and then Review. You must plan each of the six steps with regard to the subject area being evaluated. Then you execute (do) the plan, and finally, review the results.

A data quality audit based on business rules has several advantages. Among other things, it provides a means for the data warehouse group to keep current on the business rules that need to be applied to the warehoused data. Moreover, the approach forces the technical teams (warehouse and IT) to synchronize with the user communities regarding the data being warehoused and the ultimate application of that data.
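To ground steps 2 and 3, the following sketch pulls a random sample and applies a simple base rule; the connection string, the customer_dim table, and the open_date column are assumptions for your environment:

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string
engine = create_engine("postgresql://user:password@warehouse-host/dwh")

# Step 2: pull a random sample of roughly 10,000 rows from a target table.
# Random-sampling syntax varies by database; ORDER BY RANDOM() works on PostgreSQL, for example.
sample_df = pd.read_sql("SELECT * FROM customer_dim ORDER BY RANDOM() LIMIT 10000", engine)

# Step 3: a base rule checking that values landing in a date attribute actually parse as dates
raw = sample_df["open_date"]
unparseable = raw.notna() & pd.to_datetime(raw, errors="coerce").isna()
print(f"open_date base-rule violations in sample: {int(unparseable.sum())}")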

Who's Responsible for the Data Quality Audit?



Buyer beware is the best way to describe responsibility in this matter. It would be great if the vendors were constantly and explicitly handling data quality. It would be awesome to hear vendors tell clients, "Prior to your committing large resources to this effort, we want to conduct a data quality assessment of the source data specific to this project to ensure we can create the necessary target tables you need to meet your business requirements."
