Introduction
By Clint Boulton
Migrating to Oracle Database 11g, An Internet.com IT Management eBook. 2007, Jupitermedia Corp.
In its 30th year, Oracle's database is the most established software of its kind in the lot, showing that a mature product is OK as long as you lavish it with new perks. Innovation is the key theme for Oracle's Database 11g, which company President Charles Phillips and other executives introduced at a launch event in New York in July. While some analysts have pegged the update as incremental over the previous 10g database, there is no mistaking 11g for old hat; more than 400 new features, including new manageability features and testing utilities, dominate the release. "People always need to manage more data, and the things they need to do with data become more complex," Phillips said. "We've got to keep up. So we continue to have these innovations year after year after year... This is really rocket science." Among the core new features in 11g is Real Application Testing, which lets customers test and manage changes to their IT environments quickly. Andrew Mendelsohn, senior vice president of Database Server Technologies at Oracle, said the technology combines a workload capture-and-replay feature with a
SQL performance analyzer to let users test changes against real-life workloads. The idea is to fine-tune them in a couple of days rather than months and put changes into production. Oracle Data Guard, the disaster-recovery technology, has also been upgraded in 11g, now allowing customers to use their standby databases to improve performance in their production environments as well as provide protection from system failures and disasters. The technology now allows simultaneous read and recovery of a single standby database, making it available for reporting, backup, testing and upgrades to production databases. Tired of spending thousands of dollars on disks? Mendelsohn said Oracle Database 11g has significant new compression capabilities to further cut the number of disks and cost of storage. In one scenario, Mendelsohn showed how a customer using a combination of tiered storage (for high-performing, less active and historical data) with the new compression technologies in 11g can trim a storage budget from almost $1 million to under $60,000 a year. 11g also boasts something Oracle calls Total Recall, which allows administrators to query table data from the
past. The idea is to bring a heretofore-unprecedented time dimension to data for tracking changes, which in turn leads to more intelligent auditing and compliance. For unstructured data such as images and large text files, 11g has Fast Files, which stores large amounts of information yet retrieves it quickly. Security, of course, is always a major concern, with data breaches (hello T.J. Maxx, et al.) and compliance regulations keeping CIOs on their heels. In 11g, Oracle has boosted its Transparent Data Encryption capabilities
beyond column-level encryption to scramble data in entire tables, indexes, and other data storage. Noting figures from Gartner that put Oracle's database market share at 47 percent -- more than the combined market shares for IBM's DB2 Universal Database and Microsoft's SQL Server -- Phillips said 11g is a continuation of Oracle's practice of carving out the database technology roadmap for the industry. "We don't mind defining the roadmap for them," Phillips said, chuckling.
I spent several months participating in the Oracle Database 11g beta evaluation program. Even though Oracle Database 10g impressed me with the breadth of its changes, I'm still trying to wrap my brain around the even more impressive upgrades in this next release. Here are some of my personal favorites among the plethora of Oracle Database 11g's new features.
Finally, the client result cache can retain results from queries or functions on the application server from which the call originated. By retaining result sets in these in-memory caches, the results are immediately available for reuse by any user session. For user sessions that connect to the database through an application server, the client cache permits those sessions to simply share the results that are already cached on the application server without having to reissue a query directly against the database. These result caches therefore hold great promise for eliminating unnecessary "round trips" to the database server to collect relatively static reference data that still
needs to be shared across many application servers or user sessions - a potentially immense improvement in overall database throughput.
Until now, it has been difficult to predict how a change to application code, a database patch set, or a hardware configuration will affect a database's performance. That usually meant purchasing a relatively expensive third-party package (e.g., Mercury Interactive's LoadRunner) to generate a sample workload against the database using the next version of the application code, and then comparing the results against baseline performance for the current application code version. Fortunately, Oracle Database 11g has come to the rescue with two new utilities that offer monumental strides forward in system testing:
Database Replay. Database Replay can capture generated workloads from production systems at the database level. Therefore, it's no longer necessary to run actual application code to duplicate the load on the database, and this also improves accuracy of the simulated workload because it limits or removes other factors like network latency. These captured workloads can then be replayed on a quality assurance database so that the impact of application changes, software patches, and even hardware upgrades can be measured accurately. This feature is especially valuable in detecting performance issues that could potentially hamstring a production database and that might otherwise go undetected until well after changes have been deployed.

If you've already experienced the advice for SQL performance improvements that Oracle Database 10g's SQL Tuning Advisor and SQL Access Advisor provide, you'll be pleasantly surprised with Oracle Database 11g's enhanced SQL tuning capabilities.
Oracle Database 11g now supports retention of historical execution plans for a SQL statement. This means that the cost-based optimizer (CBO) can compare a new execution plan against the original plan and, if the old plan still offers better performance than the new one, it can decide to continue to use the original execution plan.
SQL Performance Analyzer. A robust complement to the Database Replay facility, the SQL Performance Analyzer (SPA) leverages existing Oracle Database 10g SQL tuning components. The SPA provides the ability to capture a specific SQL workload in a SQL Tuning Set, take a performance baseline before a major database or system change, make the desired change to the system, and then replay the SQL workload against the modified database or configuration. The before and after performance of the SQL workload can then be compared with just a few clicks of the mouse. The DBA only needs to isolate any SQL statements that are now performing poorly and tune them via the SQL Tuning Advisor.
Oracle Database 11g also adds a series of improved fault diagnostics to make it extremely easy for even an inexperienced DBA to detect and quickly resolve problems with Oracle Database 11g.

Automatic Diagnostic Repository. The Automatic Diagnostic Repository (ADR) is at the heart of Oracle Database 11g's new fault diagnostic framework. The ADR is a central, file-based repository external to the database itself, and it's composed of the diagnostic data -- alert logs (in XML format), core dumps, background process dumps, and user trace files -- collected from individual database components from the first moment that a critical error is detected.

Support Workbench. Though it's stored outside of the database itself, the ADR can be accessed via either Enterprise Manager or command-line utilities. Once the ADR has detected and reported a critical problem, the DBA can interrogate the ADR, report on the source of the problem, and in some cases even implement repairs through the Support Workbench, a new facility that's part of Enterprise Manager.

Incident Packaging Service. If the problem can't be solved using these tools, it may be time to ask for help from Oracle Support. The new Incident Packaging Service (IPS) facility provides tools for gathering and packaging all necessary logs that Oracle Support typically needs to resolve a Service Request.

Automatic Health Monitoring. When a problem within the database is detected, the new Health Monitor (HM) utility will automatically perform a series of integrity checks to determine if the problem can be traced to corruption within database blocks, redo log blocks, undo segments, or dictionary table blocks. HM can also be fired manually to perform checks against the database's health on a periodic basis.

Hang Manager. Oracle Database 10g introduced the Hang Analysis tool in Enterprise Manager, and Oracle Database 11g now expands this concept with the Hang Manager. Through a series of dynamic views, it allows the DBA to traverse what's called a hang chain to determine exactly which processes and sessions are causing bottlenecks because they are blocking access to needed resources. And since it's activated by default on all single-instance databases, RAC clustered databases, and ASM instances, it's now possible to track down the source of a hang from one end of the system to the other.
Flashback features:

Flashback Transaction. Essentially an extension of the Flashback Transaction Query functionality introduced in Oracle Database 10g, Flashback Transaction allows the DBA to back one or more transactions -- as well as any corresponding dependent transactions -- out of the database by applying the appropriate reciprocal UNDO statements for the affected transaction(s) to the corresponding affected rows in the database.

Total Recall. This new feature offers the ability to retain the reciprocal UNDO information for critical data significantly beyond the point in time that it would be flushed out of the UNDO tablespace. Therefore, it's now possible to hold onto these reciprocal transactions essentially indefinitely. Once this feature is enabled, all retained transaction history can be viewed, and this eliminates the cumbersome task of creating corresponding history tracking tables for critical transactional tables. And as you might expect, Oracle Database 11g also provides methods to automatically purge data retained in the data archive once a specified retention period has been exceeded.
No. 6: SecureFiles

Oracle Database 11g provides a series of brand-new methods for storing large binary objects (also known as LOBs) inside the database. These new features, collectively called SecureFiles, will allow Oracle Database 11g to store images, extremely large text objects, and the more advanced datatypes introduced in prior Oracle releases (e.g., XMLType, Spatial, and medical imaging objects that utilize the DICOM [Digital Imaging and Communications in Medicine] format). SecureFiles promises to offer performance that compares favorably with file system storage of these object types, as well as the ability to transparently compress and "deduplicate" these data. (Deduplication is yet another brand-new feature in Oracle Database 11g. It can detect identical LOB data in the same LOB column that's referenced in two or more rows, and then store just one copy of that data, thus reducing the amount of space required to store these LOBs.) Perhaps most importantly, Oracle Database 11g will also ensure that these data can be encrypted using Transparent Data Encryption (TDE) methods - especially important (and welcome) in the security-conscious environments we inhabit today as database administrators.
Oracle Database 11g also expands the scope of TDE within the database. For example, it's now possible to encrypt data at the tablespace level as well as the table and index level. Also, logical standby databases can utilize TDE to protect data that's been transferred from the corresponding primary database site. Moreover, secured storage of the TDE master encryption key is ensured by allowing it to be stored externally from the database server in a separate Hardware Security Module.

Secure By Default. Oracle Database 11g also implements a new set of out-of-the-box security enhancements that are collectively called Secure By Default. These security settings can be enabled during database creation via the Database Configuration Assistant (DBCA), or they can be enabled later after the database has been created. Here's a sample of these new security features:
Every user account password is now checked automatically to ensure sufficient password complexity is being used.

To further strengthen password security, the DEFAULT user profile now sets standard values for password grace time, lifetime, and lock time, as well as for the maximum number of failed login attempts.

Auditing will be turned on by default for over 20 of the most sensitive DBA activities (e.g., CREATE ANY PROCEDURE, GRANT ANY PRIVILEGE, DROP USER, and so forth). Also, the AUDIT_TRAIL parameter is set to DB by default when the database is created, so a database "bounce" will no longer be required to activate auditing.

Fine-Grained Access Control (FGAC) is now available for network callouts when using raw TCP (e.g., via the UTL_TCP package); FGAC will be able to construct Access Control Lists (ACLs) to provide fine-grained access to external network services for specific Oracle Database 11g database user accounts.

Enterprise Manager now provides interfaces for direct management of the External Security Module (ESM), Fine-Grained Auditing (FGA) policies, and Row-Level Security (RLS) policies.

Finally, an RMAN recovery catalog can now be secured via Virtual Private Catalog to prevent unauthorized users from viewing backups that are registered within the catalog.

Interval Partitioning. One of the more intriguing new partitioning options, interval partitioning is a special version of range partitioning that requires the partition key be limited to a single column with a datatype of either NUMBER or DATE. Range partitions of a fixed duration can be specified just like in a regular range partition table based on this partition key. However, the table can also be partitioned dynamically based on which date values fall into a calculated interval (e.g., month, week, quarter, or even year). This enables Oracle Database 11g to create future new partitions automatically based on the interval specified, without any DBA intervention.
Partitioning On Virtual Columns. The concept of a virtual column - a column whose value is simply the result of an expression, but which is not stored physically in the database - is a powerful new construct in Oracle Database 11g. It's now possible to partition a table based on a virtual column value, and this leads to enormous flexibility when creating a partitioned table. For example, it's no longer necessary to store the date value that represents the starting week date for a table that is range-partitioned on week number; the value of week number can be simply calculated as a virtual column instead.

Partitioning By Reference. Another welcome partitioning enhancement is the ability to partition a table that contains only detail transactions based on those detail transactions' relationships to entries in another partitioned table that contains only master transactions. The relationship between a set of invoice line items (detail entries) that corresponds directly to a single invoice (the master entry) is a typical business example. Oracle Database 11g will automatically place the detail table's data into appropriate subpartitions based on the foreign key constraint that establishes and enforces the relationship between master and detail rows in the two tables. This eliminates the need to explicitly establish different partitions for both tables because the partitioning in the master table drives the partitioning of the detail table.

Transportable Partitions. Finally, Oracle Database 11g makes it possible to transport a partitioned table's individual partitions between a source and a target database. This means it's now possible to create a tablespace version of one or more selected partitions of a partitioned table, thus archiving that partitioned portion of the table to another database server.
Fast Mirror Resynchronization. An ASM disk group that's mirrored using ASM two-way or three-way mirroring could lose an ASM disk due to a transient failure (e.g., failure of a Host Bus Adapter, SCSI cable, or disk I/O controller). Should this occur, ASM will now utilize the Fast Mirror Resynchronization feature to quickly resynchronize only the extents that were affected by the temporary outage when the disk is repaired, thus reducing the time it takes to restore the redundancy of the mirrored ASM disk group.

Preferred Mirror Read. An ASM disk group that's mirrored using ASM two-way or three-way mirroring requires the configuration of failure groups. (A failure group defines the set of disks across which ASM will mirror allocation units; this ensures that the loss of any disk(s) in the failure group doesn't cause data loss.) In Oracle Database 11g, it's now possible to inform ASM that it's acceptable to read from the nearest secondary extent (i.e., the extent that's really supporting the mirroring of the ASM allocation unit) if that extent is actually closer to the node that's accessing the extent. This feature is most useful in a Real Application Clusters (RAC) database environment, especially when the primary mirrored extent is not local to the node that's attempting to access the extent.

Resizable Allocation Unit. Oracle Database 11g now permits an ASM allocation unit to be sized at 2, 4, 8, 16, 32, or 64 MB when an ASM disk group is first created. This means that larger sequential I/O is now possible for very large tablespaces and/or tablespaces with larger block sizes. The extent size is now automatically increased as necessary, and this allows an ASM file to grow up to the maximum of 128 TB supported by Bigfile Tablespaces (BFTs).

Improved ASMCMD Command Set. ASMCMD now includes several new commands that increase visibility of ASM disk group information, support faster restoration of damaged blocks, and retain and restore complex metadata about disk groups: A system or storage administrator can execute the lsdsk command to view a list of all ASM disks, even if an ASM instance is not currently running. The remap command utilizes the existing backup of a damaged block on an ASM-mirrored disk group to
recover the damaged block to an alternate location elsewhere in the ASM disk group. Commands md_backup and md_restore allow a DBA to back up and restore, respectively, the metadata reflecting the exact structure of an ASM disk group. These new commands are an immense boon because the recreation of extremely large disk groups consisting of several dozen mount points can be tedious, time-consuming, and prone to error.
The second standby database type is a logical standby, which contains the same logical information as the primary database, but whose data is organized and/or structured differently than on the primary database, and which is updated via SQL Apply. Oracle Database 11g adds a third standby database type, the snapshot standby database, that's created by converting an existing physical standby database to this format. A snapshot standby database still accepts redo information from its primary, but unlike the first two standby types, it does not apply the redo to the database immediately; instead, the redo is only applied when the snapshot standby database is reconverted back into a physical standby. This means that the DBA could convert an existing physical standby database to a snapshot standby for testing purposes, allow developers or QA personnel to make changes to the snapshot standby, and then roll back the data created during testing and immediately reapply the valid production redo data, thus reverting the snapshot standby to a physical standby.

Rolling Database Upgrades Support Physical Standby Databases. Oracle Database 10g introduced the ability to utilize SQL Apply to perform rolling upgrades against a primary database and its logical standby database. During a rolling upgrade, the DBA first upgrades the logical standby database to the latest database version, and then performs a switchover to make the standby database the primary and vice versa. The original primary database is then upgraded to the new database version, and a switchover reverses the roles once again. This ensures that the only interruption to database access is the time it takes to perform the switchovers. The good news is that Oracle Database 11g now allows a rolling database upgrade to be performed on a physical standby database by allowing the physical standby to be converted into a logical standby database before the upgrade begins. After the rolling upgrade is completed, the upgraded logical standby is simply reconverted back into a physical standby.

Real-Time Query Capability. Active Data Guard will now allow the execution of real-time queries against a physical standby database, even while the physical standby continues to receive and apply redo transactions via Redo Apply. (In prior releases, the physical standby could only be accessed for reporting if it was opened in read-only mode while the application of redo was suspended.) This means that a physical standby database can be utilized more flexibly for read-only reporting purposes; also, the considerable resources needed to create and maintain the standby environment may now be put to much more effective use.

Expanded Datatype and Security Support. Oracle Database 11g now supports XMLType data stored in CLOB datatypes on logical standby databases. In addition, Transparent Data Encryption (TDE) can now
support encrypted table data as well as encrypted tablespaces, and Virtual Private Database (VPD) is supported for logical standby databases. Heterogeneous Data Guard. Finally, it's now possible to set up the primary database site using one operating system (e.g., Oracle Enterprise Linux 4.4) while using another operating system (e.g., Windows 2003 Server) for the standby database site.
Conclusion
Oracle Database 11g continues to improve upon the massive paradigm shift in Oracle Database 10g toward self-managed, self-tuning, and self-healing databases. These automatic database management features will be especially valuable to IT organizations that continue to struggle with ever-larger databases and ever-increasing computing workloads while attempting to meet the demand for lower costs and value-added service.
A fairly common event in a database's lifecycle is that of the migration from version "older" to version "newer." Migrating from one version to another may be as simple as exporting the old and importing into the new, but chances are there is a lot more involved than first meets the eye. It is not uncommon to also incorporate other significant changes such as an operating system change, a schema modification, and changes to related applications. Each change has its own inherent risk, but lumping them together in one operation flies in the face of common sense, even more so without having tested the migration from start to end. Amazingly, this situation occurs all too often.

From a software engineering standpoint, is it safe or a best practice to heap so many significant changes together in one step? Further, wouldn't it seem obvious that you would want to not only practice the migration, but test the changes before actually applying them to your live/production environment?

Here is something else to consider: break a dependency chain before it breaks you and the migration process. Given the scenario of migrating from Oracle 10g to 11g, changing the underlying operating system to Linux from Solaris, modifying major tables within a schema, and running newer/modified versions of related applications, where are the places you can break the dependency chain? Put another way, what are the safer/well-known/"charted by many others before you" steps, and which are the uncharted/"applies only to you" steps?
namely the hours spent on exporting and importing. If you can separate the overall migration into at least two distinct stages, you will have broken the dependency chain into smaller chains. The guiding principle/lesson to be learned here is to move from point A to D via safe, incremental steps. How your database operates with respect to schema and application interaction is up to you to determine. Until you have thoroughly test-driven schema and application changes, this part of the overall migration process stays in the realm of the unknown. Going live and finding out - for the first time - that the new application/database code results in cascading triggers (thereby bringing an instance to its knees, so to speak) is obviously a poor time to become aware of this situation. Developers and testers using 100 records as a test size when the production environment contains tens of millions of records are hardly conducting a thorough test.
You can script the process to include it in a set of install scripts you deliver with a product. You can put your create database script in CVS for version control, so as you make changes, you keep a history of them.
If you are planning any cleaning or rearranging of tables and indexes, now is the time to edit the indexfile and update tablespace mappings and storage parameters. If the logical layout is to remain the same, then the third reason comes into play. Separate the tables from the indexes; that is, separate the SQL create statements (one script for tables, the other for indexes).

Do as much as you can on the target database before it is time to do the actual migration. Part of this includes creating the same/new tablespaces and running the create tables script. Run the create tables script ahead of time for two reasons: one is to validate the logical layout, the other is to help speed up the import (concepts question: how does import work if an object exists or does not exist?).

The fourth reason comes back to the indexes listed in the indexfile. Performance-wise, when doing bulk inserts, is it better to have indexes or not? What happens when a new record is inserted? One or more indexes have to be updated (assuming there is at least a primary key for that record). Oracle's recommendation is that (for large databases) you should hold off on creating indexes until after all the data has been inserted. Again, this comes back to the importance of the indexfile because it is the link between export using "indexes=n" (the default is y) and your being able to re-create the indexes after the data has been loaded.
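As a sketch of how "indexes=n" and the indexfile fit together with the classic exp/imp utilities, the functions below build the two command lines rather than run them; the schema name (scott), password variable, and file names are hypothetical, so adjust them before using any of this against a real database:

```shell
#!/bin/sh
# Sketch only: build the export and indexfile command strings discussed above.
# Schema, password variable, and file names are made-up examples.

# Export the schema without index definitions; only table DDL and rows go
# into the dump file.
build_export_cmd() {
  owner="$1"; dumpfile="$2"
  echo "exp system/\$SYSTEM_PWD owner=${owner} file=${dumpfile} indexes=n log=${owner}_exp.log"
}

# Ask imp to write the index-creation DDL to a script (the indexfile
# parameter) instead of importing anything; that script is what you edit
# and run after the data load.
build_indexfile_cmd() {
  owner="$1"; dumpfile="$2"
  echo "imp system/\$SYSTEM_PWD file=${dumpfile} fromuser=${owner} touser=${owner} indexfile=${owner}_indexes.sql"
}

build_export_cmd scott scott.dmp
build_indexfile_cmd scott scott.dmp
```

Keeping the command construction in functions makes it easy to loop over several schemas from one driver script.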
What do a successful migration and Hannibal Smith of "The A-Team" have in common? Answer: "I love it when a plan comes together." To help make the plan come together, fire up Visio or PowerPoint and diagram a workflow process. As a minimum, you can take the low-tech route and come up with a timeline. Even if you start with brainstorming and writing down ideas as they come to mind, you will be much better off having everyone on the same sheet of music. Items to consider include:

- Diagramming the workflow/process and holding coordination meetings
- Assigning responsibilities among team members, establishing roles and responsibilities
- Creating and distributing a contact list, including how to get in touch with other key personnel (managers, system administrators, testing, developers, third party/application providers, customers, account managers, etc.)
- Hours of operation for Starbucks (some of them open an hour later on Saturdays)
- After-hours building access for contractors (meet at a designated place and time?)
- Janitorial services - do they alarm the building/office when they are done? There is nothing like an alarm going off, as you walk down the hall, to add a little excitement to the evening.
- Notification to security/police regarding after-hours presence ("Really, Officer, we work here; we're not just sitting here looking like we work here")
- Establishing a transfer point on the file system and ensuring there is enough disk space for the export
- Acquiring a complete understanding of schema changes (how and when key tables get altered/modified, to include data transformation processes)
- Establishing a work schedule (does every DBA need to be present the entire time, or can schedules be staggered?)
Aside from a shortage of time, there is very little to prevent you (or the person in charge of export) from practicing the export several times over and ensuring there are no glitches in this part of the plan.
both can run faster if optimized a bit. Do not forget that indexes are not being exported. Indexes will be re-built after the data is loaded in the target database. How are you driving the exports: interactive mode or use of shell scripts and parameter files? Shell scripts should have four key features:

- An interview process
- Feedback, or a summary of what was entered
- Existence checks (includes parameter files, ability to write to the dump and log file locations, and database connectivity)
- Bail-out mechanisms ("Do you want to continue?") after key steps or operations
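The four features above can be sketched as a small driver script. This is a minimal outline, not a finished tool: the prompts, the dump directory variable, and the final exp command are all assumptions to be replaced with your own values.

```shell
#!/bin/sh
# Sketch of an export driver with the four features discussed above:
# interview, feedback, existence checks, and bail-out points.
# Paths and prompts are hypothetical.

# Existence check: can we write to the dump/log destination?
check_writable() {
  dir="$1"
  [ -d "$dir" ] && [ -w "$dir" ]
}

# Bail-out mechanism: succeeds only on an explicit "y".
confirm() {
  printf '%s [y/n] ' "$1"
  read answer
  [ "$answer" = "y" ]
}

main() {
  # Interview process: gather the schema to export.
  printf 'Schema to export: '
  read schema

  # Feedback: summarize what was entered before doing anything.
  echo "About to export schema: $schema"

  # Existence check before the expensive step.
  if ! check_writable "${DUMP_DIR:-/tmp}"; then
    echo "Dump directory is not writable; aborting." >&2
    return 1
  fi

  # Bail-out point after a key step.
  confirm "Proceed with export of $schema?" || return 1

  echo "exp ... owner=$schema would run here"
}

# main   # uncomment to run interactively
```

Echo statements at each step double as the progress signals mentioned above when the script's output is captured to a log.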
One script can drive the entire export process, and the bail-out points can be used as signals (accompanied by extensive use of echo statements, which denote where you are in the process). A key metric to be determined while practicing and refining the scripts is the time it takes to perform all exports.

If a schema migration is taking place (as opposed to a full database migration), what are the dependencies among schemas? Look for names/items such as build_manager, process_logger, and stage (more germane to a warehouse). "Build_manager" (as an example of a name) may contain common or public functions, procedures, and packages. Process_logger may be the owner of process logs for all schemas (fairly common if you see "pragma autonomous_transaction" in the text of a source; it is a way of capturing errors during failed transactions). Unless the new schema incorporates these external or associated schemas, some or all of these otherwise "left behind" schemas need to be accounted for in the target database.

While the export is taking place, what is happening with the non-exported schemas? You may need to disable connections, change passwords, disable other processes, and suspend crons while the export is taking place. Web application connections tend to be like crabgrass (i.e., hard to kill), and an effective way of stopping them is to change a password. Finally, what is the disposition of the source database, that is, assuming your plan comes together?
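Since total export time is the key metric to capture while rehearsing, a small timing wrapper helps compare one practice run against the next. A sketch is below; the log file name is an assumption:

```shell
#!/bin/sh
# Time a command and append the result to a practice log, so repeated
# rehearsals of the export can be compared run over run.
# The log path is hypothetical; point it wherever you keep rehearsal notes.
LOG="${TIMING_LOG:-migration_timings.log}"

timed_run() {
  label="$1"; shift
  start=$(date +%s)
  "$@"
  status=$?
  end=$(date +%s)
  echo "$label: $((end - start))s (exit $status)" >> "$LOG"
  return $status
}

# Example (hypothetical command line):
# timed_run "scott export" exp system/... owner=scott file=scott.dmp
```

Wrapping every exp invocation this way also records the exit status, which is useful when staggered DBA schedules mean someone else reads the log.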
For tables undergoing a modification, questions to ask include where, when, and how does that take place? Do the changes occur within the user's schema, or within a temporary or migration schema, followed by "insert into new version of table as select from temp table"? Fully understand how major tables are being changed -- you may take for granted what appear to be ash-and-trash "not null" constraints, but application changes may completely rely upon them. In other words, it may not be enough to take care of PK, FK, and unique constraints when trying to rebuild a table on the fly because there was some hiccup in the process.

What about cron and database jobs? How are you migrating/exporting all of those? Something that frequently goes hand-in-hand with cron jobs is email. Is the new server configured for e-mail notification? Are there any database links to create? Do you need logging turned on while the import is taking place? Is it even necessary to log everything being imported?

What about triggers, especially the "for each row" kind? Millions of rows inserted via import equals millions of times one or more triggers fired on a table with that kind of trigger. If the trigger on a table back on the source database already took care of formatting a name, does it need to be fired again during an import? You can be clever and disable quite a few automatic functions to help speed up the import, but don't be too clever by half; that is, do not forget to re-enable whatever it is you disabled. At 5:30 in the morning, having worked all day Friday (in addition to coming back at 11 to get ready for the midnight starting gun), sleep deprivation can introduce a significant amount of human error. If you have to go off your game plan, have someone double-check your work or steps, especially if the object being manipulated is of key importance to a database or schema.
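One way to make sure nothing disabled before the import stays disabled afterward is to generate the disable and enable scripts as a pair from the same table list. A sketch follows; the table names and output file names are made-up examples:

```shell
#!/bin/sh
# Generate paired disable/enable trigger scripts from a list of tables,
# so everything switched off before the import has a matching re-enable
# statement waiting. Table and file names below are hypothetical.

gen_trigger_scripts() {
  disable_out="$1"; enable_out="$2"; shift 2
  : > "$disable_out"
  : > "$enable_out"
  for t in "$@"; do
    echo "ALTER TABLE $t DISABLE ALL TRIGGERS;" >> "$disable_out"
    echo "ALTER TABLE $t ENABLE ALL TRIGGERS;"  >> "$enable_out"
  done
}

gen_trigger_scripts disable_triggers.sql enable_triggers.sql invoices invoice_items
```

Running the disable script before the import and the enable script after it removes the 5:30-a.m. memory test entirely.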
Import phase
Practice creating schemas and associated physical/logical objects such as tablespaces and datafiles. The end result desired here is no ORA-xxxxx errors whatsoever, and all create scripts should be re-runnable. With respect to import parameter files, ensure fromuser marries up to touser. Using what was gleaned from the indexfile, pre-create tables in the target database.
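A parameter file keeps the fromuser/touser pairing explicit and reviewable before the import runs. The sketch below writes one; the user name, file names, and parameter choices are hypothetical examples, with indexes skipped here because they come later from the indexfile-generated script:

```shell
#!/bin/sh
# Write a reviewable import parameter file. User and file names are
# made up; ignore=y continues past pre-created tables, and indexes=n
# defers index builds to the edited indexfile script.
cat > imp_scott.par <<'EOF'
file=scott.dmp
log=scott_imp.log
fromuser=scott
touser=scott
ignore=y
indexes=n
EOF

# The import itself would then be (hypothetical connect string):
# imp system/... parfile=imp_scott.par
```

Keeping one parfile per schema under version control gives you a diffable record of exactly how each import was driven.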
In Closing
The tips and steps covered in this article are based on real events, places, and persons. I have personally witnessed the customer service rep complaining about how his changes were not showing up, and it was because he had no idea whatsoever about pointing his desktop CRM application to the new database. Was that the responsibility of the DBA or the rep's manager? I have seen key tables have problems with the insertion of transformed data, with workarounds such as "create table as select" from the stage or transformation table implemented; but alas, the stage table did not have all of the not null constraints that the new "real" table did, and there goes the Web application down the drain.

The sad truism about a database migration is that if you do not have the time to test beforehand and wind up failing (the reason why is immaterial), it is amazing how time magically appears to perform testing before the second attempt. The tips mentioned in this article should give you a good perspective regarding some of the external factors that come into play during a migration.
This content was adapted from Internet.com's DatabaseJournal and InternetNews Web sites. Contributors: Clint Boulton, Jim Czuprynski, and Steve Callan.