Table Level Copy

Acceptance, integration, and regression tests require mass data: gigabytes, sometimes even terabytes. Data provisioning processes often run for hours or even days and consume considerable time from database experts. What is needed is a fast procedure that, once defined, can be executed whenever data has to be transported between environments.

At this level, the traditional cloning method can no longer be used. That initially fast and efficient procedure is not suited to copying only parts of a source system, and it offers no support for structural changes to the data.

The data mostly exists in relational structures. This means that the preliminary work of verifying and creating databases, tablespaces, tables, indices, etc. is very involved, and the time and effort should not be underestimated. A tool should be able to carry out these working steps independently. The user should simply define from which source system (production, preproduction) the tool has to copy which data to which target environment (test, development). This can be, for example, a complete schema, all tables matching a certain name pattern, or simply all tables on a list. Furthermore, the types of objects to be copied have to be selected, including indices, views, etc.

For the target system, implementation rules have to be defined: missing objects should be added (create), existing ones refreshed or replaced. Where necessary, objects have to be renamed (creator, schema) and data has to be modified or anonymized. The execution should run unattended (perhaps scheduler-driven), with an automatic restart if necessary (e.g. after an abort due to lack of storage space). Parallel execution should accelerate the copy process.
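To make the selection, replacement, and anonymization rules concrete, here is a minimal sketch of such a table-level copy in Python against SQLite. All table names, column names, and the masking rule are illustrative assumptions, not part of any specific tool; a real implementation would target DB2, Oracle, etc. and handle tablespaces, indices, and views as well.

```python
import re
import sqlite3

def copy_tables(src, dst, pattern, anonymize=None):
    """Copy every table whose name matches `pattern` from src to dst.

    Existing tables in the target are replaced (DROP + CREATE), missing
    ones are created -- the 'refresh or add' rule described above.
    `anonymize` maps (table, column) to a function applied to each value.
    """
    anonymize = anonymize or {}
    tables = [r[0] for r in src.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    copied = []
    for name in tables:
        if not re.fullmatch(pattern, name):
            continue  # selection rule: only tables matching the pattern
        ddl = src.execute("SELECT sql FROM sqlite_master WHERE name=?",
                          (name,)).fetchone()[0]
        dst.execute(f'DROP TABLE IF EXISTS "{name}"')  # replace existing
        dst.execute(ddl)                               # create missing
        cols = [c[1] for c in src.execute(f'PRAGMA table_info("{name}")')]
        placeholders = ",".join("?" * len(cols))
        for row in src.execute(f'SELECT * FROM "{name}"'):
            row = list(row)
            for i, col in enumerate(cols):
                fn = anonymize.get((name, col))
                if fn:
                    row[i] = fn(row[i])  # modify/anonymize on the way in
            dst.execute(f'INSERT INTO "{name}" VALUES ({placeholders})', row)
        copied.append(name)
    dst.commit()
    return copied

# Demo: a tiny "production" source and an empty "test" target.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE cust_master (id INTEGER, email TEXT)")
prod.execute("INSERT INTO cust_master VALUES (1, 'alice@example.com')")
prod.execute("CREATE TABLE audit_log (id INTEGER)")  # not matched below

test = sqlite3.connect(":memory:")
copied = copy_tables(
    prod, test, r"cust_.*",
    anonymize={("cust_master", "email"): lambda v: "xxx@masked"})
print(copied)                                               # ['cust_master']
print(test.execute("SELECT email FROM cust_master").fetchone()[0])  # xxx@masked
```

The same driver could be fed a schema name or an explicit table list instead of a pattern; the point is that the copy, replace, and anonymization rules are declared once and then rerun on demand.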