For a long time, moving DB2 data meant unload and load. Transfer rates reached by procedures based on unload/load are today around 20 MB/sec. At that rate, a one-terabyte copy needs around 13 hours, or even more if there are many constraints to observe, and that does not yet include the time needed to rebuild indexes.
Procedures based on VSAM dataset copies are much faster: more than tenfold speeds are reachable, so a one-terabyte copy runs little longer than an hour. Table spaces and index spaces can both be copied, hence rebuilding indexes is no longer required. The difference is even sharper with respect to resource consumption. A VSAM copy requires less than one-tenth of the CPU time an unload/load procedure burns up. Not only this: the associated DB2 serves the load job, so the DB2 address space also consumes CPU time, whereas a VSAM copy runs stand-alone.
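The arithmetic behind these figures is easy to check. The small sketch below (an illustration, not part of any product) computes the wall-clock time for both approaches, assuming a decimal terabyte and the usual binary megabyte for quoted transfer rates:

```python
TB = 10**12    # one decimal terabyte, in bytes
MB = 1024**2   # one binary megabyte, as commonly used for transfer rates

def copy_hours(size_bytes: int, rate_mb_per_sec: float) -> float:
    """Wall-clock hours to move size_bytes at a sustained transfer rate."""
    return size_bytes / (rate_mb_per_sec * MB) / 3600

print(round(copy_hours(TB, 20), 1))    # unload/load at ~20 MB/s: ~13.2 hours
print(round(copy_hours(TB, 200), 1))   # VSAM copy at ~10x that rate: ~1.3 hours
```

Note that the 13-hour figure is a lower bound for unload/load: constraint checking and index rebuilds come on top, while the VSAM copy already includes the index spaces.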
Both techniques, unload/load and VSAM copy, require that the object structures of source and target match to some degree; at the very least, the target objects must be created in advance. A convenient copy tool should automate this; see BCV5.
Even higher transfer rates can be expected from DB2 subsystem cloning techniques. A DB2 subsystem clone is a duplicate of an entire DB2: everything is duplicated, user table spaces, catalog, log, and BSDS, and the whole is present under a new subsystem ID. The duplicate, or clone, is normally created by copying the volumes (z/OS 'disks') on which the system resides. The standard utility ADRDSSU can copy a volume within minutes; enhanced copy facilities such as FlashCopy are faster still. Clones are ideally suited as pre-production environments, and mixed landscapes comprising DB2, IMS, and VSAM data can be cloned as a whole.

Although the volume copies are made rapidly, some actions are still necessary to make the clone work. The volume copies contain datasets with the same names as the original datasets on the original volumes, so renaming is necessary before the datasets can be cataloged. After this, the clone DB2, in particular its catalog, must be adapted to the new dataset names. Cloning tools like BCV4 automate this entire process: the volumes that need to be copied are identified, renaming is suggested, and the copy jobs are generated along with the jobs to catalog the datasets and to adapt DB2 and, if required, IMS. The process is highly parallel, scheduler-driven, and robust, so entire application landscapes can be cloned in a short time. With enhanced copy facilities that can provide point-in-time copies across the whole set of involved volumes, clones can be taken "in-flight". To ease the initial recovery of such a clone, it is of course advisable to avoid heavy update transactions or jobs running in parallel to the volume split/flash/snap.
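The cloning steps above can be summarized as a simple plan: copy volumes, rename and catalog the datasets, then adapt the clone's DB2 catalog. The sketch below is purely illustrative; the step names and the rename-by-high-level-qualifier rule are assumptions for the example and do not reflect BCV4's actual interface.

```python
def rename(dsn: str, old_hlq: str, new_hlq: str) -> str:
    """Derive a clone dataset name by swapping the high-level qualifier,
    e.g. DB2P.DSNDBC.TS1 -> DB2C.DSNDBC.TS1 (illustrative rule only)."""
    assert dsn.startswith(old_hlq + ".")
    return new_hlq + dsn[len(old_hlq):]

def clone_plan(volumes, datasets, old_hlq, new_hlq):
    """Outline of the steps a cloning tool automates, in order."""
    plan = []
    for vol in volumes:                          # 1. fast volume copies
        plan.append(("copy-volume", vol))        #    (ADRDSSU or FlashCopy)
    for dsn in datasets:                         # 2. rename, then catalog,
        plan.append(("catalog", rename(dsn, old_hlq, new_hlq)))  # the copies
    plan.append(("adapt-db2-catalog", new_hlq))  # 3. point the clone at the new names
    return plan

steps = clone_plan(["VOL001", "VOL002"], ["DB2P.DSNDBC.TS1"], "DB2P", "DB2C")
```

Because the volume copies in step 1 are independent of one another, a real tool can run them in parallel, which is what makes cloning entire application landscapes in a short time feasible.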