BCV5 is a faster, more efficient, and more manageable way to copy Db2 data. It not only copies the physical Db2 data quickly, it also takes care of all the structures. This includes all the basic objects – your databases, tablespaces, tables, and indexes – as well as DDL features such as views, triggers, aliases, synonyms, constraints, and many more. If your target objects already exist, BCV5 checks them for compatibility. It automatically generates and executes jobs that:
- Extract object definitions from the Db2 catalog of the source system.
- Transfer the definitions to the target system.
- Rename the objects as specified and apply them in the target Db2 system (CREATE, or DROP and CREATE).
- Compare the source definitions with existing target objects for compatibility.
- Copy page sets from source to target Db2.
- Start the target objects for end user access.
Once a copy task is defined, it can be executed at any time, either manually or under the control of a job scheduler for periodic execution.
Automation Reduces Manual Effort
BCV5 is completely automated. The integrated ISPF interface allows you to define copy processes easily by specifying name patterns for the objects to be copied, along with the appropriate processing options. Its powerful rule-based renaming feature makes adhering to naming conventions in the target Db2 system simple and error-free. A BCV5 copy process can be executed either manually or under the control of a job scheduler. Once the copy process is started, there is nothing else to do.
BCV5 automatically generates the DDL for the selected objects using the specified target names. It checks existing target objects for compatibility. If any target objects are missing, it can create them for you. For each object in the process, BCV5 determines the fastest way to make a copy – usually a direct VSAM level copy.
If existing target objects have structural differences, BCV5 can either drop and recreate them, or it can trigger UNLOAD/LOAD as a fallback copy tool, allowing it to copy the data despite the differences. One way or another, BCV5 makes it all work, and minimizes unwanted surprises.
Stay compliant and protect sensitive data. The Masking Tool masks personally identifiable information (PII) during the copy process. It can also mask data that is currently in a table without making a copy. The Masking Tool comes with ready-to-use predefined functions that allow you to mask names, addresses, SSNs, and many other types of data. You can also customize or add your own functions to meet your specific data masking needs.
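To illustrate what in-place masking means, here is a plain-SQL sketch; the table and column names are hypothetical, and the Masking Tool's predefined functions perform this kind of transformation without hand-written SQL:

```sql
-- Hypothetical table and columns, for illustration only.
-- The Masking Tool's predefined functions replace SQL like this.
UPDATE TESTDB.CUSTOMER
   SET SSN       = 'XXX-XX-' CONCAT SUBSTR(SSN, 8, 4),  -- keep last 4 digits
       LAST_NAME = 'CUSTOMER' CONCAT DIGITS(CUST_ID)    -- surrogate name
 WHERE SSN IS NOT NULL;
```

The point of masking in place is that the table itself is sanitized, so any subsequent copy – by BCV5 or anything else – contains no PII.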
Whenever data must come from tables that require 24×7 enterprise availability, IT faces a problem. Users need to query and update the tables without interruption – stopping the source objects is not an option.
At the same time, testers require a consistent copy of the data. BCV5's In-Flight component solves this problem. It uses information from the Db2 log to make consistent copies of tablespaces (regular, LOB, and XML) and indexes – all while keeping the source available for updates.
Need to provide Db2 data from last week, last month, or last year? Want an automated and efficient method to recreate DDL and data in record time?
The Icebox component takes snapshots on demand and allows you to restore the objects into any environment whenever you need them. You can keep the original names or rename the objects.
BCV5's Icebox component compares structural information from the backup with the current structures, and it can either restore old data into the current tables or drop and recreate the tables so that they look exactly as they did when the backup was created.
Some production systems are physically isolated to such an extent that it is extremely difficult to migrate data efficiently to test systems. They may be located on a separate LPAR or even on a separate CEC, and sometimes no shared DASD is available.
The RC component of BCV5 makes data transfer between isolated systems simple. RC uses TCP/IP to transfer structures and data from one system to another. The entire copy process is automated and just as easy to set up and execute as a local copy.
The Batch Interface (BI) provides a powerful definition language for creating, modifying, and deleting BCV5 copy tasks entirely through batch jobs. It allows you to easily integrate BCV5 with other processes.
You can supply the processing options and rule sets of a copy task in a job rather than interactively. When you have multiple BCV5 installations on different systems, the batch interface is also a convenient way to transfer complete task definitions between the installations.
Usage Tracker can store persistent information about task executions, which helps you to track and report on your organization’s Db2 data movement, such as which Db2 objects were copied, by whom, how long the process took, and other details. This can provide valuable insights for DBAs, storage administrators, and stewards for data governance.
The advantage of integrated parallel processing
- Automates Db2 copying, migrating, and refreshing
- Saves 90% of CPU resources and run time
- Dramatically reduces labor costs, freeing up staff
- Integrates seamlessly into IT environments
- Eliminates the need to run RUNSTATS and REBUILD INDEX
- Copies directly, with no need for temporary DASD space
The mechanism that BCV5 uses to move data is based on copying the VSAM clusters that Db2 uses to store tablespace and indexspace data. There is a Db2 utility that works in a similar way: DSN1COPY.
The speed of DSN1COPY and BCV5's built-in copy utility is comparable if you copy a single cluster. When you want to copy an entire database, or even multiple databases with all associated tablespaces and indexspaces, BCV5's copy utility has the advantage of integrated parallel processing with a user-definable number of threads. To emulate this parallelism with DSN1COPY, you need to create an elaborate job chain and execute it under the control of a job scheduler. As the number of data sets grows, you must also take other limitations into consideration when using DSN1COPY, such as the maximum number of steps in a job.
The main problem with DSN1COPY, however, is how little automation is built into the utility. Before it can be used, you must manually compare the source and target object structures to ensure that all attributes of all involved objects match. Some of these attributes must be passed to the program as invocation parameters to achieve usable results. The values you need to look up are scattered across many catalog tables and must be retrieved with multiple queries. In addition, you sometimes have to define some of the clusters in the target environment yourself, because DSN1COPY requires that all target clusters already exist.
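As an example of the catalog legwork involved: DSN1COPY's OBIDXLAT option needs the internal identifiers (DBID, PSID, OBID) of every object on both sides. A source-side lookup could be sketched along these lines – the database and tablespace names are the sample names used in this text, and the equivalent query must be run against the target catalog to obtain the matching target IDs:

```sql
-- Source-side internal IDs for DSN1COPY's OBIDXLAT translation.
-- Run the same query against the target subsystem's catalog
-- to collect the corresponding target-side IDs.
SELECT TS.DBID, TS.PSID, TB.OBID, TB.CREATOR, TB.NAME
  FROM SYSIBM.SYSTABLESPACE TS,
       SYSIBM.SYSTABLES     TB
 WHERE TS.DBNAME = 'DB000001'      -- database name
   AND TS.NAME   = 'TS000001'      -- tablespace name
   AND TB.DBNAME = TS.DBNAME
   AND TB.TSNAME = TS.NAME
   AND TB.TYPE   = 'T'             -- base tables only
```

Indexes need a similar lookup of their own identifiers, which is exactly the kind of repetitive, error-prone work BCV5 automates.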
First of all, you need to determine which data sets to copy. Let's say you want to copy a single tablespace with all its tables and indexes. If the tablespace is partitioned, it consists of more than one data set. Each partition may have been subject to a fast switch operation, which changes the data set name of the corresponding VSAM cluster. The indexspaces, too, can have a mangled name derived from the actual index name – and indexes, of course, may be partitioned as well. To illustrate the complexity of this seemingly simple task, here is a query that returns the names of all data sets belonging to the tablespace DB000001.TS000001 and its indexes. Note that you have to insert the database name and tablespace name in both parts of the UNION when you want to use this query.
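A query along these lines might look as follows. This is a sketch that assumes the standard Db2 data set naming scheme vcat.DSNDBC.dbname.psname.y0001.Annn, where the IPREFIX catalog column supplies the I or J qualifier after a fast switch; partitions above 999, which use a B/C/D letter instead of A, are omitted for brevity:

```sql
-- Tablespace partitions
SELECT RTRIM(VCATNAME) CONCAT '.DSNDBC.' CONCAT RTRIM(DBNAME)
       CONCAT '.' CONCAT RTRIM(TSNAME)
       CONCAT '.' CONCAT IPREFIX CONCAT '0001.A'     -- I or J (fast switch)
       CONCAT SUBSTR(DIGITS(CASE WHEN PARTITION = 0  -- non-partitioned = A001
                                 THEN 1 ELSE PARTITION END), 3, 3)
  FROM SYSIBM.SYSTABLEPART
 WHERE DBNAME = 'DB000001'
   AND TSNAME = 'TS000001'
UNION
-- Indexspace partitions of all indexes on tables in that tablespace
SELECT RTRIM(IP.VCATNAME) CONCAT '.DSNDBC.' CONCAT RTRIM(IX.DBNAME)
       CONCAT '.' CONCAT RTRIM(IX.INDEXSPACE)        -- the "mangled" name
       CONCAT '.' CONCAT IP.IPREFIX CONCAT '0001.A'
       CONCAT SUBSTR(DIGITS(CASE WHEN IP.PARTITION = 0
                                 THEN 1 ELSE IP.PARTITION END), 3, 3)
  FROM SYSIBM.SYSTABLES    TB,
       SYSIBM.SYSINDEXES   IX,
       SYSIBM.SYSINDEXPART IP
 WHERE TB.DBNAME    = 'DB000001'
   AND TB.TSNAME    = 'TS000001'
   AND IX.TBCREATOR = TB.CREATOR
   AND IX.TBNAME    = TB.NAME
   AND IP.IXCREATOR = IX.CREATOR
   AND IP.IXNAME    = IX.NAME
```

Even for a single tablespace, assembling the data set list takes joins across four catalog tables – with BCV5, none of this is necessary.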