Keep tables synchronized with the ULT4Db2 data propagation feature. It can directly execute the same INSERT, UPDATE and DELETE statements against different target tables. Alternatively, you can have ULT4Db2 write those statements into external data sets. If your target tables reside in a different DBMS, such as Oracle or Microsoft SQL Server, you can adapt the syntax of the generated statements to suit your needs. ULT4Db2 differs from other propagation tools in that it does not increase the load on the source Db2 subsystem, because it does not use log capture exits. Instead, it reads the archive and active log data sets directly, which results in significant savings in CPU time. Furthermore, you control exactly when ULT4Db2 runs, so you can schedule it for off-peak times to keep your four-hour rolling average low. Even though ULT4Db2 does not run continuously, it can still provide seamless data propagation: at the end of each execution, it records how far the log was read and which transactions were still open. You can run the jobs of a propagation task periodically; depending on your requirements, that may be once per day, or every couple of minutes for near real-time propagation.
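As a sketch of the idea, a change read from the Db2 log on the source system is turned into an equivalent SQL statement for the target. The table, column and value names below are purely illustrative, not taken from the product:

```sql
-- Change reconstructed from the source Db2 log:
UPDATE PROD.CUSTOMER
   SET BALANCE = 150.00
 WHERE CUST_ID = 4711;

-- The same statement, executed directly against the target table
-- (or written to an external data set, with the syntax adapted
-- if the target is a different DBMS such as SQL Server):
UPDATE TARGET.CUSTOMER
   SET BALANCE = 150.00
 WHERE CUST_ID = 4711;
```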
Organizations want to keep track of changes to sensitive information: who changed a table, when the change was made, and what exactly was changed. The Db2 log contains all of this information, but it is spread across different locations.
ULT4Db2 helps you put the pieces together and populate your auditing tables with the information you need. You can analyze all the changes over a given period of time and filter by user name, plan name, column contents, or any other criteria. You can choose which columns from your original tables you want to see in the auditing tables, and you can also include columns with meta information, such as the timestamp of a change or the correlation ID, in the output.
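A possible shape for such an auditing table is sketched below: a few meta-information columns filled from the log record, followed by the columns you selected from the original table. All names and types here are hypothetical examples, not a layout prescribed by ULT4Db2:

```sql
-- Hypothetical auditing table combining log meta information
-- with selected columns from the audited source table.
CREATE TABLE AUDIT.CUSTOMER_CHANGES (
    CHANGE_TS   TIMESTAMP     NOT NULL,  -- when the change was made
    USER_ID     CHAR(8)       NOT NULL,  -- who made the change
    PLAN_NAME   CHAR(8),                 -- plan that made the change
    CORR_ID     CHAR(12),                -- correlation ID
    CHANGE_TYPE CHAR(1),                 -- I = insert, U = update, D = delete
    CUST_ID     INTEGER,                 -- columns chosen from the source table
    BALANCE     DECIMAL(11,2)
);
```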
If data in a table has been changed in error, it can be very difficult to undo this change without unintended side effects. Db2 allows you to recover an object to any point in time, but doing so means you lose all the changes that were made after this point in time, not only the one that you actually want to undo.
What you really need is the ability to undo a single change, or a single transaction that affected one or more tables. ULT4Db2 can create SQL statements that revert a specific change made at a given point in time. These statements are always written to an external data set so that DBAs can review them first. The ability to filter by various criteria helps to control the repair process.
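For illustration, an undo statement for a single erroneous UPDATE might look like the following. The table, column and values are hypothetical; the point is that the before-image of the row, as recorded in the log, supplies the values needed to revert the change:

```sql
-- Erroneous change found in the log:
--   UPDATE PROD.CUSTOMER SET BALANCE = 0 WHERE CUST_ID = 4711;

-- Generated undo statement, restoring the row's before-image,
-- written to an external data set for DBA review:
UPDATE PROD.CUSTOMER
   SET BALANCE = 150.00
 WHERE CUST_ID = 4711;
```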
Accidentally dropping an object in a production environment can have severe consequences and cause prolonged outages. Restoring an object to its state before the drop is usually a tedious and error-prone process. It consumes considerable resources on the mainframe and means a lot of extra work for DBAs. ULT4Db2 can bring back objects that have been accidentally dropped. Based on the information from the Db2 log and existing image copy data sets, ULT4Db2 can re-create objects and fill them with the same data they held right before the DROP command was issued. LOB table spaces can be restored to the point in time of the latest image copy. You can scan an arbitrary log range for drop operations and select the objects that you want to undrop by specifying a name pattern. The entire process is automated and does not require any manual intervention.
ULT4Db2 can undrop databases, table spaces, tables and indexes. All foreign keys, check constraints and table privileges are automatically recreated as well. The ULT4Db2 undrop process does not require a point-in-time recovery of the Db2 catalog, which means all other objects stay fully available during the undrop process.
ULT4Db2 can generate a variety of reports that help you keep an overview of how your tables are used. It can summarize the INSERT, UPDATE and DELETE activity for your tables by different criteria such as unit of recovery, user name, or plan name. You can also produce a detailed report that shows each row as it was before and after each update. Additional reports are also available:
- Quiet reports identify periods of time that are ideal candidates for a RECOVER operation
- Rollback reports identify transactions that perform large rollback operations
- Longrunner reports show transactions with a low commit frequency that can cause lock escalation
Log analysis processes are easy to set up with ULT4Db2. An ISPF interface guides you through all the required steps. You specify the tables you wish to analyze using name patterns, along with a time frame. If needed, you can also control numerous details of the analysis process. For example, it is possible to hide or rename individual columns or to treat them as binary data. You can also include log records for cascaded deletes, trigger actions, or logged LOAD operations.
Once a task is defined, ULT4Db2 does the rest: it generates all the required JCL and utility statements, and at runtime, all necessary information is retrieved automatically. This includes table structures, compression dictionaries, required log data sets, and information about previous table versions. Output for different tables, or different sets of tables, can be written into separate data sets that are dynamically allocated based on a template specification. ULT4Db2 also keeps track of each execution. The ISPF interface shows the execution history of all tasks and allows you to browse the output data sets.
Database administrators and application developers use ULT4Db2 to repair the result of an incorrect program or job execution, or a user error. Auditors and administrators use ULT4Db2 to determine which tables were changed and by whom. Data centers replace expensive data propagation tools with ULT4Db2 because it reduces CPU consumption and is affordable. Database administrators use ULT4Db2 to analyze commit frequencies and idle times.
- Provides insight into how your objects are used
- Allows DBAs to quickly and easily identify, isolate and undo unwanted changes
- Propagates data changes to a target system
- Helps auditors to identify updates to sensitive data, and provides information about those updates, including who made them and when