Smooth transition from Mainframe to Distributed

Many companies – including large insurance companies and banks – have been using mainframe solutions for decades. But how can the existing host applications be replaced as smoothly as possible by distributed applications? With the UBS-Hainer TDM Suite, the process is straightforward.

Step by step from DB2 to Oracle

To ensure a smooth transition, the tools that previously managed DB2 databases exclusively must now be enabled to work with Oracle database systems as well. In other words, the new tool – used for test data mining – must run on the existing host while at the same time being able to perform the same job on Oracle database systems.

The UBS-Hainer TDM Suite is ideally suited for this task, since it can handle both mainframe and distributed systems. Once this new constellation has been set up, Oracle is defined as the standard database. From then on, everything that enters the production stream, i.e. every newly set-up application, is configured for an Oracle database. With Oracle as the new standard, the entire system is converted successively and automatically over time.
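To make the idea of a new default backend concrete, the following Python sketch shows how a registry might bind every newly registered application to Oracle while legacy applications stay on DB2. It is purely illustrative: the names DEFAULT_BACKEND and register_application are invented here and do not reflect the actual TDM Suite configuration.

```python
# Illustrative sketch only -- not the actual UBS-Hainer TDM Suite interface.
# It captures the idea of "Oracle as the new standard": every newly
# registered application is bound to Oracle, while legacy ones keep DB2.

from dataclasses import dataclass, field

DEFAULT_BACKEND = "oracle"  # the newly declared standard database


@dataclass
class ApplicationRegistry:
    # application name -> backend ("db2" or "oracle")
    bindings: dict = field(default_factory=dict)

    def register_application(self, name: str, backend: str | None = None) -> str:
        """New applications default to Oracle; legacy ones can stay on DB2."""
        chosen = backend or DEFAULT_BACKEND
        self.bindings[name] = chosen
        return chosen


registry = ApplicationRegistry()
registry.register_application("claim-survey-storage")         # bound to "oracle"
registry.register_application("legacy-policy-master", "db2")  # stays on "db2"
print(registry.bindings)
```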

Smoothly into the new system

In general, the UBS-Hainer TDM Suite does not copy data back and forth between DB2 and Oracle, for example to populate a new test data table (although this feature is fully available if needed). The process is more like a relay race, in which the first runner runs alongside the next one for a while until the latter takes over. In practical terms, this means that a new Oracle database appears in the development systems, e.g. as a new application for “claim survey storage”. This new application is then connected to the TDM Suite, but initially contains no data.

The application only goes live once new data is generated in ongoing production; depending on the area, this can take one or two months. From that point on, data is delivered via the new database, while the old information remains in the old DB2 tables and gradually becomes irrelevant, as sketched below.
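The relay-race coexistence can be pictured as a simple read/write split: new records are written only to the Oracle-backed application, while reads fall back to the old DB2 tables until they no longer matter. The following Python sketch uses in-memory dictionaries as stand-ins for the two databases; none of the names are taken from the TDM Suite.

```python
# Minimal sketch of the "relay race" coexistence pattern, assuming
# in-memory dictionaries as stand-ins for the DB2 and Oracle databases.

legacy_db2 = {"claim-1001": {"status": "closed"}}  # historical data stays here
new_oracle: dict = {}                              # new application, initially empty


def save_claim(claim_id: str, record: dict) -> None:
    """New data is delivered only to the new Oracle-backed application."""
    new_oracle[claim_id] = record


def load_claim(claim_id: str) -> dict | None:
    """Prefer the new database; fall back to the old DB2 tables,
    which gradually become irrelevant as production keeps writing."""
    return new_oracle.get(claim_id) or legacy_db2.get(claim_id)


save_claim("claim-2001", {"status": "open"})
print(load_claim("claim-2001"))  # served from the Oracle stand-in
print(load_claim("claim-1001"))  # still served from the old DB2 stand-in
```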

New flexibility and efficiency

Following this procedure, a UBS-Hainer customer with over 7 million policyholders was able to convert its system step by step. The insurance company's explicit goal was to shut down the mainframe completely after a transition period of several years. The project was given high priority: one hundred new employees were hired specifically to speed it up.

To generate test case data, the records of the 7 million insured persons were originally accessed from a variety of applications. This was standardized in the course of the project. Today, an easy-to-use data shop provides a central entry point for all the data needed; underneath it sit the corresponding applications, which can be modeled by a handful of experts.
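The role of the data shop as a single entry point can be illustrated with a small facade over the underlying applications. The application names, selection criteria, and the order_test_data function below are invented for illustration and are not part of the actual data shop.

```python
# Illustrative facade: one central entry point, several applications underneath.
# Application names and selection criteria are hypothetical.

from typing import Callable

# Each underlying application exposes its own extraction routine,
# modeled and maintained by a handful of experts.
APPLICATIONS: dict[str, Callable[[dict], list[dict]]] = {
    "policies": lambda criteria: [{"policy_id": 1, **criteria}],
    "claims":   lambda criteria: [{"claim_id": 7, **criteria}],
}


def order_test_data(application: str, criteria: dict) -> list[dict]:
    """Central entry point: testers order data here instead of
    approaching each application individually."""
    return APPLICATIONS[application](criteria)


print(order_test_data("claims", {"status": "open"}))
```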

Agile testing with high-quality test case data

In the case described, around 100 people work with the data shop. About half of these are developers and the other half are experts from the insurance business who are assigned to the teams. The company no longer maintains a central test department; instead, anyone can copy production-quality test cases for their tests at any time.

Dedicated test teams have been eliminated; the testers are part of the respective application teams, which are responsible for the quality of the software they deliver. All cross-cutting operations are covered by automated tests at the various levels. With this setup, an exceptionally high level of automation was achieved and agile, iterative testing became possible.
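As a rough illustration of how a team-level automated test can provision its own production-quality test case instead of waiting for a central test department, the sketch below uses an invented copy_test_case helper as a stand-in for whatever provisioning interface the data shop actually exposes.

```python
# Hypothetical example of a team-level automated test that copies its own
# test case data on demand. copy_test_case and settle_claim are stand-ins,
# not part of the TDM Suite or the customer's actual code base.

import unittest


def copy_test_case(application: str, criteria: dict) -> dict:
    """Stand-in for on-demand provisioning of a production-quality test case."""
    return {"application": application, **criteria, "claim_id": 4711}


def settle_claim(claim: dict) -> dict:
    """Toy business function under test."""
    return {**claim, "status": "settled"}


class ClaimSettlementTest(unittest.TestCase):
    def test_settlement_of_open_claim(self):
        # Any team member can copy a suitable test case at any time.
        claim = copy_test_case("claims", {"status": "open"})
        self.assertEqual(settle_claim(claim)["status"], "settled")


if __name__ == "__main__":
    unittest.main()
```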
