Example of a typical scenario prior to the implementation of an automated TDM solution like XDM from UBS Hainer
The following describes the real-life example of a large insurer and UBS Hainer customer that mastered all of these challenges using the test data management software XDM. (The customer is available on request as a reference for further inquiries and information.)
This was the initial situation and task:
The IT backbone of the insurer was a system that was developed in the 1990s. It was essentially a monolithic system with a central in-house database in which all data was stored.
This setup made it quite easy to generate test data: the desired data sets were simply copied from the central pool of production data and distributed to the various test systems. During the copy process, the personal information within the data sets was pseudonymized at the same time. The entire process of copying, distribution, and pseudonymization was handled by a software tool the company had developed itself on the host.
All of this worked quite well until 2015, when the approach reached its limits. The demands on the applications grew, and there was a need for action, not least to keep the IT landscape technologically up to date.
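The old workflow on the monolith can be sketched roughly as follows. This is a minimal illustration, not the insurer's actual tool; the field names, salt, and hash-based masking are assumptions chosen only to show the idea of pseudonymizing personal fields while copying:

```python
import hashlib

def pseudonymize(value: str, salt: str = "tdm-salt") -> str:
    """Replace a personal value with a stable, irreversible pseudonym.

    Using a salted hash keeps the mapping deterministic: the same
    input always yields the same pseudonym across copy runs.
    """
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def copy_with_pseudonymization(records, personal_fields):
    """Copy records from the production pool, masking personal
    fields on the fly, so test systems never see real data."""
    for rec in records:
        out = dict(rec)  # copy, leave production data untouched
        for field in personal_fields:
            if field in out:
                out[field] = pseudonymize(str(out[field]))
        yield out

# Hypothetical production records and field list.
prod = [{"id": 1, "name": "Alice Meyer", "policy": "KFZ-001"}]
test_data = list(copy_with_pseudonymization(prod, ["name"]))
```

Non-personal fields such as the policy number pass through unchanged, while the name is replaced by a pseudonym that stays stable across repeated copies.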
Restructuring of the production data
Therefore, in 2015, the company began to upgrade the solution and restructured the old system. The newly created system is based – at least in part – on a self-contained systems approach. For this purpose, the previously monolithic system was broken down into subsystems of different business application components. These subsystems are self-contained and communicate with each other via REST or Kafka interfaces.
In practical terms, this means that the data records are no longer stored in a single database, as was previously the case, but are distributed across several databases. For example, a customer's master data is stored in one database, the contract data for their vehicle insurance in another, and further contract data in other databases.
In addition, these databases do not form a homogeneous database landscape; instead, several database types are used in parallel. Some of the data required for testing resides in IBM Db2 LUW, some in PostgreSQL, and the existing IBM Db2 for z/OS system will remain in use, probably until 2030, so production data needed for testing will continue to be located there as well.
The previous TDM solution no longer worked
What did these changes mean for the procurement of test data? In the
course of the restructuring, procuring test data became a massive
challenge: the previously proven solution no longer worked.
If a coherent data set is now needed for test purposes, the required
partial data must be located and extracted from the various systems, copied, and
then merged into an overall data set. At the same time, this test data
must be pseudonymized consistently across all sources.
The resulting GDPR-compliant data then has to be copied into the
respective test system and distributed from there to the various
database systems again. Throughout the process, the referential integrity of the
data must be maintained. All of this has to happen automatically and, if
necessary, with hundreds of thousands of data records. This requires a stable,
well-tested system that can be used by several hundred developers
simultaneously.
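The core difficulty, consistent pseudonymization across partial data sets from several databases, can be sketched like this. The extracts, field names, and hash-based masking are illustrative assumptions; the point is that the shared key receives the same pseudonym in every subset, so referential integrity survives the masking:

```python
import hashlib

def pseudonym(value: str) -> str:
    """Deterministic, irreversible replacement for an identifier."""
    return "P-" + hashlib.sha256(value.encode()).hexdigest()[:10]

# Hypothetical extracts from three separate databases,
# all linked by the same customer_id.
master  = [{"customer_id": "C42", "name": "Erika Muster"}]
vehicle = [{"customer_id": "C42", "contract": "KFZ-7"}]
other   = [{"customer_id": "C42", "contract": "HAUS-3"}]

def merge_and_mask(*sources):
    """Merge partial data sets into one coherent set and
    pseudonymize the shared key consistently, so records from
    different databases still reference the same customer."""
    mapping = {}   # original key -> pseudonym, shared by all sources
    merged = []
    for src in sources:
        for rec in src:
            cid = rec["customer_id"]
            mapping.setdefault(cid, pseudonym(cid))
            out = dict(rec, customer_id=mapping[cid])
            if "name" in out:  # mask personal fields as well
                out["name"] = pseudonym(out["name"])
            merged.append(out)
    return merged

result = merge_and_mask(master, vehicle, other)
keys = {r["customer_id"] for r in result}
```

Because the key mapping is built once and reused for every source, all three records end up sharing a single pseudonymized key, and the relationships between master data and contracts remain intact in the test systems.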
A new solution was therefore needed to replace the existing system, one
that could not only cope with the distributed world but also seamlessly access
the z/OS world, and that could be introduced during ongoing operations.
The insurer in our example concluded
that XDM from UBS Hainer could meet these challenges. A proof of concept (PoC) was set up, and
the test data automation solution was subsequently implemented successfully.