Gutenberg Rechenzentrum speeds up data recovery with HPE StoreOnce and Data Protection
The challenge was to ensure reliable, automatic backups with the ability to quickly recover data that had been accidentally deleted.
Frequent, reliable and automatic backups
Gutenberg Rechenzentrum (GRZ) has been a leading IT service provider for the media sector since 1970. With offices in Hanover and Leipzig, the company operates throughout Germany.
The portfolio of services offered to publishing houses and newspapers ranges from the design and maintenance of innovative IT systems, through the provision of suitable hardware platforms, to software advice for SAP. This means close cooperation with newspaper publishers on practical IT solutions that play a major role in digitizing the publishing processes. The company's core focus is its software solutions VI&VA and eVI&VA, which are already used by more than 40 publishers.
Regardless of the underlying storage solutions, GRZ has relied on OpenText Data Protection for its backups for several years.
In the past, this ensured that all data, whether from the file systems, Oracle or Microsoft SQL databases, was quickly available to the individual departments even in the event of a system failure.
Until a few months ago, the backup data was first written to a NetApp Virtual Tape Library. That system had aged and could no longer handle the increased data backup volume.
GRZ, therefore, sought a hard-drive-based backup solution with data deduplication and replication. When using deduplication, information units are compared to detect duplicates and each unique data segment is stored once. To this end, a deduplication module uses mathematical hash functions to assign each data unit a unique ID. If two data units are identical, the system replaces the duplicate by linking to the original unit.
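To make the idea concrete, here is a minimal sketch of hash-based deduplication in Python. The fixed segment size, the SHA-256 hash function and the DedupStore class are illustrative assumptions for this article, not details of HPE StoreOnce or Data Protection.

```python
import hashlib

SEGMENT_SIZE = 4096  # assumed size of one data unit, in bytes


class DedupStore:
    def __init__(self):
        self.segments = {}  # hash ID -> unique segment bytes
        self.backups = {}   # backup name -> ordered list of hash IDs

    def backup(self, name: str, data: bytes) -> None:
        """Split the data into segments and store each unique segment only once."""
        ids = []
        for offset in range(0, len(data), SEGMENT_SIZE):
            segment = data[offset:offset + SEGMENT_SIZE]
            segment_id = hashlib.sha256(segment).hexdigest()
            # Identical segments share one stored copy; duplicates become references.
            self.segments.setdefault(segment_id, segment)
            ids.append(segment_id)
        self.backups[name] = ids

    def restore(self, name: str) -> bytes:
        """Rebuild the original data by following the stored references."""
        return b"".join(self.segments[i] for i in self.backups[name])
```

Backing up two largely identical data sets into such a store adds only the segments that actually differ, which is where the reduction in stored volume comes from.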
Neffgen and his colleagues were promised that deduplication would reduce the volume of data to be stored and thus enable quicker data replication. The reasoning was that a deduplicated backup stream carries only the unique segments plus references to the duplicated ones, so the necessary network bandwidth is reduced.
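The bandwidth argument can be sketched in the same way: only segments that the replication target does not yet hold need to travel over the network. The replicate function below builds on the hypothetical DedupStore from the previous example and is purely illustrative.

```python
def replicate(source: DedupStore, target: DedupStore, name: str) -> int:
    """Copy one backup from source to target, returning the bytes actually sent."""
    sent = 0
    for segment_id in source.backups[name]:
        if segment_id not in target.segments:
            # Only segments missing at the target consume network bandwidth;
            # duplicates already present there are skipped.
            target.segments[segment_id] = source.segments[segment_id]
            sent += len(source.segments[segment_id])
    target.backups[name] = list(source.backups[name])
    return sent
```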