Phase 2 – information collection and standardization
System independence for maximum freedom and performance
To achieve dramatic data quality improvements and create a solid base for implementing a digital strategy, it makes eminent sense to develop a new, system-independent database.
Although the relevant information may already exist in the company, it is generally kept in the specialized systems of individual departments, such as accounting, service planning, order processing or HR.
Typically, there is no centralized and, above all, vendor-independent data source in which information can be easily consumed, classified, and consolidated. Instead, data have to be extracted from the specialized systems, which is generally time-consuming and complex.
Because the information is maintained in different systems, it is frequently handled differently, which results in divergent data that are difficult or impossible to synchronize. Synchronizing them usually requires cooperation with the specialized software vendor – another time-consuming and expensive undertaking.
System-independent database
To get around these difficulties, it makes sense to establish a new, system-independent database. The objective is to break up rigid boundaries and processes so that data can be accessed and presented far more easily across the company – but not to replicate or even replace the individual specialty systems. Those systems have legitimate purposes: they provide the functions needed for their respective tasks, and they usually also include reporting features that are helpful, required and, in many cases, mandatory for data inventories.
Connecting “big” systems
“Big systems” are applications that hold a significant share of the business data to be processed in terms of data management.
To establish the new, system-independent database, read-only access to the data stores of the respective applications is set up, usually in cooperation with the system vendor. The master data of the “big” application can then be matched against data from other large and small data inventories to identify discrepancies and to supply new or smaller systems with that master data.
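How such a match of master data might look is sketched below, assuming two customer inventories that share a common key; the column names and sample values are purely hypothetical.

```python
# Minimal sketch of master-data matching between a "big" system and a smaller
# departmental inventory. All column names and sample values are hypothetical.
import pandas as pd

# Read-only extract from the "big" system (e.g. the ERP), simulated inline here.
erp = pd.DataFrame([
    {"customer_id": "C001", "name": "Acme GmbH", "postal_code": "80331"},
    {"customer_id": "C002", "name": "Beta AG",   "postal_code": "10115"},
])

# Extract from a smaller departmental inventory (e.g. service planning).
service = pd.DataFrame([
    {"customer_id": "C001", "name": "Acme GmbH", "postal_code": "80333"},  # diverging postal code
    {"customer_id": "C003", "name": "Gamma KG",  "postal_code": "50667"},  # missing in the ERP
])

# Match both inventories on the shared key and flag discrepancies.
merged = erp.merge(service, on="customer_id", how="outer",
                   suffixes=("_erp", "_service"), indicator=True)

only_one_system = merged[merged["_merge"] != "both"]
diverging = merged[(merged["_merge"] == "both") &
                   (merged["postal_code_erp"] != merged["postal_code_service"])]

print("Records present in only one system:\n", only_one_system, "\n")
print("Records with diverging postal codes:\n", diverging)
```

In a real project, the extracts would of course come from the vendor-provided read interfaces rather than inline test data, and the comparison rules would cover far more fields.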
Identification and elimination of “small” data sources
Finding the “small” data sources in the company – the multitude of existing Excel sheets, self-built Access databases, records, protocols, processing sheets completed by employees and so on – is a difficult step. Nevertheless, it is indispensable, not least from the perspective of complying with the GDPR.
These sources frequently contain important information, but are generally not accessible; they are “trapped” in their own data structures – usually because the main systems only offer inadequate views and options for incorporating or mapping these sources.
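A pragmatic way to begin the search is an automated scan of the shared drives. The sketch below assumes a single file-share root and a list of typical file extensions (both hypothetical) and builds an inventory of candidate files for later review.

```python
# Minimal sketch of cataloguing "small" data sources on a file share.
# The root path and the list of extensions are assumptions and differ per company.
from pathlib import Path
import csv

SEARCH_ROOT = Path("/mnt/shared")                 # hypothetical file share
EXTENSIONS = {".xls", ".xlsx", ".xlsm", ".mdb", ".accdb", ".csv"}

def find_small_sources(root: Path):
    """Yield candidate files that may hold departmental data."""
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in EXTENSIONS:
            stat = path.stat()
            yield {"path": str(path),
                   "size_bytes": stat.st_size,
                   "modified": stat.st_mtime}

# Write the inventory to a CSV so the findings can be reviewed – including for
# GDPR relevance – before any data is migrated.
with open("small_source_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["path", "size_bytes", "modified"])
    writer.writeheader()
    for record in find_small_sources(SEARCH_ROOT):
        writer.writerow(record)
```

Such a scan only surfaces candidates; deciding which of them actually contain relevant – or personal – data remains a task for the departments themselves.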
Combining, converting, standardizing – a walk through “data hell”
In theory, the individual steps are clear and well defined: system selection, read access to the small and large systems, data retrieval, and synchronization with the records of the new, system-independent database.
In practice, however, countless data projects have painted a completely different picture. In some cases the data quality is poor; in others, the data are incompatible with the new data concept – for a number of reasons rooted in structure, logic, history or technology.
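What “converting and standardizing” means in practice can be illustrated with a small sketch: raw records from different sources are mapped onto one common target schema. The field names, date formats and cleaning rules below are assumptions; real projects typically need far more of them.

```python
# Minimal sketch of standardizing divergent source records into a common schema.
# Field names, date formats and cleaning rules are hypothetical.
from datetime import datetime

DATE_FORMATS = ("%d.%m.%Y", "%Y-%m-%d", "%m/%d/%Y")

def parse_date(value: str) -> str:
    """Try the known source date formats and return an ISO 8601 date string."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value!r}")

def standardize(record: dict) -> dict:
    """Map one raw source record onto the common, system-independent schema."""
    return {
        "customer_id": record["customer_id"].strip().upper(),
        "name": " ".join(record["name"].split()),   # collapse stray whitespace
        "contract_start": parse_date(record["contract_start"]),
    }

raw_records = [
    {"customer_id": " c001", "name": "Acme  GmbH", "contract_start": "01.03.2019"},
    {"customer_id": "C002",  "name": "Beta AG",    "contract_start": "2020-07-15"},
]
print([standardize(r) for r in raw_records])
```

Every rule of this kind encodes a decision about which representation counts as the standard, which is where much of the effort described above goes.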
Requirements for management
Eliminating poor data quality and building the system-independent database requires a great deal of time, intense focus and meticulous sifting of the data at each of the individual processing steps.
In such situations, management must acknowledge the need for restructuring, communicate it appropriately, approve the necessary measures and order their implementation. Those in charge must arrange for the extensive transfer work – which only has to be done once – and fully support the process.