Case study after case study has linked duplicate master data to missed opportunities for revenue, margin, and efficiency.
Business environments change rapidly through mergers and acquisitions, which, coupled with a proliferation of specialized line-of-business systems, create numerous database applications. The result is a struggle common to many large businesses today: organizations face increasing inefficiencies and missed opportunities from fragmented data living in multiple applications. In a traditional architecture, data is not integrated in a way that uniquely identifies and consistently describes a customer across all applications, or that tracks a product uniformly throughout the company. An organization that lacks such data cohesion cannot recognize the total value of its business.
The Data Duplication Problem
Most enterprises are already painfully aware of the costs, errors, and missed opportunities associated with duplicate data. Records for customers, suppliers, products, and more are duplicated across multiple systems due to a proliferation of operational systems and to mergers and acquisitions. Often no mechanism exists to uniquely identify each entity across systems, and no proactive steps are taken to prevent the creation of duplicate records.
Below are eight common business process problems that result from data duplication:
- A regional construction materials company allows a customer to open a new line of credit at one division while the customer is already on credit hold at another division.
- An insurance company does not have a “lookup before create” check in place and therefore creates a duplicate member record. When this member files a claim, the company inadvertently processes the single claim twice.
- A global cosmetics company is unable to provide aggregated product and customer reporting because each region records entities in its own systems, in a different way and under different names: 20,000 real products are reported as 900,000 products worldwide, and 2,000 customers are reported as 30,000 unique customer records.
- A healthcare provider cannot assess its billing liability based on insurance coverage because it lacks a process that ensures only unique, master patient records are counted.
- A large global music provider cannot combine its complex music sourcing information (tracks, artists, labels, legal entities) into a global media list across all channels.
- An investment bank with a heavy mergers and acquisitions growth model is unable to efficiently calculate total exposure to a given individual or organization across the entire company.
- A large hardware and software reseller that competes primarily on product price must track more than 50 providers with multiple SKUs for thousands of products. Managing this data manually in Excel with add-ons, Access databases, and macros erodes price margins and loses business.
- An international manufacturing firm focused on growth through acquisition spends nine months per acquired company to integrate customer and product masters into their environment. With up to 10 acquisitions per year, duplicate data chokes IT resources and consumes the data quality budget.
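Several of the failures above trace back to the absence of a “lookup before create” safeguard. As a minimal sketch of what such a check involves, the following Python example normalizes a record's identifying fields and searches for an existing match before inserting a new record. The field names (`name`, `dob`) and the normalization rules are illustrative assumptions, not any particular vendor's implementation:

```python
# Illustrative "lookup before create" check: normalize identifying fields
# and search for an existing record before inserting a new one.
# Field names and normalization rules are assumptions for this sketch.

def normalize(record):
    """Reduce a record to a comparable key: lowercase, keep only alphanumerics."""
    def clean(value):
        return "".join(ch for ch in value.lower() if ch.isalnum())
    return (clean(record["name"]), clean(record["dob"]))

class MemberRegistry:
    def __init__(self):
        self._by_key = {}  # normalized key -> existing member record

    def lookup_or_create(self, record):
        """Return (record, created): the existing match if found, else the stored new record."""
        key = normalize(record)
        if key in self._by_key:
            return self._by_key[key], False  # duplicate prevented
        self._by_key[key] = record
        return record, True                  # genuinely new member

registry = MemberRegistry()
a, created_a = registry.lookup_or_create({"name": "Jane Q. Doe", "dob": "1980-05-01"})
b, created_b = registry.lookup_or_create({"name": "jane q doe", "dob": "1980-05-01"})
# The second call finds the first record instead of creating a duplicate.
```

Real master data systems layer far more on top of this (survivorship rules, match review queues, cross-system identifiers), but the core discipline is the same: check before you create.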
Having identified their data duplication problem, many organizations embark on solutions with varying success. Many vendors now market record matching technologies and processes to solve the data duplication problem, but the “data quality solutions” category has ballooned to include a vast range of products, and many matching solutions are fit to clean only a narrow range of data types. Not all matching problems are equal.
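To see why matching problems differ, consider that duplicates rarely agree character for character; matching must tolerate spacing, casing, and spelling variation. The sketch below uses Python's standard `difflib` for fuzzy string comparison; the 0.85 threshold and the product names are illustrative assumptions, not a vendor's tuned matching engine:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Similarity ratio in [0, 1] between two strings, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_probable_duplicate(rec1, rec2, threshold=0.85):
    """Flag two product records as likely duplicates when their names are close.
    The 0.85 threshold is an assumed tuning point, not a standard value."""
    return similarity(rec1["name"], rec2["name"]) >= threshold

catalog = [
    {"name": "Acme Widget 500ml"},
    {"name": "ACME Widget 500 ml"},
    {"name": "Globex Gadget"},
]
# Compare every pair of records and collect the likely duplicates.
dupes = [
    (i, j)
    for i in range(len(catalog))
    for j in range(i + 1, len(catalog))
    if is_probable_duplicate(catalog[i], catalog[j])
]
# Only the two Acme variants are flagged as a probable duplicate pair.
```

Pairwise comparison like this scales quadratically, which is one reason production matching engines add blocking, field weighting, and phonetic or domain-specific rules; the right combination depends on the data being matched.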
To learn about how data duplication issues affect enterprises in your industry, visit the MDM Education page.
Interested in learning more? Download a full copy of the article below.