Master Data in Supply Chain: Why It Breaks Everything and How to Fix It
Master data is the foundational information that defines your products, suppliers, customers, locations, and organisational structure. It is the reference data that every system in your supply chain relies on: your ERP, your warehouse management system, your transport management system, your procurement platform, your planning tools, and your reporting environment. When master data is accurate and consistent, these systems work. When it is not, nothing works properly, and the organisation spends an extraordinary amount of time and money compensating for a problem it does not fully understand.
This is not a technology problem, although technology is often blamed. It is a governance problem. Most Australian organisations have no formal ownership of master data, no standardised processes for creating or maintaining it, no quality metrics, and no accountability for the downstream consequences of getting it wrong. The result is a supply chain that runs on data nobody trusts, systems that produce outputs nobody believes, and decisions that are made on instinct because the numbers cannot be relied upon.
Gartner estimates that poor data quality costs organisations an average of $12.9 million per year. In supply chain operations specifically, the cost manifests as inventory inaccuracies, procurement errors, planning failures, reporting inconsistencies, and technology implementations that fail to deliver their promised benefits. Master data is the most unglamorous and most consequential problem in supply chain management.
What Master Data Actually Is
Master data is the relatively static reference data that describes the core entities in your business. It is distinct from transactional data (which records events: orders, shipments, invoices) and from analytical data (which is derived from transactions for reporting purposes). Master data defines the "what" and "who" that transactions happen against.
In a supply chain context, the key master data domains are:
Product (material) master data. Every SKU, raw material, component, and finished good in your supply chain has a master record. That record defines the item's description, unit of measure, weight, dimensions, storage requirements, shelf life, sourcing information, cost, classification codes, and the various identifiers used by different systems. A single product might have a different code in the ERP, the WMS, the customer's system, and the supplier's system. The product master is supposed to hold all of these relationships together.
Supplier master data. Every supplier has a master record containing legal entity details, contact information, payment terms, bank details, ABN, compliance status, approved product catalogue, lead times, and performance history. In a typical mid-to-large Australian organisation, the supplier master contains duplicates, inactive records, incomplete fields, and inconsistencies that create real operational and financial problems.
Customer master data. Delivery addresses, order requirements, pricing agreements, credit terms, and service level commitments. Errors in customer master data cause deliveries to go to the wrong address, invoices to be sent to the wrong entity, and pricing disputes that consume commercial team bandwidth.
Location master data. Warehouses, distribution centres, stores, production sites, and delivery points. Each location has attributes that affect logistics planning: address, operating hours, receiving capability, storage capacity, and geographic coordinates. Inaccurate location data causes route planning errors, delivery failures, and logistics cost blowouts.
Organisational master data. Cost centres, business units, legal entities, and the hierarchies that connect them. This data drives how costs are allocated, how reporting is structured, and how approvals flow. Errors here produce misleading financial reports and broken approval workflows.
How Bad Master Data Breaks the Supply Chain
The effects of poor master data are pervasive, but they rarely present themselves as "a master data problem." They present as operational problems that get treated symptomatically while the root cause goes unaddressed.
Planning failures. If the product master contains incorrect lead times, the planning system will generate purchase orders and production orders with the wrong timing. If weights and dimensions are wrong, the logistics plan will underestimate transport requirements. If the bill of materials is inaccurate, production will order the wrong quantities of components. Every planning system is only as good as the data it plans against. An organisation that invests $500,000 in an advanced planning tool but has poor master data will get precisely the same quality of plan it had before, just faster.
Procurement errors. Duplicate supplier records are one of the most common and most costly master data problems. When the same supplier exists under multiple records, the organisation loses visibility of total spend with that supplier, misses volume discount thresholds, and may process duplicate payments. A Fortune 500 manufacturer found 47 different records for a single critical supplier across its systems, resulting in over $12 million in annual cost from duplicate payments, reporting errors, and regulatory violations. Australian organisations are not immune to this problem; they are simply less likely to have measured it.
Inventory inaccuracy. If the system records a product in cases of 12 but the physical product arrives in cases of 10 because the unit of measure was set up incorrectly, every receipt, every count, and every pick will be wrong. If the system weight is 5kg but the actual weight is 7kg, pallet configurations will be wrong, truck loads will be underestimated, and storage locations will be misallocated. These errors compound over time and create a chronic gap between what the system says and what physically exists.
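The arithmetic of a unit-of-measure error is worth making concrete. A minimal sketch, with illustrative case sizes and receipt quantities, showing how a single wrong field in the product master compounds into phantom stock with every delivery:

```python
# Illustrative only: a case-size error in the product master compounds
# with every receipt. All figures are made up for the example.
system_case_size = 12   # units per case as recorded in the master
actual_case_size = 10   # units per case as physically supplied

receipts = [40, 25, 60]  # cases received across three deliveries

system_units = sum(r * system_case_size for r in receipts)
actual_units = sum(r * actual_case_size for r in receipts)

print(system_units)                 # 1500 units on the books
print(actual_units)                 # 1250 units on the floor
print(system_units - actual_units)  # 250 phantom units, and growing
```

Every subsequent count, pick, and replenishment decision inherits that 250-unit gap, which is why these errors surface as chronic inventory inaccuracy rather than a single correctable event.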
Reporting that nobody trusts. If cost centres are mapped incorrectly, logistics costs get allocated to the wrong business unit. If product hierarchies are inconsistent, category-level reporting is unreliable. If supplier classifications are incomplete, spend analysis produces incomplete results. The most common response is for analysts to extract data and manually reconcile it in spreadsheets, creating a shadow reporting environment that consumes enormous effort and introduces its own errors.
Technology implementations that fail. This is the most expensive consequence. ERP implementations, WMS deployments, planning system rollouts, and procurement platform transitions all depend on clean, consistent master data. Data migration is consistently cited as one of the top three reasons for delays and cost overruns in supply chain technology projects. Organisations that do not invest in data cleansing and governance before a technology implementation will discover the problem during go-live, when the cost of fixing it is ten times higher.
Why Nobody Fixes It
If master data is so important, why is it so consistently neglected? Several structural factors explain the pattern.
It is not anyone's job. In most organisations, nobody owns master data. The IT team maintains the systems but does not own the content. The business teams create and use the data but do not see data quality as their responsibility. The result is a gap in accountability where everyone assumes someone else is managing it.
The cost is invisible. Poor master data does not appear as a line item in the P&L. Its cost is embedded in inefficiency, rework, and missed opportunities that are difficult to attribute. The warehouse team knows that inventory counts do not match. The procurement team knows that spend reports are unreliable. The planning team knows that lead times in the system are wrong. But each of these problems is treated as a local issue rather than a symptom of a systemic master data problem.
Data quality degrades gradually. Master data does not fail catastrophically. It degrades over time as records are created inconsistently, as changes are not propagated across systems, as new products and suppliers are added without following established standards, and as mergers and acquisitions bring in data from systems with different structures and conventions. The degradation is gradual enough that the organisation adapts to it through workarounds rather than addressing the root cause.
It is not exciting. Master data governance does not have the appeal of an AI implementation, a new planning system, or a supply chain control tower. It is process-oriented, detail-heavy, and unglamorous. It is difficult to get executive sponsorship and funding for a master data programme because the benefits, while substantial, are distributed across the organisation rather than concentrated in a single, visible outcome.
How to Fix It
Fixing master data is not a technology project. It is a governance programme that uses technology as an enabler. Here is a practical approach.
Step 1: Assign ownership. Every master data domain needs a business owner: someone who is accountable for the quality, completeness, and consistency of that data. Product master data should be owned by the supply chain or product team. Supplier master data should be owned by procurement. Customer master data should be owned by the commercial team. These owners are not doing the data entry. They are setting the standards, approving exceptions, and being held accountable for data quality metrics.
Step 2: Audit the current state. Before you can fix the data, you need to understand how bad it is. Run a data quality audit across your key systems: ERP, WMS, procurement platform. Measure completeness (what percentage of mandatory fields are populated), accuracy (does the data match reality), consistency (is the same entity described the same way across systems), and duplication (how many duplicate records exist for suppliers, products, and customers). This audit will quantify the problem and provide the baseline for measuring improvement.
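The audit metrics above are simple enough to compute directly against a system export. A minimal sketch, assuming supplier records exported as dictionaries (the field names and mandatory-field policy are illustrative, not a standard):

```python
# Minimal data quality audit sketch. Field names and the mandatory-field
# list are assumptions for illustration; adapt to your own export.
from collections import Counter

MANDATORY_FIELDS = ["name", "abn", "payment_terms"]

suppliers = [
    {"name": "Acme Pty Ltd", "abn": "12 345 678 901", "payment_terms": "30 days"},
    {"name": "ACME PTY LTD", "abn": "12 345 678 901", "payment_terms": ""},
    {"name": "Beta Logistics", "abn": "", "payment_terms": "14 days"},
]

def completeness(records, fields):
    """Percentage of mandatory fields populated across all records."""
    total = len(records) * len(fields)
    filled = sum(1 for r in records for f in fields if str(r.get(f, "")).strip())
    return round(100 * filled / total, 1)

def duplicates_by_abn(records):
    """Count of records that share an ABN with another record (blank ABNs ignored)."""
    counts = Counter(r["abn"] for r in records if r["abn"].strip())
    return sum(n for n in counts.values() if n > 1)

print(completeness(suppliers, MANDATORY_FIELDS))  # 77.8
print(duplicates_by_abn(suppliers))               # 2
```

Even this crude pass surfaces the two findings that matter for the baseline: how much of the mandatory data is actually there, and how many records describe the same real-world entity.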
Step 3: Define standards and governance. Establish naming conventions, mandatory fields, classification structures, and approval workflows for creating and modifying master data records. Document these in a master data governance policy. The policy does not need to be elaborate: it needs to be clear, enforceable, and owned. For each data domain, define who can create records, who approves them, what fields are mandatory, and what naming conventions apply.
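A governance policy is most enforceable when its rules can be checked automatically at the point of record creation. A hypothetical sketch of a policy encoded as executable checks (the naming convention, mandatory fields, and approved units of measure here are invented for illustration):

```python
# Hypothetical governance rules encoded as checks. The SKU pattern,
# mandatory fields, and UoM list are illustrative assumptions.
import re

RULES = {
    "mandatory": ["code", "description", "uom"],
    "code_pattern": re.compile(r"^SKU-\d{6}$"),
    "allowed_uom": {"EA", "CTN", "PAL"},
}

def validate_product(record, rules=RULES):
    """Return a list of rule violations for a proposed product master record."""
    errors = []
    for field in rules["mandatory"]:
        if not str(record.get(field, "")).strip():
            errors.append(f"missing mandatory field: {field}")
    code = record.get("code", "")
    if code and not rules["code_pattern"].match(code):
        errors.append(f"code does not follow naming convention: {code}")
    if record.get("uom") and record["uom"] not in rules["allowed_uom"]:
        errors.append(f"unit of measure not in approved list: {record['uom']}")
    return errors

print(validate_product({"code": "SKU-000123", "description": "Widget", "uom": "EA"}))  # []
print(validate_product({"code": "sku123", "description": "", "uom": "BOX"}))
```

Wiring checks like these into the record creation workflow is what turns a policy document into an enforced standard.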
Step 4: Cleanse the data. This is the labour-intensive step. Systematically work through each domain, deduplicating records, filling in missing fields, correcting errors, and standardising formats. Start with the domains that have the highest operational impact: typically supplier master and product master. For large data sets, automated data matching and cleansing tools can accelerate the process, but human review is always required for ambiguous matches and complex records.
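The deduplication logic can be sketched with fuzzy name matching. A minimal illustration using Python's standard library: real cleansing tools match on richer attributes (address, ABN, bank details) and, as noted above, route ambiguous pairs to a human reviewer rather than merging automatically.

```python
# Candidate-duplicate detection sketch using stdlib fuzzy matching.
# The normalisation rules and 0.85 threshold are illustrative choices.
from difflib import SequenceMatcher
from itertools import combinations

def normalise(name):
    """Crude normalisation: lowercase, drop punctuation and a common suffix."""
    return name.lower().replace(".", "").replace("pty ltd", "").strip()

def candidate_duplicates(names, threshold=0.85):
    """Return name pairs whose normalised similarity meets the threshold."""
    pairs = []
    for a, b in combinations(names, 2):
        score = SequenceMatcher(None, normalise(a), normalise(b)).ratio()
        if score >= threshold:
            pairs.append((a, b, round(score, 2)))
    return pairs

names = ["Acme Pty Ltd", "ACME Pty. Ltd", "Beta Logistics", "Acme Holdings"]
for a, b, s in candidate_duplicates(names):
    print(a, "<->", b, s)  # flags the Acme pair for human review
```

Note that "Acme Holdings" is correctly left alone: a plausibly distinct legal entity should never be auto-merged, which is exactly why the ambiguous middle ground belongs with a reviewer.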
Step 5: Build it into the operating rhythm. Data quality is not a one-off project. It is an ongoing discipline. Build master data quality metrics into your monthly reporting. Conduct periodic audits. Include data quality requirements in the onboarding process for new products, suppliers, and customers. Make data quality a standing agenda item in your S&OP or operations review meeting. The organisations that sustain master data quality are the ones that treat it as an operational process, not a project.
Master Data and Technology Projects
If your organisation is planning an ERP upgrade, a WMS implementation, a new procurement platform, or any other supply chain technology project, master data should be one of the first workstreams, not an afterthought.
The data migration step in a technology implementation involves extracting data from the old system, transforming it to fit the new system's requirements, and loading it into the new environment. If the source data is inaccurate, incomplete, or inconsistent, the migration will transfer those problems into the new system. Organisations that "lift and shift" dirty data into a new platform are paying for a new system that produces the same unreliable outputs as the old one.
Best practice is to start the data cleansing workstream six to twelve months before the technology go-live, depending on the scale and complexity of the data. This gives sufficient time to audit, cleanse, and validate the data before it is migrated. It also provides an opportunity to redesign the master data governance processes for the new system, establishing the standards and workflows that will prevent the data from degrading again after go-live.
The investment in data quality before a technology implementation typically represents 5 to 15 percent of the total project cost. It is the most cost-effective investment in the entire programme, because it determines whether the other 85 to 95 percent of the investment delivers its promised value.
The AI Readiness Connection
Organisations that are exploring AI and machine learning in their supply chain need to understand that master data quality is the prerequisite, not an optional input. AI models are trained on data. If the training data contains errors, duplicates, and inconsistencies, the model will learn from those errors and produce outputs that reflect them. The principle of "garbage in, garbage out" applies with particular force to machine learning, because the algorithms are designed to find patterns in whatever data they are given, including patterns that reflect data quality problems rather than genuine operational signals.
Demand forecasting models trained on shipment data with inconsistent units of measure will produce unreliable forecasts. Supplier risk models built on duplicate supplier records will underestimate concentration risk. Inventory optimisation algorithms running on inaccurate lead times and incorrect safety stock parameters will recommend the wrong stock levels.
Before investing in AI, invest in the data foundations that AI depends on. The organisations that are getting the most value from AI in supply chain are the ones that spent the time getting their master data right first.
How Trace Consultants Can Help
Trace Consultants helps Australian organisations get their supply chain data foundations right, whether as a standalone data quality programme or as part of a broader technology implementation or supply chain improvement initiative.
Master data audit and assessment. We assess the quality, completeness, and consistency of your supply chain master data across ERP, WMS, procurement, and planning systems, quantifying the operational and financial impact of data quality gaps.
Data governance design. We design master data governance frameworks: ownership structures, standards, approval workflows, and quality metrics that ensure data quality is maintained as an ongoing operational discipline.
Technology implementation data readiness. We lead the data cleansing and migration workstream for supply chain technology projects, ensuring master data is accurate, consistent, and fit for purpose before it enters the new system.
Procurement and supplier data management. We clean and consolidate supplier master data, eliminating duplicates, completing missing fields, and establishing the governance processes that prevent the problem from recurring.
Explore our Technology advisory services →
Explore our Procurement services →
Explore our Planning & Operations services →
Speak to an expert at Trace →
Where to Start
If you suspect your master data is a problem but you have not quantified it, start with a focused audit of your two most critical domains: product master and supplier master. Measure completeness, accuracy, consistency, and duplication. Quantify the operational impact: how many planning errors, procurement duplicates, inventory discrepancies, and reporting inconsistencies can be traced back to data quality? That audit will tell you whether you have a manageable housekeeping exercise or a systemic governance problem that needs structured attention.
The organisations that get master data right do not treat it as a technology initiative. They treat it as a foundational operating discipline, like safety or quality, that underpins everything else the supply chain does. It is not exciting. It is essential.
Read more insights from Trace Consultants →
Contact our team →