Saturday 5 November 2011

A Maturity Model for Master Data Management


MDM Maturity model and quantitative measurements




One of the important factors that contribute to the organizational acceptance and long-term sustainability of any large scale, multi-year program is the ability to measure progress. Unfortunately, master data management does not have many accepted measures. Many organizations try to use organizational KPIs and link their MDM initiatives to them. This can prove to be a hard exercise, since there are many layers between technology initiatives and their outputs and the business results.

The initial step should be to establish a set of measures acceptable to both the IT people and the business community. Measurements are needed on both the IT side and the business side. A clear understanding of them, agreement on them and a balance between them is essential for the long-term sustainability of an MDM program.


1.1    Identification and definition of business data entities


One of the most important steps in creating a roadmap and a measurable multi-year MDM program is the identification of the business data entities in the master data domain. These constitute the ‘Scope’ portion of what we are measuring. We then move on to the measures, which constitute the ‘What’ portion.

At a high level, this can be a simple beginning: say Customer, Product/Asset, Location/Store/Site, Physician etc. There are not a lot of master data entities in any organization, let us say a couple to a dozen depending on how granular you go, and it is relatively easy to arrive at a starting-point definition. But that is not enough for any practical purpose.

Let us take the example of the customer. A common perception is that the customer is somebody who eventually pays the organization. But consider the airline industry, where there are a lot of intermediaries between the airline company and the end traveller. If the airline is creating a customer data hub, is the customer the traveller who ultimately uses the airline? Or is it the intermediary who buys tickets in bulk from the airline and resells them? The common agreement would be that the customer is the traveller. But if you want to focus on revenues in the next year or so, another MDM domain to focus on could be the vendors, or say the re-sellers. And if you focus on that, the question immediately arises of how we can understand the vendors without knowing what kind of customers they are selling to.

In the wireless telecom industry many consider the customer to be the individual who uses the mobile phone. But then what about the small business ‘Joe’s Pizza’ which purchased 10 units for its employees? Do we care about those employees who carry the phones and use them but do not pay directly for the service? Which line of business of the organization really owns this segment of customers? These issues should be addressed in the identification and definition of the business data entities, and they later constitute a mandatory pre-requisite and critical starting point for the scoping and road-mapping exercises.

However, identification of the business entities should be independent of these program scoping considerations; that prioritization is considered in the roadmap creation. The focus should be on the master data entities, in other words transactional and analytical information should be filtered out as much as possible. Refer to the ‘MDM Layers’ section for more details on this. Scoping details come later, in the ‘Business Measures’ and ‘Roadmap’ sections.


Ironically, a logical place to start this could very well be the IT department, specifically any common group which is neutral in its approach to every line of business and information system. These groups have the advantage of being a common factor across a variety of departments, and can document and escalate issues for the common good of the enterprise.

We started the earlier paragraph with the ‘logical place to start’ statement, and that is all they can do: start. The business side has to take over for prioritizing, verifying the definitions and so on. Let us get back to this in the data governance sections.



1.2    Measures


An important criterion for the selection and definition of these measures is their independence: they can be measured and improved almost independently of each other. These measures can also be applied to a specific business entity at a holistic level, or to a very specific finer-grained data element or type of data. The tools, processes and skills required to improve each of the measures are relatively independent and can be applied in different timeframes.
                                                                                  
This independence allows organizations to prioritize these measures individually and apply resources in the areas most important to them.

1.2.1     Technology Measures


These are intended to measure the technological maturity of the master data content and of the fundamental organizational components for managing it over the years. This need not be tied to a master data management program; a master data management program is not even necessary to use these measures. But these measures help us focus our efforts and dollars on ‘data’, and not on all the tools, technologies and layers that come on top of data. After all, master data management is about the management of ‘data’ of the type ‘master’.

These measures enable organizations to adopt master data management, and mature in it, as a series of smaller, focused projects/investments rather than needing huge initiatives focusing on everything. The independence of the measures helps this objective.

The definitions of these measures are also carefully chosen so that they are independent of each other. This is different from many industry-standard definitions, which overlap each other and thus result in confusion.

1)    Data Content: This measures the organizational breadth and depth of master data the organization wants to address versus what it is actually addressing at any given moment. The breadth (aka scope) dictates the master data domains and the major business entities/data elements that would be addressed as part of the master data management program. The depth portion focuses on enterprise reach vs departmental reach.

As we get closer and closer to enterprise-level use of a single-source master data repository and have all the intended (in-scope) data in the master data store/s, we get better and better scores for this measure.
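To make the idea of scoring concrete, here is a minimal sketch in Python of how such a ‘Data Content’ score could be computed; the entity names, the lines of business and the equal weighting of breadth and depth are purely illustrative assumptions, not part of any standard.

# Illustrative sketch: a simple "Data Content" score for one master data domain.
# Breadth = fraction of in-scope business entities/data elements present in the hub.
# Depth   = fraction of the enterprise (e.g. lines of business) actually using it.

in_scope_elements = {"customer_id", "legal_name", "mailing_address", "email", "phone"}
loaded_elements   = {"customer_id", "legal_name", "mailing_address"}

lines_of_business         = {"wireless", "wireline", "tv", "smb"}
lobs_consuming_master_hub = {"wireless", "wireline"}

breadth = len(loaded_elements & in_scope_elements) / len(in_scope_elements)
depth   = len(lobs_consuming_master_hub & lines_of_business) / len(lines_of_business)

# One possible composite score; the equal weighting is an arbitrary assumption.
data_content_score = round(100 * (breadth + depth) / 2)
print(f"breadth={breadth:.0%}, depth={depth:.0%}, score={data_content_score}/100")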

2)    Data Services:
Now that we have defined the data in scope for the master data stores and measured the content, we need to switch our attention to its accessibility for consuming applications across the enterprise.

The ‘Data Services’ portion measures the ability of the organization to access and manage the master data in the master data stores quickly, easily and securely for consuming applications. This is a fundamental step towards reducing time to market and project turnaround. Currently, many IT projects for business purposes spend a good deal of their time and money gathering basic information, integrating it, cross-checking it, quality-checking it etc. as part of those business projects. By adopting this measure, we ensure that the focus is on moving this fundamental effort to a one-time, centralised place so that future business projects are freed from this step.
                                                                                                               
The basic assumption here is that there is a layer of data services around the master data stores so that nobody needs to use the master data store tables directly. In both custom-built and software-product situations, the data structures used for master data are more complex and abstract than usual data structures. So this is a very important, and often ignored, capability in the master data management domain.

This can be further subdivided into multiple types of data services. The most important, of course, are the base services, in which basic CRUD and simple search services are exposed. In the initial days, only basic CRUD operations and/or logical groupings of them were considered data services. However, advanced searches like probabilistic searches, as well as lifecycle management services, are also considered data services now and are good line items for measurement under the data services umbrella.

In the early days of master data management (let us say 2005-ish) nobody considered services beyond these relatively simplistic data services. However, with the introduction of vendor products and the ‘need to show immediate results’, vendors tried to put more into their products. A good example is lifecycle management services. This creates a whole ‘platform of MDM’ which is expected to be ready-made. Needless to say, this is not an invention. In earlier days this was considered a custom-made business process for each organization, probably for each line of business, and probably even specific to product categories, because the lifecycles and the rules associated with them are so integrated and specific. Nowadays many consider, or at least expect, these lifecycle services to be an outcome of master data management. Right or wrong.

With these expectations also in mind, here is a high-level categorization of data services for measurement purposes, with a sketch of what such a service contract could look like after the list.

a)    Key generation services
b)    Base data services
c)    Lifecycle data services
d)    History data services
e)    Simple search services
f)    Probabilistic search services
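To make this categorization concrete, here is a minimal sketch of what such a data services contract could look like; the class name, method names and signatures are illustrative assumptions and not the API of any particular MDM product.

# Illustrative sketch of a master data service layer wrapping the master data store,
# so consuming applications never touch the underlying tables directly.
from abc import ABC, abstractmethod
from typing import Optional

class CustomerMasterDataServices(ABC):
    # a) Key generation services
    @abstractmethod
    def generate_enterprise_id(self) -> str: ...

    # b) Base data services (CRUD)
    @abstractmethod
    def create_customer(self, attributes: dict) -> str: ...
    @abstractmethod
    def get_customer(self, enterprise_id: str) -> Optional[dict]: ...
    @abstractmethod
    def update_customer(self, enterprise_id: str, changes: dict) -> None: ...

    # c) Lifecycle data services (status transitions, merges, splits)
    @abstractmethod
    def change_status(self, enterprise_id: str, new_status: str) -> None: ...

    # d) History data services
    @abstractmethod
    def get_history(self, enterprise_id: str) -> list[dict]: ...

    # e) Simple search services (exact or keyed lookup)
    @abstractmethod
    def find_by_external_id(self, system: str, external_id: str) -> Optional[str]: ...

    # f) Probabilistic search services (fuzzy match on name/address/phone)
    @abstractmethod
    def search_candidates(self, name: str, address: str = "", phone: str = "") -> list[tuple[str, float]]: ...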


That said, most MDM implementations nowadays use software products, and almost all of them provide a good set of data services under various names. However, the robustness and usability of these vary, so we still need this measure even when we use vendor products.

3)    Universal IDs (Single IDs)
A generally accepted objective, and probably the hardest of all the objectives to attain, is a single universal ID for a piece of information. Or say a single version of truth: a Customer ID, a Product ID or a Store ID. The idea is that any system across the organization can then use this ID and access the exact same information.

Needless to say, currently all major organizations have multiple systems, each with its own replicated data and a different identifier for the same data. In the past, we kept creating copies in the name of ease of access, business intelligence projects etc. Almost every time we integrated data from multiple systems, we said that the new ID is ‘the ID’. And of course that lived only till the next system integration project and the next new ID. Many of these projects kept consolidating all the information from disparate systems and brought all these identifiers and associated information into a single system. Now, that would help them on the first measure of ours, ‘Data Content’. A single ID, however, not yet. The only thing that happened is that, say, six records from six systems were moved to a single data store, but as six separate instances (aka records). So which one of the six instances is the right one? Well, no answer. There have been efforts to consolidate, attribute by attribute, into a single record. Well, that requires manual intervention and takes years for all the millions of records. But there are not many shortcuts anyway. There is a lot of data quality/matching software that promised the magic of consolidation, but practically every organization is far away from the ideal state of reaching that single record.

So this measure focuses on the maturity of the data in this area. It is a key measure of MDM maturity and is what differentiates an MDM project from becoming yet another data integration project (another copy of the data again).
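As a rough illustration of what ‘six instances linked to one enterprise ID’ and a corresponding maturity score could look like, here is a minimal sketch; the systems, IDs and the simple coverage metric are assumptions chosen only for illustration.

# Illustrative sketch: source records from several systems cross-referenced to one
# enterprise ID, and a crude "Universal ID" score = share of source records that
# have been linked to an enterprise ID at all. Names and structures are assumptions.

source_records = [
    {"system": "billing",  "id": "B-100", "enterprise_id": "E-1"},
    {"system": "crm",      "id": "C-552", "enterprise_id": "E-1"},
    {"system": "ordering", "id": "O-981", "enterprise_id": "E-1"},
    {"system": "web",      "id": "W-777", "enterprise_id": None},   # not yet matched
    {"system": "legacy",   "id": "L-014", "enterprise_id": None},
]

linked = [r for r in source_records if r["enterprise_id"] is not None]
universal_id_score = len(linked) / len(source_records)
print(f"Universal ID coverage: {universal_id_score:.0%}")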

4)    Data Quality

First of all, this book is not about data quality. It is only about adopting master data management, so only what is needed for that adoption, and the completeness of it, is discussed here. Data quality is understandably an imperative part of MDM. But organizations vary widely in their definition of data quality. The terms data quality and data governance overlap so much that it is confusing, and since MDM programs are a specific point of action, these points converge and confuse the MDM domain as well. So the objective here in this book is to forget the very broad definitions of data quality (and data governance) and adopt a measure that focuses only on master data, and only on the data quality issues and processes that are relevant and necessary for complete and mature master data. This may not be suitable for an enterprise-wide ‘data quality’ program. However, it is a very good point at which to start acting on data quality, and then generalise it and grow it into the enterprise programs.

Data quality is one of the most used, abused and marketed terms in the MDM industry. On the good side, MDM is bringing a lot of attention to data quality, and there is serious thought and effort happening on the data quality side. The association of data quality with master data management enhances both, because the focus on ‘master data’ gives data quality a razor-sharp focus on one of the most impactful areas, even if not necessarily an immediately impactful one. It also brings certain repeatable processes across the industry. So it becomes much more practical to do in real-life situations in many organizations: you can now start small, focus, demonstrate success relatively easily, then scale up and make it sustainable, compared to the general case.

On the negative side, data quality is another beaten-up term in the industry overall. It is often associated with large enterprise programs, buying new software, lots of presentations, data quality stewards and their role definitions, business case building to get financing for all of this, etc.
From a different angle, the success of a data quality effort can also be questioned, because it may not be immediately visible or useful. For example, assume we need some good-quality customer data for a campaign mailing. This data has to come from the campaign management system, and we did the data quality improvements in the master data system. If campaign management is not integrated with the master data system, those improvements are not readily available to the campaign program. So even though there are long-term prospects for improving campaigns using better data quality, the benefit is not available immediately.

In the majority of organizations, data quality programs are associated with specific IT systems, like a specific data warehouse, or with a business unit like campaign management. Of course there are exceptional organizations with disciplined data quality programs covering the whole organization's data and processes. But let us not focus on them, as their numbers are very limited.

Overall, MDM gives a good opportunity to start a data quality program in a very focused way. We can focus on specific data quality issues in the domain as well as build out a repeatable data quality program as part of MDM. And we can measure the quality of the data in quantitative terms.
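As an illustration of measuring quality in quantitative terms, here is a minimal sketch that scores a couple of customer records for completeness and email validity; the chosen attributes, rules and sample values are assumptions, and a real program would define its own.

# Illustrative sketch: completeness and validity scores for customer master records.
import re

records = [
    {"legal_name": "Joe's Pizza Inc.", "email": "owner@joespizza.example", "postal_code": "M5V 2T6"},
    {"legal_name": "",                 "email": "not-an-email",            "postal_code": "90210"},
]

required = ["legal_name", "email", "postal_code"]
email_ok = lambda v: bool(re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", v or ""))

completeness = sum(bool(r[f]) for r in records for f in required) / (len(records) * len(required))
validity     = sum(email_ok(r["email"]) for r in records) / len(records)

print(f"completeness={completeness:.0%}, email validity={validity:.0%}")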

The important thing is not to get caught up in never-ending theory discussions and business case cycles but to get started. And not to get hung up on what is contained in what, like MDM as part of a data quality effort or data quality as part of MDM.

5)    Data Governance

This measure focuses on the data governance aspects needed for master data management and their maturity, not necessarily the maturity of the data governance program in the organization overall. That is, we do not really focus on the maturity or reach of the data governance program in the transactional or analytical data domains. This measure, however, ensures that all the elements of data governance are in place for the successful implementation and maturing of the organization's master data management.

MDM can certainly be done without data governance or data quality, but the value would be very diminished. It would look like just another ETL project, just like in the BI space, but for master data. It can still be a good first step, if done as part of a well-planned roadmap. To some extent data governance is imperative, in some form, even for the implementation of the initial phases of the program. But it need not be under the ‘brand name’ of data governance. We need to focus on why we need data governance and what the vehicles for such things are in the organization. We will find gaps, and then we either need to fill the gaps, or bring these tasks and processes under the umbrella of data governance and add the missing elements. We would focus on such a process just for the master data management program.





6)    Mastering of Data
The moment you have that single record with a single ID, you are faced with the problem of keeping it that way!

The question is: are you creating and managing the data in the master data store as per your end state? Maybe you have achieved all the above measures. For example, you are bringing data from five systems, using complex logic to merge it and create that single record, using a work queue system to manage the output of the matching engines, etc., and data is constantly being fed from the sources to this master store. You have achieved a good score on every other measure. But what about stopping the creation of the master data in five places to begin with, and creating it directly in the master store? If you are able to do that, you can get rid of those back-end processes and all that matching. This is an optimization and continuous improvement area, and you need strong data governance in place to enable the organizational change that comes with it. So this measure tracks the amount of data being mastered in the master data store, as opposed to the store being the slave of other data stores.

There is a large cost associated with re-engineering these processes and systems, which makes any attempt to change this difficult. However, there is enough value in altering them and moving the creation of specific data (as specified in the scope of each master data store) into the master data stores. During the initial creation of these data stores the mastering is not always done in the master data stores. This usually causes a delay in having the master data in the central data store and available to the rest of the enterprise. This delay is not very important for reporting and business intelligence purposes, but it is very important in operational functions like customer self-serve, call center support functions etc. A one-day delay for information to arrive from another system is very inefficient here.

This measure analyzes the maturity in this area and, of course, needs to be measured for each business entity. A low score means that data expected to be mastered in the master data store reaches it only in a very offline way (say, after a week); the best possible situation is that the data is created in its end-state location and fed to every consumer of the information in real time.
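A minimal sketch of how such a ‘Mastering of Data’ score could be computed is given below; the record structure, the one-day threshold and the idea of counting records created directly in the hub are illustrative assumptions.

# Illustrative sketch: "Mastering of Data" score = share of records created directly
# in the master data store, versus records still fed in from upstream systems,
# with a separate view of how stale the fed records are. Names/thresholds are assumptions.

records = [
    {"id": "E-1", "created_in": "mdm_hub", "feed_lag_days": 0},
    {"id": "E-2", "created_in": "billing", "feed_lag_days": 1},
    {"id": "E-3", "created_in": "crm",     "feed_lag_days": 7},
]

mastered_directly = sum(r["created_in"] == "mdm_hub" for r in records) / len(records)
fed_within_a_day  = sum(r["created_in"] != "mdm_hub" and r["feed_lag_days"] <= 1 for r in records) / len(records)

print(f"created in hub: {mastered_directly:.0%}, fed near-real-time: {fed_within_a_day:.0%}")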



  MDM Maturity Model




Wednesday 26 October 2011

'Data Layers' in a master data management hub

“Data layers” is a diagrammatic representation of high-level data entities to assist the scoping and road-mapping of an MDM program. It is presented generically here, using “customer” as the example MDM domain, and needs to be customised for each organization.

In this diagram, concentric layers of data are represented as circles around a core MDM domain concept. The core represents the ID of the concept, e.g. the enterprise-level Customer ID. Data entities with the closest affinity to the IDs are drawn in the immediate circle, and further concentric circles are drawn with decreasing affinity to the core.

These layers denote a logical and optimal sequence for building them out from a technical perspective. So they can represent the various large phases in an MDM program, with each phase running months or even years. However, business needs and value always get more weight, and that is okay. What we are looking for is a way to balance the technically optimal world and the business-need-driven world and come to a practical roadmap.

A sample picture of a ‘Single View of Customer’ using customer master data layers:


 




This is only a ‘logical’ picture of MDM data entities and non-MDM data entities. It is definitely not a suggestion that we should clog the master data stores with data from all these non-MDM systems in the physical architecture of MDM. Rather, it is to get a conceptual understanding of what the business team could be looking for when asking for things like a ‘Customer Profile’ or a ‘360-degree customer view’, and how to build it using the master data as a starting point.

1.1      Master Data Layer

1.1.1     Layer 1.1 a.k.a The Core



The ‘Core’ means the centralisation of the IDs from all the important internal and external databases and the linking of them to one another to represent the ‘same record’ or ‘instance of information’. As the maturity level increases, this linking turns into identifying or creating a new enterprise ID as required. The legacy IDs will then be decommissioned from the enterprise slowly, over many years, as the downstream systems also become able to completely use the new enterprise IDs. This transformation takes years and can run into decades.

The assumption here is that, in any large organization, the master data is spread across the organization in multiple data stores, mostly with their own unique IDs.

This forms something like an initial ‘Registry’ architecture of MDM. In a pure registry form, no properties of the data concepts, including hierarchies, are maintained or consolidated as part of it; the IDs mostly represent the highest-level parents.

To create a core, we usually need a lot more data to identify the links: names, addresses, status, phone/email, external IDs etc. This information is used to identify the duplicates and finally link the records to each other to form a core. However, storing this non-key information is moved to the next phase/layer. The main reason is that each of these entities, like names, addresses and status, brings in its own data lifecycle and more integration and maintenance needs, so the scope increases a lot.

However, most organizations that build a registry MDM keep some additional information along with these IDs, like name, addresses or hierarchy. This varies by organization and would be necessary if a quick identification of the master data entity is required without going to the source systems. So, for practical purposes, they are actually doing a little more than a pure registry form of MDM. But in that case it is important to take steps on the data quality and data integrity of that information; in other words, it starts to bring out the different aspects of a true MDM implementation, in terms of processes, data quality, data governance etc.
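As an illustration of this registry-style core, here is a minimal sketch of one core entry: the enterprise ID, its cross-references to legacy IDs, and the small amount of extra identifying information discussed above; the structure and field names are assumptions.

# Illustrative sketch of a registry-style core entry: one enterprise ID linking
# the 'same customer' across source systems, plus minimal identifying attributes.
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    enterprise_id: str
    # Cross-references: (source system, legacy ID) pairs believed to be the same customer
    legacy_ids: list[tuple[str, str]] = field(default_factory=list)
    # Minimal non-key data kept only for quick identification / matching
    display_name: str = ""
    postal_code: str = ""

core = RegistryEntry(
    enterprise_id="E-1",
    legacy_ids=[("billing", "B-100"), ("crm", "C-552"), ("ordering", "O-981")],
    display_name="Joe's Pizza Inc.",
    postal_code="M5V 2T6",
)
print(core.legacy_ids)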


1.1.2     Layer 1.2 a.k.a Operational MDM


The second layer represents all the master data which fits the classical definition of master data for the given domain, i.e. ‘slowly changing’. For example, a customer will have only one legal name at a point in time, and he/she will have only a couple of mailing addresses at most. All these are set once and then expected to remain constant unless some specific life event, re-evaluation etc. occurs. The regular business operations, like a customer order or customer bill generation, do not change this data.

This information is more or less the same across organizations and even industries, which is the core reason the ‘Master Data Management’ industry arose. However, each of these types of information, like name or phone number, has its own information lifecycle in addition to that of the core data concept. For example, a customer has a lifecycle which represents his relationship with the organization: he opens accounts, buys products etc., and maybe he will eventually discontinue being a customer, closing all relationships. But he also has a lifecycle outside the given organization: he gets married, has children, moves to a new house changing addresses etc. And each portion of the information can also change: he changes his name, his addresses, his email, his phone etc. All these information lifecycles are managed in this operational MDM layer, made available to the entire organization over a period of time, and also sent to downstream systems as a single source of truth.
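A minimal sketch of how one such slowly changing attribute could be tracked in the operational MDM layer, using effective dates so that history is preserved while the current value is always available, is shown below; the structure and field names are assumptions.

# Illustrative sketch: a slowly changing attribute (mailing address) tracked with
# effective dates, so downstream systems always see the current value while history
# is preserved. Field names and dates are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AddressVersion:
    address: str
    effective_from: date
    effective_to: Optional[date] = None   # None means this is the current version

history = [
    AddressVersion("12 Old Street, Springfield", date(2008, 3, 1), date(2011, 6, 30)),
    AddressVersion("48 New Avenue, Springfield", date(2011, 7, 1)),
]

current = next(v for v in history if v.effective_to is None)
print(f"Current mailing address: {current.address}")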


1.2      Layer 2 : The transactional data and events

This layer represents the core transactions of the organization: sales, orders, shipments, bills, product usage etc. For large organizations, most of these are done by the major IT systems in the organization (not the MDM system). Unfortunately, most of the legacy applications and vendor packages which execute these functions have built-in provisions to capture and manage master data as well, like customer data and product data. They forget that customer data is not owned by those transactions. Most of the vendor products claim that they can interact with outside systems through a lot of interfaces, from file formats to web services. But the reality is that most of these systems expect the master data to be resident inside them to function properly, and it can be very costly to re-direct the customer management portion inside these systems to the master data stores. This is a major integration challenge in the adoption of master data management. This layer represents these types of transactional data and the information they provide to understand the customer holistically.

1.3      Layer 3 : The analytical data : BI/data mining

Generally this layer does analysis on the data produced by the two inner layers and produces interpretations, summaries etc. As part of this, it generates a lot of new data. It almost always has to replicate the data from the master and transactional systems into the BI systems. There will also be a lot of other analytical programs, especially in web-centric organizations, such as click stream analytics, which produce a lot of data. This too can be classified as analytical data.



1.4      Beyond Operational MDM : Enterprise MDM


The master data layer (1.1 and 1.2) represents master data in its pure form; the remaining layers are NOT master data. However, the introduction of analytical MDM, the 360-degree view of the customer, and even the recent trends around highly analytical, information-based operational processes demand a lot more information than these two layers. And master data management is asked to take more and more of this scope onto its plate, or it is simply ‘ordered’ to by the most senior level of management.

So the industry has started using very creative terms to satisfy these needs. Some of the terms used are ‘Analytical MDM’, ‘Enterprise MDM’ etc.

It is important to note that the objective of MDM is to manage the core data assets of the organization, like the customers and products, and provide a single source for that data. The moment we step away from that, we are making a conscious choice to deviate from the basic definitions of MDM. However, a demand is a demand and it needs to be met; the IT side needs to architect for it, and we are making choices.

One of the choices is to stay clearly at the operational MDM level and count on the service layer to assemble the rest of the data on the fly from other systems, including legacy systems, and provide it to the requesting applications. There are vendor products which claim to do this, but often they ask for the data to persist in the product.

Often the choice will be to bring in the data needed and expand the MDM data store for these ‘additional data elements’ as well.

1.5      Scoping considerations and phasing beyond Layer two.


As discussed before, even though master data ends with layer two, that is quite inadequate to address many of the frequent business expectations and needs. For example, it does not make sense to have a simple customer record without knowing the accounts he has with the organization or the products he has subscribed to, or even the recent interactions or trouble tickets raised by the customer. Business users expect this by default, without even knowing that master data management does not really address these needs.

So in order to meet these needs we need a framework to explore the rest of the customer-centric data. We need to use this framework to identify data in the transactional and analytical domains, link it, and use it in conjunction with the master data in the operational master data store.

This is done by identifying the scope of the master data domain needed for the organization in the end state. So let us consider these as a set of layers of data which are either added to the master data store or made available along with master data via integration at the services layer.


1.6      The transactional data layer and MDM data store.




1.6.1     Replicated transactional data skeletons in master data store

The first step is to identify the types of data that are almost always needed along with master data, or at least some information about those types of data.

Eg:
1.    Customer accounts (the savings/checking/mortgage accounts they have, in the case of banking; the accounts on which they pay monthly recurring bills, in the case of utilities, etc.).
2.    Products of the customer (the products the customer currently has). Sometimes this overlaps with accounts, especially in the financial industry, but in many retail, utility and manufacturing domains there is a big difference.
3.    Users of the customer: the web users etc., especially in a business-to-business scenario, and the contact people of the customer.
These types of data are so interwoven with the transactional processes, and those processes maintain this data. Taking it out of the transactional systems may not be a valuable exercise, as it is costly and would mean too many integration difficulties. So a select few data elements of these are usually replicated from the transactional systems to the master data stores, or created as federated data components; this, however, is an architectural consideration.

These types of data, even though they occur throughout the various business processes in the company, will be sought after every time master data is requested. So it is a good idea to replicate some of the important data elements of this data into the master data store and keep the source system reference IDs in the master data store. Let us call this ‘transactional summary data’, or the transactional summary data layer. It is important to remember that this data will be physically present in the master data store.
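As an illustration, here is a minimal sketch of what such a transactional summary entry, physically held in the master data store and pointing back to the source systems, could look like; the systems, fields and values are assumptions.

# Illustrative sketch: a 'transactional summary' entry kept physically in the master
# data store: only a few key elements plus the source-system reference IDs needed to
# fetch full details later. Field names and systems are assumptions.

transactional_summary = {
    "enterprise_customer_id": "E-1",
    "accounts": [
        {"source_system": "billing", "source_id": "ACC-9001", "type": "wireless", "status": "active"},
    ],
    "products": [
        {"source_system": "ordering", "source_id": "PRD-3321", "name": "Smartphone Plan 5GB"},
    ],
    "users": [
        {"source_system": "web_portal", "source_id": "USR-17", "role": "account_admin"},
    ],
}
print(transactional_summary["accounts"][0]["source_id"])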

1.6.2     Identification of customer centric light weight transactions (a.k.a candidate federated components)



The next step is to identify the lightweight transactions, which are probably not as important or frequent as the first layer.

A good example is ‘identification and authorisation’ information, i.e. who has access to which data in the organization. The services layer usually checks this information before giving out data. It is also used for the users of web portals which the customers or their authorised representatives use to pay bills, interact with the organization etc. This is arguably master data, but it is not kept along with master data for security reasons.

Another example is survey information, which collects feedback from customers and is a very important database for understanding the pulse of the customer. It is not master data, as it can be collected frequently and is mostly considered part of a customer touchpoint. So this is transactional data which is collected and maintained in close association with customer data.


Some other examples:
·         Customer alerts: bill-due notifications etc. sent to the customers.
·         Customer calls: a call from the customer about a problem.
·         Trouble tickets raised by customers.


These are definitely transactional data, but they are lightweight and very customer-centric. Some or all of these are candidates to be considered as federated components of master data, i.e. separate standalone systems and databases which use the customer enterprise IDs from the master data hub.

1.6.3     Pure transactional entities. Integration via services layer.


The third and final step in analysing the transactional data is to take steps, or at least be mindful of, linking the master data and the enterprise IDs to the rest of the transactional systems, so that the services layer can integrate the data between the master data systems and the transactional data systems. The important thing is to make arrangements in the overall architectural standards and the design of the MDM solution to ensure smooth operation with this type of information.
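A minimal sketch of the kind of on-the-fly composition the services layer could do, pulling the master record from the hub and the transactional details from other systems by enterprise ID, is shown below; the functions standing in for the hub and the transactional systems are hypothetical stand-ins.

# Illustrative sketch: a services-layer call assembling a customer view on the fly,
# combining the master record (from the hub) with transactional data fetched by
# enterprise ID from other systems. The lookup functions are hypothetical stand-ins.

def get_master_record(enterprise_id: str) -> dict:
    return {"enterprise_id": enterprise_id, "legal_name": "Joe's Pizza Inc."}

def get_open_orders(enterprise_id: str) -> list[dict]:
    return [{"order_id": "O-981", "status": "shipped"}]

def get_open_trouble_tickets(enterprise_id: str) -> list[dict]:
    return [{"ticket_id": "T-42", "summary": "Billing dispute"}]

def get_customer_view(enterprise_id: str) -> dict:
    view = get_master_record(enterprise_id)            # Layer 1: master data from the hub
    view["orders"] = get_open_orders(enterprise_id)    # Layer 2: transactional, via services
    view["tickets"] = get_open_trouble_tickets(enterprise_id)
    return view

print(get_customer_view("E-1"))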


At this point the master data store at the target state becomes something like this.



1.7      The analytical data layer and MDM data store.




The output of customer-focused business intelligence, data mining, click stream analytics, social media analytics etc. is relevant at this layer. However, again, not all of that information is considered here; only the most summarised and important attributes, which are used widely in the enterprise, are considered.

A common technical property of these ‘selected’ attributes is the singular relationship of these data elements to the master data concept being addressed. For example, a customer satisfaction index could be measured on a scale of 1 to 10 and, say, recalculated every quarter; so at any point in time there is only one value. Or it could be an attribute which has one or only a few values.

These kinds of “business value vs volume vs frequency” considerations keep the principle of master data in check and keep the master data stores from becoming runaway systems that store and handle every kind of data for the enterprise.

At the analytical data layer, the types of data are not as well defined as in the two inner layers. It is derived, or say invented, information and varies greatly by industry and organization. So it could be hard to list down all the entities and then figure out what can be moved to the operational master data store; we need a different approach here.

It is better to provide a flexible mechanism for adding new data elements, and their values, to the master data store. Similar to the replication from the transactional layer, a few key data elements can be replicated to the master data store from the analytical layer.
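As an illustration of such a flexible mechanism, here is a minimal sketch of a name/value style extension holding a few analytical summary attributes against an enterprise ID; the attribute names, sources and values are assumptions.

# Illustrative sketch: a flexible name/value mechanism for adding analytical summary
# attributes (churn score, satisfaction index, value tier) to the master record
# without changing its schema each time. Names, sources and values are assumptions.
from datetime import date

analytical_attributes = {
    "E-1": {
        "customer_satisfaction_index": {"value": 8, "as_of": date(2011, 9, 30), "source": "survey_dw"},
        "churn_risk_score":            {"value": 0.12, "as_of": date(2011, 10, 15), "source": "churn_model"},
        "value_tier":                  {"value": "gold", "as_of": date(2011, 10, 1), "source": "cust_value_bi"},
    }
}

def get_analytical_attribute(enterprise_id: str, name: str):
    return analytical_attributes.get(enterprise_id, {}).get(name)

print(get_analytical_attribute("E-1", "churn_risk_score"))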

It is also imperative to have a clear understanding of the purpose of this layer. It is not for replicating all the customer-related (or say MDM-domain-related) analytical data. The purpose is to support the business applications handling transactional processes which need some of these analytical summaries for their transactional operations.

For example, a customer inbound call system (commonly known as IVR, interactive voice response) may want to give a higher priority to customers likely to switch to competitors, or to prioritize based on the customer's value to the organization. Most of this would be derived information from analytical systems. The popular option is to store it in the IVR database itself. However, there could be many other systems that would have used this data if it were readily available, like the ordering system when ordering new products for existing customers, or the marketing system which has plans to mail certain customers.

The purpose of this layer is to provide a few key data elements which represent these most important indicators and make them available to the operational processes. It will also hold the links (by storing the corresponding IDs or link information) to the source systems for retrieving more information in an automated fashion.
Overall, this avoids some unnecessary replication of the master data, and these systems access the ‘best available data’ directly.

It could be difficult to foresee these types of requirements in the initial phases of a master data management program. So this layer can be considered completely optional and can be added later on. However, enough thought needs to be put into designing the services layer etc. to include these in the future.



This brings the final end-state picture of the customer master data store to:


1.8      Summary


This represents a way in which a complete ‘enterprise’ view of master data can be developed in a systematized, modular and multi-phased approach, starting even with a registry model and leading to an enterprise MDM solution in an ideal and technologically optimal case. So we initially focus our resources on building or cleaning up the ‘Core’ aka Layer 1, then move on to operational MDM aka Layer 2, then to Layer 3, and so on. Let this be the starting-point approach. Depending on the needs, the skill sets available etc. in the organization, a direct adoption of an enterprise MDM could even be considered.

However as mentioned initially, these represent a data dependency driven sequence which results in a logical and optimal way of building/refining these components. But the business needs whether they are band-aid solutions or strategic business capabilities draws more weight. And that is okay. In a practical world, any MDM implementation should be okay with this reality. After all, all these are done to support the business. For example, if a trouble ticket resolution system is totally broken, we cannot say, wait until we create or clean up the ‘Core’ customer master data hub. And then fix the next layer and then the trouble ticket resolution. It is the modularisation, overall architecture and road mapping portions of master data management which solves that puzzle.