The last few blogs have dwelled on the importance of documenting standards from an overall knowledge management point of view. The last three covered why it is better to pace yourself and develop high-quality standards rather than run one intense project, why documenting is important from a corporate learning point of view, and how data standards have three levels: the strategy / data family level, the data element standard, and the control documentation.
Now your next question might be “When is the right time for the data steward to engage your business experts on completing standards?”
The short answer: as soon as the repository is constructed and populated with all previous knowledge, and the steward knows enough to manage the prioritization.
Three things must come together to create a complete, well-defined knowledge base for your data standards: all of your historic documentation, a freshly completed data profile, and finally the tacit knowledge of your experts. But when is the best time to pull your experts into the process? Only the historical artifacts and the new repository count as explicit (leverageable, transferable) knowledge.
A previous blog discussed how corporate knowledge is enhanced only when tacit (experiential) knowledge is "extracted" from the experts and documented (codified into explicit knowledge) to serve as a baseline for future growth. This also prevents the risk (really, the certainty) of knowledge loss as people change positions or companies.
Attempting to engage your experts too early in a "standards repository" project will, at best, be met with resistance to reinventing the wheel; at worst, it will produce a halfhearted re-casting of the historical documents into the new format with little regard for any new knowledge the expert has gained since the last attempt at documentation. And any new uses for data elements, or new values in use, that the expert does not recall will never surface if you engage without doing your homework.
In other words, it will waste everyone’s time.
The data steward generally has responsibility for accurate, complete, and up-to-date data standards that the enterprise can leverage. These data standards support knowledge transfer for everything from training new employees, retraining existing employees on a new process, and supporting new projects, to implementing new software, plus a host of other needs. All too often, however, these standards get hopelessly out of date or diverge into many similar but conflicting, incomplete copies. A data steward cannot possibly know how every data element is used by every business in every country.
Therefore, the steward must engage the business experts to keep the standards abreast of each change in the business that impacts, or is impacted by, the standard. Granted, an all-knowing data steward may be possible in a small, simple company, but in today's complex global enterprises, an omniscient data steward is rare.
And what happens when the all-knowing expert hits the lotto and runs off to an island paradise? Or gets downsized, or has health issues? Or the business is sold and the new stewards must get up to speed quickly? Data standards must always be up to the task of transferring knowledge to the new set of experts.
We discussed in a previous blog how the tacit knowledge of a business expert is always growing, and how important it is to prevent that knowledge from being lost when individuals change jobs, get reorganized, or win the lottery. The business's tacit knowledge must be converted to explicit knowledge to be passed on.
The process I have seen succeed most often is for the data steward to first collect all the historic documents into a single repository, using the complete physical model as the common link. The data element metadata supplies the record-level baseline into which the other documents are mapped. A future blog will explain why the physical model is important at this stage.
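A minimal sketch of what such a repository might look like, using the physical model (table, field) as the key. The table and field names, document sources, and structure here are hypothetical illustrations, not a prescribed schema:

```python
# Repository keyed by the physical data model: each data element record
# is the anchor to which historic documents are mapped.
# Table/field names and document sources are hypothetical examples.
repository = {
    ("MARA", "MATKL"): {
        "description": "Material group",
        "sources": ["legacy standard v2", "ERP blueprint", "MDM training deck"],
        "standard": None,  # to be completed later with the business experts
    },
    ("MARA", "BISMT"): {
        "description": "Old material number",
        "sources": [],  # no historic documentation found for this field
        "standard": None,
    },
}

# Fields with no mapped sources have no explicit knowledge on record yet.
undocumented = [key for key, rec in repository.items() if not rec["sources"]]
print("No historic documentation:", undocumented)
```

Keeping the physical model as the key means every historic document, profile result, and expert note lands on the same record, instead of spawning another conflicting copy.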
Sources of historic knowledge that can populate this repository include the existing standard and your ERP blueprint, but the repository can be greatly enriched with training documentation for master data maintenance and other process training documents. The first observation at this point will be that many fields have no standard or rules. This is fine if those fields are truly not in use; often, however, you will find new fields being used without the benefit of standards or data governance approvals.
Much of this initial information will be incomplete and internally conflicting, but it will serve as the initial baseline for what is explicitly known about those data elements.
This brings up why a fresh data profile of all the fields is so important. Profiling at this stage can easily determine which fields are used 75-100% of the time, which fields are used only on specific occasions, and which fields are never used. Records for never-used fields should not be deleted from the data standards repository; instead, mark them as inactive and exclude them from repository analysis. Once you have learned that a field is not used, do not walk away from that new knowledge. Document it for future reference.
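A fill-rate check of this kind is straightforward to sketch. The sample records and field names below are hypothetical, and the thresholds simply mirror the rough bands described above:

```python
# Classify fields by fill rate: heavily used, conditionally used, or never used.
# Sample records and field names are hypothetical illustrations.
records = [
    {"material_group": "FERT", "old_material_no": "", "purchasing_value_key": "1"},
    {"material_group": "ROH",  "old_material_no": "", "purchasing_value_key": ""},
    {"material_group": "FERT", "old_material_no": "", "purchasing_value_key": "2"},
    {"material_group": "HALB", "old_material_no": "", "purchasing_value_key": "1"},
]

def fill_rate(field):
    """Fraction of records in which the field is populated."""
    filled = sum(1 for r in records if r.get(field, "").strip())
    return filled / len(records)

for field in records[0]:
    rate = fill_rate(field)
    if rate >= 0.75:
        status = "core field - standard required"
    elif rate > 0:
        status = "conditional use - document when and why"
    else:
        status = "never used - mark inactive, do not delete"
    print(f"{field}: {rate:.0%} filled -> {status}")
```

Note that the never-used field is flagged, not dropped: the finding itself becomes documented knowledge in the repository.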
The second piece of key information that a quick profiling of the data can easily yield is the list of "values in use" for those data elements that are based on reference data (e.g., drop-down lists or check boxes). Comparing the values in use to the allowed values in the original standards and training documents will reveal the extent to which the value definitions need to be expanded and documented. Your standard should state both when and why certain values are used. At this initial stage, avoid profiling for conditional dependencies; the complex profiling is far easier once the repository is populated and able to provide guidance.
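The values-in-use comparison is simple set arithmetic. The value codes below are hypothetical stand-ins for a reference-data field:

```python
# Compare values actually in use against the allowed values in the
# documented standard. The value codes here are hypothetical examples.
allowed_values = {"01", "02", "03"}          # from the existing standard
values_in_use  = {"01", "02", "04", "ZZ"}    # from the fresh data profile

# In the data but not in the standard: definitions must be written.
undocumented = values_in_use - allowed_values
# In the standard but never used: candidates to retire or explain.
unused = allowed_values - values_in_use

print("Need definitions (when/why):", sorted(undocumented))
print("Allowed but never used:", sorted(unused))
```

Both gaps feed the expert workshops: the undocumented values need a "when and why," and the unused allowed values need a decision on whether they still belong in the standard.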
Assessing the records now in the repository for completeness, while also assessing the individual data elements in your ERP for data accuracy risk, will give the steward clear direction for the expert workshops. If you assess one field incorrectly, it does not matter in the long run, because it is important to document every field even if the standard is "Do not use this field"!
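One simple way to turn those two assessments into workshop direction is to combine them into a ranking. The scoring formula, field names, and scores below are hypothetical; any monotonic combination of "how incomplete" and "how risky" would serve:

```python
# Rank fields for the expert workshops by combining documentation
# completeness (lower = more work needed) with data accuracy risk
# (higher = more urgent). Field names and scores are hypothetical.
fields = [
    {"name": "plant",           "completeness": 0.9, "risk": 0.2},
    {"name": "valuation_class", "completeness": 0.3, "risk": 0.9},
    {"name": "old_material_no", "completeness": 0.1, "risk": 0.1},
]

def workshop_priority(field):
    """Incomplete AND risky fields score highest."""
    return (1 - field["completeness"]) * field["risk"]

for f in sorted(fields, key=workshop_priority, reverse=True):
    print(f"{f['name']}: priority {workshop_priority(f):.2f}")
```

Since a misjudged score only shifts a field's place in the queue, an imperfect ranking costs little, which is exactly why the assessment need not be perfect.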
Companies that stress over which fields to write a standard for first are still having that argument after the companies that "just do it" have completed their initial load and are well on their way to a sustain phase.
One word of caution: do not try to deal with all the high-priority fields in the first meeting. Instead, take 80% easy, low-controversy fields and 20% medium-hard fields before tackling the tough ones. Remember, you should have a standard for every field, so starting with algebra is better than jumping straight into advanced calculus. Algebra before calculus means your team will not have to fight the hard battles before learning to work together.
Finally, this is your company's knowledge to document so that you can grow and share it to improve your bottom line. Some parts of the data standard are standard values dictated by your ERP; most aspects of a standard are unique to your culture.
In summary, the evolution of data standards at your company runs from scattered historic documents, to a populated repository baseline refined by profiling, to expert-validated standards maintained in a sustain phase.
Having a proven set of templates designed for your intranet, together with an approach that is thorough but flexible, can save months of up-front time.
Your knowledge is your business, RuleBase helps you to manage it. Let us know if you would like to take a tour.
Richard A. King