In Part I of our series on creating an enterprise-class business intelligence architecture, we explored how business objectives are logically modeled and designed to allow a "360-degree view" of the enterprise. This exercise is done independently of the business systems and the data sets that they generate. In this installment, we dig into the data itself, examining how data flows through an organization, and what needs to happen to optimize that data flow to serve the business model that was created.
Every modern organization uses online processing systems to record and manage its business transactions. These systems may take the form of enterprise resource planning (ERP) systems encompassing financial transactions and budgeting, customer relationship management (CRM) systems, supply chain management, point-of-sale, and so on. Often these systems are separate deployments from multiple vendors. All of them create large amounts of "raw" data that enables the business's day-to-day operations but is difficult to use in reporting, analysis, and decision-making. The first step in optimizing data flow in an organization is understanding how this raw data moves from the source systems to its end-user consumption state.
An analysis of the current data flow should produce a diagram identifying the source systems, processes, and personnel involved. Paramount to this analysis is the identification of manual efforts, since every manual effort introduces a higher data quality risk. The final diagram (a sketch of one way to capture such a map follows the list below) should help both the business and IT quickly identify the following issues:
Redundancies
Inefficiencies
Shortcomings
Gaps
Insufficient technologies
Bottlenecks
Untapped data sources
Manual processes
Potential breakpoints
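For teams that want something more concrete than a drawing, the current-state map can also be captured in a small script. The sketch below is a hypothetical, minimal Python example: the system and process names are invented, and the only point it illustrates is how flagging manual hand-offs in the map makes data quality risks easy to surface.

```python
# A minimal sketch of a current-state data flow map, assuming a simple
# directed-graph model; the system and process names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str   # upstream system or process
    target: str   # downstream consumer
    manual: bool  # True if the hand-off involves manual effort

current_state = [
    Flow("ERP (finance)", "Monthly spreadsheet extract", manual=True),
    Flow("CRM", "Sales reporting database", manual=False),
    Flow("Point-of-sale", "Regional Access database", manual=True),
    Flow("Monthly spreadsheet extract", "Executive dashboard", manual=True),
]

def manual_handoffs(flows):
    """Return the hand-offs most likely to introduce data quality risk."""
    return [f for f in flows if f.manual]

if __name__ == "__main__":
    for f in manual_handoffs(current_state):
        print(f"Manual step: {f.source} -> {f.target}")
```

Even this level of formality makes redundancies, bottlenecks, and manual processes countable rather than anecdotal, which helps when comparing the current state against future proposals.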
The diagram will also serve as a baseline for comparison against future diagrams, making the benefits of any proposed solutions easy to identify.
The Future Data Flow
The key to optimizing an organization's data flow for enterprise-class business intelligence is integration. Moreover, data integration should occur as far upstream in the process as possible, so that data is standardized and of high quality before it is delivered to the business. Integration occurring as close as possible to the source systems goes a long way toward resolving the issues identified in mapping the present data flow.
In creating a map for the optimized future data flow, the point of integration should occur immediately after the source systems. The first touch of data governance occurs at this initial integration point as well, in the form of enterprise exception handling. As part of an automated integration process, initial rules are established to identify erroneous or anomalous data introduced at the manual point of entry into the source systems. This data is reviewed and, where possible, corrected by a first line of operational data stewards, whose findings feed back into new or modified rules (these stewards should be embedded in the departments that create the system data).
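As a rough illustration of what enterprise exception handling at the integration point can look like, the following Python sketch applies a small, hypothetical rule set to incoming records and routes failures to a steward review queue. The field names and rules are assumptions for the example only; in practice they would be defined by the business and refined by the operational data stewards over time.

```python
# A minimal sketch of exception handling at the initial integration point,
# assuming records arrive as dictionaries; field names and rules are
# hypothetical examples, not a prescribed rule set.

def missing_customer_id(record):
    return not record.get("customer_id")

def negative_quantity(record):
    return record.get("quantity", 0) < 0

# Initial rule set; stewards review exceptions and refine these rules over time.
RULES = {
    "missing customer id": missing_customer_id,
    "negative quantity": negative_quantity,
}

def route_record(record):
    """Return (destination, reasons): clean records flow on, exceptions go to stewards."""
    reasons = [name for name, rule in RULES.items() if rule(record)]
    return ("steward_review_queue" if reasons else "warehouse_staging", reasons)

if __name__ == "__main__":
    sample = {"customer_id": "", "quantity": 5}
    print(route_record(sample))  # ('steward_review_queue', ['missing customer id'])
```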
Following the initial point of integration is the analytical repository itself, usually in the form of a data warehouse. Here the full force of data governance must be brought to bear. Data management rules, set by the business and often the result of data profiling, are implemented at the DW layer, where data cleansing and standardization are enforced. This makes the DW the single system of reference for all analytical applications. Ideally, the data in the warehouse is then presented to the business through a semantic layer, itself a data governance platform, where the cleansed data meets common business nomenclature to create the business analytics lexicon.
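To make the DW-layer cleansing and the semantic-layer mapping more tangible, here is a minimal Python sketch. The column names, reference values, and business terms are hypothetical; the point is simply that standardization rules run before data lands in the warehouse, and the semantic layer translates physical columns into the shared business lexicon.

```python
# A minimal sketch of DW-layer standardization plus a semantic-layer mapping;
# the column names, reference values, and business terms are hypothetical.

COUNTRY_CODES = {"usa": "US", "u.s.": "US", "united states": "US", "canada": "CA"}

def standardize(record):
    """Apply business-defined cleansing rules before the record lands in the DW."""
    clean = dict(record)
    clean["country"] = COUNTRY_CODES.get(record["country"].strip().lower(), "UNKNOWN")
    clean["revenue"] = round(float(record["revenue"]), 2)
    return clean

# Semantic layer: physical warehouse columns mapped to common business nomenclature.
SEMANTIC_LAYER = {
    "cust_id": "Customer Number",
    "revenue": "Net Revenue",
    "country": "Customer Country",
}

def present(record):
    """Expose cleansed data to the business using the shared analytics lexicon."""
    return {SEMANTIC_LAYER.get(col, col): value for col, value in record.items()}

if __name__ == "__main__":
    raw = {"cust_id": "C-1001", "country": " U.S. ", "revenue": "1520.499"}
    print(present(standardize(raw)))
```

Because the business terms live in one mapping, every analytical application that reads from the warehouse sees the same names and the same cleansed values, which is what makes the DW the single system of reference.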
The ultimate objective in creating the future data flow is to produce a set of recommendations that enable efficient management of the business's analytical needs. The intent of the optimized data flow is to deliver the right information to the right people at the right time. It supports a standardized BI platform with centralized business rules, improving the accuracy of reporting and the consistency of the various analytical applications. In the next installment, we look at the BI platform and learn how BI standards and data governance are established to further serve business needs.
Questions?
If you’ve enjoyed the Enterprise BI Architecture content so far and can’t wait for the rest of the postings, please contact us. Our Senior Architects can schedule an Enterprise BI Architecture workshop that covers in great detail the practices and considerations involved in designing and implementing a cost-effective, high-performing, and modern BI system, based on our experience and industry insight. Reach us at info@pandatagroup.com.
About the Author: Joe Caparula
Joe is a co-founder of Pandata Group and an active thought leader. He works with our clients on a range of business intelligence and data management initiatives to ensure they meet their business goals. In addition to more than 15 years of experience with the SAP BI and Data Services platforms, he is a Certified Business Intelligence Professional at the practitioner level. Have a question for Joe? He can be reached at jcaparula@pandatagroup.com.