Data Integration Roadmap - Part One: The Logical Data Model

Our recent blog series on the data integration portfolio introduced a variety of new architectures that help the enterprise manage its data resources, including replication, virtualization, and cloud data warehousing. Organizations can now combine multiple data management solutions to address a variety of business sources and requirements. But the foundation of any enterprise data management portfolio remains the same: a roadmap to data management must be created that is independent of the underlying technology. This series of blogs will examine the three main elements of the data integration roadmap: the logical data model, master data management (including metadata management), and t...

ETL vs. ELT - What's The Difference and Does It Matter?

For most of data warehousing’s history, ETL (extract, transform and load) has been the primary means of moving data between source systems and target data stores. Its dominance has coincided with the growth and maturity of on-premises physical data warehouses and the need to physically move and transform data in batch cycles to populate target tables efficiently and with minimal resource consumption. The “heavy lifting” of data transformation has been left to ETL tools that use caching and DDL processing to manage target loads.

However, the data warehouse landscape is changing, and it may be time to reconsider the ETL approach in the era of MPP appliances and cloud-hosted DWs. These architectures are characteriz...

The Data Integration Portfolio - Part Four: Putting It All Together (In The Cloud)

This blog series has examined the hybrid data portfolio as a mix of technologies and approaches to a data foundation for the modern enterprise. We’ve examined a variety of strategies and technologies in data integration, including virtualization, replication and streaming data. We’ve shown that there is no “one size fits all” approach to an integrated data foundation, but instead have seen how a variety of disciplines that suit specific business and technical challenges can make up a cohesive data policy.

This final chapter puts it all together under the umbrella of “time-to-value" and its importance to the agile enterprise data platform. No matter what the technology, data strategies invariably involve movi...


What can EDA do?

EDA is the governed integration of disparate data-collection applications into a centralized “source of truth,” enhancing business acumen to improve and optimize decisions and performance. EDA allows health plans to easily:

  • Analyze provider performance in order to optimize pay-for-performance

  • Evaluate treatment patterns across providers and better tailor care management programs

  • Identify underlying opportunities for health education and intervention with high-risk members

Sustaining fixed EDA support and overhead costs allows an organization to continue growing and improving, providing an ongoing return on investment (ROI).

Why do you need an EDA?

  • How are your patient outcomes improving?

  • Are you doing the same things but expecting different results?

  • Have your past investments become your “legacy” systems?

  • Are you experiencing rising costs, long revenue cycles, and bad debt?

  • Who are the health care provider/vendors that are performing well?

  • What lost economic opportunity (LEO) have you missed?

  • Are you maximizing your ROI?

If you answered yes to any of the above questions, then you need EDA to give you the best assessment of your organization's performance, so that you can make the best decisions for your organization. Even the smallest improvement in everyday decision-making can improve corporate perfo...


Two DV's in Tandem: Data Vaults and Data Virtualization

There has been growing interest in data vaults as an architecture suited for the archiving and preservation of all enterprise-wide data. The normalized "hub-and-link" structure is suited for parallel loading from multiple operational systems, and the philosophy of "all the data, all the time," regardless of quality, lends itself to rapid population of the vault without cumbersome governance and oversight. However, the data vault is ill-suited as a source for reporting and analytic querying, since it separates business keys from their descriptive attributes, propagates many-to-many joins, and makes no distinction between high-quality and low-quality (or relevant and irrelevant) data. Companies...