The Warehouse

Blog & Resources

Data Sharing and the “Internet of Databases”

The rise of personal computing in the 1980s and 90s led to a boom in business productivity that was transformative in its scope. Suddenly businesses had the power of what were formerly room-size computers on their desktops. This period saw the rise of the “knowledge worker” and the digitization of business.

But it was the impact of the internet that really drove business to the next level. All of those isolated desktop computers were now connected via a world wide web to enable communication, marketing and commerce without barriers. An open exchange of innovation and ideas fostered rapid growth, collaboration, and the global marketplace. The inter...

Matillion Data Loader: The Fast, Easy (and Free!) Way to Populate Your Cloud Data Warehouse

There is no question that companies are moving their on-premises data warehouses to the cloud at an increasing pace. The benefits of a cloud data warehouse (instant scalability, minimal up-front costs, rapid deployment, ubiquitous access, etc.) are being fully realized and appreciated by a growing number of enterprises both large and small. The major players in cloud DW (Snowflake, AWS Redshift, Azure Synapse, Google BigQuery) are all vying for market share, and the customers are seeing benefits from the varying and competitive costs and features of each platform.

But how do you get your data to the cloud? Isn’t the time and cost comparable to any data load project, whether it is on-pre...

Rethinking the Data Vault for Real-time Data

The data vault has long been viewed as a model best suited for historical and archival enterprise data. Its “insert only,” business-process approach to raw, unadulterated data is ideal for low-maintenance storage of all enterprise-generated information from all systems. Use cases for data vaults have traditionally revolved around historical tracking and auditing; however, the perception has largely been that the model is ill-suited to analytics due to its many-to-many relationships and dispersed structure. In fact, data vaults are often used as a “lightly modeled stage” for traditional star-schema data warehouses.

But the data vault may be best suited for a use case that...
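The “insert only” approach described above can be sketched in a few lines. This is a minimal illustration, not Data Vault methodology in full: the table and column names (`hub_customer`, `sat_customer`, `load_seq`) are invented for the example, and a simple sequence number stands in for the load timestamp a real vault would record.

```python
import sqlite3
import itertools

# Insert-only data vault sketch: a hub row holds the business key, a
# satellite holds descriptive attributes, and a changed attribute arrives
# as a NEW satellite row -- nothing is ever updated or deleted.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_customer (
    customer_key TEXT PRIMARY KEY,          -- the business key
    load_seq     INTEGER NOT NULL
);
CREATE TABLE sat_customer (
    customer_key TEXT NOT NULL REFERENCES hub_customer(customer_key),
    load_seq     INTEGER NOT NULL,          -- stands in for a load timestamp
    name         TEXT,
    city         TEXT,
    PRIMARY KEY (customer_key, load_seq)    -- history, never overwrite
);
""")
seq = itertools.count(1)

def load_customer(key, name, city):
    """Append-only load: register the key once, always append attributes."""
    n = next(seq)
    conn.execute("INSERT OR IGNORE INTO hub_customer VALUES (?, ?)", (key, n))
    conn.execute("INSERT INTO sat_customer VALUES (?, ?, ?, ?)",
                 (key, n, name, city))

load_customer("C001", "Acme Corp", "Madison")
load_customer("C001", "Acme Corp", "Chicago")   # a change = a new row

# Every prior state is preserved; "current" is simply the latest load_seq.
history = [r[0] for r in conn.execute(
    "SELECT city FROM sat_customer WHERE customer_key = 'C001' ORDER BY load_seq")]
```

Because rows are only ever appended, loads are cheap and auditable, which is exactly why the model suits low-maintenance historical storage, and why its dispersed, many-to-many structure makes direct analytics harder.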

Where's the "T"? A look at ETL vs. ELT

In a previous blog post, we examined the differences between traditional ETL (extract, transform and load) and ELT, where the “heavy-lifting” of data transformation is handled by the robust and scalable (usually cloud-hosted) target platform. In today’s modern cloud data warehouse environment, ELT maximizes the speed at which data is staged and ingested, while leveraging massive computing power in the cloud to cleanse, aggregate and otherwise prepare that data for general consumption. But where is the best place to manage that transformation piece? Is it using cloud-friendly ETL tools, or is it within the management consoles of the cloud DWs themselves?

A common perception a...
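The load-first, transform-second ordering that distinguishes ELT can be shown in miniature. In this sketch SQLite stands in for the cloud warehouse, and the table names (`raw_orders`, `orders_by_region`) are invented for illustration; the point is that cleansing and aggregation happen in SQL on the target platform, after the raw data has landed.

```python
import sqlite3

# ELT sketch: Extract + Load first (raw, untouched), then Transform
# inside the "warehouse" -- here SQLite stands in for a cloud platform
# such as Snowflake or BigQuery.
wh = sqlite3.connect(":memory:")

# 1. Load: stage records exactly as they arrive, no cleansing on the way in.
wh.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, region TEXT)")
wh.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", [
    ("1001", "250.00", "midwest"),
    ("1002", "  75.50", "Midwest"),   # messy whitespace and casing land as-is
    ("1003", "100.00", "EAST"),
])

# 2. Transform: cleansing and aggregation run on the warehouse's own
#    compute, after the data has landed.
wh.execute("""
CREATE TABLE orders_by_region AS
SELECT UPPER(TRIM(region))            AS region,
       SUM(CAST(TRIM(amount) AS REAL)) AS total
FROM raw_orders
GROUP BY UPPER(TRIM(region))
""")
totals = dict(wh.execute("SELECT region, total FROM orders_by_region"))
```

In traditional ETL, the trimming, casing, and casting above would happen in a separate engine before the load; in ELT they are pushed down to the target, which is where the scalable compute lives.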

Talend v. Matillion for Cloud Migration

The two leading ETL/ELT tools for cloud data migration are Talend and Matillion, and both are well-positioned for moving and transforming data into the modern data warehouse. So if you’re moving to any type of cloud-hosted DW, whether it is a cloud-dedicated warehouse such as Snowflake, or part of a larger cloud platform such as AWS Redshift, Azure SQL Data Warehouse or Google BigQuery, which tool should you use to move your existing on-prem data?

Both Talend and Matillion can source any kind of on-prem data and land it in a cloud-hosted data environment. They can also move data to and from AWS’s cloud data-storage S3 as well as Azure’s Blob storage (which can be used to s...

Data Integration Roadmap - Part One: The Logical Data Model

Our recent blog series on the data integration portfolio introduced a variety of new architectures that help the enterprise manage its data resources, including replication, virtualization and cloud data warehousing. Organizations are now able to integrate multiple data management solutions to address a variety of business sources and requirements. But it is important to understand that the foundation of any enterprise data management portfolio remains the same: a roadmap to data management must be created that is independent of the underlying technology. This series of blogs will examine the three main elements of the data integration roadmap: the logical data model, master data ma...

The Data Integration Portfolio - Part Four: Putting It All Together (In The Cloud)

This blog series has examined the hybrid data portfolio as a mix of technologies and approaches to a data foundation for the modern enterprise. We’ve examined a variety of strategies and technologies in data integration, including virtualization, replication and streaming data. We’ve shown that there is no “one size fits all” approach to an integrated data foundation, but instead have seen how a variety of disciplines that suit specific business and technical challenges can make up a cohesive data policy.

This final chapter puts it all together under the umbrella of “time-to-value" and its importance to the agile enterprise data platform. No matter what the techn...

The Data Integration Portfolio - Part Three: Streaming Data

In previous installments of this series we examined recent trends in data integration, specifically data replication and synchronization, as well as data abstraction through virtualization. Taken individually, these approaches are suited to high-latency data requirements around historical reporting and trending analysis. In this chapter, we look at real-time streaming data, and how it can complement high-latency data integration approaches to create a complete enterprise data foundation.

Streaming data delivery is often perceived to be the "holy grail" of data integration in that it provides users with immediate and actionable insight into current business operations. In reality, st...

The Data Integration Portfolio - Part Two: Data Replication and Synchronization

In our previous installment on the hybrid data integration portfolio, we looked at the role of data virtualization in a unified, multi-platform approach to creating a managed enterprise data foundation. In this chapter, we examine data replication and synchronization, i.e. the ongoing copying of data (without massaging or transformation) from one physical location to another, usually in conjunction with change data capture (CDC).

Data replication is often considered ETL without the "T", though where ETL is usually a batch-based delivery process, replication is often driven by "update-upon-change". Through this process, the target database only updates when changes occur to the source. Often r...
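The update-upon-change idea reduces to computing a change set and shipping only that. The sketch below illustrates the concept with an in-memory diff of two snapshots; real CDC tools instead read the database's transaction log, and the function names here (`diff_changes`, `replicate`) are invented for the example.

```python
# Replication sketch: "ETL without the T", driven by update-upon-change.
# Rows are copied verbatim (no transformation); only rows that actually
# changed since the last sync generate any traffic.

def diff_changes(source: dict, target: dict):
    """Compute the change set needed to make target match source.
    Keys are primary keys, values are row tuples."""
    inserts = {k: v for k, v in source.items() if k not in target}
    updates = {k: v for k, v in source.items()
               if k in target and target[k] != v}
    deletes = [k for k in target if k not in source]
    return inserts, updates, deletes

def replicate(source: dict, target: dict) -> int:
    """Apply only the changes; untouched rows are never re-copied."""
    inserts, updates, deletes = diff_changes(source, target)
    target.update(inserts)
    target.update(updates)
    for k in deletes:
        del target[k]
    return len(inserts) + len(updates) + len(deletes)

target = {}
replicate({"r1": ("Ann", 10), "r2": ("Bob", 20)}, target)      # initial copy
n = replicate({"r1": ("Ann", 11), "r2": ("Bob", 20)}, target)  # only r1 changed
```

After the second pass, `n` is 1: the unchanged row `r2` costs nothing, which is what makes change-driven replication so much cheaper than batch reloads.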



Pandata Group, LLC

316 W. Washington Avenue

Suite 525

Madison, WI 53704

(877) 350-5192

Pandata Group © 2020


WeWork/ Fulton Market

220 N. Green Street

Second Floor

Chicago, IL 60607