Testing in data warehouse projects

Recently Arun Sundararaman of Accenture posted an article on DWBI testing on Information-management.com. The article can be found here. It brings up a very timely discussion on the state of testing methodologies for data warehousing projects.

DWBI testing is so far the least explored area of the data warehousing domain. The majority of data warehousing projects that fail do not fail in the implementation phase; they mostly fail in the user acceptance phase. This is largely because end users often find their data warehouse generating unacceptable reports (or reports with numbers outside their "tolerance" limits) when compared against known business scenarios. Whatever the root cause, proper testing is the only way of detecting and fixing those issues.
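As a simple illustration (the table and column names below are hypothetical, not from the article), a user acceptance check of this kind might compare a warehouse report figure against an independently known business number and flag any variance beyond an agreed tolerance:

    -- Hypothetical UAT-style check: compare the warehouse's monthly sales
    -- figure against a business-supplied control total and return any month
    -- where the variance exceeds an agreed 1% tolerance.
    SELECT w.month_id,
           w.total_sales    AS warehouse_sales,
           c.expected_sales AS business_control_total
    FROM   monthly_sales_agg w
    JOIN   business_control_totals c
           ON c.month_id = w.month_id
    WHERE  ABS(w.total_sales - c.expected_sales) / c.expected_sales > 0.01;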

Unfortunately, in the current data warehousing context, the only viable method of testing is manual SQL scripting. Metadata management tools fail miserably when "SQL Override" or stored procedures are used in the ETL phase. But that is not the only real problem with automated testing. The main issue is that we are yet to come up with a generic testing strategy for data warehouse data reconciliation.
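A typical manual reconciliation script of this kind (schema, table, and column names hypothetical) compares row counts and a measure-column total between the source and the warehouse target, returning a row only when they disagree:

    -- Hypothetical source-to-target reconciliation: after the load, row
    -- counts and the sum of a measure column should agree between the
    -- source table and the warehouse fact table. Any row returned here
    -- signals a discrepancy to investigate.
    SELECT s.row_cnt AS src_rows,
           t.row_cnt AS tgt_rows,
           s.amt_sum AS src_amount,
           t.amt_sum AS tgt_amount
    FROM  (SELECT COUNT(*) AS row_cnt, SUM(sale_amount) AS amt_sum
           FROM   src_schema.sales_transactions) s
    CROSS JOIN
          (SELECT COUNT(*) AS row_cnt, SUM(sale_amount) AS amt_sum
           FROM   dw_schema.fact_sales) t
    WHERE  s.row_cnt <> t.row_cnt
        OR s.amt_sum <> t.amt_sum;

Scripts like this have to be hand-written table by table, which is exactly why the lack of a generic, tool-supported reconciliation strategy hurts.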

I believe it is high time that data warehousing practitioners, both individuals and organizations, took data warehouse testing seriously and developed a common methodology for it.

