Should You Trust Analytics II: Data Provenance

The process of turning data into information that can be presented simply can be incredibly complex.  I believe this irony exists primarily because most available data is not formatted for analysis.  Building a large, custom data set with the exact list of features you want to analyze (a Design of Experiments approach) can be very expensive.  If you have pockets as deep as Big Pharma’s or are ready to dedicate years to a PhD, it’s definitely a great way to go.

Our last blog post on trusting data analytics explored how the industry practice of “data cleaning” can spoil the reliability of an entire analysis.  But problems can also occur with perfectly clean, complete, and reliable data.  In this post we will explore the topic of data provenance and how the complexities of data storage can sabotage your data analytics.


The truth is… business data is structured and formatted for business operations and efficient storage.  Observations are usually:

  • Recorded when it is convenient to do so, resulting in time increments that may not represent the events we actually want to measure;
  • Structured efficiently for databases to store and recall, resulting in information on real-world events being shattered across multiple tables and systems; and
  • Described according to the IT department’s naming conventions, resulting in the need to translate each coded observation (see the sketch after this list).
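
To make that concrete, here is a minimal sketch in R of reassembling such data for analysis.  The table and column names (orders, customers, cust_cd, and so on) are hypothetical, invented for illustration:

    # Two tables, split the way a transactional database stores them
    orders <- data.frame(
      ord_id  = c(101, 102, 103),
      cust_cd = c("C01", "C02", "C01"),   # coded per IT naming conventions
      ord_ts  = as.Date(c("2015-05-01", "2015-05-03", "2015-05-07"))
    )
    customers <- data.frame(
      cust_cd   = c("C01", "C02"),
      cust_name = c("Acme Corp", "Globex")
    )

    # Join the shattered pieces back together and translate the coded field
    analysis_set <- merge(orders, customers, by = "cust_cd")

    # Roll ad hoc timestamps up to the week, so the time increments better
    # match the events we actually want to measure
    analysis_set$ord_week <- format(analysis_set$ord_ts, "%Y-%U")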


Should You Trust Analytics III: Analytics Process

Lack of trust in source data is a common concern with data analytics solutions. A friend of mine is a product manager at a large, NYSE-traded, multi-billion-dollar software company that uses analytics for insights into product sales. He told me the first thing executives and managers do when new analytic products are released is… manually recalculate key metrics. Why would a busy manager or executive spend valuable time opening a spreadsheet to recalculate a metric? Because he or she has been burned before by unreliable calculations.

I’ve been exploring the subject of unreliable data since a recent survey of CEOs revealed that only one-third trust their data analytics.  I have also been studying for an exam next week to earn the Certified Analytics Professional designation and formalize my knowledge of the subject.  While studying each step of the analytics process defined by INFORMS, the organization that sponsors the Certified Analytics Professional exam, I’ve considered how things could go wrong and result in an unreliable outcome.  In the flavor of Lean process improvement (an area I specialized in earlier in my career), I pulled those potential pitfalls together in a fishbone diagram:

[Figure: Analytic errors fishbone diagram]


Visualized Correlations

One interesting approach to root cause analysis is to correlate descriptive variables about errors with one another.  I created the correlogram below to visualize every possible combination of correlation coefficients among observations from a large information system.  At the intersection of two variable numbers is a square that represents the correlation of those two variables across hundreds of observations.

[Figure: Correlogram of pairwise correlation coefficients]

Blue shows a positive correlation, red represents a negative one, and darker saturation signifies a stronger relationship.  What trends might give insight into the root causes?  I chose to explore variables 14 (a vertical blue trend), 25 (horizontal), and 27 (horizontal).

The analysis was performed in Excel and also in R using the correlogram package.
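
For readers who want to reproduce the R side, here is a minimal sketch.  It assumes a hypothetical data frame df of numeric observations, and it substitutes the widely used corrplot package for the plotting step; the original analysis may have used a different package.

    # install.packages("corrplot")   # one-time setup
    library(corrplot)

    # Every possible pairwise correlation coefficient among the variables
    m <- cor(df, use = "pairwise.complete.obs")

    # Color-coded correlogram: blue = positive, red = negative,
    # darker saturation = stronger relationship
    corrplot(m, method = "color")

    # Rank the relationships for a variable of interest, e.g. column 14
    sort(m[, 14], decreasing = TRUE)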