Once you start using Power Pivot for analysis, the “portable formulas” benefit kicks in. What used to require 50 formulas copied, pasted, and filled down into thousands of cells, with arcane and unreadable references, now becomes 5-6 formulas that exist only one time *each* – no copies at all. Oh, and those formulas adapt to each new report/analysis without being rewritten.
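To make that concrete, here is a minimal sketch of what “one formula, defined once” looks like in Power Pivot. The table and column names (Sales, Amount, Cost) are hypothetical, but the pattern is standard DAX: each measure is written a single time and then reused in any pivot, sliced by whatever fields you drop on rows, columns, or filters.

```dax
-- Hypothetical measures defined once on a Sales table.
Total Sales := SUM ( Sales[Amount] )
Total Cost  := SUM ( Sales[Cost] )
Profit      := [Total Sales] - [Total Cost]
Margin %    := DIVIDE ( [Profit], [Total Sales] )
```

In classic Excel, each of those calculations would be re-entered or filled down once per row of every report. Here, the pivot engine re-evaluates the same four definitions in whatever filter context the report creates, so an error can only live in one place – and gets exercised every time you pivot.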

The surface area for random errors (that “one error per 100 actions” stat) drops dramatically, while the chances of an error “hiding” undetected shrink at the same time: you are inherently testing every Power Pivot formula over and over as you manipulate pivots.

I am comfortable saying that a well-constructed Power Pivot model is in the ballpark of 100x less error prone than an optimally constructed Excel model for the same purpose, assuming the problem is of reasonable complexity. Power Pivot is not suited for all tasks of course, but for reporting and analysis, it virtually eliminates the classes of errors that EuSpRIG and others have documented.

I’d love to see Dr. Panko do some follow-up research on Power Pivot.

Another important reason why Excel sucks (the most important, in my view) is that users do a lot of things from scratch (whether they need to or not), creating loads of opportunities for error. Creating a formula is an opportunity for error, selecting a range is another, and adding even one data point is another. Since one thing feeds another, these errors propagate. It’s ugly!

A resource for loads of excellent information from serious research on spreadsheet errors is “What We Know About Spreadsheet Errors” (http://bit.ly/sserrors) by Raymond Panko of the University of Hawaii.
