A fairly common data model design pattern can be described as follows:
- Given a set of criteria that are numerically scored
- These scored criteria are grouped together in arbitrary sets and the group scored via an aggregation of criteria scores
- These groups of scores are further grouped into arbitrary sets (super groups) and the super group scored via an aggregation of group scores
For example, a restaurant rating system might use satisfaction questions scored from 0-100. These questions could be grouped into questions about "Service", "Atmosphere", "Quality", "Value" and "Cleanliness", where each group's score is simply the average of its individual question scores. These five groups are then grouped into the super groups "Restaurant" and "Food", where the score for each super group is the minimum score of its related sub-groups.
The Data Model
The data model to implement this design is relatively straightforward.
[SuperGroups] 1-* [Groups2SuperGroups] 1-1 [Groups] 1-* [Attributes2Groups] 1-* [AttributeScores]
We will use the following columns and data to build a sample data model via "Enter Data" queries:
AttributeScores contains two columns, "Attribute" and "Score".
Groups contains a single column, "Group".
SuperGroups contains a single column, "SuperGroup".
Attributes2Groups contains two columns, "Group" and "Attribute".
Groups2SuperGroups contains two columns, "Group" and "SuperGroup".
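These tables can be created via "Enter Data", or sketched as calculated tables with DATATABLE. As a minimal, hypothetical example (the attribute names and score values below are illustrative only, not the original sample data), AttributeScores might be entered as:

AttributeScores =
DATATABLE (
    "Attribute", STRING,
    "Score", INTEGER,
    {
        { "Q1", 80 },
        { "Q2", 95 },
        { "Q3", 70 }
    }
)

The other four tables follow the same pattern with their respective columns.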
A Simple Solution
Create an AverageScore column in Groups:
AverageScore = CALCULATE(AVERAGE(AttributeScores[Score]),RELATEDTABLE(AttributeScores))
Groups data is now:
Create a MinScore column in SuperGroups:
MinScore = CALCULATE(MIN(Groups[AverageScore]),RELATEDTABLE(Groups))
SuperGroups data is now:
What happens when we change our core AttributeScores table to include an additional "Thing" column, so that we can store multiple "things" in the same table?
Note that the first 10 entries have remained the same other than the addition of "Thing1" in the "Thing" column. Now we have a representation of the same survey or scoring conducted on multiple "things".
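As a hypothetical sketch (values illustrative only, not the original sample data), the extended AttributeScores table could be entered as:

AttributeScores =
DATATABLE (
    "Thing", STRING,
    "Attribute", STRING,
    "Score", INTEGER,
    {
        { "Thing1", "Q1", 80 },
        { "Thing1", "Q2", 95 },
        { "Thing2", "Q1", 60 },
        { "Thing2", "Q2", 75 }
    }
)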
Houston, We Have a Problem
By simply adding this additional field, we might think that we have not changed the data model to such a degree that our simple solution will suffice. No such luck.
If we simply put our Group and AverageScore in a table visual and add a slicer for "Thing", we get the same numbers for each Thing, when we should instead get different numbers for each Thing.
Similarly, if we add a simple table visualization of SuperGroups to show "SuperGroup" and "MinScore", regardless of slicer selection, these remain:
If we try using row-level security instead of slicers, the results are the same. So what is going on? Calculated columns are essentially static in nature: they are computed upon data refresh and are not updated by filter context. Note that I realize that if you use the original "Score" value from AttributeScores and choose the dynamic "Average" aggregation, it works at the group level, but this only holds for very simplistic aggregations.
A Slightly More Complex Solution
In order to solve this problem, we switch from calculated columns to measures.
Back to the Groups table (or anywhere), we can create a measure defined such that:
AverageScoreMeasure = CALCULATE(AVERAGE(AttributeScores[Score]),RELATEDTABLE(AttributeScores))
This is EXACTLY the same DAX calculation as we had for our calculated column. However, the calculation for MinScore is dramatically different. For MinScore as a measure, we are trying to find the minimum of a measure, and thus we must return the AverageScoreMeasure value within the correct context. To do this, we define MinScoreMeasure as:
MinScoreMeasure = MINX ( SUMMARIZE ( Groups, Groups[Group], "AVG", [AverageScoreMeasure] ), [AVG] )
What is going on here is that we are creating a summarized table using the SUMMARIZE function. We are returning two columns: the first is a grouping by our Groups[Group] column, and the second is our AverageScoreMeasure calculation in a column called "AVG". We then take the MIN of the AVG column.
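As an aside, an equivalent way to write this measure uses ADDCOLUMNS over the summarized table, a pattern often recommended because it avoids evaluating a measure inside SUMMARIZE itself. This variant is my suggestion, not part of the original solution:

MinScoreMeasure =
MINX (
    ADDCOLUMNS (
        SUMMARIZE ( Groups, Groups[Group] ),
        "AVG", [AverageScoreMeasure]
    ),
    [AVG]
)

Both versions return the minimum of the per-group averages under the current filter context.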
Now, when we create table visualizations using our "Group" and "AverageScoreMeasure" in one table and our "SuperGroup" and "MinScoreMeasure" in another table, we receive the correct results for "Thing1" and "Thing2", whether using slicers or RLS.
The design pattern described here applies to just about any scenario in which scores or rankings need to be aggregated and then aggregated again. The solutions described in this article handle both simple scenarios and more complex scenarios involving multiple base scoring sets of data. This scenario demonstrates the limitations of calculated columns and the corresponding superiority of measures for handling aggregation roll-ups.