I am doing a deep dive into the Vertipaq engine and I have a few questions regarding a concrete use case.
I have a .pbix file with two Fact Tables:
- Raw (300k rows and 17 columns) with the following structure:
|London|644 544|417 834|961 806|367 504|83 035|893 560|
|Vancouver|717 302|153 883|616 675|997 484|779 110|672 422|
|Copenhagen|643 329|17 978|215 068|935 608|270 478|827 192|
|London|279 920|224 229|774 350|764 692|770 781|929 427|
|Vancouver|158 089|104 762|209 378|22 990|938 295|93 513|
|Vancouver|436 206|70 139|223 948|482 096|136 600|930 017|
|Copenhagen|290 725|986 045|260 513|714 680|491 096|293 309|
- Unpiv, which is the unpivoted version of the 'Raw' table above (4.8 million rows and 3 columns): the city key column, plus an Attribute/Value pair for each of the original value columns.
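To make the shape of the two tables concrete, here is a small sketch of the unpivot operation using pandas (the column names `City`, `Attribute`, and `Value` are illustrative, not taken from the actual model):

```python
import pandas as pd

# Toy stand-in for the 'Raw' table: one city column plus a few numeric
# columns (the real table has 17 columns and 300k rows).
raw = pd.DataFrame({
    "City": ["London", "Vancouver", "Copenhagen"],
    "Col1": [644544, 717302, 643329],
    "Col2": [417834, 153883, 17978],
})

# Unpivot: every value column becomes an (Attribute, Value) pair.
unpiv = raw.melt(id_vars="City", var_name="Attribute", value_name="Value")

print(unpiv.shape)  # (6, 3): 3 cities x 2 value columns, 3 columns
```

This is also where the 4.8 million rows come from: 300k rows times 16 unpivoted value columns.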
Not surprisingly, because VertiPaq uses columnar storage, my 'Unpiv' table is ~45% smaller than the 'Raw' table.
We clearly see (i) the impact of column cardinality on total size and (ii) the relevance of unpivoting the table. Fair enough.
Here are my questions:
1. As we can see above, the size difference between 'Raw' and 'Unpiv' is mainly due to the "Column Hierarchies". What are they? I found no real explanation on the web or in Marco Russo's book. I read somewhere that it has to do with MDX structures...?!
2. How is the size of a column hierarchy computed? How can it be estimated, or perhaps turned off? I understand the logic of value encoding, dictionary encoding, and RLE encoding. Is it something similar?
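For readers less familiar with the encodings mentioned in question 2, here is a toy sketch of dictionary encoding followed by run-length encoding. This illustrates only the general idea, not VertiPaq's actual implementation:

```python
def dictionary_encode(values):
    """Replace each value with an integer id into a dictionary."""
    dictionary = {}
    ids = []
    for v in values:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        ids.append(dictionary[v])
    return dictionary, ids

def rle_encode(ids):
    """Collapse runs of equal ids into [id, run_length] pairs."""
    runs = []
    for i in ids:
        if runs and runs[-1][0] == i:
            runs[-1][1] += 1
        else:
            runs.append([i, 1])
    return runs

city = ["London", "London", "Vancouver", "Vancouver", "Vancouver", "Copenhagen"]
d, ids = dictionary_encode(city)
runs = rle_encode(ids)
print(d)     # {'London': 0, 'Vancouver': 1, 'Copenhagen': 2}
print(runs)  # [[0, 2], [1, 3], [2, 1]]
```

Low-cardinality columns compress well here because the dictionary stays small and the runs stay long, which is consistent with the cardinality effect observed above. Column hierarchies, however, are a separate structure on top of this.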
3. What is the unit of the measures displayed by the VertiPaq Analyzer in the pivot table above? When I save my .pbix file to disk, its total size is 74 MB. I have no relationships between 'Raw' and 'Unpiv' and no visuals in my report, so the data model itself is ~73 MB (extracted by unzipping the .pbix file).
--> As a result, I would expect the total table size from the VertiPaq Analyzer for my two tables above (49 391 934) to be close to the 73 MB of the data model. Where does this difference come from?
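Assuming the Analyzer figures are in bytes (which is my reading, not something stated in the tool's output above), the arithmetic behind question 3 looks like this:

```python
# VertiPaq Analyzer total for the two tables, assumed to be in bytes.
total_bytes = 49_391_934

# Convert to mebibytes for comparison with the ~73 MB data model file.
mib = total_bytes / 1024**2
print(f"{mib:.1f} MiB")  # ~47.1 MiB, noticeably less than 73 MB
```

So even after fixing the unit, a gap of roughly 26 MB remains to be explained.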
Thanks in advance for your answers!
I hope it helps!
Marco Russo - SQLBI
I hear you saying in this video that it is possible to disable the hierarchy size in SSAS, when it is not used, of course.
And now I can also see the green tickbox just on top, which makes me think it really is possible.
Could you be so kind as to point me to a specific article where I can see how to do that?
Also, let me thank you for all your availability, support, and videos.
You are doing a great job for the Power BI and DAX world! A big thank you.
You should set the "Available In MDX" property to False. The internal property name is "isAvailableInMdx", and it is described here: Chris Webb's BI Blog: How The New IsAvailableInMDX Property For Analysis Services Tabular Can Reduce...
This property is available only in Tabular Editor and Visual Studio, you cannot modify it from Power BI Desktop.
Disabling it in Power BI Desktop is not supported; it could work, but I have never tried it. It is supported for Analysis Services (Azure and SSAS), and it should work on the Power BI Service as well through the XMLA endpoint.
Don't disable this feature on columns that could be used as filters or exposed as visible columns to Excel. Disable it only when a column is used solely in an aggregation like SUM (...) and is never visible to the user.
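To make the above concrete, this is roughly what the property looks like in a model.bim / TMSL column definition, as you would see it after changing it in Tabular Editor. The column name and data type here are purely illustrative:

```json
{
  "name": "SalesAmount",
  "dataType": "int64",
  "sourceColumn": "SalesAmount",
  "isAvailableInMdx": false
}
```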
I will definitely follow your recommendations. Thanks a lot for replying quickly.
Hopefully I will get the best out of VertiPaq Analyzer and improve the performance of our Tabular model.
For the folks on this thread: there is a new 'Available in MDX' property in Azure Analysis Services and SSAS 2017 (and soon Power BI Premium) which answers questions 1 and 2 above: https://blog.crossjoin.co.uk/2018/07/02/isavailableinmdx-ssas-tabular/