Hello,
I am trying to build a report that combines Direct Query with Import Mode to improve performance and allow the scheduled refresh to be carried out for aggregated data only.
I tried to follow what is explained in the "Aggregations based on group-by columns" section of the following, since my dataset lives in a SQL DB and is composed of one SalesData table with denormalized dimension values (i.e. repeated dimension values), rather than one fact table connected to dimension tables.
Could someone please explain this process step-by-step?
I also have some questions:
1) Is the aggregation table supposed to be created in SQL or Power BI?
2) Is it necessary to create a table in the SQL DB to pre-aggregate the data?
Thanks for your help
As it turns out, most answers about this topic can be found here:
https://docs.microsoft.com/en-us/power-bi/desktop-aggregations
The aggregation table is generated by Power BI and cached in the Power BI model. In addition, as I understand it, you don't need to pre-aggregate the table, because the aggregation feature in Power BI covers this step.
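To make the idea concrete, here is a minimal sketch of what a group-by aggregation over a denormalized table amounts to conceptually. The column names (Region, OrderDate, SalesAmount) are hypothetical stand-ins for the dimension and measure columns in the poster's SalesData table; the example uses an in-memory SQLite database purely for illustration.

```python
import sqlite3

# Hypothetical denormalized detail table, mirroring the poster's SalesData:
# dimension values (Region, OrderDate) are repeated on every fact row.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE SalesData (Region TEXT, OrderDate TEXT, SalesAmount REAL)")
cur.executemany(
    "INSERT INTO SalesData VALUES (?, ?, ?)",
    [
        ("North", "2023-01", 100.0),
        ("North", "2023-01", 150.0),
        ("South", "2023-01", 200.0),
        ("South", "2023-02", 50.0),
    ],
)

# The aggregation table Power BI caches is conceptually this query result:
# one row per combination of group-by columns, with the measure pre-summed.
# The result has far fewer rows than the detail table, which is why only
# the aggregated data needs to be imported and refreshed.
cur.execute(
    """
    SELECT Region, OrderDate, SUM(SalesAmount) AS TotalSales, COUNT(*) AS DetailRows
    FROM SalesData
    GROUP BY Region, OrderDate
    """
)
agg = sorted(cur.fetchall())
print(agg)
conn.close()
```

Queries that only touch the group-by columns can then be answered from the small cached table, while anything at detail grain falls back to DirectQuery against the full SalesData table.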
The documentation linked above also covers the main reasons for using an aggregation table in Power BI.
Community Support Team _ Jimmy Tao
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Hi,
Thanks for your reply. Could you please explain what you mean by "The aggregation table is generated by Power BI" and how this can be accomplished?
I understand the reasons for using aggregations. My dataset is actually rather large, several GB in size, so with DirectQuery the visuals on the report take between 12 and 15 seconds to refresh (sometimes longer). This makes the user experience terrible, since every time filters or slicers are used, the user has to wait too long before all filters are applied.
I don't want to upload the whole dataset to the Power BI Service, as it's too big and would eventually hit the size limit, so I am sure aggregations would solve my problem (given that the data to upload would be a fraction of the original dataset).
Could you possibly explain how this can be done?
Thanks