I'm trying to chart historical stock levels by starting from the first transaction (Qty in/out) in 2012. I have seen a lot of examples of running totals but none that quite fit my dataset:
Date | TransactionID | SKU | Warehouse | Qty In | Qty Out |
I have grouped this table by date, SKU and Warehouse and then added an index column:
RTIn and RTOut are running totals of Qty In and Qty Out for each day, e.g.:
// BufferedOut is the buffered Qty Out column, e.g. List.Buffer(#"Added Index"[Qty Out])
= Table.AddColumn(#"Added Index", "RTOut", each List.Sum( List.Range( BufferedOut, 0, [Index]) ))
I need these columns to be specific to each SKU and Warehouse. The difference between the two columns should then be the stock level for that SKU in a particular warehouse on that date. I'm having a lot of trouble trying out potential solutions because it is a large dataset (over 1 million rows); every time I make a change I either wait 10-20 minutes or Power BI freezes altogether.
Would measures be a better option? What can I do to optimise memory usage to update changes faster? I am very new to Power BI, any advice would be much appreciated.
Is this the right solution for me?
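The per-group running balance being described can be sketched outside Power Query to make the logic concrete. This is a minimal Python illustration with made-up sample rows (the column names mirror the table above; none of this is the poster's actual data): sort the transactions, then keep a separate cumulative balance for every (SKU, Warehouse) pair.

```python
from collections import defaultdict

# Hypothetical sample transactions: (date, sku, warehouse, qty_in, qty_out).
# ISO date strings sort chronologically.
transactions = [
    ("2012-01-02", "A", "WH1", 10, 0),
    ("2012-01-03", "A", "WH1", 0, 4),
    ("2012-01-03", "A", "WH2", 5, 0),
    ("2012-01-05", "A", "WH1", 0, 3),
]

def stock_levels(rows):
    """Return (date, sku, warehouse, stock) rows, with a running
    balance kept separately for every (sku, warehouse) pair."""
    balance = defaultdict(int)  # (sku, warehouse) -> current stock
    out = []
    for date, sku, wh, qty_in, qty_out in sorted(rows):
        balance[(sku, wh)] += qty_in - qty_out
        out.append((date, sku, wh, balance[(sku, wh)]))
    return out

levels = stock_levels(transactions)
# WH1 for SKU "A": 10 in, then 4 out, then 3 out -> stock 3 on 2012-01-05
```

Note this is a single pass over sorted data, whereas a `List.Range` running total re-sums a growing prefix on every row, which is one reason the M approach degrades badly at a million rows.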
Hi @Revolution ,
In Power Query, the most effective way to improve performance is still to use Group By to reduce the amount of data each query has to process. Since you are already using Group By and the query is still slow, I'm afraid your dataset is simply too large, and even the List.Generate approach you linked may not improve performance significantly.
My suggestion is to do some aggregation in advance: where the report does not need fine-grained data, aggregate it at the data source before it reaches Power Query. Also, if your data source supports query folding, share the load by letting the source execute every step that can be folded, rather than doing that work in Power Query.
Query folding guidance in Power BI Desktop - Power BI | Microsoft Learn
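The pre-aggregation idea can be sketched as follows. Before computing any running total, collapse the raw transactions to one net-movement row per (date, SKU, warehouse), which is the equivalent of pushing a GROUP BY down to the data source. The sample rows here are hypothetical, not the poster's data.

```python
from collections import defaultdict

# Hypothetical raw transactions: (date, sku, warehouse, qty_in, qty_out).
raw = [
    ("2012-01-02", "A", "WH1", 10, 0),
    ("2012-01-02", "A", "WH1", 0, 2),   # same day, same group
    ("2012-01-03", "A", "WH1", 0, 4),
]

def daily_net(rows):
    """Collapse transactions to one row per (date, sku, warehouse),
    keeping only the net movement for that day."""
    net = defaultdict(int)
    for date, sku, wh, qty_in, qty_out in rows:
        net[(date, sku, wh)] += qty_in - qty_out
    return sorted((d, s, w, q) for (d, s, w), q in net.items())

print(daily_net(raw))
# [('2012-01-02', 'A', 'WH1', 8), ('2012-01-03', 'A', 'WH1', -4)]
```

With one row per group per day, the running-total step has far fewer rows to walk, and if the source is a database the same collapse can fold into a single SQL GROUP BY.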
Best Regards,
Dino Tao
If this post helps, then please consider Accept it as the solution to help the other members find it more quickly.