_AlexandreRM_
Helper II

Power Query doesn't use 100% of the processor

Hello,

 

In this example I'm using Power Query in Excel, but I guess it would be the same in Power BI.

 

I currently have a very complex query that takes an average of 30 minutes to execute, while my processor (Intel i3 10th gen) is at 20% and my memory usage is at 600 MB (8 GB in total, 2 GB still available).

 

Is there a way to tell Power Query to use all the capacity it needs? Currently it runs really slowly, while I have a lot of unused capacity on my computer.

 

I don't have any recursive tasks in my query; all rows could theoretically be processed at the same time (that's why I don't think it's a logical bottleneck).

 

Here is a screenshot of my Task Manager (columns: process name, CPU, memory, hard disk, network):

_AlexandreRM__0-1666774620740.png

 

Thank you for your help.

Alexandre

 

1 ACCEPTED SOLUTION

View solution in original post

18 REPLIES
MarkLaf
Solution Sage

As far as I can tell from your M, you can get what you want, along with a huge performance improvement, by grouping, which you seem to basically be implementing yourself with your joins, custom function, etc.

For example, given the test data below:

Numéro de produit | Désignation du produit | Stock movement qty
1                 | A                      | 12
1                 | A                      | 32
1                 | B                      | 543
2                 | A                      | 123
2                 | A                      | 421
2                 | B                      | 51
2                 | B                      | 36
3                 | A                      | 8
3                 | A                      | 21
3                 | A                      | 43

Through the UI:

MarkLaf_1-1667859429001.png

MarkLaf_0-1667859264352.png

MarkLaf_2-1667859458128.png

 

M:

 

let
    Source = base,
    GroupRows = Table.Group(
        Source, 
        {"Numéro de produit", "Désignation du produit"}, 
        {
            {"Min qty", each List.Min([Stock movement qty]), type nullable number}
        }
    )
in
    GroupRows

 

 

FYI, in case you are trying to do something a little more nuanced than getting the minimum by the given grouping, you actually have the whole original table (with only the rows applicable to each distinct value combination) available to manipulate. You'll see it if you select All Rows through the UI, or if you add something like the following to the third argument (the list of aggregation lists) in M: { "Grouped Original Table", each _, Value.Type( Source ) } (see the sketch below the screenshot).

MarkLaf_3-1667860665963.png
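
Putting that together with the code above, the full Table.Group call might look like the sketch below (the "Grouped Original Table" name is just illustrative):

let
    Source = base,
    GroupRows = Table.Group(
        Source, 
        {"Numéro de produit", "Désignation du produit"}, 
        {
            {"Min qty", each List.Min([Stock movement qty]), type nullable number},
            // keep the whole original sub-table for each distinct key combination
            {"Grouped Original Table", each _, Value.Type( Source )}
        }
    )
in
    GroupRows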

Hello @MarkLaf , thank you for looking for a solution!

Unfortunately, your code doesn't meet my needs.

 

Here is an example of what I want to achieve. The example table below could be one of the rows from your "Grouped Original Table" column.

The result I want to get is the final "Min qty" value (shown in red on the forum), and the result of your code is shown in blue.

The "Current qty" and "Min qty" columns aren't in the native table; they are only there to illustrate the logic of the expected result.

 

Material | Date       | Stock movement qty | Current qty | Min qty
A        | 01/01/2023 | +30                | 30          | 30
A        | 01/02/2023 | +10                | 40          | 30
A        | 01/03/2023 | -15                | 25          | 25
A        | 01/04/2023 | -5                 | 20          | 20
A        | 01/05/2023 | +40                | 60          | 20

 

To achieve this, I need to compute a running total to get the minimum cumulated stock over an ascending list of dates. Your code would have been perfect if I had the "Current qty" column in my native table, which sadly isn't the case.
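
To make the logic concrete, here is a minimal standalone sketch of the calculation I'm after, using the movement values from the example table above:

let
    // stock movements in ascending date order (values from the example above)
    movements = {30, 10, -15, -5, 40},
    // running totals: {30, 40, 25, 20, 60}
    runningTotals = List.Skip(
        List.Accumulate( movements, {0}, (state, qty) => state & { List.Last( state ) + qty } ),
        1
    ),
    // minimum cumulated stock: 20
    minCumulatedStock = List.Min( runningTotals )
in
    minCumulatedStock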

I think I understand your requirements now. I'm pretty sure you can still achieve some performance improvements by leveraging Group, but there are a couple of extra steps. To test performance better, I switched to randomly generated data and tested 10k and 100k rows in the structure you specified above.

 

The below works pretty well and takes just 1-2 seconds to load. The approach is to merge the grouped rows (when a table is grouped on a column, that column becomes a primary key, which improves join performance; see the quick check below the output), filter the merged grouped rows as needed, sum the values for the running total, then do a second group to get the min:

 

let
    Source = PerfTest_10k,
    // group all rows by Material; the nested table will be turned into the "Current qty" running total
    MaterialGroups = Table.Group(
        Source, 
        {"Material"}, 
        {{
            "Current qty", 
            each _, 
            type table [Material=nullable text, Date=nullable date, Stock movement qty=nullable number]
        }}
    ),
    // join the grouped table back onto the source; Material is now a primary key on MaterialGroups,
    // which speeds up the join
    MergeGroups = Table.NestedJoin( 
        Source, 
        "Material", 
        MaterialGroups, 
        "Material", 
        "Groups", 
        JoinKind.Inner 
    ),
    ExpandGroups = Table.ExpandTableColumn(MergeGroups, "Groups", {"Current qty"}, {"Current qty"}),
    // per row: keep only the nested rows up to the current row's date and sum them (running total)
    GetCurQtyRows = 
    Table.TransformRows(  
        ExpandGroups, 
        (row)=> Record.TransformFields( 
            row, 
            {
                "Current qty", 
                each let 
                    _t = Table.SelectRows( row[Current qty], each [Date] <= row[Date] ) 
                in 
                    List.Sum( Table.Column(_t, "Stock movement qty") ) 
            } 
        ) 
    ),
    GetCurQty = Table.FromRecords( 
        GetCurQtyRows, 
        type table [Material=text, Date=date, Stock movement qty=number, Current qty=number] 
    ),
    // second group: minimum running total per Material
    GetMinQty = Table.Group(
        GetCurQty, 
        {"Material"}, 
        {
            { "Min qty", each List.Min([Current qty]), type number }
        }
    )
in
    GetMinQty

 

Output:

MarkLaf_0-1667938183424.png
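
By the way, if you want to see the primary-key effect mentioned above, a quick standalone check (just a sketch, separate from the solution) is:

let
    // tiny inline table only for the check
    t = #table( type table [Material = text, Qty = number], { {"A", 1}, {"A", 2}, {"B", 3} } ),
    grouped = Table.Group( t, {"Material"}, { {"Rows", each _, type table} } ),
    // Table.Group registers the group-by column(s) as a primary key on its output,
    // which is what speeds up the subsequent join; Table.Keys lets you inspect it
    keys = Table.Keys( grouped )
in
    keys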

 

The above doesn't work so well when you increase the rows to 100k, though. For that, I think you have to turn to DAX. This takes about 2 seconds over 100k rows (there are probably ways to improve performance further). Note that [Running Total] and [Min Running Total] are measures:

 

Running Total = 
VAR _thisDt = MAX( PerfTest_100k[Date] )
VAR _matGroup = CALCULATETABLE( PerfTest_100k, REMOVEFILTERS( PerfTest_100k ), VALUES( PerfTest_100k[Material] ) )
VAR _curPrevRows = FILTER( _matGroup, PerfTest_100k[Date] <= _thisDt )
RETURN
CALCULATE( SUM( PerfTest_100k[Stock movement qty] ), _curPrevRows )


Min Running Total = 
MINX( SUMMARIZE( PerfTest_100k, PerfTest_100k[Material], PerfTest_100k[Date] ), [Running Total] )

 

Output (note it's all randomly generated data, which is why these numbers don't match the output above):

MarkLaf_0-1667940835571.png

In case you're interested, and to show my work, here is the M for the test data. The code below generates the 10k rows for PerfTest_10k; PerfTest_100k uses the same code with 10000 replaced by 100000 in the List.Generate condition:

 

let
    Source = List.Generate( 
        ()=>0,
        each _ < 10000, 
        each _ + 1, 
        each [
            Material = Character.FromNumber( 
                List.Min( { 
                    Int32.From( Number.RandomBetween(65, 91) ), // A-Z
                    90 
                } ) 
            ), 
            Date = Date.AddDays( 
                #date(2022,1,1), 
                List.Min({ 
                    Int32.From( Number.RandomBetween( 0, 365 ) ), // 1/1/2022-12/31/2022
                    364 } ) 
                ),
            Stock movement qty = Int64.From( Number.RandomBetween( -100, 100 ) ) // -100 - +100
        ]
    ),
    Output = Table.FromRecords( Source, type table [Material=text,Date=date,Stock movement qty=number] )
in
    Output

 

 

Hello @MarkLaf , just... wow!

I tried your code without any modification in a new blank file, and it worked almost instantly.

 

Then I tried it in my real query, and it improved performance tremendously: previously it took around 25 minutes to complete, and now only 1 minute! (I couldn't reach the 2-second mark, because the root table itself comes from a few other simple queries.)

 

I now have a much faster query, and I also learned some things I didn't know about M syntax. For example, I discovered that grouping can be used without an aggregation function, just to group the rows of a table on a given key (previously I was joining the table on itself to achieve this, which is... less clean).

 

So, thank you VERY much for your message. You helped me a lot!

Glad that worked for you!

 

You may also want to test with different parallelism settings. Increasing parallelism may help, and sometimes (counter-intuitively) turning it off can improve performance, especially when you have a cascade of transformation queries.

Ehren
Employee

There are a few ways to do this without recursion, but some of them involve repeatedly summing all the previous rows, which can be inefficient.

 

Here's an approach that only scans the data once.

 

let
    Source = ...,
    #"Sorted Rows" = Table.Sort(Source,{{"Date", Order.Ascending}}),
    #"Added Index" = Table.AddIndexColumn(#"Sorted Rows", "Index", 0, 1, Int64.Type),
    RunningTotal = List.Generate(
        () => [Index = 0, Total = #"Added Index"{0}[Stock movement qty], Min = Total],
        each [Index] < Table.RowCount(#"Added Index"),
        (previous) =>
            let
                newIndex = previous[Index] + 1
            in
                [
                    Index = newIndex,
                    Total = previous[Total] + #"Added Index"{newIndex}[Stock movement qty],
                    Min = List.Min({previous[Min], Total})
                ]
    ),
    Custom1 = Table.AddColumn(#"Added Index", "Current qty", each RunningTotal{[Index]}[Total], type number),
    Custom2 = Table.AddColumn(Custom1, "Min qty", each RunningTotal{[Index]}[Min], type number)
in
    Custom2

 

That said, these kinds of cumulative calculations are probably better done in DAX, after loading the data to the (Power Pivot) Data Model.

Hello @Ehren , sorry for the very late reply, I was on vacation.

I think List.Generate is a good alternative; I had never tried using it with records to store multiple values at each iteration!

 

After testing, it seems that both solutions (recursion and list generation) have almost the same (low) performance. I think it's simply the manual iteration over each row of a table that is time-consuming. For those who are interested, here is the formula using List.Generate:

    f = (codeTable as table) => 
        let
            iMax = Table.RowCount(codeTable),
            qtList = List.Buffer(codeTable[#"Qté entrées/besoins"]),

            cumulativeStockList = List.Generate(
                () => 
                    [
                        i = 1, 
                        qt = qtList{0}, 
                        minQt = qtList{0}
                    ], 
                each [i] <= iMax, 
                each 
                    [
                        i = [i] + 1,
                        qt = [qt] + qtList{[i]},
                        minQt = List.Min({[qt] + qtList{[i]}, [minQt]})
                    ])
        in
            List.Last(cumulativeStockList)[minQt],
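
For context, in my real query this one-argument version of f is simply called from an added column over the buffered nested tables, roughly like this (a sketch only; "bufferJoinTable" and the nested "join" column are the step names from my simplified query earlier in the thread):

    addMin = Table.AddColumn( bufferJoinTable, "Qty", each f([join]), type number ),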

 

So, unless Microsoft adds a more efficient way for custom functions to access row contents, I don't think this problem will be solved!

 

I will also take a look at DAX, but its interface in Excel isn't... user-friendly, let's say.

Just curious: do you see a perf difference if you omit the call to List.Buffer?

No, it's exactly the same.

But I'm already using Table.Buffer on codeTable before calling the function, and this has been a huge performance improvement (around 10 times faster with Table.Buffer(codeTable) than without).

Just curious what kind of perf you see if you replace your Distinct + Join + Calc Running Total approach with a Group (aggregating all rows) + Calc Running Total approach.

Hello @Ehren , with my initial query, replacing Distinct + Join with Group strongly improved performance (from 25 minutes to 2 minutes). It's also the most important change I took from MarkLaf's code.

It seems the Join operation isn't as efficient as I thought!
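
For reference, here is roughly what the replacement looks like in my simplified query (a sketch, assuming the one-argument version of f from my earlier reply):

let
    base = Table.Buffer(#"RRP1 + RRP4"),
    // group directly on the keys instead of Table.Distinct + Table.NestedJoin;
    // the nested table of each group plays the role of the old "join" column
    grouped = Table.Group(
        base,
        {"Numéro de produit", "Désignation du produit"},
        { {"join", each Table.Buffer(_), type table} }
    ),
    addMin = Table.AddColumn(grouped, "Qty", each f([join]), type number)
in
    addMin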

You can learn more about creating efficient M running totals here:

https://gorilla.bi/power-query/running-total/#fast-running-totals

BA_Pete
Super User

Hi @_AlexandreRM_ ,

 

Really hard to give a definitive answer here, but a few things to think about:

 

- Some of it depends on whether the work you're asking Excel/Power Query to do is serial or parallelizable, i.e. whether the work has to be done one part after another or whether it can be threaded across different processing cores. Please don't ask me how to find out whether this is the case, I haven't got a clue. It would probably be a question for someone like Ehren.

 

- If you're using a 32-bit version of Excel, then there's a hard RAM cap of 2GB per Windows process instance which, I believe, would then be subdivided across your evaluation containers that handle your query transformations. In this instance, you could consider upgrading to Excel 64-bit to avoid the cap.

 

- If you would consider using Power Query within Power BI Desktop, then you could get both the RAM-cap gains of the 64-bit version and super-charge the RAM available to the evaluation containers by updating the MaxEvaluationWorkingSetInMB registry entry. More details on that from Chris Webb here:

https://blog.crossjoin.co.uk/2021/06/06/speed-up-power-query-in-power-bi-desktop-by-allocating-more-... 

 

Pete







Hi @BA_Pete , thank you very much for your tips.

 

- In my opinion, all the calculations could be parallel. @Ehren , I'm mentioning you in case you have a clue; here is a simplified version of my query code (function f is a custom function that takes a very long time to complete):

 

let
    base = Table.Buffer(#"RRP1 + RRP4"),
    baseMaterials = Table.Distinct(base, {"Numéro de produit", "Désignation du produit"}),
    baseMaterialsJoinBase = Table.NestedJoin(baseMaterials, {"Numéro de produit"}, base, "Numéro de produit", "join", JoinKind.LeftOuter),

    bufferJoinTable = Table.TransformColumns(baseMaterialsJoinBase, {"join", Table.Buffer}),

    addMin = Table.AddColumn(bufferJoinTable, "Qty", each f([join], 0, 9999999999, 0))

in
    addMin

 

 

- I just checked in my account data: I am already on a 64-bit version.

 

- This is incredibly powerful, thank you for sharing! I defined the parameter in my Power BI Desktop, but I couldn't find it in Excel Power Query; it seems Microsoft hasn't pushed this change there yet, sadly.

 

Alexandre

What is the CPU/memory utilization of the Microsoft.Mashup.Container*.exe processes? They're the ones doing the bulk of the PQ work.

 

Also, it's possible the processing is slow because of data source access, which wouldn't show up in CPU/memory.

 

If you remove the addMin step, is everything faster? If so, it would be important to share what the f function does.

Hello Ehren,

 

The CPU/memory utilization of all Excel sub-processes is as follows:

(columns: process name, CPU, memory, hard disk, network)

_AlexandreRM__0-1666855860605.png

Note that the CPU utilization is far higher than in my previous screenshot, even though I didn't change anything. Still, PQ has good room for improvement.

 

The data sources are two Excel files on SharePoint, previously merged into the #"RRP1 + RRP4" table.

 

Here is the code of function f, which is the reason for the query's slowness (a recursive function applied to the [join] table). Its goal is to calculate the minimum value of forecasted stock, using all stock entries and exits:

 

 

    f = (codeTable as table, previousQt as number, minQt as number, i as number) => 
    let
        currentQt = previousQt + codeTable{i}[#"Stock movement qty"],
        currentMinQt = if currentQt < minQt then currentQt else minQt,

        //former method, the new one with error handling seems more efficient:
        //result = if i = iMax then minQt else @f(codeTable, currentQt, currentMinQt, i + 1, iMax)
        result = try @f(codeTable, currentQt, currentMinQt, i + 1) otherwise currentMinQt
    in
        result,

 

 

If you have an idea to improve this function, I would be strongly interested!

 

Alexandre

I'm not entirely sure what the purpose of the minQt calculation is, but other than that the recursion seems unnecessary. Buffering each and every table in the join column independently also seems like it could be detrimental perf-wise.

 

After performing your join, have you tried just summing the "Stock movement qty" column in the nested tables?

Ehren_0-1666891478536.png
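
In M, that kind of aggregation looks something like the sketch below (made-up inline data; Table.AggregateTableColumn is what the Aggregate option in the expand menu generates):

let
    // hypothetical inline data standing in for #"RRP1 + RRP4"
    base = #table(
        type table [#"Numéro de produit" = text, #"Stock movement qty" = number],
        { {"1", 30}, {"1", 10}, {"2", -15}, {"2", 40} }
    ),
    materials = Table.Distinct( base, {"Numéro de produit"} ),
    joined = Table.NestedJoin( materials, {"Numéro de produit"}, base, {"Numéro de produit"}, "join", JoinKind.LeftOuter ),
    // sum the nested "Stock movement qty" column without expanding the nested tables
    summed = Table.AggregateTableColumn( joined, "join", { {"Stock movement qty", List.Sum, "Sum of Stock movement qty"} } )
in
    summed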

 

 

Hello @Ehren , I can't use the aggregate option.

 

Here is an example of how the function works. The final result of the function is the last "Min qty" value (shown in red on the forum):

Date       | Stock movement qty | Current qty | Min qty
01/01/2023 | +30                | 30          | 30
01/02/2023 | +10                | 40          | 30
01/03/2023 | -15                | 25          | 25
01/04/2023 | -5                 | 20          | 20
01/05/2023 | +40                | 60          | 20

 
