Kiwizqt
Frequent Visitor

Reset the count of IDs once they've all been seen, count how many days it took, then reset until the next time

Hi,

 

I have 1,500 unique IDs; every day many of them are seen, some multiple times a day.

 

I would like to do a cumulative count and know how many days it takes to see all 1,500 of them. Once the count reaches that amount, the cumulative sum needs to reset and start another count, until the next time the goal is met.

 

I've done a running total after a grouped count by date, but my issue, aside from the needed reset, is that duplicates need to be removed based on the time period needed to attain the goal. If I group my IDs by date, I lose the ability to sort out duplicates...

 

So I'm guessing I need to first find how much time is needed, then remove the duplicates from a custom column spanning its own run toward the goal, and then calculate the running total...
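A minimal sketch of the desired reset logic in Python (toy data; the IDs and dates below are made up): track which IDs are still unseen in the current cycle and, once that set empties, record the cycle's start and completion dates and reset.

```python
# Sketch of the desired logic on toy data: scan sightings in date order,
# track the IDs not yet seen in the current cycle, and record each cycle
# once every ID has appeared at least once.
all_ids = {1, 2, 3}  # stands in for the 1,500 real IDs

# (date, id) sightings already sorted by date; repeat sightings are harmless
sightings = [("d1", 1), ("d1", 2), ("d2", 1), ("d3", 3),
             ("d4", 2), ("d5", 1), ("d5", 3)]

cycles = []              # (start_date, completion_date) of each finished cycle
unseen = set(all_ids)
start = sightings[0][0]
for date, id_ in sightings:
    unseen.discard(id_)
    if not unseen:       # goal met: every ID seen at least once
        cycles.append((start, date))
        unseen = set(all_ids)
        start = date     # the next cycle is measured from the completion date

print(cycles)
```

Here the second cycle starts on the same date the first one completes, which matches the "reset and keep counting" behavior described above.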

 

Here's a rough Paint mockup of what I'd like my final result to look like (I realize my date format needs to be datetime, otherwise I'll never get a correct goal and it'll be tricky):

(Attached image: Kiwizqt_0-1600449194030.png, mockup of the desired result)

 

 

I've found this solution from Lind25, but the syntax is wrong, maybe from an older Power Query version:

https://community.powerbi.com/t5/Desktop/Running-total-that-resets-when-equal-to-or-greater-than-a-s...

 

 

let
    Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
    AddedIndex = Table.AddIndexColumn(Source, "Index", 1, 1),
    #"Added Custom" = Table.AddColumn(AddedIndex, "Running total reset", each List.Accumulate(List.FirstN(AddedIndex[Difference],[Index]),0,(state,current)=>if state+current > 60 then 0 else state+current))
in
    #"Added Custom"

Any takers? I'm completely lost on this one...

 

Thanks

 

4 ACCEPTED SOLUTIONS
ziying35
Impactful Individual

@Kiwizqt 

I've simulated a scenario with only 10 IDs using some of the data you provided; see if I'm on the right track. If I am, I'll work out the code to implement the end result.

Sample PQ 

View solution in original post

@Kiwizqt 

I did a test performing the core operation in Python and it seems to finish in less than 20 seconds for the data you posted. See it in the attached file. You'll have to update the path to the Excel file you shared. Here is the M code for the main query:

Note that it could be cleaned up a bit, as it does some things that might not be necessary.

 

 

let
    Source = Excel.Workbook(File.Contents("d:\Downloads\Sample PQ Help.xlsx"), null, true),
    Maintenance_Itinérante___CSV_Table = Source{[Item="Maintenance_Itinérante___CSV",Kind="Table"]}[Data],
    #"Changed Type" = Table.TransformColumnTypes(Maintenance_Itinérante___CSV_Table,{{"Date&Time Seen", type datetime}, {"ID", Int64.Type}}),
    #"Removed Duplicates" = Table.Distinct(#"Changed Type", {"Date&Time Seen", "ID"}),
    #"Sorted Rows" = Table.Sort(#"Removed Duplicates",{{"Date&Time Seen", Order.Ascending}}),
    #"Added Custom" = Table.AddColumn(#"Sorted Rows", "Date", each Date.From([#"Date&Time Seen"])),
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Date&Time Seen"}),
    #"Reordered Columns" = Table.ReorderColumns(#"Removed Columns",{"Date", "ID"}),
    #"Removed Duplicates1" = Table.Distinct(#"Reordered Columns", {"Date", "ID"}),
    #"Changed Type1" = Table.TransformColumnTypes(#"Removed Duplicates1",{{"Date", Int64.Type}}),
    #"Run Python script" = Python.Execute("# 'dataset' holds the input data for this script#(lf)groupeddataset = dataset.groupby(['Date'])['ID'].apply(lambda x: list(x)).to_frame().reset_index()#(lf)a = list(groupeddataset['ID'])#(lf)acc=set(initial['ID']); res=[]#(lf)for i in range(len(a)):#(lf)    acc=acc-set(a[i])#(lf)    if acc == set(): #(lf)        acc=set(initial['ID'])#(lf)        res=res+[i]#(lf)#(lf)output=pandas.DataFrame(res,columns=['Positions'])",[dataset=#"Changed Type1", initial=Table.SelectRows(All_IDsT,each [ID]<> 15133)]),
    groupeddataset = #"Run Python script"{[Name="groupeddataset"]}[Value],
    groupeddataset2 = Table.TransformColumnTypes(groupeddataset,{{"Date", Int64.Type}}),
    #"Changed Type3" = Table.TransformColumnTypes(groupeddataset2,{{"Date", type date}}),
    #"Sorted Rows1" = Table.Sort(#"Changed Type3",{{"Date", Order.Ascending}}),
    CompletionPositionsT = #"Run Python script"{[Name="output"]}[Value],
    CompletionPositionsT2 = Table.TransformColumnTypes(CompletionPositionsT,{{"Positions", Int64.Type}}),
    result = List.Select(groupeddataset2[Date], each List.Contains(CompletionPositionsT2[Positions],_ - List.Min(groupeddataset2[Date]))),
    #"Converted to Table" = Table.FromList(result, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Changed Type2" = Table.TransformColumnTypes(#"Converted to Table",{{"Column1", type date}}),
    #"Renamed Columns" = Table.RenameColumns(#"Changed Type2",{{"Column1", "Completion dates"}})
in
    #"Renamed Columns"

 

 

The main step is #"Run Python script", with the following Python code:

 

 

# group the IDs seen on each date into one list per day
groupeddataset = dataset.groupby(['Date'])['ID'].apply(lambda x: list(x)).to_frame().reset_index()
a = list(groupeddataset['ID'])
acc = set(initial['ID']); res = []
for i in range(len(a)):
    acc = acc - set(a[i])
    if acc == set():
        # reset to the full set of IDs (not the DataFrame itself) and record the position
        acc = set(initial['ID'])
        res = res + [i]

output = pandas.DataFrame(res, columns=['Positions'])

 

 

It groups the IDs by date (day level) and then operates on that to extract the list of positions where each section in which all IDs have been seen ends. Note that I filtered out ID 15133 from the list of IDs so that there is at least one section containing all IDs.
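Stripped of the pandas grouping, the core loop of that script can be exercised on a toy example (the per-day ID lists below are made up):

```python
# Toy run of the script's core loop: 'a' holds the per-day lists of IDs
# (what the pandas groupby produces) and 'res' collects the positions
# (day indices) at which the set of remaining IDs empties out.
initial_ids = [1, 2, 3]                     # stands in for the full ID list
a = [[1, 2], [2, 3], [1], [3, 2], [1]]      # IDs seen each day, in date order

acc, res = set(initial_ids), []
for i, seen_today in enumerate(a):
    acc -= set(seen_today)
    if not acc:                 # all IDs seen: record the position, reset
        acc = set(initial_ids)
        res.append(i)

print(res)
```

Day index 1 ends the first complete section and day index 3 the second; day 4 starts a section that never completes, so it yields no position.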

Please mark the question solved when done and consider giving kudos if posts are helpful.


Cheers

View solution in original post

As much as I love M, I will put in another plug for the DAX approach on this one. I don't know whether your IDs have category columns associated with them and whether you will also want slicers (which would require a DAX approach). Attached is a pbix with your example data. It uses the DAX expression below to generate the table shown (I also added a small table with the "Cycle" values 1, 2, 3). You could compute the two outputs separately, but that is calculation-intensive, so since I already had the virtual table in the measure, I generated both outputs and concatenated them together.

 

(Attached image: mahoneypat_0-1600737218749.png, the resulting table)

Time and Span for Completion =
VAR thiscycle =
    SELECTEDVALUE ( Cycle[Completion Cycle] )
VAR ids =
    ALLSELECTED ( Seen[ID] )
VAR idcount =
    COUNTROWS ( ids )
VAR summarylastcycle =
    ADDCOLUMNS (
        VALUES ( Seen[Date&Time Seen] ),
        "IDsSoFar",
            VAR thistime = Seen[Date&Time Seen]
            RETURN
                COUNTROWS (
                    FILTER (
                        ids,
                        CALCULATE (
                            COUNT ( Seen[ID] ),
                            Seen[Date&Time Seen] <= thistime
                        ) >= thiscycle - 1
                    )
                )
    )
VAR completiontimelastcycle =
    IF (
        thiscycle = 1,
        MIN ( Seen[Date&Time Seen] ),
        MINX (
            FILTER (
                summarylastcycle,
                [IDsSoFar] >= idcount
            ),
            Seen[Date&Time Seen]
        )
    )
VAR summarythiscycle =
    ADDCOLUMNS (
        FILTER (
            VALUES ( Seen[Date&Time Seen] ),
            Seen[Date&Time Seen] >= completiontimelastcycle
        ),
        "IDsSoFar",
            VAR thistime = Seen[Date&Time Seen]
            RETURN
                COUNTROWS (
                    FILTER (
                        ids,
                        CALCULATE (
                            COUNT ( Seen[ID] ),
                            Seen[Date&Time Seen] <= thistime
                        ) >= thiscycle
                    )
                )
    )
VAR completiontimethiscycle =
    MINX (
        FILTER (
            summarythiscycle,
            [IDsSoFar] >= idcount
        ),
        Seen[Date&Time Seen]
    )
VAR span =
    DATEDIFF (
        completiontimelastcycle,
        completiontimethiscycle,
        DAY
    )
VAR range = completiontimelastcycle & " - " & completiontimethiscycle
RETURN
    span & " days" & "; " & range
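The idea behind the measure can be restated in plain terms: the per-ID counts are cumulative over all data, so the population completes cycle n at the first timestamp by which every ID has been seen at least n times. A small Python sketch of that reading (toy data, names illustrative):

```python
from collections import Counter

# Toy sightings, sorted by timestamp. The population {"a", "b"} completes
# cycle n at the first time every ID has been seen at least n times.
sightings = [(1, "a"), (2, "b"), (3, "a"), (4, "a"), (5, "b")]
all_ids = {i for _, i in sightings}

def completion_time(cycle):
    counts = Counter()
    for t, id_ in sightings:
        counts[id_] += 1
        # has every ID reached the required cumulative count?
        if all(counts[i] >= cycle for i in all_ids):
            return t
    return None  # this cycle never completes in the data

print(completion_time(1), completion_time(2), completion_time(3))
```

This mirrors the FILTER over ids with CALCULATE(COUNT(...), Seen[Date&Time Seen] <= thistime) >= thiscycle in the measure; the span is then the difference between consecutive completion times.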

 

Regards,

Pat



@mahoneypa HoosierBI on YouTube


View solution in original post

Anonymous
Not applicable

Here is the code that implements the algorithm I described in the previous message:

let
    completion = (tab) =>
        let
            grpID = Table.Group(tab, {"ID"}, {"grp", each _}),
            nids = Table.RowCount(grpID),
            currLastDate = List.Min(List.Last(grpID[grp])[Date]),
            rest = Table.SelectRows(tab, each _[Date] > currLastDate),
            result = if Table.RowCount(Table.Distinct(rest, "ID")) < nids
                     then {currLastDate}
                     else {currLastDate} & @completion(rest)
        in
            result
in
    completion

This function receives as input a table with Date and ID columns and returns a list of the dates on which a saturation cycle (all of the table's distinct IDs seen at least once) completes.

It seems very fast compared with the previous solution based on List.Generate, List.Difference and so on...

To obtain a more pleasant output, I grafted copies of the full ID vector into the original table at three random points of the [ID] column 😊
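The recursion can be paraphrased in Python (a sketch on made-up data; assumes the rows are sorted by date): each cycle completes on the first date by which every distinct ID in the remaining rows has been seen, and the search then restarts on the rows strictly after that date.

```python
# Python paraphrase of the recursive M function. rows is a list of
# (date, id) pairs sorted ascending by date.
def completion_dates(rows):
    if not rows:
        return []
    ids = {i for _, i in rows}
    unseen = set(ids)
    for date, id_ in rows:
        unseen.discard(id_)
        if not unseen:
            # cycle complete on this date; keep only the strictly later rows
            rest = [r for r in rows if r[0] > date]
            # recurse only if the rest still contains every distinct ID
            if {i for _, i in rest} >= ids:
                return [date] + completion_dates(rest)
            return [date]
    return []

print(completion_dates([(1, "a"), (1, "b"), (2, "a"),
                        (3, "b"), (4, "a"), (5, "b")]))
```

As in the M version, the last emitted date is a genuine completion and the recursion stops as soon as the remaining rows no longer cover the full ID set.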

View solution in original post

48 REPLIES
