raimund
New Member

Output of a calculated measure not changing in response to slicer selections

The output of a calculated measure (see Figure #3 below) is not changing in response to slicer selections in a visualization of a related (primary or central) table (see Figures #4, #5, and #6 below).

However, a different instantiation of the same measure, using identical syntax in the visualization sheet (see Figure #4 below) and, presumably, a different context, does change in response to slicer selections (see Figures #5 and #6).

So clearly the context of the [PerfMin#] calculation in the related ‘Perf Rating Scale’ lookup table (see Figure #3) is not correct.

I've tried numerous options with both syntax (referencing or not referencing the table and/or column, etc.) and DAX functions (RELATEDTABLE, ALLEXCEPT, et al.) without success.

 

Since we have different rating distribution targets for each performance rating level, the objective of the [PerfMin#] and [PerfMax#] measures is to calculate the specific target range numbers based on the resulting count of employees (rows) for any given slicer selection.
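For example, with made-up numbers: if the current slicer selections leave 40 employees and a given rating level has a PerfMin% of 5, then [PerfMin#] for that level should come out to 40 * 5 / 100 = 2 employees.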

So the final or desired [PerfMin#] measure would be as follows:

 

PerfMin# = CALCULATE( COUNTROWS('Calibration Detail Data'), ALLSELECTED('Calibration Detail Data') ) * ('Perf Rating Scale'[PerfMin%]/100)

 

[Note that the right operand, “… ('Perf Rating Scale'[PerfMin%]/100) …”, was dropped in the examples below to better highlight that the left operand's result was not changing in response to slicer selections.]
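For reference, the companion [PerfMax#] measure would presumably mirror this one, assuming the upper bound of each target range lives in a 'Perf Rating Scale'[PerfMax%] column (a hypothetical name here):

PerfMax# = CALCULATE( COUNTROWS('Calibration Detail Data'), ALLSELECTED('Calibration Detail Data') ) * ('Perf Rating Scale'[PerfMax%]/100)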

 

The correct results for the selections (or lack of selections) shown in Figures #4, #5, and #6 are captured in Figures #7, #8, and #9 (see below).

 

The ultimate objective is to plot [PerfMin#] and [PerfMax#] on the Performance Rating Distribution graph to show how the actual ratings compare to the target ratings.

 

Any suggestions on how to solve this problem?

 

Thanks,

 

 

Figure #1: Table relationships ('Calibration Detail Data' is the central table, with the other tables primarily serving as lookup tables)

Figure #2: Data relationship between the 'Perf Rating Scale' and 'Calibration Detail Data' tables ('Perf Rating Scale'[Perf Rating] has a one-to-many relationship with 'Calibration Detail Data'[Performance Rating])

Figure #3: Calculated measure in the 'Perf Rating Scale' table, which is used to generate the 'Perf Rating Scale'[PerfMin#] calculated column

Figure #4: Two data visualizations using the same calculated measure: one using [PerfMin#] from the 'Perf Rating Scale' table and the other using [Rows] calculated directly on the visualization (and, strictly speaking, a third measure, [Selected Employees], producing the same intended output by counting the number of distinct names in the 'Calibration Detail Data'[Name] column)

Figure #5: Output of the [PerfMin#] measure DOES NOT change, but output of the [Rows] (and [Selected Employees]) measures DOES change, in response to selecting “Strong Performer” on the Performance Rating Distribution graph

Figure #6: Output of the [PerfMin#] measure DOES NOT change, but output of the [Rows] (and [Selected Employees]) measures DOES change, in response to selecting “Engineer” in the Job Level slicer (filter)

Figure #7: Correct result for the visualization in Figure #4

Figure #8: Correct result for the visualization in Figure #5

Figure #9: Correct result for the visualization in Figure #6

1 ACCEPTED SOLUTION

Thanks for the reply, Cherie 🙂

 

I thought about this approach but wasn't sure it would use the correct percentage for each different performance rating.

 

After referencing the correct table, the calculated measure seems to generate the correct results!

 

Here's the updated expression:

 

PerfMin# =
// Total row count for the current slicer selections: ALLSELECTED
// removes the filters generated inside this visual (e.g. the rating
// of the current data point) but keeps slicer/external filters
VAR a =
    CALCULATE (
        COUNTROWS ( 'Calibration Detail Data' ),
        ALLSELECTED ( 'Calibration Detail Data' )
    )
RETURN
    // Each rating's data point filters 'Perf Rating Scale' to a single
    // row, so MAX returns that one rating's own PerfMin%
    CALCULATE ( a * MAX ( 'Perf Rating Scale'[PerfMin%] ) / 100 )
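A slightly more explicit alternative, sketched here and untested against this model, is SELECTEDVALUE, which returns BLANK rather than a silent maximum if more than one rating ever lands in the filter context:

PerfMin# (alt) =
VAR SelectedRows =
    CALCULATE (
        COUNTROWS ( 'Calibration Detail Data' ),
        ALLSELECTED ( 'Calibration Detail Data' )
    )
RETURN
    // BLANK (not a misleading max) if the context holds several ratings
    SelectedRows * SELECTEDVALUE ( 'Perf Rating Scale'[PerfMin%] ) / 100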

 

Thanks for your help,


2 REPLIES
v-cherch-msft
Employee

Hi @raimund

 

You may try creating a measure for PerfMin# instead of a calculated column, as below. A calculated column is evaluated only when the data is refreshed, so it cannot respond to slicers, while a measure is re-evaluated in the filter context of each visual:

 

PerfMin# =
VAR a =
    CALCULATE (
        COUNTROWS ( 'Calibration Detail Data' ),
        ALLSELECTED ( 'Calibration Detail Data' )
    )
RETURN
    CALCULATE ( a * MAX ( 'Calibration Detail Data'[PerMin%] ) / 100 )

Regards,

Cherie

Community Support Team _ Cherie Chen
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.

