I have a slicer based upon daily dates. I have a relative date filter for the last 7 days (not including today) on the page. So the range would be from 26 May 2020 to 01 Jun 2020.
If the user's last selection was 24 May 2020 and they return to the page today, 24 May is still selected and the page shows no data because of the page-level filter. Is there a way to drop that stale value when they open the page? I would have expected a refresh of the page to invalidate the selection.
Thanks for that, however doing the 'reset' would just return the slicer to a prior value. Single select slicers cannot (I don't think) start in an unselected state.
The issue I have is that the selected value is no longer valid for that page so I don't know why it persists. In the below graphic, I picked a week when I didn't have relative filtering set up. Then I put a relative filter of prior 6 calendar weeks, then expanded the slicer. The moment I click off the Feb 22 entry, it drops off the list.
I'm just trying to figure out how to no longer have it selected when the user goes back to the page without them having to do anything. If I could start out with nothing selected then the "reset to default" would work; however, it is a destructive operation. It clears all of the filters on all of the pages back to the default, and that is a bad practice as the user may want to have their report remain filtered to a specific department.
Another "gotcha" from Relative filtering.
Here is what I would suggest trying.
= Duration.Days(DateTime.Date(DateTime.LocalNow()) - [Date]) <= 7 and Duration.Days(DateTime.Date(DateTime.LocalNow()) - [Date]) > 0
If your date table was built in DAX, you'd just need similar logic to get the true/false value.
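For completeness, here is a minimal sketch of that equivalent calculated-column logic in DAX, assuming a date table with a 'Date'[Date] column (the table and column names are placeholders for whatever your model uses):

```dax
Last7DaysFlag =
// TRUE for the 7 full days before today, excluding today itself
'Date'[Date] >= TODAY () - 7
    && 'Date'[Date] < TODAY ()
```

Filtering the page or slicer on Last7DaysFlag = TRUE then mirrors the Power Query expression above.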
In general, try to avoid calculated columns. There are times to use them, but they are rare. Getting data out of the source system, creating columns in Power Query, or using DAX measures is usually preferable to calculated columns. See these references:
Calculated Columns vs Measures in DAX
Calculated Columns and Measures in DAX
Storage differences between calculated columns and calculated tables
Creating a Dynamic Date Table in Power Query
It sounds like there isn't a solution to this problem. I don't have a date table anywhere in the model as different report pages pull from different tables. The different tables either are daily or week ending and as such, there isn't a consolidated date table.
Calculated columns are not an option on a live connection for us either.
Finally, I created a table with four columns, among them the week-ending date, a number, and a Prior Week flag: a true/false value indicating whether the week-ending date fell in the last week. To simulate someone looking at the report last week and then again this week, I selected week ending 23 May 2020 as last week's selection, then filtered the slicer on the flag being true. 23 May 2020 still appeared in the selection list and was the selected value. When I selected 30 May 2020, 23 May went away.
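For anyone who wants to reproduce that flag in Power Query rather than by hand, a minimal sketch, assuming a [WeekEnding] date column (the column name is a placeholder); note that what counts as the "previous week" follows the locale's week-start convention:

```m
// Custom column: true when the week-ending date falls in the previous calendar week
= Date.IsInPreviousWeek([WeekEnding])
```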
It looks like this is 'by design' with slicers: if the selected value is no longer in the slicer's list, it still stays selected. I've noticed this in the past with a slicer that only displays the weeks in which a widget was sold. Looking at widget A, I might get 1/1/2020, 1/8/2020, and 1/15/2020, and select 1/8/2020. When I then select widget B, which only sold in the week ending 1/15/2020, 1/8/2020 is still selected in the filter, even though it's no longer a valid date, as becomes evident when I clear the selection.
I'll mark your last reply as a solution unless Microsoft has a better answer; I'll just consider it another limitation of the tool. I mean, seriously, how hard would it be to add a property to the card visualization so that, when it would return (blank), it could show something else without you having to write a measure and spend additional cycle time?
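Until such a property exists, the usual workaround is a wrapper measure. A minimal sketch, where [Total Sales] stands in for whatever measure feeds the card (both names are placeholders):

```dax
Sales for Card =
VAR Result = [Total Sales]
RETURN
    -- Show a friendly label instead of (blank) on the card
    IF ( ISBLANK ( Result ), "No data", Result )
```

Newer DAX versions also offer COALESCE ( [Total Sales], 0 ) when a numeric fallback is acceptable.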
@TMOCostanzo - why would you not have a date table? You just link the date table to every date field in your model (or at least one per table), and it doesn't matter whether it is a day, a week-ending date, or a month-ending date. It unites the model and lets you control any date-related aspect of the model from one place.
Various articles point to the performance impact of creating date tables, since they contain every date from the minimum to the maximum, by day. That is, if I have issues created in 2016 and the model runs through 2020, the date table has roughly 1,600 records, and it grows every day.
Additionally, if a table has multiple dates (let's say a trouble ticket system) including issue creation, last modified, first worked, resolved, wouldn't you need to create a relationship between the date table and each of those fields to use to slice the data? Since last modified is a date and time field, wouldn't your date table then also include the time in case you wanted to see what was modified during a work day from 6/2/2020 8:00 to 6/2/2020 17:00?
Finally, linking the date table and, if I understand you correctly, using a single filter to control the date range: the system wouldn't know, from a single table, which field in the issue table to reference. If I used a slicer against the date table, how would it know whether I wanted the range 5/24/2020 -> 5/30/2020 applied to the created, resolved, or modified field?
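For reference, the usual answer to the multiple-date-field question is one active relationship (say, on the created date) plus inactive relationships on the other date fields, activated per measure with USERELATIONSHIP. A minimal sketch, with hypothetical table and column names:

```dax
Tickets Resolved =
CALCULATE (
    COUNTROWS ( Tickets ),
    -- Activate the otherwise-inactive relationship to the resolved date
    USERELATIONSHIP ( 'Date'[Date], Tickets[resolved_dt] )
)
```

The slicer then always filters 'Date'[Date]; each measure decides which date role it plays.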
Maybe I'm not understanding the benefits of a date table, but it's one of the listed ways to improve performance from a number of sites. I don't know enough, with regards to the benefits, for me to implement it in a way that is beneficial; especially if I have multiple date fields in a record.
@TMOCostanzo - that is only if you use CALENDARAUTO(), which you should never do. Date tables should be created in Power Query. Below is a link to a blog post I wrote that will let you create a dynamic date table that moves and expands with the relevant dates in your model, not all possible dates.
Any article that argues against a date table because it has 4 years of dates really doesn't understand what it is talking about. 1,600 rows in Power BI is nothing. The issue is when you use CalendarAuto and some table has a fake expiration date of Dec 31, 9999. Then you get 2.9 million records. That is a problem.
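A minimal sketch of that Power Query approach, deriving the calendar's range from the fact table itself (the Issues query and CreatedDate column are placeholders for your own model):

```m
let
    // Bound the calendar by the dates actually present in the fact table
    MinDate  = Date.From(List.Min(Issues[CreatedDate])),
    MaxDate  = Date.From(List.Max(Issues[CreatedDate])),
    DayCount = Duration.Days(MaxDate - MinDate) + 1,
    Dates    = List.Dates(MinDate, DayCount, #duration(1, 0, 0, 0)),
    AsTable  = Table.FromList(Dates, Splitter.SplitByNothing(), {"Date"}),
    Typed    = Table.TransformColumnTypes(AsTable, {{"Date", type date}})
in
    Typed
```

Because the bounds come from the data, the table grows only as the model's real dates do, avoiding the Dec 31, 9999 blow-up described above.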
Here is an article that covers the basics of why a date table should be used. You generally never want to put dates from your FACT tables or other DIM tables in visuals; always use the linked date from the date table. In fact, I most often hide those other date fields so I don't accidentally use them.
I liked the point on the inactive relationships by the other poster, I didn't even know that was an option, I'd be curious as to the performance impact.
I can't properly explain the model we use, as I don't have the correct vocabulary, but I don't think we use a 'true' fact/dim model. We have ordinary people developing reports off the source, and it's not intuitive to a non-developer how to create visuals that way. So a lot of our tables are 'result' tables: users can use the dim tables for filtering, but the 'report' tables also include some of the data from the dim tables.
And I have no idea what you mean by "never want to put dates from your FACT tables or other DIM tables in visuals" as fact tables include things like create_dt, last_modified_dt, sales_dt, etc. If all of those values were in a Dim table, then I would have 5.23 million rows in that table - I just ran a count distinct on the last modified date in our table.
@TMOCostanzo Power BI is built on top of SQL Server Analysis Services - you actually launch an SSAS instance when Power BI launches on your PC.
It is optimized for a query model where the DIM tables are NOT normalized, using a Star Schema. Massive repetition in the DIM tables is fine. Segregating things into a snowflake schema, the way a typical SQL person would, is not good for Power BI; it doesn't perform well. If you know SQL and are used to that, you have to unlearn it for Power BI, or you will bog down its performance. See this article for more, or look at this course and book by SQLBI.
If you cannot relate your model to a Star Schema, you will run into problems with performance and with writing measures that return what you want. It is all designed around a Star Schema: DAX, the VertiPaq engine, the tabular model of SSAS, all of it. While Power BI presents tables to you, in reality it deals only with columns. You can read some of the technical details here. It is interesting to note that one of the recommendations is to always add a DATE dimension table. You don't need to understand all of it; just know that this is how it is designed, and doing it differently means it will not perform as well as it could and will make you work harder as a developer to write the measures for the visuals you want.
While I understand your point, I believe you are missing mine, and we are way off the original question of how to clear a slicer value; that has not been resolved, and it doesn't appear that it can be. We can limit the slicer as much as we want, but nothing in Power BI deselects slicer values that are no longer valid.
We do not have developers using Power BI in all cases; we have project managers and others who are not versed in databases or development, nor have any inclination to be. As such, we can set up any sort of star schema we want, but it doesn't matter: we have to provide them a clear, single table to use when creating reports and facilitate ease of use. Explaining how to write measures using inactive relationships is well beyond their skill set.
The analogy is this - in SQL Server, you have a set of dim and fact tables storing the data. The customer needs to write 10 SQL statements that always join 5 of these tables together, albeit with different where clauses. The easiest way for them to accomplish this with minimal effort on their part would be to request a view. Does this make it less efficient? Maybe, but that is what we have done for our users in this model. Can we associate that view to dim tables in the model and use that dim table for filters? Yes. Can we force them to create visualizations based on fact to dim relationships? No.
If we had a team of developers, or others who could understand and use the model in that way, life would be easier. However, that is not the case. I'll have to look up how we could create the measures that use the inactive relationships; since it's a live-connection model, I may be able to create those measures on the source side.
As a side note, I haven't had the time to review anything you've provided before, so I apologize if anything I've said above is covered in those posts.
As the people who build the models, there is no one true way to do things. We can have our best practices and ideas, but at the end of the day we need to provide our customers with a model they can use and understand. Education is key, but given the wide availability of the tool, sometimes we have to cut corners. With a limited number of technically savvy team members, it is not always an efficient use of our time to build the perfect model; you build a good-enough model. For reports that update twice a day, load less than 10 MB of data, and are not modified after creation, making the most performant model possible is just not worth the time. For reports that update every 30 minutes, then yes: be uber-efficient, spend the time documenting to an end-user level, or flag those as "For Developer Modification Only."
Again, I do get what you are saying, and I appreciate the education provided.