Hi all,
I have a Power BI dashboard that draws on datasets from several sources. Our team is having trouble accessing a number of these datasets because of CRM access issues, so our solution has been to house data elsewhere when it requires continuous updating. For data that does not require updating, I have just been using the historical pre-loaded data held in the PBI file from when the connection was working.
I now want to set up the queries so that clicking Refresh will only refresh the 15-20 data tables housed elsewhere while ignoring the archived tables with the broken connection -- since including them causes the whole refresh to fail, like so:
To replicate this, I have attached an example file which calls on two datasets:
LINK: https://drive.google.com/drive/folders/1TFelWa3OmuLTtI4cAnUoMe9Vvxuxw4KX?usp=sharing
NOTE: you will need to add these two datasets to the PBI file on your local device, then close the PBI file and change the subfolder name like so:
Refreshing now produces an error:
In Power Query, I had hoped the solution would be to uncheck 'Include in report refresh' (as below). However, I cannot apply this change without a working connection to begin with, which leaves me unable either to refresh the query or to exclude it from refresh. Is there a workaround for this?
So far the solutions I see are:
Is there a better way here that I am not seeing? Really appreciate any help with this!
Many thanks,
Taylor
Hi Taylor,
Ok, so there are two parts to this: 1) holding a static version of your data to use until the link is fixed, then 2) dynamically switching back to the 'correct' data once the link is working again.
1) For this part, I would recommend loading your archive data into a Dataflow from whichever link currently works. You only need to connect and refresh once, then don't schedule any further refreshes on this Dataflow so it remains in a static state but also easily available to all reports.
2) In each of your queries, you should reference both sources (archive Dataflow and maybe-unbroken-link-source) and dynamically switch between them based on an "if desired link still broken then use Dataflow" evaluation.
Chris Webb covers this really well in the following link. Just make sure to also read the comments on the post as Ken Puls adds a key update to the original code presented by Chris:
https://blog.crossjoin.co.uk/2014/09/18/handling-data-source-errors-in-power-query/
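To make it concrete, here's a rough sketch of what one of those queries could look like in the Advanced Editor. Everything specific below is a placeholder (the CSV URL, the workspace/dataflow IDs and the "CrmArchive" entity name are made up for illustration), and the Dataflow navigation steps just follow the general shape the Power Platform Dataflows connector generates, so your own generated steps may differ:

let
    // Part 1: the static archive entity held in the Dataflow. It was
    // refreshed once while the link still worked and never refreshed again.
    DataflowSource = PowerPlatform.Dataflows(null),
    Workspaces = DataflowSource{[Id = "Workspaces"]}[Data],
    Workspace = Workspaces{[workspaceId = "your-workspace-id"]}[Data],
    Dataflow = Workspace{[dataflowId = "your-dataflow-id"]}[Data],
    Archive = Dataflow{[entity = "CrmArchive", version = ""]}[Data],

    // Part 2: the maybe-still-broken live source. Wrapping it in 'try'
    // returns a record with HasError/Value fields instead of raising an
    // error, so a dead connection no longer fails the whole refresh.
    LiveAttempt = try Csv.Document(Web.Contents("https://example.com/crm-export.csv")),

    // If the link is still broken, use the static Dataflow copy; once it
    // is fixed, the same query switches back to live data on its own.
    // Note that 'try' is lazy in M - some source errors only surface when
    // the table is actually evaluated - which is why the comments on
    // Chris's post (with Ken Puls' update) are worth reading in full.
    Output = if LiveAttempt[HasError] then Archive else LiveAttempt[Value]
in
    Output

The nice part is that once the CRM link comes back, you don't have to touch anything: each query just starts pulling live data again on the next refresh.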
Pete
Proud to be a Datanaut!
Hi Pete,
Thanks for taking the time to look into this. I think this will probably be the smoothest way to address the issue and future-proof the current system.
Thanks again!
Taylor