Greetings all,
Within the PBI service, my company has a fairly standard deployment pipeline (Dev > Beta > Prod). All three stages draw their data from a single Azure data warehouse.
Recently we decided to reduce the Beta environment to a single year of data for select customers. To do this, we connected to the semantic model behind the PBI report via Analysis Services in SSMS, selected the table we wanted to clear out, and ran a Process Clear on that table.
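For reference, the Process Clear we ran in SSMS is equivalent to a TMSL refresh command of type "clearValues" against the model. A minimal sketch (the database and table names here are placeholders, not our real ones):

```json
{
  "refresh": {
    "type": "clearValues",
    "objects": [
      {
        "database": "MyDataset",
        "table": "FactSales"
      }
    ]
  }
}
```

This drops the data from the table while keeping its metadata and partitions in place, which is why a subsequent reprocess is needed.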
The intent was to then apply some filters and reprocess the table so it contained only the limited data. However, when we attempt to reprocess the table, it fails within seconds with the following error:
Failed to save modifications to the server. Error returned: '{"error":{"code":"DMTS_MonikerWithUnboundDataSources","pbi.error":{"code":"DMTS_MonikerWithUnboundDataSources","details":[{"code":"Server","detail":{"type":1,"value":"XXX"}},{"code":"Database","detail":{"type":1,"value":"XXX"}},{"code":"ConnectionType","detail":{"type":0,"value":"Sql"}}],"exceptionCulprit":1}}}
Here, XXX replaces the server and database names for security purposes, but I'll note that both are correct, so the connection doesn't appear to be mispointed in any way.
If it's relevant, the major tables are also partitioned. I have tried processing individual partitions but get the same error.
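For completeness, reprocessing a single partition is also a TMSL refresh, just scoped one level deeper. A sketch of what we ran (again, the database, table, and partition names are placeholders):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "MyDataset",
        "table": "FactSales",
        "partition": "FactSales-2023"
      }
    ]
  }
}
```

Whether run through the SSMS processing dialog or as a script like this, the result was the same DMTS_MonikerWithUnboundDataSources error.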
Other things we've tried include opening the model in Tabular Editor and reapplying the refresh policy (which controls the partitions) in case something was broken there. Tabular Editor detected and added a new partition to the table, but the error persisted. We have also tried scripting the processing out and running the scripts manually, to no effect.
I could really use some help with this as this is causing quite a bit of hold-up in our deployment pipeline for this report!
UPDATE: I think I found the cause of the issue. It was very unexpected.
I found this post which suggested the issue may be related to the dataset owner. I checked to ensure that I was the dataset owner, and while I was, I noticed that one of the data sources was disconnected. This shouldn't have mattered for the table in question as it was from a source that was still connected.
However, reconnecting this data source apparently allowed the table to process again.
Same for me, thanks for sharing!
Exactly my issue, thanks.