Hi, I'm fairly new to architecting in this space, but I can grok most of the concepts pretty readily. I'm stuck trying to figure out a way to stub out a dataset for a 'test' environment, so I can keep with the principle of testing against known data and separating the model from the report.
For many of the data sources, I may not have enough control to spin up test environments or to reliably reload them with known data. I see lots of help on how to parameterize connection strings, use Power Query to swap from a flat file to SQL, etc., but that's not quite what I'm looking for here.
Something along the lines of:
1st Git repo containing:
A .PBIP project for a dataset sourced from either a .CSV or just a Power Query table with junk data.
A .PBIP project for a report connecting to that dataset (a thin report).
2nd Git repo containing:
The actual data model, which could be connected to Dynamics/CRM/SQL/whatever, but where the dataset name & schema of the output tables match the stub in the 1st repo.
Is there a way to connect up some number of workspaces/deployment pipelines and have the "stub" dataset in the first stage, but swap to the full model later down the stages (UAT, etc.), without having to manually repoint the report to a different workspace? (I've tried doing that manually, and it works, so in theory it's possible.) Or is it totally not worth the headache, and would my time be better spent trying to invent dev/test versions of the data sources?
Thanks!
You seem to be describing Deployment Pipelines. Deployment rules (data source rules and parameter rules) can be set per stage, so the same dataset can point at different sources in Dev, Test, and Prod, and reports deployed through the pipeline are automatically bound to the dataset copy in their own stage, so you don't have to repoint them manually.
You can also use parameters in Power Query to limit the data you develop with, and then open up the entire data range in the Power BI service, similar to what incremental refresh does.
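To make the parameter idea concrete, here's a minimal Power Query M sketch. The parameter name (`Environment`), the stub file path, and the server/database names are all placeholders I've made up for illustration; a deployment-pipeline parameter rule would override `Environment` per stage:

```m
// Assumed text parameter defined in Power Query, e.g. "Dev" or "Prod":
// Environment = "Dev" meta [IsParameterQuery = true, Type = "Text"]

let
    // Pick the source based on the Environment parameter.
    // In Dev this reads the junk-data stub CSV; in other stages it
    // hits the real server (names here are hypothetical).
    Source =
        if Environment = "Dev" then
            Csv.Document(
                File.Contents("C:\stub\SampleSales.csv"),
                [Delimiter = ",", Encoding = 65001]
            )
        else
            Sql.Database("prod-server.example.com", "SalesDb")
in
    Source
```

Note this is a simplification: the CSV branch returns a table directly, while `Sql.Database` returns a navigation table you'd still drill into, so in practice you'd shape both branches to emit the same schema, matching the "stub and real model share a schema" idea from the question.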