To test the AutoML feature, I created a new dataflow in the workspace and trained a model with it. The results were satisfactory, so I scheduled the dataflow for refresh.
Today a BI error was detected, and on investigation I found the following message: "ErrorMessage":"Dataflow refresh failed because your organization's Fabric compute capacity has exceeded its limits. Try again later. Learn more at https://aka.ms/capacitymessages"
I have identified three problematic dataflows in the Fabric Capacity Metrics app and am trying to work out how to resolve the situation.
Currently, the dataflows in this workspace are unusable, and I'm unable to refresh any of the related BI content.
Can anyone help with this?
- Learn about overages and burndowns: capacity throttling resolves itself within 24 hours unless you keep accruing CU debt.
- Consider enabling Autoscale (it comes at additional cost!).
- Abandoning the workspace will resolve nothing. What would help is moving the workspace to another, non-overloaded capacity - assuming, of course, that you have that option.
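To build intuition for the first point, here is a back-of-envelope sketch of how accrued CU debt burns down when the capacity goes idle. This is a simplified model for illustration only, not Fabric's actual smoothing/throttling algorithm; the function name and the example debt figure are hypothetical, and the real numbers should be read from the Fabric Capacity Metrics app.

```python
def burndown_minutes(cu_debt_seconds: float,
                     capacity_cus: int,
                     idle_fraction: float = 1.0) -> float:
    """Rough estimate of minutes until carried-over CU debt is paid off.

    cu_debt_seconds: accrued overage in CU-seconds (hypothetical figure,
        in practice read from the Fabric Capacity Metrics app).
    capacity_cus:    the SKU's capacity units (e.g. 64 for an F64).
    idle_fraction:   share of the capacity left idle to pay down the debt.

    Simplified assumption: idle capacity pays down debt at a constant
    rate of (capacity_cus * idle_fraction) CU-seconds per second.
    """
    if capacity_cus <= 0 or not (0 < idle_fraction <= 1):
        raise ValueError("need capacity_cus > 0 and 0 < idle_fraction <= 1")
    idle_cus = capacity_cus * idle_fraction
    return cu_debt_seconds / idle_cus / 60


# Under this model, a fully idle F64 clears 230,400 CU-seconds of
# debt in an hour; at half idle it takes two hours.
print(burndown_minutes(230_400, 64))       # → 60.0
print(burndown_minutes(230_400, 64, 0.5))  # → 120.0
```

The takeaway matches the advice above: pausing the offending dataflows (rather than adding more load) is what lets the debt drain and the throttle lift.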