In my company we work with Dataflows Gen1 on a large volume of data, and in some cases this causes problems during data loading: we load the data from Denodo, which enforces a maximum load time (timeout) of 15 minutes, and in some cases that time runs out.
To work around this we partitioned the loads by semester so that each one handles a smaller volume of data, but even so we sometimes still hit the timeout error because there is not enough time to load everything.
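The semester partitioning described above can be sketched generically; this is illustrative Python for the date-range logic only, not Dataflows M code, and the year range is a made-up example:

```python
from datetime import date

def semester_ranges(start_year, end_year):
    """Yield (start, end) date pairs, one per semester.

    Each pair would back one filtered query/entity, so that no single
    load has to pull the full history within the 15-minute timeout.
    """
    for year in range(start_year, end_year + 1):
        yield (date(year, 1, 1), date(year, 7, 1))       # first half (H1)
        yield (date(year, 7, 1), date(year + 1, 1, 1))   # second half (H2)

ranges = list(semester_ranges(2023, 2024))
print(len(ranges))  # 4 semester partitions across two years
```

In the dataflow itself, each pair would translate into a date filter on the Denodo view (e.g. `Table.SelectRows` in Power Query M), one query per partition.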
How should we structure these dataflows so that we do not have these problems?
What is the best way to work with a large amount of data with Dataflows Gen1?
Clarification: we have to stay on Gen1, since we are not yet allowed to use Gen2.
Does it need to be a dataflow? Would a bunch of CSV or Parquet files work as well?
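If plain files are an option, each semester partition could be staged as its own file so that no single load exceeds the timeout. A minimal stdlib sketch, with made-up column names (`order_id`, `order_date`, `amount`) and output folder purely for illustration:

```python
import csv
from pathlib import Path

# Hypothetical sample rows standing in for data pulled from Denodo
rows = [
    {"order_id": 1, "order_date": "2024-03-15", "amount": 120.0},
    {"order_id": 2, "order_date": "2024-09-01", "amount": 80.5},
]

def partition_key(iso_date):
    """Map an ISO date string to a semester label, e.g. '2024-H1'."""
    year, month = iso_date[:4], int(iso_date[5:7])
    return f"{year}-H{1 if month <= 6 else 2}"

out_dir = Path("staging")
out_dir.mkdir(exist_ok=True)

# Group rows by semester, then write one CSV file per partition
by_partition = {}
for row in rows:
    by_partition.setdefault(partition_key(row["order_date"]), []).append(row)

for part, part_rows in by_partition.items():
    with open(out_dir / f"orders_{part}.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "order_date", "amount"])
        writer.writeheader()
        writer.writerows(part_rows)
```

The same grouping would work for Parquet (e.g. via pandas/pyarrow), which compresses better and preserves types; the point is that each small file can then be loaded or combined without any single query hitting the 15-minute limit.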
Also, this: Known issue - Visuals using the Denodo connector might show connection errors - Microsoft Fabric | M...