Sort of. It's clunky, but I have essentially created three levels of dataflows:
1) the first level is a separate dataflow for each object (Work Details, Installed Product, Work Orders, and so on all have their own dataflows). In my case, I have about 6 dataflows.
1.a) make sure to include the "Last Modified Date" field in each object, and your last step before saving the dataflow needs to be sorting in descending order by the Last Modified Date field during initial setup (see the first sketch after this list). That part is VERY important!
1.b) refresh each dataflow as you go before setting up the next one, and enable "detect data changes."
2) the initial collator: I had to set up two separate DFs during initial testing, but you can do it with a single one. In this one, you can also set up incremental refresh, but you HAVE to delete duplicate rows by ID inside the DF (that's the de-dupe step in the first sketch below). Power BI will at some point create duplicate records rather than keeping the newest data if you do not do this. Ask me how I know...
3) the final consumption DF: this one is honestly not all that necessary, but I found it works better performance-wise in my use case. After the first two levels collect and prep the data, my last, single DF manipulates all of that data to get it ready for use in my reports (second sketch below): renaming stuff, adding custom columns, etc.
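If anyone wants to replicate the sort/de-dupe part, here's a rough Power Query M sketch of what those steps look like. Apart from "Last Modified Date", every name in it (the WorkOrders entity, the "Id" column) is a placeholder for whatever your own objects use, and the Table.Buffer call is just a common trick to pin the descending sort so Table.Distinct keeps the first (newest) row it sees:

```
let
    // Placeholder for one of the linked level-1 entities pulled into the collator.
    Source = WorkOrders,

    // Newest records first, using the Last Modified Date field each object carries.
    Sorted = Table.Sort(Source, {{"Last Modified Date", Order.Descending}}),

    // Buffer pins the sort order, then Table.Distinct drops duplicates by Id,
    // keeping the first row per Id, which is now the most recently modified one.
    Deduped = Table.Distinct(Table.Buffer(Sorted), {"Id"})
in
    Deduped
```

In my setup the descending sort is the last step of each level-1 dataflow and the de-dupe lives in the collator, but the idea is the same either way.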
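And a similar sketch of the kind of cleanup the final consumption DF does. The column names and the custom column here are purely illustrative, not my actual fields:

```
let
    // Placeholder reference to the collator's output entity.
    Source = CollatedData,

    // Friendlier names for report fields (illustrative pairs only).
    Renamed = Table.RenameColumns(Source, {
        {"WorkOrderStatus", "Order Status"},
        {"Last Modified Date", "Last Modified"}
    }),

    // Example custom column; assumes "Last Modified" is a datetime column.
    WithAge = Table.AddColumn(
        Renamed,
        "Days Since Modified",
        each Duration.Days(DateTime.LocalNow() - [Last Modified]),
        Int64.Type
    )
in
    WithAge
```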
Lastly, I have the first level of DFs refreshing every hour (and see about four failures a day due to timeouts), the second level refreshes twice a day, and the third only refreshes once per day.
This is a very clunky workaround, but it works, and it's surprisingly faster and more reliable than doing full refreshes every time. It's all temporary, though, as we will be switching over to an Azure warehouse in the near future, at which point at least level 1 will disappear, if not level 2 as well.