You need to change this parameter in the cluster configuration. In the cluster settings, expand Advanced Options, select the Spark tab, and add spark.driver.maxResultSize 0
(for unlimited) or whatever value suits your workload. Note that 0 is not recommended, since it removes the safety limit on data collected to the driver; it is better to optimize the job by repartitioning so that less data is pulled back to the driver at once.
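As a sketch, the Spark config box on the cluster's Advanced Options > Spark tab takes one property per line; the 4g value below is an illustrative example, not a recommendation from the docs:

```
spark.driver.maxResultSize 4g
```

Setting an explicit size like this keeps a guardrail in place, whereas 0 disables the check entirely and can let a large collect() crash the driver with an out-of-memory error instead.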
See the links below for more information:
https://docs.microsoft.com/en-us/azure/databricks/kb/jobs/job-fails-maxresultsize-exception#solution
https://stackoverflow.com/questions/31058504/spark-1-4-increase-maxresultsize-memory
https://issues.apache.org/jira/browse/SPARK-12837
Best Regards,
Community Support Team _ Zeon Zheng
If this post helps, please consider accepting it as the solution so that other members can find it more quickly.