eddiehsu
Regular Visitor

Mismatched minReaderVersion and readerFeatures

1. I used a CSV file to create table `bot` in the lakehouse; it had 3 rows in the beginning.

2. I created a notebook to get the latest id in table `bot`; the code is as below:

# Welcome to your new notebook
# Type here in the cell editor to add code!

# import requests

# requests.get("https://4236-114-32-8-159.ngrok-free.app/test/lakehouse")

df = spark.sql("SELECT MAX(id) as max_id FROM lakehouse_2.bot")
mssparkutils.notebook.exit(df.first()["max_id"])

3. I created a data pipeline which has 2 nodes:
The 1st node executes the code above.
The 2nd node uses the output of the first node (the latest id) as the parameter of a GET API, and appends the response data into table `bot`.
4. I set up a schedule to run every minute. It executed successfully 10 times, and then failed every time after, due to the notebook error: `SparkRuntimeException: Error while decoding: java.lang.IllegalArgumentException: requirement failed: Mismatched minReaderVersion and readerFeatures. newInstance(class scala.Tuple3).`
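For reference, the handoff between the two nodes can be sketched in plain Python. The value passed to `mssparkutils.notebook.exit(...)` surfaces in the pipeline as the notebook activity's exit value (in Fabric pipelines this is typically referenced with an expression like `@activity('Notebook1').output.result.exitValue`), and the second node uses it as a GET parameter. The base URL and parameter name below are illustrative placeholders, not the actual API:

```python
# Sketch of what the second pipeline node does with the notebook's exit value.
# The base URL and the `since_id` query parameter are hypothetical placeholders.
def build_request_url(base_url: str, max_id: int) -> str:
    """Ask the API only for rows newer than the latest id already in `bot`."""
    return f"{base_url}?since_id={max_id}"

# e.g. build_request_url("https://example.com/rows", 3)
# -> "https://example.com/rows?since_id=3"
```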

I don't know how to fix this.
1 ACCEPTED SOLUTION
govindarajan_d
Solution Supplier

Hi @eddiehsu ,

You can take a backup of the table and try to alter the property by running the SQL command below in Spark notebook:

ALTER TABLE <table-identifier> SET TBLPROPERTIES('delta.minReaderVersion' = '1', 'delta.minWriterVersion' = '3')
This might help to fix the issue. 
 
But as you said, this looks like a bug since it had been working in previous executions. 

View solution in original post
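If you want to inspect what the table's protocol actually says before (or after) altering it, the versions live in the table's `_delta_log` JSON commit files as a `protocol` action. Below is a minimal sketch; the consistency rule encodes my reading of the Delta protocol specification, namely that `readerFeatures` is only expected when `minReaderVersion` is 3:

```python
import json
from pathlib import Path

def latest_protocol(delta_log_dir: str) -> dict:
    """Scan the _delta_log commit files in order and return the most
    recent 'protocol' action found (each file is newline-delimited JSON)."""
    protocol = {}
    for log_file in sorted(Path(delta_log_dir).glob("*.json")):
        for line in log_file.read_text().splitlines():
            action = json.loads(line)
            if "protocol" in action:
                protocol = action["protocol"]
    return protocol

def protocol_is_consistent(p: dict) -> bool:
    """Per the Delta spec (as I read it), readerFeatures should be present
    exactly when minReaderVersion >= 3."""
    has_features = "readerFeatures" in p
    return (p.get("minReaderVersion", 1) >= 3) == has_features
```

If `protocol_is_consistent` returns False for the protocol you find in the log, that lines up with the "Mismatched minReaderVersion and readerFeatures" requirement failure in the error message.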

14 REPLIES
scca
Frequent Visitor

I'm getting the same error. The weird thing is, I'm running the same notebook concurrently across different tables in the same manner, and all of them succeed with no issues except for one, and it even occurs during manual runs...

I'm also getting the same error. 

I found a temporary fix for the problem: we can simply create a new table and copy the existing table data to the new table.
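To make that workaround repeatable, the copy-and-swap can be scripted in a notebook cell. A minimal sketch, assuming a Spark notebook and example table names (`lakehouse_2.bot_new` is a hypothetical temporary name); the idea is that `CREATE TABLE ... AS SELECT` writes a fresh Delta table that picks up the current default protocol versions:

```python
# Build the SQL for the copy-and-swap workaround; table names are examples.
def copy_table_statements(src: str, tmp: str) -> list:
    """Recreate the table fresh (new Delta protocol/metadata), then swap it in."""
    return [
        f"CREATE TABLE {tmp} AS SELECT * FROM {src}",  # copy the rows
        f"DROP TABLE {src}",                           # drop the broken table
        f"ALTER TABLE {tmp} RENAME TO {src}",          # keep the original name
    ]

# In the notebook, run each statement with spark.sql:
# for stmt in copy_table_statements("lakehouse_2.bot", "lakehouse_2.bot_new"):
#     spark.sql(stmt)
```

Back up the data first: dropping and renaming managed tables is destructive if a step fails midway.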

Yeah, though it's helpful in Dev, it's quite the problem when running a live pipeline and the error just comes out of nowhere. I haven't had that problem for a while, but it is annoying when it does happen, unless it's something the user can solve.

Yup, I agree.

Interestingly, I get the same error when trying to run that alter...

Same for me

Hi @eddiehsu , @govindarajan_d 
Apologies for the issue you have been facing. You are right. This is a bug and I got confirmation from the internal team that the fix is rolling in production and the ETA is end of this week across the world.
So try to rerun the pipelines on the tables next week and do let me know if you are still facing any issues.

Please let me know if you have any further questions.

Hi @eddiehsu 
We haven't heard from you on the last response and wanted to check back to see if your query got resolved. If not, please reply with more details and we will try to help.
Thanks


Just hit this issue... the source table was created via a Dataflow Gen2, and trying to read it (`select *`) from a notebook gets the above error.

v-nikhilan-msft
Community Support

Hi @eddiehsu 
Thanks for using Fabric Community.
The error SparkRuntimeException: Error while decoding: java.lang.IllegalArgumentException: requirement failed: Mismatched minReaderVersion and readerFeatures. newInstance(class scala.Tuple3) indicates a compatibility issue between the Spark version used for writing data and the Spark version used for reading it. This often occurs due to library or configuration changes.
Troubleshooting Steps:

1) Check Spark Versions: Verify the Spark versions used in your notebook and pipeline nodes. They should be consistent. If necessary, upgrade or downgrade Spark versions to align them.

2) Review Library Versions: Ensure compatibility between Spark and any external libraries or dependencies you're using. Check for updates or known issues with specific library versions.

3) Inspect Data Format: Examine the structure and format of the data being written to the bot table. Ensure it's compatible with the Spark version used for reading. If necessary, adjust the data format or writing process to maintain compatibility.

You can also refer to this post which is related to the same error: https://community.fabric.microsoft.com/t5/Dataflows/Error-using-data-imported-through-DataFlow-when-...

Hope this helps. Please let us know if you have any further queries.

The scheduler executed 10 or more times and successfully appended API data into the table. I don't think the error is due to the reasons you listed, since I didn't modify anything after I set up the schedule.

I found another thread, created on 2023-12-01, where the solution said it's a bug that would be fixed by the end of that week. However, another reply on 2023-12-28 said they still hit the same problem.
https://community.fabric.microsoft.com/t5/General-Discussion/Notebook-error-reading-delta-table/m-p/...
