Hello,
I created a delta table but I cannot add it to a new semantic model. There are no objects to select from in the New semantic model dialog box. I also tried the SQL connection string, but the delta table is not there either.
Proud to be a Super User!
Solved! Go to Solution.
OK. After several tests, it appears that a delta table created from a sempy dataframe cannot be added to a SQL endpoint, either because it is not supported or because it is a bug.
To give you some brief background: I have a very large pbix file imported from a data source that has since been deprecated. The data is still in the pbix, but it needs further transformation, and connecting via the XMLA endpoint has proven so slow that it triggers a capacity error. The data could be exported to CSV, but materializing it within Power BI would require a lot of computing power, so exporting it all at once using DAX Studio is out of the question, and doing it in chunks can be very tedious.
So I loaded the table into a notebook using Spark SQL and then saved it back as a delta table, and voila, it now appears as a SQL endpoint object and can be queried.
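For reference, the Spark SQL round trip described above can be sketched as a small helper. This is a sketch only: the function name and both table names are placeholders of mine, and `spark` is the session object a Fabric notebook provides.

```python
def resave_delta_table(spark, source_table: str, target_table: str) -> str:
    """Read an existing table via Spark SQL and rewrite it as a Delta table.

    Sketch only: assumes a Fabric notebook where `spark` is predefined and
    lakehouse tables live under "Tables/". Both table names are placeholders.
    """
    target_path = "Tables/" + target_table
    df = spark.sql(f"SELECT * FROM {source_table}")
    df.write.format("delta").mode("overwrite").save(target_path)
    return target_path
```

After the write completes, refreshing the SQL analytics endpoint should make the new table appear.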
To those who have the same use case, here's the cleaned notebook code:
# Read a table from a semantic model into a sempy (pandas-backed) dataframe
import sempy.fabric as fabric

sempy_dataframe_name = fabric.read_table(
    workspace="workspace name, not id",
    dataset="semantic model or dataset name, not id",
    table="table name in the model",
)

# Rename columns so there are no spaces
column_mappings = {"column name": "column_name"}
sempy_dataframe_name.rename(columns=column_mappings, inplace=True)

from pyspark.sql import SparkSession

# Create a SparkSession (in a Fabric notebook, `spark` already exists)
spark = SparkSession.builder \
    .appName("Convert DataFrame to PySpark DataFrame") \
    .getOrCreate()

# Convert the sempy dataframe to a Spark dataframe
df_spark = spark.createDataFrame(sempy_dataframe_name)

# Specify the delta table name and path
delta_table_name = "delta table name"
delta_table_path = "Tables/" + delta_table_name

# Write the Spark dataframe to a Delta table in the lakehouse
df_spark.write.format("delta").save(delta_table_path)
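One optional refinement to the rename step above: since a sempy dataframe is pandas-backed, the column_mappings dictionary can be generated instead of typed by hand. A minimal sketch in plain pandas (the helper name is my own, not part of sempy):

```python
import pandas as pd

def sanitize_columns(df: pd.DataFrame) -> pd.DataFrame:
    # Replace spaces with underscores in every column name so the
    # Spark/Delta layer does not choke on them.
    mapping = {col: col.strip().replace(" ", "_") for col in df.columns}
    return df.rename(columns=mapping)

df = pd.DataFrame({"order id": [1, 2], "customer name": ["a", "b"]})
clean = sanitize_columns(df)
print(list(clean.columns))  # ['order_id', 'customer_name']
```

The original dataframe is left untouched because `rename` returns a copy when `inplace` is not set.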
Hi @danextian ,
It was great to know that you were able to reach a resolution. We hope you keep using this forum and encourage others to do the same. You can always help other community members by answering their queries.
I never experienced that myself. I guess there must be some technical issue with this Lakehouse or something. Because you cannot find the table in the SQL Analytics Endpoint, it sounds to me like there is some problem with the metadata sync between the Lakehouse and SQL Analytics Endpoint.
How did you create the Lakehouse and the table? What method did you use to populate the table with data? Are you able to query the table from a Notebook?
But to be honest, I would just try to create a new table, or even create a new lakehouse, or even create a new workspace, and see if that works. Hopefully that will solve your issue.
This may not be a satisfactory answer.
If you want to understand the root cause, I hope someone else can help you with that.
Did you click the refresh icon next to the search bar?
How about if you open the SQL analytics endpoint, is the table there?
(Or in the default direct lake semantic model)
Hi @frithjof_v ,
Thanks for the response. I clicked the refresh icon several times and also refreshed the web page. It is not in the default semantic model, as one wasn't created automatically, but there was an option to add an object; I tried it and the choices were empty. The table is not in the SQL analytics endpoint either, nor available through the Lakehouse connector.
Hi @danextian ,
Thanks for using Fabric Community.
Yes, sometimes this can happen due to a metadata sync issue or some other technical issue.
Can you please try creating a new delta table or a new lakehouse, as suggested by @frithjof_v?
Please let us know if that works; otherwise we will try to debug the scenario.