Hi, I am running a SQL query from Power BI that imports only the columns having at least one non-empty, non-null data point from a table with ~600 columns. I am using a SQL query directly because M takes too long.
The query runs fine in SSMS and I can see the preview in the Query Editor, but I get the following error when I try to Close & Apply:
Failed to save modifications to the server. Error returned: 'OLE DB or ODBC error: Exception from HRESULT: 0x80040E1D.'
Oddly, it works fine when I use the following query (SQL Server 2016), which can give me columns with empty values (I call this #1 for reference):
set @columnscountqry = REPLACE(STUFF((
        SELECT ',count(tbl.' + QUOTENAME(ic.COLUMN_NAME) + ') AS ' + QUOTENAME(ic.COLUMN_NAME)
        FROM INFORMATION_SCHEMA.COLUMNS ic
        WHERE ic.TABLE_NAME = 'tblexample'
        FOR XML PATH(''), TYPE
    ).value('.', 'VARCHAR(MAX)'), 1, 1, ''), '> 0', '>0')
I run the following to get the columns that I want, i.e. the columns that have at least one non-empty, non-null value (#2 for reference):
set @columnscountqry = REPLACE(STUFF((
        SELECT ',SUM(IIF(LEN(RTRIM(fb.' + QUOTENAME(ic.COLUMN_NAME) + ')) > 0, 1, 0)) AS ' + QUOTENAME(ic.COLUMN_NAME)
        FROM INFORMATION_SCHEMA.COLUMNS ic
        WHERE ic.TABLE_NAME = 'tblexample'
        FOR XML PATH(''), TYPE
    ).value('.', 'VARCHAR(MAX)'), 1, 1, ''), '> 0', '>0')
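For completeness, here is a minimal self-contained sketch of how #2 is built and executed. The variable declarations, the final SELECT wrapper, and the sp_executesql call are illustrative, not copied from my actual script, and I've dropped the trailing REPLACE, which only compacts '> 0' to '>0':

```sql
DECLARE @columnscountqry NVARCHAR(MAX);

-- Build one SUM(IIF(...)) counter per column of tblexample (#2)
SET @columnscountqry = STUFF((
        SELECT ',SUM(IIF(LEN(RTRIM(fb.' + QUOTENAME(ic.COLUMN_NAME) + ')) > 0, 1, 0)) AS '
               + QUOTENAME(ic.COLUMN_NAME)
        FROM INFORMATION_SCHEMA.COLUMNS ic
        WHERE ic.TABLE_NAME = 'tblexample'
        FOR XML PATH(''), TYPE
    ).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

-- Wrap the counters in a SELECT and run it dynamically
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT ' + @columnscountqry + N' FROM dbo.tblexample fb;';
EXEC sp_executesql @sql;
```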
I use these column names in an additional SQL script to get the output that I want. It works fine in SSMS and the results are accurate. I have not included the full SQL script, as the rest of it is the same.
I think I should compare the data types that Power Query assigns between #1 and #2, since #1 imports without error. Interestingly, #2 produces fewer columns, as expected, and all of #2's columns are also pulled in #1. However, I do not know how the assigned data types differ, as there are nearly ~300 columns to check.
Can you please let me know how I can get a list of the columns and the data types assigned to them in the Query Editor? It's not practical to do this manually, as nearly ~300 columns are extracted.
Any other ideas?
Thanks a lot.
I think the issue is that some date fields have the value 'NULL' in them along with dates, while the table schema shows these columns as non-nullable. I'd like to try changing all columns to nullable to get around this problem. Edit: I did a thorough check of the nullable property and that's not causing the issue; please see the new comment below.
Could somebody please let me know the M query to do so, or direct me to relevant examples? Thanks a lot.
I've come to learn that Power BI detects the data type based on the first 200 rows. I'm now seeing errors generated seemingly at random for either #1 or #2, so it's likely because the SQL result set is not ordered.
I can't order the set, because each row represents a set of info and there's no guarantee ordering will always work. Is there a way to pull the data types from the SQL INFORMATION_SCHEMA and impose them on the Power BI table using an M query?
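For reference, the name/type pairs themselves are straightforward to obtain on the SQL side; a query along these lines (the table name is a placeholder) returns one row per column:

```sql
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'tblexample'
ORDER BY ORDINAL_POSITION;
```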
I am using the following function to change the data types of the columns:
(table as table, schema as table) as table =>
let
    existingColumns = Table.ColumnNames(table),
    removeMissing = Table.SelectRows(schema, each List.Contains(existingColumns, [Name])),
    primativeTransforms = Table.ToRows(removeMissing),
    changedPrimatives = Table.TransformColumnTypes(table, primativeTransforms)
in
    changedPrimatives
The schema table has the column names paired with the value "text" only, for testing purposes.
removeMissing produces the required table, but I am getting an error at primativeTransforms:
An error occurred in the '' query. Expression.Error: We cannot convert the value "text" to type Type.
I am new to the M language. Could I get some help please? Thanks.
Hi @fess440 ,
According to the error message, it seems to be a data type conversion error... Could you please provide the code from the Advanced Editor for the failing query, along with related screenshots of the error? Thank you.
In addition, please refer to the content in the following links and check whether it helps you resolve the problem.
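In the meantime, note that Table.TransformColumnTypes expects real type values (e.g. type text, Int64.Type), not the string "text". If your schema table stores type names as text, a sketch like the following (with hypothetical column names and sample values) converts them first via Expression.Evaluate against the #shared environment:

```m
let
    // Hypothetical schema: column names paired with type-name strings
    schema = #table({"Name", "TypeName"}, {{"Col1", "type text"}, {"Col2", "Int64.Type"}}),
    // Evaluate each type-name string into an actual M type value
    withTypes = Table.TransformColumns(schema,
        {"TypeName", each Expression.Evaluate(_, #shared), type type}),
    // Rows are now valid {name, type} pairs for Table.TransformColumnTypes
    transforms = Table.ToRows(withTypes)
in
    transforms
```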
@yingyinr thanks for your response. To recap: I ran a dynamic SQL script from Power BI to pull the columns that had at least one non-empty, non-null data point from a set of ~600 columns, which yielded nearly ~300 columns. I had to do this for several entities, and the results differed for each one. Power BI was not detecting the data type accurately for some columns because, for example, some columns had null values and dates spread across many rows. After much trial and error, I finally resolved the issue by using an M query to change the data types of the columns, given the unique column names and data types from the database's INFORMATION_SCHEMA. I'm dealing with relatively "big data", and it would be wonderful to have an out-of-the-box solution for this, especially when manual intervention is impractical. Do you know of something automated that can achieve the same, or would you recommend starting an idea? Thanks again for your time. Cheers.
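To make that concrete for anyone who finds this later, here is a rough sketch of the shape of that M workaround. The type map is abbreviated, and the schema column names [Name]/[SqlType] are illustrative rather than my exact code:

```m
// Apply SQL-derived types to an imported table.
// schema is expected to have columns [Name] (column name) and
// [SqlType] (DATA_TYPE from INFORMATION_SCHEMA.COLUMNS).
(source as table, schema as table) as table =>
let
    // Abbreviated map from SQL type names to M types; extend as needed
    TypeMap = [int = Int64.Type, bigint = Int64.Type, bit = type logical,
               date = type date, datetime = type datetime,
               decimal = type number, float = type number, money = type number,
               char = type text, varchar = type text, nvarchar = type text],
    // Fall back to text for any SQL type not in the map
    ToMType = (sqlType as text) as type =>
        Record.FieldOrDefault(TypeMap, sqlType, type text),
    existingColumns = Table.ColumnNames(source),
    removeMissing = Table.SelectRows(schema, each List.Contains(existingColumns, [Name])),
    transforms = List.Transform(Table.ToRecords(removeMissing),
        each {[Name], ToMType([SqlType])}),
    typed = Table.TransformColumnTypes(source, transforms)
in
    typed
```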
Hi @fess440 ,
Did you enable the Automatic data type detection option (go to File -> Options and settings -> Options, then Data Load -> Type Detection)?