B_Real
Advocate IV

Get data from CSV file reads numeric columns as text

I'm doing:

 

Get Data -> Text/CSV 

Data Type Detection: Based on entire dataset

  

The csv file was created in R using the command:

 

write.csv(dataframe, file = "./outputs/financials.csv", row.names = F)

 

There are a bunch of columns in the csv file that contain only numbers, but Power BI is still reading them in as text. It looks like PBI is converting large values into scientific notation, e.g.:

 

600000 becomes 6e+05

 

And it is therefore treating the whole column as text. In fact it only seems to affect numbers that are exact multiples of a hundred thousand, e.g. 100000 becomes 1e+05, 200000 becomes 2e+05, 300000 becomes 3e+05. All other values in the column stay as plain numbers.

 

A similar thing happens when I open the csv file in Excel: 600000 appears as 6e+05, but only that cell is formatted as 'Scientific' whereas the rest of the column is 'General'.

 

What the deuce is going on?

 

Edit: I know I can just go into the Query Editor and change the column types, but this is not a practical solution when there are over 90 columns that need correcting every time the csv file is refreshed.
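For what it's worth, this pattern matches R's default number formatting: as.character(600000) returns "6e+05" because the scientific form is shorter than the fixed form, and write.csv inherits that behaviour. Raising the scipen penalty biases R toward fixed notation. A sketch of a fix on the R side, under that assumption (the data frame below is made up for illustration):

```r
# Assumption: the scientific notation originates in R, not Power BI.
# scipen is a penalty against scientific notation; a large value means
# fixed notation is used unless it would be absurdly long.
options(scipen = 999)

dataframe <- data.frame(amount = c(100000, 200000, 600000, 123456))

# With scipen raised, round values are written as 100000, 200000, 600000
# instead of 1e+05, 2e+05, 6e+05, so Power BI can detect the column as numeric.
write.csv(dataframe, file = "./outputs/financials.csv", row.names = FALSE)

# Alternative without changing global options: format numeric columns explicitly.
# dataframe[] <- lapply(dataframe, function(x)
#   if (is.numeric(x)) format(x, scientific = FALSE, trim = TRUE) else x)
```

Either approach leaves non-numeric columns untouched; the options() route is global for the R session, while the format() route converts the columns to character strings in a fixed notation before writing.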

2 REPLIES
Greg_Deckler
Super User

Hmm, interesting. I was not able to reproduce with a file like:

 

ID,Val1,Val2,Val3,Val4
1,1000000,1000000,1000000,1000000
2,6000000,1000000,1000000,1000000
3,600000000,1000000,1000000,1000000
4,1000000,1000000,1000000,1000000

 

You can select multiple columns and then Change Type. You can also use the Advanced Editor; the relevant step is the #"Changed Type" line in the query below.

 

let
    Source = Csv.Document(File.Contents("C:\temp\powerbi\largeNumbers.csv"),[Delimiter=",", Columns=5, Encoding=1252, QuoteStyle=QuoteStyle.None]),
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
    #"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",{{"ID", Int64.Type}, {"Val1", Int64.Type}, {"Val2", Int64.Type}, {"Val3", Int64.Type}, {"Val4", Int64.Type}})
in
    #"Changed Type"

If the column names are consistent, or in the same position (do it before promoting headers), then you could just copy and insert that line of code as appropriate. What version are you on? I seem to remember this sort of thing happening to me in the past, but I can't reproduce it on the current version.



 

Thanks Greg. I have a feeling the problem is in the csv file that's created by R but I can't see anything obvious. It's fairly big, c. 90k rows and 90 columns.

 

I have, for now, created a custom function in the query editor that transforms all offending text columns to decimal.  Not ideal, but at least I can now repeatedly update the csv file without incurring the wrong data types.
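A function along those lines might look like the following M sketch (the function name is made up, and it assumes the offending columns come through typed as text; adjust the target type to Int64.Type if the values are all whole numbers):

```
// Hypothetical sketch: re-type every text column of a table as decimal number.
let
    ConvertTextColumnsToNumber = (tbl as table) as table =>
    let
        // Find the columns currently typed as text (including nullable text)
        TextColumns = Table.ColumnsOfType(tbl, {type text, type nullable text}),
        // Build a {name, type} pair per column and apply them all in one step
        Retyped = Table.TransformColumnTypes(
            tbl,
            List.Transform(TextColumns, each {_, type number})
        )
    in
        Retyped
in
    ConvertTextColumnsToNumber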

 

I'm on the following release:

 

December 2018

Product Version:
2.65.5313.841 (18.12) (x64)

OS Version:
Microsoft Windows NT 10.0.17134.0 (x64 en-US)

 

 
