
Yesterday, all my datasets were refreshing just fine, but today nothing happens.

- The automatic refresh hasn't done anything.

- When I press "Refresh now", a popup saying "preparing refresh" appears for a split second, and then nothing happens.

 

The only way to update the data is to refresh in the Desktop version and then upload the file again.

 

I have one dataset connecting to an on-premises database and one connecting to a SharePoint site.

- The on-premises connection usually works perfectly, but now nothing happens.

- The SharePoint connection usually brings up an error if I try to refresh, but now nothing happens either (I only tried it this time as a test, as it usually doesn't work automatically).

 

Can you please help troubleshoot? It seems like an issue on your online portal somehow.

 

Thanks in advance.

 

// Tommy

Status: Delivered

Bug in "Don't include in Report Refresh"

Status: Delivered
by ImkeF Super Contributor on 01-10-2017 07:58 AM

There's a bug in "Don't include in report refresh" as described here: http://community.powerbi.com/t5/Desktop/Disabling-quot-Include-in-Report-Refresh-quot-works/m-p/1017...

Unfortunately I didn't test correctly and that thread is now closed, but the bug is still there and prevents solutions to problems like this one:

http://community.powerbi.com/t5/Desktop/Big-file-needs-no-update/m-p/111341#U111341

 

If the query that shall not be refreshed is not referenced by any other query and is not part of a UNION operation in DAX, everything works fine; but if it is referenced in either of those ways, it will be loaded again anyway, as sketched below.
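
For anyone reproducing this, here is a minimal sketch of the failing setup in Power Query M; the file path and the [Amount] column are placeholders, not taken from the original report.

// Query1 - "Include in report refresh" is unchecked on this query
let
    Source = Csv.Document(File.Contents("C:\data\big_file.csv"), [Delimiter = ",", QuoteStyle = QuoteStyle.Csv]),
    Promoted = Table.PromoteHeaders(Source)
in
    Promoted

// Query2 - references Query1; because of the bug, refreshing Query2
// re-loads Query1 even though Query1 is excluded from refresh
let
    Source = Query1,
    Filtered = Table.SelectRows(Source, each [Amount] > 0)
in
    Filtered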

A lot of the use cases for this feature are of this kind, so this bug fix is really needed!


If you try to publish Power BI content without first checking that you are still signed in, Power BI Desktop hangs. You can't enter credentials, because Power BI is already attempting to upload the file.

Status: Delivered

NIGHTMARE!

I am building a couple of dashboards in Desktop and have put in 60 hours this week alone.

Everything was working fine yesterday; all dashboards use Excel files and tables as data sources.

Today: Windows 10 Insider Pro updated to build 15002 this morning.

Also updated to the January 2017 Power BI release.

Now every dashboard fails to update, with the message:

DataFormat.Error: The input couldn't be recognized as a valid Excel document.

Every file opens fine in Excel and is not corrupted; I even tried with new data and got the same message.

 

I've rolled back to an earlier Power BI version but get the same error; any other ideas are welcome.

Is there a setting in build 15002 that stops Excel files being recognised as Excel files?
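
One way to narrow it down is a blank query whose only step parses a single workbook; if this alone throws the same error, the workbook parsing itself is at fault rather than anything in the report (the path below is a placeholder):

let
    // If this lone step fails with "The input couldn't be recognized
    // as a valid Excel document", the parser is rejecting the file
    Source = Excel.Workbook(File.Contents("C:\data\test.xlsx"))
in
    Source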

 

Status: Accepted

We are trying to use Power BI to pull data from SAP BW, leveraging both BEx queries and cube connections. With each scenario we run into different issues, and we are trying to figure out whether this is (1) a Power BI limitation/issue with connecting to a warehouse, (2) an issue with the way Power BI communicates with SAP BW to pull its data, or (3) an SAP BW limitation/issue based on how SAP BW serves up data.
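
For reference, both scenarios go through the SAP BW connector, which in Power Query M is reached roughly like this (the server, system number, and client ID below are placeholders):

let
    // Navigate from here to a cube or BEx query in the catalog
    Source = SapBusinessWarehouse.Cubes("sapbw.example.com", "00", "100")
in
    Source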

 

Tested With Cube:

0FIGL_C10 - General Ledger (New) Transaction Figures

 

Data Load is very slow

Adding new columns and refreshing data seems to take a very long time. I don't know if this is an SAP warehouse issue, an OLAP issue, or a Power BI issue. I had our Basis team monitor our BW environment while I connected to a cube with all the columns selected. The cube contains 349,628 records, a count based on generating the SQL statement from the cube (in SAP BW RSA1) and executing it in SQL Server Management Studio. This call eventually failed with 'RFC_ABAP_RUNTIME_FAILURE: TSV_TNEW_PAGE_ALLOC_FAILED', which I was told was due to running out of memory. Prior to testing this I had Basis increase our memory footprint on the server in an attempt to rule out memory issues.

 

Column limitation (selecting too many columns), or possibly a combination of columns + rows

When I connect to a cube and select 1-4 fields, everything is pretty responsive, but as you add more and more fields the performance starts to suffer. In testing it looks like we can add up to 12 fields without a failure (though refresh times become super slow), but once we hit the 13th field we get an RFC_ABAP_RUNTIME_FAILURE: TIME_OUT.

 

MDX

I don't write a lot of MDX statements, so my statement could be the cause of the issue, but I run into the same problems. That makes sense, because I believe Power BI connects to SAP BW through the BW Open Analysis Interface OLAP BAPIs regardless of whether you go against a cube, a BEx query, or MDX.

 

This works:

SELECT
  NON EMPTY [Measures].MEMBERS ON COLUMNS,
  NON EMPTY
    [0FISCYEAR].Members *
    [0CO_AREA].Members *
    [0COMP_CODE].[LEVEL01].Members
  PROPERTIES [0COMP_CODE].[20COMP_CODE]
  ON ROWS
FROM [$0FIGL_C10]
WHERE ([0COMP_CODE].[1100], [0FISCYEAR].[K12016])

 

Doesn't work:

SELECT
  NON EMPTY [Measures].MEMBERS ON COLUMNS,
  NON EMPTY
    [0FISCYEAR].Members *
    [0CO_AREA].Members *
    [0COSTCENTER].Members *
    [0CHRT_ACCTS].Members *
    [0CO_AREA].Members *
    [0CURTYPE].Members *
    [0FUNC_AREA].Members *
    [0GL_ACCOUNT].Members *
    [0PROFIT_CTR].Members *
    [0SEGMENT].Members *
    [0VALUATION].Members *
    [0VERSION].Members *
    [0FISCPER].Members *
    [0FISCPER3].Members *
    [0FISCVARNT].Members *
    [0FISCYEAR].Members *
    [0COMP_CODE].[LEVEL01].Members
  PROPERTIES [0COMP_CODE].[20COMP_CODE]
  ON ROWS
FROM [$0FIGL_C10]
WHERE ([0COMP_CODE].[1100], [0FISCYEAR].[K12016])

 

Key and Text Values

The last thing I want to touch on is key values. Power BI does not appear to give you the ability to select or toggle between text and key values. Through MDX I was able to include the PROPERTIES section and pull in the key value (which was nice), but this should be standard functionality.

 

Tableau

I wanted to cover all my bases and rule out as much as I could to pinpoint the issue. So I used Tableau to connect to the same structure, as it's my understanding that it also utilizes the BW Open Analysis Interface OLAP BAPIs to query SAP BW. I'll point out that I don't really use Tableau, but it was a tool that was available to use as a test. Once again I had Basis monitor the SAP environment to help track down any issues. Within seconds Tableau had pulled in the metadata and I was able to create a sheet. Tableau appears to provide a live query connection to the cube, whereas Power BI appears to try to load the data first, which may also be contributing to the performance issues. Again, I just wanted to test with another tool to rule out the SAP environment.

 

 

 

With SAP being our ERP system, we need the ability to query our data warehouse effectively and provide the user community with self-service reporting/analytics. The issues listed above make this a very difficult process to move forward with. Any ideas or suggestions would be greatly appreciated. We love Power BI, the users who have seen our demos love what it can do, and I feel like we are always installing it on someone's machine.

 

Thoughts?

Power Query should not wrap simple xml elements in a table

Status: Delivered
by fransbus Occasional Visitor on 02-10-2017 04:16 AM - edited 02-10-2017 04:57 AM

Import this xml into PBI Desktop:

 

<?xml version="1.0" encoding="utf-8"?>
<table>
	<row>
		<AllValues>111</AllValues>
		<AllEmpty></AllEmpty>
		<Mixed1>abcd</Mixed1>
		<Mixed2>abcd</Mixed2>
		<Mixed3></Mixed3>
	</row>
	<row>
		<AllValues>222</AllValues>
		<AllEmpty></AllEmpty>
		<Mixed1></Mixed1>
	</row>
	<row>
		<AllValues>333</AllValues>
		<AllEmpty></AllEmpty>
		<Mixed1></Mixed1>
	</row>
</table>

 

In Query editor the result is:

 

[screenshot: Knipsel.PNG, the query editor result]

 

In the UI you cannot expand the columns AllEmpty and Mixed3, since PQ cannot find any columns in the empty tables.

Note the difference between Mixed1 and Mixed2: in row 1 they both have the same value 'abcd', yet in Mixed1 this value is wrapped in a table.

 

Solution: Power Query should never wrap simple XML elements in a table. An empty XML element should always transform to a null, and so should a missing element.

 

Workaround: use a function like the one Chris Webb describes here (which fails for AllEmpty above, by the way).
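
Until that happens, here is a rough M sketch of an unwrap step in the spirit of that workaround, written so that it also handles the all-empty case above. "Element:Text" is the column name Xml.Tables generates for text content; the file path is a placeholder:

let
    Source = Xml.Tables(File.Contents("C:\data\sample.xml")),
    Rows = Source{0}[Table],
    // Unwrap a cell that arrived as a nested table: take its single
    // "Element:Text" value when present, otherwise return null
    Unwrap = (cell as any) as any =>
        if cell is table then
            if Table.HasColumns(cell, "Element:Text") and Table.RowCount(cell) > 0
            then cell{0}[#"Element:Text"]
            else null
        else cell,
    Result = Table.TransformColumns(Rows, List.Transform(Table.ColumnNames(Rows), each {_, Unwrap}))
in
    Result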

 

Solving this issue would make processing XML files in Power BI and Excel a lot easier!

 

Thx.


Hello,

In Power BI Desktop (version 2.41.4581.361, 32-bit, November 2016) I want to connect to an Oracle database via ODBC.

I have an ODBC data source set up, which has so far worked fine in MS Access 2016 and MS Excel 2016. In Power BI, however, I get an error.

At first the connection seems to work fine: I get a list of all tables in the database as expected. But when I try to load a table, the following error message comes up:

 

DataSource.Error: ODBC: ERROR [07006] [Oracle][ODBC]Restricted data type attribute violation.

Details:

   DataSourceKind=Odbc

   DataSourcePath=dsn=LAWRPT90

   OdbcErrors=Table

 

However, if I put something like SELECT * FROM TABLE_NAME into the optional SQL statement field in the settings, I can access the specific table. This could be a workaround for me, but it's a very inconvenient approach.
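
For reference, the same workaround expressed directly in Power Query M looks roughly like this (TABLE_NAME stands for whichever table you need):

let
    // Passing explicit SQL skips the default table navigation step
    // that triggers the "Restricted data type attribute violation"
    Source = Odbc.Query("dsn=LAWRPT90", "SELECT * FROM TABLE_NAME")
in
    Source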

 

I have the same issue connecting in Power Query or Power Pivot in Excel 2016.

 

Is there a way to fix this problem?

 

Thanks, Danil

Status: Needs Info

Data Labels length in chart

Status: Delivered
by JamesYuan New Member on 01-05-2017 01:21 PM

Recently I built a report that shows an [HH.MM] data label on a bar chart, and found that the data labels on some bars don't match the data source. For example, one value is 7.23 in the data source, but the data label shows 7.299999996. I've checked the backend and confirmed the definition of the data column; even in the dataset, the data and column definition show only 2 decimals, but the data label is wrong for some bars. After changing the column definition to show 1 decimal, the wrong label jumps to another bar, which is very interesting. Here is an example:

 

[screenshot: bar.png, showing the incorrect data label]

 

After contacting support, they gave me a workaround: change the data label property to set the "decimal" value from Auto to 2, and it then shows correctly.
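
An alternative at the query level is to round the column before it ever reaches the visual, so the label cannot pick up floating-point noise. A minimal sketch; the file path and the "Duration" column name are placeholders for the [HH.MM] column in the model:

let
    Source = Excel.Workbook(File.Contents("C:\data\source.xlsx")){0}[Data],
    // Fix the value itself to 2 decimals rather than only its display
    Rounded = Table.TransformColumns(Source, {{"Duration", each Number.Round(_, 2), type number}})
in
    Rounded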

 

However, why it picks up wrong data from the data source in the first place is still a problem.

 

I'm posting the issue here in the hope that it helps others fix similar issues.


Change the currency symbol

Status: New
by thiagozaiden Frequent Visitor on 11-10-2016 08:42 AM

Couldn't find the Brazilian Real currency symbol, R$.


It seems I've encountered a bug when using R in the query editor in Power BI Desktop.

1. I have an .xlsx in which some cells end with "…"; take the following one as an example:

1  text       100
2  vertical…  100
3  text       100
4  vertical…  100
5  text       100
6  text       100

 

2. Add an R script in the query editor to process the table, containing nothing but "ResultSet <- dataset".

3. After processing, the ResultSet is disordered, like this:

[screenshot: pbi.png, showing the disordered ResultSet]

 

4. If I remove all the "…" characters, the data is all good.

 

I've done the same with the table in RStudio and it was OK, so I suspect this is a bug specific to the Power BI R script step.


Data Refresh Error

Status: New
by bsims Occasional Visitor yesterday

Can someone please assist? I'm new to this, but the first dataset I've tried to refresh gives me an error. It's a CSV dataset stored on one of our servers.

 

Thank you,
Bobby

 

[screenshot: Capture.JPG, showing the refresh error]


I'm running some tests to get familiar with the software and encountered an error which I suspect is a bug in how data is batch-loaded into the table visualization.

 

The setup I'm using is a DirectQuery data source to an in-house Oracle database, using my own query. I've then added a table visualization containing 2 columns of simple data (item number, item description). The data loads into the table fine, but when I scroll down past a certain number of records the visualization breaks with "Couldn't load the data for this visual", followed by an Oracle error: "ORA-00904: "t0"."DESCRIPTION": invalid identifier. The exception was raised by the IDbCommand interface."
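
For context, the source is set up roughly like this in M, with a native query behind the DirectQuery model (the host name is a placeholder and the column list is abbreviated; the full query appears inside the trace below):

let
    // "dbhost" is a placeholder; the real query selects more columns
    Source = Oracle.Database("dbhost", [Query = "select contract, part_no, description from ifsapp.inventory_part"])
in
    Source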

 

This had me quite perplexed. The data is fine when viewed in the query editor, and the query runs fine directly against the database in SQL Developer. Clearly something was going on with the query rewriting done by Power BI (t0 is not an alias I've used in my query).

 

So I went sleuthing, found the FrownSnapShot for this instance, and opened the FlightRecorderCurrent.trc file in Notepad++. Not being an expert, I couldn't make sense of a lot of it, and parts of it looked like machine code, but at the end I found the following query:

 

SELECT * FROM (
SELECT
"t0"."CONTRACT","t0"."PART_NO","t0"."DESCRIPTION"
FROM
(
(select
  inventory_part.contract,
  inventory_part.part_no,
  inventory_part.description,
  inventory_part.skc_no,
  inventory_part.type_designation,
  inventory_part.DIM_QUALITY,
  inventory_part.note_text,
  inventory_part.unit_meas,  
  inventory_part.accounting_group as commoditygroup,
  inventory_part.create_date,
  inventory_part.c_manuf as manufacturer,
  inventory_part.part_no||contract as itemkey
from ifsapp.inventory_part)
)
 "t0"
WHERE
(("t0"."CONTRACT" > '001D') OR
(("t0"."CONTRACT" = '001D') AND
(("t0"."PART_NO" > '3093716') OR
(("t0"."PART_NO" = '3093716') AND ((SELECT (CASE WHEN Op1 IS NULL THEN CASE WHEN Op2 >= ' ' THEN 1 ELSE 0 END ELSE CASE WHEN Op1 < Op2 THEN 1 ELSE 0 END END) FROM (SELECT "t0"."DESCRIPTION" AS Op1, 'SKJORTA' AS Op2 FROM DUAL) AuxTable) = 0))
)
)
)

GROUP BY "t0"."CONTRACT","t0"."PART_NO","t0"."DESCRIPTION" ) WHERE ROWNUM <= 1000001+

 

Running this query in SQL Developer, I managed to recreate the ORA-00904, and it seems to be caused by the fourth row of the WHERE clause:

 

(("t0"."PART_NO" = '3093716') AND ((SELECT (CASE WHEN Op1 IS NULL THEN CASE WHEN Op2 >= ' ' THEN 1 ELSE 0 END ELSE CASE WHEN Op1 < Op2 THEN 1 ELSE 0 END END) FROM (SELECT "t0"."DESCRIPTION" AS Op1, 'SKJORTA' AS Op2 FROM DUAL) AuxTable) = 0))

 

The problem, I suspect, is that this code embeds a subquery (the SELECT ... FROM DUAL shown above) at a level where the alias "t0" is no longer in scope for Oracle. From what I can tell, the purpose of this query is to load additional rows starting from the last loaded record (i.e. the next data batch), which would explain why I see the error when scrolling through the records rather than immediately.

 

I suspect this might be caused by me using the software in an unaggregated manner (which, judging by those GROUP BYs, it may not be set up for), but since this seems to me to be a bug, I figured I'd submit it.

 

I managed to work around it by applying a report-level filter to reduce the record count fetched by the table visualization.


Hi all,

 

First of all, I would like to say that it is a pleasure to be a member of this community.

 

I have a problem with publishing my report. It is based on a single Excel file which is stored locally, and it has many HD backgrounds and several HQ logos, which led to a .pbix file size of approximately 125 MB. When I try to publish the file, I get the following message:

[screenshot: 123.JPG, showing the publish error message]

 

 

I know that several types of data modelling errors can cause this message to appear, but the well-known solutions did not work for me. Does anybody have suggestions or ideas about what could be wrong?

 

PS: I can upload the .pbix file to the cloud or a file-sharing site if you want to take a closer look at it.

 

Thank you in advance and kind regards,

Djordje

 

 

Status: Needs Info

Hello

 

I have tried integrating Power BI Embedded using DirectQuery. There are a few performance issues I wanted to get clarified.

 

I am using Azure SQL Data Warehouse as the data source, with about 3 lakh (300,000) rows. The visualisations are rendered from a single table, and there are a few measures along with a couple of slicers in the report. Whenever a filter is applied from a slicer, it takes a minimum of 15 seconds to refresh the visualisations. Also, since this is DirectQuery, whenever the data is refreshed it takes almost a minute to render the visualisations.

 

Could you let me know where I can improve the performance?

 

 

Thanks in advance!

 

 


I have merged 2 tables with a full outer join on the Address field. When I check the join column from the 2nd table, it shows null (suggesting that no record was matched). However, when I highlight the row and view the details in the record selection pane, the column is populated correctly.

Also, when I filter on null values in the 2nd column, the record does not appear, even though it shows null for that record. The 2nd joined column does not appear as null when exported to a table either. (I've tried restarting, refreshing, redoing the merge, and renaming the workbook, and each time this behaviour is still evident.)

In the image below you can see the record is highlighted and indicating null, but displaying an address value in the record selection pane at the bottom.
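
For anyone reproducing this, the merge described is the standard full outer pattern below (table and column names are stand-ins for mine):

let
    // Full outer join on Address, then expand the joined table's
    // Address column to inspect the match result
    Merged = Table.NestedJoin(Table1, {"Address"}, Table2, {"Address"}, "Table2", JoinKind.FullOuter),
    Expanded = Table.ExpandTableColumn(Merged, "Table2", {"Address"}, {"Table2.Address"})
in
    Expanded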

 

 

Status: Needs Info

Each time a change is made in the data preparation process, the application must re-query the original data source. This is very slow. The whole point of pulling a local subset for processing is so that you can make several changes locally without querying the server each time. There should, at the very least, be an option to stop querying until you are finished making changes, so that you can make them quickly. Depending on the query, it can take hours to complete tasks like renaming columns or filtering, when you don't need to fetch new preview data at each incremental change.
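
A stopgap some of us use in the meantime is to cap the preview while editing and remove the cap before the final load; a sketch, assuming a SQL source (all names below are placeholders):

let
    Source = Sql.Database("myserver", "mydb"),
    Big = Source{[Schema = "dbo", Item = "BigTable"]}[Data],
    // Shape the query against a small slice; delete this step when
    // finished so the full table is loaded
    Preview = Table.FirstN(Big, 1000)
in
    Preview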

Status: Delivered

Power users of other applications use ctrl -> and ctrl <- to move around code. In Power BI, however, unlike other applications, the selection always stops before the expected term or label.

 

{} marks the selection (what gets selected)

| marks the cursor

ctrl <-    ==    [{'Copy Field']|}

This selects from ] to the beginning of the word, forcing the user to move over one more place to reach the beginning of the term.

ctrl ->    ==    {|['Copy Field'}]

This selects from [ to just before the end of the term, again forcing the user to move over one more place.

ctrl <- two times    ==    'Table {one'['Copy Field']}|

Pressed twice, it selects from the cursor back through ['Copy Field'] and stops partway into the table name rather than at the start of the reference.

 

When you use hotkeys you develop muscle memory for selecting terms. Having a unique selection protocol in each application causes errors and frustration.

Status: Needs Info

This morning I set up a data connection to a SharePoint list, and when I look at the data table I can see that several of my columns have been renamed:

 

16/17 Delivery Status now appears to be called DeliveryStatusInDelivery

17/18 Delivery Status is now DeliveryStatusInPlanning

18/19 Delivery Status is now DeliveryStatusPipeline

 

I also have a new column called ID.1, which is a replica of the ID column... I didn't create it.

 

I tried to do the same thing in Excel and got the same behaviour, whereas when I made data connections previously (a couple of months ago) the column names were not changed.
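
Until the cause is found, the renames can be undone inside the query. A sketch in M, where ListData stands for the list table as the connector loads it (the mapping mirrors the renames listed above):

let
    Renamed = Table.RenameColumns(ListData, {
        {"DeliveryStatusInDelivery", "16/17 Delivery Status"},
        {"DeliveryStatusInPlanning", "17/18 Delivery Status"},
        {"DeliveryStatusPipeline", "18/19 Delivery Status"}}),
    // Drop the duplicate ID column that appeared uninvited
    Cleaned = Table.RemoveColumns(Renamed, {"ID.1"})
in
    Cleaned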

 

Any idea why this is happening?

 

Status: Accepted

Error while creating new group

Status: Needs Info
by Sukhada Occasional Visitor on 03-22-2017 01:12 PM - edited 03-22-2017 01:14 PM

Hi All,

While creating a group, we are facing the error below. We have the Pro version.

Something went wrong
 
Couldn't create the group
Please try again later or contact support. If you contact support, please provide these details.

 

Activity ID: 7fd266b9-43ff-ca3d-1021-c3f3743a58c0
Request ID: ecfc566a-d648-3a1c-871c-36221019c64e
Correlation ID: 6e75d8ae-e499-37cb-1b74-85af80f2bab7
Status code: 500
Time: Wed Mar 22 2017 16:08:45 GMT-0400 (Eastern Daylight Time)
Version: 13.0.1700.1439

 

I have seen a couple of posts with the same error but could not find any solution.

Thanks in advance.


Hi,

 

I have found an issue with commas in Excel text fields. I imported a column from an Excel sheet and transformed it to be distinct (Remove Rows / Remove Duplicates). However, when I tried to create a relationship to another table, I got the message: "You can't create a relationship between these two columns, because one of the columns must have unique values." Keep in mind that I imported one column and deduplicated it.

 

I created a table visual and exported it to CSV. I then imported the previously exported CSV back into Power BI, and it appeared to have 2 (!) columns. This was caused by commas in the text fields: an additional column was created containing the data from after the comma, and the comma itself was no longer in the data.

 

Looking at the above steps, one could deduce that the Power BI import does something strange here. It shows the imported Excel column as one column, but in the engine something else happens. Remove Duplicates seems to apply the distinct over the 2 columns, which makes it possible to have duplicate data in the first/main column. That column is then used to create a relationship to another table, and the relationship fails due to the duplicates.
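
One way to test that deduction from the import side is to re-import the exported CSV with an explicit quote style; if the splitting still happens, the export is writing unquoted commas rather than the import mis-parsing quoted ones (the path below is a placeholder):

let
    // QuoteStyle.Csv keeps commas inside quoted fields in one column
    Source = Csv.Document(File.Contents("C:\data\export.csv"), [Delimiter = ",", QuoteStyle = QuoteStyle.Csv]),
    Promoted = Table.PromoteHeaders(Source)
in
    Promoted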

 

I'm not sure if this is a known problem; I couldn't find anything similar.

 

Please let me know if you need a sample of the data.

 

Bye,

 

Rudi

Status: Needs Info