
Jeanxyz
Post Prodigy

Pull AWS S3 data into Power BI

I need to import some CSV files from AWS S3 into Power BI. Below is the Python script I'm trying to use. I have received an API URL and token from a colleague; how can I complete the bucket and key variables?

********************************************************************************************

import io

import boto3
import pandas as pd

bucket = "<input bucket name>"
key = "<file name>"

# Download the object from S3 and read it into a DataFrame
s3 = boto3.client('s3')
f = s3.get_object(Bucket=bucket, Key=key)
shape = pd.read_csv(io.BytesIO(f['Body'].read()), header=0, index_col=0)

# Replace missing values with 0 (original had a syntax error: "lambdax" needs a space)
shape = shape.apply(lambda x: x.fillna(0))
print(shape)
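For anyone stuck on the same point: the Bucket and Key parameters map directly onto the object's S3 URI, s3://&lt;bucket&gt;/&lt;key&gt;. A minimal sketch with hypothetical names:

********************************************************************************************

# Hypothetical object: s3://sales-data/2024/april/orders.csv
bucket = "sales-data"          # hypothetical bucket name (first path segment of the URI)
key = "2024/april/orders.csv"  # hypothetical key (the rest of the path within the bucket)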

1 ACCEPTED SOLUTION

Thanks, @amitchandak. I will go through the tutorials when I get some time.

 

I talked to our AWS admin and made some changes to the Python script, and it works now (see the script below).

Limitations of the Python script connector:

1. This import mode is slow, so I can only import small CSV files. If there is even a small error in the CSV file, the import query fails. Is there a way to ignore CSV reading errors? (A possible approach is sketched after the script below.)

2. To import multiple files from the S3 bucket, I need to write a loop in the Python script. (A sketch of that loop also follows the script below.)

*****************************************

import io

import boto3
import pandas as pd

# Replace the placeholders with your own values
my_bucket_name = "xx"
my_file_path = "xx.csv"
my_key = "xx"      # AWS access key ID
my_secret = "xx"   # AWS secret access key

# Authenticate with explicit credentials instead of the default credential chain
session = boto3.Session(aws_access_key_id=my_key, aws_secret_access_key=my_secret)
s3Client = session.client("s3")

# Download the object and read it into a DataFrame
f = s3Client.get_object(Bucket=my_bucket_name, Key=my_file_path)
aws_data = pd.read_csv(io.BytesIO(f['Body'].read()), header=0, index_col=0)
print(aws_data)
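On limitation 1: pandas can skip malformed rows instead of failing the whole read. A minimal sketch, assuming pandas 1.3 or later (the on_bad_lines argument replaced the older error_bad_lines flag); the "xx" values are the same placeholders as above:

*****************************************

import io

import boto3
import pandas as pd

session = boto3.Session(aws_access_key_id="xx", aws_secret_access_key="xx")
s3Client = session.client("s3")

f = s3Client.get_object(Bucket="xx", Key="xx.csv")
# on_bad_lines="skip" drops malformed rows (e.g. too many fields)
# instead of raising a ParserError (pandas >= 1.3)
aws_data = pd.read_csv(io.BytesIO(f["Body"].read()), header=0, index_col=0,
                       on_bad_lines="skip")
print(aws_data)

On limitation 2: boto3 can list the objects in a bucket, so the loop can be driven by list_objects_v2 via a paginator (each call returns at most 1,000 keys). A sketch reusing the s3Client from above, with a hypothetical "reports/" prefix, that concatenates every CSV it finds into one table:

*****************************************

frames = []
paginator = s3Client.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="xx", Prefix="reports/"):  # hypothetical prefix
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".csv"):
            body = s3Client.get_object(Bucket="xx", Key=obj["Key"])["Body"]
            frames.append(pd.read_csv(io.BytesIO(body.read()), header=0))

# One combined DataFrame for Power BI to pick up
aws_data = pd.concat(frames, ignore_index=True)
print(aws_data)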

