Comments:
How to read multiple CSV files from a folder in AWS S3?
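A minimal sketch of one way to do this with boto3 and pandas; the bucket name and the data/ prefix are placeholders, and list_objects_v2 returns at most 1000 keys per call, so a paginator would be needed for larger folders.

```python
import boto3
import pandas as pd

s3 = boto3.client("s3")

# List every object under the data/ prefix (placeholder bucket/prefix names)
response = s3.list_objects_v2(Bucket="my-bucket", Prefix="data/")

frames = []
for obj in response.get("Contents", []):
    key = obj["Key"]
    if key.endswith(".csv"):
        # Stream each CSV straight into pandas without saving it locally
        body = s3.get_object(Bucket="my-bucket", Key=key)["Body"]
        frames.append(pd.read_csv(body))

# Combine all the CSVs into a single DataFrame
all_data = pd.concat(frames, ignore_index=True)
```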
Thanks for this video
I came back to Krish Naik's videos after 2+ years and he again saved the day! I had been struggling to upload files to S3 for around 2 weeks (I work fully on Azure) and this video helped me solve the challenge in a few minutes!
Thanks!
AttributeError: 'S3' object has no attribute 'buckets' — I'm getting this error when I run your code.
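That error usually appears when .buckets is called on a boto3 client rather than a resource; a small sketch contrasting the two interfaces:

```python
import boto3

# The low-level client has no .buckets attribute; listing buckets looks like this:
client = boto3.client("s3")
for bucket in client.list_buckets()["Buckets"]:
    print(bucket["Name"])

# The higher-level resource interface is the one that exposes .buckets:
s3 = boto3.resource("s3")
for bucket in s3.buckets.all():
    print(bucket.name)
```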
Very nicely explained, sir.
Krish, I see AWS also does version control for files. Will it replace GitHub? I'm not sure, just asking: in companies, are GitHub and AWS S3 both used, or only one of them? Please advise.
Is it possible to write a file to S3 line by line in real time? I don't want to upload the entire file again and again.
I do not have the access key, but I want to read files from an S3 bucket.
Is there any other way?
Please suggest.
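If the bucket allows public read access, one option is to make unsigned (anonymous) requests; this sketch assumes a publicly readable bucket, with placeholder bucket and key names. A private bucket still needs credentials of some kind (keys, an IAM role, or a presigned URL someone shares with you).

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) requests only work for buckets that permit public reads
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

obj = s3.get_object(Bucket="some-public-bucket", Key="example.csv")
print(obj["Body"].read()[:200])
```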
Sometimes the code gets hidden behind your image while you explain.
Bro, could you help me extract a WAV file from AWS S3 and play it using Python?
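A rough sketch of one approach: download the object with boto3, then hand the local file to a playback library such as simpleaudio (one of several options). The bucket, key, and file names below are placeholders.

```python
import boto3
import simpleaudio as sa  # third-party playback library (pip install simpleaudio)

s3 = boto3.client("s3")

# Download the audio object to a local file first (placeholder bucket/key names)
s3.download_file("my-bucket", "audio/sample.wav", "sample.wav")

# Play the downloaded WAV and block until it finishes
wave_obj = sa.WaveObject.from_wave_file("sample.wav")
play_obj = wave_obj.play()
play_obj.wait_done()
```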
How can we make a write request for PDF files in S3?
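Uploading a PDF works the same way as any other object; a minimal sketch with placeholder file, bucket, and key names:

```python
import boto3

s3 = boto3.client("s3")

# upload_file streams a local PDF to S3 (placeholder names);
# ContentType is optional but helps browsers render the file correctly.
s3.upload_file(
    "report.pdf",
    "my-bucket",
    "documents/report.pdf",
    ExtraArgs={"ContentType": "application/pdf"},
)
```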
What is the permission name for only downloading files to local?
Can you create a video on how to load a .pkl model in a .py file for batch scoring?
Good material. Saved my time by following the exact steps. But there is one problem: when I manually add a .npy file and try to read it in a notebook instance, it gives a NoSuchKey error. Could you suggest what the problem might be?
What if we want to create a JSON object from an Excel file uploaded to S3?
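One way, sketched with pandas (openpyxl is needed for .xlsx files); all bucket and key names here are placeholders:

```python
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

# Read the Excel object into a DataFrame (placeholder bucket/key names)
body = s3.get_object(Bucket="my-bucket", Key="input/data.xlsx")["Body"].read()
df = pd.read_excel(io.BytesIO(body))

# Serialize the rows as JSON and write the result back to S3
json_str = df.to_json(orient="records")
s3.put_object(Bucket="my-bucket", Key="output/data.json", Body=json_str)
```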
Krish, please make a playlist on AWS cloud for data analysts/scientists. Please.
I had a lot of problems with the other videos I watched, and this one went smoothly. Thanks.
When I try to upload the files, I get this error:
TypeError: bucket_upload_file() missing 1 required positional argument: 'Key'
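That TypeError usually means upload_file was called with only the local filename; the resource form also needs the destination key, and the client form needs the bucket name as well. A sketch with placeholder names:

```python
import boto3

# Client form: upload_file(Filename, Bucket, Key)
s3 = boto3.client("s3")
s3.upload_file("local_data.csv", "my-bucket", "uploads/local_data.csv")

# Resource/Bucket form: upload_file(Filename, Key)
bucket = boto3.resource("s3").Bucket("my-bucket")
bucket.upload_file("local_data.csv", "uploads/local_data.csv")
```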
I have already written the PySpark code for converting an Excel file in an S3 bucket to CSV and it's working, but I want to convert multiple Excel files to CSV. Can we do that in PySpark?
Nice video. I have one question. I don't want to download the content from S3; I just want to know the content details, like the number of .txt and .py files within that bucket.
I am using: aws s3 ls bucket_name/ --recursive --human-readable --summarize
It shows the total size and number of objects, but I can't find the file counts with respect to their extensions.
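A small sketch of one way to count objects per extension with boto3; listing only reads metadata, so nothing is downloaded, and the bucket name is a placeholder.

```python
import os
from collections import Counter

import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

counts = Counter()
# Walk every key in the bucket page by page (placeholder bucket name)
for page in paginator.paginate(Bucket="my-bucket"):
    for obj in page.get("Contents", []):
        ext = os.path.splitext(obj["Key"])[1]  # e.g. ".txt", ".py", "" for folder keys
        counts[ext] += 1

print(counts)
```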
Thanks so much for sharing this information!
How do I read multiple Parquet files present inside a folder in an S3 bucket and merge them all into one dataframe locally? Please give me suggestions; that would be really helpful.
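One short option, assuming the pyarrow and s3fs packages are installed; pandas can then treat the whole prefix as a single dataset. The bucket and folder names are placeholders, and the same loop-and-concat pattern shown above for CSVs also works here with pd.read_parquet.

```python
import pandas as pd

# Reads every Parquet file under the prefix and concatenates them into one DataFrame
# (requires pyarrow and s3fs; placeholder bucket/folder names)
merged = pd.read_parquet("s3://my-bucket/parquet-data/")
print(merged.shape)
```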
Hey, all the concepts were very nicely presented. I have a situation where I want to upload my file to a folder which is under an S3 bucket. For example, we have main_bucket and under it we have Chield_Folder, and I want to upload to Chield_Folder. I am trying to set my bucket = main_bucket/Chield_Folder.
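S3 has no real folders, so the bucket argument should stay main_bucket and the folder name belongs in the object key instead. A sketch using the names from the comment (the local filename is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# The bucket is only "main_bucket"; the folder becomes a prefix inside the Key
s3.upload_file("my_file.csv", "main_bucket", "Chield_Folder/my_file.csv")
```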
Hi Krish, do share more videos on boto3.
Please upload videos on IAM and EC2.
Nice, sir. Please make videos on social marketing.
Hey Krish, I am having great trouble learning data structures and algorithms. Please guide me, sir.
Nice
Good one 👍👍
I want to learn more about AWS.
Wonderful session. It will help us a lot.
Someone please advise: who is this course for?