Comments:
thank you so much, super helpful
Nice
Exceptionally outstanding content. Hats off to Alex. Just one question: instead of saying
column_data = table.find_all('tr')
can we say
rows = table.find_all('tr')
since, in your words, r is for row? I know we can name variables anything, but I want to be logically correct. Would 'rows' be logically incorrect?
And while fetching 'td', we would save it in a row_data variable. Would that be correct?
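For what it's worth, the renamed variables read naturally, since find_all('tr') really does return rows. A minimal sketch under that naming (the tiny inline table here is just an illustration, not the tutorial's Wikipedia page):

```python
from bs4 import BeautifulSoup

html = """
<table>
  <tr><th>Rank</th><th>Name</th></tr>
  <tr><td>1</td><td>Walmart</td></tr>
  <tr><td>2</td><td>Amazon</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')
table = soup.find('table')

# 'rows' instead of 'column_data': each <tr> is one row of the table
rows = table.find_all('tr')

for row in rows:
    # 'row_data' holds the <td> cells belonging to this single row
    row_data = row.find_all('td')
    values = [cell.text.strip() for cell in row_data]
    print(values)
```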
Thanks for the tutorial! I just found the channel and I like the way you explain it!
Heya... your videos are excellent and I have learned a lot from them... But in this web scraping one I got an error: "cannot set a row with mismatched columns". I have checked your other videos... Could you please help me with that?
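Not Alex here, but that pandas error usually means one scraped row has a different number of values than the DataFrame has columns (header rows or cells with colspans are common culprits). A hedged sketch of a guard that skips such rows; the variable names and sample data are placeholders, not the exact code from the video:

```python
import pandas as pd

columns = ['Rank', 'Name', 'Industry']
df = pd.DataFrame(columns=columns)

scraped_rows = [
    ['1', 'Walmart', 'Retail'],
    ['2', 'Amazon'],              # mismatched row: one value missing
    ['3', 'Exxon Mobil', 'Petroleum'],
]

for row_values in scraped_rows:
    # Appending a row whose length differs from the column count raises
    # "cannot set a row with mismatched columns", so check the length first.
    if len(row_values) == len(df.columns):
        df.loc[len(df)] = row_values

print(df)
```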
Amazing, thanks!
Thanks for this video, it helped me a lot. When I tried to pull the table headers, it only worked with 'tr', not 'th'. This might help others with the same issue.
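In case it helps anyone hitting the same thing: on many pages the header cells are <th> elements nested inside the first <tr>, so grabbing the first row and then pulling whatever cells it contains works either way. A minimal sketch (the tiny table is illustrative, not the actual page):

```python
from bs4 import BeautifulSoup

html = "<table><tr><th>Rank</th><th>Company</th></tr><tr><td>1</td><td>Walmart</td></tr></table>"
soup = BeautifulSoup(html, 'html.parser')
table = soup.find('table')

# The header cells live inside the first <tr>, so find the row first,
# then collect its cells whether they are <th> or <td>.
header_row = table.find('tr')
headers = [cell.text.strip() for cell in header_row.find_all(['th', 'td'])]
print(headers)  # ['Rank', 'Company']
```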
Excellent work, sir!!! I really appreciate your work. Believe me, you are a great mentor!
Hey Alex, thank you so much for your effort, it's a really super helpful series 🙏
You're the best ❤
Nice tutorial, but there are AI tools now like Kadoa that can do all of this for you. In the time it takes for you to watch this video, you can get an AI scraper up and running.
What benefits does scraping using Python have over Power BI or Power Query?
This was one of the greatest videos I have ever seen. Thank you very much! 🙃🙃🙃🙃🙃🙃😊
I finished the tutorial today and ended with awesome success. I faced some trouble since I used a different site, but yeah, my scraping is going well!
Thank you so much!
Please, where did the title come from in title.text?
Hi! How do you get to using this web interpreter? Are there tutorials? I was looking but couldn't find any, not knowing what it actually is.
Hi, what are you using to type in? How do I open that Python resources tab/page?
Where do I find that sheet to type in (the white sheet you're typing the code in)?
Hi guys, I'm here to ask about this: when I export the latest data to CSV, it goes from a beautiful visual in pandas to ugly data in CSV. How does Alex export the beautiful one to CSV at the end? Please help me.
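Not certain this is the asker's exact issue, but a common cause is exporting with the index and then opening the file in a plain text editor; the CSV itself is just comma-separated text, and it looks tabular again once opened in a spreadsheet or read back into pandas. A minimal sketch (the file name and sample data are made up):

```python
import pandas as pd

df = pd.DataFrame({
    'Rank': [1, 2],
    'Name': ['Walmart', 'Amazon'],
})

# index=False drops the 0,1,2,... row labels from the file,
# which is often what makes the raw CSV look messy.
df.to_csv('companies.csv', index=False)

# Reading it back shows the same tidy table pandas displayed.
print(pd.read_csv('companies.csv'))
```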
Just copy-paste the HTML into ChatGPT and ask it to do it for you, bro
Thank you so, so much for this video, Alex! It was super useful and easy to follow!
Thanks a lot, Alex, it helped me a lot to explore web scraping, and thanks for making this interesting and on point.
I saw all the videos in this playlist and am getting to this last one. I haven't felt so happy to learn in a while. Thank you for your work and help!
Bro, times have changed 😂... but you're teaching an old method 😢
Thank you for doing this Alex. I learned a lot and followed along while watching this series so that I could learn how to do this as well. Now all I need to do is practice, practice, practice.
Thank you Alex, I am new to web scraping and this video was helpful to me! Keep up the good work!
Excellent tutorial! What is this environment you are running?
Thank you so much! Very clear and well explained!
Hi!
Isn't it easier to do everything through "read_html"?
pd.read_html(url)[1]
1000th like from me 😅
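For the curious: pd.read_html parses every <table> on a page into a list of DataFrames, so the index picks one out (on the real page the right index may well not be 1). A minimal sketch on an HTML string; it assumes a parser backend such as lxml is installed, and StringIO avoids the deprecation warning newer pandas gives for literal strings:

```python
import pandas as pd
from io import StringIO

html = """
<table>
  <tr><th>Rank</th><th>Name</th></tr>
  <tr><td>1</td><td>Walmart</td></tr>
  <tr><td>2</td><td>Amazon</td></tr>
</table>
"""

# read_html returns a list with one DataFrame per <table> found
tables = pd.read_html(StringIO(html))
df = tables[0]
print(df)
```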
Hi Alex, I need help scraping some data from a website, but it's been super challenging. Would you be able to help? Once I have an example I should be able to run from there. Thanks a lot.
I need help, this isn't working for me...
Great video. Thank you...
I think the jQuery class is JavaScript, and the site also uses Bootstrap for their classes.
Thank you Alex Freberg ❤❤
Make videos on artificial intelligence
Hey Alex, can you do a Selenium scraping tutorial? It would help a lot for scraping dynamic websites.
Alex telling us he loves us 🥹
The real problem is accessing websites that detect me as a web scraper. Do you have a tutorial about this issue?
I tried this for the rows but it displays as one row:
['1', 'Walmart', 'Retail', '611,289', '6.7%', '2,100,000', 'Bentonville, Arkansas', '2', 'Amazon', 'Retail and Cloud Computing', '513,983', '9.4%', '1,540,000', 'Seattle, Washington', '3', 'Exxon Mobil', 'Petroleum industry', '413,680', '44.8%', '62,000', 'Spring, Texas', '4', 'A
How can I make it row by row?
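A flat list like that usually means find_all('td') was called on the whole table instead of on each row. Iterating row by row keeps the cells grouped; a minimal sketch (the tiny two-row table is illustrative, not the real Wikipedia page):

```python
from bs4 import BeautifulSoup

html = """
<table>
  <tr><td>1</td><td>Walmart</td><td>Retail</td></tr>
  <tr><td>2</td><td>Amazon</td><td>Retail and Cloud Computing</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')
table = soup.find('table')

# Calling find_all('td') on the table flattens every cell into one list;
# calling it on each <tr> instead keeps one list of cells per row.
all_rows = []
for tr in table.find_all('tr'):
    cells = [td.text.strip() for td in tr.find_all('td')]
    all_rows.append(cells)

print(all_rows)
```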
Thanks Alex
I found out why the class names were different. It seems to be a common issue; someone explained it on Stack Overflow:
"The table class wikitable sortable jquery-tablesorter does not appear when navigating the website until the column is sorted. I was able to grab exactly one table by using the table class wikitable sortable."
On the "Revenue growth" column, it might be important to note the sign of the growth (positive or negative).
I found a workaround for this:

revenueGrowth = []
table = soup.find('table', class_='wikitable')
data = table.find_all('td')
for x in data:
    # Wikipedia marks negative growth with a "Decrease" arrow image
    if 'Decrease2.svg.png' in str(x):
        decreasedGrowth = '-' + x.text.strip()
        revenueGrowth.append(decreasedGrowth)
Hey Alex,
It was a great video and I found it to be very helpful and interesting. I would like to ask one question: can we also do it for the second table, and can we get it into the same CSV file?
I appreciate you, I love you 😂❤
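Not Alex, but one common way to do this is to scrape both tables into DataFrames and concatenate them before exporting once. A hedged sketch assuming the two tables share the same columns (the inline HTML and file name are made up for illustration):

```python
import pandas as pd
from io import StringIO

html = """
<table><tr><th>Rank</th><th>Name</th></tr><tr><td>1</td><td>Walmart</td></tr></table>
<table><tr><th>Rank</th><th>Name</th></tr><tr><td>1</td><td>Apple</td></tr></table>
"""

# read_html picks up both tables on the page as separate DataFrames
first, second = pd.read_html(StringIO(html))

# Stack them into one DataFrame, then export a single CSV
combined = pd.concat([first, second], ignore_index=True)
combined.to_csv('both_tables.csv', index=False)
print(combined)
```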
I just have one comment: you are the best, Alex 🤩
Thanks Alex for making me a great value to the world
Hi Alex,
Can you try scraping the Y Combinator directory site? It's proven to be very difficult.