Comments:
"parse_page(html)" from lesson 2 suddenly became "parse_search_page(html: HTMLParser):" in lesson 3 without any explanation. Anyway great tutorial as well as a whole series. Very good for beginners.
Nice job! Is there a way to put this whole thing in a cron job or scheduler to run intermittently?
Beautiful job. How can I find the code?
Can you stop smashing your keyboard?
Man, your videos are great. Your videos on Playwright have really been helpful. I was able to follow them and then write my own Playwright script for my project, until I got stuck dealing with dynamic pop-ups. I can't get past those. I'm supposed to enter a piece of data into the pop-ups (not captcha stuff), but I just can't make it work. It would help if you could cover dealing with dynamic pop-ups. Thanks.
If we combine Playwright with this, can we basically scrape any dynamic site (e.g. social media websites)?
Thank you so much John, this series is very fulfilling.
very very good
Excellent video series, much appreciated. Thank you for posting.
Also, kindly add a product URL column for each product and make it clickable when writing to the CSV.
Hi John, what is the fastest scraper for webpages with dynamically loaded content? I am using Selenium and find it very slow. Any other options?
This is very helpful! I appreciate it a lot.
Can you show how we can do this on websites where we have to log in first?
Thank you very much big John!
Based on one of your previous videos, I figured out how to get nested objects from tricky divs. Thank you!
Could you please advise how, in the function below, I can get not only <p> elements but also <h2>, <pre>, and <ul><li> elements?
Should it be some sort of pipe-like syntax, e.g. "div.article-formatted-body > div > p | h2 | pre | ul | li"?
def read_article(html):
    article_body = html.css("div.article-formatted-body > div > p")
    paragraphs = [i.text() for i in article_body]
    print(*paragraphs, sep='\n')
You are a genius, man, thank you very much.
Thank you! We need more of this sh!t,
and I hope for a series like this on BeautifulSoup too.
Thanks heaps for these, John. Can we please get the code into a pastebin or something? 🙏
Thank you, please continue this series!
Excellent video, great learning experience.
Hi, kindly make a video on Python with Selenium, because there is no updated ChromeDriver available and I don't know how to run my script now.
Thanks
Good morning ❤