Timeline for answer to "Script Suddenly Stops Crawling Without Error or Exception" by undetected Selenium
Current License: CC BY-SA 4.0
Post Revisions
14 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Feb 18, 2021 at 20:02 | history edited | DisappointedByUnaccountableMod | CC BY-SA 4.0 | deleted 1 character in body |
| Oct 18, 2018 at 8:00 | history bounty awarded | CommunityBot | | |
| Oct 12, 2018 at 7:57 | history edited | undetected Selenium | CC BY-SA 4.0 | formatted text |
| Oct 12, 2018 at 2:13 | comment added | oldboy | | I actually fixed the issue on my own a number of days ago. I had updated my question and just updated it again for the sake of clarity. Yeah, there is a specific reason why I'm not traversing the website using a single instance but instead creating multiple instances. Though now that I think about it, creating new instances may not even be necessary given the `ff.back()` functionality (see the single-instance sketch below the table), but it's still certainly a lot more straightforward. I will give your answer a read when I get some free time! Thanks for trying to solve my issue <3 |
| Oct 11, 2018 at 14:34 | comment added | undetected Selenium | | @Anthony As per your observation, I have made some small modifications to my solution. I could have optimized your code and performed the same web scraping by opening the Firefox browser client only once and traversing through the various products. But to preserve your logic and innovation I have suggested the minimal changes required to get you through. Can you please try the updated solution and let me know the status? |
| Oct 11, 2018 at 14:33 | history edited | undetected Selenium | CC BY-SA 4.0 | Updated answer |
| Oct 10, 2018 at 20:53 | history edited | undetected Selenium | CC BY-SA 4.0 | code indentation |
| Oct 10, 2018 at 19:27 | history edited | undetected Selenium | CC BY-SA 4.0 | Updated code |
| Oct 10, 2018 at 16:15 | history edited | undetected Selenium | CC BY-SA 4.0 | Updated code |
| Oct 10, 2018 at 15:53 | comment added | oldboy | | Oh, maybe that was why I never reset it. `WebDriverWait(ff, 15).until(EC.text_to_be_present_in_element((By.PARTIAL_LINK_TEXT, 'Next→'), 'Next→')).click()` returns a bool object, which isn't clickable. You have an error in your code (see the expected-conditions sketch below the table). |
| Oct 10, 2018 at 15:37 | comment added | oldboy | | That `HTTPConnectionPool` error in my question is an outlier. The script works perfectly fine until it suddenly stops on page 9 without an error or exception. The only reason I set but didn't use `next_button` is that I was trying to troubleshoot this, thought it might have had something to do with it, and never reset it. The question/issue here is why it stops crawling/scraping once it finishes page 9. |
| Oct 10, 2018 at 11:26 | history edited | undetected Selenium | CC BY-SA 4.0 | Updated answer |
| Oct 10, 2018 at 10:43 | history edited | undetected Selenium | CC BY-SA 4.0 | Updated answer |
| Oct 10, 2018 at 10:23 | history answered | undetected Selenium | CC BY-SA 4.0 | |
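
Single-instance sketch: the Oct 12 and Oct 11 comments above both touch on reusing one browser instance and returning to the listing with `ff.back()` rather than opening a new Firefox instance per product. Below is a minimal sketch of that pattern, assuming a hypothetical listing URL, an `a.product` selector, and a placeholder scraping step; none of this is code from the original question or answer.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

ff = webdriver.Firefox()          # one browser instance for the whole crawl
wait = WebDriverWait(ff, 15)
ff.get("https://example.com/products?page=1")  # hypothetical listing page

# Count the product links once, then re-locate on every pass:
# element references go stale after navigating away from the listing.
count = len(wait.until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, "a.product"))
))
for i in range(count):
    links = wait.until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "a.product"))
    )
    links[i].click()              # open the i-th product page
    # ... scrape the product details here ...
    ff.back()                     # return to the listing in the same session

ff.quit()
```

Compared with spawning a fresh browser per product, this keeps one session (cookies, cache) alive and avoids the startup cost of each new instance, which is the optimization the Oct 11 comment alludes to.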
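Expected-conditions sketch: the Oct 10, 2018 at 15:53 comment points at a real distinction in Selenium's expected conditions. `EC.text_to_be_present_in_element()` resolves to a boolean, so chaining `.click()` onto `until()` raises `AttributeError`, whereas `EC.element_to_be_clickable()` resolves to the `WebElement` itself. A short sketch follows, reusing the `ff` driver name and the 'Next→' link text from the comment; the page URL is a placeholder.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

ff = webdriver.Firefox()
ff.get("https://example.com/products?page=1")  # hypothetical listing page

# Broken: text_to_be_present_in_element() returns True once the text appears,
# so until() hands back a bool and the chained .click() raises AttributeError.
# WebDriverWait(ff, 15).until(
#     EC.text_to_be_present_in_element((By.PARTIAL_LINK_TEXT, 'Next→'), 'Next→')
# ).click()

# Working: element_to_be_clickable() returns the WebElement once it is
# visible and enabled, so the .click() chain is valid.
WebDriverWait(ff, 15).until(
    EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, 'Next→'))
).click()
```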