Solution: The login action for this site is action="user/ajax/login", so that path is appended to the site's base URL to build the login endpoint (the action attribute can be found by searching the page source with Ctrl+F for "action"). url is the page that will actually be scraped. with requests.Session() as s: keeps the session's cookies between requests, which is what allows consistent scraping of pages behind the login. res = s.post(loginurl, data=payload) posts the credentials to the login URL, logging the session in; the s.get(url) that follows then fetches the account page with those cookies. With that response in hand, BeautifulSoup can parse the HTML of the account page ("html.parser" and "lxml" both work here). If the content you need sits inside an iframe, it's doubtful requests alone can grab it, so I recommend Selenium, preferably with Firefox.
import requests
from bs4 import BeautifulSoup

payload = {"username": "?????", "password": "?????"}
url = "https://9anime.to/user/watchlist"
loginurl = "https://9anime.to/user/ajax/login"

with requests.Session() as s:
    res = s.post(loginurl, data=payload)  # log in; the session stores the cookies
    res = s.get(url)                      # fetch the account page with those cookies
    soup = BeautifulSoup(res.text, "html.parser")
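To see what the parsed soup gives you, here is a minimal, self-contained sketch of the same BeautifulSoup calls run on a static HTML snippet (the markup and the /watch/... links are made up for illustration; the real watchlist page will have its own structure):

```python
from bs4 import BeautifulSoup

# Stand-in for res.text; the real watchlist HTML will differ.
html = "<div class='watchlist'><a href='/watch/1'>Show 1</a><a href='/watch/2'>Show 2</a></div>"

soup = BeautifulSoup(html, "html.parser")  # "lxml" would give the same result here
links = [a["href"] for a in soup.find_all("a")]
print(links)  # -> ['/watch/1', '/watch/2']
```

Once you know the tags/classes the real page uses, swap them into find_all accordingly.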
[Windows 10] To install Selenium: pip3 install selenium. Drivers: (Chrome: https://sites.google.com/a/chromium.org/chromedriver/downloads) (Firefox: https://github.com/mozilla/geckodriver/releases). To put "geckodriver" on PATH for Firefox Selenium: Control Panel > "Environment Variables" > "Path" > "New" > enter the folder containing geckodriver > OK. Then you're all set.
Also, in order to grab the iframes when using Selenium, try import time and time.sleep(5) after get()-ing the URL with your driver. This gives the site extra time to load those iframes.
Example:
import time
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Firefox() # The WebDriver for this script
driver.get("https://www.google.com/")
time.sleep(5) # Extra time for the iframe(s) to load
soup = BeautifulSoup(driver.page_source, "lxml")
print(soup.prettify()) # To see full HTML content
print(soup.find_all("iframe")) # Finds all iframes
print(soup.find("iframe")["src"]) # If you need the 'src' from within the first iframe.
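The iframe lookups themselves can be checked offline. Here is a minimal sketch on a static snippet (the example.com src values are placeholders), doing exactly what the find_all("iframe") and find("iframe")["src"] calls above do once the page has loaded:

```python
from bs4 import BeautifulSoup

# Stand-in for driver.page_source; the src values are placeholders.
html = """<html><body>
<iframe src="https://example.com/embed/1"></iframe>
<iframe src="https://example.com/embed/2"></iframe>
</body></html>"""

soup = BeautifulSoup(html, "html.parser")
srcs = [f["src"] for f in soup.find_all("iframe")]
print(srcs)                        # every iframe's src
print(soup.find("iframe")["src"])  # just the first one
```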