I want to structure a Python program that downloads files (manga) from a particular site once a week. I'm still learning: I took a web scraping course, but I'm lost on how to make these requests. As an example, I took the code from this page:
import urllib
import urllib2
import requests

url = 'http://www.carlissongaldino.com.br/modules/pubdlcnt/pubdlcnt.php?file=http://www.carlissongaldino.com.br/sites/default/files/o-fantasma-da-opera.pdf&nid=1287'

# Python 2 example: download the same file in three different ways
print "downloading with urllib"
urllib.urlretrieve(url, "o-fantasma-da-opera-u.pdf")

print "downloading with urllib2"
f = urllib2.urlopen(url)
data = f.read()
with open("o-fantasma-da-opera-u2.pdf", "wb") as code:
    code.write(data)

print "downloading with requests"
r = requests.get(url)
with open("o-fantasma-da-opera-r.pdf", "wb") as code:
    code.write(r.content)
It shows how to download a particular file using each of those libraries. I see the pattern and I think it would work; I just need to implement the day and the time at which the download runs, and that is the part where I'm lost. Can anyone help me clarify this?
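For reference, here is the kind of approach I was imagining: keep the script running and schedule a weekly job with the third-party schedule package (pip install schedule), reusing the requests pattern from the example above. The URL, output filename, day, and time below are just placeholders I made up, not anything from the course.

import time
import requests
import schedule  # third-party package: pip install schedule

URL = "http://example.com/manga.pdf"  # placeholder: replace with the real file URL

def download_manga():
    # same pattern as the requests example above
    r = requests.get(URL)
    with open("manga.pdf", "wb") as out:
        out.write(r.content)

# run the job once a week, e.g. every Monday at 10:00
schedule.every().monday.at("10:00").do(download_manga)

while True:
    schedule.run_pending()
    time.sleep(60)  # check once a minute whether the job is due

Would this be a reasonable way to do it, or is it better to write only the download part in Python and let the operating system's scheduler (cron on Linux, Task Scheduler on Windows) run the script once a week?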