I need to write a program that automatically downloads PDFs from several sites once a day. This is very easy to do with the C# WebClient class, but on some sites I cannot find the download URL anywhere. When the download button is clicked, the site's code runs a JavaScript function and the link is generated on the fly. I have already tried sending a WebRequest carrying the session cookies (I used Fiddler to identify them) in an attempt to download the PDF from the server's response, but I did not succeed.
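To illustrate, the cookie-based attempt was roughly like the sketch below (simplified; the URL, cookie name, and value are placeholders, not the real ones captured in Fiddler):

```csharp
using System.IO;
using System.Net;

class PdfDownloadAttempt
{
    static void Main()
    {
        // Placeholder URL; on the real site this link is generated by JavaScript at click time.
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/download/document.pdf");
        request.Method = "GET";

        // Attach the session cookies observed in Fiddler (name/value/domain are placeholders).
        request.CookieContainer = new CookieContainer();
        request.CookieContainer.Add(new Cookie("ASP.NET_SessionId", "abc123", "/", "example.com"));

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var file = File.Create("document.pdf"))
        {
            // On the problematic sites this returns an HTML page instead of the PDF.
            stream.CopyTo(file);
        }
    }
}
```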
Click "search journals" in the left-hand corner .
Using the WatiN DLL, which automates a real web browser, I can simulate the click on the download button, but I cannot handle the "save or open the file" prompt that Internet Explorer shows afterwards.
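The WatiN part looks roughly like this (the URL and button text are placeholders for the real site):

```csharp
using System;
using WatiN.Core;

class WatinClickAttempt
{
    [STAThread] // WatiN requires a single-threaded apartment.
    static void Main()
    {
        // Placeholder URL and button text; the real PDF link is generated by JavaScript.
        using (var browser = new IE("https://example.com/journals"))
        {
            // The click itself works, but the PDF is then offered through
            // Internet Explorer's "open or save" prompt, which I cannot handle from here.
            browser.Button(Find.ByText("Download PDF")).Click();
        }
    }
}
```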
Is there any way to download PDFs automatically from sites like this?