Download all PDF links in a web page?
You can use wget and run a command like this:
wget --recursive --level=1 --no-directories --no-host-directories --accept pdf http://example.com
Or with the short options:
wget -r -l 1 -nd -nH -A pdf http://example.com
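For reference, here is the same command with each option annotated (http://example.com is just a placeholder for the page you want to scan):

# --recursive            follow links found on the page
# --level=1              ...but only one level deep
# --no-directories       save all files into the current directory instead of recreating the remote directory tree
# --no-host-directories  do not create a top-level directory named after the host
# --accept pdf           keep only files whose names end in .pdf
wget --recursive --level=1 --no-directories --no-host-directories --accept pdf http://example.com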
UPDATE: Since your update says you are running Windows 7, use wget for Windows from a cmd prompt.
UPDATE 2: For a graphical solution (though it may be overkill, since it downloads other files too) there is DownThemAll.
-
In your browser, press CTRL+SHIFT+J to open the JavaScript console, and enter:
var pdflinks = [];
Array.prototype.map.call(document.querySelectorAll("a[href$=\".pdf\"]"), function(e) {
    if (pdflinks.indexOf(e.href) == -1) { pdflinks.push(e.href); }
});
console.log(pdflinks.join(" "));
This will print in the console:
"https://superuser.com/questions/tagged/somepdf1.pdf" "https://superuser.com/questions/tagged/somepdf2.pdf" "https://superuser.com/questions/tagged/somepdf3.pdf"
Now use wget, which accepts a list of URLs on the command line: wget url1 url2 ...
Copy this output, open a console, type wget followed by a space, press the right mouse button to paste your clipboard contents, and press Enter.
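With the placeholder links from the example output above, the pasted command would look like this:

wget "https://superuser.com/questions/tagged/somepdf1.pdf" "https://superuser.com/questions/tagged/somepdf2.pdf" "https://superuser.com/questions/tagged/somepdf3.pdf"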
To use a download file instead, join the links with "\n" (one URL per line), save them to a file, and pass that file with wget -i mydownload.txt.
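A minimal sketch of that variant, assuming you keep the pdflinks array from the console snippet above and save the output into the file mydownload.txt mentioned before:

console.log(pdflinks.join("\n"));  // print one URL per line instead of space-separated

Save the printed list to mydownload.txt, then run:

wget -i mydownload.txt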
Note that most other (GUI) download programs also accept a space-separated list of URLs.
Hope this helps. This is how I generally do it; it is faster and more flexible than any extension with a graphical UI that I would have to learn and stay familiar with.