How can I download the PDFs of a website using only the root domain name?

Solution 1:

The following command should work:

wget -r -A "*.pdf" "http://yourWebsite.net/"

Here -r enables recursive retrieval and -A "*.pdf" tells wget to keep only files whose names match *.pdf. See man wget for more info.
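
If the PDFs sit several levels deep or you want them collected in one place, a few extra flags help. This is only a sketch, assuming you want to stay below the start URL, cap the recursion depth, and flatten everything into a local pdfs/ directory (yourWebsite.net is a placeholder as above):

# -l 5: recurse at most five levels; -np: never ascend above the start URL;
# -nd: don't recreate the site's directory tree locally; -P pdfs: save into ./pdfs
wget -r -l 5 -np -nd -A "*.pdf" -P pdfs "http://yourWebsite.net/"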

Solution 2:

If the above doesn't work, try the following (replace the URL with your own):

lynx -listonly -dump "http://www.philipkdickfans.com/resources/journals/pkd-otaku/" | awk 'tolower($2) ~ /\.pdf$/ {print $2}' | xargs -L1 wget
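
How it works: lynx -listonly -dump prints a numbered list of every link on the page (lines like "   1. http://..."), so the awk stage takes the second field and keeps only URLs ending in .pdf; xargs then hands each URL to wget, one per invocation. If your lynx build supports the -nonumbers option (an assumption; check lynx -help), the numbering can be suppressed at the source and the awk stage replaced with a plain grep:

# print bare URLs, keep only those ending in .pdf, fetch each one with wget
lynx -listonly -nonumbers -dump "http://www.philipkdickfans.com/resources/journals/pkd-otaku/" | grep -i '\.pdf$' | xargs -L1 wget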

You might need to install lynx first:

sudo apt install lynx