Bulk image download from a Piwigo-based web gallery

You can use wget as shown here:

Downloading an Entire Web Site with wget

Sep 05, 2008 By Dashamir Hoxha in HOW-TOs

If you ever need to download an entire Web site, perhaps for off-line viewing, wget can do the job—for example:

$ wget \
     --recursive \
     --no-clobber \
     --page-requisites \
     --html-extension \
     --convert-links \
     --restrict-file-names=windows \
     --domains website.org \
     --no-parent \
         www.website.org/tutorials/html/

This command downloads the Web site http://www.website.org/tutorials/html/.

The options are:

  • --recursive: download the entire Web site.
  • --domains website.org: don't follow links outside website.org.
  • --no-parent: don't follow links outside the directory tutorials/html/.
  • --page-requisites: get all the elements that compose the page (images, CSS and so on).
  • --html-extension: save files with the .html extension.
  • --convert-links: convert links so that they work locally, off-line.
  • --restrict-file-names=windows: modify filenames so that they will work in Windows as well.
  • --no-clobber: don't overwrite any existing files (used in case the download is interrupted and resumed).

Of these, --page-requisites and --recursive will likely be needed, though --convert-links or --no-clobber may also be useful. For more information on using wget, run man wget.
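
As a rough sketch only (the gallery address below is a placeholder, not one taken from the question), pointing the same options at a Piwigo gallery's front page would look like this:

$ wget \
     --recursive \
     --no-clobber \
     --page-requisites \
     --html-extension \
     --convert-links \
     --restrict-file-names=windows \
     --domains example.com \
     --no-parent \
         http://example.com/gallery/

Whether this actually pulls in the full-size photos depends on how the gallery pages link to them, so the image-only approach in the answer below may still be needed.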


This is my solution to get the images, as your question asks.

So first create the folder to save the images, then cd into it

#terminal
mkdir imagesFolder
cd imagesFolder/

# This one will take a long time but will download
# every single image related to this website
wget -r -nd -H -p -A '*.jpg','*.jpeg','*.png' -e robots=off http://mermaid.pink/

# I recommend using this one instead, as the images on this site are all jpg
# and the recursion level is set to 1
wget -r -l 1 -nd -H -p -A '*.jpg' -e robots=off http://mermaid.pink/
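
If you want to go easier on the server, a small variation (my own sketch; the one-second delay is an arbitrary choice, not something from the original answer) adds wget's standard --wait and --random-wait options:

# Same as the recommended command, but pause roughly a second between requests
wget -r -l 1 -nd -H -p -A '*.jpg' -e robots=off --wait=1 --random-wait http://mermaid.pink/

# Quick sanity check: count how many images ended up in the folder
ls *.jpg | wc -l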

The wget arguments explained:

  • -r | --recursive:

    • Turn on recursive retrieving. The default maximum depth is 5.
  • -l depth | --level=depth:

    • Specify recursion maximum depth level depth.
  • -nd | --no-directories:

    • Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions .n).
  • -H | --span-hosts:

    • Enable spanning across hosts when doing recursive retrieving.
  • -p | --page-requisites:

    • This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets...
  • -A | --accept:

    • Specify comma-separated lists of file name suffixes or patterns to accept. Note that if any of the wildcard characters, *, ?, [ or ], appear in an element of acclist, it will be treated as a pattern, rather than a suffix. In this case, you have to enclose the pattern into quotes to prevent your shell from expanding it, like in -A "*.mp3" or -A '*.mp3'.
  • -e | --execute:

    • Execute command as if it were a part of .wgetrc. A command thus invoked will be executed after the commands in .wgetrc, thus taking precedence over them. If you need to specify more than one wgetrc command, use multiple instances of -e.
    • In this case the robots=off is the argument of -e
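
Putting a few of these together, here is a minimal sketch of an alternative invocation (my own addition, using wget's standard -P / --directory-prefix option rather than anything from the answer above) that skips the separate mkdir and cd steps:

# -P tells wget where to save the files; it creates imagesFolder/ if it does not exist
wget -r -l 1 -nd -H -p -A '*.jpg' -e robots=off -P imagesFolder http://mermaid.pink/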

For more info on wget, type in the terminal

man wget


Thanks T04435