How can I download all of a webpage's images at once?
Automator
Use OS X's Automator.app to find, extract, and save the images from your current web page. The combination of Actions needed is:
- Get Current Webpage from Safari
- Get Contents of Webpages
- Save Images From Web Content
To learn more about using Automator, see Apple's Mac Basics: Automator.
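If you save that workflow to disk, you can also trigger it outside the Automator app with macOS's automator command-line tool. A minimal sketch, assuming the three-action workflow was saved as ~/SaveImages.workflow (a hypothetical path):
# run the saved workflow from Terminal (the path is hypothetical)
automator ~/SaveImages.workflow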
Terminal
An alternative approach is to use wget or curl from the command line, as described in "What's the fastest and easiest way to download all the images from a website?".
Using wget:
wget http://en.wikipedia.org/wiki/Service_Ribbon -p -A .jpg,.jpeg,.png -H -nd
- -p (--page-requisites) downloads resources like images and stylesheets even when you don't use -r.
- -A specifies suffixes or glob-style patterns to accept.
- -H (--span-hosts) follows links to other domains like upload.wikimedia.org.
- -nd (--no-directories) downloads all files to the current directory without creating subdirectories.
You can install wget with brew install wget after installing Homebrew.
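If you would rather not fill your working directory, wget's -P (--directory-prefix) option writes the files somewhere else. A small variation on the command above, with images/ as a placeholder folder name:
# same command as above, but save every file into an images/ subfolder
wget http://en.wikipedia.org/wiki/Service_Ribbon -p -A .jpg,.jpeg,.png -H -nd -P images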
You might also just use curl:
curl example.tumblr.com | grep -o 'src="[^"]*\.jpg"' | cut -d\" -f2 |
while read -r l; do curl "$l" -o "${l##*/}"; done
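The same one-liner pattern can cover other image formats too. A rough sketch, with example.com as a placeholder and still assuming the page uses absolute URLs in its src attributes:
# download every jpg/jpeg/png/gif referenced by src="..." on the page
url="https://example.com/"                      # placeholder
curl -s "$url" |
grep -Eo 'src="[^"]*\.(jpg|jpeg|png|gif)"' |    # keep only the src="...<ext>" matches
cut -d\" -f2 |                                  # strip src=" and the trailing quote
while read -r img; do
  curl -sO "$img"                               # -O saves under the remote file name
done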
Downloading images from Tumblr or Blogspot:
api="http://api.tumblr.com/v2/blog/example.tumblr.com/posts?type=photo&api_key=get from tumblr.com/api"
seq 0 20 $(curl -s $api | jq .response.total_posts) |
while read n; do
curl -s "$api&offset=$n" |
jq -r '.response.posts[].photos[].original_size.url'
done | awk '!a[$0]++' | parallel wget -q
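GNU parallel is not part of macOS (brew install parallel); if you don't want to install it, the xargs that ships with OS X supports -P for concurrent processes and can handle the download step instead. A sketch of just that final stage:
# drop-in replacement for the trailing  awk '!a[$0]++' | parallel wget -q  stage:
awk '!a[$0]++' | xargs -n 1 -P 4 wget -q    # dedupe, then run at most 4 downloads at once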
# the feed embeds each post's HTML in escaped form, so the quotes around image
# URLs appear as &quot;; cut then strips everything up to that entity's semicolon
curl -L 'http://someblog.blogspot.com/atom.xml?max-results=499' |
grep -io 'href=&quot;http://[^&]*\.jpg' |
cut -d\; -f2 |
awk '!a[$0]++' |
parallel wget -q
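A single request with max-results=499 only covers the first 499 posts. Blogger feeds also accept a start-index parameter, so a larger blog can be paged through in chunks; a rough sketch (the blog URL and the index values are placeholders, and start-index support is assumed from Blogger's feed API):
# fetch the feed in 499-post pages, then extract and download as before
for i in 1 500 999; do
  curl -L "http://someblog.blogspot.com/atom.xml?start-index=$i&max-results=499"
done |
grep -io 'href=&quot;http://[^&]*\.jpg' |
cut -d\; -f2 |
awk '!a[$0]++' |
parallel wget -q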