CURL to download a directory

Solution 1:

This always works for me; --no-parent and -r (recursive) are included so that only the desired directory is downloaded.

    wget --no-parent -r http://WEBSITE.com/DIRECTORY
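
By default this mirrors the remote layout locally, so the files end up under WEBSITE.com/DIRECTORY/. If you want them to land directly in the current directory instead, a variant sketch (same placeholder URL) uses -nH to drop the hostname directory and --cut-dirs to strip leading path components:

    # -nH: don't create a WEBSITE.com/ directory
    # --cut-dirs=1: strip the first remote path component (DIRECTORY)
    wget --no-parent -r -nH --cut-dirs=1 http://WEBSITE.com/DIRECTORY/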

Solution 2:

HTTP doesn't really have a notion of directories. The slashes other than the first three (http://example.com/) do not have any special meaning except with respect to .. in relative URLs. So unless the server follows a particular format, there's no way to “download all files in the specified directory”.

If you want to download the whole site, your best bet is to traverse all the links in the main page recursively. Curl can't do that, but wget can. This will work if the website is not too dynamic (in particular, wget won't see links that are constructed by JavaScript code). Start with wget -r http://example.com/, and look under “Recursive Retrieval Options” and “Recursive Accept/Reject Options” in the wget manual for more relevant options (recursion depth, exclusion lists, etc.).
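
For instance, a sketch combining a few of those options (the URL and file pattern are placeholders):

    # -l 3: recurse at most 3 levels deep
    # --no-parent: never ascend above the starting directory
    # -A '*.pdf': keep only files matching the pattern; others are
    #             fetched for link traversal but deleted afterwards
    wget -r -l 3 --no-parent -A '*.pdf' http://example.com/docs/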

If the website tries to block automated downloads, you may need to change the user agent string (-U Mozilla) and ignore robots.txt (create an empty file example.com/robots.txt and use the -nc option so that wget doesn't try to download it from the server).
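
A sketch of that robots.txt trick as described above (example.com is a placeholder):

    # pre-create an empty robots.txt; with -nc (no-clobber), wget keeps
    # the existing local copy instead of fetching the server's version
    mkdir -p example.com
    touch example.com/robots.txt
    # -U sets the user agent string; -nc skips files that already exist locally
    wget -r -nc -U Mozilla http://example.com/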