Uncompress file from URL to hard disk?
If unzip had adhered to the pipes-and-filters architectural style characteristic of Unix, one would expect to be able to do this by piping the output of wget into the input of unzip:
wget -O - http://www.test.com/test.zip | (cd destination-folder && unzip -)
Here the dash after -O means standard output, the dash after unzip means standard input, and the parenthesised right-hand side starts a subshell in the correct directory.
However, the ZIP format stores its index of contents at the end of the file, and unzip needs to read this before it can start decompressing. So, unlike gzip et al., it cannot decompress a stream in one pass.
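By contrast, formats that can be decompressed in a single pass stream just fine. As a sketch (assuming, hypothetically, that the server offered the same content as a gzipped tarball instead), the pipeline works as expected:

# hypothetical URL; tar reads the archive from standard input (-f -)
wget -O - http://www.test.com/test.tar.gz | tar -xzf - -C destination-folder

Here tar extracts directly into the destination folder (-C), so no temporary file is needed.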
As a workaround, you need a temporary file:
wget -O /tmp/z.$$ http://www.test.com/test.zip &&
(cd destination-folder && unzip /tmp/z.$$)
Here $$ (the shell's process ID) is just used to generate a reasonably non-colliding number.
Instead of running the subshell, you could use the -d
option of unzip. And for added hygiene, clean up your temporary file:
wget -O /tmp/z.$$ http://www.test.com/test.zip &&
unzip -d destination-folder /tmp/z.$$ &&
rm /tmp/z.$$
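If you want a guaranteed-unique temporary file instead of relying on $$, mktemp is the standard tool; a minimal sketch of the same workaround:

# mktemp creates a unique file and prints its name
tmpfile=$(mktemp) &&
wget -O "$tmpfile" http://www.test.com/test.zip &&
unzip -d destination-folder "$tmpfile"
rm -f "$tmpfile"

The final rm runs unconditionally, so the temporary file is cleaned up even if the download or extraction fails.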
You need more than one step:
- Go into your destination folder:
cd destination
- Download the file:
wget www.test.com/test.zip
- Extract the file:
unzip test.zip
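Chained with &&, those three steps also fit on one line:

cd destination && wget www.test.com/test.zip && unzip test.zip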
For a reusable single command, put the steps into a script:
#!/bin/bash
# Download the archive into /tmp
wget -P "/tmp" "$1"
# Take the file name from the last path component of the URL
filename=$(echo "$1" | awk -F'/' '{print $NF}')
# Extract into the destination folder given as the second argument
unzip "/tmp/$filename" -d "$2"
# Clean up the downloaded archive
rm "/tmp/$filename"
- Open an editor to create a new file called unzip_by_url and put it either in ~/bin or /usr/local/bin:
nano unzip_by_url
Paste the code above, then save and close.
- Make the file executable:
chmod +x /path/to/file/unzip_by_url
- Now start the script with:
unzip_by_url www.test.com/test.zip destination-folder-on-my-pc