How to get the contents of a webpage in a shell variable?
In Linux, how can I fetch a URL and get its contents into a variable in a shell script?
You can use the wget command to download the page and read it into a variable:
content=$(wget google.com -q -O -)
echo "$content"
We use the -O option of wget, which lets us specify the name of the file into which wget writes the page contents. Specifying - sends the dump to standard output, which we collect into the variable content. You can add the -q (quiet) option to turn off wget's output.
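For example, a minimal sketch combining these flags with a basic error check might look like this (https://example.com is just a placeholder URL):
url="https://example.com"
# -q silences wget's progress output; -O - writes the page to standard output.
content=$(wget -q -O - "$url")
# The assignment's exit status is wget's exit status; non-zero means the fetch failed.
if [ $? -ne 0 ]; then
    echo "failed to fetch $url" >&2
    exit 1
fi
# Quote the variable so the page's newlines and spacing are preserved.
echo "$content"
Quoting "$content" matters here: without the quotes, the shell collapses all of the page's whitespace into single spaces.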
You can use the curl command for this as well:
content=$(curl -L google.com)
echo "$content"
We need the -L option because the page we are requesting might have moved, in which case we have to fetch it from its new location. The -L (or --location) option tells curl to follow the redirect for us.
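As a sketch, a few extra (optional) curl flags make this more robust in scripts; https://example.com is again just a placeholder:
url="https://example.com"
# -s silences the progress meter, -S still prints errors,
# -f makes curl return a non-zero exit status on HTTP errors (e.g. 404),
# and -L follows redirects as described above.
if content=$(curl -fsSL "$url"); then
    echo "$content"
else
    echo "request for $url failed" >&2
fi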
There are many ways to get a page from the command line, but it also depends on whether you want the page's source code or the page as a browser renders it:
If you need the source code:
with curl:
curl "$url"
with wget:
wget -O - "$url"
but if you want to get what you can see with a browser, lynx can be useful:
lynx -dump "$url"
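For instance, you can capture the rendered text in a variable and search it; the URL and the grep pattern below are only illustrative:
url="https://example.com"
# lynx -dump renders the page the way a text browser would display it.
content=$(lynx -dump "$url")
# The rendered text is convenient for quick searches:
echo "$content" | grep -i "example"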
You can find many solutions to this little problem; it may be worth reading the man pages for all of these commands. And don't forget to replace $url with your URL :)
Good luck :)