Bash script -- store `curl` output in variable, then format against string in variable
I was hoping to `curl` data into a shell script variable, then use `cat` or `awk` to operate on that variable as if it were a file. Is this possible, or is there a workaround? Do I have to save the `curl` output to a file? Do I have to run `curl` several times, once for each bit of string formatting I want to do?
I am `curl`ing ipinfo.io and want to output the matching lines for "city" and "region". I know how to use `awk` and `sed` to format the output as desired.
Try, for example, `curl ipinfo.io/"8.8.8.8"` to see the result for a Google DNS IP address.
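The relevant part of the output looks roughly like this (other fields omitted):

% curl ipinfo.io/"8.8.8.8"
{
  ...
  "city": "Mountain View",
  "region": "California",
  ...
}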
I'd like to print just "Mountain View" and "California," preferably on the same line, with some common formatting, like "Mountain View, California".
Solution 1:
In general there are more appropriate ways of parsing JSON objects, but since in this case the JSON object is very simple, you may store `curl`'s output in a variable (which is possible) and just use AWK:
var="$(curl ipinfo.io/"8.8.8.8" 2>/dev/null)"
<<<"$var" awk -F'"' '$2=="city"{printf("%s, ", $4)}$2=="region"{print $4}'
% var="$(curl ipinfo.io/"8.8.8.8" 2>/dev/null)"
% <<<"$var" awk -F'"' '$2=="city"{printf("%s, ", $4)}$2=="region"{print $4}'
Mountain View, California
However, unless you want to use `curl`'s output multiple times, you may just use a pipe:
curl ipinfo.io/"8.8.8.8" 2>/dev/null | awk -F'"' '$2=="city"{printf("%s, ", $4)}$2=="region"{print $4}'
% curl ipinfo.io/"8.8.8.8" 2>/dev/null | awk -F'"' '$2=="city"{printf("%s, ", $4)}$2=="region"{print $4}'
Mountain View, California
`<<<` is a form of input redirection called a "here string"; it redirects the STDIN of a command from the terminal to a string. What happens here is that `$var` is expanded between the double quotes; the STDIN of the AWK command is redirected from the terminal to the expanded string, and AWK consequently reads the string as its input file.
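As an aside on the "more appropriate ways of parsing JSON": if the `jq` tool happens to be available (an assumption, it is not required by this answer), the cached variable can be parsed as real JSON instead of relying on the position of the double quotes. A minimal sketch:

# Sketch assuming jq is installed; prints the two fields joined by ", "
<<<"$var" jq -r '"\(.city), \(.region)"'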
Solution 2:
> I was hoping to `curl` data into a shell script variable, then use `cat` or `awk` to operate on that variable as if it were a file. Is this possible, or is there a workaround?
Sure you can. If the content is not too large, and if you want to run multiple commands to parse it, then it's a good idea to cache it in memory rather than re-download it every time.
To store the result of `curl` in a variable:
ipinfo=$(curl ipinfo.io/8.8.8.8)
To run commands on it:
<<< "$ipinfo" awk ...
<<< "$ipinfo" sed ...
The double quotes around `"$ipinfo"` are important to preserve all the whitespace characters.
A "workaround" to not saving the content in a variable is to figure out a way to process the content in a single pipeline, like @kos did.