What's the point of eval/bash -c as opposed to just evaluating a variable?
The third form is not at all like the other two -- but to understand why, we need to go through the order of operations bash follows when interpreting a command, and look at which of those operations are performed by each method.
Bash Parsing Stages
1. Quote Processing
2. Splitting Into Commands
3. Special Operator Parsing
4. Expansions
5. Word Splitting
6. Globbing
7. Execution
Using eval "$string"
`eval "$string"` follows all the above steps starting from #1. Thus:

- Literal quotes within the string become syntactic quotes
- Special operators such as `>()` are processed
- Expansions such as `$foo` are honored
- Results of those expansions are split on whitespace into separate words
- Those words are expanded as globs if they parse as globs and have available matches

...and finally the command is executed.
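A minimal sketch of those steps in action (the variable names here are illustrative):

```shell
#!/usr/bin/env bash

# Literal quotes inside the string become syntactic quotes under eval,
# so printf receives the single argument "two words":
string='printf "%s\n" "two words"'
eval "$string"              # prints: two words

# Expansions inside the string are honored:
name=world
eval 'echo "hello $name"'   # prints: hello world
```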
Using sh -c "$string"
...performs the same steps as `eval` does, but in a new shell launched as a separate process; thus, changes to variable state, current directory, etc. will expire when this new process exits. (Note, too, that that new shell may be a different interpreter supporting a different language; i.e., `sh -c "foo"` will not support the same syntax that `bash`, `ksh`, `zsh`, etc. do.)
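A small illustration of that process isolation (the variable name is illustrative):

```shell
#!/usr/bin/env bash

cd /tmp
sh -c 'cd /; greeting="hi from the child"'   # runs in a separate process

pwd                          # still /tmp: the child's cd died with it
echo "${greeting:-unset}"    # prints: unset -- the assignment never reached us
```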
Using $string
...starts at step 5, "Word Splitting".
What does this mean?
Quotes are not honored. `printf '%s\n' "two words"` will thus parse as the words `printf`, `'%s\n'`, `"two` and `words"`, as opposed to the usual/expected behavior of `printf` receiving `%s\n` and `two words` (with the quotes being consumed by the shell).

Splitting into multiple commands (on `;`s, `&`s, or similar) does not take place.
Thus:

```
s='echo foo && echo bar'
$s
```

...will emit the following output:

```
foo && echo bar
```

...instead of the following, which would otherwise be expected:

```
foo
bar
```
Special operators and expansions are not honored. No `$(foo)`, no `$foo`, no `<(foo)`, etc.
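For example (a quick sketch): the results of expanding the variable are word-split and glob-expanded, but never expanded again, so substitutions inside the string stay literal.

```shell
#!/usr/bin/env bash

# $(date) and $HOME are just literal words after splitting:
s='echo $(date) $HOME'
$s        # prints literally: $(date) $HOME
```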
Redirections are not honored. `>foo` or `2>&1` is just another word created by string-splitting, rather than a shell directive.
$ bash -c "$COMMAND"
This version starts up a new bash interpreter, runs the command, and then exits, returning control to the original shell. You don't need to be running bash in the first place to do this; you can start a bash interpreter from tcsh, for example. You might also do this from a bash script to start with a fresh environment or to avoid polluting your current environment.
EDIT: As @CharlesDuffy points out, starting a new bash shell in this way will clear shell variables, but environment variables will be inherited by the spawned shell process.
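That distinction can be seen directly (the variable names here are illustrative):

```shell
#!/usr/bin/env bash

shellvar=local              # plain shell variable: not passed to children
export envvar=exported      # environment variable: inherited by children

bash -c 'echo "${shellvar:-unset} ${envvar:-unset}"'
# prints: unset exported
```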
Using `eval` causes the shell to parse your command twice. In the example you gave, executing `$COMMAND` directly and doing an `eval` are equivalent, but have a look at the answer here to get a more thorough idea of what `eval` is good (or bad) for.
There are at least times when they are different. Consider the following:
```
$ cmd="echo \$var"
$ var=hello
$ $cmd
$var
$ eval $cmd
hello
$ bash -c "$cmd"

$ var=world bash -c "$cmd"
world
```
which shows the different points at which variable expansion is performed. It's even clearer if we do `set -x` first:
```
$ set -x
$ $cmd
+ echo '$var'
$var
$ eval $cmd
+ eval echo '$var'
++ echo hello
hello
$ bash -c "$cmd"
+ bash -c 'echo $var'

$ var=world bash -c "$cmd"
+ var=world
+ bash -c 'echo $var'
world
```
We can see here much of what Charles Duffy talks about in his excellent answer. For example, attempting to execute the variable directly prints `$var` because parameter expansion and the earlier steps had already been done by the time word splitting ran, so we don't get the value of `var` as we do with `eval`.

The `bash -c` form only inherits `export`ed variables from the parent shell, and since I didn't export `var`, it's not available to the new shell.