Why combine commands on a single line in a Bash script?
I am new to Linux and Bash scripting. At work, I've seen Bash scripts with constructions similar to this:
mkdir build && cd build && touch blank.txt
Or:
mkdir build; cd build; touch blank.txt
Or even the exotic:
COMMAND="mkdir build && cd build && touch blank.txt"
eval ${COMMAND}
The last example gives one possible use-case where a single line could be useful, but generally the following is easier to read and (at least for me) allows you to visually debug the script:
mkdir build
cd build
touch blank.txt
Are there technical advantages to cramming everything on a single line?
Solution 1:
mkdir build && cd build && touch blank.txt
In Bash (and some other shells, and most high-level programming languages), && is a logical AND: the next command is executed only if the previous command succeeds (returns true). There is also a logical OR, ||. For example, you can combine the two in one statement:
mkdir /tmp/abc/123 && echo 'the dir is created' || echo "the dir isn't created"
Note that the construction cmd_1 && cmd_2 || cmd_3 is not a substitute for the if.. then.. else statement, because cmd_3 will be executed no matter which of the preceding commands returns false. So you must be careful about the circumstances in which you use it. Here is an example:
$ true && false || echo success
success
$ false && true || echo success
success
$ false && false || echo success
success
As a rule of thumb, when I use the cd command within a script, I add a test that the directory change succeeded: cd somewhere/ || exit. A more proper test is proposed by @dessert: if ! cd dir; then exit 1; fi. But in all cases, as protection against the script failing part-way, it is better to use the set -e option, as shown in @mrks' answer.
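A minimal sketch of how those guards might look at the top of a script (the directory name is just a placeholder):
#!/bin/bash
set -e                    # abort the whole script as soon as any command fails
cd somewhere/ || exit 1   # belt and braces: stop explicitly if the directory change fails
touch blank.txt           # only reached if the cd succeeded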
mkdir build; cd build; touch blank.txt
; is a command separator; it is used when several separate commands are written on one line.
Note that when ;, &&, or || are in use, it is not mandatory to write the commands on one line; this is illustrated in @allo's answer.
Overall, IMO, there is no special technical advantage or difference between writing the commands on one line and on separate lines.
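For instance, the AND list from above can be spread over several lines; a trailing && (or ||) tells the shell that the command continues on the next line:
mkdir build &&
    cd build &&
    touch blank.txt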
COMMAND="mkdir build && cd build && touch blank.txt"
eval ${COMMAND}
Here one or more commands (and their arguments) are stored as the value of a variable, and that variable is then turned back into a command with the help of eval. Thus, if the command is executed multiple times within a script, you can change it in only one place.
Let's say you need to change the way the file blank.txt is created; you can, for example, change the relevant line like this:
COMMAND="mkdir build && cd build && echo 'test' > blank.txt"
eval's actual advantage over alias appears when redirections, pipes, or logical operators (control operators in general) are in use. In most cases you can use functions instead of eval when alias is not applicable. Also, when the variable contains only a single command, e.g. CMD="cat", we do not need eval, because Bash word splitting will expand "$CMD" "$file1" "$file2" correctly.
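As a sketch of the function alternative (the function name make_blank is made up here, not part of the original script):
make_blank() {
    mkdir build && cd build && touch blank.txt
}
make_blank    # call it wherever the command is needed; redefine it in one place to change the behaviour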
- Here is one example where eval was the simplest way to do some command automation: Tail the "in the last hour written lines from a log file" is it possible?
- The previous version of this section, discussed within the comments, is available here.
Solution 2:
The answers so far address what happens and how it works, but I think you were asking "why".
The main reason to do it (for me), rather than typing the commands on three lines, is that sometimes the commands take time (sometimes minutes) and you don't want to hang around for the whole time.
For example, I might build && deploy && start my app, then head out for lunch. The process could take 15 minutes to complete. However, since I used &&, it won't deploy if the build fails, so that works.
Type-ahead is an alternative, but it is iffy: you might not be able to see what you type (and therefore make mistakes), or a program might eat the type-ahead (thanks, Grails!). Then you come back from lunch and find out that you still have two longish tasks to kick off because of a typo.
The other alternative is writing a small script or alias. Not a bad idea, but it doesn't allow as much flexibility; for instance, I can:
stop && build && test && deploy && start
or I can just:
stop && start
Or any number of other combinations, which would quickly pollute a script with a bunch of flags.
Even if you do write a script, you are likely to integrate it with other scripts using &&.
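If you do go the script route, one common pattern (just a sketch; the function bodies are placeholders) is to wrap each step in a function and chain only the ones you need:
#!/bin/bash
stop()   { echo "stopping...";  }    # placeholder bodies
build()  { echo "building...";  }
deploy() { echo "deploying..."; }
start()  { echo "starting...";  }

stop && build && deploy && start     # full cycle
# stop && start                      # or just a quick restart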
Solution 3:
Combining commands on Linux can be very useful.
A good example may be restarting a remote network interface via ssh (if you've changed network configuration or something...).
ifdown eth0 && ifup eth0
This can save you from having to go physically to the server in order to bring up the interface you were originally ssh'ing over. (You will not be able to execute ifup eth0 if ifdown eth0 is executed alone.)
Solution 4:
What do the commands do?
In order to understand the why, we also need to understand what is being done. Scripts are sequential. In your last example:
mkdir build
cd build
touch blank.txt
cd build will be executed regardless of whether mkdir build succeeded or not. Of course, if mkdir build fails, you will see an error from cd.
mkdir build; cd build; touch blank.txt
is a sequential list. This can be thought of as essentially the same as multiple script lines. The shell grammar treats this slightly differently, but the result will be exactly the same as above. Again, commands are executed regardless of whether the previous one succeeded.
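A quick way to see that (a minimal illustration):
false; echo "this still runs"    # the echo is executed even though false returned a non-zero status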
Finally, there's
mkdir build && cd build && touch blank.txt
which is an AND list: a sequence of one or more pipelines (or commands) separated by the && operator. Commands in such a list are executed with left associativity. This means the shell keeps taking two commands/pipelines separated by &&, executes the one on the left first, and runs the one on the right only if the left one succeeded (returned a zero exit status).
In this example, what will happen?
- The shell executes mkdir build first.
- If the above command succeeded (returned exit code 0), cd build will run.
- Again, let's look at the left associativity: ... && touch blank.txt. Did the stuff to the left of && succeed? If yes, run touch blank.txt.
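You can watch the short-circuiting through the exit status (a minimal illustration; /nonexistent is just a path that should not exist on your system):
mkdir /nonexistent/build && echo "never reached"
echo $?    # non-zero: the exit status of the failed mkdir, because the echo was skipped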
Why use a sequential list vs an AND list?
The sequential list mkdir build; cd build; touch blank.txt makes sense when we are OK with one of the commands failing. Suppose the build directory already exists, so mkdir build fails: we still want to cd into the build/ directory and touch blank.txt. Of course, the disadvantage here is the possibility of an unintended result. What if build has no execute bit set, and that's why cd build fails? Then touch blank.txt will happen in our current working directory instead of the intended build/.
Now consider mkdir build && cd build && touch blank.txt. This is somewhat more logical, though it has its own pitfalls. If build already exists, someone likely already did the touch blank.txt as well, so we may not want to run touch blank.txt again, as this will modify the file's access timestamp, which might not be desirable.
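One possible middle ground (just a sketch, not from the answer): tolerate an existing directory with mkdir -p, still guard the cd, and only create the file if it is not already there:
mkdir -p build && cd build && { [ -e blank.txt ] || touch blank.txt; }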
Conclusion
Placing commands on the same line depends on the purpose of the commands and what you're trying to achieve. Sequential lists can simplify the output, as the shell won't redisplay the prompt until the commands have finished. An AND list allows for conditional execution and can prevent unnecessary commands from running. The bottom line is that the user needs to know the differences and choose the right tool for the task.
Solution 5:
Another reason might lie in how the script came to be: it is not unusual for technical users to build very long interactive command lines, repeatedly extending and testing them until they work as desired, then pasting the result into a text file and slapping a #!/bin/bash header over it. Sometimes multiple segments of a script evolve by that method.
Long one-line constructs that begin with for i in, grep | cut | sed | xargs chains, heavy use of && and ||, and $(cat something) constructs are often indicative of such origins.