How can I delete a newline if it is the last character in a file?

perl -pe 'chomp if eof' filename >filename2

or, to edit the file in place:

perl -pi -e 'chomp if eof' filename

[Editor's note: -pi -e was originally -pie, but, as noted by several commenters and explained by @hvd, the latter doesn't work.]

This was described as a 'perl blasphemy' on an awk website I saw.

But, in a test, it worked.
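To confirm that only the final newline is removed, you can inspect the result with od -c (demo.txt is just an illustrative file name; the comments show the expected output from GNU od):

printf 'one\ntwo\n' > demo.txt
perl -pe 'chomp if eof' demo.txt | od -c
# 0000000   o   n   e  \n   t   w   o
# 0000007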


You can take advantage of the fact that shell command substitutions remove trailing newline characters:

Simple form that works in bash, ksh, zsh:

printf %s "$(< in.txt)" > out.txt

Portable (POSIX-compliant) alternative (slightly less efficient):

printf %s "$(cat in.txt)" > out.txt

Note:

  • If in.txt ends with multiple newline characters, the command substitution removes all of them, as the check after these notes demonstrates (thanks, Sparhawk). It doesn't remove whitespace characters other than trailing newlines.
  • Since this approach reads the entire input file into memory, it is only advisable for smaller files.
  • printf %s ensures that no newline is appended to the output (it is the POSIX-compliant alternative to the nonstandard echo -n; see http://pubs.opengroup.org/onlinepubs/009696799/utilities/echo.html and https://unix.stackexchange.com/a/65819)
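As a quick sanity check of this approach (the file names are purely illustrative), compare the byte counts before and after:

printf 'line 1\nline 2\n\n\n' > in.txt   # ends with three newline characters
wc -c < in.txt                           # 16
printf %s "$(< in.txt)" > out.txt
wc -c < out.txt                          # 13 - all three trailing newlines are gone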

A guide to the other answers:

  • If Perl is available, go for the accepted answer - it is simple and memory-efficient (doesn't read the whole input file at once).

  • Otherwise, consider ghostdog74's Awk answer - it's obscure, but also memory-efficient; a more readable equivalent (POSIX-compliant) is:

  • awk 'NR > 1 { print prev } { prev=$0 } END { ORS=""; print }' in.txt

  • Printing is delayed by one line so that the final line can be handled in the END block, where it is printed without a trailing \n due to setting the output record separator (ORS) to an empty string.

  • If you want a verbose, but fast and robust solution that truly edits in-place (as opposed to creating a temp. file that then replaces the original), consider jrockway's Perl script.


You can do this with head from GNU coreutils; it supports arguments that are relative to the end of the file. So to leave off the last byte, use:

head -c -1
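Note that this drops the last byte unconditionally, whether or not it is a newline, which is why the check below is needed. For example (expected GNU od output shown as comments):

printf 'a\nb' | head -c -1 | od -c
# 0000000   a  \n
# 0000002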

To test whether the file ends with a newline, you can use tail and wc. The following example saves the result to a temporary file and subsequently overwrites the original:

if [[ $(tail -c1 file | wc -l) == 1 ]]; then
  head -c -1 file > file.tmp
  mv file.tmp file
fi
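The test works because wc -l counts newline characters, so tail -c1 file | wc -l prints 1 exactly when the file's last byte is a newline. For example (illustrative file names):

printf 'a\nb\n' > with_nl.txt
printf 'a\nb'   > without_nl.txt
tail -c1 with_nl.txt | wc -l      # 1 - trailing newline present
tail -c1 without_nl.txt | wc -l   # 0 - no trailing newline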

You could also use sponge from moreutils to do "in-place" editing:

[[ $(tail -c1 file | wc -l) == 1 ]] && head -c -1 file | sponge file

You can also make a general reusable function by stuffing this in your .bashrc file:

# Example:  remove-last-newline < multiline.txt
function remove-last-newline(){
    local file=$(mktemp)
    cat > "$file"                  # read stdin into a temporary file
    if [[ $(tail -c1 "$file" | wc -l) == 1 ]]; then
        head -c -1 "$file" > "$file.tmp"
        mv "$file.tmp" "$file"
    fi
    cat "$file"
    rm "$file"                     # clean up the temporary file
}

Update

As noted by KarlWilbur in the comments and used in Sorentar's answer, truncate --size=-1 can replace head -c -1 and supports in-place editing.
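For example, with the same guard as above (this assumes GNU coreutils truncate, which shrinks the file in place without a temporary file):

[[ $(tail -c1 file | wc -l) == 1 ]] && truncate --size=-1 file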


head -n -1 abc > newfile
tail -n 1 abc | tr -d '\n' >> newfile

Edit 2:

Here is an awk version (corrected) that doesn't accumulate a potentially huge array:

awk '{if (NR > 1) print line; line=$0} END {printf "%s", line}' abc
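A quick way to verify the result (assuming abc ends with exactly one newline) is to compare byte counts; newfile should be exactly one byte smaller:

awk '{if (NR > 1) print line; line=$0} END {printf "%s", line}' abc > newfile
wc -c abc newfile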