bash & friends: removing excess <LF>

There are a couple of text files with one or more (or a lot) of superfluous line feeds at the end, possibly with some white space in between. The desired result is just one LF at the end. The following solution seems pretty ugly. Can you think of a more elegant solution?

SPACE=' '
TAB=$(printf '\t')
WS="${SPACE}${TAB}"
sed -e "s/^[$WS]*$//" "$INFILE" | tr '\012' '¶' \
    | sed -e "s/¶*$/¶/" | tr '¶' '\012' > "$OUTFILE"
mv "$OUTFILE" "$INFILE"
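
A possibly more elegant alternative (a sketch, not from the thread; file variables as above): have awk buffer blank and whitespace-only lines and flush the buffer only when a later non-blank line appears, so blank lines inside the text survive while trailing ones are dropped.

```shell
# Hold blank/whitespace-only lines in a buffer; emit the buffer only when
# a non-blank line follows, so trailing blank lines are never printed.
# NF is 0 for empty and whitespace-only lines with the default field separator.
awk 'NF { printf "%s", blank; blank = ""; print; next }
     { blank = blank $0 "\n" }' "$INFILE" > "$OUTFILE"
mv "$OUTFILE" "$INFILE"
```

This avoids the tr round-trip and the placeholder character entirely.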

Old DOS files? dos2unix, should be in the repos. Used this in the 90’s for exactly this problem.

Use Perl.

perl -0777 -ne 's/
\s*$/
/; print $_'

I am impressed. The universal junk-at-the-end-of-file remover filter in 39 bytes. I give you a rep point and say “thank you”.

Well only if the junk is ASCII whitespace, not soft spaces and that sort of stuff in Unicode. :slight_smile:

If you do a hexdump -C there are multiple consecutive "0x0a" (octal "012") bytes at the end (see http://www.asciitable.com/).
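
For example (a quick demonstration, assuming hexdump from util-linux is available), a file ending in three line feeds shows the consecutive 0a bytes directly:

```shell
# The hex column of the dump shows "61 0a 0a 0a": the letter 'a'
# followed by three line-feed bytes.
printf 'a\n\n\n' | hexdump -C
```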

Remove blank lines

sed '/^$/d' InputFile > OutputFile

Oh no, this would remove any blank lines within the text as well. This is not what I want to do. The solution proposed by ken_yap is perfect for any app using plain ASCII (or ISO-8859-1) encodings.

AFAIK dos2unix does exactly the same. Can't remember installing it; it's on both my systems, so maybe it's installed by default? See 'man dos2unix'.