View unicode codepoints for all letters in file on bash
I have to deal with a file that has a lot of invisible control characters, like "right-to-left mark" or "zero-width non-joiner", spaces other than the normal space, and so on, and I am having trouble dealing with them.
Now, I would like to somehow view all letters in a given file, letter by letter (I would like to say "left to right", but I am unfortunately dealing with a right-to-left language), as Unicode code points, using only basic shell tools (like vi, less, cat...). Is it possible somehow?
I know I can display the file in hexadecimal with hexdump, but then I would have to recompute the code points by hand. I really want to see the actual Unicode code points, so I can google them and find out what's happening.
edit: I should add that I don't want to transcode it to a different encoding (which is what I keep finding online). The file is in UTF-8 and that is fine. I just want to know the exact code points of all the letters.
I wrote myself a Perl one-liner that does just that, and also prints the original character. (It expects the file on STDIN.)
perl -C7 -ne 'for(split(//)){print sprintf("U+%04X", ord)." ".$_."\n"}'
However, there should be a better way than this.
I needed the code point for some common smileys, and came up with this:
echo -n "😊" |               # -n: suppress the trailing newline
iconv -f utf8 -t utf32be |   # UTF-32 big-endian happens to be the code point
xxd -p |                     # -p: just give me the plain hex
sed -r 's/^0+/0x/' |         # strip leading zeros, replace with 0x
xargs printf 'U+%04X\n'      # pretty-print the code point
which prints
U+1F60A
which is the code point for "SMILING FACE WITH SMILING EYES".
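The reason this pipeline works is that UTF-32BE stores each character as its code point in a 4-byte big-endian word. A quick Python sketch (not part of the original pipeline, just an illustration of the same idea) confirms it:

```python
# UTF-32BE encodes each character as its code point, big-endian:
ch = "😊"
be = ch.encode("utf-32-be")   # 4 bytes: b'\x00\x01\xf6\x0a'
cp = int.from_bytes(be, "big")
print(f"U+{cp:04X}")          # U+1F60A
assert cp == ord(ch)          # the bytes really are the code point
```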
Inspired by Neftas's answer, here is a slightly simpler solution that works with strings, rather than a single char:
iconv -f utf8 -t utf32le | hexdump -v -e '8/4 "0x%04x " "\n"' | sed -re"s/0x / /g"
# ^
# The number `8` above determines the number of columns in the output. Modify as needed.
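For comparison, the columnar layout that hexdump produces here can be mimicked in Python (a rough sketch, not from the original answer; `columns` is a hypothetical helper):

```python
def columns(text, width=8):
    # Mimic the hexdump output above: one hex code point per
    # character, `width` entries per row.
    cps = [f"0x{ord(c):04x}" for c in text]
    rows = [" ".join(cps[i:i + width]) for i in range(0, len(cps), width)]
    return "\n".join(rows)

print(columns("hello, 世界"))
```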
I also made a Bash script that reads from stdin or from a file, and displays the original text alongside the Unicode values:
COLWIDTH=8
SHOWTEXT=true

tmpfile=$(mktemp)
cp "${1:-/dev/stdin}" "$tmpfile"

# Convert to UTF-32LE so each 4-byte word is one code point.
left=$(set -o pipefail; iconv -f utf8 -t utf32le "$tmpfile" | hexdump -v -e "$COLWIDTH"'/4 "0x%05x " "\n"' | sed -re"s/0x / /g")
if [ $? -gt 0 ]; then
    echo "ERROR: Could not convert input" >&2
elif $SHOWTEXT; then
    # Replace whitespace with dots and wrap the text into rows of
    # $COLWIDTH characters, so it lines up with the code-point columns.
    right=$(tr '[:space:]' . < "$tmpfile" | sed -re "s/.{$COLWIDTH}/|&|\n/g" | sed -re "s/^.{1,$((COLWIDTH+1))}\$/|&|/g")
    pr -mts" " <(echo "$left") <(echo "$right")
else
    echo "$left"
fi
rm "$tmpfile"
The Perl one-liner didn't work for me, and I couldn't get the hexdump methods to display the actual character beside the code point, so here's a Python one-liner:
python -c 'import sys; print("\n".join(["\\u%04x -> %s" % (ord(c), c) for c in sys.stdin.read() if c.strip()]))'
The output is something like this:
$ cat test.txt
A á Ü Ñ 日本語 1 1 / _
$ python -c 'import sys; print("\n".join(["\\u%04x -> %s" % (ord(c), c) for c in sys.stdin.read() if c.strip()]))' < test.txt
\u0041 -> A
\u00e1 -> á
\u00dc -> Ü
\u00d1 -> Ñ
\u65e5 -> 日
\u672c -> 本
\u8a9e -> 語
\u0031 -> 1
\uff11 -> 1
\u002f -> /
\u005f -> _
Note: for Python 2 the text needs to be decoded first:
python2 -c 'import sys; print("\n".join(["\\u%04x -> %s" % (ord(c), c) for c in sys.stdin.read().decode("utf-8") if c.strip()]))'
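If the goal is to identify the characters without googling each code point, the same idea can be extended with the standard unicodedata module, which knows the official name of every assigned code point. This is a sketch building on the answer above, not part of the original; `describe` is a hypothetical helper:

```python
import unicodedata

def describe(text):
    # One line per character: code point, repr of the character,
    # and its official Unicode name (handy for invisible characters,
    # which print as escapes in the repr).
    return "\n".join(
        f"U+{ord(c):04X} {c!r} {unicodedata.name(c, '<unnamed>')}"
        for c in text
    )

# "a\u200cb" contains an invisible ZERO WIDTH NON-JOINER:
print(describe("a\u200cb"))
```

For the middle character this prints a line like `U+200C '\u200c' ZERO WIDTH NON-JOINER`, which is exactly the kind of output the question asks for.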