How can I find encoding of a file via a script on Linux?

File, Shell, Unix, Encoding

File Problem Overview


I need to find the encoding of all files that are placed in a directory. Is there a way to find the encoding used?

The file command is not able to do this.

The encoding that is of interest to me is ISO 8859-1. If the encoding is anything else, I want to move the file to another directory.

File Solutions


Solution 1 - File

It sounds like you're looking for enca. It can guess and even convert between encodings. Just look at the man page.

Or, failing that, use file -i (Linux) or file -I (OS X). That will output MIME-type information for the file, which will also include the character-set encoding. I found a man-page for it, too :)

Solution 2 - File

file -bi <file name>

If you would like to do this for a bunch of files (here egrep -v Eliminate filters out any paths containing "Eliminate"; adapt or drop it as needed):

for f in `find | egrep -v Eliminate`; do echo "$f" ' -- ' `file -bi "$f"` ; done
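The backtick loop above splits on whitespace, so file names with spaces break it. Here is a sketch of a more robust variant using find -exec; the /tmp/enc_demo sample directory is invented purely for the demonstration:

```shell
# Create a small demo directory; a filename with a space is handled safely.
mkdir -p /tmp/enc_demo
printf 'hello\n' > '/tmp/enc_demo/plain file.txt'

# file -bi prints the MIME type and charset for each file found.
find /tmp/enc_demo -type f -exec sh -c '
  for f; do printf "%s -- %s\n" "$f" "$(file -bi "$f")"; done
' sh {} +
```

find -exec hands the file names to the inner shell as arguments, so no word splitting ever happens.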

Solution 3 - File

uchardet - An encoding detector library ported from Mozilla.

Usage:

~> uchardet file.java
UTF-8

Various Linux distributions (Debian, Ubuntu, openSUSE, Arch, etc.) provide binaries.

Solution 4 - File

In Debian you can also use encguess:

$ encguess test.txt
test.txt  US-ASCII

It should be installed on most systems, as it is shipped with Perl:

$ dpkg -S /usr/bin/encguess
perl: /usr/bin/encguess

Solution 5 - File

Here is an example script using file -I and iconv which works on Mac OS X.

For your question, you need to use mv instead of iconv:

#!/bin/bash
# 2016-02-08
# Check the encoding and convert files (use file -i instead of -I on Linux)
for f in *.java
do
  encoding=$(file -I "$f" | cut -f 2 -d";" | cut -f 2 -d=)
  case $encoding in
    iso-8859-1)
      iconv -f iso-8859-1 -t utf-8 "$f" > "$f.utf8"
      mv "$f.utf8" "$f"
      ;;
  esac
done
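For the original question (moving files whose encoding is not ISO 8859-1 into another directory), the same pattern can be sketched with mv instead of iconv. The function name and the /tmp paths below are made up for this example:

```shell
# Move every regular file in $1 whose detected charset is not iso-8859-1 to $2.
move_non_latin1() {
  src=$1; dest=$2
  mkdir -p "$dest"
  for f in "$src"/*; do
    [ -f "$f" ] || continue
    encoding=$(file -b --mime-encoding "$f")
    [ "$encoding" = "iso-8859-1" ] || mv "$f" "$dest"/
  done
}

# Demo: the UTF-8 file is moved; the Latin-1 file should stay put.
mkdir -p /tmp/enc_src
printf 'caf\xe9\n'     > /tmp/enc_src/latin1.txt   # 0xE9: 'é' in ISO 8859-1
printf 'caf\xc3\xa9\n' > /tmp/enc_src/utf8.txt     # valid multi-byte UTF-8
move_non_latin1 /tmp/enc_src /tmp/enc_other
```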

Solution 6 - File

To convert encoding from ISO 8859-1 to ASCII:

iconv -f ISO_8859-1 -t ASCII filename.txt
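A quick byte-level check of what the conversion actually does; the sample byte 0xE9 is just an illustration:

```shell
# 0xE9 is 'é' in ISO 8859-1; converted to UTF-8 it becomes the byte pair C3 A9.
printf '\xe9' | iconv -f ISO-8859-1 -t UTF-8 | od -An -tx1
# prints: c3 a9
```

Note that plain ASCII has no representation for accented characters, so converting them to ASCII outright fails; with GNU iconv you can append //TRANSLIT to the target encoding (e.g. -t ASCII//TRANSLIT) to substitute a close approximation instead of aborting.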

Solution 7 - File

It is really hard to determine whether a file is ISO 8859-1. If a text contains only 7-bit characters, it could be ISO 8859-1, but you cannot know. If it contains 8-bit characters, those upper-region characters exist in other encodings as well. Therefore you would have to use a dictionary to get a better guess of which words are meant, and determine from there which letters they must be. Finally, if you detect that the file is probably UTF-8, then you can be sure it is not ISO 8859-1.

Encoding detection is one of the hardest things to do, because nothing in the file itself tells you for certain.
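One practical consequence of the above can be turned into a cheap test: bytes that validate as multi-byte UTF-8 sequences are very unlikely to be intentional ISO 8859-1. A sketch using iconv as a UTF-8 validator; the file names are made up:

```shell
# iconv exits non-zero when the input is not valid UTF-8, which makes it a
# handy validator: valid multi-byte UTF-8 is almost certainly not ISO 8859-1.
printf 'caf\xc3\xa9\n' > /tmp/utf8_sample.txt    # valid UTF-8
printf 'caf\xe9\n'     > /tmp/latin1_sample.txt  # lone 0xE9: invalid UTF-8

iconv -f UTF-8 -t UTF-8 /tmp/utf8_sample.txt   > /dev/null 2>&1 && echo "utf8_sample: valid UTF-8"
iconv -f UTF-8 -t UTF-8 /tmp/latin1_sample.txt > /dev/null 2>&1 || echo "latin1_sample: not UTF-8"
```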

Solution 8 - File

With Python, you can use the chardet module.

Solution 9 - File

In PHP you can check it like below:

Specifying the encoding list explicitly:

php -r "echo 'probably : ' . mb_detect_encoding(file_get_contents('myfile.txt'), 'UTF-8, ASCII, JIS, EUC-JP, SJIS, iso-8859-1') . PHP_EOL;"

More accurate, using mb_list_encodings():

php -r "echo 'probably : ' . mb_detect_encoding(file_get_contents('myfile.txt'), mb_list_encodings()) . PHP_EOL;"

Here in the first example, you can see that I used a list of encodings (detect list order) that might be matching. To have a more accurate result, you can use all possible encodings via: mb_list_encodings()

Note the mb_* functions require php-mbstring:

apt-get install php-mbstring

Solution 10 - File

With this command:

for f in `find .`; do echo `file -i "$f"`; done

you can list all the files in a directory and its subdirectories together with the corresponding encoding.

If files have a space in the name, use:

IFS=$'\n'
for f in `find .`; do echo `file -i "$f"`; done

Remember that this changes the field separator for the rest of your Bash session; reset it afterwards with unset IFS (or run the loop in a subshell).
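A sketch that avoids touching IFS at all: null-delimited names survive spaces (and even newlines in file names). The demo directory here is invented for the example:

```shell
# -print0 / xargs -0 pass names byte-for-byte, so no word splitting occurs
# and IFS never needs to change.
mkdir -p /tmp/enc_walk
printf 'hi\n' > '/tmp/enc_walk/name with spaces.txt'

find /tmp/enc_walk -type f -print0 | xargs -0 file -i
```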

Solution 11 - File

This is not something you can do in a foolproof way. One possibility would be to examine every character in the file to ensure that it doesn't contain any characters in the ranges 0x00 - 0x1f or 0x7f - 0x9f, but, as I said, this may be true for any number of encodings, including at least one other variant of ISO 8859.

Another possibility is to look for specific words in the file in all of the languages supported and see if you can find them.

So, for example, find the equivalent of the English "and", "but", "to", "of" and so on in all the supported languages of ISO 8859-1 and see if they have a large number of occurrences within the file.

I'm not talking about literal translation such as:

English   French
-------   ------
of        de, du
and       et
the       le, la, les

although that's possible. I'm talking about common words in the target language (for all I know, Icelandic has no word for "and" - you'd probably have to use their word for "fish" [sorry that's a little stereotypical. I didn't mean any offense, just illustrating a point]).
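The stop-word idea can be sketched in a few lines of shell; the word list below is a tiny illustrative sample, not a real dictionary:

```shell
# Count hits against a handful of common French function words.
# -w matches whole words only; -o prints one match per line so wc -l counts them.
count_french_hits() {
  grep -oiwE 'de|du|et|le|la|les' "$1" | wc -l
}

printf 'le chat et la souris\n' > /tmp/lang_sample.txt
count_french_hits /tmp/lang_sample.txt   # le, et, la -> 3
```

A file with many such hits per hundred words is plausibly French, which in turn makes an 8-bit encoding like ISO 8859-1 more likely than, say, a Cyrillic code page.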

Solution 12 - File

If you're talking about XML files, the XML declaration at the top specifies the encoding: <?xml version="1.0" encoding="ISO-8859-1" ?>. So, you can use regular expressions (e.g., with Perl) to check every file for such a declaration.
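The answer suggests Perl; the same check can be sketched with sed, reading only the first line, where an XML declaration must appear. The sample file path is invented:

```shell
# Print the encoding attribute from the XML declaration, if one is present.
printf '<?xml version="1.0" encoding="ISO-8859-1" ?>\n<root/>\n' > /tmp/decl.xml
sed -n '1s/.*encoding="\([^"]*\)".*/\1/p' /tmp/decl.xml
# prints: ISO-8859-1
```

Keep in mind the declaration is optional (absent means UTF-8 or UTF-16 by default), and nothing guarantees the declared encoding matches the actual bytes.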

More information can be found here: [How to Determine Text File Encoding][1].

[1]: http://codesnipers.com/?q=how-to-determine-text-file-encoding "How to Determine Text File Encoding"

Solution 13 - File

I know you're interested in a more general answer, but what's good in ASCII is usually good in other encodings. Here is a Python 3 one-liner to determine whether standard input is ASCII (the file must be opened in binary mode, "rb", and default=0 handles an empty file):

python3 -c 'from sys import exit,stdin; exit() if 128 > max((c for l in open(stdin.fileno(),"rb") for c in l), default=0) else exit("Not ASCII")' < myfile.txt
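The same ASCII test can be done in plain shell: delete every 7-bit byte with tr and count what survives. The sample files are invented for the demo:

```shell
# tr -d '\000-\177' removes all ASCII bytes (octal 0-177); anything left
# over is a byte >= 0x80, i.e. non-ASCII.
printf 'plain ascii\n' > /tmp/ascii.txt
printf 'caf\xc3\xa9\n' > /tmp/nonascii.txt

tr -d '\000-\177' < /tmp/ascii.txt    | wc -c   # 0 -> pure ASCII
tr -d '\000-\177' < /tmp/nonascii.txt | wc -c   # 2 -> the bytes C3 A9
```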

Solution 14 - File

I am using the following script to

  1. Find all files that match FILTER with SRC_ENCODING
  2. Create a backup of them
  3. Convert them to DST_ENCODING
  4. (optional) Remove the backups

 

#!/bin/bash -xe

SRC_ENCODING="iso-8859-1"
DST_ENCODING="utf-8"
FILTER="*.java"

echo "Find all files that match the encoding $SRC_ENCODING and filter $FILTER"
# Keep only the file name: everything before the first colon of file's output
FOUND_FILES=$(find . -iname "$FILTER" -exec file -i {} \; | grep "$SRC_ENCODING" | cut -d: -f1)

for FILE in $FOUND_FILES ; do
    ORIGINAL_FILE="$FILE.$SRC_ENCODING.bkp"
    echo "Backup original file to $ORIGINAL_FILE"
    mv "$FILE" "$ORIGINAL_FILE"

    echo "converting $FILE from $SRC_ENCODING to $DST_ENCODING"
    iconv -f "$SRC_ENCODING" -t "$DST_ENCODING" "$ORIGINAL_FILE" -o "$FILE"
done

echo "Deleting backups"
find . -iname "*.$SRC_ENCODING.bkp" -exec rm {} \;

Solution 15 - File

In Cygwin, this looks like it works for me:

find -type f -name "<FILENAME_GLOB>" | while read <VAR>; do (file -i "$<VAR>"); done

Example:

find -type f -name "*.txt" | while read file; do (file -i "$file"); done

You could pipe that to AWK and create an iconv command to convert everything to UTF-8, from any source encoding supported by iconv.

Example:

find -type f -name "*.txt" | while read file; do (file -i "$file"); done | awk -F[:=] '{print "iconv -f "$3" -t utf8 \""$1"\" > \""$1"_utf8\""}' | bash

Solution 16 - File

You can extract the encoding of a single file with the file command. For example, with a sample.html file:

$ file sample.html
sample.html: HTML document, UTF-8 Unicode text, with very long lines

$ file -b sample.html
HTML document, UTF-8 Unicode text, with very long lines

$ file -bi sample.html
text/html; charset=utf-8

$ file -bi sample.html | awk -F'=' '{print $2 }'
utf-8

Solution 17 - File

I was working on a project that required cross-platform support, and I ran into many problems related to file encodings.

I made this script to convert everything to UTF-8:

#!/bin/bash
## Retrieve the encoding of files and convert them
for f in `find "$1" -regextype posix-egrep -regex ".*\.(cpp|h)$"`; do
  echo "file: $f"
  ## Read the entire file to get the encoding
  bytes_to_scan=$(wc -c < "$f")
  encoding=`file -b --mime-encoding -P bytes=$bytes_to_scan "$f"`
  case $encoding in
    iso-8859-1 | euc-kr)
      ## Convert from the detected encoding, not a hard-coded one
      iconv -f "$encoding" -t utf-8 "$f" > "$f.utf8"
      mv "$f.utf8" "$f"
      ;;
  esac
done

I used a hack to make file read the entire file when estimating the encoding: file -b --mime-encoding -P bytes=$bytes_to_scan "$f" (by default, file only examines the leading part of the file).

Solution 18 - File

With Perl, use Encode::Detect.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Manglu | View Question on Stackoverflow
Solution 1 - File | Shalom Craimer | View Answer on Stackoverflow
Solution 2 - File | madu | View Answer on Stackoverflow
Solution 3 - File | qwert2003 | View Answer on Stackoverflow
Solution 4 - File | not2qubit | View Answer on Stackoverflow
Solution 5 - File | Wolfgang Fahl | View Answer on Stackoverflow
Solution 6 - File | fimbulwinter | View Answer on Stackoverflow
Solution 7 - File | Norbert Hartl | View Answer on Stackoverflow
Solution 8 - File | fccoelho | View Answer on Stackoverflow
Solution 9 - File | Mohamed23gharbi | View Answer on Stackoverflow
Solution 10 - File | danilo | View Answer on Stackoverflow
Solution 11 - File | paxdiablo | View Answer on Stackoverflow
Solution 12 - File | evgeny9 | View Answer on Stackoverflow
Solution 13 - File | wkschwartz | View Answer on Stackoverflow
Solution 14 - File | Matyas | View Answer on Stackoverflow
Solution 15 - File | skeetastax | View Answer on Stackoverflow
Solution 16 - File | Daniel Faure | View Answer on Stackoverflow
Solution 17 - File | Teocci | View Answer on Stackoverflow
Solution 18 - File | manu_v | View Answer on Stackoverflow