Is there a set of "Lorem ipsums" files for testing character encoding issues?

Unit Testing · Character Encoding · Xunit

Unit Testing Problem Overview


For layout testing we have the famous "Lorem ipsum" text to check how things look.

What I am looking for is a set of files containing text encoded in several different encodings, which I can use in my JUnit tests to exercise methods that deal with character encoding when reading text files.

Example:

Take an ISO 8859-1 encoded test file and a Windows-1252 encoded test file. The Windows-1252 file has to exercise the range 0x80–0x9F, where the two encodings differ. In other words, it must contain at least one character from that range to be distinguishable from ISO 8859-1.
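That difference is easy to probe directly: decoding the same byte with both charsets yields different characters. A minimal Java sketch (the class name is made up; the charset names are the standard JDK ones):

```java
import java.nio.charset.Charset;

public class EncodingProbe {
    public static void main(String[] args) {
        // 0x80 lies in the 0x80-0x9F range where the two encodings differ:
        // it is the Euro sign in Windows-1252 but a C1 control in ISO 8859-1.
        byte[] bytes = {(byte) 0x80};
        String latin1 = new String(bytes, Charset.forName("ISO-8859-1"));
        String cp1252 = new String(bytes, Charset.forName("windows-1252"));
        System.out.println(Integer.toHexString(latin1.charAt(0))); // 80
        System.out.println(Integer.toHexString(cp1252.charAt(0))); // 20ac
    }
}
```

So a Windows-1252 test file containing "€" (byte 0x80) is enough to catch a reader that wrongly assumes ISO 8859-1.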

Maybe the best set of test files would be one where the file for each encoding contains every one of its characters exactly once. But maybe I am not aware of something - we all love this encoding stuff, right? :-)

Is there such a set of test-files for character-encoding issues out there?

Unit Testing Solutions


Solution 1 - Unit Testing

The Wikipedia article on diacritics is pretty comprehensive; unfortunately, you have to extract the characters manually. There may also be mnemonic sentences for each language. For instance, in Polish we use:

> Zażółć gęślą jaźń

which contains all 9 Polish diacritics in one correct sentence. Another useful search hint is pangrams: sentences using every letter of the alphabet at least once:

> * in Spanish, "El veloz murciélago hindú comía feliz cardillo y kiwi. La cigüeña tocaba el saxofón detrás del palenque de paja." (all 27 letters and diacritics).

> * in Russian, "Съешь же ещё этих мягких французских булок, да выпей чаю" (all 33 Russian Cyrillic alphabet letters).

List of pangrams contains an exhaustive summary. Anyone care to wrap this in a simple:

public interface NationalCharacters {
  String spanish();
  String russian();
  //...
}

library?
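A minimal sketch of such a wrapper, using the pangrams quoted in this answer (the class name is made up, and the interface is repeated here so the sketch compiles on its own):

```java
// Interface from above, repeated so this sketch is self-contained.
interface NationalCharacters {
    String spanish();
    String russian();
    String polish();
}

// Hypothetical implementation backed by the pangrams quoted in this answer.
class Pangrams implements NationalCharacters {
    @Override
    public String spanish() {
        return "El veloz murciélago hindú comía feliz cardillo y kiwi. "
             + "La cigüeña tocaba el saxofón detrás del palenque de paja.";
    }

    @Override
    public String russian() {
        return "Съешь же ещё этих мягких французских булок, да выпей чаю";
    }

    @Override
    public String polish() {
        return "Zażółć gęślą jaźń";
    }

    public static void main(String[] args) {
        System.out.println(new Pangrams().polish());
    }
}
```

A test could then round-trip each pangram through an encode/decode cycle and assert it comes back unchanged.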

Solution 2 - Unit Testing

How about using the ICU test suite files? I don't know whether they are what you need, but at least they seem to have pretty complete from/to UTF mapping files: Link to the repo for ICU test files

Solution 3 - Unit Testing

I don't know of any complete text documents, but if you can start with a simple overview of all character sets, there are some files available at the [ftp.unicode.org server][1].

Here's WINDOWS-1252, for example. The first column is the hexadecimal character value, and the second is the Unicode value.

ftp://ftp.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WINDOWS/CP1252.TXT

[1]: ftp://ftp.unicode.org/Public/MAPPINGS/ "unicode.org server"
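The tab-separated two-column format makes these mapping files easy to turn into test fixtures. A minimal parsing sketch (the class name is made up; the sample line is the CP1252 entry for the Euro sign):

```java
public class MappingLine {
    public static void main(String[] args) {
        // A typical line from a unicode.org mapping file:
        // byte value, Unicode code point, and a comment, separated by tabs.
        String line = "0x80\t0x20AC\t#EURO SIGN";
        String[] cols = line.split("\t");
        int byteValue = Integer.decode(cols[0]);  // 128
        int codePoint = Integer.decode(cols[1]);  // 8364, the Euro sign
        System.out.println(byteValue + " -> U+"
                + Integer.toHexString(codePoint).toUpperCase());
    }
}
```

From such pairs you can generate a test file containing every character of the encoding, plus the expected Unicode string to compare against after decoding.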

Solution 4 - Unit Testing

Well, I used an online tool to create my test character sets from Lorem Ipsum. I believe it can help you. I don't have one that has all the different charsets on a single page.

http://generator.lorem-ipsum.info/

Solution 5 - Unit Testing

There are a few ready-to-use, comprehensive Unicode test setups available for direct download.

From w3c

Here, there's a nice test file by w3.org including maths, linguistics, Greek, Georgian, Russian, Thai, runes, and Braille, among many others, in a single file:

Coming from w3.org, it should be safe to use, shouldn't it?

Cutting out the HTML part

If you want to get the "original txt file" without risking your editor messing it up: 1) download it, 2) tail+head it, 3) check with a diff:

wget https://www.w3.org/2001/06/utf-8-test/UTF-8-demo.html
tail +8 UTF-8-demo.html | head -n -3 > UTF-8-demo.txt
diff UTF-8-demo.html UTF-8-demo.txt

This generates a UTF-8-demo.txt without human intervention and without risk of losing data.
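Once you have such a file, strict decoding makes a good fixture check in JUnit: `new String(bytes, charset)` silently replaces malformed sequences, whereas a `CharsetDecoder` configured to REPORT fails loudly. A sketch (the class and method names are made up):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class StrictUtf8Read {

    // Decode bytes as UTF-8, failing loudly instead of silently
    // substituting replacement characters for bad sequences.
    public static String decodeStrict(byte[] bytes) {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            return decoder.decode(ByteBuffer.wrap(bytes)).toString();
        } catch (CharacterCodingException e) {
            throw new IllegalArgumentException("Not valid UTF-8", e);
        }
    }

    // Convenience wrapper for reading a whole test file, e.g. UTF-8-demo.txt.
    public static String readStrict(Path path) throws IOException {
        return decodeStrict(Files.readAllBytes(path));
    }
}
```

A test that feeds the downloaded file through `readStrict` will fail immediately if the file was corrupted in transit or saved with the wrong encoding.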

More from w3c

There are many more files one level up in the directory structure, still inside the utf-8-test directory.

From github

There's a very interesting file here too with ALL printable characters (including Chinese, Braille, Arabic, etc.).

Want also non printable characters?

There are also many more test files in the same repo:

and also a generator, if you don't trust the committed file and want to generate it yourself.

My personal choice

I have decided that for my projects I'll start with two files: the specific one I pointed out from w3c and the specific one I pointed out from the GitHub repo by bits.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | Fabian Barney | View Question on Stackoverflow |
| Solution 1 - Unit Testing | Tomasz Nurkiewicz | View Answer on Stackoverflow |
| Solution 2 - Unit Testing | Daniel Teply | View Answer on Stackoverflow |
| Solution 3 - Unit Testing | Optimist | View Answer on Stackoverflow |
| Solution 4 - Unit Testing | Sandeep Nair | View Answer on Stackoverflow |
| Solution 5 - Unit Testing | Xavi Montero | View Answer on Stackoverflow |