Question

I have a file that is ANSI encoded, yet it shows Arabic letters. The text file was generated by some program I have no information about, but it seems as though there is some kind of internal encoding (if I may call it that, and if such a thing is possible) that makes the Arabic letters appear.

Is there such a thing? If not, how can the ANSI file show the Arabic letters?

*If possible, explain in Java code.


Edit 01

When I open it in Notepad++, it shows that the page encoding is ANSI. Please check this screenshot:

http://www.4shared.com/file/221862075/e8705951/text-Windows.html


Edit 02

You can check the file here:

http://www.4shared.com/file/221853641/3fa1af8c/data.html


Solution

I tried opening the file in both Firefox and Opera. I had to set the character encoding to Arabic Windows-1256 to get it to display correctly in both browsers, so the file's encoding is most likely to be that.

NOTE: I originally posted this as a comment, but was asked to make it an answer.
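Since the question asked for Java: here is a minimal sketch of reading the file with that charset, assuming the file really is Windows-1256 and using "data.txt" as a placeholder path.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.Charset;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ReadArabicFile {
        public static void main(String[] args) throws IOException {
            // "windows-1256" is the Arabic Windows code page; "data.txt" is a placeholder path.
            Charset cp1256 = Charset.forName("windows-1256");
            try (BufferedReader reader = Files.newBufferedReader(Paths.get("data.txt"), cp1256)) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Once decoded, the Arabic letters are ordinary Java (UTF-16) strings.
                    System.out.println(line);
                }
            }
        }
    }

If the output is garbled, try other candidate charsets (for example "ISO-8859-6" or "UTF-8") until the text reads correctly.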

OTHER TIPS

How do you know that it's ANSI encoded? If it's not a multi-byte encoding like UTF-8, my guess would be that it's encoded using an Arabic code page such as Windows-1256.

You could look at the file in a hex editor, find out which numeric values the Arabic characters have, and from that work out which encoding / code page it was created with.
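You can do roughly the same inspection in Java by dumping the first bytes of the file in hex. A small sketch, again with "data.txt" as a placeholder path:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class HexDump {
        public static void main(String[] args) throws IOException {
            byte[] bytes = Files.readAllBytes(Paths.get("data.txt")); // placeholder path
            int limit = Math.min(bytes.length, 64);                   // only the first 64 bytes
            for (int i = 0; i < limit; i++) {
                System.out.printf("%02X ", bytes[i]);
                if ((i + 1) % 16 == 0) {
                    System.out.println();                             // 16 bytes per row
                }
            }
        }
    }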

Short answer: your text file is most likely not "ANSI"-encoded, but UTF-8.

Long answer:

First, the term "ANSI" (on Windows) doesn't mean one fixed encoding; its meaning depends on your language settings. For example, in Western Europe and the USA it is usually Windows-1252 (a variant of ISO/IEC 8859-1, also known as Latin-1), in Japan it is Shift JIS, and in Arabic locales it is Windows-1256 (a close relative of ISO/IEC 8859-6).

If you are using a non-Arabic version of Windows and have not changed your language settings, and you can still see Arabic letters when you open the file in Notepad, then it is almost certainly not in any of these ANSI encodings. Instead, it is probably Unicode.

Note that I don't mean "UNICODE", which on Windows usually means UTF-16LE; it could be UTF-8 as well. Both are encodings that can represent all 100,000+ characters currently defined in Unicode, but they do it in different ways. Both are variable-length encodings, meaning that not all characters are encoded using the same number of bytes.

In UTF-8, each character is encoded as one to four bytes. The encoding has been chosen such that ASCII characters are encoded in one byte.
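You can see this directly in Java with String.getBytes. A small illustration (the example characters are my own choice, not from the file):

    import java.nio.charset.StandardCharsets;

    public class Utf8Lengths {
        public static void main(String[] args) {
            System.out.println("A".getBytes(StandardCharsets.UTF_8).length);            // 1 byte (ASCII)
            System.out.println("ش".getBytes(StandardCharsets.UTF_8).length);            // 2 bytes (Arabic SHEEN)
            System.out.println("€".getBytes(StandardCharsets.UTF_8).length);            // 3 bytes (Euro sign)
            System.out.println("\uD83D\uDE00".getBytes(StandardCharsets.UTF_8).length); // 4 bytes (outside the BMP)
        }
    }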

In UTF-16, each character is encoded as either two or four bytes. This encoding was originally invented when Unicode had fewer than 64K characters, so every character could be encoded in a single 16-bit word. Later, when it became clear that Unicode would have to grow beyond the 64K limit, a scheme was added in which pairs of words in the range 0xD800-0xDFFF (surrogate pairs) represent the characters outside of the first 64K (minus 0x800) characters.
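Java's char type is a UTF-16 code unit, so surrogate pairs are easy to observe. A quick sketch (again with characters chosen for illustration):

    public class SurrogatePairs {
        public static void main(String[] args) {
            String sheen = "ش";            // U+0634, inside the first 64K (the BMP)
            String emoji = "\uD83D\uDE00"; // U+1F600, outside the BMP

            System.out.println(sheen.length());             // 1 -> one 16-bit code unit
            System.out.println(emoji.length());             // 2 -> a surrogate pair
            System.out.println(emoji.codePointCount(0, 2));  // 1 -> still a single Unicode character
            System.out.printf("%04X %04X%n", (int) emoji.charAt(0), (int) emoji.charAt(1)); // D83D DE00
        }
    }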

To see what's actually in the file, open it in a hex editor (a small Java sketch performing the same checks follows the list):

  • If the first two bytes are FF FE, then it is likely UTF-16LE (little endian)
  • If the first two bytes are FE FF, then it is likely UTF-16BE (big endian, unlikely on Windows)
  • If the first three bytes are EF BB BF, then it is likely UTF-8
  • If you see a lot of 00 bytes, it is likely UTF-16 (or UTF-32, if you see runs of three 00 bytes)
  • If Arabic characters occupy a single byte, it is likely ISO-8859-6 (e.g. ش would be D4).
  • If Arabic characters occupy multiple bytes, it is likely UTF-8 (e.g. ش would be D8 B4).
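Here is the promised sketch: it reads the first bytes in Java and checks for the byte-order marks listed above ("data.txt" is a placeholder path; a file without a BOM still needs the manual inspection described above).

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class BomSniffer {
        public static void main(String[] args) throws IOException {
            byte[] b = Files.readAllBytes(Paths.get("data.txt")); // placeholder path

            if (b.length >= 3 && (b[0] & 0xFF) == 0xEF && (b[1] & 0xFF) == 0xBB && (b[2] & 0xFF) == 0xBF) {
                System.out.println("UTF-8 BOM");
            } else if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE) {
                System.out.println("UTF-16LE BOM");
            } else if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF) {
                System.out.println("UTF-16BE BOM");
            } else {
                System.out.println("No BOM - likely a single-byte code page or BOM-less UTF-8");
            }
        }
    }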

Is there such a thing?

No.

If not, how can the ANSI file show the Arabic letters?

It’s not a Windows-ANSI encoded file. More likely, it uses a variable-width encoding, most probably UTF-8: the first 128 character positions of UTF-8 are identical to US-ASCII (in fact, it was designed that way), and therefore also to the lower half of every Windows ANSI code page, which is why such a file can still look partly readable when opened as ANSI.
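A short Java check of that compatibility claim: plain ASCII text produces exactly the same bytes under US-ASCII, Windows-1252, and UTF-8 (a sketch with an arbitrary sample string).

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class AsciiCompatibility {
        public static void main(String[] args) {
            String ascii = "Hello, world!"; // plain ASCII sample text
            byte[] asAscii  = ascii.getBytes(StandardCharsets.US_ASCII);
            byte[] asUtf8   = ascii.getBytes(StandardCharsets.UTF_8);
            byte[] asCp1252 = ascii.getBytes(Charset.forName("windows-1252"));

            // All three encodings produce identical bytes for the ASCII range.
            System.out.println(Arrays.equals(asAscii, asUtf8));    // true
            System.out.println(Arrays.equals(asAscii, asCp1252));  // true
        }
    }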

EDIT: We have to thank Microsoft for this confusion. “ANSI” isn’t well-specified when it comes to encodings. Usually it’s meant to stand for the Windows default encoding with codepage 1252 (“Windows-1252”), which happens to correspond to “Western” alphabets derived from Latin.

However, in other countries the default encoding used by Windows (in older Windows versions; newer versions can be switched to UTF-8) is not Windows-1252 but a different encoding, which is then also called “ANSI”. In this case, it is code page 1256.
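Java exposes this platform default, so you can see which “ANSI” code page your own machine would use. A one-line sketch (note that on Java 18 and later the JVM default is UTF-8 regardless of the Windows code page):

    import java.nio.charset.Charset;

    public class DefaultCharset {
        public static void main(String[] args) {
            // Prints e.g. windows-1252 on a Western Windows, windows-1256 on an Arabic one,
            // or UTF-8 where that is the configured (or JVM) default.
            System.out.println(Charset.defaultCharset());
        }
    }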

An ANSI code page is a single-byte encoding (at most 256 code points), and the Western default (Windows-1252) does not contain Arabic letters. I think the file uses a different encoding.

Answering your edit: the problem appears to be with Notepad++'s detection, because what is being displayed is clearly beyond the capabilities of the ANSI character set it reports.

First I downloaded your file and tried to use Vim to check its encoding; it didn't seem to know, and on a second machine it said latin1, which could be similar to what happened in Notepad++ (a generic answer).
So I ran file data.txt, and the output was this:

data.txt: ISO-8859 text, with CRLF line terminators

Hope this helps.

EDIT:
Checking the file in a browser (as in the accepted answer) showed that this answer is incorrect.

ISO-8859-4 and ISO-8859-13 could display the text without errors, but the characters were not Arabic.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow