Question

I'm developing an email viewer that reads .eml files and displays the message in a browser control. I found a code snippet that can display 7bit messages as well as encoded ones (Content-Transfer-Encoding: quoted-printable / Content-Transfer-Encoding: base64). What I need is to decode 8bit messages.

    private static AlternateView ImportText(StringReader r, string encoding, System.Net.Mime.ContentType contentType)
    {
        string line = string.Empty;
        StringBuilder b = new StringBuilder();
        while ((line = r.ReadLine()) != null)
        {
            switch (encoding)
            {
                case "quoted-printable":
                    if (line.EndsWith("="))
                    {
                        b.Append(DecodeQuotedPrintables(line.TrimEnd('='), contentType.CharSet));
                    }
                    else
                    {
                        b.Append(DecodeQuotedPrintables(line, contentType.CharSet) + "\n");
                    }
                    break;
                case "base64":
                    b.Append(DecodeBase64(line, contentType.CharSet));
                    break;

                case "8bit": // I need an 8bit decoder here!!!
                    b.Append(IneedAn8bitDecoderHere(line, contentType.CharSet));
                    break;
                default:
                    b.Append(line);
                    break;
            }
        }

        AlternateView returnValue = AlternateView.CreateAlternateViewFromString(b.ToString(), null, contentType.MediaType);
        returnValue.TransferEncoding = TransferEncoding.QuotedPrintable;
        return returnValue;
    }

I googled for an 8bit decoder but couldn't find any. Do I really need an 8bit decoder here and do you know a good working one?

UPDATE:

Related headers:

 MIME-Version: 1.0
 Content-Type: text/plain; charset="koi8-r";
 Content-Transfer-Encoding: 8bit

Message body as my code sees it (the string line):

 ����������� �� ����, � �����  ��� � ������        ��������� �������  �   ��������  �������� ��   ������� 

What Outlook displays in the real world:

 Фантастично но факт, я снова  как и раньше сделалась статной  и   красивой  примерно за  месяцок 

I think I don't need the case "8bit": branch at all. As SLaks mentioned, I need to load the mail source into a byte array instead of a string at the very beginning of the process. Reading the charset= parameter in the mail header from the byte array will give the appropriate code page.
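A minimal sketch of that idea (the helper name and the regex are mine, just for illustration, not part of the original snippet): read the raw bytes, locate charset= in the header section (headers are plain ASCII even when the body is 8bit), then decode the whole message with that code page.

    using System;
    using System.IO;
    using System.Text;
    using System.Text.RegularExpressions;

    static string ReadEmlWithDeclaredCharset(string path)
    {
        byte[] raw = File.ReadAllBytes(path);

        // Header lines are 7-bit ASCII, so this lossy decode is safe for
        // locating the charset parameter (8bit body bytes become '?' here,
        // but we only read the match, never the body, from this string).
        string ascii = Encoding.ASCII.GetString(raw);
        Match m = Regex.Match(ascii, @"charset=""?([A-Za-z0-9_\-]+)""?",
                              RegexOptions.IgnoreCase);

        Encoding enc = Encoding.Default;    // fall back to the local codepage
        if (m.Success)
        {
            try { enc = Encoding.GetEncoding(m.Groups[1].Value); }
            catch (ArgumentException) { /* unknown charset name: keep fallback */ }
        }

        // Decode the original bytes (not the lossy ASCII string!) with the
        // declared encoding.
        return enc.GetString(raw);
    }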


Solution

This is how I solved the problem:

// My previous method:
string file = File.ReadAllText("koi8-r.eml");

// Correct method:    
Encoding efile = detectTextEncoding("koi8-r.eml", out file);

txtRaw.Text = file;

Link: detectTextEncoding() (the full function is reproduced below):

// Function to detect the encoding for UTF-7, UTF-8/16/32 (bom, no bom, little
// & big endian), and local default codepage, and potentially other codepages.
// 'taster' = number of bytes to check of the file (to save processing). Higher
// value is slower, but more reliable (especially UTF-8 with special characters
// later on may appear to be ASCII initially). If taster = 0, then taster
// becomes the length of the file (for maximum reliability). 'text' is simply
// the string with the discovered encoding applied to the file.
public Encoding detectTextEncoding(string filename, out String text, int taster = 1000)
{
    byte[] b = File.ReadAllBytes(filename);

    //////////////// First check the low-hanging fruit by checking if a
    //////////////// BOM/signature exists (sourced from http://www.unicode.org/faq/utf_bom.html#bom4)
    if (b.Length >= 4 && b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF) { text = Encoding.GetEncoding("utf-32BE").GetString(b, 4, b.Length - 4); return Encoding.GetEncoding("utf-32BE"); }  // UTF-32, big-endian
    else if (b.Length >= 4 && b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00) { text = Encoding.UTF32.GetString(b, 4, b.Length - 4); return Encoding.UTF32; }    // UTF-32, little-endian
    else if (b.Length >= 2 && b[0] == 0xFE && b[1] == 0xFF) { text = Encoding.BigEndianUnicode.GetString(b, 2, b.Length - 2); return Encoding.BigEndianUnicode; }     // UTF-16, big-endian
    else if (b.Length >= 2 && b[0] == 0xFF && b[1] == 0xFE) { text = Encoding.Unicode.GetString(b, 2, b.Length - 2); return Encoding.Unicode; }              // UTF-16, little-endian
    else if (b.Length >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF) { text = Encoding.UTF8.GetString(b, 3, b.Length - 3); return Encoding.UTF8; } // UTF-8
    else if (b.Length >= 3 && b[0] == 0x2B && b[1] == 0x2F && b[2] == 0x76) { text = Encoding.UTF7.GetString(b, 3, b.Length - 3); return Encoding.UTF7; } // UTF-7

    //////////// If the code reaches here, no BOM/signature was found, so now
    //////////// we need to 'taste' the file to see if we can manually discover
    //////////// the encoding. A high taster value is desired for UTF-8.
    if (taster == 0 || taster > b.Length) taster = b.Length;    // Taster size can't be bigger than the filesize, obviously.

    // Some text files are encoded in UTF-8 but have no BOM/signature. Hence
    // the below manually checks for a UTF-8 pattern. This code is based on
    // the top answer at: https://stackoverflow.com/questions/6555015/check-for-invalid-utf8
    // For our purposes, an unnecessarily strict (and terser/slower)
    // implementation is shown at: https://stackoverflow.com/questions/1031645/how-to-detect-utf-8-in-plain-c
    // For the below, false positives should be exceedingly rare (and would
    // be either slightly malformed UTF-8 (which would suit our purposes
    // anyway) or 8-bit extended ASCII/UTF-16/32 at a vanishingly long shot).
    int i = 0;
    bool utf8 = false;
    while (i < taster - 4)
    {
        if (b[i] <= 0x7F) { i += 1; continue; }     // If all characters are below 0x80, then it is valid UTF-8, but UTF-8 is not 'required' (and therefore the text is more desirable to be treated as the default codepage of the computer). Hence, there's no "utf8 = true;" code unlike the next three checks.
        if (b[i] >= 0xC2 && b[i] <= 0xDF && b[i + 1] >= 0x80 && b[i + 1] < 0xC0) { i += 2; utf8 = true; continue; }     // 2-byte sequence
        if (b[i] >= 0xE0 && b[i] <= 0xEF && b[i + 1] >= 0x80 && b[i + 1] < 0xC0 && b[i + 2] >= 0x80 && b[i + 2] < 0xC0) { i += 3; utf8 = true; continue; }      // 3-byte sequence (lead bytes 0xE0-0xEF; 0xF0 belongs to the 4-byte check below)
        if (b[i] >= 0xF0 && b[i] <= 0xF4 && b[i + 1] >= 0x80 && b[i + 1] < 0xC0 && b[i + 2] >= 0x80 && b[i + 2] < 0xC0 && b[i + 3] >= 0x80 && b[i + 3] < 0xC0) { i += 4; utf8 = true; continue; }       // 4-byte sequence
        utf8 = false; break;
    }
    if (utf8 == true)
    {
        text = Encoding.UTF8.GetString(b);
        return Encoding.UTF8;
    }

    // The next check is a heuristic attempt to detect UTF-16 without a BOM.
    // We simply look for zeroes in odd or even byte positions, and if a
    // certain threshold is reached, the text is 'probably' UTF-16.
    double threshold = 0.1; // proportion of bytes (stepping by 2) which must be zero to be diagnosed as UTF-16. 0.1 = 10%
    int count = 0;
    for (int n = 0; n < taster; n += 2) if (b[n] == 0) count++;
    if (((double)count) / taster > threshold) { text = Encoding.BigEndianUnicode.GetString(b); return Encoding.BigEndianUnicode; }
    count = 0;
    for (int n = 1; n < taster; n += 2) if (b[n] == 0) count++;
    if (((double)count) / taster > threshold) { text = Encoding.Unicode.GetString(b); return Encoding.Unicode; } // (little-endian)

    // Finally, a long shot - let's see if we can find "charset=xyz" or
    // "encoding=xyz" to identify the encoding:
    for (int n = 0; n < taster - 9; n++)
    {
        if (
            ((b[n + 0] == 'c' || b[n + 0] == 'C') && (b[n + 1] == 'h' || b[n + 1] == 'H') && (b[n + 2] == 'a' || b[n + 2] == 'A') && (b[n + 3] == 'r' || b[n + 3] == 'R') && (b[n + 4] == 's' || b[n + 4] == 'S') && (b[n + 5] == 'e' || b[n + 5] == 'E') && (b[n + 6] == 't' || b[n + 6] == 'T') && (b[n + 7] == '=')) ||
            ((b[n + 0] == 'e' || b[n + 0] == 'E') && (b[n + 1] == 'n' || b[n + 1] == 'N') && (b[n + 2] == 'c' || b[n + 2] == 'C') && (b[n + 3] == 'o' || b[n + 3] == 'O') && (b[n + 4] == 'd' || b[n + 4] == 'D') && (b[n + 5] == 'i' || b[n + 5] == 'I') && (b[n + 6] == 'n' || b[n + 6] == 'N') && (b[n + 7] == 'g' || b[n + 7] == 'G') && (b[n + 8] == '='))
            )
        {
            if (b[n + 0] == 'c' || b[n + 0] == 'C') n += 8; else n += 9;
            if (b[n] == '"' || b[n] == '\'') n++;
            int oldn = n;
            while (n < taster && (b[n] == '_' || b[n] == '-' || (b[n] >= '0' && b[n] <= '9') || (b[n] >= 'a' && b[n] <= 'z') || (b[n] >= 'A' && b[n] <= 'Z')))
            { n++; }
            byte[] nb = new byte[n - oldn];
            Array.Copy(b, oldn, nb, 0, n - oldn);
            try
            {
                string internalEnc = Encoding.ASCII.GetString(nb);
                text = Encoding.GetEncoding(internalEnc).GetString(b);
                return Encoding.GetEncoding(internalEnc);
            }
            catch { break; }    // If C# doesn't recognize the name of the encoding, break.
        }
    }

    // If all else fails, the encoding is probably (though certainly not
    // definitely) the user's local codepage! One might present the user with a
    // list of alternative encodings as shown here: https://stackoverflow.com/questions/8509339/what-is-the-most-common-encoding-of-each-language
    // A full list can be found using Encoding.GetEncodings();
    text = Encoding.Default.GetString(b);
    return Encoding.Default;
}

Other tips

You're potentially going to run into a problem with your implementation because of the StringReader. Somewhere along the line, someone has to turn the raw bytes into a string. Unless you're doing something special before that point, .NET will do it for you, and it will usually use the computer's defaults.
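For example (a rough sketch; the file name is made up), compare letting .NET convert the bytes for you with picking the encoding yourself before any string ever exists:

    using System.IO;
    using System.Text;

    byte[] raw = File.ReadAllBytes("message.eml");

    // Lossy for koi8-r content: with no BOM, ReadAllText decodes as UTF-8,
    // so the koi8-r bytes are already mangled into U+FFFD by the time you
    // see the string.
    string lossy = File.ReadAllText("message.eml");

    // Faithful: the raw bytes are still intact, and you choose the code page.
    string faithful = Encoding.GetEncoding("koi8-r").GetString(raw);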

The problem with the 8-bit era was that the 8th bit had dozens of interpretations (if not more), and there's no reliable way to tell from the bytes alone which one to use. If you decode with ASCII, any byte with the 8th bit set gets converted to ASCII 63 ('?'). If you decode with UTF-8, any byte with the 8th bit set starts a multi-byte sequence that tries to consume the following bytes (see Wikipedia for the details), and when that fails you get U+FFFD (65533), the replacement character, which is what you're seeing. If you manually specify the encoding, such as the koi8-r you're being given, that 8th bit is parsed properly. Below is sample code that shows this off. Instead of dumping to the console I'm using a message box, but you can switch that, as long as you remember to change your console's output encoding.

var bytes = new byte[] { 226 };
var s1 = System.Text.Encoding.ASCII.GetString(bytes);                 // "?"  (byte 226 is outside ASCII)
var s2 = System.Text.Encoding.UTF8.GetString(bytes);                  // "\uFFFD" (a lone lead byte is invalid UTF-8)
var s3 = System.Text.Encoding.GetEncoding("koi8-r").GetString(bytes); // "Б"

MessageBox.Show(String.Format("{0} {1} {2}", s1, s2, s3));
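Two side notes, in case they bite. On .NET Core / .NET 5+ the koi8-r code page only exists after you register the code-pages provider (on .NET Framework it is built in), and if you do dump to the console, switch the console's encoding first:

    // Needed on .NET Core / .NET 5+ only (System.Text.Encoding.CodePages
    // package); without it, Encoding.GetEncoding("koi8-r") throws.
    System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);

    // If writing to the console instead of a MessageBox:
    System.Console.OutputEncoding = System.Text.Encoding.UTF8;
    System.Console.WriteLine(System.Text.Encoding.GetEncoding("koi8-r").GetString(new byte[] { 226 })); // Б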

To summarize: if you're getting the UTF-8 replacement character (which you are), the original value of those bytes has already been lost, and you need to fix it earlier in the pipeline. Whatever converts the bytes into a string needs to take Content-Type: text/plain; charset="koi8-r" into account; you can't repair it after the fact.
