It appears that auto-decompression is supported, but the implementation relies solely on the headers sent by the server. In my particular case I'm trying to crawl a crappy website that doesn't send the correct Content-Encoding even though the response is compressed. My guess is that the MS implementation, in typical MS fashion, doesn't attempt to go beyond the header check by, say, sniffing for the GZip magic number in the first two bytes.
In any case, for anyone who might find themselves in this spot in the future, here's my solution:
var request = (HttpWebRequest)WebRequest.Create("http://www.example.com/test.xml.gz");
request.Accept = "*/*";
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

var response = (HttpWebResponse)request.GetResponse();
Stream mem = new MemoryStream();
using (var stream = response.GetResponseStream())
    stream.CopyTo(mem);

// GetBuffer() can include unused trailing bytes; ToArray() copies only the actual data.
var bytes = ((MemoryStream)mem).ToArray();

// GZip streams always start with the magic bytes 0x1F 0x8B.
if (bytes.Length >= 2 && bytes[0] == 0x1F && bytes[1] == 0x8B)
{
    mem.Position = 0;
    mem = new GZipStream(mem, CompressionMode.Decompress);
    var uncompressed = new MemoryStream(bytes.Length * 20);
    mem.CopyTo(uncompressed);
    mem.Dispose();
    mem = uncompressed;
}

// FileMode.Create overwrites an existing file, so there's no need to delete it first.
using (var file = new FileStream("lala.txt", FileMode.Create))
{
    mem.Position = 0;
    mem.CopyTo(file);
}
(Note that I'm not disposing of everything correctly here; this is just example code.)
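For what it's worth, the same magic-byte sniffing can be done with HttpClient on newer frameworks. This is just a sketch of the idea (untested against the site in question, and the URL is the same placeholder as above):

```csharp
using System.IO;
using System.IO.Compression;
using System.Net.Http;

class Sniffer
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Download the raw bytes without trusting Content-Encoding.
            var bytes = client.GetByteArrayAsync("http://www.example.com/test.xml.gz").Result;

            // Check for the GZip magic number (0x1F 0x8B) instead of the headers.
            if (bytes.Length >= 2 && bytes[0] == 0x1F && bytes[1] == 0x8B)
            {
                using (var gz = new GZipStream(new MemoryStream(bytes), CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    gz.CopyTo(output);
                    bytes = output.ToArray();
                }
            }

            File.WriteAllBytes("lala.txt", bytes);
        }
    }
}
```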