Question

I'm trying to set up logging for a Windows Azure service.
I used NLog as described here and got it working, but now I want to experiment with different settings. Currently my diagnostics.wadcfg looks like this:

<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
  <DiagnosticInfrastructureLogs />
  <Directories>
    <IISLogs container="wad-iis-logfiles" />
    <CrashDumps container="wad-crash-dumps" />
  </Directories>
  <Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M" scheduledTransferLogLevelFilter="Verbose" />
</DiagnosticMonitorConfiguration>

I found out that one minute is the minimum value for scheduledTransferPeriod, but that's extremely inconvenient during development, because I have to wait a minute after every change I make to the logging before I can test it. Is there a way to reduce this time? Or am I doing something wrong?


The solution

No, you aren't doing anything wrong. You can try setting PT10S or something similar, but I believe it will just get rounded up to a minute. The diagnostics agent on each instance flushes data from its buffers into the storage account, and I don't think it will do so at intervals of less than one minute. This may be frustrating for development or testing, but for real production runs, setting this interval very low can have a significant impact on the machine's performance. The system wasn't designed to pump information out that quickly.
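To illustrate, here is a sketch of the same diagnostics.wadcfg with a sub-minute period requested; as noted above, expect the agent to round this up to one minute in practice:

```xml
<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
  <!-- PT10S requests a 10-second scheduled transfer; the agent will likely round it up to PT1M -->
  <Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT10S" scheduledTransferLogLevelFilter="Verbose" />
</DiagnosticMonitorConfiguration>
```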

One option, since you use NLog, is a target that writes directly to Windows Azure Table storage. Then, as you test, you can look at the table for your values. Some folks do this in production as well, rather than using the log-transfer mechanism. Of course, you are trading an occasional bulk transfer for something that could be very chatty, so make sure you think through the transaction count and overhead of using this in production. One upside of going straight to table storage is that if an instance goes down between flushes, you don't risk losing the data that was still sitting in the buffer.
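As a sketch, an NLog configuration along these lines could route entries straight to table storage. The `AzureTableStorage` target type, its attribute names, and the `NLog.Extensions.AzureTableStorage` assembly are assumptions based on a community extension package, not part of NLog itself, so check the package you install for its actual names:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <!-- hypothetical community package providing a table-storage target -->
    <add assembly="NLog.Extensions.AzureTableStorage" />
  </extensions>
  <targets>
    <!-- each log event becomes a row in the named table -->
    <target name="azureTable" xsi:type="AzureTableStorage"
            ConnectionString="UseDevelopmentStorage=true"
            TableName="LogEntries" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="azureTable" />
  </rules>
</nlog>
```

With this in place, you can query the table in the storage emulator (or a storage account) immediately after logging, instead of waiting for the scheduled transfer.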

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow