Question

We have a WebLogic batch application which processes multiple requests from consumers at the same time. We use log4j for logging purposes. Right now we log to a single log file for all requests. It becomes tedious to debug an issue for a given request, since the logs for every request are interleaved in one file.

So the plan is to have one log file per request. The consumer sends a request ID for which processing has to be performed. In reality, there could be multiple consumers sending request IDs to our application. So the question is: how do we segregate the log files based on the request?

We cannot start and stop the production server every time, so using an overridden file appender with a date-time stamp or request ID is ruled out. That is the approach explained in the article below: http://veerasundar.com/blog/2009/08/how-to-create-a-new-log-file-for-each-time-the-application-runs/

I also tried playing around with these alternatives:

http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html

http://www.mail-archive.com/log4j-user@logging.apache.org/msg05099.html

This approach gives the desired results, but it does not work properly if multiple requests are sent at the same time. Due to concurrency issues, log entries end up in the wrong files.

I'd appreciate some help from you folks. Thanks in advance.


Solution

Here's my question on the same topic: dynamically creating & destroying logging appenders

I followed this up in a thread on the Log4j mailing list, where I discuss doing something exactly like this: http://www.qos.ch/pipermail/logback-user/2009-August/001220.html

Ceki Gülcü (the inventor of log4j) didn't think it was a good idea and suggested using Logback instead.

We went ahead and did this anyway, using a custom file appender. See my discussions above for more details.

Other tips

Note: the following is only a guess at what your problem is. As others have said, please clarify your question.

You want a JOIN ( http://dev.mysql.com/doc/refman/5.1/de/join.html ) of those tables. What you wrote is just another form of a join, i.e., it has the same effect, but the JOIN syntax was invented to make things clearer and to avoid exactly this kind of mistake. Read more about it at the link given above.

What you want to achieve can be done like this:

SELECT
    Listings.Amount, Members.City, Groups.GroupRank
FROM
    Listings
    INNER JOIN Groups ON Listings.GroupKey = Groups.Key
    INNER JOIN Members ON Members.Key = Listings.MemberKey;

Since you don't select anything from the Loans table, it is not needed in this query.

What you have is an INNER JOIN, which returns only the rows of table A that have a matching entry in table B. If you also want the rows without a match, you should use a LEFT or RIGHT JOIN instead.

This problem is handled very well by Logback. I suggest opting for it if you have the freedom.

Assuming you can, what you will need is SiftingAppender. It allows you to separate log files according to some runtime value, which means you have a wide array of options for how to split log files.

To split your files on requestId, you could do something like this:

logback.xml

<configuration>

  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
      <key>requestId</key>
      <defaultValue>unknown</defaultValue>
    </discriminator>
    <sift>
      <appender name="FILE-${requestId}" class="ch.qos.logback.core.FileAppender">
        <file>${requestId}.log</file>
        <append>false</append>
        <layout class="ch.qos.logback.classic.PatternLayout">
          <pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
        </layout>
      </appender>
    </sift>
  </appender>

  <root level="DEBUG">
    <appender-ref ref="SIFT" />
  </root>

</configuration>

As you can see (inside the discriminator element), the files used for writing logs are discriminated on requestId. That means each request will go to a file with a matching requestId. Hence, if you had two requests where requestId=1 and one request where requestId=2, you would have 2 log files: 1.log (2 entries) and 2.log (1 entry).

At this point you might wonder how to set the key. This is done by putting key-value pairs in MDC (note that key matches the one defined in logback.xml file):

RequestProcessor.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestProcessor {

    private static final Logger log = LoggerFactory.getLogger(RequestProcessor.class);

    public void process(Request request) {
        // The MDC key must match the <key> defined in logback.xml
        MDC.put("requestId", request.getId());
        log.debug("Request received: {}", request);
    }
}

And that's basically it for a simple use case. Now each time a request with a different (not yet encountered) id comes in, a new file will be created for it.
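One caveat worth noting for a WebLogic batch application: MDC is backed by a thread-local map, and application-server threads are pooled and reused across requests. If the requestId is not cleared after processing, the next request handled by the same thread can inherit the stale value and its log entries land in the wrong file. The sketch below illustrates the clear-after-use pattern with a minimal thread-local stand-in for MDC, so it runs without any logging dependency (the class name MdcCleanupSketch is made up; in real code you would call org.slf4j.MDC.remove("requestId") in a finally block):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for org.slf4j.MDC: a thread-local map of context values.
public class MdcCleanupSketch {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CONTEXT.get().put(key, value); }
    static String get(String key) { return CONTEXT.get().get(key); }
    static void remove(String key) { CONTEXT.get().remove(key); }

    public static void main(String[] args) {
        // First request processed on this (pooled) thread.
        put("requestId", "1");
        // ... request 1 is processed; its entries would go to 1.log ...
        remove("requestId"); // without this, the value leaks to the next request

        // The same thread now picks up a second request that never sets the
        // key: the discriminator falls back to its defaultValue ("unknown")
        // instead of silently reusing "1".
        String id = get("requestId");
        System.out.println(id == null ? "unknown" : id);
    }
}
```

Wrapping the put/remove pair in try/finally around the processing logic guarantees the cleanup runs even when the request fails with an exception.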

License: CC-BY-SA with attribution
Not affiliated with StackOverflow