Problem

Quote from ObjectRocket Support

Hi, Fyodor! The journalCompressor setting is specifically related to the journal files. Journals are typically not compressed because compressing them causes additional latency when flushing the data from the WiredTiger memory cache to the disk. This latency trickles down to the write concern. journalCompressor: none is the default value in our configurations.

ObjectRocket seems to understand something about DBaaS

ObjectRocket is a cloud database (DBaaS) company based in Austin, Texas, specializing in NoSQL datastores including MongoDB and Redis. In 2013, the company was acquired by Rackspace.

What do you think about setting journalCompressor to none rather than snappy? Is this a valid configuration for a DBaaS provider, and is it the best choice for most use cases?

storage:
    dbPath: "/data/mongodb"
    journal:
        enabled: true
    engine: "wiredTiger"
    wiredTiger:
        engineConfig:
            journalCompressor: none
            directoryForIndexes: "/indexes/mongodb/"
        collectionConfig:
            blockCompressor: snappy
        indexConfig:
            prefixCompression: true

Solution

What do you think about setting journalCompressor to none rather than snappy? Is this a valid configuration for a DBaaS provider, and is it the best choice for most use cases?

In general I would leave MongoDB settings at the default unless variations have proven to be beneficial for your common workloads (i.e. through actual testing and measurement). Snappy compression/decompression typically does not have much overhead and I/O is often more of a resource constraint than CPU. The default settings are intended to be "best for most use cases" and are more widely tested/used in MongoDB deployments and continuous integration tests.

The size limit on WiredTiger journal files is 100MB; when the journal file size limit is reached WiredTiger creates a new journal file and syncs the previous journal file to disk. With the default journalCompressor setting of snappy individual journal files can contain more data than with no compression. Without compression, more journal files will be created and there will be more journal I/O.
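You can observe this rollover behavior directly on a running deployment. A quick sketch (the path assumes the dbPath from the question; adjust for your environment):

```shell
# WiredTiger writes journal files under <dbPath>/journal, starting a new
# WiredTigerLog.* file once the current one reaches the size limit.
ls -lh /data/mongodb/journal/

# Total on-disk journal size; with journalCompressor: none this will tend
# to be larger for the same workload than with snappy.
du -sh /data/mongodb/journal/
```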

The right tradeoff for CPU vs I/O usage will vary based on your deployments and workloads. If your deployments have CPU to spare, you could also consider testing zlib compression (expecting better compression but more CPU usage).
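Trialing zlib is a one-line change to the engineConfig section of the configuration from the question (a sketch mirroring that config; values and paths are the questioner's, not recommendations):

```
storage:
    dbPath: "/data/mongodb"
    engine: "wiredTiger"
    wiredTiger:
        engineConfig:
            journalCompressor: zlib   # default is snappy; none disables compression
```

As with any such change, compare journal I/O and CPU usage under a representative workload before and after, rather than assuming the tradeoff goes one way.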

If you are concerned about I/O contention between the journal and data files in your dbPath or want to do finer tuning given journal writes are sequential rather than random, another option would be to symlink the journal directory to a separate mount point. This approach may affect your backup strategy (e.g. if you are relying on filesystem snapshots), however since you are already using the directoryForIndexes option I suspect you are already aware of this caveat.
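The symlink approach can be sketched as follows. This is an example only: the mount point is hypothetical, and mongod must be stopped before the journal directory is moved.

```shell
# Sketch: relocate the WiredTiger journal to a dedicated disk.
# Stop mongod first (e.g. systemctl stop mongod) before moving anything.

DBPATH=/data/mongodb                       # dbPath from the question
JOURNAL_DISK=/journal-disk/mongodb-journal # hypothetical separate mount point

# Move the existing journal directory onto the dedicated disk,
# then leave a symlink in its place so mongod still finds it under dbPath.
mv "$DBPATH/journal" "$JOURNAL_DISK"
ln -s "$JOURNAL_DISK" "$DBPATH/journal"

# Restart mongod; no configuration change is needed, since mongod
# follows the symlink at <dbPath>/journal.
```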

For more configuration suggestions for production deployments, see the Production Notes in the MongoDB manual.

License: CC-BY-SA with attribution.
Not affiliated with dba.stackexchange.