MongoDB crashed with "Invalid access at address" - segmentation fault signal 11 - version 2.6

StackOverflow https://stackoverflow.com/questions/22947465


Question

I am trying to insert documents into MongoDB. My current version is the 2.6 production release. My application is able to insert a few documents, but after some time it starts giving the error below, and I get the same error every time. I am running mongod from the command prompt. I was getting the same issue with the 2.6.0 rc and 2.6.1 rc, but it was working fine in 2.5.4. Below is the log trace.

2014-04-08T20:04:01.373+0000 [conn12] command deadlinedb.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:40 reslen:40 18ms
2014-04-08T20:04:01.612+0000 [conn15] update deadlinedb.DeadlineSettings query: { _id: "history_log" } update: { $push: { Entries: "2014/04/08 15:04:00 vibhu MOBILE-019 (MOBILE-019\Vibhu): Submitted new job: (tyler.jordan,"Among assumed stumbles v051",92:none:none,2014/04/08 15:04:00,)." }, $inc: { EntryCount: 1 } } nscanned:1 nscannedObjects:1 nMatched:1 nModified:1 keyUpdates:0 numYields:0 locks(micros) w:5621 21ms
2014-04-08T20:04:01.612+0000 [conn15] command deadlinedb.$cmd command: update { update: "DeadlineSettings", ordered: true, updates: [ { q: { _id: "history_log" }, u: { $push: { Entries: "2014/04/08 15:04:00 vibhu MOBILE-019 (MOBILE-019\Vibhu): Submitted new job: (tyler.jordan,"Among assumed stumbles v051",92:none:none,2014/04/08 15:04:00,)." }, $inc: { EntryCount: 1 } }, multi: false, upsert: true } ] } keyUpdates:0 numYields:0  reslen:55 22ms
2014-04-08T20:04:01.681+0000 [conn12] command deadlinedb.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:23 reslen:40 16ms
2014-04-08T20:04:01.727+0000 [conn12] update deadlinedb.LimitGroups query: { _id: "534456307f87b2196c09841b" } update: { $set: { IsSub: false }, $currentDate: { LastWriteTime: { $type: "date" } } } nscanned:1 nscannedObjects:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 locks(micros) w:161 17ms
2014-04-08T20:04:01.727+0000 [conn12] command deadlinedb.$cmd command: update { update: "LimitGroups", ordered: true, updates: [ { q: { _id: "534456307f87b2196c09841b" }, u: { $set: { IsSub: false }, $currentDate: { LastWriteTime: { $type: "date" } } }, multi: false, upsert: false } ] } keyUpdates:0 numYields:0  reslen:55 17ms
2014-04-08T20:04:02.676+0000 [conn15] command deadlinedb.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:21 reslen:40 11ms
2014-04-08T20:04:02.676+0000 [conn12] update deadlinedb.Jobs query: { _id: "534456317f87b2196c098426" } update: { $set: { IsSub: false }, $currentDate: { LastWriteTime: { $type: "date" } } } nscanned:1 nscannedObjects:1 nMatched:1 nModified:1 keyUpdates:1 numYields:0 locks(micros) w:89 17ms
2014-04-08T20:04:02.676+0000 [conn12] command deadlinedb.$cmd command: update { update: "Jobs", ordered: true, updates: [ { q: { _id: "534456317f87b2196c098426" }, u: { $set: { IsSub: false }, $currentDate: { LastWriteTime: { $type: "date" } } }, multi: false, upsert: false } ] } keyUpdates:0 numYields:0  reslen:55 17ms
2014-04-08T20:04:03.165+0000 [conn12] insert deadlinedb.JobTasks query: { _id: "534456327f87b2196c09842b_11", JobID: "534456327f87b2196c09842b", TaskID: 11, Frames: "11-11", Slave: "", Stat: 2, Errs: 0, Start: new Date(-62135596800000), StartRen: new Date(-62135596800000), Comp: new Date(-62135596800000), WtgStrt: false } ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:150875 151ms
2014-04-08T20:04:03.178+0000 [conn12] command deadlinedb.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:259 reslen:40 166ms
2014-04-08T20:04:03.526+0000 [conn15] insert deadlinedb.JobTasks query: { _id: "534456327f87b2196c098430_0", JobID: "534456327f87b2196c098430", TaskID: 0, Frames: "0-9", Slave: "", Stat: 2, Errs: 0, Start: new Date(-62135596800000), StartRen: new Date(-62135596800000), Comp: new Date(-62135596800000), WtgStrt: false } ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:12295 12ms
2014-04-08T20:04:03.529+0000 [conn15] command deadlinedb.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:12 reslen:40 16ms
2014-04-08T20:04:03.710+0000 [conn15] command deadlinedb.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:22 reslen:40 30ms
2014-04-08T20:04:04.042+0000 [conn12] insert deadlinedb.JobTasks query: { _id: "534456337f87b2196c098435_22", JobID: "534456337f87b2196c098435", TaskID: 22, Frames: "132-137", Slave: "", Stat: 2, Errs: 0, Start: new Date(-62135596800000), StartRen: new Date(-62135596800000), Comp: new Date(-62135596800000), WtgStrt: false } ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:15158 15ms
2014-04-08T20:04:04.043+0000 [conn12] command deadlinedb.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:49 reslen:40 17ms
2014-04-08T20:04:04.072+0000 [conn15] SEVERE: Invalid access at address: 0x20069d1e02
2014-04-08T20:04:04.081+0000 [conn15] SEVERE: Got signal: 11 (Segmentation fault).
Backtrace:0x11bd301 0x11bc6de 0x11bc7cf 0x7f659066acb0 0x796892 0x796fbf 0x79773d 0x78f974 0x78fd42 0xc220ee 0xc430b9 0xc3ce61 0xc48ca6 0xb8f1c9 0xb993e8 0x76b76f 0x117367b 0x7f6590662e9a 0x7f658f9753fd
 mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11bd301]
 mongod() [0x11bc6de]
 mongod() [0x11bc7cf]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x7f659066acb0]
 mongod(_ZNK5mongo11mutablebson8Document4Impl12writeElementINS_16BSONArrayBuilderEEEvjPT_PKNS_10StringDataE+0x52) [0x796892]
 mongod(_ZNK5mongo11mutablebson8Document4Impl13writeChildrenINS_16BSONArrayBuilderEEEvjPT_+0x4f) [0x796fbf]
 mongod(_ZNK5mongo11mutablebson8Document4Impl12writeElementINS_14BSONObjBuilderEEEvjPT_PKNS_10StringDataE+0x22d) [0x79773d]
 mongod(_ZN5mongo11mutablebson8Document11makeElementENS0_12ConstElementEPKNS_10StringDataE+0x64) [0x78f974]
 mongod(_ZN5mongo11mutablebson8Document27makeElementWithNewFieldNameERKNS_10StringDataENS0_12ConstElementE+0x22) [0x78fd42]
 mongod(_ZNK5mongo11ModifierPop3logEPNS_10LogBuilderE+0x10e) [0xc220ee]
 mongod(_ZN5mongo12UpdateDriver6updateERKNS_10StringDataEPNS_11mutablebson8DocumentEPNS_7BSONObjEPNS_11FieldRefSetE+0x319) [0xc430b9]
 mongod(_ZN5mongo6updateERKNS_13UpdateRequestEPNS_7OpDebugEPNS_12UpdateDriverEPNS_14CanonicalQueryE+0xc71) [0xc3ce61]
 mongod(_ZN5mongo14UpdateExecutor7executeEv+0x66) [0xc48ca6]
 mongod(_ZN5mongo14receivedUpdateERNS_7MessageERNS_5CurOpE+0x729) [0xb8f1c9]
 mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xec8) [0xb993e8]
 mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x9f) [0x76b76f]
 mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x4fb) [0x117367b]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f6590662e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f658f9753fd]

Could you please look at this ASAP? Please let me know if further info is required.

Thanks!

Vibhu


Solution

Since no one else has added the answer, I will.

This is a bug in MongoDB 2.6.0, and you can see the report in the MongoDB Jira.

ISSUE SUMMARY Issuing an update to an array where more than 128 BSON elements of the document (not limited to the array itself) are traversed during the update may result in a crash of the mongod process. Affected update operators include {$pop: {&lt;field&gt;: -1}} (pop from the front) and positional updates with &lt;field&gt;.$ or &lt;field&gt;.&lt;n&gt; where &lt;n&gt; is a numeric value.
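
For illustration, here is a minimal reproduction sketch in the mongo shell based on the trigger conditions quoted above. The collection name (repro) and array field (arr) are made up for the example; running this against an affected 2.6.0 mongod may crash the server process, so only try it on a disposable test instance.

// Hypothetical reproduction sketch for the 2.6.0 bug described above.
// Collection and field names are illustrative only.

// Build a document whose array pushes the number of traversed BSON
// elements past 128.
var bigArray = [];
for (var i = 0; i < 200; i++) {
    bigArray.push(i);
}
db.repro.insert({ _id: 1, arr: bigArray });

// $pop with -1 removes from the front of the array; on an affected
// 2.6.0 build this update can trigger the segmentation fault.
db.repro.update({ _id: 1 }, { $pop: { arr: -1 } });

// Positional ($) updates on such a document are reported to be
// affected as well.
db.repro.update({ _id: 1, arr: 5 }, { $set: { "arr.$": 99 } });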

USER IMPACT Users updating large arrays with $pop or positional operators are affected most, but the limit of 128 BSON elements applies to the entire document, not just the array. The failed update will crash the primary (and any primary nodes that are subsequently elected, if the application repeats the update against one of those nodes). Deployments will suffer downtime until the cluster can be downgraded, but no data loss / data corruption will be incurred.

There is nothing that can be done within version 2.6.0 itself. The workarounds are to upgrade to 2.6.1 or higher, or to downgrade to the 2.4.x branch.
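
As a quick check before and after applying either workaround, you can confirm which server version the shell is connected to; these are standard mongo shell helpers, shown here only as a convenience:

// Report the version of the mongod the shell is connected to.
db.version()           // e.g. "2.6.1"
// Full build details (git hash, allocator, 32/64-bit, etc.).
db.serverBuildInfo()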

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow