Question

My Tcl script aborts, saying that it is unable to realloc 2191392 bytes. This happens when the script runs for a long time, say more than 10 hours. The script connects to devices over telnet and ssh and executes/verifies command output on those devices. The Linux machine has enough RAM (32 GB), and ulimit is unlimited for process, data, and file size. My script's process doesn't eat up much memory; even in the worst case it uses < 1 GB. I just wonder why memory allocation fails when there is plenty of RAM.


Solution

That message is an indication that an underlying malloc() call returned NULL (in a non-recoverable location), and given that it's not for very much, it's an indication that the system is thoroughly unable to allocate much memory. Depending on how your system is configured (32-bit vs. 64-bit; check what parray tcl_platform prints to find out) that could be an artefact of a few things, but if you think that it shouldn't be using more than a gigabyte of memory, it's an indication of a memory leak.
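For example, in a tclsh session you can inspect the build directly; the `pointerSize` entry (available since Tcl 8.5) tells you whether the interpreter is a 32-bit build, which would cap the process at a few gigabytes of address space regardless of installed RAM:

```tcl
# Dump the whole tcl_platform array to see OS, word size, etc.
parray tcl_platform

# Or test the specific entry: 4 means a 32-bit build, 8 means 64-bit.
if {$tcl_platform(pointerSize) == 8} {
    puts "64-bit build; address space is not the limit"
} else {
    puts "32-bit build; process address space is limited to a few GB"
}
```

A 32-bit build that slowly leaks can exhaust its address space long before the machine runs out of physical memory, which matches the symptom of a failed realloc on an otherwise healthy box.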

Unfortunately, it's hard to chase down memory leaks in general. Tcl's built-in memory command (enabled via configure --enable-symbols=mem at build time) can help, as can a tool like Electric Fence, but they are imperfect and can't generally tell you where you're getting things wrong (as you'll be looking for the absence of something to release memory). At the Tcl level, see whether each of the variables listed by info globals is of sensible size or whether there's a growing number of globals. You'll want to use tools like string length, array exists, array size and array names for this.
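A minimal sketch of that global-variable audit might look like this (`auditGlobals` is a hypothetical helper name, not a standard command); run it periodically and compare the reports to see which variables grow over time:

```tcl
# Sketch: print the approximate size of every global variable so that
# a variable (or array) growing without bound stands out between runs.
proc auditGlobals {} {
    foreach name [lsort [info globals]] {
        upvar #0 $name var
        if {[array exists var]} {
            puts [format "%-32s array,  %6d entries" $name [array size var]]
        } elseif {[info exists var]} {
            puts [format "%-32s scalar, %6d chars" $name [string length $var]]
        }
    }
    puts "total globals: [llength [info globals]]"
}
```

The `info exists` guard matters because a name can be declared global without having a value yet; `string length` is only a rough proxy for memory use, but it is enough to spot monotonic growth.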

It's also possible for memory allocation to fail due to another process consuming so much memory that the OS starts to feel highly constrained. I hope this isn't happening in your case, since it's much harder to prevent.
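If you suspect system-wide pressure rather than a leak in your own process, one option is to log overall memory availability alongside your script's activity. A Linux-only sketch (it reads `/proc/meminfo`, so it assumes a kernel new enough to report `MemAvailable`):

```tcl
# Sketch: log system-wide available memory (Linux only), so the script's
# own log shows whether the whole machine was under memory pressure at
# the time an allocation failed.
proc logAvailableMemory {} {
    set f [open /proc/meminfo r]
    set data [read $f]
    close $f
    if {[regexp -line {^MemAvailable:\s+(\d+) kB} $data -> kb]} {
        puts "[clock format [clock seconds]] MemAvailable: ${kb} kB"
    }
}
```

Calling this from a periodic `after` timer during the 10-hour run would tell you whether free memory on the box was actually dwindling when the realloc failed.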

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow