Question

Related Post: Delete folder with items

How can more pointers in an array of pointers to struct be available for use than were declared? As a learning tool for ftw/nftw, I rewrote the nftw solution in the related post (above) to use ftw, using the ftw callback to fill an array of structs containing the filename, type and depth of each file. File removal then proceeds from maxdepth back to 0, removing files and then directories along the way. This was a test, so printf shows where unlink or rmdir would be called; the removal commands are never executed.

The storage for the array of structs was tried 3 different ways: (1) statically declaring the number of pointers available, struct _rmstat *rmstat[100]; (with ftw's 'nopenfd' set to 200), (2) dynamically allocating struct _rmstat **rmstat;, and finally (3) adding the information to a linked list. Testing the static allocation, I deliberately chose test directories with fewer than 100 files, and then directories with more than 100 files to cause a failure.
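
For context, a minimal sketch of what the static-array variant might look like is below; the struct name, its fields, and the slash-counting depth are assumptions for illustration, not the actual test program:

    /* Hypothetical sketch of the static-array variant; the struct name,
     * its fields and the depth calculation are guesses, not the real code. */
    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>

    struct _rmstat {
        char *name;   /* file or directory path */
        int   type;   /* FTW_F, FTW_D, ...      */
        int   depth;  /* nesting level          */
    };

    static struct _rmstat *rmstat[100];  /* the fixed table of 100 pointers */
    static int nfiles;                   /* entries filled by the callback  */

    static int fill(const char *path, const struct stat *sb, int typeflag)
    {
        struct _rmstat *r = malloc(sizeof *r);
        (void)sb;                            /* unused here */
        if (r == NULL)
            return 1;                        /* non-zero stops the walk */
        r->name = strdup(path);
        r->type = typeflag;
        r->depth = 0;
        for (const char *p = path; *p; p++)  /* plain ftw gives no level,   */
            if (*p == '/')                   /* so count '/' as rough depth */
                r->depth++;
        rmstat[nfiles++] = r;  /* NOTE: no bounds check - entry 101 writes past the array */
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;
        if (ftw(argv[1], fill, 200) != 0)    /* nopenfd = 200, as in the question */
            perror("ftw");
        printf("collected %d entries\n", nfiles);
        return 0;
    }

Note the missing bounds check on rmstat[nfiles++]; that is the line whose failure mode the rest of the question is about.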

To my surprise, the statically allocated test would routinely handle directories containing well over 100 files, and as many as 450! How is that possible? I thought the static allocation struct _rmstat *rmstat[100]; should guarantee a segfault (or a similar core dump) when the 101st struct assignment was attempted. Is there something gcc does in its stack/heap allocation that allows this? Or is this just part of what 'undefined' behavior means? With ftw, I set 'nopenfd' greater than the number of available struct pointers, so I don't think this is the result of ftw limiting file descriptors and closing/reopening files.

I have searched, but can't find an explanation of how you can possibly end up with more usable pointers than you declared. Does anybody here know how this can happen?

The test program source is available. It is safe - it deletes NOTHING, it just prints with printf. Build it with: gcc -Wall -o rmftws rmdir-ftw-static.c. Thanks for any insight you can provide.

Solution

Exceeding the bounds of an array merely results in undefined behaviour. It would be nice if it segfaulted, but it is not required to.

In terms of the concrete question: the compiler has asked the system to allocate a segment to contain the static data and has told it how big to make it. When the system performs the allocation, it may over-allocate, typically rounding the segment up to a page boundary, so writes a little way past the declared array often still land in mapped memory.
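
As a quick throwaway experiment (not the poster's program), the snippet below keeps writing past a 100-element static pointer array; on a typical Linux/gcc setup the crash usually comes well past index 100, once the store finally falls off the last mapped page:

    /* Deliberate out-of-bounds writing - undefined behaviour, for
     * demonstration only.  The BSS segment is rounded up to a page,
     * so indices somewhat past 99 usually still hit mapped memory. */
    #include <stdio.h>

    static void *table[100];

    int main(void)
    {
        for (int i = 0; ; i++) {
            table[i] = NULL;          /* out of bounds once i >= 100 */
            printf("wrote slot %d\n", i);
            fflush(stdout);           /* so the last line survives the crash */
        }
    }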

OTHER TIPS

Declaring 100 pointers in such an array does not always guarantee a segfault. What it does guarantee is a memory overwrite if you use more pointers than the array holds. If the overwritten memory belongs to other variables you have declared, their values will be trashed, but you will not get a fault right there and then. Only later, when your code uses whatever values were stored in those variables, may it misbehave without crashing, or crash at some point for a reason that is no longer obviously related to the overwrite of the original array.
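
A minimal illustration of that point (the adjacent placement is only an assumption; the language guarantees nothing about layout): an out-of-bounds store can land in a neighbouring object instead of faulting, silently corrupting it.

    /* One write past the end of the array - undefined behaviour.  If the
     * compiler/linker happen to place 'counter' right after 'table' in the
     * BSS, the store corrupts it and the program keeps running rather than
     * crashing; nothing about this layout is guaranteed. */
    #include <stdio.h>

    static void *table[100];
    static int   counter;        /* may or may not sit right after the array */

    int main(void)
    {
        counter = 42;
        table[100] = &counter;   /* element 101: out of bounds */
        printf("counter is now %d\n", counter);   /* may print 42, garbage, or crash */
        return 0;
    }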

One case in which you might crash immediately upon using the 101st location of that array is when the compiler happens to place the array at the exact end of the current data section and the next page is write-protected. But that is a compiler- and OS-controlled matter.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow