Question

I'm attempting to "packetize" a large mmap()d file, like so:

//numBytes is based on user input
data = static_cast<char*>(mmap((caddr_t)0, numBytes, PROT_READ, MAP_SHARED, myFile, 0));

int
Sender::Packetize(char* data, int numBytes)
{
    int seqNum = 1;
    int offset = 0;
    size_t totalPacked = 0;
    unsigned int length = sizeof(struct sockaddr_in);

    bool dataRemaining = true;
    while(dataRemaining)
    {
        // MTU = 1460
        size_t payloadSize = (numBytes > MTU) ? MTU : numBytes;
        char* payload = (char*)malloc(payloadSize);

        memcpy(payload, data, payloadSize);
        Packet pac = {seqNum, 0, payloadSize, payload}; //Basic struct

        totalPacked += payloadSize;    
        cout << "Packed Bytes: " << payloadSize << endl;
        cout << "Total Packed: " << totalPacked << endl;

        dataMap.insert(pair<int, struct Packet>(seqNum, pac));

        if(numBytes > MTU)
        {
            offset += MTU;
            data = &data[offset];
        }
        else
            dataRemaining = false;

        numBytes -= MTU;
        seqNum++;
    }

    return 0;
}

I'm working with a 2MB+ file. When I pass in something relatively small for numBytes (5000), everything appears to work swimmingly. However, if I pass in the entire file (2533431 bytes), I get a segfault during the memcpy(). The crash seems to happen at around 100KB:

[.. snip ..]
Packed Bytes: 1460
Total Packed: 99280
Packed Bytes: 1460
Total Packed: 100740
Packed Bytes: 1460
Total Packed: 102200
Segmentation fault (core dumped)

However, if I attempt to go one chunk smaller (100740), I get:

[.. snip ..]
Packed Bytes: 1460
Total Packed: 16060
Packed Bytes: 1460
Total Packed: 17520
Packed Bytes: 1460
Total Packed: 18980
Segmentation fault (core dumped)

Is there some fundamental flaw that I've overlooked, causing my virtual machine to segfault here?


Solution

I believe this bit of code is the culprit:

        offset += MTU;
        data = &data[offset];

offset starts at 0, data starts at x.

  • First time round the loop offset is now 1460, data = data + offset = x + 1460
  • Next time round the loop offset is now 2920, data = data + offset = (x + 1460) + 2920 = x + 4380
  • Next time round the loop offset is now 4380, data = data + offset = (x + 4380) + 4380 = x + 8760

So data is advancing much faster than it should, because the accumulated offset is applied to a pointer that has already been advanced. Eventually you end up reading well past the end of the mapped file, and the memcpy() segfaults.

I recommend removing the data = &data[offset]; part and just using data + offset in the memcpy.
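
For example, here is a minimal sketch of the corrected loop body (diagnostic output omitted): data is never modified, and only the running offset advances by MTU each iteration.

bool dataRemaining = true;
while (dataRemaining)
{
    size_t payloadSize = (numBytes > MTU) ? MTU : numBytes;
    char* payload = (char*)malloc(payloadSize);

    memcpy(payload, data + offset, payloadSize);   // read from data + offset; data itself is untouched
    Packet pac = {seqNum, 0, payloadSize, payload};
    dataMap.insert(pair<int, struct Packet>(seqNum, pac));

    if (numBytes > MTU)
        offset += MTU;                             // only the offset advances
    else
        dataRemaining = false;

    numBytes -= MTU;
    seqNum++;
}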

OTHER TIPS

You are also leaking the payload memory: it is malloc()d on every iteration and never freed, and that is causing a problem.

Problem: once memory is exhausted, malloc returns NULL. You should always check malloc's return value to make sure the allocation succeeded; if you memcpy into a NULL pointer, you get a segfault.

Solution: call free at the appropriate places to release the memory. I would suggest allocating MTU bytes once before entering the loop and freeing them after the loop. If MTU is a compile-time constant, you could instead use a fixed-size array and avoid dynamic allocation entirely.
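
A rough sketch of that pattern, shown in isolation from the per-packet map (a single reused buffer cannot be stored in each Packet, so each payload would have to be sent or consumed before the next iteration overwrites it):

char* payload = (char*)malloc(MTU);          // one allocation for the whole loop
if (payload == NULL)                         // always check malloc's result
    return -1;

int offset = 0;
while (numBytes > 0)
{
    size_t payloadSize = (numBytes > MTU) ? MTU : numBytes;
    memcpy(payload, data + offset, payloadSize);

    // send or otherwise consume payload here, before it is overwritten

    offset   += payloadSize;
    numBytes -= payloadSize;
}

free(payload);                               // released exactly once, after the loop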

Because you are using C++, instead of char* payload = (char*)malloc(payloadSize); you can get the memory from an STL container so that it is released automatically:

vector<unsigned char> buf(size);
payload = &buf[0];

Your memory would be released when buf goes out of scope.
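
For instance, here is a minimal self-contained sketch, assuming Packet is changed so that its payload member is a std::vector<char> instead of a char* (StorePacket is a hypothetical helper used only for illustration):

#include <cstring>
#include <map>
#include <vector>

// Hypothetical variant of the Packet struct: the payload owns its memory.
struct Packet
{
    int seqNum;
    int flags;
    size_t length;
    std::vector<char> payload;
};

std::map<int, Packet> dataMap;

void StorePacket(int seqNum, const char* src, size_t payloadSize)
{
    std::vector<char> payload(payloadSize);
    std::memcpy(&payload[0], src, payloadSize);

    Packet pac = {seqNum, 0, payloadSize, payload};   // the vector is copied into the Packet
    dataMap.insert(std::make_pair(seqNum, pac));
    // No free() anywhere: the buffer is released when the map entry is destroyed.
}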

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow