Question

what's happening is i'm reading encryption packets, and I encounter a corrupted packet that gives back a very large random number for the length.

size_t nLengthRemaining = packet.nLength - (packet.m_pSource->GetPosition() - packet.nDataOffset);

seckey.SecretValues.m_data.resize(nLengthRemaining);

In this code m_data is a std::vector<unsigned char>. Because the packet is corrupted, nLengthRemaining is far too large, so the resize call throws. The problem isn't that resize throws (we handle the exceptions), but that by the time it throws, resize has already corrupted memory, and that leads to more exceptions later.

What I want to do is know if the length is too large before I call resize, then only call resize if it's ok. I have tried putting this code before the call to resize:

std::vector<unsigned char>::size_type nMaxSize = seckey.SecretValues.m_data.max_size();
if(seckey.SecretValues.m_data.size() + nLengthRemaining >=  nMaxSize) {
    throw IHPGP::PgpException("corrupted packet: length too big.");
}
seckey.SecretValues.m_data.resize(nLengthRemaining);

This code uses the std::vector max_size member function to test whether nLengthRemaining is too large. That check must not be reliable, though, because nLengthRemaining is still less than nMaxSize, yet apparently still large enough to cause resize to have a problem (nMaxSize was 4xxxxxxxxx and nLengthRemaining was 3xxxxxxxxx).

Also, I haven't determined what exception resize is throwing. It's not a std::length_error and it's not a std::bad_alloc. What exception it's throwing really isn't too important to me, but I'm curious to know.

By the way, just so you know, this code does work correctly in normal cases. The corrupted data packet is the only case where it goes crazy. Please help! Thanks.

UPDATE:

@Michael: For now I'll just ignore the packet if it's larger than 5 MB. I'll discuss with the other team members about possibly validating the packets (the validation may already be there and I just don't know about it). I'm starting to think it really is a bug in our version of the STL; the exception it throws isn't even a std::exception, which surprised me. I'll also try to find out from my supervisor what version of the STL we're running (how would I check?).

ANOTHER UPDATE: I just proved that it is a bug in the STL version I'm using on my Visual Studio 6 development machine. I wrote this sample app:

// VectorMaxSize.cpp : Defines the entry point for the console application.
//

#include "stdafx.h"
#include <vector>
#include <iostream>
#include <math.h>
#include <typeinfo>

typedef std::vector<unsigned char> vector_unsigned_char;

void fill(vector_unsigned_char& v) {
    for (int i = 0; i < 100; i++) v.push_back(static_cast<unsigned char>(i));
}


void oput(vector_unsigned_char& v) {
    std::cout << "size: " << v.size() << std::endl;
    std::cout << "capacity: " << v.capacity() << std::endl;
    std::cout << "max_size: " << v.max_size() << std::endl << std::endl;
}

int main(int argc, char* argv[]) {
    {
        vector_unsigned_char v;

        fill(v);

        try{
            v.resize(static_cast<size_t>(3555555555));
        }catch(std::bad_alloc&) {
            std::cout << "caught bad alloc exception" << std::endl;
        }catch(const std::exception& x) {
            std::cerr << typeid(x).name() << std::endl;
        }catch(...) {
            std::cerr << "unknown exception" << std::endl;
        }

        oput(v);    
        v.reserve(500);
        oput(v);
        v.resize(500);
        oput(v);
    }

    std::cout << "done" << std::endl;
}

On my VS6 dev machine it shows the same behavior as the encryption project: it causes all kinds of havoc. When I build and run it on my Visual Studio 2008 machine, resize throws a std::bad_alloc exception and the vector is not corrupted, just as we would have expected! Time for some EA Sports NCAA Football, hehe!


Solution

I think that vector::max_size() is pretty much always a 'hard coded' thing - it's independent of how much memory the system/library is prepared to dynamically allocate. Your problem seems to be a bug in the vector implementation that corrupts things when an allocation fails.

'Bug' might be too strong of a word. vector::resize() is defined in terms of vector::insert() and the standard says this about vector::insert():

If an exception is thrown other than by the copy constructor or assignment operator of T there are no effects

So there may be times (namely, when T's copy constructor or assignment operator throws) when the resize() operation is allowed to leave the vector in a damaged state, but it would still be nice if the operation were exception safe (and I think it wouldn't be out of line to expect the library to do that, but maybe it's harder than I imagine).

You seem to have a couple reasonable options:

  • change or update to a library that doesn't have the corruption bug (what compiler/library version are you using?)
  • instead of checking against vector::max_size(), set nMaxSize to your own reasonable maximum and do what you have above, but using that threshold instead (see the sketch after this list).
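A minimal sketch of that second option, reusing the names from your snippet (IHPGP::PgpException comes from your own code; the 5 MB figure is only an example cap, not a recommendation):

// Reject any remaining length above a cap you consider sane for a packet,
// long before the vector gets a chance to allocate.
const std::vector<unsigned char>::size_type nMaxReasonableSize = 5 * 1024 * 1024; // example cap

if (nLengthRemaining > nMaxReasonableSize) {
    throw IHPGP::PgpException("corrupted packet: length too big.");
}
seckey.SecretValues.m_data.resize(nLengthRemaining);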

Edit:

I see that you're using VC6 - there's definitely a bug in vector::resize() that might have something to do with your problem, though looking at the patch I honestly don't see how (actually it's a bug in vector::insert(), but as mentioned, resize() calls insert()). I'd guess it would be worthwhile to visit Dinkumware's page of bug fixes for VC6 and apply the fixes.

The problem might also have something to do with the <xmemory> patch on that page - it's unclear what the bug is that's discussed there, but vector::insert() does call _Destroy() and vector<> does define the name _Ty so you might be running into that problem. One nice thing - you won't have to worry about managing the changes to the headers, as Microsoft is never touching them again. Just make sure the patches make it into version control and get documented.

Note that Scott Meyers in "Effective STL" suggests using SGI's or STLport's library to get better STL support than what ships with VC6. I haven't done that, so I'm not sure how well those libraries work (but I also haven't used VC6 with the STL very much). Of course, if you have the option to move to a newer version of VC, by all means do it.


One more edit:

Thanks for the test program...

VC6's _Allocate() implementation for the default allocator (in <xmemory>) uses a signed int to specify the number of elements to allocate, and if the size passed in is negative (which is apparently what you're doing - certainly you are in the test program), the _Allocate() function forces the requested allocation size to zero and proceeds. Note that a zero-sized allocation request will pretty much always succeed (not that vector checks for a failure anyway), so vector::resize() merrily tries to move its contents into the new block, which isn't quite big enough, to say the least. So the heap gets corrupted, it'll likely hit an invalid memory page, and regardless - your program is hosed.

So the bottom line is don't ever ask VC6 to allocate more than INT_MAX objects in one go. Probably not a great idea in most circumstances (VC6 or otherwise).
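To illustrate the narrowing (a standalone sketch, not VC6's actual code): a request like 3555555555 doesn't fit in a signed 32-bit int, so an allocator that stores the element count in an int sees a negative value. A defensive guard simply refuses such requests up front.

#include <climits>
#include <cstddef>
#include <iostream>

int main() {
    std::size_t nRequested = 3555555555u;                   // the size from the test program
    int nAsInt = static_cast<int>(nRequested);              // what a signed-int allocator would see
    std::cout << "as signed int: " << nAsInt << std::endl;  // negative on typical builds

    // Guard in the spirit of the advice above: never ask for more than INT_MAX elements.
    if (nRequested > static_cast<std::size_t>(INT_MAX)) {
        std::cout << "refusing oversized request" << std::endl;
    }
    return 0;
}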

Also, you should keep in mind that VC6 uses a pre-standard idiom of returning 0 from new when an allocation fails rather than throwing bad_alloc.
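For completeness, a minimal sketch of the extra check that pre-standard behavior forces on you (a conforming operator new throws std::bad_alloc instead of returning 0; the function below is purely illustrative):

#include <cstddef>

void allocateOldStyle(std::size_t nCount) {
    // On pre-standard implementations such as VC6's default operator new,
    // a failed allocation comes back as a null pointer, not an exception.
    unsigned char* pBuffer = new unsigned char[nCount];
    if (pBuffer == 0) {
        return;           // allocation failed and nothing was thrown
    }
    // ... use pBuffer ...
    delete[] pBuffer;
}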

OTHER TIPS

I would STRONGLY suggest that you check your data for corruption BEFORE calling library functions with possibly wrong arguments!

Use some kind of hash code or checksum algorithm on your packets. You cannot rely on the library to help you here, since it isn't able to: you may hand it a corrupted size that is still valid from the library's point of view but very large, so it allocates, for example, 768 MB of RAM. That may work if there is enough free memory in the system, but it may fail if other running programs consume too much memory on your 1024 MB machine.

So as said above: Check first!
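A minimal sketch of that idea; the XOR checksum and the parameter names are purely hypothetical stand-ins for whatever integrity check your packet format actually defines:

#include <cstddef>
#include <vector>

// Illustrative only: verify a stored checksum and a sane length bound
// before trusting a length field enough to allocate with it.
bool isPacketPlausible(const std::vector<unsigned char>& payload,
                       unsigned char storedChecksum,
                       std::size_t declaredLength,
                       std::size_t maxReasonableLength) {
    if (declaredLength > maxReasonableLength) {
        return false;                       // reject absurd lengths before any allocation
    }
    unsigned char computed = 0;
    for (std::size_t i = 0; i < payload.size(); ++i) {
        computed ^= payload[i];             // toy XOR checksum, not a real algorithm
    }
    return computed == storedChecksum;
}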

I have no idea what you mean when you say "resize has corrupted memory". How do you determine that?

FWIW, I don't agree with Michael's answer. If std::vector<>::resize() throws on vector expansion, I see three possibilities:

  1. Either one of the constructors used to fill the new space (or copy the elements) threw or
  2. the allocator used to grow the vector did
  3. or the vector determined before-hand that the requested size is too much and throws.

With std::vector<unsigned char> we can safely dismiss #1, and since you say it's not a std::length_error, #3 is out as well, which leaves #2. If you don't use any special allocator, then std::allocator should be used and, AFAIK, that calls new to allocate memory, and new would throw std::bad_alloc. However, you say you cannot catch this, so I don't know what happens.

Whatever it is, it should be derived from std::exception, so you could do this to find out:

try {
  my_vec.resize( static_cast<std::size_t>(-1) );
} catch(const std::exception& x) {
  std::cerr << typeid(x).name() << '\n';
}

What's the result of that?

Anyway, whatever it is, I'm fairly sure it should not corrupt memory. Either that's a bug in your std lib implementation (unlikely, if you ask me, unless you use a very old one) or you've done something wrong elsewhere.


Edit now that you said you're using VS6...

You should have said this earlier. VC6 was released more than a decade ago, after MS had lost their vote in the standards committee because they hadn't attended the meetings for too long. The standard library implementation they shipped was from Dinkumware (good), but due to legal trouble it was the one for VC5 (very bad), which had lots of smaller and bigger bugs and didn't even support member templates, even though the VC6 compiler did. Honestly, what do you expect from such an old product?

If you cannot switch to a decent VC version (I'd recommend at least VC7.1, aka VS.NET 2003, as this was the one that made the major leap towards standard conformance), at least see if Dinkumware still sells a VC6 version of their excellent library. (Actually, I'd be surprised, but they used to have one and you never know...)

As for the exceptions: in earlier VC versions (this includes VC6 and does not include VC8, aka VS.NET 2005; I'm not sure about VC7.1, though), access violations could by default be caught by catch(...). So if such a catch block caught something, you wouldn't know whether it even was a C++ exception. My advice would be to only use catch(...) in conjunction with throw; in order to let such an exception pass on. If you do, you get a real crash at AVs and are able to stack-trace them in the debugger. If you don't, AVs will be swallowed and you're stuck with an application that's gone berserk without you even knowing it. But doing anything other than aborting an AV'ed application makes no sense: an AV is one result of undefined behavior, and after that, all bets are off.
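A minimal sketch of that rethrow idiom; ProcessPacket() is a hypothetical stand-in for whatever work is being guarded:

try {
    ProcessPacket();                 // hypothetical stand-in for the guarded work
} catch (const std::exception& x) {
    std::cerr << x.what() << '\n';   // handle genuine C++ exceptions here
} catch (...) {
    throw;                           // let anything else (e.g. a VC6-caught access violation) keep propagating
}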

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow