Question

Last week I wrote a few lines of code in C# to load a large text file (300,000 lines) into a Dictionary. It took ten minutes to write and it executed in less than a second.

Now I'm converting that piece of code into C++ (because I need it in an old C++ COM object). I've spent two days on it so far. :-( Although the productivity difference is shocking on its own, it's the performance that I need some advice on.

It takes seven seconds to load, and even worse: it takes just as long to free all the CStringWs afterwards. This is not acceptable, and I must find a way to increase the performance.

Is there any chance that I can allocate this many strings without seeing this horrible performance degradation?

My guess right now is that I'll have to stuff all the text into a large array and then let my hash table point to the beginning of each string within this array and drop the CStringW stuff.

But before that, any advice from you C++ experts out there?

EDIT: My answer to myself is given below. I realized that that is the fastest route for me, and also a step in what I consider the right direction - towards more managed code.

Solution

You are stepping into the shoes of Raymond Chen. He did the exact same thing, writing a Chinese dictionary in unmanaged C++. Rico Mariani did too, writing it in C#. Mr. Mariani made one version. Mr. Chen wrote 6 versions, trying to match the perf of Mariani's version. He pretty much rewrote significant chunks of the C/C++ runtime library to get there.

Managed code got a lot more respect after that. The GC allocator is impossible to beat. Check this blog post for the links. This blog post might interest you too, instructive to see how the STL value semantics are part of the problem.

OTHER TIPS

This sounds very much like the Raymond Chen vs Rico Mariani's C++ vs C# Chinese/English dictionary performance bake off. It took Raymond several iterations to beat C#.

Perhaps there are ideas there that would help.

http://blogs.msdn.com/ricom/archive/2005/05/10/performance-quiz-6-chinese-english-dictionary-reader.aspx

Yikes. Get rid of the CStrings...

Try a profiler as well. Are you sure you were not just running debug code?

Use std::string instead.

EDIT:

I just did a simple test of ctor and dtor comparisons.

CStringW seems to take between two and three times as long to do a new/delete.

I iterated 1,000,000 times doing a new/delete for each type - nothing else, with a GetTickCount() call before and after each loop. I consistently got twice as long for CStringW.
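For reference, a minimal sketch of that timing loop. The iteration count and the GetTickCount() bracketing follow the description above; the literal being constructed is my assumption.

#include <windows.h>  // GetTickCount, DWORD
#include <atlstr.h>   // CStringW
#include <string>
#include <cstdio>

int main()
{
    const int kIterations = 1000000;

    // time CStringW new/delete
    DWORD t0 = GetTickCount();
    for (int i = 0; i < kIterations; ++i)
    {
        CStringW * s = new CStringW(L"some test text");  // assumed literal
        delete s;
    }
    DWORD cstringMs = GetTickCount() - t0;

    // time std::wstring new/delete
    t0 = GetTickCount();
    for (int i = 0; i < kIterations; ++i)
    {
        std::wstring * s = new std::wstring(L"some test text");
        delete s;
    }
    DWORD wstringMs = GetTickCount() - t0;

    printf("CStringW: %lu ms, std::wstring: %lu ms\n",
           (unsigned long)cstringMs, (unsigned long)wstringMs);
    return 0;
}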

That doesn't address your entire issue, though, I suspect.

EDIT: I also don't think that using string or CStringW is the real problem - there is something else going on that is causing your issue.

(but for god's sake, use the STL anyway!)

You need to profile it. That is a disaster.

If it is a read-only dictionary, then the following should work for you.

Use fseek/ftell functionality to find the size of the text file.

Allocate a chunk of memory of that size + 1 to hold it.

fread the entire text file into your memory chunk.

Iterate through the chunk:

    push_back into a vector<const char *> the starting address of each line.

    search for the line terminator using strchr.

    when you find it, deposit a NUL, which turns the line into a string.
    the next character is the start of the next line

until you do not find a line terminator.

Insert a final NUL character.

You can now use the vector to get the pointer that will let you access the corresponding value.

When you are finished with your dictionary, deallocate the memory and let the vector die when it goes out of scope. (A sketch of the whole approach follows below.)

[EDIT] This can be a little more complicated on the DOS platform, as the line terminator is CRLF.

In that case, use strstr to find it, and increment by 2 to find the start of the next line.
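Putting the steps above together, here is a minimal sketch of the approach (LF line endings assumed - the CRLF variant just swaps in strstr; the file name is a placeholder and error handling is mostly omitted):

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>

int main()
{
    FILE * f = fopen("dictionary.txt", "rb");   // placeholder file name
    if (!f) return 1;

    // fseek/ftell to find the size of the text file
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    // one chunk of memory of that size + 1
    char * buffer = (char *)malloc(size + 1);
    fread(buffer, 1, size, f);
    fclose(f);
    buffer[size] = '\0';                        // the final NUL

    // iterate through the chunk, collecting line starts
    std::vector<const char *> lines;
    char * p = buffer;
    while (*p)
    {
        lines.push_back(p);                     // starting address of this line
        char * nl = strchr(p, '\n');
        if (!nl) break;                         // no terminator found: last line
        *nl = '\0';                             // deposit a NUL: p is now a C string
        p = nl + 1;                             // next character starts the next line
    }

    printf("loaded %u lines\n", (unsigned)lines.size());

    free(buffer);   // one deallocation frees every "string" at once
    return 0;
}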

What sort of container are you storing your strings in? If it's a std::vector of CStringW and you haven't reserve-ed enough memory beforehand, you're bound to take a hit. A vector typically resizes once it reaches its limit (which is not very high) and then copies out the entirety to the new memory location, which can give you a big hit. As your vector grows exponentially (i.e. if the initial size is 1, it allocates 2 the next time, then 4, and so on), the hit becomes less and less frequent.

It also helps to know how long the individual strings are. (At times :)
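As a concrete illustration of the reserve point above - the 300,000 figure comes from the question, and the element type is my assumption:

#include <vector>
#include <atlstr.h>   // CStringW

int main()
{
    std::vector<CStringW *> entries;
    entries.reserve(300000);   // capacity allocated once: no resize-and-copy cycles
    // ... push_back up to 300,000 pointers while parsing the file ...
    return 0;
}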

Thanks all of you for your insightful comments. Upvotes for you! :-)

I must admit I wasn't prepared for this at all - that C# would beat the living crap out of good old C++ in this way. Please don't read that as an offence to C++, but rather as a tribute to the amazingly good memory manager that sits inside the .NET Framework.

I decided to take a step back and fight this battle in the InterOp arena instead! That is, I'll keep my C# code and let my old C++ code talk to the C# code over a COM interface.

A lot of questions were asked about my code and I'll try to answer some of them:

  • The compiler was Visual Studio 2008 and no, I wasn't running a debug build.

  • The file was read with a UTF-8 file reader which I downloaded from a Microsoft employee who published it on their site. It returned CStringWs, and about 30% of the time was actually spent there, just reading the file.

  • The container I stored the strings in was just a fixed-size vector of pointers to CStringWs, and it was never resized.

EDIT: I'm convinced that the suggestions I was given would indeed work, and that I could probably beat the C# code if I invested enough time in it. On the other hand, doing so would provide no customer value at all, and the only reason to follow through with it would be just to prove that it could be done...

The problem is not in the CString, but rather that you are allocating a lot of small objects - the default memory allocator isn't optimized for this.

Write your own allocator - allocate a big chunk of memory and then just advance a pointer in it when allocating. This is actually what the .NET allocator does. When you are done, delete the whole buffer.

I think there was a sample of writing custom new/delete operators in (More) Effective C++.
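A hedged sketch of that per-class operator new/delete idea: operator new just bumps a pointer into one big pre-allocated pool, and operator delete is a no-op because the pool is released in one call. The Entry class, pool size, and alignment handling are all illustrative assumptions, and exhaustion checking is omitted for brevity.

#include <cstdlib>
#include <cstddef>

class Entry
{
    static char * s_pool;    // one big chunk for all Entry objects
    static size_t s_used;
public:
    static void InitPool(size_t bytes) { s_pool = (char *)std::malloc(bytes); s_used = 0; }
    static void FreePool()             { std::free(s_pool); s_pool = 0; }

    void * operator new(size_t size)
    {
        void * p = s_pool + s_used;
        s_used += (size + 7) & ~size_t(7);   // keep 8-byte alignment
        return p;                            // no exhaustion check, for brevity
    }
    void operator delete(void *) {}          // no-op: FreePool() releases everything
};
char * Entry::s_pool = 0;
size_t Entry::s_used = 0;

int main()
{
    Entry::InitPool(1024);
    Entry * e = new Entry;   // comes from the pool, not the heap
    delete e;                // no-op
    Entry::FreePool();       // everything freed at once
    return 0;
}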

Load the text into a single buffer, parse it to replace line breaks with string terminators ('\0'), and use pointers into that buffer to add to the set.

Alternatively - e.g. if you have to do an ANSI/UNICODE conversion during load - use a chunk allocator, that sacrifices deleting individual elements.

#include <vector>
#include <windows.h>   // for BYTE

class ChunkAlloc
{
   std::vector<BYTE> m_data;
   size_t m_fill;
public:
   ChunkAlloc(size_t chunkSize) : m_data(chunkSize), m_fill(0) {}
   void * Alloc(size_t size)
   {
     if (m_data.size() - m_fill < size)
     {
        // normally, you'd reserve a new chunk here
        return 0;
     }
     void * result = &(m_data[m_fill]);
     m_fill += size;   // bump the fill position past this allocation
     return result;
   }
};
// all allocations from the chunk are freed when the chunk is destroyed.

I wouldn't hack that together in ten minutes, but 30 minutes and some testing sounds fine :)
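A short usage sketch, assuming the ChunkAlloc class above is in scope (the chunk size and string content are illustrative): every string is carved out of the single chunk, so freeing is one destructor call instead of thousands of deletes.

#include <cwchar>

int main()
{
    ChunkAlloc chunk(1024 * 1024);   // one 1 MB chunk
    for (int i = 0; i < 1000; ++i)
    {
        wchar_t * s = (wchar_t *)chunk.Alloc(32 * sizeof(wchar_t));
        if (!s) break;               // chunk exhausted
        std::wcscpy(s, L"some dictionary entry");
    }
    return 0;
}   // chunk goes out of scope: all strings freed at once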

When working with string classes, you should always look out for unnecessary operations. For example, don't use constructors, concatenation, and similar operations too often, and especially avoid them in loops. I suppose there's some character-encoding reason you use CStringW, so you probably can't use something different; otherwise, that would be another way to optimize your code.

It's no wonder that the CLR's memory management is better than the bunch of old and dirty tricks MFC is based on: it is at least two times younger than MFC itself, and it is pool-based. When I had to work on a similar project with string arrays and WinAPI/MFC, I just used std::basic_string instantiated with WinAPI's TCHAR and my own allocator based on Loki::SmallObjAllocator. You can also take a look at boost::pool in this case (if you want it to have an "std feel" or have to use a version of the VC++ compiler older than 7.1).
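For illustration, a minimal sketch of that combination using boost::pool_allocator (a Loki::SmallObjAllocator-based allocator would be plugged in the same way; the typedef name is mine):

#include <string>
#include <tchar.h>
#include <boost/pool/pool_alloc.hpp>

// basic_string instantiated with WinAPI's TCHAR and a pool-based allocator
typedef std::basic_string<
    TCHAR,
    std::char_traits<TCHAR>,
    boost::pool_allocator<TCHAR> > PooledString;

int main()
{
    PooledString s(_T("allocated from a boost pool, not the default heap"));
    return s.empty() ? 1 : 0;
}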

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow