Problem

I stumbled upon the following code:

//
// Top-level file that includes all of the C/C++ files required
//
// The C code may be compiled by compiling this top file only,
// or by compiling individual files then linking them together.

#ifdef __cplusplus
extern "C" {
#endif

#include <stdlib.h>
#include "my_header.h"
#include "my_source1.cc"
#include "my_source2.cc"

#ifdef __cplusplus
}
#endif

This is definitely unusual but is it considered bad practice and if so why?

One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?

Solution

First off: extern "C" { #include "my_cpp_file.cc" } just doesn't add up... anyway, I'll attempt to answer your question using a practical example.
Note that sometimes, you do see #include "some_file.c" in a source file. Often this is done because the code in the other file is under development, or it's not certain that the feature that is being developed in that file will make the release.
Another reason is quite simple: to improve readability (not having to scroll as much), or to reflect the program's threading structure. To some, having a thread's code in a separate file helps, especially when learning threading.

Of course, the major benefit of including translation units into one master translation unit (which, to me, is abusing the pre-processor, but that's not the point) is simple: less I/O while compiling, hence, faster compilation. It's all been explained here.

That's one side of the story, though. This technique is not perfect; here are a couple of considerations. And just to balance out the "magic of unity builds" article, here's the "evils of unity builds" article.

Anyway, here's a short list of my objections, and some examples:

  • static global variables (be honest, we've all used them)
  • extern and static functions alike: both become callable everywhere in the combined translation unit
  • Debugging would require you to build everything, unless (as the "pro" article suggests) you have both a unity build and a modular build ready for the same project. IMO a bit of a faff
  • Not suitable if you're looking to extract a lib from your project you'd like to re-use later on (think generic shared libraries or DLL's)

Just compare these two situations:

//foo.h
struct foo
{
    char *value;
    int checksum;
    struct foo *next;
};

extern struct foo * get_foo(const char *val);

extern void free_foo( struct foo **foo);

//foo.c
#include <stdlib.h>
#include <string.h>
#include "foo.h"

static int get_checksum( const char *val);

struct foo * get_foo( const char *val)
{
    //calls get_checksum
    struct foo *retVal = malloc(sizeof *retVal);
    retVal->value = calloc(strlen(val) + 1, 1);
    strcpy(retVal->value, val);
    retVal->checksum = get_checksum(val);
    retVal->next = NULL;
    return retVal;
}

void free_foo ( struct foo **foo)
{
    free((*foo)->value);
    if ((*foo)->next != NULL)
        free_foo(&(*foo)->next);
    free(*foo);
    *foo = NULL;
}

If I were to include this C file in another source file, the get_checksum function would be callable in that file, too. As it stands, it is private to foo.c.
Name conflicts would be a lot more common, too.

Imagine, too, that you wrote some code to easily perform certain quick MySQL queries. I'd write my own header and source files, and compile them like so:

gcc -Wall -std=c99 -c mysql_file.c `mysql_config --cflags` -o mysql.o

And simply use that compiled mysql.o file in other projects, by linking against it like this:

//another_file.c
#include "mysql_file.h"

int main ( void )
{
    my_own_mysql_function();
    return 0;
}

Which I can then compile like so:

gcc another_file.c mysql.o `mysql_config --libs` -o my_bin

This saves development time, compilation time, and makes your projects easier to manage (provided you know your way around a make file).
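The make file remark can be sketched like this (a hypothetical Makefile for the example above; the target and file names are assumptions carried over from the commands shown):

```make
# mysql.o is rebuilt only when its source or header changes;
# my_bin just relinks against the existing object file.
CFLAGS = -Wall -std=c99 $(shell mysql_config --cflags)
LDLIBS = $(shell mysql_config --libs)

my_bin: another_file.c mysql.o
	gcc $(CFLAGS) $^ $(LDLIBS) -o $@

mysql.o: mysql_file.c mysql_file.h
	gcc $(CFLAGS) -c $< -o $@
```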

Another advantage of these .o files shows up when collaborating on projects. Suppose I announce a new feature for our mysql.o file. All projects that have my code as a dependency can safely continue to use the last stable compiled mysql.o file while I'm working on my piece of the code.
Once I'm done, we can test my module against stable dependencies (the other .o files) and make sure I didn't add any bugs.

Other tips

The problem is that each of your *.cc files will be compiled every time the header is included.

For example, if you have:

// foo.cc:

// also includes implementations of all the functions
// due to my_source1.cc being included
#include "main_header.h"

And:

// bar.cc:

// implementations included (again!)
// ... you get far more object code at best, and a linker error at worst
#include "main_header.h"
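The "linker error at worst" case is easy to reproduce. Here's a self-contained sketch (file names invented) where a function definition pulled into two translation units collides at link time:

```shell
# A function *defined* (not just declared) in a shared include gets
# one copy per translation unit that includes it.
cat > impl.h <<'EOF'
int answer(void) { return 42; }   /* definition in a header: trouble */
EOF
cat > foo.c <<'EOF'
#include "impl.h"
int main(void) { return answer() == 42 ? 0 : 1; }
EOF
cat > bar.c <<'EOF'
#include "impl.h"
EOF

gcc -c foo.c -o foo.o   # each object now carries its own 'answer'
gcc -c bar.c -o bar.o

# Linking both objects fails with a multiple-definition error.
if ! gcc foo.o bar.o -o demo 2> link.log; then
    echo "link failed as expected"
fi
```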

Unrelated, but still relevant: Sometimes, compilers have trouble when your headers include C stdlib headers in C++ code.

Edit: As mentioned above, there is also the problem of having extern "C" around your C++ sources.

This is definitely unusual but is it considered bad practice and if so why?

You're likely looking at a "unity build". Unity builds are a fine approach if configured correctly, but it can be problematic to configure an existing library to be built this way, because there may be conflicts due to expanded visibility -- including implementations which were intended by an author to be private to a translation unit.

However, the definitions (in *.cc) should be outside of the extern "C" block.

One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?

On the contrary: it reduces dependency complexity, because the translation unit count goes down.

License: CC-BY-SA with attribution
Not affiliated with StackOverflow