Question

My colleague claims that we should dissect our C++ application (C++, Linux) into shared libraries to improve code modularity, testability, and reuse.

From my point of view this is a burden, since the code in question neither needs to be shared between applications on the same machine nor loaded or unloaded dynamically, and we can simply link one monolithic application executable.

Moreover, wrapping the C++ classes with C function interfaces makes them uglier, IMHO.

I also think a single-file application will be much easier to update remotely at customer installations.

Should dynamic libraries be used when there is no need to share binary code between applications and no dynamic loading of code?

Solution

I would say that splitting code into shared libraries "to improve things", without any immediate goal in mind, is a sign of a buzzword-infested development environment. It is better to write code that can easily be split up at some point.

But why would you need to break C++ classes into C function interfaces, except perhaps for object creation?

Also, splitting into shared libraries here sounds like an interpreted-language mindset. In a compiled language you do not try to postpone until run time what you can do at compile time. Unnecessary dynamic linking is exactly that case.

Other tips

Enforcing shared libraries ensures that the libraries have no circular dependencies. Using shared libraries often leads to faster links, and link errors are discovered at an earlier stage than if there is no linking at all before the final application is linked. If you want to avoid shipping several files to customers, you can consider linking the application dynamically in your development environment and statically when creating release builds.
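
As a minimal sketch of that dual setup with g++ (the library and file names are invented for the example):

    # Development build: utils as a shared library, linked dynamically
    g++ -fPIC -shared utils.cpp -o libutils.so
    g++ app.cpp -L. -lutils -o app

    # Release build: the same code as a static archive, linked into one file
    g++ -c utils.cpp -o utils.o
    ar rcs libutils.a utils.o
    g++ app.cpp libutils.a -o app   # naming the .a directly forces static linking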

EDIT: I really don't see a reason why you would need to wrap your C++ classes in C interfaces - this is handled behind the scenes. On Linux you can use shared libraries without any special treatment. On Windows, however, you need __declspec(dllexport) and __declspec(dllimport).
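
For illustration, the usual portable way to handle that is an export macro in the library's header; a rough sketch (the MYLIB_* and Widget names are made up for the example):

    // mylib_export.h -- hypothetical header shared by the library and its users
    #if defined(_WIN32)
      #ifdef MYLIB_BUILDING                      // defined only when building the DLL itself
        #define MYLIB_API __declspec(dllexport)
      #else
        #define MYLIB_API __declspec(dllimport)
      #endif
    #else
      #define MYLIB_API                          // Linux: no special treatment needed
    #endif

    class MYLIB_API Widget {                     // the C++ class is exported as-is
    public:
        void draw();                             // defined in the library's .cpp file
    };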

Improve reuse even though there will not be any? Doesn't sound like a strong argument.

Modularity and testability of code need not depend upon the unit of ultimate deployment. I would expect linking to be a late decision.

If you truly have one deliverable and never anticipate any change to it, then delivering in pieces sounds like overkill and needless complexity.

Short answer: no.

Longer answer: dynamic libraries add nothing to testing, modularity, or reuse that cannot be done just as easily in a monolithic app. About the only benefit I can think of is that it may force the creation of an API in a team that does not have the discipline to do it on its own.

There is nothing magical about a library (dynamic or otherwise). If you have all of the code to build an application and the assorted libraries, you can just as easily compile it all together into a single executable.

In general, we've found that the costs of having to deal with dynamic libraries are not worth it unless there is a compelling need (libraries shared by multiple applications, needing to update a number of applications without recompiling, enabling the user to add functions to the application).

Dissecting your colleague's arguments

If he believes that splitting your code into shared libraries will improve code modularity, testability, and reuse, then I guess this means he believes you have some problems with your code, and that enforcing a "shared library" architecture will correct them.

Modularity?

Your code must have undesired interdependencies that would not have happened with a cleaner separation between "library code" and "code using library code".

Now, this can be achieved through static libraries, too.

Testing?

Your code could be tested better, perhaps building unit tests for each separate shared library, automated at each compilation.

Now, this can be achieved through static libraries, too.
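
As a rough sketch of what such a per-library test could look like (the library name, header, and add function are all hypothetical):

    // test_mathlib.cpp -- hypothetical test driver for a libmathlib static or shared library
    #include <cassert>
    #include "mathlib.h"          // the library's public header, assumed here

    int main() {
        assert(add(2, 2) == 4);   // exercise the library through its public interface
        return 0;                 // a non-zero exit would fail an automated build step
    }
    // build and run: g++ test_mathlib.cpp -L. -lmathlib -o test_mathlib && ./test_mathlib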

Reuse of code?

Your colleague would like to reuse some code that is not exposed because it is hidden in the sources of your monolithic application.

Conclusion

Points 1 and 2 (modularity and testing) can still be achieved with static libraries. Only point 3 (reuse) would make shared libraries mandatory.

Now, if you have more than one level of library linking (I'm thinking of linking together two static libraries that were themselves compiled against other libraries), things can get complex. On Windows, this can lead to link errors because some functions (usually the C/C++ runtime functions, when linked statically) are referenced more than once, and the linker can't choose which one to use. I don't know how this works on Linux, but I guess it could happen there, too.

Dissecting your own arguments

Your own arguments are somewhat biased:

Burden of compilation/linking of shared libraries?

The burden of compiling and linking to shared libraries, compared to compiling and linking to static libraries is non-existent. So this argument has no value.

Dynamically loading/unloading?

Dynamically loading/unloading a shared library could be a problem in a very limited use case. In normal cases, the OS loads/unloads the library when needed without your intervention, and anyway, your performance problems lie elsewhere.

Exposing C++ code with C interfaces?

As for using a C-function interface for your C++ code, I fail to understand why: you already link static libraries together with a C++ interface. Linking shared libraries is no different.

You would have a problem if you had different compilers to produce each library of your application, but this is not the case, as you already link your libraries statically.

A single file binary is easier?

You're right.

On Windows, the difference is negligible, but then, there is still the problem of DLL Hell, which disappears if you add the version to your library names or work with Windows XP.

On Linux, in addition to the Windows problem above, there is the fact that, by default, shared libraries need to be in certain system default directories to be usable, so you'll have to copy them there at install time (which can be a pain...) or change some default environment settings (which can be a pain, too...).
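
To be concrete, the usual workarounds look something like this (paths and names are illustrative):

    # Option 1: install into a standard directory and refresh the loader cache
    cp libmylib.so /usr/local/lib && ldconfig

    # Option 2: point the dynamic loader at the library's directory at run time
    LD_LIBRARY_PATH=/opt/myapp/lib ./myapp

    # Option 3: bake a search path relative to the executable in at link time
    g++ app.cpp -L. -lmylib -Wl,-rpath,'$ORIGIN' -o myapp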

Conclusion: Who is right?

Now, your problem is not "is my colleague right?". He is. And so are you.

Your problem is:

  1. What do you really want to achieve?
  2. Is the work necessary for this task worth it?

The first question is very important, as it seems to me that your arguments and your colleague's arguments are biased to lead to the conclusion that seems more natural for each of you.

Put it another way: each of you already knows what the ideal solution should be (according to your respective viewpoints), and each of you stacks up arguments to reach that solution.

There is no way to answer that hidden question...

^_^

Do a simple cost/benefit analysis - do you really need modularity, testability and reuse? Do you have the time to spend refactoring your code to get those features? Most importantly, if you do refactor, will the benefits you gain justify the time it took to perform the refactoring?

Unless you have issues with testing now, I'd recommend leaving your app as-is. Modularization is great but Linux has its own version of "DLL hell" (see ldconfig), and you've already indicated that reuse is not a necessity.

If you're asking the question and the answer isn't obvious, then stay where you are. If you haven't gotten to the point where building a monolithic application takes too long, or it's too much of a pain for your group to work on together, then there's no compelling reason to move to libraries. If you want, you can build a test framework that works on the application's files as they stand, or you can simply create another project that uses the same files but attaches a testing API and builds a library with that.

For shipping purposes, if you want to build libraries and ship one big executable, you can always link to them statically.

If modularity would help with development, i.e. you're always butting heads with other developers over file modifications, then libraries may help, but that's no guarantee either. Using good object-oriented code design will help regardless.

And there's no need to wrap any functions in C-callable interfaces to create a library, unless you want the library to be callable from C.

Shared libraries come with their headaches, but I think they are the right way to go here. I would say that in most cases you should be able to make parts of your application modular and reusable elsewhere in your business. Also, depending on the size of this monolithic executable, it may be easier to upload a set of updated libraries instead of one big file.

IMO, libraries in general lead to better, more testable code, and allow future projects to be created more efficiently because you're not reinventing the wheel.

In short, I agree with your colleague.

On Linux (and Windows) you can create a shared library using C++ and not have to load it using C function exports.

That is, you build classA.cpp into classA.so, and you build classB.cpp into classB(.exe), which links against classA.so. All you're really doing is splitting your application into multiple binary files. This does have the advantage that they are faster to compile and easier to manage, and you can write applications that load just that library code for testing.

Everything is still C++, everything links, but your .so is separate from your statically linked application.
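
A minimal sketch of that layout, following the file names above (on Linux the linker expects the lib prefix, hence libclassA.so):

    // classA.h -- an ordinary C++ interface; no C wrappers involved
    class A {
    public:
        void hello();             // defined in classA.cpp
    };

    // build the library, then link the application against it:
    //   g++ -fPIC -shared classA.cpp -o libclassA.so
    //   g++ classB.cpp -L. -lclassA -o classB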

Now, if you want to load a different object at runtime (that is, you don't know which one to load until runtime), then you would need to create a shared object with C exports, and you will also have to load those functions manually; you would not be able to use the linker to do this for you.
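
A rough sketch of that runtime-loading case (the interface, file, and symbol names are all hypothetical; error checks omitted for brevity):

    // plugin.cpp -- built with: g++ -fPIC -shared plugin.cpp -o plugin.so
    #include "base.h"                       // assumed common interface: class Base
    extern "C" Base* create_plugin() {      // C linkage keeps the name unmangled,
        return new PluginImpl();            // so dlsym() can look it up by name
    }

    // loader side (application), linked with -ldl on older glibc:
    #include <dlfcn.h>
    Base* load_plugin() {
        void* handle = dlopen("./plugin.so", RTLD_NOW);
        auto factory = reinterpret_cast<Base* (*)()>(dlsym(handle, "create_plugin"));
        return factory();                   // from here on it is ordinary C++ again
    }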

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow