Question

I will have the following components in my application:

  • DataAccess
  • DataAccess.Test
  • Business
  • Business.Test
  • Application

I was hoping to use Castle Windsor as the IoC container to glue the layers together, but I am not sure about the wiring design.

My question is: who should be responsible for registering the objects in Windsor? I have a couple of ideas:

  1. Each layer registers its own objects. To test the BL, the test harness would register mock classes for the DAL.
  2. Each layer registers the objects of its dependencies; for example, the business layer registers the components of the data access layer. To test the BL, the test harness would have to unload the "real" DAL objects and register the mock objects.
  3. The application (or test application) registers all the objects of its dependencies.

Can anyone help me with some ideas and pros/cons of the different approaches? Links to sample projects that use Castle Windsor in this way would be very helpful.


Solution

In general, all components of an application should be composed as late as possible, because that ensures maximum modularity and keeps the modules as loosely coupled as possible.

In practice, this means you should configure the container at the root of your application.

  • In a desktop application, that would be in the Main method (or very close to it)
  • In an ASP.NET application (including MVC), that would be in Global.asax
  • In WCF, that would be in a ServiceHostFactory
  • etc.

The container is simply the engine that composes the modules into a working application. In principle, you could write that code by hand (this is called Poor Man's DI), but it is much easier to use a DI container like Windsor.
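As a sketch of what that root looks like in a console application (the ProductService and repository types are hypothetical names, not from the question):

```csharp
using System;
using Castle.Windsor;
using Castle.MicroKernel.Registration;

// Hypothetical application types, for illustration only.
public interface IProductRepository { void Save(string name); }

public class SqlProductRepository : IProductRepository
{
    public void Save(string name) => Console.WriteLine($"saved {name}");
}

public class ProductService
{
    private readonly IProductRepository _repository;
    public ProductService(IProductRepository repository) => _repository = repository;
    public void Run() => _repository.Save("example");
}

public static class Program
{
    public static void Main()
    {
        // The Composition Root: the only place that knows about the container.
        using (var container = new WindsorContainer())
        {
            container.Register(
                Component.For<IProductRepository>().ImplementedBy<SqlProductRepository>(),
                Component.For<ProductService>());

            // Resolve the top-level object; Windsor composes the rest of the graph.
            var service = container.Resolve<ProductService>();
            service.Run();
        }
    }
}
```

Nothing outside Main ever references the container; every other class just declares its dependencies through its constructor.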

Such a Composition Root will ideally be the only piece of code at the application root, making the application a so-called Humble Executable (a term from the excellent book xUnit Test Patterns) that does not need unit tests itself.

Your tests should not need the container at all, since your objects and modules should be composable, and you can supply them with Test Doubles directly from the unit tests. It is best if you can design all of your modules to be container-agnostic.
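A container-free test then just news up the system under test with a Test Double; this sketch uses Moq and NUnit as one possible combination, and the ProductService/IProductRepository types are hypothetical:

```csharp
using Moq;
using NUnit.Framework;

[TestFixture]
public class ProductServiceTests
{
    [Test]
    public void GetProducts_DelegatesToRepository()
    {
        // No container anywhere: the test composes the object graph by hand.
        var repository = new Mock<IProductRepository>();
        var sut = new ProductService(repository.Object); // plain constructor injection

        sut.GetProducts();

        // Verify the BL delegated to the (mocked) DAL.
        repository.Verify(r => r.GetAll(), Times.Once());
    }
}
```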

Also, specifically in Windsor, you should encapsulate your component registration logic inside installers (types that implement IWindsorInstaller). See the documentation for more details.
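A minimal installer might look like this (the `MyApp.DataAccess` namespace is a placeholder for your own):

```csharp
using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

// Keeps all data-access registrations in one discoverable place,
// instead of scattering them through the Composition Root.
public class DataAccessInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            Classes.FromThisAssembly()
                   .InNamespace("MyApp.DataAccess")
                   .WithServiceDefaultInterfaces()
                   .LifestyleTransient());
    }
}

// At the Composition Root, one line picks up every installer in the assembly:
// container.Install(FromAssembly.This());
```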

Other tips

While Mark's answer is great for web scenarios, the key flaw with applying it for all architectures (namely rich-client - ie: WPF, WinForms, iOS, etc.) is the assumption that all components needed for an operation can/should be created at once.

For web servers this makes sense since every request is extremely short-lived and an ASP.NET MVC controller gets created by the underlying framework (no user code) for every request that comes in. Thus the controller and all its dependencies can easily be composed by a DI framework, and there is very little maintenance cost to doing so. Note that the web framework is responsible for managing the lifetime of the controller and for all purposes the lifetime of all its dependencies (which the DI framework will create/inject for you upon the controller's creation). It is totally fine that the dependencies live for the duration of the request and your user code does not need to manage the lifetime of components and sub-components itself. Also note that web servers are stateless across different requests (except for session state, but that's irrelevant for this discussion) and that you never have multiple controller/child-controller instances that need to live at the same time to service a single request.

In rich-client apps however this is very much not the case. If using an MVC/MVVM architecture (which you should!) a user's session is long-living and controllers create sub-controllers / sibling controllers as the user navigates through the app (see note about MVVM at the bottom). The analogy to the web world is that every user input (button click, operation performed) in a rich-client app is the equivalent of a request being received by the web framework. The big difference however is that you want the controllers in a rich-client app to stay alive between operations (very possible that the user does multiple operations on the same screen - which is governed by a particular controller) and also that sub-controllers get created and destroyed as the user performs different actions (think about a tab control that lazily creates the tab if the user navigates to it, or a piece of UI that only needs to get loaded if the user performs particular actions on a screen).

Both these characteristics mean that it's the user code that needs to manage the lifetime of controllers/sub-controllers, and that the controllers' dependencies should NOT all be created upfront (ie: sub-controllers, view-models, other presentation components etc.). If you use a DI framework to perform these responsibilities you will end up with not only a lot more code where it doesn't belong (See: Constructor over-injection anti-pattern) but you will also need to pass along a dependency container throughout most of your presentation layer so that your components can use it to create their sub-components when needed.

Why is it bad that my user-code has access to the DI container?

1) The dependency container holds references to a lot of components in your app. Passing this bad boy around to every component that needs to create/manage another sub-component is the equivalent of using globals in your architecture. Worse still, any sub-component can also register new components into the container, so soon enough it will become a global storage as well. Developers will throw objects into the container just to pass around data between components (either between sibling controllers or between deep controller hierarchies - ie: an ancestor controller needs to grab data from a grandparent controller). Note that in the web world, where the container is not passed around to user code, this is never a problem.

2) The other problem with dependency containers versus service locators / factories / direct object instantiation is that resolving from a container makes it completely ambiguous whether you are CREATING a component or simply REUSING an existing one. Instead it is left up to a centralized configuration (ie: bootstrapper / Composition Root) to figure out what the lifetime of the component is. In certain cases this is okay (ie: web controllers, where it is not user code that needs to manage the component's lifetime but the runtime request-processing framework itself). It is extremely problematic, however, when the design of your components should INDICATE whether it's their responsibility to manage a component and what its lifetime should be (Example: a phone app pops up a sheet that asks the user for some info. This is achieved by a controller creating a sub-controller which governs the overlaying sheet. Once the user enters some info the sheet is dismissed, and control is returned to the initial controller, which still maintains state from what the user was doing prior). If DI is used to resolve the sheet sub-controller, it's ambiguous what its lifetime should be or who should be responsible for managing it (the initiating controller). Compare this to the explicit responsibility dictated by the use of other mechanisms.

Scenario A:

// not sure whether I'm responsible for creating the thing or not
DependencyContainer.GimmeA<Thing>()

Scenario B:

// responsibility is clear that this component is responsible for creation

Factory.CreateMeA<Thing>()
// or simply
new Thing()

Scenario C:

// responsibility is clear that this component is not responsible for creation, but rather only consumption

ServiceLocator.GetMeTheExisting<Thing>()
// or simply
ServiceLocator.Thing

As you can see, DI makes it unclear who is responsible for the lifetime management of the sub-component.

NOTE: Technically speaking many DI frameworks do have some way of creating components lazily (See: How not to do dependency injection - the static or singleton container) which is a lot better than passing the container around, but you are still paying the cost of mutating your code to pass around creation functions everywhere, you lack first-level support for passing in valid constructor parameters during creation, and at the end of the day you are still using an indirection mechanism unnecessarily in places where the only benefit is to achieve testability, which can be achieved in better, simpler ways (see below).
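For completeness, Windsor's typed factory facility is one such lazy-creation mechanism. This sketch assumes a hypothetical SheetController; Windsor generates the factory implementation behind the interface, so user code depends on the factory rather than on the container:

```csharp
using Castle.Facilities.TypedFactory;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// Hypothetical rich-client controller.
public class SheetController { /* ... */ }

// Windsor auto-implements this interface; no container reference leaks out.
public interface ISheetControllerFactory
{
    SheetController Create();        // resolves a fresh transient instance
    void Release(SheetController c); // hands lifetime back to the container
}

public static class Bootstrap
{
    public static IWindsorContainer Configure()
    {
        var container = new WindsorContainer();
        container.AddFacility<TypedFactoryFacility>();
        container.Register(
            Component.For<ISheetControllerFactory>().AsFactory(),
            Component.For<SheetController>().LifestyleTransient());
        return container;
    }
}
```

A parent controller can then take an ISheetControllerFactory in its constructor and create/release sub-controllers on demand - though, as noted above, you are still threading creation functions through your presentation layer.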

What does all this mean?

It means DI is appropriate for certain scenarios and inappropriate for others. In rich-client applications it happens to carry a lot of the downsides of DI with very few of the upsides. The further your app scales out in complexity, the bigger the maintenance costs will grow. It also carries the grave potential for misuse, which, depending on how tight your team communication and code review processes are, can be anywhere from a non-issue to a severe tech-debt cost. There is a myth going around that Service Locators or Factories or good old Instantiation are somehow bad and outdated mechanisms simply because they may not be the optimal mechanism in the web app world, where perhaps most people work. We should not over-generalize these learnings to all scenarios and view everything as nails just because we've learned to wield a particular hammer.

My recommendation FOR RICH-CLIENT APPS is to use the minimal mechanism that meets the requirements for each component at hand. 80% of the time this should be direct instantiation. Service locators can be used to house your main business layer components (ie: application services which are generally singleton in nature), and of course Factories and even the Singleton pattern also have their place. There is nothing to say you can't use a DI framework hidden behind your service locator to create your business layer dependencies and everything they depend on in one go - if that ends up making your life easier in that layer, and that layer doesn't exhibit the lazy loading which rich-client presentation layers overwhelmingly do. Just make sure to shield your user code from access to that container so that you can prevent the mess that passing a DI container around can create.
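One way to read that recommendation as code (purely illustrative; the service names are made up):

```csharp
using Castle.Windsor;

// Hypothetical business services.
public interface IOrderService { /* ... */ }
public interface ICustomerService { /* ... */ }

// A hand-rolled service locator for long-lived business services.
// If a Windsor container is used at all, it stays private inside this
// class, so presentation code never sees or passes it around.
public static class AppServices
{
    private static readonly IWindsorContainer Container = Bootstrap();

    public static IOrderService Orders => Container.Resolve<IOrderService>();
    public static ICustomerService Customers => Container.Resolve<ICustomerService>();

    private static IWindsorContainer Bootstrap()
    {
        var container = new WindsorContainer();
        // Register singleton-style business services here (Windsor's
        // default lifestyle is singleton, which suits this layer).
        return container;
    }
}

// Presentation code then uses direct instantiation for sub-controllers
// and the locator only for shared business services, e.g.:
// var controller = new OrdersController(AppServices.Orders);
```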

What about testability?

Testability can absolutely be achieved without a DI framework. I recommend using an interception framework such as UnitBox (free) or TypeMock (pricey). These frameworks give you the tools you need to get around the problem at hand (how do you mock out instantiation and static calls in C#) and do not require you to change your whole architecture to get around them (which unfortunately is where the trend has gone in the .NET/Java world). It is wiser to find a solution to the problem at hand and use the natural language mechanisms and patterns optimal for the underlying component than to try to fit every square peg into the round DI hole. Once you start using these simpler, more specific mechanisms you will notice there is very little need for DI in your codebase, if any at all.

NOTE: For MVVM architectures

In basic MVVM architectures view-models effectively take on the responsibility of controllers, so for all purposes consider the 'controller' wording above to apply to 'view-model'. Basic MVVM works fine for small apps but as the complexity of an app grows you may want to use an MVCVM approach. View-models become mostly dumb DTOs to facilitate data-binding to the view while interaction with the business layer and between groups of view-models representing screens/sub-screens gets encapsulated into explicit controller/sub-controller components. In either architecture the responsibility of controllers exists and exhibits the same characteristics discussed above.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow