Question

I am using Ninject version 3 in an MVVM-type scenario in a .NET WPF application. In a particular instance I am using a class to act as coordinator between the view and its view model, meaning the coordinator class is created first and the view and view model (along with other needed services) are injected into it.

I have bindings for the services, but I have not created explicit bindings for the view/view model classes, instead relying on Ninject's implicit self-binding since these are concrete types and not interfaces.

A conceptual version of this scenario in a console app is shown below:

class Program
{
    static void Main(string[] args)
    {
        StandardKernel kernel = new StandardKernel();

        kernel.Bind<IViewService>().To<ViewService>();
        //kernel.Bind<View>().ToSelf();
        //kernel.Bind<ViewModel>().ToSelf();

        ViewCoordinator viewCoordinator = kernel.Get<ViewCoordinator>();
    }
}

public class View
{

}

public class ViewModel
{

}

public interface IViewService
{

}

public class ViewService : IViewService
{

}

public class ViewCoordinator
{
    public ViewCoordinator()
    {

    }

    public ViewCoordinator(View view, ViewModel viewModel, IViewService viewService)
    {

    }
}

If you run this code as-is, the kernel.Get<> call will instantiate the ViewCoordinator class using the parameterless constructor instead of the one with the dependencies. However, if you remove the parameterless constructor, Ninject will successfully instantiate the class with the other constructor. This is surprising since Ninject will typically use the constructor with the most arguments that it can satisfy.

Clearly it can satisfy them all thanks to implicit self-binding. But if it doesn't have an explicit binding for one of the arguments, it seems to look for alternate constructors it can use before checking whether it can fall back on implicit self-binding. If you uncomment the explicit Bind<>().ToSelf() lines, the ViewCoordinator class will be instantiated correctly even when the parameterless constructor is present.

I don't really want to have to add explicit self-bindings for all the views and view models that may need this (even though I know that burden can be lessened by using convention-based registration). Is this behavior by design? Is there any way to tell Ninject to check for implicit self-binding before checking for other usable constructors?
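For reference, the convention-based registration mentioned above could look something like the sketch below, using the Ninject.Extensions.Conventions package. This is an illustration only; it assumes the views and view models live in the executing assembly and follow a "View"/"ViewModel" naming convention:

```csharp
using Ninject;
using Ninject.Extensions.Conventions;

// Sketch: requires the Ninject.Extensions.Conventions NuGet package.
// Explicitly self-binds every concrete class whose name ends in
// "View" or "ViewModel", so implicit self-binding is never needed.
var kernel = new StandardKernel();

kernel.Bind(x => x
    .FromThisAssembly()
    .SelectAllClasses()
    .Where(t => t.Name.EndsWith("View") || t.Name.EndsWith("ViewModel"))
    .BindToSelf());
```

This keeps the registrations in one place, but it still means maintaining an explicit (if automated) binding step, which is what I was hoping to avoid.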

UPDATE

Based on cvbarros' answer I was able to get this to work by writing my own implementation of IConstructorScorer. Here are the changes I made to the existing code:

using Ninject;
using Ninject.Selection.Heuristics;

class Program
{
    static void Main(string[] args)
    {
        StandardKernel kernel = new StandardKernel();

        kernel.Components.RemoveAll<IConstructorScorer>();
        kernel.Components.Add<IConstructorScorer, MyConstructorScorer>();

        kernel.Bind<IViewService>().To<ViewService>();

        ViewCoordinator viewCoordinator = kernel.Get<ViewCoordinator>();
    }
}


using System;
using System.Collections;
using System.Linq;
using Ninject.Activation;
using Ninject.Planning.Targets;
using Ninject.Selection.Heuristics;

public class MyConstructorScorer : StandardConstructorScorer
{
    protected override bool BindingExists(IContext context, ITarget target)
    {
        // First ask the standard scorer whether an explicit binding exists.
        bool bindingExists = base.BindingExists(context, target);

        if (!bindingExists)
        {
            // Fall back to the same test Ninject's SelfBindingResolver uses
            // to decide whether a type is implicitly self-bindable.
            Type targetType = this.GetTargetType(target);

            bindingExists =
                !targetType.IsInterface
                && !targetType.IsAbstract
                && !targetType.IsValueType
                && targetType != typeof(string)
                && !targetType.ContainsGenericParameters;
        }

        return bindingExists;
    }

    // Copied from StandardConstructorScorer, where it is declared private:
    // unwraps array and IEnumerable<T> targets to their element type.
    private Type GetTargetType(ITarget target)
    {
        var targetType = target.Type;
        if (targetType.IsArray)
        {
            targetType = targetType.GetElementType();
        }

        if (targetType.IsGenericType && targetType.GetInterfaces().Any(type => type == typeof(IEnumerable)))
        {
            targetType = targetType.GetGenericArguments()[0];
        }

        return targetType;
    }

}

The new scorer overrides BindingExists; when the base implementation reports that no binding exists, it checks whether the type is implicitly self-bindable. If it is, it returns true, which tells Ninject that there is a valid binding for that type.

The code making this check is copied from the SelfBindingResolver class in the Ninject source code. The GetTargetType code had to be copied from the StandardConstructorScorer since it's declared there as private instead of protected.

My application is now working correctly, and so far I haven't seen any negative side effects from making this change. If anyone knows of problems it could cause, I would welcome further input.


Solution

By default, Ninject will use the constructor with the most parameters it can satisfy, but only bindings that are actually defined count toward that score (in your case they are implicit). Implicitly self-bindable types carry no weight when selecting which constructor to use.

You can mark which constructor you want to use by applying the [Inject] attribute to it; this ensures that constructor is selected.
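Applied to the ViewCoordinator from the question, that would look like this:

```csharp
using Ninject;

public class ViewCoordinator
{
    public ViewCoordinator()
    {
    }

    // [Inject] tells Ninject to use this constructor regardless of
    // how the candidate constructors are scored.
    [Inject]
    public ViewCoordinator(View view, ViewModel viewModel, IViewService viewService)
    {
    }
}
```

The trade-off is that every affected view/view-model coordinator needs the attribute, and your classes take a dependency on the Ninject assembly.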

If you don't want that, you can examine StandardConstructorScorer to see if that will fit your needs. If not, you can replace the IConstructorScorer component of the Kernel with your own implementation.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow