Question

In Clean Architecture, a use case calls a presenter based on the agreed output object. When the presenter is called, it returns a ViewModel used by the view. That is fine until you have more than one view: a CLI and a Web view, for instance. If you have these two views, you need two different presenters too. But the use case would be the same for both presenters. Each view probably needs a different ViewModel, so each presenter needs to return different data.

The problem arises when each presenter returns different data. The use case would have to return two different types, which is difficult to achieve in strongly typed languages like Java or C++.

I have found this related question, where the user defines an abstract presenter that the use case uses, and each presenter returns a different view model. That design is fine until you try to implement it, because you run into the problem I have described.

Maybe I’m overthinking it or lack a good understanding of clean architecture. How should I solve this problem?


Solution

First, I am going to assume that you are using Uncle Bob's interpretation of clean architecture, and so I am quoting the source here:

For example, consider that the use case needs to call the presenter. However, this call must not be direct because that would violate The Dependency Rule: No name in an outer circle can be mentioned by an inner circle. So we have the use case call an interface (Shown here as Use Case Output Port) in the inner circle, and have the presenter in the outer circle implement it.

So your use case absolutely must not return different types for different presenters. That would simply break Clean Architecture.

The use case doesn't care about the specifics of your presentation layer (what Uncle Bob calls "Interface Adapters"); at most, it knows the kind of data your interface will need to present. So it creates the same model for all interfaces that might consume it.

That model is then passed to an abstraction of the presenter, which is later resolved to a specific presenter without the use case being aware of it.

The presenter then takes that same generic model and builds a view model that is, indeed, interface specific.
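To make this concrete, here is a minimal sketch in Java (which the question mentions), with hypothetical names: the use case produces a single output model and pushes it through an abstraction, and each presenter builds its own interface-specific view model from the same data.

```java
// Output model produced by the use case -- the same for every interface.
record OrderSummary(String orderId, int itemCount) {}

// Output port defined in the inner circle; presenters implement it outside.
interface OrderPresenter {
    void present(OrderSummary summary);
}

// Web presenter builds an HTML-oriented view model.
class WebOrderPresenter implements OrderPresenter {
    String viewModel;  // in a real app, a dedicated view-model class
    public void present(OrderSummary s) {
        viewModel = "<h1>Order " + s.orderId() + "</h1><p>" + s.itemCount() + " items</p>";
    }
}

// CLI presenter builds a plain-text view model from the very same model.
class CliOrderPresenter implements OrderPresenter {
    String viewModel;
    public void present(OrderSummary s) {
        viewModel = "Order " + s.orderId() + ": " + s.itemCount() + " items";
    }
}

// The use case depends only on the abstraction, never on a concrete presenter.
class ShowOrderUseCase {
    void execute(String orderId, OrderPresenter presenter) {
        // (fetching/processing elided) -- same output model for all interfaces
        presenter.present(new OrderSummary(orderId, 3));
    }
}
```

Note that the use case's signature never changes when a new interface is added; only a new implementation of `OrderPresenter` appears in the outer circle.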

This bundle of Presenter+ViewModel+View is, more or less, specific to your interface, be it web or cli, though you should probably strive to have them know as little as possible about each other. That, however, isn't really part of the core clean architecture per se.

I would argue that the whole point of defining use cases is to separate different... well... use cases. If your presenters need to return wildly different data, and it makes no sense for all of this data to live inside one model passed down from your use case, then you should probably redefine your use cases - it seems you are merging several of them into one.

Other tips

Let’s clear this up with some examples:


  • A progress indication is displayed after the user requests some intensive calculation

  • A menu is displayed after the user makes a choice

Both of these are use cases. Both are doable with the web or a CLI. Both require different Use Case Interactors. But if merely changing from CLI to web requires you to change the Use Case Interactor, then you've let Presenter details leak into the Use Case Interactor. You're making the Interactor do part of the Presenter's job.

You should be able to look at the Output Data and tell whether you're looking at the progress indicator or the menu. These can be completely different classes/data structures. But you shouldn't be able to tell whether it's going to be displayed on the web or on a CLI. That's the View Model's job.

This is what I believe @JKlen meant with:

This bundle of Presenter+ViewModel+View is, more or less, specific to your interface, be it web or cli

I fully support @JKlen's answer. Just thought I'd shed a little more light.

The problem arises when each presenter returns different data. The use-case has to return two different types. But this is difficult to achieve for strongly typed languages like Java or C++.

It's not difficult once you know the fix. What the Use Case Interactor "returns" depends on which Use Case Interactor it is (e.g., progress or menu). This works because some Presenters (not all) know how to handle the result of that particular Use Case Interactor. You just have to match them up correctly when you build the object graph, because sending a menu to a Progress Presenter is going to cause problems - web or CLI.
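A small Java sketch of that matching (all names hypothetical): each interactor has its own Output Data type and its own output port, so wiring a menu into a progress presenter simply doesn't compile. Which device renders it (web or CLI) is just a matter of which implementation you plug in.

```java
import java.util.List;

// Per-use-case output data: you can tell progress from menu by the type alone.
record ProgressOutput(int percentComplete) {}
record MenuOutput(List<String> options) {}

// Per-use-case output ports; a web or CLI presenter implements these.
interface ProgressPresenter { void present(ProgressOutput out); }
interface MenuPresenter { void present(MenuOutput out); }

// CLI implementation of the progress port; a web one would implement
// the same interface and render HTML instead.
class CliProgressPresenter implements ProgressPresenter {
    String rendered;
    public void present(ProgressOutput out) {
        rendered = "[" + out.percentComplete() + "%]";
    }
}

// The interactor for the intensive calculation reports through its own port.
class CalculationInteractor {
    void run(ProgressPresenter presenter) {
        // ...intensive calculation elided; report progress...
        presenter.present(new ProgressOutput(50));
    }
}
```

Passing a `MenuOutput` to a `ProgressPresenter` is a type error, so the "menu sent to a Progress Presenter" mistake is caught when you assemble the object graph, not at runtime.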

Let me try and complement the other answers by taking a slightly different perspective.

I think what you may be finding confusing is that there are (seemingly) a lot of "moving parts" in Clean Architecture, and if you're new to it, it's not obvious how they fit together. Many of the concepts seem like they are talking about something exotic that you haven't encountered before, but that's not actually the case.

So let's get rid of these complications, and let us just think about a single function. Let's just start with the approach that would feel straightforward to someone who's used to CRUD-based applications, and see how we can evolve the architecture from there.

Pull-based approach

Suppose you have a function like this:

    public ProcessingResult ProcessProducts(ProductCategory category) { ... }

So, this function implements some use case. It takes a ProductCategory, does something with it internally to perform some processing on a bunch of products, and returns a ProcessingResult - an object that contains some generalized information about the operation, and maybe a list of processed products. For the time being, and for the purposes of this discussion, we don't care what's going on inside the function, whether it's decoupled correctly, whether it follows Clean Architecture or not, etc. Let's just focus on its interface - the signature1 of the function.


1 For clarity, within this answer, signature refers to the name of the function, the types that appear in the parameter list, and the return type - the things other code depends on when it uses the function. Some languages formally don't consider the return type to be a part of the signature (you can't overload on the return type), but that's not useful when discussing design.


A use case interactor (which is, in this simplified example, not even an object - it's just this function), has input data and output data (a.k.a. an input model, and an output model). These are just generic names; you aren't actually going to use those names in your application - instead, you'll choose more meaningful names.

In this case the input model is just the ProductCategory class - it has some properties that represent certain details of a product category needed by the use case. That's what the word "model" means - a model is a representation of something. Similarly, the output model here is the ProcessingResult class.

OK. So, let's say that all the implementation details behind the ProcessProducts function are considered to be the "inner layer" (this inner layer could have layers inside it, but we're ignoring that for now). The function itself, and the types ProductCategory & ProcessingResult, belong to this same layer, but they are special because they are at the layer boundary (they are the API to the inner layer, if you will). Code from an outer layer will call this function, and it will refer to these types by name. In other words, code from an outer layer will directly depend on this function and the types that appear in its signature, but it will not know anything about the code behind the function (its implementation details) - which is what lets you change the two independently, as long as you don't have to change the signature of this function.

Introducing an outer layer - without a view model

Now, suppose you want to have two different views. The code related to these will live in your outer layer. One view is HTML, the other is plain text to be displayed as an output of a CLI tool.

Well, all you need to do is to call this function, take the result, and convert it to the appropriate format. Let's not use view models for now (you don't need view models for everything). For example:

    // In your web code:
    
    var result = ProcessProducts(category);   // controller invoking the use case

    // Presentation code 
    // (could be in the same function, but maybe it's in a separate function):

    // fill HTML elements with result.Summary
    // create an <ul>
    // for each product in result.ProcessedProducts, create an <li>

or:

    // In your CLI code:
    
    var result = ProcessProducts(category);   // controller invoking the use case

    // Presentation code
    // (could be in the same function, but maybe it's in a separate function):
    Console.WriteLine(result.Summary);
    foreach (var product in result.ProcessedProducts)
        Console.WriteLine(product.Summary);

So, at this point, your controller directly references the use case and coordinates the presentation logic itself.

View models

If your views have some nontrivial logic, and maybe add their own view-specific data, or if it's not convenient to work with the data returned by the use case, then introducing a view model as a level of indirection helps you deal with that.

With view models, the code is not very different from the one above, except that you don't create the view directly; instead, you take the result and create a view model from it. Perhaps you then return it, or maybe pass it to something that renders the view. Or you don't do any of that: if the framework you're using relies on data binding, you just update the view model, and the data-binding mechanism updates the connected view.
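As a brief illustration (in Java, with hypothetical names loosely mirroring the example above), the presenter-side code just maps the use-case result onto a view-model shape that's convenient for the view:

```java
import java.util.List;

// Types standing in for the use-case output model.
record ProcessedProduct(String name) {}
record ProcessingResult(String summary, List<ProcessedProduct> products) {}

// View model shaped for convenient rendering, not for the use case's needs.
record ProductListViewModel(String heading, List<String> lines) {}

class ProductListPresenter {
    // Convert the generic result into view-ready strings.
    ProductListViewModel toViewModel(ProcessingResult result) {
        List<String> lines = result.products().stream()
                .map(p -> "- " + p.name())
                .toList();
        return new ProductListViewModel(result.summary().toUpperCase(), lines);
    }
}
```

The view (or a data-binding mechanism) then renders the view model without ever touching the use-case types.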

Redesign towards a push-based interface

Now, what I described above is a "pull based" approach - you actively ask for ("pull") a result. Suppose you realize that you need to redesign towards a "push based" UI2 - i.e., you want to invoke the ProcessProducts function and have it initiate the update of some view after it completes processing.


2 I'm not saying that pushing data to the UI is better, just that it's an option. What I'm trying to get at is why Clean Architecture has the elements that it has.


Remember, you want the code in the use case to be written without reference to a concrete view, because, well, you have to support two very different views. You can't call the view/presenter directly from within, otherwise you break the dependency rule. Well, use dependency inversion.

Dependency inversion

You want to push the ProcessingResult to some output location, but you don't want the function to know what it is. So, you need some kind of... oh I dunno... output abstraction? Clean Architecture has this notion of an output boundary (a.k.a. output port) - an interface that abstracts away a dependency on something you need to push data to. Again, in your code, you'll give it a more meaningful name (the one I came up with here isn't great, I admit). In the example here, all this interface needs to have is a method that accepts ProcessingResult as a parameter:

    public interface IProcessingOutputPresenter {
        void Show(ProcessingResult result);
    }

So, now you redesign the function signature to something like this:

    public void ProcessProducts(ProductCategory category, IProcessingOutputPresenter presenter) { 
        // stuff happens...
        ProcessingResult result = <something>; 
        presenter.Show(result);
    }

Or maybe it's a long-running operation:

    public async Task ProcessProductsAsync(ProductCategory category, IProcessingOutputPresenter presenter) { 
        // stuff happens...
        ProcessingResult result = await <something>; 

        presenter.Show(result);
    }

So now, you can do this:

    // presenter class:
    public class WebPresenter : IProcessingOutputPresenter { ... }

    // In your web controller:    
    ProcessProducts(category, this.webPresenter);

or:

    // presenter class:
    public class CliPresenter : IProcessingOutputPresenter { ... }

    // In your CLI controller:
    ProcessProducts(category, this.cliPresenter);

or, in your tests:

    // mock presenter:
    public class MockPresenter : IProcessingOutputPresenter { ... }

    // In your test:
    var mockPresenter = new MockPresenter();
    ProcessProducts(category, mockPresenter);

So, now you've reused ProcessProducts code in three different contexts.

Basically, ProcessProducts doesn't have to worry about the view, it just "fires and forgets" by calling .Show(result). It's the job of the presenter to convert the input to whatever the view needs (suppose there's also a data-binding mechanism involved, that triggers view update when the view model changes).

It's the dependency structure that matters here, not whether you're using objects or functions. In fact, since IProcessingOutputPresenter is a single-method interface, you could just use a lambda - it's still the same pattern, same architectural idea. The lambda plays the role of the output port:

    public void ProcessProducts(ProductCategory category, Action<ProcessingResult> presenterAction);

    // then:
    ProcessProducts(category, (result) => presenter.Show(result));

It's the same thing.

What you have with this setup is the use case invoking the presenter through the output port.

You can also redesign your interface to allow for several simultaneous views:

    public void ProcessProducts(ProductCategory category, IEnumerable<IProcessingOutputPresenter> presenters)
    {
        // stuff happens...
        ProcessingResult result = <something>;
        foreach (var presenter in presenters)
            presenter.Show(result);
    }

What if you have an object, not just a function?

It's essentially the same basic idea, except that you'll typically pass the presenter (an implementation of the output boundary interface) to the constructor of the use case. Instead of passing the presenter from the controller as before, you might set it up in a dependency injection container, or even manually, in a composition root (e.g., in Main()):

    var cliPresenter = new CliPresenter();
    var productRepository = new ProductRepository(/* ... */);
    var productProcessor = new ProductProcessor(cliPresenter, productRepository);  // <----
    var cliController = new CliController(productProcessor);
    RunCliApplication(cliController);
    
    // (or something of the sort)

Note that the data access code has been injected in a similar way.

Or, if you want to be able to change the output destination dynamically, you can absolutely have your output destination be a parameter of a method of the use-case object (e.g., maybe output for different product categories should be displayed in two different views in the same application):

    productProcessor.Process(trackedProducts, graphPresenter);
    productProcessor.Process(untrackedProducts, listPresenter);

Same idea applies across layer boundaries

This same basic idea applies throughout the application - either call the inner layer directly, or implement an interface defined in an inner layer so that it can call you, even though that code is not aware of you.

It's just that you need to apply this technique judiciously. You don't need (or want) 5 layers of abstraction that all repeat the same data structures, because you'll get them wrong (even if you're experienced), and then you'll hesitate to redesign because it's too much work. Yes, you'll have some idea of what the different architectural elements are from the initial analysis, but in general, start simple, then decompose and restructure here and there as the code becomes more complicated - preventing it from getting too tangled as you go along. You can do this because the implementation details are hidden behind the interface of your use case. You can "reshape" the insides of the inner layer as it grows in complexity.

You keep the code maintainable by noticing that it's starting to become less maintainable, and doing something about it.

Here we started with a simple function, called by a controller that was initially doing the work of the presenter as well. After a couple of refactorings, you'll be able to extract different parts, define interfaces, separate responsibilities of different subcomponents, etc - eventually approaching something that's closer to the idealized Clean Architecture.

There are two takeaways here. First, you've probably seen these techniques used outside of the context of CA; CA doesn't do anything radically new or different. There's nothing too mysterious about CA. It just gives you a way to think about these things. Second, you don't have to figure out every element of the architecture at once (in fact, you run the risk of overengineering by doing so); instead, you want to postpone some of those decisions until you see what the code is turning out to be.

Licensed under: CC-BY-SA with attribution