Question

From what I understand of 'Clean Architecture', the controller determines which use case to execute depending on the user's input. If the input from the CLI, for example, is invalid, the controller already knows that something is wrong. Could the controller talk directly to the presenter in order to show an error, or does that violate anything? I have the following options in mind:

  1. The controller could have an extra use case for that, so that the interactor sends a message to the presenter. This seems like a detour. Is the idea of an extra use case for invalid inputs correct in that case?

  2. The controller could talk directly to the presenter. Or would that violate something, because the presenter would then depend on the controller?

What is the idea of the 'Clean Architecture' in that case?


Solution

If the controller talks directly to the presenter, you lose the ability to swap out controllers and presenters independently. They are now entangled. They know about each other.

If the controller sticks to talking through an abstraction to something that implements, let's say, a Use Case Input Port, then it neither knows nor cares which presenter displays error messages.

[Image: Robert C. Martin's Clean Architecture diagram, the concentric rings with the flow-of-control inset: Controller → Use Case Input Port → Use Case Interactor → Use Case Output Port → Presenter.]
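To make the shape of that boundary concrete, here is a minimal sketch of the abstractions involved. The names are hypothetical, not taken from the book or the diagram.

```java
// A minimal sketch of the boundary, with hypothetical names.
// The controller depends only on the input port; the interactor depends only on
// the output port. Neither side knows which concrete class sits on the other end.

// What the controller calls.
interface UseCaseInputPort {
    void handle(RequestModel request);
}

// What the interactor calls; any presenter can implement it.
interface UseCaseOutputPort {
    void present(ResponseModel response);
}

// Plain data crossing the boundary, uniform no matter where the input came from.
record RequestModel(String query) {}
record ResponseModel(String result, String error) {}
```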

You do not have to use an "extra use case". Each use case can be capable of formulating its response regardless of whether that response indicates a result or an error.
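As a rough illustration of that point, a single interactor can push either outcome through the same output port. The names continue the hypothetical sketch above.

```java
// A gateway to wherever the data lives; hypothetical, only here to make the sketch compile.
interface DataGateway {
    String fetch(String query);
}

// One use case, one output port: the response model carries either a result or an error.
class ShowDataInteractor implements UseCaseInputPort {
    private final UseCaseOutputPort output;
    private final DataGateway gateway;

    ShowDataInteractor(UseCaseOutputPort output, DataGateway gateway) {
        this.output = output;
        this.gateway = gateway;
    }

    @Override
    public void handle(RequestModel request) {
        if (request.query().isEmpty()) {
            // No separate "invalid input" use case: the same response path reports the error.
            output.present(new ResponseModel("", "nothing to show for an empty query"));
            return;
        }
        output.present(new ResponseModel(gateway.fetch(request.query()), ""));
    }
}
```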

Also, understand that the controller doesn't have to decide which use case it talks to. That is determined when this object graph is constructed. You don't see construction diagrammed here, but none of this stuff builds itself if it wants to maintain independence. This decision could have been made in main(), which is a handy place to put construction code.

Regarding davidh38's comment:

Why no extra usecase? Is the controller not doing something like if input == "show data" then ... else usecase.call_invalid_input? Can you elaborate on that please?

Remember that the controller's job is to be an Interface Adapter. You don't put business rules here. The CLI controller's job is to beat CLI input into something so uniform that nothing can tell it came from the CLI rather than the web, the GUI, or whatever. Violations of business-rule expectations are dealt with by things that understand those rules.
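As a sketch of that adapter role (hypothetical names again), a CLI controller can be little more than code that turns raw arguments into the uniform request model. Note that a missing argument becomes an empty string rather than null, which ties into the point a little further below.

```java
// Interface adapter only: no business rules, just beating CLI input into shape.
class CliController {
    private final UseCaseInputPort useCase; // wired up elsewhere, e.g. in main()

    CliController(UseCaseInputPort useCase) {
        this.useCase = useCase;
    }

    void accept(String[] args) {
        // Normalize: once this crosses the boundary, nothing can tell it came from the CLI.
        String query = args.length > 0 ? args[0].trim() : "";
        useCase.handle(new RequestModel(query));
    }
}
```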

That doesn't mean that controllers never deal with errors. But there are many options for dealing with errors that do not require a special use case for them. Returning error codes and throwing exceptions both send the flow of control right back where it came from: in this case, the blue ring. There may be times when that's appropriate. For example, something has happened that has destabilized the whole system in an unrecoverable way, and now it's time to roll over and die before the system starts sending the president threatening emails.

The more interesting case is when there exists a value that can be passed to the use case that defies the expectations placed on its type. I'm not talking about null, since it's evil. I'm talking about things with semantics, like an empty string, that make it obvious that displaying nothing without blowing up is OK. -1, "N/A", and the special symbols for unrecognized Unicode characters all come to mind.

And yes, you can have a use case for errors. All I said was that you don't have to use one. Most likely you're going to need a mix. Though hopefully you won't have to mix exceptions and error codes. Yuck.

Forgive me while I respond to Lavi's comment with what seems to have become a rant.

If we want the controller to be reusable so that we can use the very same controller for the web as for the command line,

Yes.

the controller should not be in charge of instantiating the output port. Right?

Yes.

It should rather be informed alongside with the controller's inputs in every call.

If that were the case, the outer layer (the blue one in this case) should be responsible for the injection of the output port. Am I right?

Clean Architecture doesn't tell you how to construct any of this. You're looking at an object graph. Nothing Mr. Martin has published tells us how it got built. I've talked about this before.

However, I'm a fan of reference passing. You'll probably find more recent info about that if you call it Pure Dependency Injection, but rest assured, it's the same thing.

That's important to understand because it colors the way I approach this problem. You don't have to do it my way. But if you do, here's what to think about:

How long do these things live? When is the earliest we can decide what they're going to be? How far up the call stack can we push their construction? Can we push construction all the way up to the program entry point? The highest place you can push this to, regardless of framework restrictions, was called the Composition Root by Mark Seemann. In normal programs we call it main().

Everything I see in this graph seems like something that could live for the entire life of the program. I see nothing here that must be ephemeral (a fancy way to describe things that blink in and out of existence). In other words, this object graph looks static. Guess what? main() is static.

That means we can build the whole thing before a single line of behavior code is executed. This pattern of building it before you run it isn't just a dependency injection pattern. This is something server code authors do because server code has to be stable. Server code must stay up for months at a time. So it's really nice if you can separate code that has a fixed memory footprint from code that dynamically allocates memory as the program runs. Because a memory leak is not your friend. It's nice if the places you have to hunt for one are few. And yes, you can leak memory in Java and C#. Just hold a reference to an ephemeral object longer than you need it.
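Just to make that last point concrete, a purely illustrative Java leak: a long-lived static collection quietly keeping ephemeral objects alive.

```java
import java.util.ArrayList;
import java.util.List;

class RequestLog {
    // Static, so it lives as long as the program. Every request model added here
    // stays reachable and is never garbage collected, even though each one was
    // meant to be ephemeral. That is the leak: a reference held longer than needed.
    private static final List<RequestModel> everyRequestEver = new ArrayList<>();

    static void record(RequestModel request) {
        everyRequestEver.add(request);
    }
}
```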

So if an object is going to live as long as the program, I'd prefer to build it in main(), not in the blue ring. That doesn't mean main() is a pile of procedural code. Use every language feature and creational pattern you like. Just make the behavior code wait until later.

main() simply isn't in this diagram. When this object graph exists, main() is on its last line, which is usually something like: runner.start();.
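Under those assumptions, main() might look something like this, reusing the hypothetical sketches above. The presenter and gateway are throwaway lambdas, and the controller call at the end plays the role of runner.start().

```java
public final class Main {
    public static void main(String[] args) {
        // Composition root: the whole object graph is built before any behavior runs.
        UseCaseOutputPort presenter = response ->
                System.out.println(response.error().isEmpty()
                        ? response.result()
                        : "error: " + response.error());
        DataGateway gateway = query -> "data for " + query;
        UseCaseInputPort interactor = new ShowDataInteractor(presenter, gateway);
        CliController controller = new CliController(interactor);

        // Behavior starts here, on main()'s last line.
        controller.accept(args);
    }
}
```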

But like I said, Mr. Martin is utterly silent on construction here. So all that stuff is up to you.

Licensed under: CC-BY-SA with attribution