Question

Loose coupling is, to some developers, the holy grail of well-engineered software. It's certainly a good thing when it makes code more flexible in the face of changes that are likely to occur in the foreseeable future, or avoids code duplication.

On the other hand, efforts to loosely couple components increase the amount of indirection in a program, thus increasing its complexity, often making it more difficult to understand and often making it less efficient.

Do you consider a focus on loose coupling without any use cases for the loose coupling (such as avoiding code duplication or planning for changes that are likely to occur in the foreseeable future) to be an anti-pattern? Can loose coupling fall under the umbrella of YAGNI?

Solution

Somewhat.

Sometimes loose coupling that comes without too much effort is fine, even if you don't have specific requirements demanding that some module be decoupled. The low-hanging fruit, as it were.

On the other hand, overengineering for ridiculous amounts of change adds a lot of unnecessary complexity and effort. YAGNI, as you say, hits it on the head.

OTHER TIPS

Is programming practice X good or bad? Clearly, the answer is always "it depends."

If you're looking at your code, wondering what "patterns" you can inject, then you're doing it wrong.

If you are building your software so that unrelated objects don't fiddle around with each other, then you're doing it right.

If you're "engineering" your solution so that it can be infinitely extended and changed, then you're actually making it more complicated.

I think at the end of the day, you're left with the single truth: is it more or less complicated to have the objects decoupled? If it is less complicated to couple them, then that is the correct solution. If it is less complicated to decouple them, then that is the right solution.

(I am presently working in a fairly small codebase that does a simple job in a very complicated way, and part of what makes it so complicated is the lack of understanding of the terms "coupling" and "cohesion" on the part of the original developers.)

I think what you're getting at here is the concept of cohesion. Does this code have a good purpose? Can I internalize that purpose and understand the "big picture" of what's going on?

Loose coupling without that cohesion can lead to hard-to-follow code, not just because there are many more source files (assuming these are separate classes), but because no single class seems to have a purpose.

From an agile perspective, I might suggest that such loose coupling is an anti-pattern. Without cohesion, or even use cases for the coupling, you cannot write sensible unit tests and cannot verify the code's purpose. Now, agile development can lead to loose coupling, for example when test-driven development is used. But if the right tests were created, in the right order, then it's likely there is both good cohesion and loose coupling. And you're talking only about the cases where there is clearly no cohesion.

Again from the agile perspective, you don't want this artificial level of indirection because it is wasted effort on something that probably won't be needed anyway. It's much easier to refactor when the need is real.

Overall, you want tight coupling within your modules and loose coupling between them. Without that internal coupling, you probably do not have cohesion.

For the majority of questions like this, the answer is "it depends." On the whole if I can make a design logically loosely coupled in ways that make sense, without a major overhead to doing so, I will. Avoiding unnecessary coupling in code is, to my mind, an entirely worthwhile design goal.

Once it comes to a situation where it seems like components should logically be tightly coupled, I'll be looking for a compelling argument before I start breaking them up.

I guess the principle I work to with most of these types of practice is one of inertia. I have an idea of how I would like my code to work, and if I can do it that way without making life any harder, then I will. If doing it will make development harder but maintenance and future work easier, I try to estimate whether it will be more work across the lifetime of the code and use that as my guide. Otherwise it would need to be a deliberate design point to be worth going with.

The simple answer is that loose coupling is good when done correctly.

If the principle of one function, one purpose is followed, then it should be easy enough to follow what is going on. Loosely coupled code then follows as a matter of course, without any extra effort.

Simple design rules:

1. Don't build knowledge of multiple items into a single point (as pointed out everywhere, it depends) unless you are building a facade interface.
2. One function, one purpose (that purpose may be multi-faceted, as in a facade).
3. One module, one clear set of interrelated functions, one clear purpose.
4. If you cannot simply unit test it, then it does not have a simple purpose.
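As a rough C sketch of rules 1 to 3 (all the names here are hypothetical, made up purely for illustration), each module has one purpose and only the facade is allowed to know about more than one of them:

```c
/* audio module: one purpose -- play sounds */
void audio_play(const char *clip_name);

/* display module: one purpose -- draw frames */
void display_draw_frame(void);

/* facade: the single point that is allowed to know about both subsystems */
void game_frame(const char *tick_clip)
{
    display_draw_frame();   /* the facade coordinates the subsystems...  */
    audio_play(tick_clip);  /* ...which know nothing about each other    */
}
```

Rule 4 then falls out naturally: the audio and display functions can each be unit tested in isolation, and only the facade needs a test that touches both.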

All these comments about it being easier to refactor later are a load of codswallop. Once the knowledge gets built into many places, particularly in distributed systems, the cost of refactoring, the rollout synchronisation, and just about every other cost blow it so far out of consideration that in most cases the system ends up being trashed because of it.

The sad thing about software development these days is that 90% of people develop new systems, have no ability to comprehend old systems, and are never around when a system reaches such a poor state of health through continual refactoring of bits and pieces.

It doesn't matter how tightly coupled one thing is to another if that other thing never changes. I've found it generally more productive over the years to focus on seeking fewer reasons for things to change, to seek stability, than to make them easier to change by trying to achieve the loosest form of coupling possible. Decoupling I've found to be very useful, to the point where I sometimes favor modest code duplication to decouple packages. As a basic example, I had a choice of using my math library to implement an image library. I didn't, and instead duplicated some basic math functions which were trivial to copy.

Now my image library is completely independent of the math library, such that no matter what sort of changes I make to my math lib, they won't affect the image library. That's putting stability first and foremost. The image library is more stable now, as in having drastically fewer reasons to change, since it is decoupled from any other library that could change (besides the C standard library, which hopefully should never change). As a bonus it's also easy to deploy, since it's a standalone library that doesn't require pulling in a bunch of other libs to build it and use it.
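As a rough sketch of that trade-off in C (the names here are hypothetical, not the actual libraries involved), the image library carries its own copy of a trivial helper instead of linking against the math library:

```c
/* image_lib.c -- depends only on the C standard library */
#include <stddef.h>

/* trivial helper duplicated from the math library so the image library
 * stays completely independent of it */
static float img_lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}

/* example use: linearly blend two 8-bit grayscale scanlines */
void blend_scanlines(const unsigned char *src_a, const unsigned char *src_b,
                     unsigned char *dst, size_t width, float t)
{
    for (size_t i = 0; i < width; ++i)
        dst[i] = (unsigned char)img_lerp((float)src_a[i], (float)src_b[i], t);
}
```

The cost is a duplicated one-liner; the payoff is that no change to the math library can ever ripple into the image library.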

Stability is very helpful to me. I like building a collection of well-tested code which has fewer and fewer reasons to ever change in the future. That's not a pipe dream; I have C code I've been using and reusing since the late 80s which hasn't changed at all since then. It is admittedly low-level stuff like pixel-oriented and geometry-related code, while a lot of my higher-level stuff became obsolete, but it is something that still helps a lot to have around. That almost always means a library that relies on fewer and fewer things, ideally nothing external at all. Reliability goes up and up if your software increasingly depends on stable foundations that find few or no reasons to change. Fewer moving parts is really nice, even if in practice the moving parts far outnumber the stable parts. It still helps my sanity a lot to know that the entirety of a codebase doesn't consist of moving parts.

Loose coupling is in the same vein, but I often find that loose coupling is much less stable than no coupling. Unless you're working with far better interface designers, and clients who change their minds far less, than any I've ever worked with, even pure interfaces often find reasons to change in ways that still cause cascading breakages throughout the code. The idea that stability can be achieved by directing dependencies towards the abstract rather than the concrete is only useful if the interface design is easier to get right the first time around than the implementation. I often find it reversed: a developer might have created a very good, if not wonderful, implementation given the design requirements they thought they should fulfill, only to find later that the design requirements have changed completely. If you ask me, with large-scale codebases trying to fulfill ever-changing requirements, design is much harder to get right than implementation, and in that case separating abstract designs from concrete implementations doesn't help much if the abstract design is the thing most prone to change.

So I like favoring stability and complete decoupling so that I can at least confidently say, "This little isolated library that has been used for years and secured by thorough testing has almost no probability of requiring changes no matter what goes on in the chaotic outside world." It gives me a little slice of sanity no matter what sort of design changes are called for outside.

Coupling and Stability, ECS Example

I also love entity-component systems, and they introduce a lot of tight coupling, because the system-to-component dependencies all access and manipulate raw data directly, like so:

[Diagram: an entity-component system in which systems depend directly on raw component data]

All the dependencies here are pretty tight, since components just expose raw data. The dependencies aren't flowing towards abstractions; they're flowing towards raw data, which means that each system has the maximum amount of knowledge possible about each type of component it requests access to. Components have no functionality of their own; all the systems access and tamper with the raw data. However, it's very easy to reason about a system like this since it's so flat. If a texture comes out screwy, then with this design you know immediately that only the rendering and painting systems access texture components, and you can probably quickly rule out the rendering system since it only reads from textures conceptually.
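A rough sketch of what that flat arrangement can look like in C (the component and system names are hypothetical, not from any actual codebase): components are plain structs, and systems are free functions that read and write that data directly.

```c
#include <stddef.h>

typedef struct { float x, y; }   PositionComponent;  /* raw data only */
typedef struct { float dx, dy; } MotionComponent;    /* raw data only */
typedef struct {
    int width, height;
    unsigned char *pixels;
} TextureComponent;                                  /* raw data only */

/* motion system: knows everything about position and motion data,
 * and nothing about anything else */
void motion_system(PositionComponent *pos, const MotionComponent *mot,
                   size_t count, float dt)
{
    for (size_t i = 0; i < count; ++i) {
        pos[i].x += mot[i].dx * dt;
        pos[i].y += mot[i].dy * dt;
    }
}

/* rendering system: only *reads* textures, which is what makes it easy
 * to rule out when a texture comes out screwy */
void render_system(const PositionComponent *pos, const TextureComponent *tex,
                   size_t count);
```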

Meanwhile a loosely coupled alternative might be this:

[Diagram: a loosely coupled object-oriented design in which dependencies flow towards abstract interfaces]

... with all the dependencies flowing towards abstract functions, not data, and every single thing in that diagram exposing a public interface and functionality of its own. Here all the dependencies might be very loose. The objects might not even depend on each other directly, interacting only through pure interfaces. Still, it's very hard to reason about this system, especially if something goes wrong, given the complex tangle of interactions. There will also be more interactions (more coupling, albeit looser) than in the ECS, because the entities have to know about the components they aggregate, even if they only know about each other's abstract public interfaces.
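For contrast, a minimal C sketch of the loosely coupled alternative (again with hypothetical names): every dependency goes through an abstract interface of function pointers, and the entity only holds those interfaces.

```c
/* abstract motion interface: callers depend on functions, not data */
typedef struct IMotion {
    void  (*advance)(struct IMotion *self, float dt);
    float (*speed)(const struct IMotion *self);
} IMotion;

/* abstract texture interface */
typedef struct ITexture {
    int (*width)(const struct ITexture *self);
    int (*height)(const struct ITexture *self);
} ITexture;

/* an entity aggregates components but only knows their public interfaces */
typedef struct Entity {
    IMotion  *motion;
    ITexture *texture;
} Entity;

void entity_update(Entity *e, float dt)
{
    /* the dependency is looser, but every interaction goes through another
     * layer of indirection, which is what makes the tangle harder to trace */
    e->motion->advance(e->motion, dt);
}
```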

Also, if there are design changes to anything, you get more cascading breakages than with the ECS, and there will typically be more reasons and temptations for design changes, since every single thing is trying to provide a nice object-oriented interface and abstraction. That immediately means each and every little thing will be trying to impose constraints and limitations on the design, and those constraints are often what warrant design changes. Functionality is much more constrained and has to make many more design assumptions than raw data.

I've found in practice the above type of "flat" ECS to be much easier to reason about than even the most loosely coupled systems with a complex spiderweb of loose dependencies and, most importantly to me, I find very few reasons for the ECS version to ever need to change any existing components, since the components depended upon have no responsibility except to provide the data the systems need in order to function. Compare the difficulty of designing a pure IMotion interface and a concrete motion object implementing it, providing sophisticated functionality while maintaining invariants over private data, with that of a motion component that only needs to provide the raw data relevant to the problem and doesn't bother with functionality.

Functionality is so much harder to get right than data, which is why I think it's often preferable to direct the flow of dependencies towards data. After all, how many vector/matrix libraries are out there? How many of them use the exact same data representation and only differ subtly in functionality? Countless, and yet we still have so many, in spite of the identical data representations, because we want subtle differences in functionality. How many image libraries are out there? How many of them represent pixels in a different and unique way? Hardly any, which again shows that functionality is much more unstable and prone to design changes than data in many scenarios. Of course at some point we need functionality, but you can design systems where the bulk of the dependencies flow towards data, and not towards abstractions or functionality in general. That is prioritizing stability above coupling.

The most stable functions I've ever written (the kind I've been using and reusing since the late 80s without having to change them at all) were all ones that relied on raw data: a geometry function that just accepted an array of floats and integers rather than depending on a complex Mesh object or IMesh interface, or a vector/matrix multiplication that just depended on float[] or double[] rather than on FancyMatrixObjectWhichWillRequireDesignChangesNextYearAndDeprecateWhatWeUse.
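A minimal sketch of that kind of signature (hypothetical code, not the actual functions described above): it depends on nothing but plain float arrays, so no redesign of an upstream matrix or mesh type can ever break it.

```c
#include <stddef.h>

/* 4x4 matrix multiply over plain float arrays in row-major order */
void mat4_mul(const float a[16], const float b[16], float out[16])
{
    for (size_t row = 0; row < 4; ++row)
        for (size_t col = 0; col < 4; ++col) {
            float sum = 0.0f;
            for (size_t k = 0; k < 4; ++k)
                sum += a[row * 4 + k] * b[k * 4 + col];
            out[row * 4 + col] = sum;
        }
}
```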

Licensed under: CC-BY-SA with attribution