Question

I am an amateur developer and I deploy my (home-oriented) code to containers. This is usually Python and JavaScript.

JavaScript, when saving dependencies for a later npm install, will pin the libraries to exact versions (recorded in package-lock.json). It is also possible (and recommended) to do the same in Python via requirements.txt.
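
For illustration, exact pinning in a requirements.txt looks like this (the package names and versions are only examples):

    # requirements.txt: every dependency fixed to one exact version
    requests==2.31.0
    flask==3.0.2
    # "pip install -r requirements.txt" reproduces exactly this set,
    # today or a year from now.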

This allows for strong reproducibility: you always know what is built from what.

The drawback is that one ends up with outdated libraries. That may not be a problem when there are good tests (you may be relying on a "feature" which is actually a bug, but this may not matter as long as it works and passes the tests), except that there may be security vulnerabilities, and you will be unaware of them.

My question: why is the default to pin the exact version, instead of the major one?

My understanding is that, depending on the library, "major" may mean different things, and "minor" versions may still break things even when the library was used according to the documentation (and not through a bypass, shortcut, or undocumented feature).
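
For contrast, here is a sketch of what pinning only to the major version could look like with pip's range specifiers (the constraints are hypothetical):

    # Accept any 2.x release: minor and patch updates arrive automatically
    requests>=2,<3
    # The compatible-release operator offers finer-grained middle grounds:
    #   flask~=3.0    means >=3.0, <4.0   (minor and patch updates allowed)
    #   flask~=3.0.2  means >=3.0.2, <3.1 (patch updates only)
    flask~=3.0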

On the other hand, if a library cannot commit to non-breaking changes for documented usage, it may not be a good library to begin with.

Solution

The trade-off being made here favors highly reproducible builds over having the latest dependencies.

Why would you want highly reproducible builds? There are a lot of reasons.

You can't rely on the versioning of the dependency. Although Semantic Versioning has rules, there's no guarantee that a third-party dependency follows them. Even if the maintainers are trying, a mistake can introduce a breaking change into a minor or patch release of a project that claims to follow Semantic Versioning.
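
A hedged sketch of that failure mode, in requirements.txt terms (the package and its versions are hypothetical):

    # A range silently picks up whatever patch release exists at install
    # time. If the maintainer accidentally ships a breaking change in
    # 1.4.3, tomorrow's build gets it even though yesterday's got 1.4.2.
    somelib~=1.4.0

    # An exact pin is immune to that mistake until you choose to move:
    # somelib==1.4.2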

If you have a legacy system, your test coverage may be weak in some areas. You know exactly how a particular version of a dependency behaves and don't want anything else, since you may not easily detect a breaking change in functionality.

If you are operating in an environment where you need to maintain a highly managed configuration, you want to account for any change to a dependency's version, perform an appropriate risk assessment, and update at a time of your choosing.
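
In such an environment, pip can go one step beyond version pinning and verify artifact hashes too; a sketch, where the digest is a placeholder for the real hash of the artifact you vetted:

    # Install with: pip install --require-hashes -r requirements.txt
    requests==2.31.0 \
        --hash=sha256:0123abcd...   # placeholder, not a real digest
    # Anything whose hash differs from what you recorded is rejected,
    # so a dependency cannot change underneath you unnoticed.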

There are probably more cases as well. The short story is that if you're building a system in a professional context, there are more reasons to favor slightly more manual upgrading of dependencies than not. If you're working in a professional context, you're probably also using tools that tell you when your dependencies are out of date, or that monitor them for vulnerabilities and report on them; either would trigger a review and perhaps a planned update based on the value of the changes in the dependency.
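
In the Python ecosystem, for instance, that monitoring can start with two commands (pip-audit is the PyPA audit tool; exact flags may vary between versions):

    # List pinned packages that have newer releases available
    pip list --outdated

    # Check pinned requirements against known-vulnerability databases
    pip-audit -r requirements.txt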

It seems like pinning to the exact version by default, and making the upgrade to a newer version an explicit choice, does the most good for the greatest number of people: every developer building commercial-grade software, and the users and customers they support.

OTHER TIPS

You are right that pinning dependencies to exact versions forecloses many advantages:

  • Getting a bug fixed without intervention.
  • Getting a performance upgrade without intervention.
  • Getting a feature upgrade without intervention.
  • Having an easier time matching dependency versions to each other (sketched just after this list).
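
On that last point, here is a hedged sketch of how ranges help a resolver reconcile shared dependencies (all names and constraints are hypothetical):

    libA>=2,<3    # suppose libA 2.5 requires shared-dep>=1.2
    libB~=1.8     # suppose libB 1.8 requires shared-dep<2
    # With ranges, pip can settle on a shared-dep (say, 1.9) satisfying
    # both. If libA and libB each pinned shared-dep exactly, and to
    # different versions, the install would simply conflict.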

On the other hand, programmers are only human, thus:

  • An update can be buggy, taking your code down with it.
  • Someone simply forgets to bump the major number.
  • You might no longer get away with violating the contract (whether you knew you did or not).

In short, pinning it all down assures you that the code you tested is the code deployed.
On the flip side, you need to watch for any dependency updates and react by evaluating, testing, and fully rolling out a new version almost immediately, to avoid leaving known vulnerabilities in place.
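
One common middle ground, sketched here with the pip-tools project (commands as per its documentation; details may differ between versions), is to declare loose ranges, compile them into exact pins, and upgrade on purpose:

    # requirements.in holds the ranges you actually care about, e.g.:
    #   requests>=2,<3
    #   flask~=3.0

    # Compile them into a fully pinned requirements.txt (==versions):
    pip-compile requirements.in

    # Later, pull in newer releases deliberately, then re-run your tests:
    pip-compile --upgrade requirements.in

That keeps the "code you tested is the code deployed" guarantee while making each update a deliberate, reviewable step.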

Licensed under: CC-BY-SA with attribution