Tools for upstream maintainers? For testing before release (Debian, etc.) [closed]

StackOverflow https://stackoverflow.com/questions/20300803

  •  06-08-2022

Question

I develop a library that is used by other software. Typically this library ends up packaged in Debian, Fedora, etc., and its "reverse-dependencies" also end up packaged and using it.

So, I guess this makes me an "upstream maintainer." I simply use autotools to produce a tarball, and packagers then use that to produce .deb files, etc. Now, something that has bothered me for quite some time is the disconnect between maintainers and packagers. I feel like every time I do a release, even if it is simply a bugfix release, I am potentially causing headaches for everyone down the chain.

Possible problems:

  • I introduced a bug that wasn't caught in testing, even though I tried extensively to test various configurations -- I don't have unlimited testing resources, and it is a small library, so I am mostly on my own; one or two other interested people help out, but they generally test on only one platform.
  • I forgot to bump the version number, causing confusion
  • I did bump the version number but forgot to bump the SO version -- the SONAME/libtool version that declares ABI compatibility and is independent of the software release version (see the sketch after this list)
  • I made a small change but accidentally caused an API incompatibility without thinking (e.g., made something "const" that should have been const all along, not realizing it would break people's code)
  • I made a small change but accidentally caused an ABI incompatibility -- e.g., changed a constant in a header file, forgetting that the old value is "baked in" to software compiled against a previous version
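For readers less familiar with that SONAME bookkeeping, here is a minimal sketch of where it lives in an autotools build like the one described -- "libfoo" and the numbers are placeholders, not the asker's actual project:

    # Hypothetical Makefile.am fragment. -version-info is current:revision:age
    # and is deliberately independent of the release version. Libtool's rules:
    #   implementation changed only            -> revision++
    #   interfaces added (backward compatible) -> current++, revision=0, age++
    #   interfaces removed or changed (break)  -> current++, revision=0, age=0
    lib_LTLIBRARIES = libfoo.la
    libfoo_la_LDFLAGS = -version-info 3:0:1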

I have done pretty much all of these things at some point in the past. Because of those mistakes, these days I probably spend more time testing than actually developing, and I still slip up. The mistakes are usually not that bad; people understand that mistakes happen. But they sometimes cause people to drop the library without ever talking to me or posting on the mailing list, which is a shame -- if those people were that invested, it would have been great if they had helped test before I published a release. Anyway, you get the idea.

So, rather than just compiling and running the unit tests, my testing process now involves some fairly extensive steps. In particular, I now use "apt-cache rdepends" to find software that uses my library, install it, and swap the library binary out to test ABI compatibility. Then I uninstall it, "apt-get source" it, and compile it against the new version to test API compatibility.
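That manual loop could be scripted roughly as follows -- a sketch only, assuming a Debian/Ubuntu system with deb-src entries enabled, and using "libfoo1" as a placeholder for the real binary package name:

    # Enumerate reverse dependencies and rebuild each from source against
    # the newly installed library, as a smoke test for API compatibility.
    for pkg in $(apt-cache rdepends libfoo1 | tail -n +3 | tr -d ' |'); do
        apt-get source "$pkg"                     # fetch the rdepend's source
        sudo apt-get build-dep -y "$pkg"          # install its build dependencies
        (cd "$pkg"-*/ && dpkg-buildpackage -us -uc -b)  # a failed build flags an API break
    done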

This kind of testing involves:

  • understanding other people's software and figuring out where and how it exercises my code
  • compiling other people's software, including figuring out their other dependencies and how to get everything working -- for large projects this can be a nightmare
  • some projects using my software are actually plugins for other projects, meaning I additionally have to get the host program working
  • many projects using my library are GUI-oriented, so I have to navigate and learn software I don't even know or use, and then guess whether I have gotten it to a point where it is actually calling out to my library (a GUI-free partial shortcut is sketched after this list)
  • my library works on Linux, Windows, and OS X, and I often don't have enough machines and operating systems around to test on. For example, a huge problem with my last release was a bug that showed up only on 64-bit Linux. I had tested on 32-bit Linux and 64-bit OS X, but neither platform showed the bug; it was specific to the Linux/x86_64 combination, which I had neglected because I didn't have the right hardware and assumed I'd covered enough ground.
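One cheap, GUI-free first pass on the ABI side is to diff the library's exported symbol tables between the old and new builds: a removed symbol is a guaranteed ABI break, although an unchanged list proves nothing about struct layouts or baked-in constants. A minimal sketch, with hypothetical paths:

    # Compare the dynamic symbol tables of two builds of the library.
    nm -D --defined-only old/libfoo.so.1 | awk '{print $3}' | sort > old.syms
    nm -D --defined-only new/libfoo.so.1 | awk '{print $3}' | sort > new.syms
    comm -23 old.syms new.syms   # anything printed was removed -> ABI break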

As you can imagine, this is not a light task, and it causes long delays before each release, holding back the dissemination of bugfixes. The worst part is that my project is not even a large library -- it is a hobby project -- so all of this feels like enormous overhead for something I do in my spare time. I'd rather be developing features than defending against my own potential mistakes for every little change I make. But the library currently has 42 rdepends listed in Ubuntu, to give you an idea, and I'm proud that it is useful to other people, so I want to be able to develop and improve it without worrying so much about breaking things for everyone.

My question is: how can I improve the efficiency of this testing process? Are there, for example, tools that will automatically compile "rdepends" packages against a new version of my library and give me a report? Or that can download compiled binaries of rdepends and test loading them against my ABI, without requiring me to navigate the GUI of some unfamiliar application?


Solution

how can I improve the efficiency of this testing process?

The main problem is communication, apart from the fact that you lack scripts to automate the process. You can publish pre-releases of your packages, mail the distributions that ship your library, and so on -- or, instead of maintaining the packages yourself, get them into some major distro and let an experienced maintainer handle that part.

You will always break people's stuff now and then; just don't do it frequently. Remember that people need stability in a certain sense, so document every change very well -- that way, people using your library can't say you didn't tell them.

About tools... you should find your own pace. Maybe some buildbots (AFAIK some projects lend out build bots), maybe scripts automating your build process, and so on. The problem is too broad, and there are too many possible solutions for any single suggestion to be definitive. You may want to check https://softwareengineering.stackexchange.com/q/150466/104338 for "some" methods but, again, you should find your own pace.
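One concrete building block for such a script -- an illustration of mine, not a tool the answer names -- is abidiff from the libabigail tool set, which compares two builds of a shared object and reports added, removed, and changed interfaces, covering the ABI worries without running any reverse dependency:

    # Hypothetical file names: the previous release's shared object
    # versus the freshly built one.
    abidiff /usr/lib/libfoo.so.1.2.0 ./build/.libs/libfoo.so.1.3.0

abidiff exits non-zero when it detects ABI changes, so a release checklist or CI job can fail automatically on an unintended break.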

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow