Question

I am trying to move my configuration folder to my Debian server and store it in the most sensible and logical place. I used to dump everything in home, but now it looks like this:

Software: /opt/
Web Files: /var/www

However, I have to move my software configuration folders to a location on the server so they can then be symlinked to the right places. Which of these seems the most logical place to do this:

/home/configs
/var/cfgs
Or another?

Sorry if this seems pedantic, but you know what they say, there is a place for everything and everything in its place ;)

Solution

I'd suggest using /etc, which, according to the FHS, is the place for host-specific, system-wide configuration.

This assumes the configuration is for server applications (and not applications run by human users).

For example, I'm running multiple instances of Zope, which keeps its configuration files in

/etc/zope/instanceA/
/etc/zope/instanceB/
/etc/zope/test/
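
A minimal sketch of that arrangement: the canonical config lives under /etc and the application reaches it through a symlink. The myapp and instanceA names are made up, and everything runs in a throwaway directory so it is safe to try:

```shell
# Run in a scratch root so nothing on the real system is touched.
ROOT=$(mktemp -d)

# Canonical config lives under etc/<app>/<instance>/ ...
mkdir -p "$ROOT/etc/myapp/instanceA"
echo "port = 8080" > "$ROOT/etc/myapp/instanceA/app.conf"

# ... and the software installed under opt/ reaches it via a symlink.
mkdir -p "$ROOT/opt/myapp"
ln -s "$ROOT/etc/myapp/instanceA" "$ROOT/opt/myapp/conf"

cat "$ROOT/opt/myapp/conf/app.conf"   # reads the config through the link
```

On a real system you would drop `$ROOT` and create the link wherever the application expects its config directory to be.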

OTHER TIPS

It's a matter of taste... are you the only user on the machine?
If so, it does not matter; you can put everything under home.

If it is a server, you have to ensure that your config directory has the right access permissions, so $HOME would not be appropriate.
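
As a sketch of what "the right access permissions" might look like (the myapp name and the exact modes are assumptions, and it runs in a scratch root rather than the real /etc; `stat -c` is the GNU form, which Debian has):

```shell
ROOT=$(mktemp -d)

# Config directory: owner can enter and list, group can read, others nothing.
mkdir -p "$ROOT/etc/myapp"
chmod 750 "$ROOT/etc/myapp"

# Config file: owner read/write, group read-only, others nothing.
touch "$ROOT/etc/myapp/app.conf"
chmod 640 "$ROOT/etc/myapp/app.conf"

# On a real server you would also chown these to root:<service-group>.
stat -c '%a %n' "$ROOT/etc/myapp/app.conf"
```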

One more thing: if it is a Debian machine, do leave /var/www in its place. That's where many web-related packages install stuff. If you start moving things around, you might have problems later when you upgrade those packages.

Creating a user for each piece of software is also a matter of taste... I assume these service users do not log in, so there is no real point in even setting a $HOME for them. I would even set their shell to nologin as a security measure, if the software is accessible from other machines and you want to prevent someone compromising those accounts.
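
A sketch of creating such an account (the myapp name is an assumption, and Debian's nologin lives at /usr/sbin/nologin; useradd needs root, so the call is guarded):

```shell
# System account with no home directory and no login shell; needs root.
if [ "$(id -u)" -eq 0 ]; then
    useradd --system --no-create-home --home-dir /nonexistent \
            --shell /usr/sbin/nologin myapp
else
    echo "not root: skipping account creation"
fi
```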

System-wide configuration files should NEVER be in a user's home directory. They are far too easy to clobber, modify by mistake, or give inappropriate permissions.

/etc becomes problematic when you are managing multiple systems. Best practice: keep a master copy in another directory, and push it to the appropriate systems via scp.
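
That push step might look like the loop below. The host names and paths are made up, and the scp line is echoed rather than executed so the loop can be dry-run safely; drop the echo once SSH key access is in place:

```shell
# Master copy kept outside /etc, pushed to each managed host.
MASTER=/srv/config-master/myapp.conf     # assumed location
HOSTS="web1 web2 db1"                    # assumed host names

for host in $HOSTS; do
    # echo makes this a dry run; remove it to actually copy.
    echo scp "$MASTER" "root@$host:/etc/myapp/myapp.conf"
done
```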

It also becomes difficult if you have packages not managed by the system package manager, e.g. installed from source. Add to that the difficulties of running multiple versions of a piece of software.

As the number and types of systems, and the number of packages and versions, increase, management gets more complex. For one machine with few users, having config in /etc is fine.

For a counterexample, consider customizing your foo installation. Assume this is a package your distribution provides, but that you need a somewhat different version.

Installing the configuration file in the default location may result in it getting clobbered on the next system upgrade. (Possibly along with your modified version in /usr/bin)

One common practice is to keep stuff YOU add/modify completely separate from the system file tree. When I was a practicing sysadmin I made all special stuff reside in /opt.

/opt
/opt/bin
/opt/sbin
/opt/usr/bin
/opt/usr/sbin
/opt/man
/opt/etc

This keeps stuff out of the way of the system, but it's not always clear which files belong to which package.

Packages that have many executables and man pages may get their own directory. Netpbm, ImageMagick, the GNU utils, and Perl (modules) are good examples. This adds a level to the directory tree.

/opt/misc/bin #Contained anything that was one program, one man page
/opt/netpbm_2.1/bin
/opt/imagemagick_3.2/...
/opt/gnu_1.1/...

along with all the other directories, e.g. a full set of /opt/{package-version}/{bin|sbin|man|etc|var}

This is especially true for packages where you need to keep multiple versions, e.g.

/opt/perl4.023...
/opt/perl5.8...

Maintaining separate trees for large packages makes switching easy -- and backing out of a bad change quick. In most cases you will keep symbolic links in the user's default path to point to the actual binaries, e.g. /usr/bin/perl -> /opt/perl5.8/bin/perl. Good practice here is to also leave a link with a specific version, e.g. /usr/bin/perl4 -> /opt/perl4.023/bin/perl
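
The switch-and-back-out dance can be sketched like this, in a scratch root so the real /usr/bin is untouched (the version numbers follow the example above):

```shell
ROOT=$(mktemp -d)
mkdir -p "$ROOT/opt/perl5.8/bin" "$ROOT/opt/perl4.023/bin" "$ROOT/usr/bin"
touch "$ROOT/opt/perl5.8/bin/perl" "$ROOT/opt/perl4.023/bin/perl"

# Default name follows the current version; a versioned name stays stable.
ln -s "$ROOT/opt/perl5.8/bin/perl"   "$ROOT/usr/bin/perl"
ln -s "$ROOT/opt/perl4.023/bin/perl" "$ROOT/usr/bin/perl4"

# Backing out of a bad upgrade is a single re-link
# (-f replaces the link, -n avoids dereferencing it).
ln -sfn "$ROOT/opt/perl4.023/bin/perl" "$ROOT/usr/bin/perl"
readlink "$ROOT/usr/bin/perl"   # now points into the perl4.023 tree
```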

In some places you have multiple architectures to deal with. (I had 11 unix variants one place...) This can add yet another level to the tree:

/opt/hpux/...
/opt/irix/...
/opt/sysv/...
/opt/solaris/...

Usually I maintained a tree like that only on an NFS file share, and local copies were made for individual machines.

The advantage of the complex hierarchy is ease of maintenance. If /opt/apache contains all the binaries, man pages, and configuration files, then if you decide to change to lighttpd, you don't end up with lighttpd getting confused by apache's configuration file. (Both are called httpd.conf)

It also means that when you try a package and then remove it, you don't leave litter scattered here and there.

Generally you will not want the data for an application to reside in its /opt/<application> tree.

There are disadvantages:

  • If config files and var data can be separated from /opt, then /opt can be read-only except for upgrades. This can simplify backups.

  • One side effect of this is the problem of executable path maintenance. Keep system-wide dot files (.cshrc, .tcshrc, .bashrc) that make the necessary path changes, so that the bulk of your users aren't aware of the changes. These files are sourced from their default dot files.
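
The path maintenance that last point describes might look like this in a system-wide rc snippet (the directory list matches the /opt layout shown earlier; that it gets sourced from /etc/profile or the users' default dot files is the assumption):

```shell
# Prepend the /opt trees to PATH, skipping any already present.
for d in /opt/bin /opt/sbin /opt/usr/bin; do
    case ":$PATH:" in
        *":$d:"*) ;;             # already on PATH, leave it alone
        *) PATH="$d:$PATH" ;;
    esac
done
export PATH
```

The case-statement guard keeps PATH from growing duplicates when the snippet is sourced more than once.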

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow