Install application/HTTP services to "/srv"?

I have been working with a team that used to install all application and HTTP services such as Apache or Tomcat to the '/srv' directory, I suspect mostly in order to keep the installed services separated from the OS as much as possible. For my own projects I kept this practice. However, over time it increasingly looked like this might not be such a good idea: it prevents you from using distribution-specific packages (those had a very bad reputation in that team, so almost everything was a custom installation), and I ran into quite a bit of trouble when trying to use Chef cookbooks that are already available.

So lately I have been tempted to switch to the distribution-specific packages instead of trying to build custom installations that fit into that directory structure. I am wondering if there is anything I might be overlooking. Is there actually any good reason to put everything into a '/srv' directory, or any good reason not to use the distribution-specific packages?

What I currently need in my stack is: nginx, Tomcat (Oracle JDK) and MongoDB.


Solution 1:

The FHS-compliant path for installing third-party software is not /srv but /opt; see the FHS entries for /opt and /srv.
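If you do stay with custom installations, a common /opt convention is to unpack each release into its own versioned directory and point a stable symlink at the active one, so an upgrade or rollback is just a symlink flip. A minimal sketch in Python; the package name, version and tarball path are placeholders, not anything from the question:

    #!/usr/bin/env python3
    """Sketch: unpack a custom build into a versioned /opt directory and
    switch a stable symlink to it (name, version and tarball path are made up)."""
    import tarfile
    from pathlib import Path

    OPT = Path("/opt")

    def install_release(tarball: str, name: str, version: str) -> None:
        """Unpack tarball into /opt/<name>-<version> and point /opt/<name> at it."""
        target = OPT / f"{name}-{version}"
        target.mkdir(parents=True, exist_ok=True)
        with tarfile.open(tarball) as tar:
            tar.extractall(target)              # e.g. /opt/tomcat-9.0.0/...
        link = OPT / name                       # stable path for scripts and service units
        if link.is_symlink():
            link.unlink()
        link.symlink_to(target)                 # /opt/tomcat -> /opt/tomcat-9.0.0

    # install_release("/tmp/apache-tomcat.tar.gz", "tomcat", "9.0.0")   # hypothetical paths

The stable symlink is what your init scripts and PATH entries reference, so switching versions does not require touching anything else.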

With regards to whether or not to use precompiled packages, you have two choices:

  • Use them if you trust the vendor regarding security updates and bugfixes. I would; they surely have more manpower and resources dedicated to this task than your company does. You can keep using the OS default repositories and packaging infrastructure. You get to live with whatever version the vendor provides (plus backported fixes).
  • Don't use them, and patch your home-brewed installation every time a new vulnerability is made public. You need to maintain your own private repositories (well, you can also manually install everything every time). You can use more recent versions of the software.

If you only need to maintain 5-10 machines, putting everything under /opt is doable, but if you maintain a farm of more than a couple of hundred, you'd be Doing It Wrong™.

In my opinion, the professional way to do it is to use the vendor-supplied precompiled packages unless there is a compelling reason not to.

Solution 2:

I do not think /srv was ever a good idea for this. Installed software belongs in /usr/local or /opt, while /srv is for site-specific data served by the system.

This is defined by the Filesystem Hierarchy Standard (FHS); keep in mind that it is not followed 100% by all distributions. The FHS suggests this:

  • /srv - Site-specific data which are served by the system.
  • /opt - Optional application software packages.

Regarding distro-specific packages, you should generally use them unless there is a very good reason not to. Keeping everything up to date and making sure you have the latest patches yourself can be too hard to maintain and can expose the system to additional security risks. Even in cases when your Linux distribution's package manager provides an older version than you need, you can usually find community-maintained repositories with newer versions of certain packages, which are easier to maintain than your own packages. This is usually the case for more popular packages like nginx in distributions like CentOS (personal experience).
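As a concrete example of the CentOS/nginx case, the usual route is to enable a community repository (EPEL, or nginx.org's own repo) and install from there rather than compiling. A rough sketch of how a provisioning script might do it, assuming a yum-based system, root privileges, and no error handling beyond failing loudly:

    #!/usr/bin/env python3
    """Sketch: install nginx from a community repository on a yum-based system.
    Assumes CentOS with EPEL reachable; run as root."""
    import subprocess

    def sh(*cmd: str) -> None:
        """Run a command, raising if it exits non-zero."""
        subprocess.run(cmd, check=True)

    sh("yum", "install", "-y", "epel-release")    # enable the EPEL community repo
    sh("yum", "install", "-y", "nginx")           # nginx comes from EPEL here
    sh("systemctl", "enable", "--now", "nginx")   # start it and enable it at boot

Updates then arrive through the same yum transactions as everything else, which is the whole point of preferring repositories over home-built packages.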

What I would keep in /srv instead (keep in mind this is personal preference):

  • http vhost-based directories (e.g. /srv/http/example.com)
  • ftpsites (e.g. /srv/ftp/example.com)
  • mailboxes (e.g. /srv/mail/example.com/user)

This helps me keep /srv organised with the data I actually manage. Most distributions would keep these files under /var; I use this layout so I can have the same directory structure for common site-specific data across different distributions.
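A small sketch of bootstrapping that skeleton for a new site; the domain and user are just the placeholders from the list above:

    #!/usr/bin/env python3
    """Sketch: create the per-site directory layout under /srv described above."""
    from pathlib import Path

    SRV = Path("/srv")

    def add_site(domain: str, mail_users=()) -> None:
        """Create the http, ftp and mail directories for one site."""
        (SRV / "http" / domain).mkdir(parents=True, exist_ok=True)   # /srv/http/example.com
        (SRV / "ftp" / domain).mkdir(parents=True, exist_ok=True)    # /srv/ftp/example.com
        for user in mail_users:
            (SRV / "mail" / domain / user).mkdir(parents=True, exist_ok=True)  # /srv/mail/example.com/user

    # add_site("example.com", mail_users=["user"])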

Solution 3:

This particular example, retargeting every package to /srv, while a form of job security, seems singularly myopic and unproductive from a business perspective (where productive means advancing the usability/profitability/security/etc. of the work environment these applications are serving).

If you can live with what a distro package does to your system, then you're leveraging the development and testing that the team producing the package has put into it, both now and with updates. If you can't live with what the distro package does to your system, then find another package, convince/wheedle/harass the development group to do things "your way" (or at least make it easier to do it "your way"), or encapsulate it in a virtual machine. Or put in the minimum amount of effort needed to fix the real or perceived problem and automate the hell out of provisioning and configuration management with your favorite toolset. To paraphrase Dijkstra, simplicity and repeatability are prerequisites for reliability.