Is Docker suitable for long-running containers?

I'm currently migrating from a powerful root server to a less powerful and, most notably, cheaper server. On the root server I had some services isolated in separate VMs. On the new server this is not possible, but I'd still like to have some isolation for some services, if possible.

Currently I'm thinking of using Docker for this isolation, but I'm not sure it's the right tool here. I tried to google for an answer, but most posts I found about Docker only deal with short-lived containers for development, CI, or testing purposes. In my case it would be more like having long-lived containers: one running, e.g., a web service stack with nginx, PHP, and MySQL/MariaDB (the database might even get its own container), and other containers running other services.
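
To make it concrete, something like the following is roughly what I have in mind (container names, images, paths and the network name are just placeholders, not a finished setup; the nginx config would still need to point at the PHP container):

```
# one user-defined network so the containers can reach each other by name
docker network create webstack

# database in its own container, data kept on the host
docker run -d --name db --network webstack \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /srv/db:/var/lib/mysql \
  --restart unless-stopped mariadb

# php-fpm and nginx sharing the web root
docker run -d --name php --network webstack \
  -v /srv/www:/var/www/html \
  --restart unless-stopped php:8-fpm

docker run -d --name web --network webstack \
  -v /srv/www:/var/www/html \
  -p 80:80 \
  --restart unless-stopped nginx:stable
```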

So my question is: is Docker suitable for running containers over a long period of time? Or, in other words, is Docker usable as a "replacement" for KVM-based VMs?


Docker is used all over the place for web apps, which are long-running applications. Currently in production I have the following running in Docker (a minimal example of keeping one of these alive is sketched after the list):

  • php-fpm apps
  • Celery queue workers (Python)
  • Node.js apps
  • Java (Tomcat 7) apps
  • Go services
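
For example, a long-running service under Docker is just a container started with a restart policy and then left alone; the image and names below are illustrative, not taken from my setup:

```
# keep a php-fpm container running across crashes and daemon restarts
docker run -d --name app --restart unless-stopped php:8-fpm

# the usual ways to check on it
docker ps --filter name=app
docker logs --tail 50 app
```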

As with all judgement calls, there will be some opinion in any answer. Nevertheless, it is definitely true to say that containerisation is not virtualisation. They are different technologies, working in different ways, with different pros and cons. To regard containerisation as virtualisation lite is to make a fundamental mistake, just as regarding a virtualised guest as a cheap dedicated server is a mistake. We see a lot of questions on SF from people who have been sold a container as a "cheap VPS"; misunderstanding what they have, they try to treat it as a virtualised guest, and cause themselves trouble.

Containerisation is undoubtedly excellent for development work: it enables a very large number of environments to be spun up very quickly, and thus makes development on multiple fast-changing copies of a slowly-changing reference back end very easy. Note that in this scenario the containers are all very similar in infrastructure and function; they're all essentially subtly different copies of a single back end.

Trouble may arise when people try to containerise multiple distros on a single host, when guests have different needs in terms of kernel modules, when external hardware connectivity becomes an issue, and in many other comparable departures from the scenarios where containerisation really does work well.
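
One quick way to see the core difference for yourself (a trivial check, nothing more): every container shares the host's kernel, so there is no per-guest kernel to patch, swap out or load modules into, the way there is with a KVM guest:

```
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # a container reports the same kernel
```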

If you decide to deploy into production on containers, keep in mind what you have actually built, and don't fall into the mindset of thinking of your deployment as virtualised; be aware that saving money has opportunity costs associated with it. Cut your coat according to your cloth, and you may very well have a good experience. But if you allow yourself (or, more commonly, management) to misunderstand what you've done, trouble may ensue.