docker-machine memory allocation

As mentioned in the answer to the question in your 2nd Edit, containers are not like VMs in that you don't usually reserve memory for them as you would for virtual machines. Because they all run on the same OS, the kernel dispatches memory as needed to the different processes, just as if they were not running in containers. In other words, memory is pooled across all processes, regardless of which container they belong to.

What you set in the example above with docker-machine was the 'virtual' host's total memory pool. In your production case, it will be the whole 128 GB (unless you also plan to use docker-machine or VMs to partition it).
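For reference, that host-level pool is fixed when the machine is created. A minimal sketch with the VirtualBox driver (the machine name `default` and the 4 GB size are placeholders, not taken from your setup):

```shell
# Create a Docker host VM with 4 GB of RAM using the VirtualBox driver.
# --virtualbox-memory takes the size in MB.
docker-machine create --driver virtualbox --virtualbox-memory "4096" default

# Point the local docker client at the new machine.
eval "$(docker-machine env default)"
```

All containers started on that machine then share the 4 GB pool, exactly as your production containers would share the 128 GB.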

However, containers are also a great way to make use of the kernel's cgroups (control groups) feature, which lets you configure resource management for a whole container. This does not let you 'reserve' memory for a container, but you can set upper bounds on each container's memory so that one won't eat up memory that could be used by the others (in the event of a leak or a bug, for example).

With Docker, depending on the container backend in use, you can set basic memory limits as follows:

  • When running Docker's default libcontainer backend, by starting the containers with the -m (or --memory) option
  • When running the legacy LXC backend, by starting the containers with the LXC option lxc.cgroup.memory.limit_in_bytes=amount via --lxc-conf
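The two options above can be sketched as follows (the image name `nginx` and the 512 MB cap are illustrative placeholders):

```shell
# Default (libcontainer) backend: cap the container at 512 MB of RAM.
# The host's remaining memory stays available to other containers.
docker run -d -m 512m nginx

# Legacy LXC backend: the equivalent limit expressed as a raw cgroup key.
docker run -d --lxc-conf="lxc.cgroup.memory.limit_in_bytes=512M" nginx
```

If a container exceeds its limit, the kernel's OOM killer steps in for that container's processes rather than the whole host, which is exactly the isolation you want against a leaking service.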

You can find more information about cgroups usage in Docker here: https://www.cloudsigma.com/manage-docker-resources-with-cgroups/

The article also includes slides from an introduction to cgroups functionality.