How do I pass environment variables to Docker containers?

I'm new to Docker, and it's unclear to me how to access an external database from a container. Is the best way to hard-code the connection string into the image, like this?

# Dockerfile
ENV DATABASE_URL amazon:rds/connection?string

You can pass environment variables to your containers with the -e flag.

An example from a startup script:

sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
    -e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
    -e POSTGRES_ENV_POSTGRES_USER='bar' \
    -e POSTGRES_ENV_DB_NAME='mysite_staging' \
    -e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
    -e SITE_URL='staging.mysite.com' \
    -p 80:80 \
    --link redis:redis \
    --name container_name dockerhub_id/image_name
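
For the original question, this means the connection string does not have to be baked into the image with ENV; it can be supplied when the container starts. A minimal sketch, where the image and the connection string are just placeholders:

docker run --rm \
    -e DATABASE_URL='postgres://user:pass@mydb.example.com:5432/mydb' \
    alpine env | grep DATABASE_URL

The variable shows up in the container's environment exactly as if it had been set in the Dockerfile, so the application can read DATABASE_URL the same way in every environment.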

Or, if you don't want the value to appear on the command line, where it would be visible to ps and similar tools, -e can pull the value in from the current environment if you give just the variable name without the =:

sudo PASSWORD='foo' docker run  [...] -e PASSWORD [...]
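
A minimal sketch of the same idea, using a throwaway alpine container and placeholder values:

export PASSWORD='foo'
docker run --rm -e PASSWORD alpine sh -c 'echo "$PASSWORD"'

The docker run command line never contains the literal value; the client copies it from the shell's environment when the container is created.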

If you have many environment variables and especially if they're meant to be secret, you can use an env-file:

$ docker run --env-file ./env.list ubuntu bash

The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
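
For example, an env.list along these lines (the names and values are only placeholders) would set three variables and skip the comment:

# staging credentials
POSTGRES_ENV_POSTGRES_USER=bar
POSTGRES_ENV_POSTGRES_PASSWORD=foo
SITE_URL=staging.mysite.com

Values are read literally, so there is no need to quote them; quotes would become part of the value.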


You can pass environment variables using the -e parameter with the docker run command, as mentioned here and in @errata's answer.

However, a possible downside of this approach is that your credentials will be displayed in the process listing on the machine where you run the command.

To make this more secure, you can write your credentials to a configuration file and run docker with --env-file, as mentioned here. You can then restrict access to that file so that other users on the machine cannot see your credentials.
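
A minimal sketch of that setup, with a placeholder file name, variable, and image:

echo 'POSTGRES_ENV_POSTGRES_PASSWORD=foo' > db.env
chmod 600 db.env
docker run --env-file ./db.env --name container_name dockerhub_id/image_name

Because the credentials live in the file rather than on the command line, they do not appear in ps output, and the chmod 600 keeps other local users from reading the file.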