How to get into psql of a running postgres container?
fig will create a docker container with a different name than the one used in the fig.yml file.

I got it working by finding the container name with docker ps and looking at the NAMES column, then running psql in the running container with:

docker exec -ti NAME_OF_CONTAINER psql -U YOUR_POSTGRES_USERNAME
Important Note:
- docker exec runs the psql command in an already running container
- docker run would start a new container instead
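Putting the two steps together, here is a minimal sketch; the docker ps row is simulated so the snippet is self-contained, and the image and container names (postgres, myapp_db_1) are illustrative:

```shell
# Simulated `docker ps` row (in practice: docker ps --filter ancestor=postgres)
sample_row='948b1f6ebc0a  postgres:latest  "docker-entrypoint.s"  6 days ago  Up 6 days  5432/tcp  myapp_db_1'

# The container name is the last column of the output
name=$(echo "$sample_row" | awk '{print $NF}')
echo "$name"

# Then attach psql inside that container (requires a running Docker daemon):
# docker exec -ti "$name" psql -U YOUR_POSTGRES_USERNAME
```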
Update

fig is now called docker-compose.

Instead of connecting with a bare psql, use the -h and -p options:

psql -h localhost -p 5432
Why did that work?
If we run a local psql without arguments (or with incorrect arguments), psql will try to connect via a Unix socket instead of TCP, because that is a tad more efficient.

However, psql's micro-optimization doesn't work for our quirky setup, because our container's file system is separate by design. Even if it weren't, psql wouldn't know where to look for the socket file.

The solution is not writing a burly docker exec command, nor mounting a volume so that our local psql can find the socket in the container, but moving the whole interaction to TCP. Sure, this is slightly less efficient, but the very reason we are using a container is so that things work even when they are set up on different computers. TCP is what Docker containers use to communicate between processes, whether on the same machine or not.

To tell psql that we want to connect via TCP, we use the -h option to identify the (virtual) machine, along with -p to identify the port the PostgreSQL process listens on, 5432 being the default:
psql -h localhost -p 5432
psql could try these default parameters before giving up when given no arguments, but it chooses to fail early, issue an error message and let the admin work out the connection details themselves. More information about this decision can be found at https://www.postgresql.org/message-id/20191217141456.GA2413%40elch.exwg.net
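To spell out the TCP parameters, here is a sketch using libpq's standard environment variables (which psql honors) and the equivalent connection URI; the user and database names are illustrative:

```shell
# libpq (and therefore psql) reads these standard environment variables,
# so a bare `psql` would now connect over TCP to localhost:5432:
export PGHOST=localhost
export PGPORT=5432
export PGUSER=postgres

# The same connection expressed as a libpq URI:
uri="postgresql://${PGUSER}@${PGHOST}:${PGPORT}/postgres"
echo "$uri"
```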
My 2p here; I prefer to have middleware deployed as containers instead of installing and maintaining it on the host system:
docker run --rm --name postgresql -p 5432:5432 \
  -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=admin \
  -e POSTGRES_DB=demodb \
  -d postgres:latest
docker exec -it postgresql psql -d demodb -U admin
You need to run a new container to connect to the one started by fig. This is because the main container starts the service by default, and if you do fig run db psql, fig will NOT start the service but run the psql client instead. See the Dockerfile.
So to connect to the PostgreSQL service you need to run another container linked to the one started by fig. See https://registry.hub.docker.com/_/postgres/.
First, since fig changes the names of the containers it starts, check the NAMES column of the docker ps output after having done fig up. Then:
docker run -it --link <postgres_container_name>:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
You can do the docker exec trick as described by @sargas too, but the linking approach sounds more canonical to me.
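For context, Docker's legacy --link flag injects environment variables such as POSTGRES_PORT_5432_TCP_ADDR into the new container, and the psql invocation above reads them. A sketch with illustrative values:

```shell
# Values that --link would set inside the linked container (illustrative):
POSTGRES_PORT_5432_TCP_ADDR=172.17.0.2
POSTGRES_PORT_5432_TCP_PORT=5432

# The command the linked container ultimately executes:
cmd="psql -h $POSTGRES_PORT_5432_TCP_ADDR -p $POSTGRES_PORT_5432_TCP_PORT -U postgres"
echo "$cmd"
```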
Can you post the result of docker ps? My guess is you need to specify the port the postgres container is exposing. Running docker ps should give you
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
948b1f6ebc0a my_postgres:latest "/usr/lib/postgresql 6 days ago Up 6 days 0.0.0.0:49155->5432/tcp db
and looking under the PORTS column for your db container you'll see the port the db is actually exposed on. In this case it's 49155, but Docker will choose a random port between 49153 and 65535 if one is not explicitly specified at container start. You need to supply the -p option to psql to target that port, as such:

psql -p 49155 ...
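To script the lookup instead of reading the PORTS column by eye, the host port can be extracted from the mapping string; the sample value below is taken from the docker ps output above, and the container name filter is illustrative:

```shell
# PORTS column value from `docker ps` (in practice, something like:
#   docker ps --format '{{.Ports}}' --filter name=db)
ports='0.0.0.0:49155->5432/tcp'

# Pull out the host port that is mapped to the container's 5432
host_port=$(echo "$ports" | sed -n 's/.*:\([0-9][0-9]*\)->5432\/tcp.*/\1/p')
echo "$host_port"

# Then: psql -p "$host_port" ...
```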
Source: https://docs.docker.com/userguide/dockerlinks/