I’m running funkwhale in docker. This consists of a half dozen docker containers, one of which is postgres.

To run a backup, funkwhale suggests shutting down all of the containers and then using docker compose run to execute pg_dump in the postgres container. Presumably this is so the database is copied while nobody is accessing it.
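
Concretely, the steps look roughly like this (the service, role, and database names here are placeholders for whatever the funkwhale docs actually use):

    docker compose down
    docker compose run --rm postgres pg_dump -U funkwhale funkwhale > dump.sql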

For some reason when I do this, I get an error like:

pg_dump: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
	Is the server running locally and accepting connections on that socket?

It would seem that postgres isn’t running. I see the same error with other commands such as psql.

If I fully boot the container and then try exec-ing the command, it works fine.
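
That is, with the database container up, something like this succeeds (same placeholder names as above):

    docker compose up -d postgres
    # -T stops compose allocating a pseudo-TTY, so the redirected output isn't mangled
    docker compose exec -T postgres pg_dump -U funkwhale funkwhale > dump.sql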

So it would seem that docker compose run isn’t fully booting the postgres instance before executing the command? What’s going on here?

The container is built from postgres:15-alpine.

  • Matt The Horwood@lemmy.horwood.cloud · 16 hours ago

    If it helps, I run Lemmy and don't stop the database at all.

    I mount a backup directory into the container and then run the below to do the backup.

    # find the running postgres container for the stack
    dockerID=$(docker ps | grep lemmy_postgres | awk '{print $1}')
    # dump all databases from inside the container and compress the output on the host
    docker exec ${dockerID} /usr/local/bin/pg_dumpall -c -U lemmy | gzip > /mnt/backups/lemmy/lemmy_dump_$(date +%Y%m%d-%H%M%S).sql.gz
    

    Replace lemmy_postgres with the name of your funkwhale database container.

  • Luke@lemmy.ml · 17 hours ago

    That’s working as intended; as the compose docs state, the command you pass to run overrides the command defined in the service configuration. So it isn’t normally possible to shut down all the containers and then use docker compose run to interact with one of them: run doesn’t start anything in the container other than the command you pass to it.
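
    You can see this with any throwaway command; with the stack stopped (postgres here being whatever your database service is called):

    # the one-off container runs only the command you give it; the image's default
    # command (the postgres server) never starts, so there's no socket to connect to
    docker compose run --rm postgres ps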

    I’m not familiar with funkwhale, but they probably meant either to (a) shut down all the containers except postgres so that running pg_dump has something to connect to, or (b) use exec as you have done.
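
    For (a), it would be something along these lines; the service name and the funkwhale role/database are placeholders, and note that a one-off run container has to reach the server over the network with -h (it has no local socket of its own, and may need the database password in its environment):

    docker compose stop                  # stop the whole stack
    docker compose up -d postgres        # bring only the database back up
    # dump from a one-off container, connecting to the postgres service by hostname
    docker compose run --rm postgres pg_dump -h postgres -U funkwhale funkwhale > funkwhale.sql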

    Personally, I do what you did and use exec most of the time to do database dumps. AFAIK, postgres doesn’t require that all other connections to it are closed before using pg_dump. It takes a consistent snapshot in a transaction when you run it, so it doesn’t interfere with anything else going on while it produces output (see relevant SO answer here). You could probably just leave your entire funkwhale stack up when you use docker compose exec to run pg_dump.
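
    With everything left running, that’s just something like the following (names again placeholders; -T disables the pseudo-TTY so the redirected dump isn’t mangled):

    docker compose exec -T postgres pg_dump -U funkwhale funkwhale | gzip > funkwhale_$(date +%Y%m%d-%H%M%S).sql.gz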