Adding files to standard images using docker-compose

Docker | Docker Compose

Docker Problem Overview


I'm unsure if something obvious escapes me or if it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.

One of them is mysql, and it supports adding custom configuration files through volumes and running .sql files from a mounted directory.

But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?

Docker Solutions


Solution 1 - Docker

Option A: Include the files inside your image. This is less than ideal, since you are mixing configuration files into your image (which should really only contain your binaries, not your config), but it satisfies the requirement to use only docker-compose to send the files.

This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:

version: '2'

services:
  my-db-app:
    build: db/.
    image: custom-db

And db/Dockerfile would look like:

FROM mysql:latest
COPY ./sql /sql

The entrypoint/cmd would remain unchanged. If the image already exists and you need to change the .sql files, run docker-compose up --build.
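As a variant worth knowing about: the official mysql image documents that any .sql files placed in /docker-entrypoint-initdb.d are executed automatically when the database is first initialized, so the Dockerfile could copy there instead (a sketch; verify against the mysql image docs for your tag):

```dockerfile
# Sketch: let the official mysql entrypoint run the files on first init.
FROM mysql:latest
COPY ./sql /docker-entrypoint-initdb.d
```

Note this only fires on a fresh (empty) data directory, not on restarts of an existing database.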


Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However, it's the preferred way to bring files from outside the image into the container. You can populate the volume across the network by using the docker CLI and input redirection, along with a command like tar to pack and unpack those files being sent over stdin:

tar -cC sql . | docker run --rm -it -v sql-files:/sql \
  busybox /bin/sh -c "tar -xC /sql"

Run that via a script and then have that same script bounce the db container to reload that config.
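Such a script might look like this minimal sketch (the volume, directory, and service names are illustrative assumptions; set RUN_DOCKER=1 to actually execute the docker commands):

```shell
#!/bin/sh
# Pack the local ./sql directory into a named volume, then bounce the
# db service so it reloads the files. Names here are illustrative.
VOLUME="sql-files"
SRC_DIR="sql"
SERVICE="my-db-app"

if [ "${RUN_DOCKER:-0}" = "1" ]; then
  # Stream the directory over stdin and unpack it inside the volume.
  tar -cC "$SRC_DIR" . | docker run --rm -i -v "$VOLUME":/sql \
    busybox tar -xC /sql
  # Restart the service so it picks up the new files.
  docker-compose restart "$SERVICE"
fi
```

The guard keeps the script safe to dry-run on a machine without a docker engine configured.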


Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:

# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo

# or from the docker run command
$ docker run -it --rm \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo

# or to create a service
$ docker service create \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo

Option D: With swarm mode, you can include files as configs in your image. This allows configuration files, which would normally need to be pushed to every node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single-node swarm mode cluster, so this option is available even if you only have one node. This option does require that each of your sql files is added as a separate config. The docker-compose.yml would look like:

version: '3.4'

configs:
  sql_file_1:
    file: ./file_1.sql

services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 0444

Then instead of docker-compose up, you'd run docker stack deploy -c docker-compose.yml my-db-stack.

Solution 2 - Docker

If you cannot use volumes (e.g. you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the container's command.

An example for an nginx config using the official image:

version: "3.7"

services:
  nginx:
    image: nginx:alpine
    ports:
      - 80:80
    environment:
      NGINX_CONFIG: |
        server {
          server_name "~^www\.(.*)$$" ;
          return 301 $$scheme://$$1$$request_uri ;
        }
        server {
          server_name example.com;
          ...
        }
    command:
      /bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""

The environment variable could also be saved in a .env file, loaded with Compose's env_file option, or taken from the shell environment (wherever you fetched it from):

https://docs.docker.com/compose/compose-file/#env_file https://docs.docker.com/compose/compose-file/#variable-substitution
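The env_file route would look roughly like this fragment (the file name nginx.env is an assumption; it would contain the NGINX_CONFIG variable instead of inlining it in the compose file):

```yaml
# docker-compose.yml fragment: NGINX_CONFIG is defined in ./nginx.env
# rather than under the service's environment key.
services:
  nginx:
    image: nginx:alpine
    env_file:
      - ./nginx.env
```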

To get the original command of a container (use .Config.Entrypoint instead for the entrypoint):

docker container inspect [container] | jq --raw-output .[0].Config.Cmd

To find out which file to modify, this usually works:

docker exec --interactive --tty [container] sh

Solution 3 - Docker

This is how I'm doing it with volumes:

services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts 
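A possible shape for go.sh, the script mounted above (this is not part of the original answer; the database host, credentials, and the convention of applying every mounted .sql file are all assumptions):

```shell
#!/bin/sh
# Hypothetical /shell_scripts/go.sh: apply every mounted .sql file to
# the database. Host and credential variables are illustrative.
SQL_DIR="${SQL_DIR:-/shell_scripts}"
DB_HOST="${DB_HOST:-db}"
applied=0
for f in "$SQL_DIR"/*.sql; do
  [ -e "$f" ] || continue   # skip when the glob matched nothing
  if command -v mysql >/dev/null 2>&1; then
    # Feed the file to the mysql client (password taken from the env).
    mysql -h "$DB_HOST" -u root -p"$MYSQL_ROOT_PASSWORD" < "$f"
    applied=$((applied + 1))
  fi
done
echo "applied $applied file(s)"
```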

Solution 4 - Docker

I think you can just do this in a compose file:

volumes:
  - ./src/file:/dest/path

Solution 5 - Docker

As a more recent update to this question: with a Docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn uses AWS EFS for persistence).

version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
    shell_scripts:
      driver: "cloudstor:aws"

Solution 6 - Docker

With Compose V2 you can simply do (as in the documentation):

docker compose cp src [service:]dest

Before v2, you can use the docker cp workaround explained in the associated issue:

docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql
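Wrapped in a script, the workaround might look like this sketch (service and file names are hypothetical; set RUN_DOCKER=1 to actually perform the copy):

```shell
#!/bin/sh
# Resolve a compose service to a container id, then copy a file in.
# Names are illustrative assumptions.
SERVICE="mycontainer"
SRC="./my-local-file.sql"
DEST="/file-on-container.sql"

if [ "${RUN_DOCKER:-0}" = "1" ]; then
  # docker-compose ps -q prints the container id for the service.
  CID="$(docker-compose ps -q "$SERVICE")"
  docker cp "$SRC" "$CID:$DEST"
fi
echo "copy requested for $SERVICE"
```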

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Andreas Wederbrand | View Question on Stackoverflow
Solution 1 - Docker | BMitch | View Answer on Stackoverflow
Solution 2 - Docker | Bobík | View Answer on Stackoverflow
Solution 3 - Docker | Adam Spence | View Answer on Stackoverflow
Solution 4 - Docker | Cristian Monti | View Answer on Stackoverflow
Solution 5 - Docker | Eoan | View Answer on Stackoverflow
Solution 6 - Docker | marrco | View Answer on Stackoverflow