Docker follow symlink outside context
Tags: Docker, Symlink, Dockerfile, Symlink Traversal

Problem Overview
Yet another Docker symlink question. I have a bunch of files that I want to copy over to all my Docker builds. My dir structure is:
parent_dir
├── common_files
│   └── file.txt
└── dir1
    ├── Dockerfile
    └── symlink -> ../common_files
In the above example, I want file.txt to be copied over when I run docker build inside dir1, but I don't want to maintain multiple copies of file.txt. Per this link, as of Docker version 0.10, docker build must
> Follow symlinks inside container's root for ADD build instructions.
But I get no such file or directory when I build with either of these lines in my Dockerfile:
ADD symlink /path/dirname
or
ADD symlink/file.txt /path/file.txt
The mount option will NOT solve it for me (cross-platform...). I also tried tar -czh . | docker build -t without success.
Is there a way to make Docker follow the symlink and copy the common_files/file.txt into the built container?
Docker Solutions
Solution 1 - Docker
That is not possible and will not be implemented. Please have a look at the discussion on GitHub issue #1676:
> We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, and the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link the host files and not your local files.
Solution 2 - Docker
If anyone still has this issue I found a very nice solution on superuser.com:
https://superuser.com/questions/842642/how-to-make-a-symlinked-folder-appear-as-a-normal-folder
It basically suggests using tar to dereference the symlinks and feed the result into docker build:
$ tar -czh . | docker build -
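To see why this works: the -h (--dereference) flag makes tar archive the files that symlinks point to, rather than the symlinks themselves, so the build context Docker receives contains real copies. A minimal sketch using throwaway paths that mirror the question's layout (no Docker required):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/common_files" "$tmp/dir1"
echo "hello" > "$tmp/common_files/file.txt"
ln -s ../common_files "$tmp/dir1/symlink"

# With -h, tar stores a real directory and file where the symlink was
tar -C "$tmp/dir1" -czhf "$tmp/ctx.tgz" .
tar -tzf "$tmp/ctx.tgz"   # the listing includes ./symlink/file.txt
```

In the real setup you would run the pipeline from inside dir1, e.g. tar -czh . | docker build -t myimage - (the tag name here is a placeholder).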
Solution 3 - Docker
One possibility is to run the build in the parent directory, with:
$ docker build [tags...] -f dir1/Dockerfile .
(or equivalently, from the child directory:)
$ docker build [tags...] -f Dockerfile ..
The Dockerfile will have to be configured to do copy/add with the appropriate paths. Depending on your setup, you might want a .dockerignore in the parent to leave out things you don't want to be put into the context.
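With the parent directory as the context, paths in the Dockerfile are resolved relative to parent_dir rather than dir1. A minimal sketch (base image and destination paths are placeholders):

```dockerfile
FROM alpine

# Paths are relative to the build context (parent_dir),
# so the shared file is reachable without any symlink:
COPY common_files/file.txt /path/file.txt
```

A .dockerignore in parent_dir (listing sibling directories the build doesn't need) keeps the larger context from slowing down the build.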
Solution 4 - Docker
I know that it breaks the portability of docker build, but you can use hard links instead of symbolic links:
ln /some/file ./hardlink
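Why this works: unlike a symlink, a hard link is just another directory entry for the same inode, so docker build sees an ordinary file. A small sketch with throwaway paths (note that hard links cannot span filesystems and cannot point at directories):

```shell
set -e
tmp=$(mktemp -d)
echo "shared content" > "$tmp/file.txt"
ln "$tmp/file.txt" "$tmp/hardlink.txt"   # hard link, not symbolic

# Both names refer to the same inode, so edits through one path
# are visible through the other
echo "edited" >> "$tmp/hardlink.txt"
cat "$tmp/file.txt"
```

The portability caveat: most version-control and sync tools do not preserve hard links, so they may silently degrade back into independent copies.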
Solution 5 - Docker
I just had to solve this issue in the same context. My solution is to use hierarchical Docker builds. In other words:
parent_dir
├── common_files
│   ├── Dockerfile
│   └── file.txt
└── dir1
    └── Dockerfile (FROM common_files:latest)
The disadvantage is that you have to remember to build common_files before dir1. The advantage is that if you have a number of dependent images, they are all a bit smaller due to sharing a common layer.
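A minimal sketch of the two Dockerfiles (image names, base image, and paths are illustrative):

```dockerfile
# common_files/Dockerfile -- built first, e.g.:
#   docker build -t common_files:latest common_files/
FROM alpine
COPY file.txt /common/file.txt

# dir1/Dockerfile -- a separate file that inherits the shared layer:
#   docker build -t dir1 dir1/
FROM common_files:latest
# file.txt is already present at /common/file.txt
```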
Solution 6 - Docker
I got frustrated enough that I made a small NodeJS utility to help with this: file-syncer
Given the existing directory structure:
parent_dir
├── common_files
│   └── file.txt
└── my-app
    ├── Dockerfile
    └── common_files -> ../common_files (symlink)
Basic usage:
cd parent_dir
# starts live-sync of files under "common_files" to "my-app/HardLinked/common_files"
npx file-syncer --from common_files --to my-app/HardLinked
Then in your Dockerfile:
[regular commands here...]
# have docker copy/overlay the HardLinked folder's contents (common_files) into my-app itself
COPY HardLinked /
Q/A

- How is this better than just copying parent_dir/common_files to parent_dir/my-app/common_files before Docker runs?

  That would mean giving up the regular symlink, which would be a loss, since symlinks are helpful and work fine with most tools. For example, it would mean you can't see/edit the source files of common_files from the in-my-app copy, which has some drawbacks. (see below)

- How is this better than copying parent_dir/common-files to parent_dir/my-app/common_files_Copy before Docker runs, then having Docker copy that over to parent_dir/my-app/common_files at build time?

  There are two advantages:

  1) file-syncer does not "copy" the files in the regular sense. Rather, it creates hard links from the source folder's files. This means that if you edit the files under parent_dir/my-app/HardLinked/common_files, the files under parent_dir/common_files are instantly updated, and vice versa, because they reference the same file/inode. (This can be helpful for debugging purposes and cross-project editing [especially if the folders you are syncing are symlinked node-modules that you're actively editing], and it ensures that your version of the files is always in sync with, and identical to, the source files.)

  2) Because file-syncer only updates the hard-linked files for the exact files that get changed, file-watcher tools like Tilt or Skaffold detect changes for the minimal set of files, which can mean faster live-update-push times than you'd get with a basic "copy whole folder on file change" tool.

- How is this better than a regular file-sync tool like Syncthing?

  Some of those tools may be usable, but most have issues of one kind or another. The most common one is that the tool either cannot produce hard links of existing files, or it's unable to "push an update" for a file that is already hard-linked (since hard-linked files do not notify file-watchers of their changes automatically, if the edited-at and watched-at paths differ). Another is that many of these sync tools are not designed for instant responses, and/or do not have run flags that make them easy to use in restricted build tools. (e.g. for Tilt, the --async flag of file-syncer enables it to be used in a local(...) invocation in the project's Tiltfile)
Solution 7 - Docker
Instead of using symlinks, it is possible to solve the problem administratively, by simply moving files from sites-available to sites-enabled instead of copying them or making symlinks.
That way your site config exists in one copy only: in the sites-available folder when it is stopped, or in sites-enabled if it should be used.
Solution 8 - Docker
Commonly I isolate build instructions in a subfolder, so the application and logic levels are located higher up:
.
├── app
│ ├── package.json
│ ├── modules
│ └── logic
├── deploy
│ ├── back
│ │ ├── nginx
│ │ │ └── Chart.yaml
│ │ ├── Containerfile
│ │ ├── skaffold.yaml
│ │ └── .applift -> ../../app
│ ├── front
│ │ ├── Containerfile
│ │ ├── skaffold.yaml
│ │ └── .applift -> ../../app
│ └── skaffold.yaml
└── .......
I use the name ".applift" for those symbolic links: .applift -> ../../app
Now I can follow the symlink via realpath, without caring about path depth:
dir/deploy/front$ docker build -f Containerfile --tag="build" `realpath .applift`
or wrap it in a function:
dir/deploy$ docker_build () { docker build -f "$1"/Containerfile --tag="$2" `realpath "$1/.applift"`; }
dir/deploy$ docker_build ./back "front_builder"
so that
COPY app/logic/ ./app/
in the Containerfile will work.
Yes, in this case you lose the context for the other layers. But generally there are no other context files located in the build directory anyway.
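The key piece is that realpath resolves the symlink to the real directory, which then becomes the build context. A throwaway sketch of just that resolution (paths are stand-ins, no Docker involved):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/app/logic" "$tmp/deploy/front"
ln -s ../../app "$tmp/deploy/front/.applift"

# realpath follows the symlink to the actual app directory
realpath "$tmp/deploy/front/.applift"
```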
Solution 9 - Docker
Use a small wrapper script to copy the needed dir to the Dockerfile's location; build.sh:
#!/bin/bash
[ -e bin ] && rm -rf bin
cp -r ../../bin .
docker build -t "sometag" .
Solution 10 - Docker
If you're on a Mac, remember to do
brew install gnu-tar
and use gtar instead of tar, since there are some differences between the two. gtar worked for me, at least.