Docker
- Dockerfile - the file used to define the image that will represent your service. It defines the necessary OS base, the dependencies that must be installed, the source code, the env vars, and the commands to build and run the app/service on the server.
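As a sketch of what such a file contains, here is a minimal Dockerfile for a Ruby app. This is illustrative only, not from any particular project: the base image, paths, port and start command are all assumptions.

```dockerfile
# Illustrative example - base image, paths and ports are assumptions.
FROM ruby:3.3-slim

WORKDIR /app

# Install gems first so this layer is cached when only source code changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the rest of the source code.
COPY . .

ENV RACK_ENV=production
EXPOSE 3000

CMD ["bundle", "exec", "ruby", "app.rb"]
```

Each instruction (FROM, COPY, RUN, …) produces a layer, which relates to the caching question at the end of these notes.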
- Image - The 'blueprint' of your service; the 'class' that defines it. This is the package that has everything needed to run your service. It's created from a Dockerfile with the `docker build` command: `docker build -t image-name .` (`-t` tags the image with a name, `.` tells Docker where to look for the Dockerfile).
- Container - An instance of an image; the 'object' created from the 'class'. You can have multiple containers running from the same image. It's created from an image with the `docker run` command: `docker run -d -p 3000:3000 --name container-name image-name` (`-d` runs the container in detached mode, i.e. in the background; `-p` maps the host port to the container port).
- docker exec - A command that lets you run commands in a running container. Used to interact with, explore and debug the container: `docker exec -it container-name /bin/bash` (`-it` gives interactive terminal access). To run a command in the background instead: `docker exec -d container-name mkdir /tmp/mydir`.
- compose.yml - the file that defines how to run the service defined in a specific Dockerfile. It can hold multiple related services needed for the app to run together, e.g. a Rails web app plus its Redis and PostgreSQL db services. Volumes are listed and 'bind'-mounted here. The `docker compose up` command reads this file and starts the services defined in it.
- docker init - A command like `git init` and `bundle init`. Creates the `Dockerfile` and `compose.yml` files for you when run from a project's root.
- dev container - A "development container": a running container that has the code, its dependencies and all the tools needed to develop the code. It lets you edit your code as if it were inside the container itself. A project needs a `.devcontainer/devcontainer.json` file to define the dev container settings. Opening the project in VS Code with the Dev Containers extension makes it detect the dev container. Once it does its work, you'll see that the code is in the container (as if you had COPY'd it from host to container and then docker exec'd into it). From there you can run the project commands. This doesn't seem to be absolutely necessary. And it's different from the "production container" that we normally build with the help of the project's Dockerfile - that one is a representation of the server that's going to run in production; a "dev container" is not that.
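For reference, a minimal `.devcontainer/devcontainer.json` might look like the sketch below. The image name and commands are illustrative assumptions, not taken from a real project:

```json
{
  "name": "my-app",
  "image": "mcr.microsoft.com/devcontainers/ruby:3.3",
  "forwardPorts": [3000],
  "postCreateCommand": "bundle install"
}
```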
Docker use cases
Local development
DHH tweeted this in March 2025:
> The sweet spot for Docker in development is to let it run your accessories, like database, Redis, ElasticSearch, but stick to native installations of Ruby/JavaScript/whatever using mise. No “works on my machine” version drift, little fuss, great performance 👌
This is great on 2 levels:
- you don’t have to install weird libraries just to get some particular version of an accessory your service depends on working locally. It keeps your development machine clean, minimal and fast.
- you don’t have to bother with the dev containers mentioned above. You can write code on your native machine with your favorite editor configuration; no need to code inside Docker, which is as weird as it sounds.
So how do you do this?
The application code itself doesn’t have to know anything about the docker use. You’ll install the necessary libraries just as usual.
You’ll only have an extra file in your source code: the docker-compose.yml at the project root dir.
It will list all the ‘services’ you might need for your app to develop/run/test.
And when it’s time to run the app locally, in a separate terminal session, you’ll run `docker compose up -d`. This will bring up all the services listed in the yml file.
To persist the data these services will handle, you can add a volume mapping that points the container’s data store to the one in the host machine.
Here’s an example `docker-compose.yml` for a Ruby Sinatra app that uses PostgreSQL. We won’t install Postgres locally; instead, Docker Compose will download the standard postgres image and run it:
```yaml
services:
  db:
    image: postgres:18.3
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - "${POSTGRES_PORT}:${POSTGRES_PORT}"
    volumes:
      - ~/.local/share/postgres/sinatra_app:/var/lib/postgresql
```
Note the volumes key. It tells the container to save the db’s data on the host machine, rather than in the container itself. This is what you want: it persists the data even when the container is brought down, and it allows you to back up (and restore) the db data easily.
The secrets come from a `.env` file in the same project root dir, which Docker Compose reads automatically for variable substitution. It looks like this:

```sh
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mypassword
POSTGRES_DB=myapp_db
```
Now when you start developing the project, you can start the PostgreSQL server by bringing up a container with this command:

```sh
docker compose up -d
```
And you can start your Sinatra app and it will work with the db as usual. (Notice that we’re mapping the pg container’s default pg port (5432) to that same port number on the host machine.)
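On the app side, the Sinatra code only needs to read the same env vars to reach the container. As a minimal sketch (the helper name `database_url` is made up for illustration; it assumes your db client accepts a standard `postgres://` URL):

```ruby
require "uri"

# Hypothetical helper: builds a postgres:// connection URL from the same
# env vars that docker-compose.yml and .env use.
def database_url(env = ENV)
  user = env.fetch("POSTGRES_USER")
  pass = URI.encode_www_form_component(env.fetch("POSTGRES_PASSWORD"))
  host = env.fetch("POSTGRES_HOST", "localhost")
  port = env.fetch("POSTGRES_PORT", "5432")
  db   = env.fetch("POSTGRES_DB")
  "postgres://#{user}:#{pass}@#{host}:#{port}/#{db}"
end
```

A client such as Sequel or the pg gem can then connect with this URL exactly as if Postgres were installed natively, since the container’s port is mapped to the host.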
In this way, you can just add all the services your development workflow needs to the compose file and bring them all up easily together, and stop them when not needed.
Documentation:

Useful Reads:
Questions:
- What are layers?
- How does caching work?
- How do volumes and network interfaces work?