A collection of Docker and docker-compose templates for AdonisJS

Preparations

Whenever you want to run AdonisJS in a container, it’s good to add a SIGINT listener.

Otherwise CTRL + C won’t kill the container.

https://github.com/AdonisCommunity/create-adonis-ts-app/issues/5

server.ts

import 'reflect-metadata'
import sourceMapSupport from 'source-map-support'
import { Ignitor } from '@adonisjs/core/build/src/Ignitor'

sourceMapSupport.install({ handleUncaughtExceptions: false })

const server = new Ignitor(__dirname)
  .httpServer()

server.start()
  .catch(console.error)

// Without it process won't die in container
process.on('SIGINT', () => {
  server.kill(10)
})
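
CTRL + C sends SIGINT, but docker stop sends SIGTERM (and only falls back to SIGKILL after a timeout), so it may be worth handling that signal too. A minimal sketch, reusing the same server.kill call as above:

// docker stop sends SIGTERM before SIGKILL,
// so handle it the same way as SIGINT
process.on('SIGTERM', () => {
  server.kill(10)
})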

It’s also important to have a .dockerignore file with at least node_modules and build in it. Otherwise you might run into all kinds of “fun” issues where the Node version on the host differs from the one in the container, some package version differs, some binary is built for a different architecture, etc.

.dockerignore

build
node_modules
.git
*.log
.env
Dockerfile

Plain Docker template

This is just a really simple, plain Dockerfile that installs and builds everything in two stages.

In the first (build) stage all packages are installed and AdonisJS is built; then only the required files are copied into the final container, and only the required (prod) packages are installed there.


# Build AdonisJS
FROM node:16-alpine as builder
# Set directory for all files
WORKDIR /home/node/app
# Copy over package.json files
COPY package*.json ./
# Install all packages
RUN npm install
# Copy over source code
COPY . .
# Build AdonisJS for production
RUN node ace build --production


# Build final runtime container
FROM node:16-alpine
# Set environment variables
ENV NODE_ENV=production
# Disable .env file loading
ENV ENV_SILENT=true
# Listen to external network connections
# Otherwise it would only listen in-container ones
ENV HOST=0.0.0.0
# Set port to listen
ENV PORT=3333
# Set app key at start time
ENV APP_KEY=
# Set home dir
WORKDIR /home/node/app
# Copy over built files
COPY --from=builder /home/node/app/build .
# Install only required packages
RUN npm ci --production
# Expose port to outside world
EXPOSE 3333
# Start server up
CMD [ "node", "server.js" ]

Then build it with docker build . and run it with docker run -e APP_KEY=super_strok_key_no1_quezzes_it --network host 640b3a53c462, where 640b3a53c462 is the ID of the image you just built.
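
Giving the image a name makes this a bit nicer than copying IDs around; a quick sketch (the tag adonis-app is just an example name):

# Build the image and tag it
docker build -t adonis-app .
# Run it using the tag instead of the image ID
docker run -e APP_KEY=super_strok_key_no1_quezzes_it --network host adonis-app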

Docker template with SQLite

This one is actually exactly the same as above; the only thing that changes is the run command.

You need to add a volume to your container, otherwise all data will be lost with every release, which is most likely not what you want:

docker run -e APP_KEY=super_strok_key_no1_quezzes_it --network host -v /path/on/host:/home/node/app/tmp 640b3a53c462

SQLite is held in tmp/db.sqlite3 by default, so we can mount the whole tmp folder to a folder on the host. That way the DB stays on the host even when the container is killed and a new one starts up (in case of releases). You can read more about Docker volumes in the official docs.
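
If you’d rather let Docker manage the storage location, a named volume works as well (adonis_data here is just an example name):

# Create a volume managed by Docker
docker volume create adonis_data
# Mount it over the tmp folder instead of a host path
docker run -e APP_KEY=super_strok_key_no1_quezzes_it --network host -v adonis_data:/home/node/app/tmp 640b3a53c462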

Docker-compose template with Postgres

A basic docker-compose setup with a DB, using the Dockerfile in the same directory. The Dockerfile can be copy-pasted from above (with the missing Postgres env variables added; see the sketch after the compose file).

version: '3'

services:
  api:
    # Build dockerfile
    build: .
    # Restart container in case of crashes etc
    restart: always
    # Set API to use host networking
    network_mode: host
    # API depends on DB to be there
    depends_on:
      - db
    # Set env variables
    environment:
      APP_KEY: super_strok_key_no1_quezzes_it
      PG_PASSWORD: example
      PG_USER: postgres

  db:
    # Set DB version to run
    image: postgres:13.3-alpine
    # Restart container in case of crashes etc
    restart: always
    # Set DB to use host networking
    network_mode: host
    # Set DB env variables
    environment:
      POSTGRES_PASSWORD: example
    # Mount DB data to a volume,
    # so we don't lose all DB data over deployments
    volumes:
      - database:/var/lib/postgresql/data

# Define the DB volume
volumes:
  database:
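
The Dockerfile above doesn’t define any Postgres connection variables yet. Assuming the default config/database.ts generated by @adonisjs/lucid (which reads PG_HOST, PG_PORT, PG_USER, PG_PASSWORD and PG_DB_NAME), the missing ENV lines could look like this:

# Use the Postgres connection from config/database.ts
ENV DB_CONNECTION=pg
# With host networking Postgres is reachable on localhost
ENV PG_HOST=localhost
ENV PG_PORT=5432
# Credentials and DB name; override them at run time
ENV PG_USER=postgres
ENV PG_PASSWORD=
ENV PG_DB_NAME=postgres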

Docker-compose template with Postgres and uploads

Quite the same as above; the only addition is the uploads volume. Keeping uploads in a volume ensures they persist over deployments.

version: '3'

services:
  api:
    # Build dockerfile
    build: .
    # Restart container in case of crashes etc
    restart: always
    # Set API to use host networking
    network_mode: host
    # API depends on DB to be there
    depends_on:
      - db
    # Set env variables
    environment:
      APP_KEY: super_strok_key_no1_quezzes_it
      PG_PASSWORD: example
      PG_USER: postgres
    # Mount uploads to a volume,
    # so they won't get lost over deployments.
    # Change the uploads path to wherever
    # you store uploads in your app.
    # Also ensure Node.js has write access to it
    # (by default Node will have it)
    volumes:
      - uploads:/home/node/app/public/uploads

  db:
    # Set DB version to run
    image: postgres:13.3-alpine
    # Restart container in case of crashes etc
    restart: always
    # Set DB to use host networking
    network_mode: host
    # Set DB env variables
    environment:
      POSTGRES_PASSWORD: example
    # Mount DB data to a volume,
    # so we don't lose all DB data over deployments
    volumes:
      - database:/var/lib/postgresql/data

# Define the DB and uploads volumes
volumes:
  database:
  uploads:
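
Either compose file can then be brought up like this (api is the service name from the files above; use docker-compose instead of docker compose on older installs):

# Build the image and start everything in the background
docker compose up -d --build
# Follow the API logs
docker compose logs -f api
# Tear everything down (named volumes are kept unless you add -v)
docker compose down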

Host networking?!

Well... yeah. Docker’s networking stack isn’t the best thing ever invented. Under heavy load, Docker networking can take 20-33% of total CPU. It used to be worse and they have made it a little better, but IMO wasting up to a third of your server’s resources is not worth the benefits you get from Docker’s own networking stack.

In case you want to stick with Docker’s own networking, just replace network_mode: host and --network host with port mappings; see the sketch below.
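
A minimal sketch of what that looks like, reusing the service names and ports from the compose files above. Note that without host networking the API reaches Postgres via the service name, so PG_HOST would be db (assuming the default config/database.ts env variables):

  api:
    build: .
    restart: always
    # Map container port 3333 to host port 3333
    ports:
      - '3333:3333'
    environment:
      APP_KEY: super_strok_key_no1_quezzes_it
      # Postgres is reachable via the compose service name
      PG_HOST: db
      PG_USER: postgres
      PG_PASSWORD: example

  db:
    image: postgres:13.3-alpine
    restart: always
    # Only needed if you want to reach Postgres from the host itself
    ports:
      - '5432:5432'
    environment:
      POSTGRES_PASSWORD: example

And for plain docker run:

docker run -e APP_KEY=super_strok_key_no1_quezzes_it -p 3333:3333 640b3a53c462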

When using Windows, it’s easier to use port maps!

Windows has some quirks with network_mode=host: which network interface ends up being the “host” one depends on whether containers run inside WSL, inside WSL2, or as Windows-native containers. So it’s easier to stick to port maps and access your services at localhost rather than go IP hunting.