Docker and NodeJS

Created: 2019-01-21 09:08:39 -0800 Modified: 2019-01-21 09:09:08 -0800

If using pnpm, follow these instructions.

HiDeoo wrote a quick guide for me on this here.

Summary:

  • Since I’m using a monorepo (Lerna), I wanted to avoid copying hundreds of MB of files over to the Docker daemon (which takes forever). The .dockerignore file is supposed to be used for that, but it has some limitations (e.g. you can’t do something like “!packages/*/package.json”). So HiDeoo ignores “*” by default and then re-includes (via “!”) only the files that are needed for the build; see the .dockerignore sketch after this list.
  • When it comes to using private modules, you have to have your .npmrc set up correctly. Unfortunately, “npm adduser” accepts credentials via prompts, not via command-line arguments, so it’s easier to copy a complete .npmrc onto the machine. The way that I did this locally:
    • Create .npmrc_for_docker in the root directory that I pass to the “docker build” command. It’s not named “.npmrc” so that it doesn’t get picked up automatically by normal NPM commands and because it needs to be git-ignored. This file should contain any .npmrc configuration like “save-exact=true” along with registry/auth information (see the .npmrc sketch after this list).
    • Add .npmrc_for_docker to .gitignore.
    • Add “!.npmrc_for_docker” to .dockerignore so that it’s not ignored.
    • Modify the Dockerfile to copy .npmrc_for_docker as .npmrc:
COPY .npmrc_for_docker .npmrc
  • If ever running this from CI, you could omit this step and have CI output the .npmrc from an environment variable since CI should be the thing that has secrets in it, not your repository.
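
The .dockerignore whitelist approach looks roughly like this (a sketch; the re-included package paths are placeholders, not my actual layout):

# Ignore everything by default...
*

# ...then re-include only what the build actually needs. Each path is
# listed explicitly because patterns like "!packages/*/package.json"
# don't work.
!package.json
!yarn.lock
!lerna.json
!.npmrc_for_docker
!packages/overseer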

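The .npmrc_for_docker file itself is just ordinary .npmrc content; a sketch with a placeholder registry and token (both assumptions; use whatever your private registry actually requires):

# Shared NPM configuration...
save-exact=true

# ...plus registry/auth information for private modules.
registry=https://registry.npmjs.org/
//registry.npmjs.org/:_authToken=<your auth token here>
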
I ran into a huge problem with all of this:

Your lockfile needs to be updated, but yarn was run with --frozen-lockfile.

To clarify: “--frozen-lockfile” will install using yarn.lock, but if the lockfile would need any updates, it errors out instead (as this would indicate that something would be different in the resulting build).
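
For context, the failing step is an install along these lines (a sketch; the base image and paths are assumptions, not my actual Dockerfile):

FROM node:10-alpine
WORKDIR /app

# The git-ignored .npmrc_for_docker becomes the container's .npmrc so
# that private modules can be installed.
COPY .npmrc_for_docker .npmrc
COPY package.json yarn.lock ./

# Install exactly what yarn.lock describes; fail instead of writing an
# updated lockfile.
RUN yarn install --frozen-lockfile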

I tried everything that I could think of to fix this:

  • I updated the Yarn version inside the container from 1.9.X to 1.12.X. This seemed to have no impact other than adding in integrity hashes, which apparently aren’t necessary (HiDeoo: And they added integrity as a migration path with unsafe-disable-integrity-migration so this shouldn’t error anw except if the integrity is invalid).
  • I tried to make sure that my npmrc on the host (Linux) was the same as in the container.
  • I wiped out almost everything in Docker in case it was a layer-caching issue.
  • Meanwhile, this all worked for HiDeoo, so I know the problem wasn’t caused by only ever installing the Overseer’s package.json file.

In the end, I never figured out what was causing this issue. This means that the builds in production could be slightly different from the builds in development, which could lead to issues that would be incredibly difficult to track down.

Using Yarn or NPM to start your application

Suppose you have this in your Dockerfile:

ENTRYPOINT ["yarn"]
CMD ["start"]

Then, suppose you have this in your package.json:

"start": "node --single_threaded ./dist/main"

If you start your container and then run htop in the container, you’ll see something like this:

PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
18 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
19 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
20 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
21 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
22 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
23 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
24 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
25 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
26 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.00 node /opt/yarn-v1.9.4/bin/yarn.js start
1 bldeploy 20 0 656M 49048 21472 S 0.0 0.8 0:00.47 node /opt/yarn-v1.9.4/bin/yarn.js start
28 bldeploy 20 0 655M 101M 22132 S 0.0 1.7 0:00.00 node --single_threaded ./dist/main
29 bldeploy 20 0 655M 101M 22132 S 0.0 1.7 0:00.00 node --single_threaded ./dist/main
30 bldeploy 20 0 655M 101M 22132 S 0.0 1.7 0:00.00 node --single_threaded ./dist/main
31 bldeploy 20 0 655M 101M 22132 S 0.0 1.7 0:00.00 node --single_threaded ./dist/main
32 bldeploy 20 0 655M 101M 22132 S 0.0 1.7 0:00.00 node --single_threaded ./dist/main
27 bldeploy 20 0 655M 101M 22132 R 0.0 1.7 4:19.31 node --single_threaded ./dist/main
43 root 20 0 1588 1016 832 S 0.0 0.0 0:00.01 /bin/ash
50 root 20 0 4056 1624 876 R 0.0 0.0 0:00.00 htop

I don’t know if there’s any real CPU impact of doing this, but as you can see, an extra Node process sticks around just so that Yarn can keep running, so you’ll always be using a few extra tens of MB of RAM.
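
If you want to avoid that, one option (a sketch, assuming the same ./dist/main entry point as above) is to skip Yarn at runtime and have Docker run Node directly:

# Run the compiled entry point directly so that no extra "yarn start"
# wrapper process stays resident.
ENTRYPOINT ["node", "--single_threaded", "./dist/main"]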

When exposing ports, also expose 9229-9231 for the Node.js inspector (9229 is the default inspector port; the extra ports cover child processes), e.g.:

ports:
  - "3000:3000"
  - "9229:9229"
  - "9230:9230"
  - "9231:9231"

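For the attach configuration below to connect, the Node process inside the container also has to listen for the inspector on all interfaces rather than just localhost; a sketch, reusing the start script from above:

"start": "node --inspect=0.0.0.0:9229 --single_threaded ./dist/main"
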
Add a launch configuration inside VSC:

{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Docker",
      "address": "localhost",
      "port": 9229,
      "restart": true,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/app"
    }
  ]
}