Building Your First Node App Using Docker

Written by Karl Hughes and edited by Carmen Bourlon

Since its first release in 2013, Docker has quickly become one of the hottest topics in software development and has been adopted by companies large and small across the world. That said, plenty of developers have never had the chance to use Docker or been exposed to it enough to understand its benefits. In this article, we'll give an overview of how Docker works and walk through three examples of using Docker with Node.

What is Docker and what are containers?

Docker is a container engine. Containers are similar to virtual machines, but they don't emulate a whole operating system. Instead, all of the containers you run share the host machine's kernel, which makes them much lighter and more efficient than virtual machines. While a VM may take one to two minutes to start up, a Docker container starts in just a few seconds. Even if you're not an expert in virtualization or DevOps, using containers can still improve your development practices.

Docker offers developers advantages like increased modularity, an easy way to share environment configuration, and tools that make server setup simpler. This makes Docker a great tool for developers working in any language, but in this blog post we'll focus on three examples in Node. First, we'll run a simple "Hello World" script in a Docker container, then we'll move on to an Express web app, and finally, we'll add a database connection to that web app to demonstrate running and linking multiple containers.

Note: the complete code for this blog post is available on GitHub. Use the start-here branch if you're following this tutorial, or the master branch if you just want the final working product.

1. Running a Node Script in Docker

Before you get started, install Docker for your operating system. Docker runs on Windows, Linux, or Mac, and it's free if you're using the Community Edition. You can make sure Docker is installed and running with the command docker -v. If you're not on version 17.0 or greater, download the latest version before continuing.

Next, clone the repository used for this demo and switch to the start-here branch:

git clone https://github.com/karllhughes/node-docker-demo.git
cd node-docker-demo
git checkout start-here

To get a feel for how Docker works, let's start with a simple example. In the root of the repository there's a file called hello.js containing a console log statement like this:

console.log("Hello World!");

To run this script in a Docker container, enter this command in your terminal:

docker run --rm -v $(pwd):/app -w /app node:9 node hello.js

After Docker downloads the image (it may take a couple minutes if this is your first time using it), you should see Hello World! in your terminal. Congratulations, you just ran your first Node script in a Docker container!

What's Going on Here?

To better understand what you just did by running the Docker command above, it's helpful to know how Docker works. Docker uses images to run containers, and images are built from Dockerfiles. In the example above, we specified node:9 as our image, which instructs Docker to download an image with Node v9 installed and use it to run the hello.js script.

But what about the rest of that command above? Let's dive into what's going on:

  • docker run - This is the Docker command that runs a container from an image. There are dozens of options you can set when using the docker run command, but we've used a bare minimum set to get started.
  • --rm - By default, Docker runs a container's command and then shuts the container down, but instead of deleting the container, it keeps it around in case it's needed later. Because we don't want to re-run this container, we set the --rm flag, which deletes the container as soon as it exits. This saves space and is generally a good practice for one-off scripts like this.
  • -v $(pwd):/app - Each container has its own isolated filesystem, so it typically won't be able to access files on your computer (called the "host" machine). In order to get the hello.js file into the container, we use a bind mount. This "binds" files in the host machine's directory to the /app directory within our Docker container's filesystem.
  • -w /app - Docker images usually define a "working directory", but we've overridden this value. This sets the base path for any commands run in this container to /app.
  • node:9 - At this point in the command, we've set all the options for the container, and this piece tells Docker what image to use. Docker Hub is the default registry Docker pulls images from, and it hosts most popular open source images. In this case, we're using the Node v9 image. If we wanted a different version of Node to run this script, it would be as easy as changing this part of the command to node:4 or node:6.
  • node hello.js - Finally, this is the actual command run in the container. Containers should run only one command, but in some cases that command may be a long-running one (for example, running a Node server) as we'll see in the next example.
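To see how the pieces fit together, here's the same command assembled from named parts (a sketch; the variable names are our own, not anything Docker requires):

```shell
# The same one-off run, with each swappable piece pulled into a variable.
IMAGE="node:9"            # change to node:8 to run the script under Node 8
HOST_DIR="$(pwd)"         # host directory to bind-mount into the container
WORK_DIR="/app"           # working directory inside the container
RUN_CMD="node hello.js"   # command executed inside the container

# Print the assembled command (drop the echo to actually run it)
echo docker run --rm -v "$HOST_DIR:$WORK_DIR" -w "$WORK_DIR" "$IMAGE" $RUN_CMD
```

Swapping IMAGE to node:8 and re-running is all it takes to test the same script against a different Node version.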

2. Using Docker to Run an Express App

Now that you've got a basic understanding of how Docker runs a single Node script, let's explore what it will take to run an Express web application on Docker.

In your terminal, navigate to the node-docker-demo repository that you cloned in the previous section, and be sure that you're on the start-here branch of the repository. This repository already has the Node code we'll need to run the application.

Next, create a file at the root of your project directory called Dockerfile. Dockerfiles are configuration files for Docker images. In short, you write a Dockerfile and use it to build a Docker image. Next you'll run the image to create an instance of a Docker container.

Open up the Dockerfile in any IDE or text editor, and add the following:

## Specifies the base image we're extending
FROM node:9

## Create base directory
RUN mkdir /src

## Specify the "working directory" for the rest of the Dockerfile
WORKDIR /src

## Install packages using NPM 5 (bundled with the node:9 image)
COPY ./package.json /src/package.json
COPY ./package-lock.json /src/package-lock.json
RUN npm install --silent

## Add application code
COPY ./app /src/app
COPY ./bin /src/bin
COPY ./public /src/public

## Add the nodemon configuration file
COPY ./nodemon.json /src/nodemon.json

## Set environment to "development" by default
ENV NODE_ENV development

## Document that the app listens on port 3000 (published with -p at run time)
EXPOSE 3000

## The command uses nodemon to run the application
CMD ["node", "node_modules/.bin/nodemon", "-L", "bin/www"]

You should also create a .dockerignore file at the root directory of your project. Include the following lines as well as any other configuration files that you do not want included in the Docker image:

.git
.idea
**/node_modules
.DS_Store
.data

This will ensure that Docker doesn't include your git history, IDE configuration, or local node_modules in the image that it builds. Including these files would take up space and pose possible security risks if you decide to distribute or share your Docker image later.

In order to build this Dockerfile and get a Docker image, run the following command:

docker build -t node-docker .

This creates an image with the "tag" (or name) node-docker. Now we can use this tag to run a container of our application:

docker run --rm -v $(pwd)/app:/src/app -v $(pwd)/public:/src/public -p 3000:3000 node-docker

Right now, only the home page will work as we haven't set up or connected the database yet, but you can see how simple it was to get this Node app running in Docker. We didn't even have to install any node_modules on our host machine as Docker handled that when building the image.

Most of the options in this docker run command were covered when we ran the Node script in the previous exercise, but the last option (-p 3000:3000) maps the Docker container's port 3000 to your host machine's port 3000, so when you open your browser and navigate to localhost:3000, you should see the home page.
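The two sides of -p don't have to match, either. To serve the same app on host port 8080 instead, only the host side of the mapping changes:

```shell
docker run --rm -v $(pwd)/app:/src/app -v $(pwd)/public:/src/public -p 8080:3000 node-docker
```

The app inside the container still listens on 3000; Docker forwards host traffic from 8080 to it, so you'd browse to localhost:8080.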

Exiting the Container

To exit the container, pressing Ctrl+C (Control + C on a Mac) should be enough. If that doesn't work, you can open a new terminal window and type docker ps:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED                  STATUS              PORTS                    NAMES
e1fc85b838f4        node-docker         "node node_modules/.…"   Less than a second ago   Up 4 seconds        0.0.0.0:3000->3000/tcp   mystifying_liskov

Then use the CONTAINER ID to stop the process:

docker stop e1fc85b838f4
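Hunting down container IDs gets old quickly. If you start the container with the --name option (a variant; the run command above didn't set one), you can stop it by name instead. The -d flag, covered in the next section, runs the container in the background so your terminal stays free:

```shell
# Hypothetical variant: name the container at startup...
docker run --rm -d --name nd-app -p 3000:3000 -v $(pwd)/app:/src/app -v $(pwd)/public:/src/public node-docker

# ...then stop it by that name instead of its ID
docker stop nd-app
```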

Now you've got a simple Express application running in a Docker container, but it isn't a very realistic application yet. Most real web applications need a database connection, so let's look at what it takes to add Postgres to this project.

3. Adding a Database

Typically, setting up a database for local development means installing Postgres locally and then connecting to it through one of your machine's ports. The problem with this approach is that you're locked into running one version of the database, and if you want to run multiple applications against different Postgres databases, you have to make sure each configuration is set up properly. These configurations might differ for each developer on your team, so installing a database locally or on a virtual machine is a decent amount of work.

Docker makes this much easier. Instead of installing a database on our host machine, we can simply run a database container and link it to our web application container.

The node-docker-demo repository uses Sequelize, so the code for this is already in place; you can check it out in the /app/models directory.

We can start a database container using the Postgres image available on Docker Hub:

docker run -d --rm -p 5432:5432 -e POSTGRES_USER=admin -v $(pwd)/.data:/var/lib/postgresql/data -v $(pwd)/sql:/sql --name nd-db postgres:9.6

You can verify that the database is running by typing docker ps and looking for the postgres:9.6 image in the list of running containers.

We've seen some of the options above in previous docker run commands, but let's take a look at the new ones:

  • -d - Adding this flag causes the container to run in detached mode, meaning that your terminal isn't attached to the container's process. This will allow us to run another container without opening a new terminal window.
  • -e POSTGRES_USER=admin - This passes in an environment variable that tells Postgres to create a user and a database called admin. If we were running this app in a hosted environment, we would definitely want to add a password using the POSTGRES_PASSWORD environment variable, but we'll skip that for this tutorial.
  • --name nd-db - Naming your containers is optional, but it will make linking to them easier. If you don't name your container, Docker will make up a name, but it will be different each time you run the image.
  • postgres:9.6 - One of the advantages of Docker is the ability to switch database versions effortlessly. If you wanted to use another version of Postgres, you would simply change the image tag to one of the other available versions.
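As noted above, a hosted environment should also set a password. A sketch of the same command with one added (the value here is a placeholder, and the app's database configuration would need to match):

```shell
docker run -d --rm -p 5432:5432 \
  -e POSTGRES_USER=admin \
  -e POSTGRES_PASSWORD=changeme \
  -v $(pwd)/.data:/var/lib/postgresql/data \
  -v $(pwd)/sql:/sql \
  --name nd-db postgres:9.6
```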

Now that the database container is running, let's set up the schema. There are two files in the /sql directory that we need to run - one for database seeds and one for the migrations.

In order to run a command on a running container, we'll use the docker exec command:

docker exec nd-db psql admin admin -f /sql/migrations.sql

This runs the migrations.sql file inside the Postgres container. The file creates a table called colleges in the admin database.

Next, run the seeds.sql file in the same way:

docker exec nd-db psql admin admin -f /sql/seeds.sql
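If you'd like to confirm the seeds ran before wiring up the app, psql's -c flag runs a single query; counting the rows in the colleges table should report the three seeded records:

```shell
docker exec nd-db psql admin admin -c "SELECT COUNT(*) FROM colleges;"
```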

You should now have a Postgres database table with three records in it that we will be able to use in our application. This time when we start the web application container, we'll add the --link option to indicate that the web application should be linked to the Postgres container. We'll also add the -d flag to make sure the application container runs in the background:

docker run --rm -p 3000:3000 -d -v $(pwd)/app:/src/app -v $(pwd)/public:/src/public --link nd-db --name nd-app node-docker

Now when you load the application and go to localhost:3000/colleges, you should see the three records we added above. We've successfully linked two containers together to build a more realistic Node app using Docker!

What's next?

Many Node developers know that real applications are usually more than just an Express app and a database. We often need to build frontend assets, connect a web server (like Nginx), add a cached data store (like Redis), and set up a logging service. Each of these runs in its own container and connects to the primary web application much like we connected our database. Once this process of connecting containers becomes unwieldy, check out Docker Compose, which lets you run all your containers at once with a single configuration file.
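To give a taste of where that leads, a docker-compose.yml for just the two containers in this post might look something like the sketch below (the layout is an assumption based on this project; the database service is named nd-db because that's the hostname our app was configured to reach):

```yaml
version: "3"
services:
  nd-db:
    image: postgres:9.6
    environment:
      POSTGRES_USER: admin
    volumes:
      - ./.data:/var/lib/postgresql/data
      - ./sql:/sql
  web:
    build: .                # builds the Dockerfile from this post
    ports:
      - "3000:3000"
    volumes:
      - ./app:/src/app
      - ./public:/src/public
    depends_on:
      - nd-db
```

With Compose, containers on the same network reach each other by service name, so no --link flags are needed, and docker-compose up starts both containers with one command.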

If you use this project to start your next Dockerized Node app, keep a few things in mind about working with Docker. First, node_modules are installed when your Docker image is built, so if you need to add a new NPM module, be sure to stop your container, rebuild your image, and then start the container again. You don't need to stop and restart the database container each time, though. Second, the data in your database is stored inside the container rather than on your host machine unless you use a volume or bind mount (the -v $(pwd)/.data option in our database command above does exactly this). Without one, you could lose your data when the container goes down, so make sure you understand Docker volumes before deploying an application like this to the web with real data.

Docker allows you to more easily upgrade, experiment with, and share your stack across your team, but it does come with a learning curve. If you're still scratching your head about Docker, take some time to read the official documentation. While not JavaScript/Node specific, it reveals a lot about what you can do with Docker to maximize its effectiveness.

The contributors to JavaScript January are passionate engineers, designers and teachers. Emily Freeman is a developer advocate at Kickbox and curates the articles for JavaScript January.