
I recently had the privilege of spending time with my buddy Julian Fahrer to talk about Docker and what it means for Angular developers. Julian has a fantastic course on learning Docker at https://LearnDocker.online, which is a great way to get on the Docker fast track and save yourself a ton of tears and wasted hours. Julian was super fun to hang out with, and I even coerced him into writing a post for your enjoyment. Cheers!

Docker for Front-end Developers by Julian Fahrer

If you think that Docker is only interesting for DevOps and backend-focused developers, you are auf dem Holzweg (on the wrong track, as we say in German).

There are quite a few advantages that Docker can give you when it comes to developing applications with Angular and other front-end frameworks. In this article, I want to go over one of them with you. In order to do that, I prepared a little demo application that you can find here.

Let’s talk about the application for a second. It is a simple Angular app that allows us to manage a todo list. To do so, it interacts with a service providing a REST API, the backend of which is written in Ruby. This brings us straight to our use case for Docker: while developing our Angular app, we need to have a version of the backend available locally.

We could, of course, install Ruby on our machine and clone the repository with the source code of the backend. After that, the fun part begins: Figuring out what dependencies need to be installed and configured. I’m talking about setting up things like databases, search engines, and so on. In our example, we just need to run a single service and a PostgreSQL server for the backend; however, other applications might consist of a myriad of microservices and you need to figure out how to get them all up and running.

What if somebody could give you a single configuration file that allows you to spin up a local version of the backend with a single command? Well, meet Docker.

Hello Docker

Before we get into the nitty-gritty of spinning up an application with all its dependencies in containers – let’s start with something simple and run a single container.

You need to install Docker on your machine if you want to follow along. It is pretty straightforward on macOS, Windows, and all major Linux distributions.
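Once it is installed, a quick sanity check makes sure the Docker CLI can talk to the Docker daemon:

docker version   # prints client and server versions; fails if the daemon is not running
docker info      # prints details about your Docker installation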

With Docker installed, we can now run our first container. Let’s start with a traditional Hello World!

docker container run hello-world

The output of the command will look similar to this:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:3e1764d0f546ceac4565547df2ac4907fe46f007ea229fd7ef2718514bcec35d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

With our command, we instructed Docker to use an image with the name hello-world to start a container. In the container landscape, an image is a portable and executable package of an application and all the application’s dependencies. Container images usually include things like system libraries, system utilities, configuration files, a runtime for the application as well as the source code or compiled version of the application. Because the image is portable, it can be moved to any machine that has the Docker Engine installed and then executed. And it is by executing an image that you create a container. Since the image contains the application and all its dependencies, the container can just do its work without us needing to take care of installing anything besides the Docker Engine! We didn’t even have to manually get the image onto our machine. It was automatically downloaded from Docker Hub – a place to discover and share images. Neat, right?
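If you want to poke around a bit before moving on, the following commands (a quick sketch, not required for the rest of this article) show what Docker downloaded and what happened to the container after it exited:

docker image ls          # lists the images that are now available locally
docker container ls -a   # lists all containers, including the exited hello-world one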

Getting serious

A simple Hello World does not magically bring our backend to life. We need something more sophisticated here. We need a container for our database and another one for our application server. We need to be able to talk to the application server over HTTP and our application needs to talk to the database. I’m just going to throw the commands to achieve that at you and we will talk about what they do and why we will NOT execute them:

docker network create myapp
docker container run \
  --network myapp \
  --name db \
  -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=mysecret \
  -d postgres:9.6-alpine
docker container run \
  --network myapp \
  --name app \
  -e PG_USERNAME=postgres -e PG_PASSWORD=mysecret \
  -e PG_HOST=db -e PG_PORT=5432 -e RUN_MIGRATIONS=true \
  -p 3000:3000 \
  -d jfahrer/angular-demo-backend:latest

With docker network create, we tell Docker to create a container network with the name myapp. Then we use docker container run to start our containers. We give each of them a name with the --name flag. And – you might have guessed it – with --network we attach the containers to the previously created network.
It is common to configure containerized applications via environment variables, and that is what we are doing with the -e flag. We use variables like PG_HOST to make sure that our backend service can communicate with the database. For our app container we also publish port 3000 with -p. This allows us to reach the service running inside the container on port 3000 by talking to localhost on port 3000, so it looks like the application is running natively on our local machine. The last part of each command tells Docker which image we want to use to start the container.
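If you were to run these commands, a few checks like the following (just a sketch; /status is the endpoint we will use again later) would confirm that everything is wired up correctly:

docker container ls                 # both db and app should show up as running
docker network inspect myapp        # shows the network and the containers attached to it
curl http://localhost:3000/status   # talks to the app container via the published port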

We make use of two images here: postgres:9.6-alpine and jfahrer/angular-demo-backend:latest. The image jfahrer/angular-demo-backend:latest is one of my images on Docker Hub, hence the jfahrer/ before the name of the image. My username on Docker Hub is used to namespace the image. On the other hand, postgres:9.6-alpine is not under a namespace, because it is an official image. That means that Docker Inc. makes sure that the image is well taken care of. The part after the colon is a so-called “tag”. It allows us to reference a specific version of an image. There is more to container images, but I would rather take this article in a different direction (otherwise you might be busy reading for a couple of weeks).
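To see tags in action, you can pull images yourself; when no tag is given, Docker assumes latest:

docker image pull postgres:9.6-alpine   # pulls a specific version of the official image
docker image pull postgres              # no tag given, so Docker pulls postgres:latest
docker image ls postgres                # lists the local postgres images and their tags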

We are not going to execute those commands. Mainly because I promised you a single command to spin up all our dependencies. Those commands are also long and hard to remember and type. Instead, we are going to introduce a new tool called Docker Compose. With Docker Compose we can define the images we want to use, environment variables, ports to publish and more in a configuration file. We are going to call it docker-compose.yml. Here is the config for our backend:

version: '3.4'

services:
  app:
    image: jfahrer/angular-demo-backend:latest
    ports:
      - 3000:3000
    environment:
      - PG_HOST=pg
      - PG_PORT=5432
      - PG_USERNAME=postgres
      - PG_PASSWORD=mysecret
      - RUN_MIGRATIONS=true
  pg:
    image: postgres:9.6-alpine
    volumes:
      - pg-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=mysecret
volumes:
  pg-data:

The directives are self-explanatory and I won’t go over the details here. To make things easy, I included the docker-compose.yml as part of the source code of our Angular app. To start all the containers for our backend, we can simply run the following command within the directory that contains the source code:

docker-compose up -d

That’s it! You can verify that it works by pointing your browser to http://localhost:3000/status.
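Or check from the terminal:

docker-compose ps                   # shows the state of the app and pg services
curl http://localhost:3000/status   # the same check without a browser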

But our goal is to consume the service from our Angular application. So let’s run it:

ng serve

To see what’s happening on the backend while we interact with the application, we can use the following command:

docker-compose logs -f app

Now we can go to http://localhost:4200/. Just perform a couple of actions in your browser and you should see logs popping up in your console.

Done testing? Run docker-compose down -v to stop and remove all the containers as well as the data stored in our database. Or omit the -v to keep the data around.
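Docker Compose gives you a few more lifecycle commands worth knowing; here is a small cheat sheet:

docker-compose stop      # stops the containers but keeps them and the data around
docker-compose start     # starts the stopped containers again
docker-compose down      # removes the containers and the network, but keeps the volume
docker-compose down -v   # removes everything, including the pg-data volume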

Conclusion

We used Docker to spin up a whole environment with a single command. This allowed us to have our own separate environment that we can use while working on the frontend. The only dependency that we had to install was Docker.

Ultimately there are many cases where Docker can help you improve your workflows, applications, pipelines, and infrastructure. For example, you could develop an Angular application entirely within a container, meaning there would be no need to install any tools or packages locally on your system. Or you could build and ship your Angular application in a container, allowing others to easily deploy it to production or use it in development. The possibilities are endless and I could keep writing for days. But I’d rather not 🙂
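As a taste of the first idea, a sketch like this (assuming the demo project lists @angular/cli as a dev dependency; the image and flags are illustrative) would run the Angular dev server inside a Node container, with no local Node installation:

# Mount the project into /app, publish the dev server port, and serve the app
# from inside a Node container.
docker container run -it --rm \
  -v "$(pwd)":/app \
  -w /app \
  -p 4200:4200 \
  node:10-alpine \
  sh -c "npm install && npx ng serve --host 0.0.0.0"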

If you are interested in learning what Docker can do for you and gaining mastery over containers, check out my course at https://LearnDocker.online. It takes you all the way from spinning up your first container to fully utilizing containers in every stage of your application’s lifecycle.