When you start working on an existing project, whether open source or for a client, setting up the development environment is the first task to solve. If you are lucky, the project has scripts for installing the required applications locally, but you will usually have to install the required infrastructure yourself. And even after setting everything up, it will almost certainly break down at some point later on.
With Docker and container technology this has become much easier to manage. Using containers we can automate not only the installation of the applications themselves, but also the environment they run in. With Dockerfiles and Docker Compose this setup becomes a natural part of the source code, even if it is only used for development purposes.
A common service might consist of one or more web applications, a database and a message broker. Normally this would require installing a web hosting runtime and a database server locally, or sharing an environment that requires all contributors to be online while working. When the database server is updated, everyone has to be informed and update their local instance. Most of the time you will not notice until you work on something dependent on that database, when it breaks and you have to figure out why.
The architecture in figure 1 can be defined in a Docker Compose file, which will provision everything needed to run the service locally.
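A compose file for such a setup could look roughly like the following sketch. The service names web1 and web2 and the ports 80, 81 and 15672 are used later in the article; the build paths, the db service name and the Postgres credentials are assumptions for illustration:

```yaml
version: "3"
services:
  web1:
    build: ./src/Web        # path to the Dockerfile for web1 (assumed)
    ports:
      - "80:80"
  web2:
    build: ./src/Web2       # path to the Dockerfile for web2 (assumed)
    ports:
      - "81:80"
  db:
    image: postgres:12
    ports:
      - "5432:5432"         # expose the default Postgres port outside Docker
    environment:
      POSTGRES_PASSWORD: postgres   # example credentials, not for production
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"         # AMQP
      - "15672:15672"       # web management interface
```

The service names double as DNS names inside the compose network, which is what the applications use to reach each other.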
The Dockerfiles for the web applications can, for example, build on the .NET Core SDK image:
# Build image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS builder
WORKDIR /src
# Copy files and publish
COPY . .
RUN dotnet publish Web --configuration Release -o Web/out

# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=builder /src/Web/out .
ENTRYPOINT ["dotnet", "Web1.dll"]
docker-compose up -d will build the images and start the containers. DNS records are created automatically inside Docker, so that e.g. "web2" resolves to the internal IP of the web2 container. An application can call a web API in "web1" by sending a request to http://web1/api/something. The Postgres database can be accessed with a connection string pointing to its service name in the compose file:
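For example, assuming the database service is named "db" in the compose file and uses default Postgres credentials (both assumptions, adjust to your own setup):

```
Host=db;Port=5432;Database=app;Username=postgres;Password=postgres
```

Inside Docker, "db" resolves to the database container; no IP addresses need to be hard-coded.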
Depending on the Docker configuration on your local machine you should be able to test the web applications in your browser, as their ports are mapped to 80 and 81 (Docker defaults to running on localhost). Note that the default database and AMQP ports are also mapped, making these available from outside Docker. You can access the RabbitMQ web management interface on localhost:15672 and the database on localhost:5432.
Now everything is running inside Docker. To apply changes to one of the applications, we would have to rebuild its image and redeploy the container. This is easily solved by taking down the application you want to work on and running it outside Docker using your development tool of choice. Docker Compose makes this easy:
docker-compose stop web1
When running web1 in your IDE or from the command line, requests to the database or web2 will fail, as those DNS records do not exist outside Docker.
This is solved by adding the names to the local hosts file. On Linux-based systems it is found at /etc/hosts; on Windows a similar file is usually at C:\Windows\System32\Drivers\etc\hosts.
Add all the Docker service names, pointing to localhost:
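The entries would look like this, assuming the compose file uses the service names web1, web2, db and rabbitmq (the latter two are example names from the sketch above; use whatever names your compose file defines):

```
127.0.0.1  web1
127.0.0.1  web2
127.0.0.1  db
127.0.0.1  rabbitmq
```

With these in place, the same connection strings and URLs work whether the application runs inside or outside Docker, since the mapped ports are reachable on localhost.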
Now applications can be run outside Docker while accessing resources inside. Changes in infrastructure can be distributed through source control via the docker-compose file. If your team is not currently using Docker, you can choose to simply run your local database etc. with Docker Compose, and continue to run the applications outside as before.