Package And Deploy Docker Containers To ECS

Microservices architectures often employ containers such as Docker to package application services. Containers provide many benefits, including an isolated runtime environment and self-contained ‘packages’.

Most modern cloud providers and orchestration tools such as Kubernetes work with containers, enabling application containers to be deployed and scaled with little manual intervention. However, to leverage this advantage, the application architecture and code have to be built in a way that facilitates it.

Packaging Code

There are several factors to consider when applications run from containers, including:

  • Where to store configurations
  • How communication is routed in and out of the container
  • Idempotent services

Seamless scaling of microservices means that configurations should be stored in a central location, such as a database or a cloud configuration store (e.g. the table store or parameter storage services that most cloud providers offer). Applications should be “stateless”, and all instances of a service should be identical.

Certain configurations, such as connection strings and internal endpoints, might need to be passed in dynamically. In these cases, the settings should be provided as environment variables when the containers are started.
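As a minimal sketch, an entrypoint script might read such settings from the environment and fall back to safe defaults; the variable names and defaults here are illustrative, not taken from the original application:

```shell
#!/bin/sh
# Read settings injected at container start; fall back to local defaults
# when a variable is not provided. Names are illustrative.
DB_HOST="${DB_HOST:-localhost}"
DB_SCHEMA="${DB_SCHEMA:-public}"
echo "starting service with db host ${DB_HOST} and schema ${DB_SCHEMA}"
```

The same variables can then be supplied per environment via `docker run -e` or a docker-compose `environment` block.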

Example of environment variables configured in a docker-compose file:

version: '3.1'
services:
  ghostblog1:
    image: ghost:3.12.0-alpine
    ports:
      - 6000:2368
    restart: always
    volumes:
      - ghost-content:/var/lib/ghost/content
    environment:
      url: https://mycloudology.com
      database__client: mysql
      database__connection__host: ...
      database__connection__port: 3306
      database__connection__user: ...
      database__connection__password: ...
      database__connection__database: ...
volumes:
  ghost-content: {}

Docker:

docker run -e "NPGSQL_DB=[conn_string]" -e "NPGSQL_DB_SCHEMA=[schema]" exampleappdockerimage

Example

Assuming we have a .NET Core API service listening on port 5000 and a database connection using port 1433, the application can be built and packaged with the following Dockerfile.

The Dockerfile should be at the root of the app:

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /build_app/ExampleApp
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS runtime
WORKDIR /app
COPY --from=build /build_app/ExampleApp/out ./

RUN ls .
#RUN chmod u+x dotenv.env
#RUN /bin/bash -c "source dotenv.env"
RUN chmod u+x TaskboardServerDnx.pfx
RUN chmod u+x ./Content
RUN chmod u+x ./EmailTemplates/Templates
RUN pwd

# EXPOSE documents the ports the container listens on; 5000 is the API port.
EXPOSE 5000
EXPOSE 1433

ENTRYPOINT ["dotnet", "ExampleApp.dll"]

Command to package the application into a Docker image:

docker build -t exampleapp -f Dockerfile .

The packaged image will appear in the list of local Docker images. At this point the image can be pushed to a Docker registry and distributed further.

Docker Registry

To distribute the Docker image, each ECS instance will need to pull the new image and start a new container from it. See the linked article for an example of creating a Docker registry and interacting with it.
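As a sketch of the push step, the local image is re-tagged with the registry's address and then pushed; the registry host and version tag below are placeholders, and the block prints the commands rather than running them:

```shell
# Compose the registry-qualified image reference. Repository names must
# be lowercase; the registry host and tag below are placeholders.
REGISTRY="registry.example.com:5000"
IMAGE="exampleapp"
REF="${REGISTRY}/${IMAGE}:1.0.0"
echo "docker tag ${IMAGE} ${REF}"
echo "docker push ${REF}"
```

Each ECS instance can then pull the pushed reference with `docker pull` and start a new container from it.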

Reverse Proxy

One way to run multiple microservices on a single ECS instance is to run multiple containers “behind” a web server acting as a reverse proxy, such as Nginx.

In the docker-compose.yml below, localbox should be mapped to the Docker host's IP address.

version: '3.1'
services:
  production-nginx-container:
    container_name: 'production-nginx-container'
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./production.conf:/etc/nginx/conf.d/default.conf
      - ./dh-param/dhparam-2048.pem:/etc/ssl/certs/dhparam-2048.pem
      - /docker-volumes/etc/letsencrypt/live/exampleApp.com/fullchain.pem:/etc/letsencrypt/live/exampleApp.com/fullchain.pem
      - /docker-volumes/etc/letsencrypt/live/exampleApp.com/privkey.pem:/etc/letsencrypt/live/exampleApp.com/privkey.pem
      - /docker-volumes/data/letsencrypt:/data/letsencrypt
    networks:
      - docker-network
    extra_hosts:
      - "localbox:xxx.xx.xx.xx"

networks:
  docker-network:
    driver: bridge

docker-compose.yml for starting the exampleApp containers on docker-network:

version: '3.1'

services:
  production-exampleApp-01:
    container_name: 'production-exampleApp-01'
    image: exampleappdockerimage
    ports:
      - 5000:5000
    environment:
      NPGSQL_DB: '...'
      NPGSQL_DB_SCHEMA: 'exampleApp'
    networks:
      - docker-network

  production-exampleApp-02:
    container_name: 'production-exampleApp-02'
    image: exampleappdockerimage
    ports:
      # host port 5001 maps to container port 5000, where the app listens
      - 5001:5000
    environment:
      NPGSQL_DB: '...'
      NPGSQL_DB_SCHEMA: 'exampleApp'
    networks:
      - docker-network

networks:
  docker-network:
    driver: bridge

nginx.conf

upstream exampleApp {
    server localbox:5000;
    server localbox:5001;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name exampleApp.com;
    ...

    location / {
        proxy_pass http://exampleApp;
        #proxy_set_header X-Forwarded-For $remote_addr;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
    }
}
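The server block above only handles HTTPS. A common companion, not shown in the original, is a port-80 block that redirects plain HTTP to HTTPS while still serving Let's Encrypt challenges from the mounted volume; a sketch:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name exampleApp.com;

    # Serve ACME HTTP-01 challenges from the mounted letsencrypt volume.
    location /.well-known/acme-challenge/ {
        root /data/letsencrypt;
    }

    # Redirect everything else to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
```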

Connect to RDS

Whenever an application uses an RDS connection, it is best practice to create a database user per application; different applications should not share the same user on an RDS database instance.

The user should have as few access permissions as possible.
Example of a user (_admin) with limited database access permissions used by the application.
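The Npgsql connection strings used earlier suggest a PostgreSQL RDS instance; a hedged sketch of creating such a limited user follows, where the user, database, and schema names and the password are all illustrative:

```sql
-- Illustrative PostgreSQL statements; all names are assumptions.
CREATE USER exampleapp_admin WITH PASSWORD 'change-me';
GRANT CONNECT ON DATABASE exampleapp_db TO exampleapp_admin;
GRANT USAGE ON SCHEMA exampleapp TO exampleapp_admin;
-- Data access only: no DDL rights such as CREATE or DROP.
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA exampleapp TO exampleapp_admin;
```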