How to Use Traefik as Ingress Router on AWS
In today’s world, microservices are everywhere, in cloud environments and on premises alike. This brings an application ingress routing challenge to any infrastructure that runs a large number of Docker containers.
CHALLENGE
Before using Traefik, incoming traffic was routed to containers running on AWS EC2 instances via an AWS Elastic Load Balancer. For each application, we had to manually define a listener rule (for example, example.com/foo), a health check, a traffic port, and so on. Moreover, we had to create a target group for each application with its associated backend servers.
These steps become unmanageable and tedious as the number of applications, environments (dev, test, staging, prod), and workloads grows.
Before Traefik, our application routing structure looked like the following:
SOLUTION
We decided to deploy a reliable, high-performance, secure ingress router behind the AWS Load Balancer to handle the application routing challenge. Traefik was the answer for us. Traefik is an open-source, fast, and reliable edge router written in Go. It is cloud native, easy to adopt, and has great community support!
- Traefik routes incoming requests to the responsible container by using Docker container labels.
- Traefik communicates with the Docker engine by listening on the Docker socket. In this way, Traefik automatically discovers the routes of your containers even as you add, remove, or scale them :) (no manual configuration or restart needed!)
With Traefik as the ingress proxy, the structure now looks like this:
Overview
Imagine that you have deployed a bunch of microservices with the help of an orchestrator (like Docker Swarm or Kubernetes) or a service registry (like etcd or consul). Now you want users to access these microservices, and you need a reverse proxy.
Traditional reverse-proxies require that you configure each route that will connect paths and subdomains to each microservice. In an environment where you add, remove, kill, upgrade, or scale your services many times a day, the task of keeping the routes up to date becomes tedious.
This is when Traefik can help you!
What we will cover in this post
- Install Traefik v2 on Docker Swarm Manager Nodes
- Deploy a sample application whose incoming requests will be routed via Traefik
- Scale out Docker containers and see how Traefik automatically discovers the new services, with no configuration change or restart needed!
Prerequisites
- I assume you already have 3 nodes with Docker installed and Swarm mode initialized.
Required on the AWS side
- An internet-facing AWS Load Balancer that accepts incoming traffic on port 443
- An AWS target group associated with the 3 Docker Swarm manager nodes on port 80 (we use Traefik on port 80 because incoming HTTPS is offloaded on the AWS Load Balancer side); see the CLI sketch after this list
- A DNS record that resolves to the AWS Load Balancer
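If you manage the AWS side with the CLI, the setup can look roughly like the sketch below. The target group name, VPC ID, instance IDs, and ARNs are placeholders, not values from this environment.

# Target group pointing at the 3 Swarm manager nodes on port 80 (placeholder values throughout)
aws elbv2 create-target-group \
  --name traefik-swarm-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0

# Register the manager node instances with the target group
aws elbv2 register-targets \
  --target-group-arn <TARGET_GROUP_ARN> \
  --targets Id=<MANAGER_1_ID> Id=<MANAGER_2_ID> Id=<MANAGER_3_ID>

# HTTPS listener (port 443) on the internet-facing load balancer, forwarding to the target group
aws elbv2 create-listener \
  --load-balancer-arn <LOAD_BALANCER_ARN> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<ACM_CERTIFICATE_ARN> \
  --default-actions Type=forward,TargetGroupArn=<TARGET_GROUP_ARN>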
Install Traefik
Traefik has released version 2, which includes major features compared to version 1, such as TCP routing support, middleware definitions, canary deployments, a new dashboard, and much more.
First, create an overlay Docker network for the swarm (run this once, on a manager node):
“The overlay network driver creates a distributed network among multiple Docker daemon hosts.”
docker network create --driver=overlay traefik-net
Deploy Traefik using the following docker-compose file:
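Below is a minimal sketch of what traefikv2.yaml can look like, assuming HTTPS is already terminated on the AWS Load Balancer so Traefik only needs a single web entrypoint on port 80. The image tag and the global placement on manager nodes are assumptions; adapt them to your environment.

# traefikv2.yaml (sketch): pin the Traefik v2 image tag you actually use
version: "3.7"

services:
  traefik:
    image: traefik:v2.4
    command:
      - "--providers.docker=true"
      - "--providers.docker.swarmMode=true"
      # Only route containers that explicitly set traefik.enable=true
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=traefik-net"
      # The 'web' entrypoint referenced by the router labels later in this post
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      # Traefik discovers services by listening on the Docker socket
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-net
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager

networks:
  traefik-net:
    external: true

Mounting the Docker socket read-only is what lets Traefik discover your containers, and exposedbydefault=false makes sure only containers labeled traefik.enable=true are published.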
Now you can deploy Traefik using the following command:
docker stack deploy traefik-ingress --compose-file=traefikv2.yaml
Our ingress proxy is ready to accept and route requests to the responsible container.
But wait… Didn’t we need to create a reverse proxy configuration file (like nginx.conf) and define routing rules?
That’s right: we won’t manage any reverse proxy configuration file :)
Traefik uses container labels to retrieve its routing configuration. Attach labels to your containers and let Traefik do the rest!
Deploy Sample Application
Now we can deploy a bunch of applications into Docker Swarm, and Traefik will route incoming requests to each container.
I’ll deploy the example whoami application. It’s a small application that exposes OS and HTTP request information.
Deploy the whoami app using the following docker-compose file:
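Here is a minimal sketch of whoami.yaml; the traefik/whoami image is an assumption (any small HTTP echo service works), and note that in Swarm mode the Traefik labels must go under deploy.labels.

# whoami.yaml (sketch): the labels are explained one by one below
version: "3.7"

services:
  whoami:
    image: traefik/whoami
    networks:
      - traefik-net
    deploy:
      replicas: 1
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`example.com`) && PathPrefix(`/whoami`)"
        - "traefik.http.routers.whoami.entrypoints=web"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"

networks:
  traefik-net:
    external: true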
docker stack deploy whoami --compose-file=whoami.yaml
Now let me explain the meaning of each container label:
"traefik.enable=true"
Tells Traefik to expose this container. If this label is not added, Traefik will not route incoming requests to it.
"traefik.http.routers.whoami.rule=Host(`example.com`) && PathPrefix(`/whoami`)"
A host- and path-based matching rule. Traefik will route all requests with host example.com and path prefix /whoami to this container.
"traefik.http.routers.whoami.entrypoints=web"
Traefik accepts network packets and listens on ports via entrypoints. This tells Traefik that the container uses the entrypoint named web.
"traefik.http.services.whoami.loadbalancer.server.port=80"
Indicates to Traefik that the container listens on port 80 internally.
Now make a request to example.com/whoami. Traefik will accept the incoming request, then route it to the responsible container:
curl example.com/whoami
Hostname: c6bfdefeff53
IP: 127.0.0.1
IP: 10.0.2.8
IP: 172.19.0.4
RemoteAddr: 10.0.2.10:49484
GET /whoami HTTP/1.1
Host: example.com
User-Agent: curl/7.47.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.0.0.2
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 4b53c653542d
X-Real-Ip: 10.0.0.2
Congratulations! You have successfully deployed your application, and the request was routed via Traefik.
Scale Out Docker Containers
Last but not least, imagine that incoming requests to the whoami application increase and we need to scale the container horizontally. Don’t worry, Traefik automatically discovers your Docker services and load balances requests using a round-robin algorithm by default.
Now let’s scale the whoami container to 3 replicas, then see how each request is routed to an individual container.
docker service scale whoami_whoami=3
whoami_whoami scaled to 3
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
Now make 3 requests to example.com/whoami
curl example.com/whoami
Hostname: 403d0f774616 (container ID)
IP: 127.0.0.1
IP: 10.0.2.17 (container IP)
IP: 172.19.0.3
RemoteAddr: 10.0.2.10:60216
GET /whoami HTTP/1.1
Host: example.com
User-Agent: curl/7.47.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.0.0.2
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 747aed1f991b
X-Real-Ip: 10.0.0.2
On the second request, as you can see below, the request is routed to another container running the whoami application in the Docker Swarm cluster.
curl example.com/whoami
Hostname: f44ae1bcc1c6
IP: 127.0.0.1
IP: 10.0.2.19
IP: 172.19.0.4
RemoteAddr: 10.0.2.10:37662
GET /whoami HTTP/1.1
Host: example.com
User-Agent: curl/7.47.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.0.0.2
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 747aed1f991b
X-Real-Ip: 10.0.0.2
And the third request:
curl example.com/whoami
Hostname: 055f72e8304f
IP: 127.0.0.1
IP: 10.0.2.20
IP: 172.19.0.4
RemoteAddr: 10.0.2.10:40700
GET /whoami HTTP/1.1
Host: example.com
User-Agent: curl/7.47.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.0.0.2
X-Forwarded-Host: example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 747aed1f991b
X-Real-Ip: 10.0.0.2
That’s it! Your ingress routing proxy solution works!
As you can see,
- We did not manually edit any reverse proxy configuration rule file.
- We did not restart our reverse proxy to apply changes.
- We just deploy our application with the related Docker labels and let Traefik do the rest!
Lastly, if you want to retrieve the IP address of a container, run the docker inspect command below:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <CONTAINER_ID_HERE>