There are several ways to solve this. The approach I would use is as follows.

A single container is responsible for answering requests to the URL www.teste.com, acting like a load balancer. This container receives each request and, depending on its path, forwards it to a second container. How you split the paths between containers is up to you.

For my load balancer, I wrote a small Go program that receives the request and passes it on to another server based on the path it was called with:

package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
)

func router(w http.ResponseWriter, r *http.Request) {
    // Choose the backend based on the request path.
    var url string
    switch r.URL.Path {
    case "/stack":
        url = "http://app1.dev" + r.URL.Path
    case "/overflow":
        url = "http://app2.dev" + r.URL.Path
    default:
        url = "http://app3.dev" + r.URL.Path
    }

    resp, err := http.Get(url)
    if err != nil {
        w.WriteHeader(500)
        w.Write([]byte(fmt.Sprintf("Could not call '%s'.\n", url)))
        return // without this return, resp is nil and the code below panics
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        w.WriteHeader(500)
        w.Write([]byte(fmt.Sprintf("Could not read '%s'.\n", url)))
        return
    }

    // Relay the backend's status code and body to the client.
    w.WriteHeader(resp.StatusCode)
    w.Write(body)
}

func main() {
    http.HandleFunc("/", router)
    err := http.ListenAndServe(":80", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
To orchestrate the containers, I chose to write a docker-compose.yml in which I describe all of the application's containers:
mm_lb:
  image: mm/lb:latest
  container_name: mm_lb
  links:
    - mm_app1:app1.dev
    - mm_app2:app2.dev
    - mm_app3:app3.dev
  ports:
    - "80"
mm_app1:
  image: mm/app1:latest
  container_name: mm_app1
  ports:
    - "80"
mm_app2:
  image: mm/app2:latest
  container_name: mm_app2
  ports:
    - "80"
mm_app3:
  image: mm/app3:latest
  container_name: mm_app3
  ports:
    - "80"
Notice that the configuration for the mm_lb container creates links to the other containers. This is how the load balancer can reach them while it, in turn, answers the main URL.
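
Bringing the stack up follows the usual Compose workflow. Note that `ports: - "80"` with no host part publishes the container port on an ephemeral host port, so you have to ask Compose which port was chosen (the port number shown below is only an example):

```shell
# Start all four containers in the background
docker-compose up -d

# Ask Compose which host port was mapped to the load balancer's port 80
docker-compose port mm_lb 80
# e.g. 0.0.0.0:32768

# A request to /stack through that port is forwarded to mm_app1
curl http://0.0.0.0:32768/stack
```

If you want the load balancer reachable on a fixed port instead, map it explicitly in the mm_lb service, e.g. `- "80:80"`.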