Multi-datacenter Ansible load balancer template

I am migrating the management of an existing multi-datacenter setup to Ansible, but since I am new to it I am not sure of the best way to model the environment.

I have three data centers D1, D2 and D3. In each, the same configuration is repeated identically:

  • An nginx load balancer (lb.D[n]) bound to a public IP
  • Two application servers (as[1-2].D[n]) that receive traffic solely from the local load balancer
  • A slave (read only) DB server (db.D[n]) from which both the app servers read.

The hosts file I made so far looks something like this:

# DC1 -----------
[dc_1_webservers]
10.43.0.10

[dc_1_appservers]
10.43.0.20
10.43.0.21

[dc_1_dbservers]
10.43.0.30

[dc_1:children]
dc_1_webservers
dc_1_appservers
dc_1_dbservers

# DC2 -----------
[dc_2_webservers]
10.43.10.10

[dc_2_appservers]
10.43.10.20
10.43.10.21

[dc_2_dbservers]
10.43.10.30

[dc_2:children]
dc_2_webservers
dc_2_appservers
dc_2_dbservers

# DC3 -----------
[dc_3_webservers]
10.43.20.10

[dc_3_appservers]
10.43.20.20
10.43.20.21

[dc_3_dbservers]
10.43.20.30

[dc_3:children]
dc_3_webservers
dc_3_appservers
dc_3_dbservers

[webservers:children]
dc_1_webservers
dc_2_webservers
dc_3_webservers

[appservers:children]
dc_1_appservers
dc_2_appservers
dc_3_appservers

I have purposefully left only IP addresses in here because I would like to understand how a pure Ansible solution would work, instead of resorting to DNS.

The problem is correctly populating nginx's reverse-proxy upstream block, so that when the nginx role runs and the config template is copied onto the load balancer machine, only the app servers local to that DC are added. In particular, is it possible to do something like this?

# file /etc/nginx/sites-enabled/loadbalancer.conf on lb.D[n] (i.e. lb.D2)
upstream backend {
    # Iterate over the app servers in the current data center (i.e. D2)
    {% for host in [datacenters][current_datacenter][appservers] %}
    # Add each local app server IP to the load balancing pool
    # (i.e. 10.43.10.20 and 10.43.10.21 for DC2)
    server {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }};
    {% endfor %}
}

For one thing, I am not sure the hosts file entirely makes sense (should I instead be adding variables to the individual entries? With the current layout I cannot do something like [dc][3][appservers], though I am not sure that is where the solution lies).
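One common alternative, addressing the "should I add variables to the entries?" question, is to attach a variable to each DC group in the inventory itself, so a template can find the local app servers without guessing from group names. A sketch, assuming the `dc` variable name (my own choice, not anything Ansible requires):

```ini
# Appended to the existing hosts file
[dc_1:vars]
dc=dc_1

[dc_2:vars]
dc=dc_2

[dc_3:vars]
dc=dc_3
```

With that in place, every host in a DC (including its load balancer) sees a `dc` variable, and the upstream loop reduces to `{% for host in groups[dc + '_appservers'] %}`.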

Thank you very much!

EDIT 1:

The playbook's structure is as follows:

main.yml
hosts
vars.yml
servers/
    webservers.yml
    appservers.yml
roles/
   base/
     files/
       ssh/
       newrelic/
     tasks/
       main.yml
     handlers/
       main.yml
   webserver/
     files/
       ssl_certs/
     templates/
       nginx/
          loadbalancer.j2
     tasks/
       main.yml
     handlers/
       main.yml
   appserver/
     files/
       pip/
         requirements.txt
     templates/
       supervisor/
          gunicorn.j2
     tasks/
        main.yml
     handlers/
        main.yml

The main.yml entry point is only two lines:

---
- include: servers/webservers.yml
- include: servers/appservers.yml

webservers.yml gathers facts about the appservers (I figured that would be necessary to achieve my goal, even though I am not entirely sure how yet). It then invokes a base role, which installs shared SSH keys, NewRelic bindings and other things common to every machine in our cloud, followed by the actual webserver role.

---
- name: Gather data about appservers
  hosts: appservers
  gather_facts: yes
  tasks:
    - debug: msg="Gathered facts for {{ inventory_hostname }}"

- name: Configure all frontend web servers
  hosts: webservers
  sudo: yes
  roles:
    - { role: base }
    - { role: webserver }
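The companion servers/appservers.yml is not shown; presumably it mirrors the webservers play. A sketch under that assumption (role and play names are guesses based on the directory layout above):

```yaml
---
- name: Configure all application servers
  hosts: appservers
  sudo: yes
  roles:
    - { role: base }
    - { role: appserver }
```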

Said "webserver" role installs nginx, copies the SSL certificates and then finally copies over the jinja2 nginx config template.

- name: Install nginx configuration file.
  template: src=nginx/loadbalancer.j2 dest=/etc/nginx/sites-available/{{ project_name }} backup=yes
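Note that a role's `template` task resolves `src` relative to the role's templates/ directory, so with the file at roles/webserver/templates/nginx/loadbalancer.j2 the source is `nginx/loadbalancer.j2`, not `files/loadbalancer.j2`. The task is also usually paired with a handler that reloads nginx and a task that links the site into sites-enabled; a minimal sketch (the `reload nginx` handler name is my own):

```yaml
# roles/webserver/tasks/main.yml (sketch)
- name: Install nginx configuration file.
  template: src=nginx/loadbalancer.j2 dest=/etc/nginx/sites-available/{{ project_name }} backup=yes
  notify: reload nginx

- name: Enable the site.
  file: src=/etc/nginx/sites-available/{{ project_name }} dest=/etc/nginx/sites-enabled/{{ project_name }} state=link
  notify: reload nginx

# roles/webserver/handlers/main.yml
- name: reload nginx
  service: name=nginx state=reloaded
```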

Solution 1:

You can use the magic variables group_names and groups to look up the groups defined in your inventory:

---
- hosts: webservers
  vars:
    dcs: [dc_1, dc_2, dc_3]
  tasks:
  - debug:
      msg: |
        upstream backend {
        {%- for dc in dcs %}
        {%-   if dc in group_names %}
        {%-     for host in groups[dc+'_appservers'] %}
        server {{host}};
        {%-     endfor %}
        {%-   endif %}
        {%- endfor %}
        }

This playbook will give you the following output.

TASK: [debug ]     **************************************************************** 
ok: [10.43.0.10] => {
    "msg": "upstream backend { server 10.43.0.20; server 10.43.0.21;}"
}
ok: [10.43.10.10] => {
    "msg": "upstream backend { server 10.43.10.20; server 10.43.10.21;}"
}
ok: [10.43.20.10] => {
    "msg": "upstream backend { server 10.43.20.20; server 10.43.20.21;}"
}

Adjust the server {{host}}; line as you need to.
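Applied to the original question, the same loop can go straight into the role's template file, using the fact-derived address the question asked about. This is a sketch assuming the app servers expose their private IP on eth0 and that their facts were gathered by the earlier "Gather data about appservers" play (hostvars only contains facts for hosts a play has already touched):

```jinja
# roles/webserver/templates/nginx/loadbalancer.j2
upstream backend {
{% for dc in ['dc_1', 'dc_2', 'dc_3'] %}
{% if dc in group_names %}
{% for host in groups[dc + '_appservers'] %}
    server {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }};
{% endfor %}
{% endif %}
{% endfor %}
}
```

The `if dc in group_names` test works because the dc_N:children definitions make each load balancer a member of its parent dc_N group, so each lb host renders only its own DC's app servers.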