Assign multiple IPs to 1 Entry in hosts file

The hosts file does not provide such a mechanism. If you list two IPs for the same name, most resolvers will use only the first one, so there is no such thing as primary and secondary IPs.
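Resolver behavior does vary between platforms, so it is worth checking what your own system actually does. On glibc systems, getent will show every address the resolver returns for a name, including /etc/hosts entries (the hostname here is just an example):

getent ahosts sub.domain.com    # lists all addresses the resolver returns for the name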

Also, the hosts file does not handle URLs; it only handles hostnames like the ones given in the question. A URL contains a protocol and a complete path, such as http://host/path/to/resource.


You can't provide resilience or round robin load balancing via the /etc/hosts file - it is not designed for that purpose.

Instead, your options are ... (in no particular order)

  1. Configure your network properly, so that routes change when a link is dropped
  2. Use DNS round-robin load balancing (not A Good Idea TM) via a managed service (e.g. loaddns.com or dnsmadeeasy.com etc.)
  3. Use a local L4 load balancer such as HAProxy for the outbound traffic, with the back-ends defined as necessary (see the sketch after this list)
  4. Build the resilience into your web application itself
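For option 3, a minimal HAProxy sketch might look like the following. This is illustrative only: the listener port and back-end IPs are hypothetical, and you would tune modes and timeouts for your own traffic.

defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Local listener the application talks to instead of the remote name
frontend local_api
    bind 127.0.0.1:8080
    default_backend api_servers

# Round-robin across the real endpoints, with health checks
backend api_servers
    balance roundrobin
    server api1 11.12.13.14:80 check
    server api2 11.12.13.15:80 check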

/etc/hosts doesn't support round robin, but you can write a simple bash script that uses sed to replace an entry tagged with a #RoundRobin comment (or any other tag you wish to use; just reflect it in the matching line in the script).

#!/bin/bash
# Resolve the name and collect every returned address; grep -v '#' drops the
# resolver's own "Address: 127.0.0.53#53" line from nslookup's output
fqdnips=( $(nslookup sub.domain.com | awk '/^Address/ { print $2 }' | grep -v '#') )

new="${fqdnips[0]}"                                   # first resolved address
old=$(awk '/#RoundRobin/ { print $1 }' /etc/hosts)    # address currently in /etc/hosts
[ -n "$new" ] && [ -n "$old" ] && sed -i "s/$old/$new/g" /etc/hosts

The above script grabs the output of nslookup for sub.domain.com and stores the resolved addresses in an array. It then takes the first value as $new, grabs the existing value for the #RoundRobin tag in /etc/hosts as $old, and lastly performs a sed replace. Because round-robin DNS rotates the order of the answers, each run can pick up a different address.

The /etc/hosts file entry would look like this:

127.0.0.1        localhost
::1              localhost
11.12.13.14      sub.domain.com      #RoundRobin

Lastly, place this script in root's crontab to run every hour or so, and you'll have an /etc/hosts round robin.
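For example, assuming the script is saved as /usr/local/bin/hosts-rr.sh (a hypothetical path) and made executable, root's crontab entry could be:

# Refresh the #RoundRobin entry at the top of every hour
0 * * * * /usr/local/bin/hosts-rr.sh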

This is particularly useful if you have a page that pulls data from an API and the DNS lookup for the API server is causing a lot of hang time in the page's script execution, resulting in high CPU consumption for what would otherwise appear to be a simple page. To avoid the costly DNS lookup (particularly if your site is doing hundreds of them per minute due to heavy traffic), use /etc/hosts to resolve the FQDN of the remote API server. This will dramatically reduce the CPU usage for pulling the API data and generating the page.


Note that at least on macOS, contrary to what the other answers say, the system resolver will return all entries associated with a hostname in /etc/hosts instead of stopping at the first.
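You can verify this directly against the macOS resolver (the hostname here is just an example); with two /etc/hosts lines for the same name, both addresses should be listed:

dscacheutil -q host -a name sub.domain.com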


This is easy to set up with dnsmasq, which serves every matching /etc/hosts record for a name:

  1. install dnsmasq
  2. edit /etc/resolv.conf and set "nameserver 127.0.0.1" as the first DNS server
  3. add a normal DNS server as a fallback (Google's, for example): "nameserver 8.8.8.8" on a second line
  4. make sure the two required records are in your /etc/hosts file
  5. now check with the command host abc.efg.datastore.com

    It should return both records in round-robin fashion, so if one node in the list is down, your application will connect to the other one.
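A minimal sketch of the two files involved (the hostname and the first IP come from the examples above; the second IP is hypothetical):

# /etc/resolv.conf
nameserver 127.0.0.1    # dnsmasq, queried first
nameserver 8.8.8.8      # normal upstream DNS as fallback

# /etc/hosts -- dnsmasq returns both records for the name
11.12.13.14    abc.efg.datastore.com
11.12.13.15    abc.efg.datastore.com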