How do I ensure a REST API does not have a bottleneck when it receives requests? [closed]
I'm creating a REST API that will listen on a public URL and accept uploads of large amounts of data. I understand how to scale the software side with message queues, but what I don't understand is how to avoid a bottleneck between the users and my message queue server farm. As I understand it, at some point there is one machine listening for requests on an IP address, and that machine is both a bottleneck and a single point of failure. But that sounds wrong, since, you know, sites like google.com exist.
My follow-up question is whether you can achieve this scalability with a fixed IP address rather than a fixed URL. Not because I need to, I just want to understand whether, in practice, a website (or other HTTP server) can only scale by using a dynamic set of IPs.
What physical bottleneck determines how much data can be uploaded to one IP address?
The hop with the least bandwidth available for your traffic.
For the updated question on IP endpoint redundancy: see HSRP or CARP. See also load balancing.
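To sketch the load-balancing idea: a single public endpoint can fan incoming uploads out to many backends, so the "one machine on one IP" is only a thin proxy, not the worker. This is a minimal nginx sketch; all hostnames and addresses below are hypothetical placeholders.

```nginx
# Hypothetical nginx config: one public endpoint, many backend workers.
# Addresses and names are placeholders, not real hosts.
upstream upload_backends {
    least_conn;               # route each request to the least-busy backend
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;
    server_name api.example.com;

    client_max_body_size 1g;  # permit large upload bodies

    location /upload {
        proxy_pass http://upload_backends;
        proxy_request_buffering off;  # stream large bodies straight through
    }
}
```

The balancer itself can then be made redundant at the IP level with HSRP, CARP, or keepalived, which is how the "single" IP stops being a single point of failure.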
For multi-site redundancy, see Anycast or Geocast, both of which are quite expensive and complicated.
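On the fixed-URL side of the question: a single hostname can resolve to many IP addresses (DNS round robin), which is one reason a fixed URL scales more easily than a fixed IP. You can see this yourself with a quick resolution check; this is just an illustrative sketch, and the hostname you query is up to you.

```python
import socket

def resolve_all(hostname, port=443):
    """Return the distinct IP addresses a hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# A large public site typically resolves to several addresses, and the
# set you get can differ by resolver and region -- DNS is doing part of
# the load distribution before any packet reaches a server.
print(resolve_all("localhost"))
```

Try it against a large site: the answers rotate and vary by region, so different clients are steered to different front-end IPs without any single machine seeing all the traffic.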