How exactly does network pricing work on cloud platforms? And how can I protect myself from attacks designed to drive up my bill?

Cloud services provide real benefits for projects that are built specifically to use them. If we are talking about application/web hosting, then cloud pricing models suit businesses whose main income comes from the users of that application or website.

If that is not your main source of income (when it is, you can easily translate raw web traffic into profit), then cloud hosting might not suit you at all.

Keep in mind that if your application was not built for cloud infrastructure (like the game server you describe), you won't see any benefit over single-instance VPS/dedicated hosting unless you spend a fair amount of time on devops work. That same devops work is what it takes to effectively prevent the attack on your bandwidth that you described.

That kind of attack already has a name, so you can find some more or less viable advice by searching for "Denial of Wallet".


Neither cloud vendor offers XX bandwidth at a fixed price. Pricing is consumption based.
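
As a rough illustration of what consumption-based billing means for egress (the per-GB rate and traffic figures below are placeholders, not quotes from any provider's price sheet):

```python
# Back-of-the-envelope egress cost under consumption-based pricing.
# The rate and traffic figures are hypothetical placeholders.
PRICE_PER_GB_EGRESS = 0.09   # assumed USD per GB of outbound traffic

requests_per_month = 10_000_000
avg_response_kb = 50

egress_gb = requests_per_month * avg_response_kb / (1024 * 1024)
print(f"~{egress_gb:,.0f} GB out/month -> ~${egress_gb * PRICE_PER_GB_EGRESS:,.2f}")
```

The point is that whoever generates the traffic effectively sets your bill, which is why controlling access matters.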

You are responsible for controlling client access to your resources.

This means deploying authentication, throttling, or other technologies, but you choose how and with which products.

Cloud services are much like building a house. Home Depot does not give you unlimited nails and lumber. You purchase what you need to build your house. You then purchase fuel to heat your home.

I have been working in the cloud since day zero, and before that in private data centers. Your concern is possible, but in the real world it does not happen often enough to stop most of us from deploying in the cloud. In order to consume your bandwidth, an attacker has to consume their own available network bandwidth. If a hacker wants to take you down, there are many more painful methods they can deploy that are a lot cheaper for them and that make it much harder to track who and where they are.


On the server side, the ways to avoid this are:

  • Require authentication for requests. IOW, make it such that you can track down who is making such attacks, and make such attacks a bannable offense. Then properly lock down your account creation so that it’s not easy to get new credentials after you get banned.
  • Implement rate limiting. Usually this is done in multiple stages: per authenticated account (so every user has a cap), per API endpoint (so computationally expensive endpoints that aren't called very often can't easily be abused for DoS attacks), and possibly overall per IP (a minimal sketch follows this list). You should be doing this for other reasons anyway, most notably to protect against smart DoS attacks and the possibility of timing-based attacks on the game logic.
  • Utilize good compression in the protocol. For your stated response size, Brotli is probably worth looking into (or possibly Snappy or LZ4); it should be fast enough to meet your performance needs while still shaving a few kB off each response, and at millions of requests per month that is a nontrivial amount of data saved (see the example below).
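
A minimal sketch of the per-account stage of rate limiting, using an in-memory token bucket. The capacity, refill rate, and the `authenticate`/`error` helpers in the usage comment are illustrative assumptions, not part of the original advice:

```python
import time
from collections import defaultdict

# Illustrative limits; tune per endpoint and per deployment.
CAPACITY = 20          # burst size in requests
REFILL_PER_SEC = 5.0   # sustained requests per second

_buckets = defaultdict(lambda: {"tokens": float(CAPACITY), "last": time.monotonic()})

def allow_request(account_id: str) -> bool:
    """Token-bucket check: True if this account may make a request right now."""
    bucket = _buckets[account_id]
    now = time.monotonic()
    # Refill tokens for the elapsed time, capped at the bucket capacity.
    bucket["tokens"] = min(CAPACITY, bucket["tokens"] + (now - bucket["last"]) * REFILL_PER_SEC)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False

# Rough request-handler flow (authenticate() and error() are hypothetical helpers):
# account = authenticate(request)          # reject or ban unauthenticated callers
# if not allow_request(account.id):
#     return error(429, "Too Many Requests")
```

The same bucket structure keyed by endpoint name or client IP covers the other stages; for anything beyond a single process you would typically back it with a shared store such as Redis rather than a local dict.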
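And a quick look at what Brotli buys you, using the `brotli` Python bindings (`pip install brotli`); the payload here is made up purely to demonstrate the call:

```python
import json
import brotli  # pip install brotli

# Fabricated response payload, only to show the size difference.
payload = json.dumps(
    {"players": [{"id": i, "score": i * 7, "zone": "spawn"} for i in range(200)]}
).encode()

# Quality 4-6 is a common speed/size trade-off for dynamic responses;
# quality 11 compresses best but is usually too slow for per-request use.
compressed = brotli.compress(payload, quality=5)

print(f"raw: {len(payload)} bytes, brotli: {len(compressed)} bytes")
# Remember to send Content-Encoding: br and respect the client's Accept-Encoding.
```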

As far as mitigating it elsewhere, that’s not as easy to do. Full cloud providers like the ones you are talking about often provide a minimal ‘free tier’ so that developers can experiment without it costing an arm and a leg; I think for AWS it’s 5 GB per month outbound (they don’t charge for inbound data at all), for example. That’s as close as you can get in a full cloud setup to the block-style usage you are talking about, because most users benefit more from having exact usage prorated than from buying it in ‘chunks’.

If you really want a big chunk, though, look into AWS Lightsail. There you buy a package deal: a virtual node with a fixed number of CPUs, a fixed amount of RAM and integrated storage, and a specified network usage cap. Network usage below that cap in a billing cycle costs nothing extra, while usage above it is prorated just like in a full cloud setup. I do not know whether GCP has an equivalent, but most VPS providers (such as Vultr, Linode, or Digital Ocean) work the same way for their primary offerings, and I would actually suggest looking there instead if you go this route. The network caps on these are usually in the multi-terabyte range, and the norm is that only the direction with higher usage is counted for billing.
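
To make the billing difference concrete, here is a tiny sketch of the bundle-plus-overage model described above; the bundle price, included cap, and overage rate are hypothetical numbers, not any provider's actual rates:

```python
# Bundle-style billing (Lightsail/VPS): traffic under the included cap costs
# nothing extra, traffic above it is prorated. All numbers are hypothetical.
BUNDLE_PRICE = 10.00      # USD per month for the node itself
INCLUDED_GB = 2 * 1024    # 2 TB network cap included in the bundle
OVERAGE_PER_GB = 0.01     # assumed prorated rate above the cap

def monthly_bill(egress_gb: float) -> float:
    overage = max(0.0, egress_gb - INCLUDED_GB)
    return BUNDLE_PRICE + overage * OVERAGE_PER_GB

print(monthly_bill(500))    # well under the cap: you pay only the bundle price
print(monthly_bill(3000))   # ~1 TB over the cap: bundle price plus prorated overage
```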