S3 has no hierarchical namespace. Each bucket simply contains a flat set of mappings from key to object (along with associated metadata, ACLs and so on).

Even though your object's key might contain a '/', S3 treats the path as a plain string and puts all objects in a flat namespace.
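A minimal sketch of what this means in practice — the bucket contents and helper below are made up for illustration, mimicking how S3's LIST `Prefix`/`Delimiter` parameters create the *illusion* of folders over a flat key space:

```python
# A flat bucket: every object lives at a plain string key; the '/'
# characters have no special meaning to the store itself.
bucket = {
    "images/cats/1.jpg": b"...",
    "images/dogs/2.jpg": b"...",
    "sounds/meow.mp3": b"...",
}

def list_keys(bucket, prefix="", delimiter=None):
    """Mimic the Prefix/Delimiter grouping performed by S3 LIST requests."""
    keys, common_prefixes = [], set()
    for key in sorted(bucket):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # Everything up to and including the first delimiter is
            # reported as a "common prefix" -- the folder illusion.
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            keys.append(key)
    return keys, sorted(common_prefixes)

keys, prefixes = list_keys(bucket, prefix="images/", delimiter="/")
# prefixes -> ['images/cats/', 'images/dogs/'], keys -> []
```

Without a delimiter, the same call simply returns every matching key — which is all a "recursive listing" really is in a flat namespace.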

In my experience, LIST operations do take longer (roughly linearly) as object count increases, but that's probably just a symptom of the increased I/O required on Amazon's servers and down the wire to your client.

However, lookup times do not seem to increase with object count — most probably some sort of O(1) hashtable implementation on their end — so keeping many objects in one bucket should be just as performant as spreading them over many small buckets for normal usage (i.e. anything other than LISTs).

As for ACLs, grants can be set on the bucket and on each individual object. Since there is no hierarchy, those are your only two options. Obviously, setting grants at the bucket level wherever possible will massively reduce your admin headaches if you have millions of files, but remember that you can only grant permissions, not revoke them, so the bucket-wide grants should be the maximal common subset of the ACLs of all the bucket's contents.
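The "maximal common subset" idea can be sketched as a set intersection — the grantees and permissions below are hypothetical, and real S3 ACLs are richer than plain (grantee, permission) pairs:

```python
# Hypothetical per-object grants, modelled as (grantee, permission) pairs.
object_acls = {
    "a.jpg": {("everyone", "READ"), ("alice", "FULL_CONTROL")},
    "b.jpg": {("everyone", "READ"), ("bob", "WRITE")},
}

# Grants common to every object can safely be promoted to the bucket ACL,
# because S3 ACLs can only grant access, never deny it.
bucket_grants = set.intersection(*object_acls.values())
# -> {("everyone", "READ")}

# Whatever remains must stay on the individual objects.
leftover = {key: grants - bucket_grants for key, grants in object_acls.items()}
```

If that intersection is large, a single bucket with a generous bucket ACL works well; if it's empty, that's a hint the content probably belongs in separate buckets.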

I'd recommend splitting into separate buckets for:

  • totally different content - having separate buckets for images, sound and other data makes for a more sane architecture
  • significantly different ACLs - if the choice is between one bucket where every object needs its own specific ACL, or two buckets with different bucket-level ACLs and no object-specific ones, take the two buckets.

The answer to the original question ("Max files per directory in S3") is: unlimited. See also S3 limit to objects in a bucket.