Max files per directory in S3

Tags: performance, file, amazon-s3, limit, directory

Performance Problem Overview


If I had a million images, would it be better to store them in some folder/sub-folder hierarchy or just dump them all straight into a bucket (without any folders)?

Would dumping all the images into a hierarchy-less bucket slow down LIST operations?

Is there a significant overhead in creating folders and sub-folders on the fly and setting up their ACLs (programmatically speaking)?

Performance Solutions


Solution 1 - Performance

S3 doesn't have a hierarchical namespace. Each bucket simply contains a number of mappings from key to object (along with associated metadata, ACLs and so on).

Even though your object's key might contain a '/', S3 treats the path as a plain string and puts all objects in a flat namespace.
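Because the namespace is flat, a "folder" is just a shared key prefix, and a LIST request with a delimiter groups keys into pseudo-folders on the fly. A minimal sketch of that grouping behavior (the keys are made-up examples, and this mimics S3's CommonPrefixes logic rather than calling the real API):

```python
# Hypothetical object keys in one bucket -- the '/' characters are just
# part of each key string; S3 stores no actual folder objects.
keys = [
    "images/2021/cat.jpg",
    "images/2021/dog.jpg",
    "images/2022/bird.jpg",
    "sounds/meow.mp3",
]

def list_common_prefixes(keys, prefix="", delimiter="/"):
    """Mimic how a LIST request with a delimiter groups flat keys
    into pseudo-folders (what S3 returns as CommonPrefixes)."""
    prefixes = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
    return sorted(prefixes)

print(list_common_prefixes(keys))                    # ['images/', 'sounds/']
print(list_common_prefixes(keys, prefix="images/"))  # ['images/2021/', 'images/2022/']
```

The same idea is what the real ListObjectsV2 API does when you pass `Prefix` and `Delimiter` parameters: the hierarchy is computed from the key strings at list time, not stored anywhere.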

In my experience, LIST operations take linearly longer as object count increases, but this is probably a symptom of the increased I/O required on Amazon's servers and down the wire to your client.

However, lookup times do not seem to increase with object count - most probably some sort of O(1) hashtable implementation on their end - so a bucket with many objects should perform just as well as a small bucket for normal usage (i.e. anything other than LISTs).

As for the ACL, grants can be set on the bucket and on each individual object. As there is no hierarchy, those are your only two options. Obviously, setting as many grants as possible at the bucket level will massively reduce your admin headaches if you have millions of files, but remember you can only grant permissions, not revoke them, so the bucket-wide grants should be the maximal subset of the ACLs of all its contents.
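In other words, a safe bucket-level ACL is the intersection of every object's required grants, with anything beyond that intersection applied per object. A minimal sketch of that reasoning (the grant tuples are illustrative data, not a real S3 API structure):

```python
# Illustrative per-object grant requirements as (grantee, permission)
# pairs -- hypothetical data, not the real S3 ACL format.
required_grants = {
    "reports/q1.pdf": {("AllUsers", "READ"), ("Admins", "FULL_CONTROL")},
    "reports/q2.pdf": {("AllUsers", "READ"), ("Admins", "FULL_CONTROL")},
    "secret/key.txt": {("Admins", "FULL_CONTROL")},
}

# The bucket-wide ACL may hold only the grants *every* object needs,
# because bucket grants add permissions and can never revoke them.
bucket_grants = set.intersection(*required_grants.values())

# Each object then carries only its extra, object-specific grants.
object_grants = {key: grants - bucket_grants
                 for key, grants in required_grants.items()}

print(bucket_grants)  # {('Admins', 'FULL_CONTROL')}
```

Here `secret/key.txt` is what drags the bucket-wide ACL down to admin-only; the public-read grant has to be repeated on each report object, which is exactly the admin headache the answer warns about.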

I'd recommend splitting into separate buckets for:

  • totally different content - having separate buckets for images, sound and other data makes for a more sane architecture
  • significantly different ACLs - if the choice is between one bucket where each object needs its own specific ACL, or two buckets with different bucket-level ACLs and no object-specific ACLs, take the two buckets.

Solution 2 - Performance

The answer to the original question "Max files per directory in S3" is: UNLIMITED. See also https://stackoverflow.com/questions/3980968/s3-limit-to-objects-in-a-bucket.

Solution 3 - Performance

I use a directory structure with a root and then at least one sub-directory. I often use "document import date" as the directory under the root. This can make managing backups a little easier. Whatever file system you are using, you're bound to hit a file count limit (a practical if not a physical limit) eventually. You might think about supporting multiple roots as well.
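The import-date scheme above reduces to simple key construction. A minimal sketch (the root and file names are illustrative):

```python
from datetime import date

def import_key(root: str, filename: str, imported: date) -> str:
    """Build an S3 key with an import-date 'sub-directory' under a root.

    The date prefix also makes it easy to list or back up one day's
    imports with a single prefix query.
    """
    return f"{root}/{imported.isoformat()}/{filename}"

key = import_key("documents", "invoice-123.pdf", date(2024, 5, 1))
print(key)  # documents/2024-05-01/invoice-123.pdf
```

Since S3 has no real directories, "multiple roots" just means using more than one top-level prefix (or more than one bucket); no setup call is needed before writing the first object under a new root.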

Attributions

All content for this solution is sourced from the original question on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stack Overflow
Question | Nikhil Gupte | View Question on Stack Overflow
Solution 1 - Performance | James Brady | View Answer on Stack Overflow
Solution 2 - Performance | Vacilando | View Answer on Stack Overflow
Solution 3 - Performance | Jim Blizard | View Answer on Stack Overflow