S3 - What Exactly Is a Prefix? And What Rate Limits Apply?

Amazon Web Services · Amazon S3

Amazon Web-Services Problem Overview


I was wondering if anyone knew what exactly an S3 prefix is and how it interacts with Amazon's published S3 rate limits:

> Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket.

While that's really clear, I'm still not quite certain what a prefix actually is.

Does a prefix require a delimiter?

If we have a bucket where we store all files at the "root" level (completely flat, without any prefixes/delimiters), does that count as a single "prefix", and is it subject to the rate limits posted above?

The way I'm interpreting Amazon's documentation suggests to me that this IS the case, and that the flat structure would be considered a single "prefix" (i.e. it would be subject to the published rate limits above).

> Suppose that your bucket (admin-created) has four objects with the following object keys:
>
> Development/Projects1.xls
>
> Finance/statement1.pdf
>
> Private/taxdocument.pdf
>
> s3-dg.pdf
>
> The s3-dg.pdf key does not have a prefix, so its object appears directly at the root level of the bucket. If you open the Development/ folder, you see the Projects.xlsx object in it.

In the above example, would s3-dg.pdf be subject to a different rate limit (5,500 GET requests/second) than each of the other prefixes (Development/, Finance/, Private/)?


What's more confusing is that I've read a couple of blog posts about Amazon using the first N bytes as a partition key and encouraging the use of high-cardinality prefixes; I'm just not sure how that interacts with a bucket with a "flat file structure".

Amazon Web-Services Solutions


Solution 1 - Amazon Web-Services

You're right, the announcement seems to contradict itself. It's just not written properly, but the information is correct. In short:

  1. Each prefix can achieve up to 3,500/5,500 requests per second, so for many purposes, the assumption is that you wouldn't need to use several prefixes.
  2. Prefixes are considered to be the whole path (up to the last '/') of an object's location, and are no longer hashed only by the first 6-8 characters. Therefore it would be enough to just split the data between any two "folders" to achieve 2x the maximum requests per second (provided requests are divided evenly between the two).

For reference, here is a response from AWS support to my clarification request:

> Hello Oren,
>
> Thank you for contacting AWS Support.
>
> I understand that you read AWS post on S3 request rate performance being increased and you have additional questions regarding this announcement.
>
> Before this upgrade, S3 supported 100 PUT/LIST/DELETE requests per second and 300 GET requests per second. To achieve higher performance, a random hash / prefix schema had to be implemented. Since last year the request rate limits increased to 3,500 PUT/POST/DELETE and 5,500 GET requests per second. This increase is often enough for applications to mitigate 503 SlowDown errors without having to randomize prefixes.
>
> However, if the new limits are not sufficient, prefixes would need to be used. A prefix has no fixed number of characters. It is any string between a bucket name and an object name, for example:
>
> - bucket/folder1/sub1/file
> - bucket/folder1/sub2/file
> - bucket/1/file
> - bucket/2/file
>
> Prefixes of the object 'file' would be: /folder1/sub1/, /folder1/sub2/, /1/, /2/. In this example, if you spread reads across all four prefixes evenly, you can achieve 22,000 requests per second.
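As a rough sketch of the idea above (not part of the original answer), the snippet below assumes boto3 and a hypothetical bucket name, and writes/reads the same object name under the four example prefixes from the support reply. Because the published limits apply per prefix, traffic spread evenly across the prefixes can add up as described:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical bucket name

# Hypothetical layout mirroring the support example: the same object name
# written under four different prefixes. Each prefix can independently
# sustain up to 3,500 PUT/POST/DELETE and 5,500 GET requests per second,
# so spreading reads evenly across all four gives ~22,000 GET/s in total.
prefixes = ["folder1/sub1/", "folder1/sub2/", "1/", "2/"]

for prefix in prefixes:
    s3.put_object(Bucket=BUCKET, Key=f"{prefix}file", Body=b"example payload")

# Reads routed to different prefixes are throttled per prefix, not per bucket.
for prefix in prefixes:
    obj = s3.get_object(Bucket=BUCKET, Key=f"{prefix}file")
    print(prefix, obj["ContentLength"])
```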

Solution 2 - Amazon Web-Services

This looks like it is obliquely addressed in an Amazon release announcement:

https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/

> Performance scales per prefix, so you can use as many prefixes as you need in parallel to achieve the required throughput. There are no limits to the number of prefixes.
>
> This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications. This improvement is now available in all AWS Regions. For more information, visit the Amazon S3 Developer Guide.

Solution 3 - Amazon Web-Services

S3 prefixes used to be determined by the first 6-8 characters of the key.

This changed in mid-2018 - see the announcement: https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/

But that is only half the truth. In practice, prefixes (in the old sense) still matter.

S3 is not traditional "storage" - each directory/filename is a separate object in a key/value object store, and the data has to be partitioned/sharded to scale to quadzillions of objects. So yes, this new sharding is kind of "automatic", but not really if you create a new process that writes to it with crazy parallelism across different subdirectories. Until S3 learns the new access pattern, you may run into S3 throttling before it reshards/repartitions the data accordingly.

Learning new access patterns takes time. Repartitioning of the data takes time.

Things did improve in mid-2018 (~10x throughput-wise for a new bucket with no statistics), but it's still not what it could be if the data is partitioned properly. To be fair, this may not apply to you if you don't have a ton of data, or if your access pattern is not hugely parallel (e.g. running a Hadoop/Spark cluster on many TBs of data in S3 with hundreds of tasks accessing the same bucket in parallel).

TL;DR:

"Old prefixes" still matter. Write data to the root of your bucket, and the first-level directory there will determine the "prefix" (make it random, for example).

"New prefixes" do work, but not initially; it takes time for S3 to adapt to the load.

PS. Another approach: you can reach out to your AWS TAM (if you have one) and ask them to pre-partition a new S3 bucket if you expect a ton of data to flood it soon.
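As a minimal sketch of the "random first-level directory" idea above (an illustration, not code from the original answer), one might prepend a small random shard to each key; the shard count and formatting here are arbitrary assumptions:

```python
import secrets

def randomized_key(original_key: str, shards: int = 16) -> str:
    """Prepend a short random first-level 'directory' so writes spread
    across multiple partitions instead of hammering a single prefix.

    The shard has to be stored (or be derivable) somewhere, since you
    need the full key to read the object back later.
    """
    shard = secrets.randbelow(shards)
    return f"{shard:02x}/{original_key}"

# e.g. "0a/logs/2018/09/21/events.json" instead of "logs/2018/09/21/events.json"
print(randomized_key("logs/2018/09/21/events.json"))
```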

Solution 4 - Amazon Web-Services

In order for AWS to handle billions of requests per second, they need to shard the data so they can optimise throughput. To do this they split the data into partitions based on the first 6 to 8 characters of the object key. Remember, S3 is not a hierarchical filesystem; it is only a key-value store, though the key is often used like a file path for organising data: prefix + filename.

Now this is not an issue if you expect less than 100 requests per second, but if you have serious requirements over that then you need to think about naming.

For maximum parallel throughput you should consider how your data is consumed and use the most varying characters at the beginning of your key, or even generate 8 random characters for the first 8 characters of the key.

e.g. assuming the first 6 characters define the partition:

files/user/bob would be bad, as all the objects would sit on one partition, files/.

2018-09-21/files/bob would be almost as bad if only today's data is being read from partition 2018-0, but slightly better if objects from past years are read as well.

bob/users/files would be pretty good if different users are likely to be using the data at the same time, since Bob's objects sit on the bob/us partition. But not so good if Bob is by far the busiest user.

3B6EA902/files/users/bob, where the first part is a random string, would be best for performance but more challenging to reference; such keys would be spread pretty evenly across partitions.

Depending on your data, you need to think of any one point in time, who is reading what, and make sure that the keys start with enough variation to partition appropriately.


For your example, let's assume the partition is taken from the first 6 characters of the key:

for the key Development/Projects1.xls the partition key would be Develo

for the key Finance/statement1.pdf the partition key would be Financ

for the key Private/taxdocument.pdf the partition key would be Privat

for the key s3-dg.pdf the partition key would be s3-dg.
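As a small illustrative sketch (my own assumptions, not part of the original answer), the helpers below print the hypothetical first-6-characters partition key for each example key, and show one common way to generate a high-cardinality but reproducible prefix by hashing the key itself (so the prefix can be recomputed for reads):

```python
import hashlib

def partition_key(object_key: str, length: int = 6) -> str:
    # Hypothetical illustration: the first N characters of the key
    # acting as the partition/shard key, as described above.
    return object_key[:length]

def hashed_key(object_key: str) -> str:
    # Derive a high-cardinality prefix from a hash of the key itself,
    # so it spreads evenly yet stays reproducible.
    digest = hashlib.md5(object_key.encode()).hexdigest()[:8].upper()
    return f"{digest}/{object_key}"

for key in ["Development/Projects1.xls", "Finance/statement1.pdf",
            "Private/taxdocument.pdf", "s3-dg.pdf"]:
    print(partition_key(key), "->", hashed_key(key))
```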

Solution 5 - Amazon Web-Services

The upvoted answer on this was a bit misleading for me. If these are the paths

bucket/folder1/sub1/file
bucket/folder1/sub2/file
bucket/1/file
bucket/2/file

Your prefix for file would actually be:
folder1/sub1/
folder1/sub2/
1/
2/

Please see the docs at https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html - I had issues with the leading '/' when trying to list keys with the Airflow S3Hook.
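For reference, here is a minimal listing sketch (assuming boto3 and a hypothetical bucket name) that illustrates the leading-slash point: the prefix must not start with '/':

```python
import boto3

s3 = boto3.client("s3")

# Note: no leading '/' in the prefix. "folder1/sub1/" matches keys such as
# "folder1/sub1/file"; "/folder1/sub1/" would match nothing.
resp = s3.list_objects_v2(
    Bucket="example-bucket",   # hypothetical bucket name
    Prefix="folder1/sub1/",
)

for obj in resp.get("Contents", []):
    print(obj["Key"])
```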

Solution 6 - Amazon Web-Services

In case you query S3 using Athena, EMR/Hive or Redshift Spectrum, increasing the number of prefixes could mean adding more partitions (as the partition id is part of the prefix). If you use a datetime as (one of) your partition keys, the number of partitions (and prefixes) will automatically grow as new data is added over time, and the total maximum S3 GETs per second will grow as well.
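As a hedged illustration of that point (the bucket and table names are assumptions, not from the original answer), Hive-style date partitions translate naturally into new S3 prefixes as data arrives:

```python
from datetime import date, timedelta

BUCKET = "example-bucket"         # hypothetical bucket name
TABLE_PREFIX = "warehouse/events" # hypothetical table location

def partition_prefix(day: date) -> str:
    # Hive-style partitioning: the partition value becomes part of the
    # S3 prefix, so every new day adds a new prefix (with its own
    # per-prefix request-rate budget) automatically.
    return f"{TABLE_PREFIX}/dt={day.isoformat()}/"

start = date(2018, 9, 21)
for offset in range(3):
    day = start + timedelta(days=offset)
    print(f"s3://{BUCKET}/{partition_prefix(day)}part-0000.parquet")
```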

Solution 7 - Amazon Web-Services

> S3 - What Exactly Is A Prefix?

The S3 documentation was recently updated to better reflect this.

"A prefix is a string of characters at the beginning of the object key name. A prefix can be any length, subject to the maximum length of the object key name (1,024 bytes). "

From - https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html

Note: "You can use another character as a delimiter. There is nothing unique about the slash (/) character, but it is a very common prefix delimiter."

As long as two objects have different prefixes, S3 will provide the documented throughput for each over time.

Update: https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix reflecting the updated definition.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | dm03514 | View Question on Stackoverflow |
| Solution 1 - Amazon Web-Services | Oren | View Answer on Stackoverflow |
| Solution 2 - Amazon Web-Services | dm03514 | View Answer on Stackoverflow |
| Solution 3 - Amazon Web-Services | Tagar | View Answer on Stackoverflow |
| Solution 4 - Amazon Web-Services | Matt D | View Answer on Stackoverflow |
| Solution 5 - Amazon Web-Services | Nico Jordaan | View Answer on Stackoverflow |
| Solution 6 - Amazon Web-Services | Magnus Eriksson | View Answer on Stackoverflow |
| Solution 7 - Amazon Web-Services | Vijay | View Answer on Stackoverflow |