Amazon S3 ACL for read-only and write-once access


Amazon S3 Problem Overview


I'm developing a web application and I currently have the following ACL assigned to the AWS account it uses to access its data:

{
  "Statement": [
    {
      "Sid": "xxxxxxxxx", // don't know if this is supposed to be confidential
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::cdn.crayze.com/*"
      ]
    }
  ]
}

However I'd like to make this a bit more restrictive so that if our AWS credentials were ever compromised, an attacker could not destroy any data.

From the documentation, it looks like I want to allow just the following actions: s3:GetObject and s3:PutObject, but I specifically want the account to only be able to create objects that don't exist already - i.e. a PUT request on an existing object should be denied. Is this possible?
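For reference, the narrower grant (only s3:GetObject and s3:PutObject) could look like the following; the Sid name is illustrative, and as the solutions below explain, IAM offers no condition that restricts s3:PutObject to keys that do not already exist. A sketch, built in Python so the policy document is easy to generate and check:

```python
import json

# Sketch of a least-privilege replacement for the policy above; the Sid
# is illustrative. Note that no IAM condition can restrict s3:PutObject
# to keys that do not already exist.
policy = {
    "Statement": [
        {
            "Sid": "ReadAndWriteOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::cdn.crayze.com/*"],
        }
    ]
}

print(json.dumps(policy, indent=2))
```

The printed JSON can then be attached to the account with, for example, the `aws iam put-user-policy` CLI command.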

Amazon S3 Solutions


Solution 1 - Amazon S3

This is not possible in Amazon S3 in the way you probably envision it; however, you can work around the limitation by Using Versioning, which keeps multiple variants of an object in the same bucket and was developed with use cases like this in mind:

> You might enable versioning to prevent objects from being deleted or overwritten by mistake, or to archive objects so that you can retrieve previous versions of them.

There are a couple of related FAQs as well, for example:

  • What is Versioning? - Versioning allows you to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.

  • Why should I use Versioning? - Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use Versioning for data retention and archiving. [emphasis mine]

  • How does Versioning protect me from accidental deletion of my objects? - When a user performs a DELETE operation on an object, subsequent default requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an Amazon S3 bucket can permanently delete a version. [emphasis mine]

If you are really concerned about the AWS credentials of the bucket owner (who can, of course, be different from the accessing users), you can take this one step further; see How can I ensure maximum protection of my preserved versions?:

> Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security. [...] If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession. [...]
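Versioning is switched on per bucket with a single API call. A sketch, assuming boto3-style calls; the `_FakeS3` stand-in exists only so the example runs without AWS credentials:

```python
def enable_versioning(s3_client, bucket):
    """Turn versioning on for `bucket`. To add MFA Delete (bucket owner
    only), also set "MFADelete": "Enabled" in the configuration and pass
    the MFA argument with the device serial number and current code."""
    s3_client.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# With real credentials you would pass boto3.client("s3"); this tiny
# stand-in just records the call so the sketch runs anywhere.
class _FakeS3:
    def __init__(self):
        self.calls = []
    def put_bucket_versioning(self, **kwargs):
        self.calls.append(kwargs)

fake = _FakeS3()
enable_versioning(fake, "cdn.crayze.com")
print(fake.calls[0]["VersioningConfiguration"])  # {'Status': 'Enabled'}
```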

Solution 2 - Amazon S3

If it is accidental overwrites you are trying to avoid, and your business requirements allow a short window of inconsistency, you can do the rollback in a Lambda function:

  1. Make "no new objects with the same name" a policy. Most of the time it will not be violated. To enforce it:
  2. Listen for S3:PutObject events in an AWS Lambda function.
  3. When the event is fired, check whether more than one version is present.
  4. If there is more than one version present, delete all but the newest one.
  5. Notify the uploader of what happened (it is useful to keep the original uploader in the object's x-amz-meta-* metadata).
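Steps 3 and 4 above can be sketched as the following function, invoked from the Lambda's event handler for each bucket/key in the S3 event; the function name and the `_FakeS3` stand-in are illustrative:

```python
def prune_old_versions(s3_client, bucket, key):
    """Keep only the newest version of `key`, deleting the rest (step 4).
    Needs s3:ListBucketVersions and s3:DeleteObjectVersion permissions."""
    resp = s3_client.list_object_versions(Bucket=bucket, Prefix=key)
    # list_object_versions returns versions newest first; filter to the
    # exact key, since Prefix can also match sibling keys.
    versions = [v for v in resp.get("Versions", []) if v["Key"] == key]
    doomed = versions[1:]
    for v in doomed:
        s3_client.delete_object(Bucket=bucket, Key=key, VersionId=v["VersionId"])
    return [v["VersionId"] for v in doomed]

# Minimal stand-in so the sketch runs without AWS credentials.
class _FakeS3:
    def __init__(self, versions):
        self._versions = versions
        self.deleted = []
    def list_object_versions(self, Bucket, Prefix):
        return {"Versions": self._versions}
    def delete_object(self, Bucket, Key, VersionId):
        self.deleted.append(VersionId)

fake = _FakeS3([
    {"Key": "a.txt", "VersionId": "v3"},  # newest first
    {"Key": "a.txt", "VersionId": "v2"},
    {"Key": "a.txt", "VersionId": "v1"},
])
print(prune_old_versions(fake, "cdn.crayze.com", "a.txt"))  # ['v2', 'v1']
```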

Solution 3 - Amazon S3

You can now lock versions of objects with S3 Object Lock. It's a per-bucket setting, and allows you to place one of two kinds of WORM locks.

  • "retention period" - can't be changed
  • "legal hold" - can be changed by the bucket owner at any time

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html

As mentioned by @Kijana Woodard, this does not prevent the creation of new versions of objects.
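A retention-period lock can be placed on each new object version at upload time via the Object Lock parameters of PutObject. A sketch; the bucket must have Object Lock enabled, and the COMPLIANCE mode and 30-day period here are illustrative choices:

```python
from datetime import datetime, timedelta, timezone

def locked_put_params(bucket, key, body, days=30):
    """Build PutObject parameters that place a retention-period WORM lock
    on the new object version. For a legal hold instead, you would set
    ObjectLockLegalHoldStatus="ON". The helper name and the 30-day
    default are illustrative."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": "COMPLIANCE",  # or "GOVERNANCE"
        "ObjectLockRetainUntilDate": datetime.now(timezone.utc) + timedelta(days=days),
    }

params = locked_put_params("cdn.crayze.com", "report.pdf", b"...")
# With credentials configured: boto3.client("s3").put_object(**params)
print(params["ObjectLockMode"])  # COMPLIANCE
```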

Solution 4 - Amazon S3

Edit: Applicable if you came here from this question.

Object Locks only work in versioned buckets. If you cannot enable versioning for your bucket, the following solution may be appropriate, provided you can tolerate brief inconsistencies: because S3 is only eventually consistent, files may be presumed to exist while a DELETE is still in flight, so in a tight loop a PUT-after-DELETE may fail intermittently or, conversely, successive PUTs may falsely succeed.

Given the object path, issue a HeadObject request to read the object's Content-Length from its metadata. Write the object only if that request succeeds and, where applicable, only if the reported length is greater than zero.
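The core of this is a HeadObject probe before PutObject; the probe can gate the write in either direction (for the overwrite-protection goal of the original question, you would write only when the probe fails). A sketch of that put-if-absent variant, with an offline `_FakeS3` stand-in; note that HEAD-then-PUT is not atomic, so two concurrent writers can still race:

```python
def object_exists(s3_client, bucket, key):
    """HeadObject-based existence check. With boto3, a missing key raises
    botocore.exceptions.ClientError with a 404 code; in production, check
    for 404 specifically and re-raise other errors."""
    try:
        head = s3_client.head_object(Bucket=bucket, Key=key)
    except Exception:  # stand-in for ClientError with a 404
        return False
    return head["ContentLength"] > 0

def put_if_absent(s3_client, bucket, key, body):
    """Refuse to overwrite an existing, non-empty object."""
    if object_exists(s3_client, bucket, key):
        return False
    s3_client.put_object(Bucket=bucket, Key=key, Body=body)
    return True

# Minimal in-memory stand-in so the sketch runs without AWS credentials.
class _FakeS3:
    def __init__(self):
        self.objects = {}
    def head_object(self, Bucket, Key):
        if Key not in self.objects:
            raise KeyError(Key)  # boto3 raises ClientError (404) here
        return {"ContentLength": len(self.objects[Key])}
    def put_object(self, Bucket, Key, Body):
        self.objects[Key] = Body

fake = _FakeS3()
print(put_if_absent(fake, "cdn.crayze.com", "a.txt", b"hello"))  # True
print(put_if_absent(fake, "cdn.crayze.com", "a.txt", b"evil"))   # False
```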

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type            Original Author     Original Content on Stackoverflow
Question                Jake Petroules      View Question on Stackoverflow
Solution 1 - Amazon S3  Steffen Opel        View Answer on Stackoverflow
Solution 2 - Amazon S3  Motiejus Jakštys    View Answer on Stackoverflow
Solution 3 - Amazon S3  Dan Pritts          View Answer on Stackoverflow
Solution 4 - Amazon S3  toasterpic          View Answer on Stackoverflow