Amazon S3 bucket returning 403 Forbidden

Amazon Web-Services, Amazon S3

Amazon Web-Services Problem Overview


I've recently inherited a Rails app that uses S3 for storage of assets. I have transferred all assets to my S3 bucket with no issues. However, when I alter the app to point to the new bucket, I get a 403 Forbidden status.

My S3 bucket is set up with the following settings:

Permissions

Everyone can list

Bucket Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}

CORS Configuration

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>https://www.appdomain.com</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Static Web Hosting

Enabled.

What else can I do to allow the public to reach these assets?

Amazon Web-Services Solutions


Solution 1 - Amazon Web-Services

I know this is an old thread, but I just encountered the same problem. I had everything working for months and it just suddenly stopped, giving me a 403 Forbidden error. It turns out the system clock was the real culprit. I think S3 uses some sort of time-based token that has a very short lifespan, so a skewed clock invalidates requests. In my case I just ran:

ntpdate pool.ntp.org

and the problem went away. I'm running CentOS 6, if that's of any relevance. This was the sample output:

19 Aug 20:57:15 ntpdate[63275]: step time server ip_address offset 438.080758 sec

Hope it helps!
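
If you want to confirm clock skew before resorting to NTP, here is a minimal Python sketch (the endpoint and the 300-second threshold are my own choices, not from the original answer) that compares local UTC time against the Date header S3 returns:

import requests
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# ask an S3 endpoint for its current time; the status code doesn't matter,
# we only care about the Date response header
resp = requests.head("https://s3.amazonaws.com")
server_time = parsedate_to_datetime(resp.headers["Date"])
local_time = datetime.now(timezone.utc)

skew = abs((local_time - server_time).total_seconds())
print(f"clock skew: {skew:.1f} seconds")

# SigV4 rejects requests more than ~15 minutes off, but short-lived
# presigned URLs can break with much smaller offsets
if skew > 300:
    print("skew is large enough to cause 403s; sync the clock (ntpdate/chrony)")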

Solution 2 - Amazon Web-Services

It could also be that a proper policy needs to be set according to the AWS docs.

Give the bucket in question this policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
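
If you prefer to apply that policy from code instead of the console, a boto3 sketch (the bucket name is a placeholder):

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
        }
    ],
}

# put_bucket_policy expects the policy document as a JSON string
s3.put_bucket_policy(Bucket="YOUR-BUCKET-NAME", Policy=json.dumps(policy))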

Solution 3 - Amazon Web-Services

The transfer was done according to this thread, which by itself is not a problem. The issue came from the previous developer not changing permissions on the files before transferring them; this meant I could not manage any of the files, even though they were in my bucket.

The issue was solved by re-downloading the files cleanly from the previous bucket, deleting the old phantom files, re-uploading the fresh files, and setting their permissions to allow public reading.
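
As a sketch, that re-upload step with boto3 might look like the following (paths, bucket, and key are placeholders, and it assumes the bucket's Object Ownership setting still allows ACLs):

import boto3

s3 = boto3.client("s3")

# upload the clean local copy, owned by your account this time,
# and mark it world-readable in the same call
s3.upload_file(
    "local/path/to/asset.png",
    "YOUR-BUCKET-NAME",
    "assets/asset.png",
    ExtraArgs={"ACL": "public-read", "ContentType": "image/png"},
)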

Solution 4 - Amazon Web-Services

I had the same problem; just adding * at the end of the policy's bucket resource solved it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}

Solution 5 - Amazon Web-Services

Here's the Bucket Policy I used to make the index.html file inside my S3 Bucket accessible from the internet (the original answer showed the policy as a console screenshot, not reproduced here).

I also needed to go to Permissions -> "Block Public Access" and remove the block public access rules for the bucket.

Also make sure the access permissions for the individual objects inside the bucket are open to the public.
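
The Block Public Access step can also be done programmatically; a boto3 sketch (the bucket name is a placeholder, and note this deliberately opens the bucket up, so only use it for genuinely public content):

import boto3

s3 = boto3.client("s3")

# disable all four Block Public Access switches so the bucket policy
# and object ACLs can take effect
s3.put_public_access_block(
    Bucket="YOUR-BUCKET-NAME",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)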

Solution 6 - Amazon Web-Services

One weird thing that fixed this for me, after already setting up the correct permissions, was removing the extension from the filename. I had many items in the bucket, all with the same permissions; some worked fine and some returned 403. The only difference was that the ones that didn't work had .png at the end of the filename. When I removed that, they worked fine. No idea why.

Solution 7 - Amazon Web-Services

Another "solution" here: I was using Buddy to automate uploading a github repo to an s3 bucket, which requires programmatic write access to the bucket. The access policy for the IAM user first looked like the following: (Only allowing those 6 actions to be performed in the target bucket).

    {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": ""arn:aws:s3:::<bucket_name>/*"
        }
    ]
}

My bucket access policy was the following (allowing read/write access for the IAM user):

{
  "Version": "2012-10-17",
  "Id": "1234",
  "Statement": [
    {
      "Sid": "5678",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<IAM_user_arn>"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}
However, this kept giving me the 403 error.

My workaround solution was to give the IAM user access to all S3 resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "*"
        }
    ]
}

This got me around the 403 error, although it clearly isn't how things should be configured.
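
A likely explanation, offered here as an assumption rather than something the original answer verified: s3:ListAllMyBuckets is an account-level action and s3:ListBucket is a bucket-level action, so neither ever matches an object resource like arn:aws:s3:::<bucket_name>/*, and any request that needs them is denied. Instead of falling back to "Resource": "*", the policy can be split by resource level. A boto3 sketch that attaches such an inline policy to the IAM user (user name, policy name, and bucket name are placeholders):

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # account-level action: must be granted on all resources
            "Sid": "ListAccountBuckets",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*",
        },
        {
            # bucket-level action: matches the bucket ARN, not bucket/*
            "Sid": "ListTargetBucket",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-deploy-bucket",
        },
        {
            # object-level actions: these are the ones that take bucket/*
            "Sid": "ReadWriteObjects",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl",
            ],
            "Resource": "arn:aws:s3:::my-deploy-bucket/*",
        },
    ],
}

iam.put_user_policy(
    UserName="buddy-deploy-user",
    PolicyName="scoped-s3-deploy",
    PolicyDocument=json.dumps(policy),
)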

Solution 8 - Amazon Web-Services

None of the other answers worked for me: file permissions, bucket policies, and the clock were all fine. In my case the issue was intermittent, and while it may sound trite, the following have both worked for me previously:

  1. Log out and log back in.
  2. If you are trying to upload a single file, try to do a bulk upload. Conversely, if trying to do a bulk upload, try uploading a single file.

Solution 9 - Amazon Web-Services

Just found the same issue on my side in my iPhone app. It was working completely fine on Android with the same configuration and S3 setup, but the iPhone app was throwing an error. I reached out to the Amazon support team about this issue; after checking the logs on their end, they told me my iPhone had the wrong date and time. I went to the settings of my iPhone, adjusted the correct date and time, tried to upload a new image, and it worked as expected.

If you are having the same issue and have the wrong date or time on your iPhone or simulator, this may help you.

Thanks!

Solution 10 - Amazon Web-Services

For me it was the Public access setting under the Access Control tab.

Just ensure the read and write permissions under public access are set to Yes; by default they show "-", which means No.

Happy coding.

JFYI: I am using Flutter for my Android development.
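
The code equivalent of flipping that switch for a single object is an ACL update; a boto3 sketch (bucket and key are placeholders, and it only works while the bucket's Object Ownership setting permits ACLs):

import boto3

s3 = boto3.client("s3")

# grant public read on one existing object via its ACL
s3.put_object_acl(
    Bucket="YOUR-BUCKET-NAME",
    Key="uploads/avatar.png",
    ACL="public-read",
)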

Solution 11 - Amazon Web-Services

Make sure you use the correct AWS profile (dev / prod, etc.)!
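
With boto3, for example, you can pin the profile explicitly instead of relying on whatever AWS_PROFILE happens to be set (the profile name is a placeholder):

import boto3

# use the named profile from ~/.aws/credentials rather than the default
session = boto3.Session(profile_name="dev")
s3 = session.client("s3")

# quick sanity check that you're in the account you think you're in
print([b["Name"] for b in s3.list_buckets()["Buckets"]])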

Solution 12 - Amazon Web-Services

I hit this error when trying to PUT a file to S3 from JavaScript using a URL presigned in Python. Turns out my Python needed the ContentType attribute.

Once I added that, the following worked:

import boto3
import requests

access_key_id = 'AKIA....'
secret_access_key = 'LfNHsQ....'
bucket = 'images-dev'
filename = 'pretty.png'

s3_client = boto3.client(
  's3',
  aws_access_key_id=access_key_id,
  aws_secret_access_key=secret_access_key
)

# sign url
response = s3_client.generate_presigned_url(
  ClientMethod = 'put_object',
  Params = {
    'Bucket': bucket,
    'Key': filename,
    'ContentType': 'image/png',
  }
)

print(" * GOT URL", response)

# NB: to run the PUT from Python with requests, the Content-Type header must
# match the ContentType the URL was signed with (or drop ContentType above):
# r = requests.put(response, data=open(filename, 'rb'),
#                  headers={'Content-Type': 'image/png'})
# print(r.status_code)

Then one can PUT that image to S3 using that url from the client:

var xhr = new XMLHttpRequest();
xhr.open('PUT', url);
// note: the browser sends the file's own MIME type as Content-Type,
// which must match the ContentType the URL was signed with (image/png here)
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4) {
    if (xhr.status !== 200) {
      console.log('Could not upload file.');
    }
  }
};

xhr.send(file);

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: thatgibbyguy (View Question on Stackoverflow)
Solution 1: Sthe (View Answer on Stackoverflow)
Solution 2: Da Rod (View Answer on Stackoverflow)
Solution 3: thatgibbyguy (View Answer on Stackoverflow)
Solution 4: user3470929 (View Answer on Stackoverflow)
Solution 5: Gene (View Answer on Stackoverflow)
Solution 6: andrewcockerham (View Answer on Stackoverflow)
Solution 7: unie (View Answer on Stackoverflow)
Solution 8: entpnerd (View Answer on Stackoverflow)
Solution 9: Jignesh Mayani (View Answer on Stackoverflow)
Solution 10: The Billionaire Guy (View Answer on Stackoverflow)
Solution 11: Yitzchak (View Answer on Stackoverflow)
Solution 12: duhaime (View Answer on Stackoverflow)