nginx upload client_max_body_size issue

Tags: Http, File Upload, Nginx

Http Problem Overview


I'm running nginx/ruby-on-rails and I have a simple multipart form to upload files. Everything works fine until I decide to restrict the maximum size of files I want uploaded. To do that, I set the nginx client_max_body_size to 1m (1MB) and expect an HTTP 413 (Request Entity Too Large) status in response when that limit is exceeded.

The problem is that when I upload a 1.2 MB file, instead of displaying the HTTP 413 error page, the browser hangs a bit and then dies with a "Connection was reset while the page was loading" message.

I've tried just about every option nginx offers, but nothing seems to work. Does anyone have any ideas about this?

Here's my nginx.conf:

worker_processes  1;
timer_resolution  1000ms;
events {
    worker_connections  1024;
}

http {
    passenger_root /the_passenger_root;
    passenger_ruby /the_ruby;

    include       mime.types;
    default_type  application/octet-stream;

    sendfile           on;
    keepalive_timeout  65;

    server {
      listen 80;
      server_name www.x.com;
      client_max_body_size 1M;    # uploads above 1 MB should trigger a 413
      passenger_use_global_queue on;
      root /the_root;
      passenger_enabled on;

      error_page 404 /404.html;
      error_page 413 /413.html;   # custom page for Request Entity Too Large
    }
}

Thanks.


**Edit**

Environment/UA: Windows XP/Firefox 3.6.13

Http Solutions


Solution 1 - Http

nginx "fails fast" when the client informs it that it's going to send a body larger than the client_max_body_size by sending a 413 response and closing the connection.

Most clients don't read the response until the entire request body has been sent. Because nginx has already closed the connection, the client keeps writing to the closed socket and gets a TCP RST, which is why the browser reports a connection reset instead of showing the 413 page.

If your HTTP client supports it, the best way to handle this is to send an Expect: 100-continue header. nginx supports this correctly as of 1.2.7, and will reply with a 413 Request Entity Too Large response rather than 100 Continue if the Content-Length exceeds the maximum body size.
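
To see the difference this makes, here is a minimal sketch in Python that announces an oversized body with Expect: 100-continue and prints the server's verdict before any body bytes are sent. The host comes from the question's config; the /uploads path is a placeholder.

import socket

# Probe how the server answers an oversized upload announced with
# Expect: 100-continue. No body bytes are transmitted during the probe.
HOST, PORT = "www.x.com", 80
CONTENT_LENGTH = 2 * 1024 * 1024  # 2 MB, over the 1 MB limit

request = (
    "POST /uploads HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Content-Type: application/octet-stream\r\n"
    f"Content-Length: {CONTENT_LENGTH}\r\n"
    "Expect: 100-continue\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    # With 100-continue the server replies before the body is sent:
    # "HTTP/1.1 100 Continue" means go ahead; a final "HTTP/1.1 413"
    # means the upload would be rejected. A real client would only
    # start streaming the body after seeing the 100 interim response.
    reply = sock.recv(4096).decode("latin-1", errors="replace")
    print(reply.splitlines()[0] if reply else "no response")

Per the note above, this requires nginx 1.2.7 or newer to behave reliably.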

Solution 2 - Http

Does your upload die at the very end, at maybe 99% complete? Client body and buffer sizes are key, because nginx must buffer incoming data. The client_body_* directives control how nginx handles the bulk flow of binary data from multipart-form clients into your app's logic.

The clean value keeps memory consumption bounded by instructing nginx to buffer the incoming request body in a temporary file and then delete that file from disk once the request has been processed.

Set client_body_in_file_only to clean and adjust the buffers to suit your client_max_body_size. The original question's config already has sendfile on; increase the timeouts too. I use the settings below to fix this; they are appropriate in the http, server, and location contexts.

client_body_in_file_only clean;   # spool request bodies to temp files, delete them afterwards
client_body_buffer_size 32K;      # in-memory buffer before spilling to disk

client_max_body_size 300M;        # hard cap on the request body size

sendfile on;
send_timeout 300s;                # give slow clients time to finish transmitting
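
For context, here is a minimal sketch of where these directives can live: the http block applies them server-wide, and a server or location block can override them for a specific path (the /uploads location here is a placeholder).

http {
    client_body_in_file_only clean;
    client_body_buffer_size  32K;
    client_max_body_size     300M;

    sendfile     on;
    send_timeout 300s;

    server {
        listen 80;

        # A narrower scope can tighten or loosen the limit locally.
        location /uploads {
            client_max_body_size 1G;
        }
    }
}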

Solution 3 - Http

From the documentation:

> It is necessary to keep in mind that the browsers do not know how to correctly show this error.

I suspect this is what's happening. If you inspect the HTTP to-and-fro using tools such as Firebug or Live HTTP Headers (both Firefox extensions), you'll be able to see what's really going on.

Attributions

All content on this page is sourced from the original question and its answers on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | krukid | View Question on Stackoverflow
Solution 1 - Http | Joe Shaw | View Answer on Stackoverflow
Solution 2 - Http | Bent Cardan | View Answer on Stackoverflow
Solution 3 - Http | ZoFreX | View Answer on Stackoverflow