Maximum on HTTP header values?

Http, Http Headers

Http Problem Overview


Is there an accepted maximum allowed size for HTTP headers? If so, what is it? If not, is this something that's server specific or is the accepted standard to allow headers of any size?

Http Solutions


Solution 1 - Http

No, HTTP does not define any limit. However, most web servers do limit the size of headers they accept. For example, the default limit in Apache is 8 KB, and in IIS it is 16 KB. The server will return a 413 Entity Too Large error (or, on newer servers, the more specific 431 Request Header Fields Too Large from RFC 6585) if the header size exceeds that limit.
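
If you need to raise or lower Apache's limit, it is a one-line configuration change using the LimitRequestFieldSize and LimitRequestFields directives. A minimal sketch for httpd.conf; the values shown are illustrative, not recommendations:

httpd.conf:

# Maximum size of a single request header field, in bytes (Apache's default is 8190)
LimitRequestFieldSize 16380
# Maximum number of request header fields (default 100)
LimitRequestFields 100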

Related question: https://stackoverflow.com/questions/654921/how-big-can-a-user-agent-string-get

Solution 2 - Http

As vartec says above, the HTTP spec does not define a limit; however, many servers do by default. This means that, practically speaking, the lower limit is 8K. For most servers, this limit applies to the sum of the request line and ALL header fields (so keep your cookies short).

It's worth noting that nginx uses the system page size by default, which is 4K on most systems. You can check with this tiny program:

pagesize.c:

#include <unistd.h>
#include <stdio.h>

int main() {
    /* getpagesize() reports the kernel's memory page size in bytes */
    int pageSize = getpagesize();
    printf("Page size on your system = %i bytes\n", pageSize);
    return 0;
}

Compile with gcc -o pagesize pagesize.c, then run ./pagesize. My Ubuntu server from Linode dutifully informs me the answer is 4K.
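
If the page-size default turns out to be too small, nginx's standard header-buffer directives can raise it. A minimal sketch for the http block of nginx.conf; the values are illustrative, and the documented defaults (1k for the small buffer, 4 buffers of 8k for the large ones) can vary by build:

nginx.conf:

http {
    # buffer for the common case: request line plus small headers (default 1k)
    client_header_buffer_size 4k;
    # count and size of fallback buffers used for large headers (default 4 8k)
    large_client_header_buffers 4 16k;
}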

Solution 3 - Http

Here are the default header size limits of the most popular web servers:

  • Apache - 8K
  • Nginx - 4K-8K
  • IIS - 8K-16K
  • Tomcat - 8K-48K
  • Node (<13) - 8K; (13+) - 16K (adjustable at startup, as shown below)
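
For Node, the default can be overridden per process with the --max-http-header-size flag (available since Node 11.6). A minimal invocation sketch; server.js here is a stand-in for your own entry point:

node --max-http-header-size=32768 server.js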

Solution 4 - Http

> HTTP does not place a predefined limit on the length of each header field or on the length of the header section as a whole, as described in Section 2.5. Various ad hoc limitations on individual header field length are found in practice, often depending on the specific field semantics.

HTTP header values are restricted by server implementations; the HTTP specification itself does not restrict header size.

> A server that receives a request header field, or set of fields, larger than it wishes to process MUST respond with an appropriate 4xx (Client Error) status code. Ignoring such header fields would increase the server's vulnerability to request smuggling attacks (Section 9.5).

Most servers will return 413 Entity Too Large or another appropriate 4xx error when this happens.
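
To see which status code your own server returns, you can send a request carrying one deliberately oversized header and read back the status line. A minimal sketch in C, assuming a POSIX system; the host, port, and 16 KB padding size are placeholders:

bigheader.c:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(int argc, char **argv) {
    const char *host = argc > 1 ? argv[1] : "localhost";
    const char *port = argc > 2 ? argv[2] : "80";
    size_t pad = 16384;  /* one header bigger than most default limits */

    /* Resolve the target and open a TCP connection */
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    int rc = getaddrinfo(host, port, &hints, &res);
    if (rc != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc)); return 1; }
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) { perror("connect"); return 1; }
    freeaddrinfo(res);

    /* Send a request whose X-Big header exceeds the suspected limit */
    char *filler = malloc(pad + 1);
    memset(filler, 'a', pad);
    filler[pad] = '\0';
    dprintf(fd, "GET / HTTP/1.1\r\nHost: %s\r\nX-Big: %s\r\nConnection: close\r\n\r\n",
            host, filler);

    /* Print the status line: 200 if accepted, 400/413/431 if rejected */
    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        buf[strcspn(buf, "\r\n")] = '\0';
        printf("%s\n", buf);
    }
    free(filler);
    close(fd);
    return 0;
}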

> A client MAY discard or truncate received header fields that are larger than the client wishes to process if the field semantics are such that the dropped value(s) can be safely ignored without changing the message framing or response semantics.

An uncapped HTTP header size leaves the server exposed to attacks and can reduce its capacity to serve organic traffic.

Source: RFC 7230, Section 3.2.5 (Field Limits)

Solution 5 - Http

RFC 6265, dated 2011, prescribes specific limits on cookies.

https://www.rfc-editor.org/rfc/rfc6265, Section 6.1 (Limits):

> Practical user agent implementations have limits on the number and size of cookies that they can store. General-use user agents SHOULD provide each of the following minimum capabilities:

> o At least 4096 bytes per cookie (as measured by the sum of the length of the cookie's name, value, and attributes).

> o At least 50 cookies per domain.

> o At least 3000 cookies total.

> Servers SHOULD use as few and as small cookies as possible to avoid reaching these implementation limits and to minimize network bandwidth due to the Cookie header being included in every request.

> Servers SHOULD gracefully degrade if the user agent fails to return one or more cookies in the Cookie header because the user agent might evict any cookie at any time on orders from the user.

--

The RFC's intended audience is implementers: what a user agent or a server must support. It appears that to tune your server to accept everything a compliant browser may send, you would need to configure a limit of 4096 × 50 = 204,800 bytes (about 200 KB) of cookies per domain. As the quoted text suggests, this is far in excess of what a typical web application needs. It would be useful to take the current limit and the RFC's upper limit and compare the memory and I/O consequences of the higher configuration.

Solution 6 - Http

I also found that in some cases the reason for a 502/400 response is not the size of the headers but their number: a large count of headers can trigger the rejection regardless of size. From the HAProxy docs:

> tune.http.maxhdr Sets the maximum number of headers in a request. When a request comes with a number of headers greater than this value (including the first line), it is rejected with a "400 Bad Request" status code. Similarly, too large responses are blocked with "502 Bad Gateway". The default value is 101, which is enough for all usages, considering that the widely deployed Apache server uses the same limit. It can be useful to push this limit further to temporarily allow a buggy application to work by the time it gets fixed. Keep in mind that each new header consumes 32bits of memory for each session, so don't push this limit too high.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.http.maxhdr
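
In HAProxy this is a single keyword in the global section. A minimal haproxy.cfg sketch; 202 is an illustrative value (double the default), not a recommendation:

haproxy.cfg:

global
    # allow up to 202 headers per request/response (default 101)
    tune.http.maxhdr 202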

Solution 7 - Http

If you are going to use a DDoS-protection provider like Akamai, be aware that they impose a maximum of 8K on the response header size. So, essentially, try to keep your response headers below 8K.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type         Original Author     Original Content on Stackoverflow
Question             Cory                View Question on Stackoverflow
Solution 1 - Http    vartec              View Answer on Stackoverflow
Solution 2 - Http    David Schoonover    View Answer on Stackoverflow
Solution 3 - Http    Sarath Ak           View Answer on Stackoverflow
Solution 4 - Http    realPK              View Answer on Stackoverflow
Solution 5 - Http    Ajay Sindwani       View Answer on Stackoverflow
Solution 6 - Http    Shay Rybak          View Answer on Stackoverflow
Solution 7 - Http    vsingh              View Answer on Stackoverflow