400 Bad Request - request header or cookie too large
I am getting a "400 Bad Request - request header or cookie too large" error from nginx with my Rails app. Restarting the browser fixes the issue. I am only storing a string ID in my cookie, so it should be tiny.
Where can I find the nginx error logs? I looked at /opt/nginx/logs/error.log (via nano), but it doesn't have anything related.
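One thing I notice is that every error_log line in my nginx.conf below is commented out, so nginx should be falling back to its compiled-in default. As far as I can tell, client-side problems like an oversized header line are logged at the info level, so they may not be reaching the log at all. A minimal sketch of what I could set, assuming my /opt/nginx prefix:

# top of nginx.conf -- log at info level so client errors such as
# "client sent too long header line" get recorded
error_log  /opt/nginx/logs/error.log  info;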
I tried setting the following, with no luck:
location / {
    large_client_header_buffers 4 32k;
    proxy_buffer_size 32k;
}
nginx.conf
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    passenger_root /home/app/.rvm/gems/ruby-1.9.3-p392/gems/passenger-3.0.19;
    passenger_ruby /home/app/.rvm/wrappers/ruby-1.9.3-p392/ruby;

    include       mime.types;
    default_type  application/octet-stream;

    sendfile           on;
    keepalive_timeout  65;
    client_max_body_size 20M;

    server {
        listen       80;
        server_name  localhost;
        root         /home/app/myapp/current/public;
        passenger_enabled on;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        # location / {
        #     large_client_header_buffers 4 32k;
        #     proxy_buffer_size 32k;
        # }

        # location / {
        #     root   html;
        #     index  index.html index.htm;
        #     client_max_body_size 4M;
        #     client_body_buffer_size 128k;
        # }

        #error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root  html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass  http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443;
    #    server_name  localhost;
    #
    #    ssl                  on;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #
    #    ssl_session_timeout  5m;
    #
    #    ssl_protocols  SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers    HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
Here's my code that stores the cookie, and a screenshot of the cookies in Firebug. When I used Firebug to inspect the stored session, I found that New Relic and jQuery are storing cookies too; could this be why the cookie size is exceeded?
def current_company
  return if current_user.nil?
  session[:current_company_id] = current_user.companies.first.id if session[:current_company_id].blank?
  @current_company ||= Company.find(session[:current_company_id])
end
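To find out how big the request's Cookie header actually gets, I imagine something like this in ApplicationController would show it (a sketch; before_filter is the Rails 3-era hook, and 8k is nginx's default header buffer size):

class ApplicationController < ActionController::Base
  # Log the raw Cookie header size on every request so it can be
  # compared against nginx's default 8k header buffer.
  before_filter :log_cookie_header_size

  private

  def log_cookie_header_size
    raw = request.headers["Cookie"].to_s
    Rails.logger.info "Cookie header: #{raw.bytesize} bytes"
  end
end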
It's just what the error says: Request Header Or Cookie Too Large. One of your headers is really big, and nginx is rejecting it.
You're on the right track with large_client_header_buffers. If you check the docs, you'll find it's only valid in the http or server contexts. Bump it up to a server block and it will work.
server {
    # ...
    large_client_header_buffers 4 32k;
    # ...
}
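(After changing it, reload nginx, e.g. with nginx -s reload, so the new buffer sizes take effect.)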
By the way, the default buffer number and size are 4 and 8k, so your bad header must be the one that's over 8192 bytes. In your case, all those cookies (which combine into one header) are well over the limit. Those Mixpanel cookies in particular get quite large.
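Since the directive is also valid at the http level, you can raise the limit for every virtual host at once instead; an equivalent sketch:

http {
    # applies to every server block below
    large_client_header_buffers 4 32k;
    # ...
}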
Fixed by adding
server {
    ...
    large_client_header_buffers 4 16k;
    ...
}
With respect to the answers above, client_header_buffer_size also needs to be mentioned. It sets the buffer nginx first uses to read a request header; only a header that doesn't fit in it falls back to the large_client_header_buffers:
http {
    ...
    client_body_buffer_size 32k;
    client_header_buffer_size 8k;
    large_client_header_buffers 8 64k;
    ...
}
I was getting this error roughly once per 600 requests while web scraping. At first I assumed a proxy server or the remote nginx was imposing a limit. I tried deleting all cookies and the other browser-side fixes generally suggested in related posts, but no luck, and the remote server was not under my control.
In my case, the mistake was that I kept adding new headers to the httpClient object over and over. After defining a global httpClient object and adding the headers only once, the problem stopped appearing. It was a small mistake, but unfortunately instead of trying to understand the problem I jumped straight to Stack Overflow :) Sometimes we should try to understand the problem on our own first.
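I was using an httpClient object in another language, but the shape of the mistake is easy to sketch; a minimal Ruby version with illustrative names:

require "net/http"
require "uri"

# The bug: a shared header hash that the same cookie string is
# appended to before every request, so the Cookie header grows
# until the server answers 400.
HEADERS = { "Cookie" => "" }

def fetch_buggy(url, cookie)
  HEADERS["Cookie"] << "#{cookie}; "  # grows on every call!
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.request(Net::HTTP::Get.new(uri, HEADERS))
  end
end

# The fix: define the headers once, globally, and never mutate them.
FIXED_HEADERS = { "Cookie" => "session=abc123" }.freeze

def fetch_fixed(url)
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.request(Net::HTTP::Get.new(uri, FIXED_HEADERS))
  end
end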