"limit_req_zone" directive; minimum size of zone is increased.
Previously, an unsigned variable was used to keep the return value of the
ngx_parse_size() function, which led to an incorrect zone size when NGX_ERROR
was returned.
The new code has been taken from the "limit_conn_zone" directive.
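For context, the size being parsed here is the shared memory zone size given in the
directive, as in the following illustrative snippet (the key, zone name, size and rate
are examples only, not part of this change):
    # "10m" is the zone size parsed by ngx_parse_size(); values are illustrative
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;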
aio_return() must be called regardless of the error returned by
aio_error(). Not calling it resulted in various problems, up to segmentation
faults (as AIO events are level-triggered and were reported again and again).
Additionally, in the "aio sendfile" case, r->blocked was incremented when
ngx_file_aio_read() returned an error, causing request hangs.
It's already called by OPENSSL_config(). Calling it again causes some
OpenSSL engines (notably GOST) to corrupt memory, as they don't expect
to be created more than once.
The ngx_hash_init() function did not expect to be called with a zero element
count, which caused an FPE error on configurations with an empty "types" block
in the http context and "types_hash_max_size" greater than 10000.
Example configuration to reproduce:
    events { }
    http {
        types_hash_max_size 10001;
        types {}
        server {}
    }
The second argument (cpusetsize) is a size in bytes, not in bits. The previously
used constant 32 resulted in reading uninitialized memory and caused
EINVAL to be returned on some Linux kernels.
Support for TLSv1.1 and TLSv1.2 protocols was introduced in OpenSSL 1.0.1
(-beta1 was recently released). This change makes it possible to disable
these protocols and/or enable them without other protocols.
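For example, with OpenSSL 1.0.1 this makes configurations like the following
possible (an illustrative snippet; the chosen protocol set is an example only):
    # enable only TLSv1.1 and TLSv1.2, without other protocols
    ssl_protocols TLSv1.1 TLSv1.2;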
The problem was in the ngx_http_proxy_rewrite_redirect_regex() handler
function, which did not take the prefix into account when overwriting the header value.
New directives: proxy_cache_lock on/off, proxy_cache_lock_timeout. With
proxy_cache_lock set to on, only one request will be allowed to go to the
upstream for a particular cache item. Other requests will wait for a response
to appear in the cache (or for the cache lock to be released) for up to
proxy_cache_lock_timeout. Waiting requests recheck every 500ms whether a
cached response is ready (or whether they are allowed to proceed).
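An illustrative configuration (the zone name, backend and timeout are examples
only and assume a proxy_cache_path zone and an upstream defined elsewhere):
    location / {
        # names and values below are illustrative
        proxy_pass               http://backend;
        proxy_cache              one;
        proxy_cache_lock         on;
        proxy_cache_lock_timeout 5s;
    }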
Note: we intentionally don't intercept NGX_DECLINED possibly returned by
ngx_http_file_cache_read(). This needs more work (it is possibly safe, but needs
further investigation). In any case, it's an exceptional situation.
Note: there should probably be a way to disable caching of a response
if another request is already fetching the resource into the cache (without waiting
at all). Two possible approaches are another cache lock option ("no_cache")
or using proxy_no_cache with some supplied variable.
Note: there should probably be a way to lock updating requests as well. For
now, "proxy_cache_use_stale updating" is available.
It's possible that a configured limit_rate will permit more bytes per
single operation than sendfile_max_chunk. To protect the disk from being
monopolized by a single client, sendfile_max_chunk must be applied as a limit
regardless of the configured limit_rate.
See the report (in Russian) here:
http://mailman.nginx.org/pipermail/nginx-ru/2010-March/032806.html
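As an illustration (the location and values are hypothetical), in a configuration
like the following a single operation is still capped at sendfile_max_chunk even
though limit_rate alone would allow roughly 10 megabytes per second:
    location /download/ {
        # values are illustrative only
        sendfile           on;
        sendfile_max_chunk 256k;
        limit_rate         10m;
    }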
If proxy_pass was used with variables and there was no URI component,
nginx always used the unparsed URI. This isn't consistent with the "no variables"
case, where e.g. rewrites are applied even if there is no URI component.
The fix is to use the same logic in both cases, i.e. only use the unparsed URI if
it's valid and the request is the main one.
This resolves an issue with try_files (see ticket #70): a configuration like
    location / { try_files $uri /index.php; }
    location /index.php { proxy_pass http://backend; }
caused nginx to use the original request URI in the request to the backend.
Historically, not clearing r->valid_unparsed_uri on an internal redirect
was a feature: it allowed passing the same request to (another) upstream
server via an error_page redirection. Named locations have since appeared,
though, and it's time to start resetting r->valid_unparsed_uri on internal
redirects. Configurations still relying on this behavior should be converted
to use named locations instead, as sketched below.
Patch by Lanshun Zhou.
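A sketch of the named-location equivalent (the upstream names and intercepted
status codes are illustrative only):
    location / {
        # "primary", "backup" and the status codes are illustrative
        proxy_pass http://primary;
        error_page 502 504 = @fallback;
    }
    location @fallback {
        proxy_pass http://backup;
    }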
The SCGI specification doesn't specify the format of the response, and assuming
CGI specs should be used, there is no reason to complain. RFC 3875
explicitly states that "A Status header field is optional, and status
200 'OK' is assumed if it is omitted".
The r->http_version is the version of the client's request, and modules must
not set it unless they really intend to downgrade the protocol version
used for the response (i.e. to HTTP/0.9 if no response headers are available).
In no case may r->http_version be upgraded.
The former code downgraded the response from HTTP/1.1 to HTTP/1.0 for no reason,
causing various problems (see ticket #66). It was also possible for
HTTP/0.9 requests to be upgraded to HTTP/1.0.