Starting from TLSv1.1 (as seen since draft-ietf-tls-rfc2246-bis-00),
the "certificate_authorities" field grammar of the CertificateRequest
message was redone to allow no distinguished names. In TLSv1.3, with
the restructured CertificateRequest message, the same is achieved since
the "certificate_authorities" extension is optional and can simply be
omitted. This makes it possible to avoid sending DNs at all.
In practice, regardless of the published TLS specifications, all
supported SSL/TLS libraries allow requesting client certificates with
an empty DN list for any protocol version. For instance, when operating
in TLSv1, this results in sending the "certificate_authorities" list as
a zero-length vector, which corresponds to the TLSv1.1 specification.
Such behaviour goes back to SSLeay.
The change relaxes the requirement to specify at least one trusted CA
certificate in the ssl_client_certificate directive, which resulted in
sending the DNs of these certificates (closes #142). Instead, all
trusted CA certificates can now be specified using the
ssl_trusted_certificate directive if needed. A notable difference
remains: certificates specified in ssl_trusted_certificate are always
loaded (see 3648ba7db).
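For example, a hedged configuration sketch (the directives are real,
the certificate file names are illustrative):

server {
    listen 443 ssl;

    ssl_certificate     example.com.crt;
    ssl_certificate_key example.com.key;

    # no ssl_client_certificate required; an empty DN list is sent
    ssl_verify_client on;

    # trusted CAs can still be provided without sending their DNs
    ssl_trusted_certificate ca.crt;
}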
Co-authored-by: Praveen Chaudhary <praveenc@nvidia.com>
Unordered chunks could result in trak->end_chunk being smaller than
trak->start_chunk in ngx_http_mp4_crop_stsc_data(). Later, in
ngx_http_mp4_update_stco_atom(), this caused a buffer overread while
trying to calculate trak->end_offset.
While cropping an stsc atom in ngx_http_mp4_crop_stsc_data(), a 32-bit
integer overflow could happen, which could result in incorrect seeking
and a very large value stored in "samples". This resulted in a large
invalid value of trak->end_chunk_samples, which is further used to
calculate trak->end_chunk_samples_size in ngx_http_mp4_update_stsz_atom().
There, a large invalid value of trak->end_chunk_samples could result in
reading memory before the stsz atom start, potentially causing a segfault.
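As a standalone illustration of this overflow class (values and names
are illustrative, not the nginx code), widening one operand to 64 bits
before the multiplication avoids the wrap-around:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint32_t  chunks = 70000, samples_per_chunk = 70000;

    uint32_t  wrapped = chunks * samples_per_chunk;           /* overflows */
    uint64_t  exact = (uint64_t) chunks * samples_per_chunk;  /* widened */

    printf("wrapped: %u, exact: %llu\n",
           (unsigned) wrapped, (unsigned long long) exact);
    return 0;
}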
In some rare cases, graceful shutdown may happen while initializing an HTTP/2
connection. Previously, such a connection ignored the shutdown and remained
active. Now it is gracefully closed prior to processing any streams to
eliminate the shutdown delay.
Previously, handlers were mandatory. However, they are not always needed.
For example, a server configured with ssl_reject_handshake does not need
a handler, and such servers required a fake handler just to pass the
check. Now the handler absence check is moved to runtime: if the handler
is missing, the connection is closed with a 500 code.
Previously, the last chain field of ngx_quic_buffer_t could still reference
freed chains and buffers after calling ngx_quic_free_buffer(). While
normally an ngx_quic_buffer_t object should not be used after freeing,
resetting the last_chain field prevents a potential use-after-free.
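A minimal standalone sketch of the idea, with hypothetical types standing
in for ngx_chain_t and ngx_quic_buffer_t:

#include <stddef.h>

typedef struct chain_s  chain_t;

struct chain_s {
    chain_t  *next;
};

typedef struct {
    chain_t   *chain;       /* head of the chain                */
    chain_t  **last_chain;  /* points to the last link's "next" */
} quic_buffer_t;

static void
quic_free_buffer(quic_buffer_t *qb)
{
    /* ... chain links would be returned to a free list here ... */

    qb->chain = NULL;
    qb->last_chain = &qb->chain;  /* reset instead of leaving it dangling */
}

int
main(void)
{
    quic_buffer_t  qb = { NULL, NULL };

    qb.last_chain = &qb.chain;
    quic_free_buffer(&qb);

    return 0;
}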
Sending handshake-level CRYPTO frames after the client's Finished message could
lead to memory disclosure and a potential segfault, if those frames are sent in
one packet with the Finished frame.
While inserting a new entry into the dynamic table, first the entry is added,
and then older entries are evicted until table size is within capacity. After
the first step, the number of entries may temporarily exceed the maximum
calculated from capacity by one entry, which previously caused table overflow.
The easiest way to trigger the issue is to keep adding entries with empty names
and values until first eviction.
The issue was introduced by 987bee4363d1.
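A standalone sketch of the ordering issue; the 32-byte entry overhead is
per RFC 9204, so an entry with empty name and value occupies exactly
32 bytes:

#include <stdio.h>

#define CAPACITY     96  /* dynamic table capacity in bytes (hypothetical) */
#define ENTRY_SIZE   32  /* empty name + empty value + 32, per RFC 9204 */
#define MAX_ENTRIES  (CAPACITY / ENTRY_SIZE)

int
main(void)
{
    int  nentries = MAX_ENTRIES;        /* the table is full */

    nentries++;                         /* step 1: insert the new entry */

    /* between the two steps the table briefly holds MAX_ENTRIES + 1
       entries, one more than any structure sized for MAX_ENTRIES */
    printf("transient: %d entries (max %d)\n", nentries, MAX_ENTRIES);

    while (nentries * ENTRY_SIZE > CAPACITY) {
        nentries--;                     /* step 2: evict older entries */
    }

    printf("after eviction: %d entries\n", nentries);
    return 0;
}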
Previously, a decoder stream was created on demand for sending Section
Acknowledgement, Stream Cancellation and Insert Count Increment. If the
conditions for sending any of these instructions never arise, a decoder
stream is not created at all. These conditions include the client not
using the dynamic table and no streams being abandoned by the server
(RFC 9204, Section 2.2.2.2). However, RFC 9204, Section 4.2 defines
only one condition for not creating a decoder stream:
An endpoint MAY avoid creating a decoder stream if its decoder sets
the maximum capacity of the dynamic table to zero.
The change enables pre-creation of the decoder stream at HTTP/3 session
initialization if maximum dynamic table capacity is not zero. Note that this
value is currently hardcoded to 4096 bytes and is not configurable, so the
stream is now always created.
Also, the change fixes a potential stack overflow when creating a decoder
stream in ngx_http_v3_send_cancel_stream() while draining a request stream
in ngx_drain_connections(). Creating a decoder stream involves calling
ngx_get_connection(), which calls ngx_drain_connections(), which will drain
the same request stream again. If the client's MAX_STREAMS for
unidirectional streams is high enough, these recursive calls will continue
until the stack is exhausted. Otherwise, decoder stream creation will fail
at some point and the request stream connection will be drained. This may
result in a use-after-free, since this connection could still be referenced
up the stack.
Previously chain links could sometimes be dropped instead of being reused,
which could result in increased memory consumption during long requests.
A similar chain link issue in ngx_http_gzip_filter_module was fixed in
da46bfc484ef (1.11.10).
Based on a patch by Sangmin Lee.
Passing from udp was mostly impossible due to the preread buffer
restriction. Passing to udp could occasionally work, but the connection
would still be bound to the original listen rbtree, which prevented it
from being deleted on connection closure.
When loading certificate keys via ENGINE_load_private_key() at runtime,
it was possible to overwrite the configuration on ENGINE_by_id() failure.
The OpenSSL documentation doesn't describe the errors in detail; the only
reason I found, in a comment to an example, is when the engine is not
available.
Previously, a request body larger than declared in Content-Length resulted
in a 413 status code, because Content-Length was mistakenly used as the
maximum allowed request body size, similar to client_max_body_size.
Following the HTTP/3 specification, such requests are now rejected with
a 400 error as malformed.
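A standalone sketch of the distinction (illustrative names and values,
not the actual nginx code):

#include <stdio.h>

/* returns the HTTP status to reject with, or 0 to continue */
static int
check_body(long long received, long long content_length, long long max_body)
{
    if (content_length != -1 && received > content_length) {
        return 400;  /* malformed message per the HTTP/3 specification */
    }

    if (max_body && received > max_body) {
        return 413;  /* client_max_body_size exceeded */
    }

    return 0;
}

int
main(void)
{
    printf("%d\n", check_body(2048, 1024, 1048576));  /* 400, not 413 */
    return 0;
}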
The ngx_quic_run() function uses qc->close timer to limit the handshake
duration. Normally it is removed by ngx_quic_do_init_streams() which is
called once when we are done with initial SSL processing.
The problem happens when the client sends early data and streams are
initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
In this case the order of the set/remove timer calls is reversed: the
close timer is set up after it has already been removed, so the timer
eventually fires and starts an unexpected connection close process.
The fix is to skip setting the timer if streams were initialized during
handling of the initial datagram. The idle timer for QUIC is set anyway,
and stream-related timeouts are managed by the application layer.
Notably, Apple Silicon CPUs have a 128-byte cache line size, which is
twice the default configured for generic aarch64.
Signed-off-by: Piotr Sikora <piotr@aviatrix.com>
Previously, the response text wasn't initialized, and the rewrite module
was sending a response body set to NULL.
Found with UndefinedBehaviorSanitizer (pointer-overflow).
Signed-off-by: Piotr Sikora <piotr@aviatrix.com>
Previously, undefined behaviour could result when left-shifting a signed
integer due to implicit integer promotion, such that the most significant
bit was shifted into the sign bit. In practice, though, this produces the
same value as with an explicit cast, at least on known compilers such as
GCC and Clang. The reason is that in_addr_t, which is equivalent to
uint32_t and the same as "unsigned int" in the ILP32 and LP64 data type
models, has the same type width as the intermediate after integer
promotion, so there are no side effects such as sign extension. This
explains why adding an explicit cast does not change object files in
practice.
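A standalone illustration of the promotion hazard and the explicit cast
(not the nginx code itself):

#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
    unsigned char  octet = 0x80;

    /* would be UB: octet promotes to (signed) int, and 0x80 << 24
     * shifts a one into the sign bit:
     *
     *     uint32_t bad = octet << 24;
     */

    uint32_t  good = (uint32_t) octet << 24;  /* well-defined */

    printf("%08" PRIx32 "\n", good);  /* 80000000 */
    return 0;
}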
Found with UndefinedBehaviorSanitizer (shift).
Based on a patch by Piotr Sikora.
While copying ngx_http_variable_value_t structures to geo binary base
in ngx_http_geo_copy_values(), and similarly in the stream module,
uninitialized parts of these structures are copied as well. These
include the "escape" field and possible holes. Calculating crc32 of
this data triggers uninitialized memory access.
Found with MemorySanitizer.
Signed-off-by: Piotr Sikora <piotr@aviatrix.com>
In preparation for adding more parameters to the listen directive,
and to be in sync with the corresponding structure in the http module.
No functional changes.
Originally, the stream module was developed based on the mail module,
following the existing style. It then diverged to closely follow the
http module development. This change updates the style to use the sscf
naming convention throughout the stream module, which matches the http
module code style. No functional changes.
The module makes it possible to pass connections from Stream to other
modules such as HTTP or Mail, as well as back to Stream. Previously, this
was only possible with proxying. Connections with the preread buffer read
out from the socket cannot be passed.
The module allows selective SSL termination based on SNI.
stream {
    server {
        listen 8000 default_server;
        ssl_preread on;
        ...
    }

    server {
        listen 8000;
        server_name foo.example.com;
        pass 127.0.0.1:8001; # to HTTP
    }

    server {
        listen 8000;
        server_name bar.example.com;
        ...
    }
}

http {
    server {
        listen 8001 ssl;
        ...

        location / {
            root html;
        }
    }
}
Server name is taken either from ngx_stream_ssl_module or
ngx_stream_ssl_preread_module.
The change adds "default_server" parameter to the "listen" directive,
as well as the following directives: "server_names_hash_max_size",
"server_names_hash_bucket_size", "server_name" and "ssl_reject_handshake".
Previously, the preread buffer was always read out from the socket, which
made it impossible to terminate SSL on the connection without introducing
additional SSL BIOs. The following patches will rely on this.
Now, when possible, recv(MSG_PEEK) is used instead, which keeps the data
in the socket. It is used if SSL is not already terminated and an
edge-triggered event method is in use. For epoll, EPOLLRDHUP support is
also required.
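A standalone demonstration of the MSG_PEEK behaviour this relies on: the
peeked bytes stay queued in the socket and can be read again later:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int
main(void)
{
    int      sv[2];
    char     buf[16];
    ssize_t  n;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        return 1;
    }

    (void) write(sv[1], "hello", 5);

    n = recv(sv[0], buf, sizeof(buf), MSG_PEEK);  /* data stays queued */
    printf("peeked %zd bytes\n", n);

    n = recv(sv[0], buf, sizeof(buf), 0);         /* still readable */
    printf("read %zd bytes after peek\n", n);

    close(sv[0]);
    close(sv[1]);
    return 0;
}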
The stream connection cleanup handler ngx_quic_stream_cleanup_handler()
calls ngx_quic_shutdown_stream(), after which it resets the pointer from
the quic stream to the connection (sc->connection = NULL). Previously, if
this call failed, sc->connection retained the old value, while the
connection was freed by the application code. This later resulted in a
second attempt to close the freed connection, which led to an allocator
double-free error.
The fix is to reset the sc->connection pointer in case of error.
Inspired by RFC 9001, Section 6.3, trial packet decryption with the current
keys is now used to avoid a timing side-channel signal. Further, this fixes
a segfault while accessing missing next keys (ticket #2585).
Previously, if an MTU probe send failed early in ngx_quic_frame_sendto()
due to an allocation error or congestion control, the application-level
packet number was not increased, but was still saved as the MTU probe
packet number. Later, when a packet with this number was acknowledged,
the unsent MTU probe was acknowledged as well. This could result in
discovering a bigger MTU than supported by the path, which could lead to
EMSGSIZE (Message too long) errors while sending further packets.
The problem existed since PMTUD was introduced in 58afcd72446f (1.25.2).
Back then, only the unlikely memory allocation error could trigger it.
However, in efcdaa66df2e, congestion control was added to
ngx_quic_frame_sendto(), which can now trigger the issue with a higher
probability.
Now "fastopen", "backlog", "accept_filter", "deferred", and "so_keepalive"
parameters are not allowed with "quic" in the "listen" directive.
Reported by Izorkin.
When filter finalization is triggered while working with an upstream
server, and error_page redirects request processing to some simple handler,
ngx_http_finalize_request() triggers request termination when the response
is sent. In particular, via the upstream cleanup handler, nginx will close
the upstream connection and the corresponding socket.
Still, this can happen with ngx_event_pipe() on the stack. While the code
will set p->downstream_error due to NGX_ERROR returned from the output
filter chain by filter finalization, the error will otherwise be ignored
till control returns to ngx_http_upstream_process_request(). And the event
pipe might try reading from the (already closed) socket, resulting in
"readv() failed (9: Bad file descriptor) while reading upstream" errors
(or even segfaults with SSL).
Such errors were seen with the following configuration:
location /t2 {
    proxy_pass http://127.0.0.1:8080/big;
    image_filter_buffer 10m;
    image_filter resize 150 100;
    error_page 415 = /empty;
}

location /empty {
    return 204;
}

location /big {
    # big enough static file
}
The fix is to clear p->upstream in ngx_http_upstream_finalize_request(),
and to ensure that p->upstream is checked in ngx_event_pipe_read_upstream()
and when handling events at ngx_event_pipe() exit.
When a request was terminated due to an error via ngx_http_terminate_request()
while an AIO operation was running in a subrequest, various issues were
observed. This happened because ngx_http_request_finalizer() was only set
in the subrequest where ngx_http_terminate_request() was called, but not
in the subrequest where the AIO operation was running. After completion
of the AIO operation normal processing of the subrequest was resumed, leading
to issues.
In particular, in case of the upstream module, termination of the request
called upstream cleanup, which closed the upstream connection. Attempts to
further work with the upstream connection after AIO operation completion
resulted in segfaults in ngx_ssl_recv(), "readv() failed (9: Bad file
descriptor) while reading upstream" errors, or socket leaks.
In ticket #2555, issues were observed with the following configuration
with cache background update (with thread writing instrumented to
introduce a delay, when a client closes the connection during an update):
location = /background-and-aio-write {
    proxy_pass ...
    proxy_cache one;
    proxy_cache_valid 200 1s;
    proxy_cache_background_update on;
    proxy_cache_use_stale updating;
    aio threads;
    aio_write on;
    limit_rate 1000;
}
Similarly, the same issue can be seen with SSI, and can be caused by
errors in subrequests, such as in the following configuration
(where "/proxy" uses AIO, and "/sleep" returns 444 after some delay,
causing request termination):
location = /ssi-active-boom {
    ssi on;
    ssi_types *;
    return 200 '
    <!--#include virtual="/proxy" -->
    <!--#include virtual="/sleep" -->
    ';
    limit_rate 1000;
}
Or the same with both AIO operation and the error in non-active subrequests
(which needs slightly different handling, see below):
location = /ssi-non-active-boom {
    ssi on;
    ssi_types *;
    return 200 '
    <!--#include virtual="/static" -->
    <!--#include virtual="/proxy" -->
    <!--#include virtual="/sleep" -->
    ';
    limit_rate 1000;
}
Similarly, issues can be observed with just static files. However,
with static files potential impact is limited due to timeout safeguards
in ngx_http_writer(), and the fact that c->error is set during request
termination.
In a simple configuration with an AIO operation in the active subrequest,
such as in the following configuration, the connection is closed right
after completion of the AIO operation anyway, since ngx_http_writer()
tries to write to the connection and fails due to c->error set:
location = /ssi-active-static-boom {
    ssi on;
    ssi_types *;
    return 200 '
    <!--#include virtual="/static-aio" -->
    <!--#include virtual="/sleep" -->
    ';
    limit_rate 1000;
}
In the following configuration, with an AIO operation in a non-active
subrequest, the connection is closed only after send_timeout expires:
location = /ssi-non-active-static-boom {
    ssi on;
    ssi_types *;
    return 200 '
    <!--#include virtual="/static" -->
    <!--#include virtual="/static-aio" -->
    <!--#include virtual="/sleep" -->
    ';
    limit_rate 1000;
}
The fix is to introduce an r->main->terminated flag, which is to be checked
by AIO event handlers when the r->main->blocked counter is decremented.
When the flag is set, handlers are expected to wake up the connection
instead of the subrequest (which might already be cleaned up).
Additionally, ngx_http_request_finalizer() is now always set in the
active subrequest, so waking up the connection properly finalizes the
request even if termination happened in a non-active subrequest.
Each AIO (thread IO) operation being run is now accompanied by a 1-minute
timer. This timer prevents unexpected shutdown of the worker process while
an AIO operation is running, and logs an alert if the operation runs for
too long.
This fixes "open socket left" alerts during worker process shutdown due to
pending AIO (or thread IO) operations while the corresponding requests have
no timers. In particular, such errors were observed while reading cache
headers (ticket #2162), and with worker_shutdown_timeout.
When graceful shutdown was requested, and then nginx was forced to
do fast shutdown, it used to (incorrectly) complain about open sockets
left in connections which weren't yet closed when fast shutdown
was requested.
The fix is to avoid complaining about open sockets when fast shutdown was
requested after a graceful one. Abnormal termination, if requested with
the WINCH signal, can still happen though.
OPENSSL_VERSION_NUMBER is now redefined to 0x1010000fL for LibreSSL 3.5.0
and above. Building with older LibreSSL versions, such as 2.8.0, may now
produce warnings (see cab37803ebb3) and may require appropriate compiler
options to suppress them.
Notably, this makes it possible to start using SSL_get0_verified_chain(),
which appeared in OpenSSL 1.1.0 and LibreSSL 3.5.0, without additional
macro tests.
Prodded by Ilya Shipitsin.
Similar to 7356:e3ba4026c02d, as long as SSL_OP_NO_CLIENT_RENEGOTIATION
is defined, it is the library's responsibility to prevent renegotiation.
Additionally, this allows raising the LibreSSL version used to redefine
OPENSSL_VERSION_NUMBER to 0x1010000fL, such that this won't result in
attempts to dereference SSL objects made opaque in LibreSSL 3.4.0.
Patch by Maxim Dounin.
rand() is used on win32. RAND_MAX is implementation-defined; win32's is
0x7fff.
Existing uses of ngx_random() rely upon 0x7fffffff range provided by
POSIX implementations of random().
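One possible way to widen a 15-bit rand() to the expected 31-bit range,
shown as an illustrative sketch rather than the committed nginx
definition:

#include <stdio.h>
#include <stdlib.h>

static long
wide_random(void)
{
    /* three draws cover bits 30..16, 15..1 and bit 0 */
    return ((long) (rand() & 0x7fff) << 16)
           | ((long) (rand() & 0x7fff) << 1)
           | (long) (rand() & 1);
}

int
main(void)
{
    srand(42);
    printf("%ld\n", wide_random());
    return 0;
}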
On-packet acknowledgement is made path aware, as per RFC 9000, Section 9.4:
Packets sent on the old path MUST NOT contribute to congestion control
or RTT estimation for the new path.
To make this possible in a single congestion control context, the first packet
to be sent after the new path has been validated, which includes resetting the
congestion controller and RTT estimator, is now remembered in the connection.
Packets sent previously, such as on the old path, are not taken into account.
Note that although the packet number is saved per-connection, the added
checks affect application-level packets only. For non-application-level
packets, which are only processed before the handshake is complete, the
remembered packet number remains set to zero.
As per RFC 9000, Section 8.2.1:
When an endpoint is unable to expand the datagram size to 1200 bytes due
to the anti-amplification limit, the path MTU will not be validated.
To ensure that the path MTU is large enough, the endpoint MUST perform a
second path validation by sending a PATH_CHALLENGE frame in a datagram of
at least 1200 bytes.
Previously, ngx_quic_frame_sendto() ignored congestion control and did not
contribute to the in_flight counter.
Now the congestion control window is checked unless the ignore_congestion
flag is set. Also, the in_flight counter is incremented and the frame is
stored in the ctx->sent queue if it's ack-eliciting. This behavior is now
similar to ngx_quic_output_packet().
According to RFC 9000, an endpoint SHOULD NOT send multiple PATH_CHALLENGE
frames in a single packet. The change adds a check to enforce this,
optimizing server behavior. Previously, each PATH_CHALLENGE always resulted
in a single response datagram being sent to the client. The effect of this
was, however, limited by QUIC flood protection.
Also, PATH_CHALLENGE is explicitly disabled at the Initial and Handshake
levels, see RFC 9000, Table 3. However, technically it may be sent by the
client in 0-RTT over a new path without actual migration, even though
migration itself is prohibited during the handshake. This allows the
client to coalesce multiple 0-RTT packets, each carrying a PATH_CHALLENGE,
and end up with multiple PATH_CHALLENGEs per datagram. This again leads
to suboptimal behavior, see above. Since the purpose of sending
PATH_CHALLENGE frames in 0-RTT is unclear, these frames are now only
allowed in 1-RTT. For 0-RTT they are silently ignored.
Previously, when using ngx_quic_frame_sendto() to explicitly send a packet
with a single frame, the anti-amplification limit was not properly
enforced. Even when there was no quota left for the packet, it was sent
anyway, but with no padding. Now the packet is not sent at all.
This function is called to send PATH_CHALLENGE/PATH_RESPONSE, PMTUD and probe
packets. For all these cases packet send is retried later in case the send was
not successful.
By default, packets with these frames are expanded to 1200 bytes.
Previously, if the anti-amplification limit did not allow this expansion,
the packet was limited to whatever size was allowed. However, RFC 9000
clearly states that no partial expansion should happen in either case.
Section 8.2.1. Initiating Path Validation:
An endpoint MUST expand datagrams that contain a PATH_CHALLENGE frame
to at least the smallest allowed maximum datagram size of 1200 bytes,
unless the anti-amplification limit for the path does not permit
sending a datagram of this size.
Section 8.2.2. Path Validation Responses:
An endpoint MUST expand datagrams that contain a PATH_RESPONSE frame
to at least the smallest allowed maximum datagram size of 1200 bytes.
...
However, an endpoint MUST NOT expand the datagram containing the
PATH_RESPONSE if the resulting data exceeds the anti-amplification limit.
If the URI is not fully parsed yet, some pointers are not set. As a
result, the calculation of the "new + (ptr - old)" expression is flawed.
According to C11, 6.5.6 Additive operators, p.9:
: When two pointers are subtracted, both shall point to elements
: of the same array object, or one past the last element of the
: array object
Since "ptr" is not set, subtraction leads to undefined behaviour, because
"ptr" and "old" are not in the same buffer (i.e. array objects).
Prodded by GCC undefined behaviour sanitizer.
Neither r->port_start nor r->port_end were ever used.
The r->port_end is set by the parser, though it was never used by
the following code (and was never usable, since not copied by the
ngx_http_alloc_large_header_buffer() without r->port_start set).
Currently, packets generated by ngx_quic_frame_sendto() and
ngx_quic_send_early_cc() are not logged, thus making logs hard to read
due to gaps appearing in the packet number sequence.
At the frames level, it is handy to immediately see the packet number
in which a frame arrived or is being sent.
As part of normal HTTP/2 processing, incomplete frames are saved in the
control state using a fixed-size memcpy of NGX_HTTP_V2_STATE_BUFFER_SIZE.
For this purpose, two state buffers are reserved in the HTTP/2 recv buffer.
As part of HTTP/2 auto-detection on plain TCP connections, initial data
is first read into a buffer specified by the client_header_buffer_size
directive that doesn't have this state reservation. Previously, this made
it possible to over-read the buffer as part of saving the state.
The fix is to copy no more than the available buffer size rather than a
fixed size.
Although memcpy of a fixed size can produce a better optimized code,
handling of incomplete frames isn't a common execution path, so it was
sacrificed for the sake of simplicity of the fix.
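A standalone sketch of the fix (STATE_BUFFER_SIZE stands in for
NGX_HTTP_V2_STATE_BUFFER_SIZE; the function is illustrative):

#include <stdio.h>
#include <string.h>

#define STATE_BUFFER_SIZE  16

static void
save_state(char *state, const char *pos, const char *end)
{
    size_t  size = (size_t) (end - pos);

    if (size > STATE_BUFFER_SIZE) {
        size = STATE_BUFFER_SIZE;
    }

    /* was: memcpy(state, pos, STATE_BUFFER_SIZE), over-reading "pos" */
    memcpy(state, pos, size);
}

int
main(void)
{
    char  state[STATE_BUFFER_SIZE] = { 0 };
    char  input[] = "abc";     /* shorter than STATE_BUFFER_SIZE */

    save_state(state, input, input + sizeof(input) - 1);
    printf("%s\n", state);
    return 0;
}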
It is made local, as it is now only needed when creating the crypto
context. BoringSSL lacks an EVP interface for ChaCha20, providing instead
a function for one-shot encryption, thus hp is still preserved.
Based on a patch by Roman Arutyunyan.
Previously, it was possible to generate ACK frames using formally discarded
protection keys, in particular, when acknowledging a client Handshake
packet used to complete the TLS handshake and to discard handshake
protection keys. As this happens late in packet processing, it was
possible to generate ACK frames after the keys were already discarded.
ACK frames are generated from ngx_quic_ack_packet(), either using a posted
push event, which involves ngx_quic_generate_ack() as a part of the final
packet assembling, or directly in ngx_quic_ack_packet(), such as when there
is no room to add a new ACK range or when the received packet is out of
order. The added keys availability check is used to avoid generating late
ACK frames in both cases.
In addition to triggering an alert, this ensures that such packets won't
be sent. With the previous change that marks server keys as discarded by
zeroing the key length, it is now an error to send packets with discarded
keys. OpenSSL-based stacks tolerate such behaviour because the key length
isn't used in packet protection, but BoringSSL will raise the
UNSUPPORTED_KEY_SIZE cipher error. It won't be possible to use discarded
keys with reused crypto contexts, as happens in subsequent changes.
Keys may be released by the TLS stack at different times, so it makes
sense to check this independently as well. This makes it possible to
fine-tune which key direction is used when checking keys availability.
When discarding, server keys are now marked in addition to client keys.
This improves nginx startup times significantly when using a very large
number of locations, due to the computational complexity of the sorting
algorithm being used: insertion sort is O(n*n) on average, while merge
sort is O(n*log(n)). In particular, in a test configuration with 20k
locations, total startup time is reduced from 8 seconds to 0.9 seconds.
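As a rough back-of-the-envelope check of the asymptotics: for n = 20000,
n*n is 4*10^8 elementary comparisons, while n*log2(n) is about 2.9*10^5,
three orders of magnitude fewer.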
Prodded by Yusuke Nojima,
https://mailman.nginx.org/pipermail/nginx-devel/2023-September/NUL3Y2FPPFSHMPTFTL65KXSXNTX3NQMK.html
In ngx_regex_cleanup(), the allocator wasn't configured when calling
pcre2_compile_context_free() and pcre2_match_data_free(), resulting in
no ngx_free() call and leaked memory. The fix is to ensure that the
allocator is configured for global allocations, so that ngx_free() is
actually called to free memory.
Additionally, ngx_regex_compile_context was cleared in
ngx_regex_module_init(). It should either not be cleared, so it can be
freed by ngx_regex_cleanup(), or be properly freed. The fix is to not
clear it, so ngx_regex_cleanup() will be able to free it.
Reported by ZhenZhong Wu,
https://mailman.nginx.org/pipermail/nginx-devel/2023-September/3Z5FIKUDRN2WBSL3JWTZJ7SXDA6YIWPB.html
To ensure that attempts to flood servers with many streams are detected
early, a limit of no more than 2 * max_concurrent_streams new streams per
event loop iteration was introduced. This limit is applied even if
max_concurrent_streams is not yet reached - for example, if corresponding
streams are handled synchronously or reset.
Further, refused streams are now limited to the maximum of
max_concurrent_streams and 100, similarly to the priority_limit initial
value, providing some tolerance to clients trying to open several streams
at connection start, yet low tolerance to flooding attempts.
The error may be triggered in add_handshake_data() by an incorrect
transport parameter sent by the client. The expected behaviour in this
case is to close the connection, complaining about the incorrect
parameter. Currently, the connection just times out.
Previously, the timer was never reset due to an explicit check. The check
was added in 36b59521a41c as part of connection close simplification. The
reason was to retain the earliest timeout. However, the timeouts are all
the same while the QUIC handshake is in progress, and resetting the timer
to the same value has no performance implications. After handshake
completion only the application level remains. The change removes the
check.
Now the session object is assigned to c->data while the
ngx_http_connection_t object is referenced by its http_connection field,
similar to ngx_http_v2_connection_t and ngx_http_request_t.
The change makes it possible to eliminate the v3_session field from
ngx_http_connection_t. The field was under the NGX_HTTP_V3 macro, which
was a source of binary compatibility problems when nginx and modules are
built with/without HTTP/3 support.
Postponing is essential since c->data should retain the reference to the
ngx_http_connection_t object throughout the QUIC handshake, because the
SSL callbacks ngx_http_ssl_servername() and ngx_http_ssl_alpn_select()
rely on this. Instead, when the worker is shutting down and the handshake
is not yet completed, the connection is terminated immediately.
Previously, the callback could be called while the QUIC handshake was in
progress and, what's more important, before the init() callback. Now it
is postponed until after init().
This change is a preparation to postponing HTTP/3 session creation to init().
Previously, QUIC did not have such a parameter, and handshake duration was
controlled by HTTP/3. However, that required creating and storing the
HTTP/3 session on the first client datagram. Apparently, there's no
convenient way to store the session object until the QUIC handshake is
complete. In the followup patches, session creation will be postponed to
the init() callback.
As explained in the BoringSSL change[1], levels were introduced in the
original QUIC API to draw a line between when keys are released and when
they are active. In the new QUIC API they are released in separate calls
when needed. BoringSSL is thus considering removing the levels API, hence
the change.
If not available, e.g. from a QUIC packet header, levels can be derived
from keys availability. The only real use of levels is to prevent using
app keys before they are active in QuicTLS, which provides the old
BoringSSL QUIC API; this is replaced with an equivalent check of
c->ssl->handshaked.
This change also removes OpenSSL compat shims since they are no longer used.
The only exception left is caching write level from the keylog callback in
the internal field which is a handy equivalent of checking keys availability.
[1] https://boringssl.googlesource.com/boringssl/+/1e859054
As per RFC 9000, Section 10.2.3, to ensure that the peer is able to remove
packet protection, CONNECTION_CLOSE can be sent in multiple packets using
different packet protection levels.
Now it is sent at all available protection levels.
This roughly corresponds to the following paragraph:
* Prior to confirming the handshake, a peer might be unable to process 1-RTT
packets, so an endpoint SHOULD send a CONNECTION_CLOSE frame in both Handshake
and 1-RTT packets. A server SHOULD also send a CONNECTION_CLOSE frame in an
Initial packet.
In practice, this change makes it possible to avoid sending an Initial
packet when we know the client has handshake keys, by checking if we have
discarded our Initial keys.
Also, this fixes sending CONNECTION_CLOSE when using QuicTLS with the old
QUIC API, where the TLS stack releases application read keys before
handshake confirmation; it is fixed by additionally sending
CONNECTION_CLOSE in a Handshake packet.
A Status header with an empty reason-phrase, such as "Status: 404 ", is
valid per the CGI specification, but loses the trailing space during
parsing. Currently, this results in an "HTTP/1.1 404" HTTP status line in
the response, which violates the HTTP specification due to the missing
trailing space.
With this change, only the status code is used from such short Status
header lines, so nginx will generate the status line itself, with the
space and an appropriate reason phrase if available.
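For example, assuming the standard reason phrase table, an upstream header
"Status: 404 " now produces the "HTTP/1.1 404 Not Found" status line
instead of "HTTP/1.1 404".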
Reported at:
https://mailman.nginx.org/pipermail/nginx/2023-August/EX7G4JUUHJWJE5UOAZMO5UD6OJILCYGX.html
Previously, a socket error on a path being validated resulted in a
validation error and subsequent QUIC connection closure. Now the error is
ignored and path validation proceeds as usual, with several retries and a
timeout.
When validating the old path after an apparent migration, that path may already
be unavailable and sendmsg() may return an error, which should not result in
QUIC connection close.
When validating the new path, it's possible that the new client address is
spoofed (see RFC 9000, 9.3.2, On-Path Address Spoofing). This address may
likewise be unavailable and should not trigger QUIC connection closure.
Previously, the original dcid was used to receive initial client packets
in case the server's initial response was lost. However, the last dcid
should be used instead. These two are the same unless retry is used. In
case of retry, the client resends the initial packet with a new dcid,
which is different from the original dcid. If the server response is
lost, the client resends this packet again with the same dcid. This is
shown in RFC 9000, 7.3, Authenticating Connection IDs, Figure 8.
The issue manifested itself as the creation of multiple server sessions
in response to each post-retry client initial packet, if the server
response was lost.
Since at least f9fbeb4ee0de, and certainly after 924882f42dea, which TLS
Key Update support predates, queued data output is deferred to a posted
push handler. To address timing signals after these changes, generating
the next keys is now posted to run after the push handler.