Release 2.12.904 (#188)
2.12.904 (2024-12-22)
=====================

- Fixed an issue when trying to force-load WebSocket over HTTP/2 or
  HTTP/3.
- Ensured WebSocket over HTTP/2 works, with an improved CI pipeline
  featuring haproxy as the reverse proxy.
- Fixed ``RuntimeError`` when forcing HTTP/3 by disabling both HTTP/1
  and HTTP/2 while the remote is unable to negotiate HTTP/3.
  This issue occurred because of the automatic downgrade procedure
  introduced in our 2.10.x series. The downgrade ended in a panic
  because no lower protocol was available. This fix improves the UX by
  not downgrading and letting the original error out.
  See jawah/niquests#189 for the original user
  report.
- Fixed negotiated extensions for WebSocket being ignored (e.g.
  permessage-deflate).
- Backported ``HTTPResponse.shutdown()`` and nullified it. The fix
  upstream attempts to ship only concerns
  them; we are already safe (based on issue reproduction). See
  urllib3#2868
- Backported ``proxy_is_tunneling`` property to ``HTTPConnection`` and
``HTTPSConnection``.
  See urllib3#3459
- Backported ``HTTPSConnection.is_verified`` to False when using a
forwarding proxy.
  See urllib3#3283
- Backported pickling support to ``NewConnectionError`` and
``NameResolutionError``.
  See urllib3#3480
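The pickling backport in the last item above relies on a standard technique: teach the exception how to rebuild itself from its constructor arguments. A standalone illustration of that technique — the class name and attributes below are hypothetical, not niquests' actual code:

```python
import pickle


class NameResolutionFailure(OSError):
    """Hypothetical exception carrying extra context, made picklable."""

    def __init__(self, host: str, reason: str) -> None:
        super().__init__(f"Failed to resolve {host!r} ({reason})")
        self.host = host
        self.reason = reason

    def __reduce__(self):
        # Re-create the instance from its original constructor args;
        # the default OSError reduction would lose the extra attributes.
        return (self.__class__, (self.host, self.reason))


err = NameResolutionFailure("example.invalid", "DNS timeout")
restored = pickle.loads(pickle.dumps(err))
```

Without ``__reduce__``, unpickling an ``OSError`` subclass with a custom ``__init__`` signature typically fails or drops attributes — which is exactly what the backport guards against.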
Ousret authored Dec 22, 2024
2 parents aeb972e + 99350bf commit dd8795d
Showing 37 changed files with 539 additions and 100 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -198,7 +198,7 @@ jobs:
run: |
python -m coverage combine
python -m coverage html --skip-covered --skip-empty
python -m coverage report --ignore-errors --show-missing --fail-under=80
python -m coverage report --ignore-errors --show-missing --fail-under=86
- name: "Upload report"
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
2 changes: 1 addition & 1 deletion .gitignore
@@ -8,6 +8,6 @@
/build
/docs/_build
coverage.xml
traefik/httpbin.local.key
traefik/httpbin.local.pem
traefik/httpbin.local.pem.key
rootCA.pem
19 changes: 19 additions & 0 deletions CHANGES.rst
@@ -1,3 +1,22 @@
2.12.904 (2024-12-22)
=====================

- Fixed an issue when trying to force-load WebSocket over HTTP/2 or HTTP/3.
- Ensured WebSocket over HTTP/2 works, with an improved CI pipeline featuring haproxy as the reverse proxy.
- Fixed ``RuntimeError`` when forcing HTTP/3 by disabling both HTTP/1 and HTTP/2 while the remote is unable to negotiate HTTP/3.
  This issue occurred because of the automatic downgrade procedure introduced in our 2.10.x series. The downgrade ended in a panic
  because no lower protocol was available. This fix improves the UX by not downgrading and letting the original error out.
  See https://github.com/jawah/niquests/issues/189 for the original user report.
- Fixed negotiated extensions for WebSocket being ignored (e.g. permessage-deflate).
- Backported ``HTTPResponse.shutdown()`` and nullified it. The fix upstream attempts to ship only concerns
  them; we are already safe (based on issue reproduction). See https://github.com/urllib3/urllib3/issues/2868
- Backported ``proxy_is_tunneling`` property to ``HTTPConnection`` and ``HTTPSConnection``.
See https://github.com/urllib3/urllib3/pull/3459
- Backported ``HTTPSConnection.is_verified`` to False when using a forwarding proxy.
See https://github.com/urllib3/urllib3/pull/3283
- Backported pickling support to ``NewConnectionError`` and ``NameResolutionError``.
See https://github.com/urllib3/urllib3/pull/3480

2.12.903 (2024-12-09)
=====================

4 changes: 2 additions & 2 deletions dev-requirements.txt
@@ -7,8 +7,8 @@ pytest-timeout>=2.3.1,<3
trustme>=0.9.0,<2
# We have to install at most cryptography 39.0.2 for PyPy<=7.3.11
# versions of Python 3.7, 3.8, and 3.9.
cryptography==39.0.2; implementation_name=="pypy" and implementation_version<"7.3.10"
cryptography==42.0.5; implementation_name!="pypy" or implementation_version>="7.3.10"
cryptography==39.0.2; implementation_name=="pypy" and implementation_version<="7.3.11"
cryptography==42.0.5; implementation_name!="pypy" or implementation_version>"7.3.11"
backports.zoneinfo==0.2.1; python_version<"3.9"
tzdata==2024.2; python_version<"3.8"
towncrier==21.9.0
6 changes: 3 additions & 3 deletions docker-compose.win.yaml
@@ -2,6 +2,9 @@ services:
proxy:
image: traefik:v3.2-windowsservercore-ltsc2022
restart: unless-stopped
depends_on:
httpbin:
condition: service_started
healthcheck:
test: [ "CMD", "traefik" ,"healthcheck", "--ping" ]
interval: 10s
@@ -72,9 +75,6 @@ services:
context: ./go-httpbin
dockerfile: patched.Dockerfile
restart: unless-stopped
depends_on:
proxy:
condition: service_healthy
labels:
- traefik.enable=true
- traefik.http.routers.httpbin-http.rule=Host(`httpbin.local`) || Host(`alt.httpbin.local`)
24 changes: 21 additions & 3 deletions docker-compose.yaml
@@ -2,6 +2,9 @@ services:
proxy:
image: traefik:v3.2
restart: unless-stopped
depends_on:
httpbin:
condition: service_started
healthcheck:
test: [ "CMD", "traefik" ,"healthcheck", "--ping" ]
interval: 10s
@@ -64,12 +67,27 @@ services:
# Sadly unsupported for HTTP/3!
- --entrypoints.alt-https.transport.keepAliveMaxTime=5s

# haproxy is one of the very few
# capable of handling RFC8441 natively.
# todo: wait for Traefik to implement RFC8441, Caddy is almost ready (v2.9).
# golang stdlib is ready for it.
haproxy:
image: haproxy:3.1-alpine
restart: unless-stopped
depends_on:
httpbin:
condition: service_started
ports:
- target: 443
published: 9443
protocol: tcp
mode: host
volumes:
- ./traefik:/usr/local/etc/haproxy

httpbin:
image: mccutchen/go-httpbin:v2.15.0
restart: unless-stopped
depends_on:
proxy:
condition: service_healthy
labels:
- traefik.enable=true
- traefik.http.routers.httpbin-http.rule=Host(`httpbin.local`) || Host(`alt.httpbin.local`)
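The `haproxy.cfg` mounted from `./traefik` above is not shown in this diff. A minimal sketch of what an h2-capable TLS frontend for the httpbin backend might look like — hostnames, ports, and timeouts here are assumptions, not the project's actual config:

```
# haproxy.cfg (sketch) -- terminate TLS and offer h2 via ALPN so that
# Extended CONNECT (RFC 8441, WebSocket over HTTP/2) can be negotiated.
global
    daemon

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_https
    # h2 listed first so clients can upgrade WebSocket over HTTP/2
    bind :443 ssl crt /usr/local/etc/haproxy/httpbin.local.pem alpn h2,http/1.1
    default_backend be_httpbin

backend be_httpbin
    server httpbin httpbin:8080
```

Note the certificate naming elsewhere in this commit: haproxy (2.2+) automatically loads a private key from `<crt-file>.key`, which is consistent with the `.gitignore` change from `httpbin.local.key` to `httpbin.local.pem.key`.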
37 changes: 7 additions & 30 deletions noxfile.py
@@ -55,26 +55,10 @@ def traefik_boot(
if not os.path.exists("./traefik/httpbin.local.pem"):
session.log("Prepare fake certificates for our Traefik server...")

addon_proc = subprocess.Popen(
[
"python",
"-m",
"pip",
"install",
"cffi==1.17.0rc1; python_version > '3.12'",
"trustme",
]
)
session.install("trustme")

addon_proc.wait()

if addon_proc.returncode != 0:
yield
session.warn("Unable to install trustme outside of the nox Session")
return

trustme_proc = subprocess.Popen(
[
session.run(
*[
"python",
"-m",
"trustme",
@@ -86,19 +70,12 @@
]
)

trustme_proc.wait()

if trustme_proc.returncode != 0:
session.warn("Unable to issue required certificates for our Traefik stack")
yield
return

shutil.move("./traefik/server.pem", "./traefik/httpbin.local.pem")

if os.path.exists("./traefik/httpbin.local.key"):
os.unlink("./traefik/httpbin.local.key")
if os.path.exists("./traefik/httpbin.local.pem.key"):
os.unlink("./traefik/httpbin.local.pem.key")

shutil.move("./traefik/server.key", "./traefik/httpbin.local.key")
shutil.move("./traefik/server.key", "./traefik/httpbin.local.pem.key")

if os.path.exists("./rootCA.pem"):
os.unlink("./rootCA.pem")
@@ -178,6 +155,7 @@ def traefik_boot(
RemoteDisconnected,
TimeoutError,
SocketTimeout,
ConnectionError,
) as e:
i += 1
time.sleep(1)
@@ -346,7 +324,6 @@ def downstream_botocore(session: nox.Session) -> None:
session.run("python", "scripts/ci/install")

session.cd(root)
session.install("setuptools<71")

session.install(".", silent=False)
session.cd(f"{tmp_dir}/botocore")
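The noxfile refactor above replaces hand-rolled `subprocess.Popen` + `wait()` + return-code checks with nox's `session.run`, which fails loudly on its own. The same simplification with the stdlib alone (a sketch, not the project's code):

```python
import subprocess
import sys

# Before: Popen, wait(), then manual returncode checks scattered around.
# After: one call that captures output and raises on a non-zero exit.
result = subprocess.run(
    [sys.executable, "-c", "print('certificates ready')"],
    check=True,          # raise CalledProcessError instead of failing silently
    capture_output=True,
    text=True,
)
```

`session.run` adds one more benefit over this stdlib form: the command executes inside the session's virtualenv, so `session.install("trustme")` and the subsequent `python -m trustme` see the same environment.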
1 change: 1 addition & 0 deletions pyproject.toml
@@ -89,6 +89,7 @@ include = [
"/docker-compose.win.yaml",
"/traefik/certificate.toml",
"/traefik/patched.Dockerfile",
"/traefik/haproxy.cfg",
]

[tool.hatch.build.targets.wheel]
53 changes: 45 additions & 8 deletions src/urllib3/_async/connection.py
@@ -273,6 +273,9 @@ async def connect(self) -> None:
# not using tunnelling.
self._has_connected_to_proxy = bool(self.proxy)

if self._has_connected_to_proxy:
self.proxy_is_verified = False

@property
def is_closed(self) -> bool:
return self.sock is None
@@ -289,6 +292,20 @@ def is_connected(self) -> bool:
def has_connected_to_proxy(self) -> bool:
return self._has_connected_to_proxy

@property
def proxy_is_forwarding(self) -> bool:
"""
Return True if a forwarding proxy is configured, else return False
"""
return bool(self.proxy) and self._tunnel_host is None

@property
def proxy_is_tunneling(self) -> bool:
"""
Return True if a tunneling proxy is configured, else return False
"""
return self._tunnel_host is not None

async def close(self) -> None: # type: ignore[override]
try:
await super().close()
@@ -791,13 +808,15 @@ async def connect(self) -> None:
alpn_protocols.append("h2")

# Do we need to establish a tunnel?
if self._tunnel_host is not None:
if self.proxy_is_tunneling:
# We're tunneling to an HTTPS origin so need to do TLS-in-TLS.
if self._tunnel_scheme == "https":
self.sock = sock = await self._connect_tls_proxy(
self.host, sock, ["http/1.1"]
)
tls_in_tls = True
elif self._tunnel_scheme == "http":
self.proxy_is_verified = False

await self._post_conn()

@@ -806,7 +825,7 @@

await self._tunnel()
# Override the host with the one we're requesting data from.
server_hostname = self._tunnel_host
server_hostname = self._tunnel_host # type: ignore[assignment]

if self.server_hostname is not None:
server_hostname = self.server_hostname
@@ -833,14 +852,32 @@
key_data=self.key_data,
)
self.sock = sock_and_verified.socket
self.is_verified = sock_and_verified.is_verified

await self._post_conn()
# If there's a proxy to be connected to we are fully connected.
# This is set twice (once above and here) due to forwarding proxies
# not using tunnelling.
self._has_connected_to_proxy = bool(self.proxy)

# Forwarding proxies can never have a verified target since
# the proxy is the one doing the verification. Should instead
# use a CONNECT tunnel in order to verify the target.
# See: https://github.com/urllib3/urllib3/issues/3267.
if self.proxy_is_forwarding:
self.is_verified = False
else:
self.is_verified = sock_and_verified.is_verified

# If there's a proxy to be connected to we are fully connected.
# This is set twice (once above and here) due to forwarding proxies
# not using tunnelling.
self._has_connected_to_proxy = bool(self.proxy)

# Set `self.proxy_is_verified` unless it's already set while
# establishing a tunnel.
if self._has_connected_to_proxy and self.proxy_is_verified is None:
self.proxy_is_verified = sock_and_verified.is_verified

await self._post_conn()

async def _connect_tls_proxy(
self,
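The two backported properties and the `is_verified` override above reduce to simple checks on the configured proxy and tunnel host. A minimal standalone model of that logic — the property names mirror the diff, everything else is simplified:

```python
from __future__ import annotations


class ConnectionModel:
    """Simplified stand-in for HTTPSConnection's proxy bookkeeping."""

    def __init__(self, proxy: str | None, tunnel_host: str | None) -> None:
        self.proxy = proxy
        self._tunnel_host = tunnel_host

    @property
    def proxy_is_forwarding(self) -> bool:
        # A proxy is configured but no CONNECT tunnel was requested.
        return bool(self.proxy) and self._tunnel_host is None

    @property
    def proxy_is_tunneling(self) -> bool:
        # CONNECT tunnel requested: TLS terminates at the target, not the proxy.
        return self._tunnel_host is not None

    def resolve_is_verified(self, tls_verified: bool) -> bool:
        # A forwarding proxy can never yield a verified target, because the
        # proxy performs (or skips) certificate verification on our behalf.
        return False if self.proxy_is_forwarding else tls_verified


forwarding = ConnectionModel("http://proxy:8080", None)
tunneling = ConnectionModel("http://proxy:8080", "example.com")
```

This is why the backport referenced urllib3#3283: before it, a forwarding proxy could report `is_verified=True` even though the client never verified the origin's certificate itself.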
2 changes: 1 addition & 1 deletion src/urllib3/_async/connectionpool.py
@@ -2350,7 +2350,7 @@ async def _validate_conn(self, conn: AsyncHTTPConnection) -> None:
if conn.is_closed:
await conn.connect()

if not conn.is_verified:
if not conn.is_verified and not conn.proxy_is_verified:
warnings.warn(
(
f"Unverified HTTPS request is being made to host '{conn.host}'. "
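The one-line change above keeps the "unverified HTTPS request" warning silent whenever the proxy leg was verified, not just the target. Expressed as a predicate (a sketch of the decision, not the library's code):

```python
from __future__ import annotations


def should_warn_unverified(is_verified: bool, proxy_is_verified: bool | None) -> bool:
    # Warn only when neither the target connection nor the proxy
    # connection completed certificate verification. A None proxy state
    # (no proxy configured) does not suppress the warning by itself.
    return not is_verified and not proxy_is_verified
```

With a tunneling proxy whose own TLS was verified, `proxy_is_verified` is True and the warning is suppressed even though the inner leg reported unverified.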
2 changes: 1 addition & 1 deletion src/urllib3/_version.py
@@ -1,4 +1,4 @@
# This file is protected via CODEOWNERS
from __future__ import annotations

__version__ = "2.12.903"
__version__ = "2.12.904"
31 changes: 22 additions & 9 deletions src/urllib3/backend/_async/hface.py
@@ -118,6 +118,7 @@ def __init__(
self.__remaining_body_length: int | None = None
self.__authority_bit_set: bool = False
self.__legacy_host_entry: bytes | None = None
self.__protocol_bit_set: bool = False

# h3 specifics
self.__custom_tls_settings: QuicTLSConfig | None = None
@@ -594,10 +595,16 @@ async def _post_conn(self) -> None:  # type: ignore[override]
# this avoid the close() to attempt re-use the (dead) sock
self._protocol = None

raise MustDowngradeError(
f"The server yielded its support for {self._svn} through the Alt-Svc header while unable to do so. "
f"To remediate that issue, either disable {self._svn} or reach out to the server admin."
) from e
# we don't want to force downgrade if the user specifically said
# to kill support for all other supported protocols!
if (
HttpVersion.h11 not in self.disabled_svn
or HttpVersion.h2 not in self.disabled_svn
):
raise MustDowngradeError(
f"The server yielded its support for {self._svn} through the Alt-Svc header while unable to do so. "
f"To remediate that issue, either disable {self._svn} or reach out to the server admin."
) from e
raise

self._connected_at = time.monotonic()
@@ -964,9 +971,10 @@ async def __exchange_until(
if (self._svn == HttpVersion.h2 and event.error_code == 0xD) or (
self._svn == HttpVersion.h3 and event.error_code == 0x0110
):
raise MustDowngradeError(
f"The remote server is unable to serve this resource over {self._svn}"
)
if HttpVersion.h11 not in self.disabled_svn:
raise MustDowngradeError(
f"The remote server is unable to serve this resource over {self._svn}"
)

raise ProtocolError(
f"Stream {event.stream_id} was reset by remote peer. Reason: {hex(event.error_code)}."
@@ -1040,6 +1048,7 @@ def putrequest(
self.__remaining_body_length = None
self.__legacy_host_entry = None
self.__authority_bit_set = False
self.__protocol_bit_set = False

self._start_last_request = datetime.now(tz=timezone.utc)

@@ -1121,6 +1130,8 @@ def putheader(self, header: str, *values: str) -> None:
)

if encoded_header.startswith(b":"):
if encoded_header == b":protocol":
self.__protocol_bit_set = True
item_to_remove = None

for _k, _v in self.__headers:
@@ -1167,7 +1178,9 @@ async def endheaders(  # type: ignore[override]
# only h2 and h3 support streams, it is faked/simulated for h1.
self._stream_id = self._protocol.get_available_stream_id()
# unless anything hint the opposite, the request head frame is the end stream
should_end_stream: bool = expect_body_afterward is False
should_end_stream: bool = (
expect_body_afterward is False and self.__protocol_bit_set is False
)

# handle cases where 'Host' header is set manually
if self.__legacy_host_entry is not None:
@@ -1220,7 +1233,7 @@
else:
self._last_used_at = time.monotonic()

if should_end_stream:
if expect_body_afterward is False:
if self._start_last_request and self.conn_info:
self.conn_info.request_sent_latency = (
datetime.now(tz=timezone.utc) - self._start_last_request
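The `__protocol_bit_set` fix above keeps the request stream open whenever the `:protocol` pseudo-header was sent: an Extended CONNECT (RFC 8441) turns the stream into the WebSocket channel, so the header frame must not carry END_STREAM. The decision reduces to a predicate (a sketch of the logic, not the library's code):

```python
def request_head_ends_stream(expect_body_afterward: bool, protocol_bit_set: bool) -> bool:
    # The request head frame ends the stream only when no body follows
    # AND the request is not an Extended CONNECT (':protocol' set);
    # WebSocket frames travel on the stream after the headers.
    return expect_body_afterward is False and protocol_bit_set is False
```

Note the second hunk: latency bookkeeping still keys off `expect_body_afterward is False` alone, since an Extended CONNECT's head frame is still the last thing sent before the upgrade completes.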
