Mirror of https://github.com/varun-r-mallya/py-libp2p.git (synced 2026-02-12 16:10:57 +00:00)

Compare commits: 146 commits, 58b33ba2e8 ... varun-r-ma
Commits in this range:

af61523c87, d2fdf70692, 1ea50a3cf3, f4247faa51, 92e79bbb3f, eb3121b818,
787648177f, fc9b28910a, 26d0ed2d81, 618aff9368, 32e545d9c7, e712e6c0c4,
b143c96abd, 678b920992, cb11f076c8, 9ed44f5fa3, 8786f06862, 8c96c5a941,
16445714f7, 64bc388b33, 09e151aafc, 2d335d4394, 8b8b051885, 07c8d4cd1f,
09e6feea8e, 601a8a3ef0, 9d597012cc, 8625226be8, c2b1738cd9, 83acc38281,
1899dac84c, aab2a0b603, bab08c0900, 6431fb8788, c8053417d5, 6eba9d8ca0,
0e1b738cbb, 2ff5ae9c90, ecc443dcfe, aa6039bcd3, 8352d19113, ceb9f7d3f7,
9b667bd472, eca548851b, e91f458446, 0416572457, 39375fb338, 8bf261ca77,
3a927c8419, ec92af20e7, 01db5d5fa0, 21ee417793, 37e4fee9f8, c277cce2ed,
048e6deb96, 2dc2dd4670, e6a355d395, 7b181f3ce5, 0606788ab6, 7d62a2f558,
26fd169ccc, 99db5b309f, 7cfe5b9dc7, 092b9c0c57, fcf0546831, 85bad2d0ae,
11560f5cc9, 3507531344, c9162beb2b, f587e50cab, d1a0f4f767, 3ca27c6e93,
b4482e1a5e, ae82895d86, 9f40d97a05, 6fe28dcdd3, 41b1ecb67c, e3c9b4bd54,
e132b154e3, 62ea3bbf9a, 430527625b, 4115d033a8, 93fc063e70, 4bd24621f0,
5315816521, d5797572ea, 311b750511, 42f07ae1ab, 773962c070, 8ccf58bb83,
06f0c7d35c, ab94e77310, 23622ea1a0, 6aeb217349, 003e7bf278, 719246c996,
e013e80689, 6f33cde9a9, 9c2560d000, 9f38d48e26, 2c1e50428a, 9e76940e75,
53614200bd, 41ed0769f6, 1c59653946, 4bbb08ce2d, 912669a924, 8ec67289da,
9cd3805542, b81168dae9, d03bdd75d6, 8ff7bb1f20, 5fcfc677f3, dd14aad47c,
96434d9977, 1507100632, 21db1c3b72, 3592ad308f, 9669a92976, 2dfee68f20,
505d3b2a8f, f4eb0158fe, b716d64184, 198208aef3, cda163fc48, 26ed99dafd,
a26fd95854, 2965b4e364, 242998ae9d, 5f497c7f5d, e65e38a3f1, 8fb664bfdf,
3dcd99a2d1, 75abc8b863, 91dca97d83, 80c686ddce, dcb199a6b7, 16be6fab85,
cbb1e26a4f, 69a2cb00ba, ec20ca81dd, b5ec1bd7ee, ddbd190993, 36be4c354b,
befb2d31db, 12ad2dcdf4
Makefile: 8 lines changed

@@ -60,6 +60,7 @@ PB = libp2p/crypto/pb/crypto.proto \
	libp2p/identity/identify/pb/identify.proto \
	libp2p/host/autonat/pb/autonat.proto \
	libp2p/relay/circuit_v2/pb/circuit.proto \
	libp2p/relay/circuit_v2/pb/dcutr.proto \
	libp2p/kad_dht/pb/kademlia.proto

PY = $(PB:.proto=_pb2.py)
@@ -68,6 +69,8 @@ PYI = $(PB:.proto=_pb2.pyi)
## Set default to `protobufs`, otherwise `format` is called when typing only `make`
all: protobufs

.PHONY: protobufs clean-proto

protobufs: $(PY)

%_pb2.py: %.proto
@@ -76,6 +79,11 @@ protobufs: $(PY)
clean-proto:
	rm -f $(PY) $(PYI)

# Force protobuf regeneration by making them always out of date
$(PY): FORCE

FORCE:

# docs commands

docs: check-docs

README.md: 48 lines changed

@@ -34,19 +34,19 @@ ______________________________________________________________________
| -------------------------------------- | :--------: | :---------------------------------------------------------------------------------: |
| **`libp2p-tcp`** | ✅ | [source](https://github.com/libp2p/py-libp2p/blob/main/libp2p/transport/tcp/tcp.py) |
| **`libp2p-quic`** | 🌱 | |
| **`libp2p-websocket`** | ❌ | |
| **`libp2p-webrtc-browser-to-server`** | ❌ | |
| **`libp2p-webrtc-private-to-private`** | ❌ | |
| **`libp2p-websocket`** | 🌱 | |
| **`libp2p-webrtc-browser-to-server`** | 🌱 | |
| **`libp2p-webrtc-private-to-private`** | 🌱 | |

______________________________________________________________________

### NAT Traversal

| **NAT Traversal** | **Status** |
| ----------------------------- | :--------: |
| **`libp2p-circuit-relay-v2`** | ❌ |
| **`libp2p-autonat`** | ❌ |
| **`libp2p-hole-punching`** | ❌ |
| **NAT Traversal** | **Status** | **Source** |
| ----------------------------- | :--------: | :-----------------------------------------------------------------------------: |
| **`libp2p-circuit-relay-v2`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/relay/circuit_v2) |
| **`libp2p-autonat`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/host/autonat) |
| **`libp2p-hole-punching`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/relay/circuit_v2) |

______________________________________________________________________

@@ -54,27 +54,27 @@ ______________________________________________________________________

| **Secure Communication** | **Status** | **Source** |
| ------------------------ | :--------: | :---------------------------------------------------------------------------: |
| **`libp2p-noise`** | 🌱 | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/security/noise) |
| **`libp2p-tls`** | ❌ | |
| **`libp2p-noise`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/security/noise) |
| **`libp2p-tls`** | 🌱 | |

______________________________________________________________________

### Discovery

| **Discovery** | **Status** |
| -------------------- | :--------: |
| **`bootstrap`** | ❌ |
| **`random-walk`** | ❌ |
| **`mdns-discovery`** | ❌ |
| **`rendezvous`** | ❌ |
| **Discovery** | **Status** | **Source** |
| -------------------- | :--------: | :--------------------------------------------------------------------------------: |
| **`bootstrap`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/discovery/bootstrap) |
| **`random-walk`** | 🌱 | |
| **`mdns-discovery`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/discovery/mdns) |
| **`rendezvous`** | 🌱 | |

______________________________________________________________________

### Peer Routing

| **Peer Routing** | **Status** |
| -------------------- | :--------: |
| **`libp2p-kad-dht`** | ❌ |
| **Peer Routing** | **Status** | **Source** |
| -------------------- | :--------: | :--------------------------------------------------------------------: |
| **`libp2p-kad-dht`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/kad_dht) |

______________________________________________________________________

@@ -89,10 +89,10 @@ ______________________________________________________________________

### Stream Muxers

| **Stream Muxers** | **Status** | **Status** |
| ------------------ | :--------: | :----------------------------------------------------------------------------------------: |
| **`libp2p-yamux`** | 🌱 | |
| **`libp2p-mplex`** | 🛠️ | [source](https://github.com/libp2p/py-libp2p/blob/main/libp2p/stream_muxer/mplex/mplex.py) |
| **Stream Muxers** | **Status** | **Source** |
| ------------------ | :--------: | :-------------------------------------------------------------------------------: |
| **`libp2p-yamux`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/stream_muxer/yamux) |
| **`libp2p-mplex`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/stream_muxer/mplex) |

______________________________________________________________________

@@ -100,7 +100,7 @@ ______________________________________________________________________

| **Storage** | **Status** |
| ------------------- | :--------: |
| **`libp2p-record`** | ❌ |
| **`libp2p-record`** | 🌱 |

______________________________________________________________________

docs/libp2p.discovery.bootstrap.rst: 13 lines changed (new file)

@@ -0,0 +1,13 @@
libp2p.discovery.bootstrap package
==================================

Submodules
----------

Module contents
---------------

.. automodule:: libp2p.discovery.bootstrap
   :members:
   :undoc-members:
   :show-inheritance:

@@ -7,6 +7,7 @@ Subpackages
.. toctree::
   :maxdepth: 4

   libp2p.discovery.bootstrap
   libp2p.discovery.events
   libp2p.discovery.mdns

@@ -3,6 +3,65 @@ Release Notes

.. towncrier release notes start


py-libp2p v0.2.9 (2025-07-09)
-----------------------------

Breaking Changes
~~~~~~~~~~~~~~~~

- Reordered the arguments to ``upgrade_security`` to place ``is_initiator`` before ``peer_id``, and made ``peer_id`` optional.
  This allows the method to reflect the fact that peer identity is not required for inbound connections. (`#681 <https://github.com/libp2p/py-libp2p/issues/681>`__)
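
  A hedged sketch of the new call shape; the upgrader object and variable names here are illustrative, not the library's actual ones::

      # Outbound: we dialed, so the expected peer identity is known.
      secured = await upgrader.upgrade_security(raw_conn, True, peer_id=remote_peer_id)

      # Inbound: peer identity is not yet known, so peer_id can be omitted.
      secured = await upgrader.upgrade_security(raw_conn, False)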

Bugfixes
~~~~~~~~

- Add timeout wrappers in:

  1. ``multiselect.py``: ``negotiate`` function
  2. ``multiselect_client.py``: ``select_one_of``, ``query_multistream_command`` functions

  to prevent indefinite hangs when a remote peer does not respond. (`#696 <https://github.com/libp2p/py-libp2p/issues/696>`__)
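
  The wrappers are ordinary trio timeout guards; a minimal sketch, assuming a hypothetical coroutine and an illustrative timeout value::

      import trio

      NEGOTIATE_TIMEOUT = 5  # seconds; illustrative, not the actual constant

      async def with_timeout(coro_fn, *args):
          # Raise trio.TooSlowError instead of hanging forever when the
          # remote peer never responds.
          with trio.fail_after(NEGOTIATE_TIMEOUT):
              return await coro_fn(*args)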

- Align stream creation logic with yamux specification (`#701 <https://github.com/libp2p/py-libp2p/issues/701>`__)
- Fixed an issue in ``Pubsub`` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior. (`#702 <https://github.com/libp2p/py-libp2p/issues/702>`__)


Features
~~~~~~~~

- Added support for ``Kademlia DHT`` in py-libp2p. (`#579 <https://github.com/libp2p/py-libp2p/issues/579>`__)
- Limit concurrency in ``push_identify_to_peers`` to prevent resource congestion under high peer counts. (`#621 <https://github.com/libp2p/py-libp2p/issues/621>`__)
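
  The cap can be pictured with a ``trio.CapacityLimiter``; the limit value and helper names below are illustrative, not the library's actual ones::

      import trio

      MAX_CONCURRENT_PUSHES = 10  # assumed cap, not the actual constant

      async def push_to_peers_with_cap(peers, push_one):
          limiter = trio.CapacityLimiter(MAX_CONCURRENT_PUSHES)

          async def guarded_push(peer):
              async with limiter:  # at most MAX_CONCURRENT_PUSHES in flight
                  await push_one(peer)

          async with trio.open_nursery() as nursery:
              for peer in peers:
                  nursery.start_soon(guarded_push, peer)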

- Store public key and peer ID in peerstore during handshake

  Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.

  Added test cases to verify:

  1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
  2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet (`#631 <https://github.com/libp2p/py-libp2p/issues/631>`__)
- Fixed several flow-control and concurrency issues in the ``YamuxStream`` class. Previously, stress-testing revealed that transferring data over ``DEFAULT_WINDOW_SIZE`` would break the stream due to inconsistent window update handling and lock management. The fixes include:

  - Removed sending of window updates during writes to maintain correct flow-control.
  - Added proper timeout handling when releasing and acquiring locks to prevent concurrency errors.
  - Corrected the ``read`` function to properly handle window updates for both ``read_until_EOF`` and ``read_n_bytes``.
  - Added event logging at ``send_window_updates`` and ``waiting_for_window_updates`` for better observability. (`#639 <https://github.com/libp2p/py-libp2p/issues/639>`__)
- Added support for ``Multicast DNS`` in py-libp2p (`#649 <https://github.com/libp2p/py-libp2p/issues/649>`__)
- Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic. (`#685 <https://github.com/libp2p/py-libp2p/issues/685>`__)
- Optimized pubsub message writing by implementing a ``write_msg()`` method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub. (`#687 <https://github.com/libp2p/py-libp2p/issues/687>`__)
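
  The optimization amounts to encoding the varint length prefix and the serialized message into one buffer and issuing a single write; a sketch, with ``encode_uvarint`` standing in for whichever varint helper the library uses::

      def encode_uvarint(value: int) -> bytes:
          # Standard unsigned LEB128 varint encoding.
          out = bytearray()
          while True:
              byte = value & 0x7F
              value >>= 7
              if value:
                  out.append(byte | 0x80)
              else:
                  out.append(byte)
                  return bytes(out)

      async def write_msg(stream, msg) -> None:
          payload = msg.SerializeToString()
          # One pre-assembled buffer, one write call, instead of writing
          # the prefix and the body separately.
          await stream.write(encode_uvarint(len(payload)) + payload)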

- Added peer exchange and backoff logic as part of Gossipsub v1.1 upgrade (`#690 <https://github.com/libp2p/py-libp2p/issues/690>`__)
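
  The backoff side can be sketched as an expiry map keyed by (topic, peer); the class and method names are illustrative, not GossipSub's actual internals::

      import time

      class PruneBackoff:
          """Track when a pruned peer may be re-grafted to a topic."""

          def __init__(self, backoff_seconds: float = 60.0) -> None:
              self._expiry: dict[tuple[str, str], float] = {}
              self._backoff = backoff_seconds

          def record_prune(self, topic: str, peer_id: str) -> None:
              self._expiry[(topic, peer_id)] = time.monotonic() + self._backoff

          def may_graft(self, topic: str, peer_id: str) -> bool:
              return time.monotonic() >= self._expiry.get((topic, peer_id), 0.0)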


Internal Changes - for py-libp2p Contributors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity. (`#679 <https://github.com/libp2p/py-libp2p/issues/679>`__)
- Added comprehensive tests for pubsub connection utility functions to verify degree limits are enforced, excess peers are handled correctly, and edge cases (degree=0, negative values, empty lists) are managed gracefully. (`#707 <https://github.com/libp2p/py-libp2p/issues/707>`__)
- Added extra tests for identify push concurrency cap under high peer load (`#708 <https://github.com/libp2p/py-libp2p/issues/708>`__)


Miscellaneous Changes
~~~~~~~~~~~~~~~~~~~~~

- `#678 <https://github.com/libp2p/py-libp2p/issues/678>`__, `#684 <https://github.com/libp2p/py-libp2p/issues/684>`__


py-libp2p v0.2.8 (2025-06-10)
-----------------------------

examples/bootstrap/bootstrap.py: 136 lines changed (new file)

@@ -0,0 +1,136 @@
import argparse
import logging
import secrets

import multiaddr
import trio

from libp2p import new_host
from libp2p.abc import PeerInfo
from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.discovery.events.peerDiscovery import peerDiscovery

# Configure logging
logger = logging.getLogger("libp2p.discovery.bootstrap")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logger.addHandler(handler)

# Configure root logger to only show warnings and above to reduce noise
# This prevents verbose DEBUG messages from multiaddr, DNS, etc.
logging.getLogger().setLevel(logging.WARNING)

# Specifically silence noisy libraries
logging.getLogger("multiaddr").setLevel(logging.WARNING)
logging.getLogger("root").setLevel(logging.WARNING)


def on_peer_discovery(peer_info: PeerInfo) -> None:
    """Handler for peer discovery events."""
    logger.info(f"🔍 Discovered peer: {peer_info.peer_id}")
    logger.debug(f" Addresses: {[str(addr) for addr in peer_info.addrs]}")


# Example bootstrap peers
BOOTSTRAP_PEERS = [
    "/dnsaddr/github.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/cloudflare.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/google.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip4/128.199.219.111/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip4/104.236.76.40/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip4/178.62.158.247/tcp/4001/p2p/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
    "/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip6/2400:6180:0:d0::151:6001/tcp/4001/p2p/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
    "/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/p2p/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm",
]


async def run(port: int, bootstrap_addrs: list[str]) -> None:
    """Run the bootstrap discovery example."""
    # Generate key pair
    secret = secrets.token_bytes(32)
    key_pair = create_new_key_pair(secret)

    # Create listen address
    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")

    # Register peer discovery handler
    peerDiscovery.register_peer_discovered_handler(on_peer_discovery)

    logger.info("🚀 Starting Bootstrap Discovery Example")
    logger.info(f"📍 Listening on: {listen_addr}")
    logger.info(f"🌐 Bootstrap peers: {len(bootstrap_addrs)}")

    print("\n" + "=" * 60)
    print("Bootstrap Discovery Example")
    print("=" * 60)
    print("This example demonstrates connecting to bootstrap peers.")
    print("Watch the logs for peer discovery events!")
    print("Press Ctrl+C to exit.")
    print("=" * 60)

    # Create and run host with bootstrap discovery
    host = new_host(key_pair=key_pair, bootstrap=bootstrap_addrs)

    try:
        async with host.run(listen_addrs=[listen_addr]):
            # Keep running and log peer discovery events
            await trio.sleep_forever()
    except KeyboardInterrupt:
        logger.info("👋 Shutting down...")


def main() -> None:
    """Main entry point."""
    description = """
    Bootstrap Discovery Example for py-libp2p

    This example demonstrates how to use bootstrap peers for peer discovery.
    Bootstrap peers are predefined peers that help new nodes join the network.

    Usage:
        python bootstrap.py -p 8000
        python bootstrap.py -p 8001 --custom-bootstrap \\
            "/ip4/127.0.0.1/tcp/8000/p2p/QmYourPeerID"
    """

    parser = argparse.ArgumentParser(
        description=description, formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument(
        "-p", "--port", default=0, type=int, help="Port to listen on (default: random)"
    )
    parser.add_argument(
        "--custom-bootstrap",
        nargs="*",
        help="Custom bootstrap addresses (space-separated)",
    )
    parser.add_argument(
        "-v", "--verbose", action="store_true", help="Enable verbose output"
    )

    args = parser.parse_args()

    if args.verbose:
        logger.setLevel(logging.DEBUG)

    # Use custom bootstrap addresses if provided, otherwise use defaults
    bootstrap_addrs = (
        args.custom_bootstrap if args.custom_bootstrap else BOOTSTRAP_PEERS
    )

    try:
        trio.run(run, args.port, bootstrap_addrs)
    except KeyboardInterrupt:
        logger.info("Exiting...")


if __name__ == "__main__":
    main()

@@ -43,6 +43,9 @@ async def run(port: int, destination: str) -> None:
    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
    host = new_host()
    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
        # Start the peer-store cleanup task
        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

        if not destination:  # it's the server

            async def stream_handler(stream: INetStream) -> None:

@@ -45,7 +45,10 @@ async def run(port: int, destination: str, seed: int | None = None) -> None:
    secret = secrets.token_bytes(32)

    host = new_host(key_pair=create_new_key_pair(secret))
    async with host.run(listen_addrs=[listen_addr]):
    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
        # Start the peer-store cleanup task
        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

        print(f"I am {host.get_id().to_string()}")

        if not destination:  # it's the server

@@ -1,6 +1,7 @@
import argparse
import base64
import logging
import sys

import multiaddr
import trio
@@ -8,10 +9,13 @@ import trio
from libp2p import (
    new_host,
)
from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID
from libp2p.identity.identify.pb.identify_pb2 import (
    Identify,
from libp2p.identity.identify.identify import (
    ID as IDENTIFY_PROTOCOL_ID,
    identify_handler_for,
    parse_identify_response,
)
from libp2p.identity.identify.pb.identify_pb2 import Identify
from libp2p.peer.envelope import debug_dump_envelope, unmarshal_envelope
from libp2p.peer.peerinfo import (
    info_from_p2p_addr,
)
@@ -30,10 +34,11 @@ def decode_multiaddrs(raw_addrs):
    return decoded_addrs


def print_identify_response(identify_response):
def print_identify_response(identify_response: Identify):
    """Pretty-print Identify response."""
    public_key_b64 = base64.b64encode(identify_response.public_key).decode("utf-8")
    listen_addrs = decode_multiaddrs(identify_response.listen_addrs)
    signed_peer_record = unmarshal_envelope(identify_response.signedPeerRecord)
    try:
        observed_addr_decoded = decode_multiaddrs([identify_response.observed_addr])
    except Exception:
@@ -49,8 +54,10 @@ def print_identify_response(identify_response):
        f" Agent Version: {identify_response.agent_version}"
    )

    debug_dump_envelope(signed_peer_record)


async def run(port: int, destination: str) -> None:
async def run(port: int, destination: str, use_varint_format: bool = True) -> None:
    localhost_ip = "0.0.0.0"

    if not destination:
@@ -58,39 +65,159 @@ async def run(port: int, destination: str) -> None:
        listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
        host_a = new_host()

        async with host_a.run(listen_addrs=[listen_addr]):
        # Set up identify handler with specified format
        # Set use_varint_format = False if you want to check out the Signed-PeerRecord
        identify_handler = identify_handler_for(
            host_a, use_varint_format=use_varint_format
        )
        host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, identify_handler)

        async with (
            host_a.run(listen_addrs=[listen_addr]),
            trio.open_nursery() as nursery,
        ):
            # Start the peer-store cleanup task
            nursery.start_soon(host_a.get_peerstore().start_cleanup_task, 60)

            # Get the actual address and replace 0.0.0.0 with 127.0.0.1 for client
            # connections
            server_addr = str(host_a.get_addrs()[0])
            client_addr = server_addr.replace("/ip4/0.0.0.0/", "/ip4/127.0.0.1/")

            format_name = "length-prefixed" if use_varint_format else "raw protobuf"
            format_flag = "--raw-format" if not use_varint_format else ""
            print(
                "First host listening. Run this from another console:\n\n"
                f"identify-demo "
                f"-d {host_a.get_addrs()[0]}\n"
                f"First host listening (using {format_name} format). "
                f"Run this from another console:\n\n"
                f"identify-demo {format_flag} -d {client_addr}\n"
            )
            print("Waiting for incoming identify request...")
            await trio.sleep_forever()

            # Add a custom handler to show connection events
            async def custom_identify_handler(stream):
                peer_id = stream.muxed_conn.peer_id
                print(f"\n🔗 Received identify request from peer: {peer_id}")

                # Show remote address in multiaddr format
                try:
                    from libp2p.identity.identify.identify import (
                        _remote_address_to_multiaddr,
                    )

                    remote_address = stream.get_remote_address()
                    if remote_address:
                        observed_multiaddr = _remote_address_to_multiaddr(
                            remote_address
                        )
                        # Add the peer ID to create a complete multiaddr
                        complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
                        print(f" Remote address: {complete_multiaddr}")
                    else:
                        print(f" Remote address: {remote_address}")
                except Exception:
                    print(f" Remote address: {stream.get_remote_address()}")

                # Call the original handler
                await identify_handler(stream)

                print(f"✅ Successfully processed identify request from {peer_id}")

            # Replace the handler with our custom one
            host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, custom_identify_handler)

            try:
                await trio.sleep_forever()
            except KeyboardInterrupt:
                print("\n🛑 Shutting down listener...")
                logger.info("Listener interrupted by user")
                return

    else:
        # Create second host (dialer)
        listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
        host_b = new_host()

        async with host_b.run(listen_addrs=[listen_addr]):
        async with (
            host_b.run(listen_addrs=[listen_addr]),
            trio.open_nursery() as nursery,
        ):
            # Start the peer-store cleanup task
            nursery.start_soon(host_b.get_peerstore().start_cleanup_task, 60)

            # Connect to the first host
            print(f"dialer (host_b) listening on {host_b.get_addrs()[0]}")
            maddr = multiaddr.Multiaddr(destination)
            info = info_from_p2p_addr(maddr)
            print(f"Second host connecting to peer: {info.peer_id}")

            await host_b.connect(info)
            try:
                await host_b.connect(info)
            except Exception as e:
                error_msg = str(e)
                if "unable to connect" in error_msg or "SwarmException" in error_msg:
                    print(f"\n❌ Cannot connect to peer: {info.peer_id}")
                    print(f" Address: {destination}")
                    print(f" Error: {error_msg}")
                    print(
                        "\n💡 Make sure the peer is running and the address is correct."
                    )
                    return
                else:
                    # Re-raise other exceptions
                    raise

            stream = await host_b.new_stream(info.peer_id, (IDENTIFY_PROTOCOL_ID,))

            try:
                print("Starting identify protocol...")
                response = await stream.read()

                # Read the response using the utility function
                from libp2p.utils.varint import read_length_prefixed_protobuf

                response = await read_length_prefixed_protobuf(
                    stream, use_varint_format
                )
                full_response = response

                await stream.close()
                identify_msg = Identify()
                identify_msg.ParseFromString(response)

                # Parse the response using the robust protocol-level function
                # This handles both old and new formats automatically
                identify_msg = parse_identify_response(full_response)
                print_identify_response(identify_msg)

            except Exception as e:
                print(f"Identify protocol error: {e}")
                error_msg = str(e)
                print(f"Identify protocol error: {error_msg}")

                # Check for specific format mismatch errors
                if "Error parsing message" in error_msg or "DecodeError" in error_msg:
                    print("\n" + "=" * 60)
                    print("FORMAT MISMATCH DETECTED!")
                    print("=" * 60)
                    if use_varint_format:
                        print(
                            "You are using length-prefixed format (default) but the "
                            "listener"
                        )
                        print("is using raw protobuf format.")
                        print(
                            "\nTo fix this, run the dialer with the --raw-format flag:"
                        )
                        print(f"identify-demo --raw-format -d {destination}")
                    else:
                        print("You are using raw protobuf format but the listener")
                        print("is using length-prefixed format (default).")
                        print(
                            "\nTo fix this, run the dialer without the --raw-format "
                            "flag:"
                        )
                        print(f"identify-demo -d {destination}")
                    print("=" * 60)
                else:
                    import traceback

                    traceback.print_exc()

            return

@@ -98,9 +225,12 @@ async def run(port: int, destination: str) -> None:
def main() -> None:
    description = """
    This program demonstrates the libp2p identify protocol.
    First run identify-demo -p <PORT>' to start a listener.
    First run 'identify-demo -p <PORT> [--raw-format]' to start a listener.
    Then run 'identify-demo <ANOTHER_PORT> -d <DESTINATION>'
    where <DESTINATION> is the multiaddress shown by the listener.

    Use --raw-format to send raw protobuf messages (old format) instead of
    length-prefixed protobuf messages (new format, default).
    """

    example_maddr = (
@@ -115,12 +245,35 @@ def main() -> None:
        type=str,
        help=f"destination multiaddr string, e.g. {example_maddr}",
    )
    parser.add_argument(
        "--raw-format",
        action="store_true",
        help=(
            "use raw protobuf format (old format) instead of "
            "length-prefixed (new format)"
        ),
    )

    args = parser.parse_args()

    # Determine format: length-prefixed (varint) by default,
    # raw protobuf (old format) when --raw-format is specified
    use_varint_format = not args.raw_format

    try:
        trio.run(run, *(args.port, args.destination))
        if args.destination:
            # Run in dialer mode
            trio.run(run, *(args.port, args.destination, use_varint_format))
        else:
            # Run in listener mode
            trio.run(run, *(args.port, args.destination, use_varint_format))
    except KeyboardInterrupt:
        pass
        print("\n👋 Goodbye!")
        logger.info("Application interrupted by user")
    except Exception as e:
        print(f"\n❌ Error: {str(e)}")
        logger.error("Error: %s", str(e))
        sys.exit(1)

if __name__ == "__main__":
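
Note: the only wire-level difference between the two formats this demo toggles is an unsigned-varint length prefix in front of the serialized Identify message. A minimal sketch of that framing, separate from the example above (the helper name is illustrative, not the library's):

    def frame_length_prefixed(payload: bytes) -> bytes:
        """Prepend the unsigned-varint length the length-prefixed format expects."""
        length = len(payload)
        prefix = bytearray()
        while True:
            byte = length & 0x7F
            length >>= 7
            if length:
                prefix.append(byte | 0x80)
            else:
                prefix.append(byte)
                break
        return bytes(prefix) + payload  # the raw format sends payload alone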

@@ -11,23 +11,26 @@ This example shows how to:

import logging

import multiaddr
import trio

from libp2p import (
    new_host,
)
from libp2p.abc import (
    INetStream,
)
from libp2p.crypto.secp256k1 import (
    create_new_key_pair,
)
from libp2p.custom_types import (
    TProtocol,
)
from libp2p.identity.identify import (
    identify_handler_for,
from libp2p.identity.identify.pb.identify_pb2 import (
    Identify,
)
from libp2p.identity.identify_push import (
    ID_PUSH,
    identify_push_handler_for,
    push_identify_to_peer,
)
from libp2p.peer.peerinfo import (
@@ -38,8 +41,145 @@ from libp2p.peer.peerinfo import (
logger = logging.getLogger(__name__)


def create_custom_identify_handler(host, host_name: str):
    """Create a custom identify handler that displays received information."""

    async def handle_identify(stream: INetStream) -> None:
        peer_id = stream.muxed_conn.peer_id
        print(f"\n🔍 {host_name} received identify request from peer: {peer_id}")

        # Get the standard identify response using the existing function
        from libp2p.identity.identify.identify import (
            _mk_identify_protobuf,
            _remote_address_to_multiaddr,
        )

        # Get observed address
        observed_multiaddr = None
        try:
            remote_address = stream.get_remote_address()
            if remote_address:
                observed_multiaddr = _remote_address_to_multiaddr(remote_address)
        except Exception:
            pass

        # Build the identify protobuf
        identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
        response_data = identify_msg.SerializeToString()

        print(f" 📋 {host_name} identify information:")
        if identify_msg.HasField("protocol_version"):
            print(f" Protocol Version: {identify_msg.protocol_version}")
        if identify_msg.HasField("agent_version"):
            print(f" Agent Version: {identify_msg.agent_version}")
        if identify_msg.HasField("public_key"):
            print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
        if identify_msg.listen_addrs:
            print(" Listen Addresses:")
            for addr_bytes in identify_msg.listen_addrs:
                addr = multiaddr.Multiaddr(addr_bytes)
                print(f" - {addr}")
        if identify_msg.protocols:
            print(" Supported Protocols:")
            for protocol in identify_msg.protocols:
                print(f" - {protocol}")

        # Send the response
        await stream.write(response_data)
        await stream.close()

    return handle_identify


def create_custom_identify_push_handler(host, host_name: str):
    """Create a custom identify/push handler that displays received information."""

    async def handle_identify_push(stream: INetStream) -> None:
        peer_id = stream.muxed_conn.peer_id
        print(f"\n📤 {host_name} received identify/push from peer: {peer_id}")

        try:
            # Read the identify message using the utility function
            from libp2p.utils.varint import read_length_prefixed_protobuf

            data = await read_length_prefixed_protobuf(stream, use_varint_format=True)

            # Parse the identify message
            identify_msg = Identify()
            identify_msg.ParseFromString(data)

            print(" 📋 Received identify information:")
            if identify_msg.HasField("protocol_version"):
                print(f" Protocol Version: {identify_msg.protocol_version}")
            if identify_msg.HasField("agent_version"):
                print(f" Agent Version: {identify_msg.agent_version}")
            if identify_msg.HasField("public_key"):
                print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
            if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
                observed_addr = multiaddr.Multiaddr(identify_msg.observed_addr)
                print(f" Observed Address: {observed_addr}")
            if identify_msg.listen_addrs:
                print(" Listen Addresses:")
                for addr_bytes in identify_msg.listen_addrs:
                    addr = multiaddr.Multiaddr(addr_bytes)
                    print(f" - {addr}")
            if identify_msg.protocols:
                print(" Supported Protocols:")
                for protocol in identify_msg.protocols:
                    print(f" - {protocol}")

            # Update the peerstore with the new information
            from libp2p.identity.identify_push.identify_push import (
                _update_peerstore_from_identify,
            )

            await _update_peerstore_from_identify(
                host.get_peerstore(), peer_id, identify_msg
            )

            print(f" ✅ {host_name} updated peerstore with new information")

        except Exception as e:
            print(f" ❌ Error processing identify/push: {e}")
        finally:
            await stream.close()

    return handle_identify_push


async def display_peerstore_info(host, host_name: str, peer_id, description: str):
    """Display peerstore information for a specific peer."""
    peerstore = host.get_peerstore()

    try:
        addrs = peerstore.addrs(peer_id)
    except Exception:
        addrs = []

    try:
        protocols = peerstore.get_protocols(peer_id)
    except Exception:
        protocols = []

    print(f"\n📚 {host_name} peerstore for {description}:")
    print(f" Peer ID: {peer_id}")
    if addrs:
        print(" Addresses:")
        for addr in addrs:
            print(f" - {addr}")
    else:
        print(" Addresses: None")

    if protocols:
        print(" Protocols:")
        for protocol in protocols:
            print(f" - {protocol}")
    else:
        print(" Protocols: None")


async def main() -> None:
    print("\n==== Starting Identify-Push Example ====\n")
    print("\n==== Starting Enhanced Identify-Push Example ====\n")

    # Create key pairs for the two hosts
    key_pair_1 = create_new_key_pair()
@@ -48,45 +188,57 @@ async def main() -> None:
    # Create the first host
    host_1 = new_host(key_pair=key_pair_1)

    # Set up the identify and identify/push handlers
    host_1.set_stream_handler(TProtocol("/ipfs/id/1.0.0"), identify_handler_for(host_1))
    host_1.set_stream_handler(ID_PUSH, identify_push_handler_for(host_1))
    # Set up custom identify and identify/push handlers
    host_1.set_stream_handler(
        TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_1, "Host 1")
    )
    host_1.set_stream_handler(
        ID_PUSH, create_custom_identify_push_handler(host_1, "Host 1")
    )

    # Create the second host
    host_2 = new_host(key_pair=key_pair_2)

    # Set up the identify and identify/push handlers
    host_2.set_stream_handler(TProtocol("/ipfs/id/1.0.0"), identify_handler_for(host_2))
    host_2.set_stream_handler(ID_PUSH, identify_push_handler_for(host_2))
    # Set up custom identify and identify/push handlers
    host_2.set_stream_handler(
        TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_2, "Host 2")
    )
    host_2.set_stream_handler(
        ID_PUSH, create_custom_identify_push_handler(host_2, "Host 2")
    )

    # Start listening on random ports using the run context manager
    import multiaddr

    listen_addr_1 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
    listen_addr_2 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")

    async with host_1.run([listen_addr_1]), host_2.run([listen_addr_2]):
    async with (
        host_1.run([listen_addr_1]),
        host_2.run([listen_addr_2]),
        trio.open_nursery() as nursery,
    ):
        # Start the peer-store cleanup task
        nursery.start_soon(host_1.get_peerstore().start_cleanup_task, 60)
        nursery.start_soon(host_2.get_peerstore().start_cleanup_task, 60)

        # Get the addresses of both hosts
        addr_1 = host_1.get_addrs()[0]
        logger.info(f"Host 1 listening on {addr_1}")
        print(f"Host 1 listening on {addr_1}")
        print(f"Peer ID: {host_1.get_id().pretty()}")

        addr_2 = host_2.get_addrs()[0]
        logger.info(f"Host 2 listening on {addr_2}")
        print(f"Host 2 listening on {addr_2}")
        print(f"Peer ID: {host_2.get_id().pretty()}")

        print("\nConnecting Host 2 to Host 1...")
        print("🏠 Host Configuration:")
        print(f" Host 1: {addr_1}")
        print(f" Host 1 Peer ID: {host_1.get_id().pretty()}")
        print(f" Host 2: {addr_2}")
        print(f" Host 2 Peer ID: {host_2.get_id().pretty()}")

        print("\n🔗 Connecting Host 2 to Host 1...")

        # Connect host_2 to host_1
        peer_info = info_from_p2p_addr(addr_1)
        await host_2.connect(peer_info)
        logger.info("Host 2 connected to Host 1")
        print("Host 2 successfully connected to Host 1")
        print("✅ Host 2 successfully connected to Host 1")

        # Run the identify protocol from host_2 to host_1
        # (so Host 1 learns Host 2's address)
        print("\n🔄 Running identify protocol (Host 2 → Host 1)...")
        from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID

        stream = await host_2.new_stream(host_1.get_id(), (IDENTIFY_PROTOCOL_ID,))
@@ -94,64 +246,58 @@ async def main() -> None:
        await stream.close()

        # Run the identify protocol from host_1 to host_2
        # (so Host 2 learns Host 1's address)
        print("\n🔄 Running identify protocol (Host 1 → Host 2)...")
        stream = await host_1.new_stream(host_2.get_id(), (IDENTIFY_PROTOCOL_ID,))
        response = await stream.read()
        await stream.close()

        # --- NEW CODE: Update Host 1's peerstore with Host 2's addresses ---
        from libp2p.identity.identify.pb.identify_pb2 import (
            Identify,
        )

        # Update Host 1's peerstore with Host 2's addresses
        identify_msg = Identify()
        identify_msg.ParseFromString(response)
        peerstore_1 = host_1.get_peerstore()
        peer_id_2 = host_2.get_id()
        for addr_bytes in identify_msg.listen_addrs:
            maddr = multiaddr.Multiaddr(addr_bytes)
            # TTL can be any positive int
            peerstore_1.add_addr(
                peer_id_2,
                maddr,
                ttl=3600,
            )
        # --- END NEW CODE ---
            peerstore_1.add_addr(peer_id_2, maddr, ttl=3600)

        # Now Host 1's peerstore should have Host 2's address
        peerstore_1 = host_1.get_peerstore()
        peer_id_2 = host_2.get_id()
        addrs_1_for_2 = peerstore_1.addrs(peer_id_2)
        logger.info(
            f"[DEBUG] Host 1 peerstore addresses for Host 2 before push: "
            f"{addrs_1_for_2}"
        )
        print(
            f"[DEBUG] Host 1 peerstore addresses for Host 2 before push: "
            f"{addrs_1_for_2}"
        # Display peerstore information before push
        await display_peerstore_info(
            host_1, "Host 1", peer_id_2, "Host 2 (before push)"
        )

        # Push identify information from host_1 to host_2
        logger.info("Host 1 pushing identify information to Host 2")
        print("\nHost 1 pushing identify information to Host 2...")
        print("\n📤 Host 1 pushing identify information to Host 2...")

        try:
            # Call push_identify_to_peer which now returns a boolean
            success = await push_identify_to_peer(host_1, host_2.get_id())

            if success:
                logger.info("Identify push completed successfully")
                print("Identify push completed successfully!")
                print("✅ Identify push completed successfully!")
            else:
                logger.warning("Identify push didn't complete successfully")
                print("\nWarning: Identify push didn't complete successfully")
                print("⚠️ Identify push didn't complete successfully")

        except Exception as e:
            logger.error(f"Error during identify push: {str(e)}")
            print(f"\nError during identify push: {str(e)}")
            print(f"❌ Error during identify push: {str(e)}")

        # Add this at the end of your async with block:
        await trio.sleep(0.5)  # Give background tasks time to finish
        # Give a moment for the identify/push processing to complete
        await trio.sleep(0.5)

        # Display peerstore information after push
        await display_peerstore_info(host_1, "Host 1", peer_id_2, "Host 2 (after push)")
        await display_peerstore_info(
            host_2, "Host 2", host_1.get_id(), "Host 1 (after push)"
        )

        # Give more time for background tasks to finish and connections to stabilize
        print("\n⏳ Waiting for background tasks to complete...")
        await trio.sleep(1.0)

        # Gracefully close connections to prevent connection errors
        print("🔌 Closing connections...")
        await host_2.disconnect(host_1.get_id())
        await trio.sleep(0.2)

        print("\n🎉 Example completed successfully!")


if __name__ == "__main__":

@ -41,6 +41,9 @@ from libp2p.identity.identify import (
|
||||
ID as ID_IDENTIFY,
|
||||
identify_handler_for,
|
||||
)
|
||||
from libp2p.identity.identify.identify import (
|
||||
_remote_address_to_multiaddr,
|
||||
)
|
||||
from libp2p.identity.identify.pb.identify_pb2 import (
|
||||
Identify,
|
||||
)
|
||||
@ -57,18 +60,46 @@ from libp2p.peer.peerinfo import (
|
||||
logger = logging.getLogger("libp2p.identity.identify-push-example")
|
||||
|
||||
|
||||
def custom_identify_push_handler_for(host):
|
||||
def custom_identify_push_handler_for(host, use_varint_format: bool = True):
|
||||
"""
|
||||
Create a custom handler for the identify/push protocol that logs and prints
|
||||
the identity information received from the dialer.
|
||||
|
||||
Args:
|
||||
host: The libp2p host
|
||||
use_varint_format: If True, expect length-prefixed format; if False, expect
|
||||
raw protobuf
|
||||
|
||||
"""
|
||||
|
||||
async def handle_identify_push(stream: INetStream) -> None:
|
||||
peer_id = stream.muxed_conn.peer_id
|
||||
|
||||
# Get remote address information
|
||||
try:
|
||||
# Read the identify message from the stream
|
||||
data = await stream.read()
|
||||
remote_address = stream.get_remote_address()
|
||||
if remote_address:
|
||||
observed_multiaddr = _remote_address_to_multiaddr(remote_address)
|
||||
logger.info(
|
||||
"Connection from remote peer %s, address: %s, multiaddr: %s",
|
||||
peer_id,
|
||||
remote_address,
|
||||
observed_multiaddr,
|
||||
)
|
||||
print(f"\n🔗 Received identify/push request from peer: {peer_id}")
|
||||
# Add the peer ID to create a complete multiaddr
|
||||
complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
|
||||
print(f" Remote address: {complete_multiaddr}")
|
||||
except Exception as e:
|
||||
logger.error("Error getting remote address: %s", e)
|
||||
print(f"\n🔗 Received identify/push request from peer: {peer_id}")
|
||||
|
||||
try:
|
||||
# Use the utility function to read the protobuf message
|
||||
from libp2p.utils.varint import read_length_prefixed_protobuf
|
||||
|
||||
data = await read_length_prefixed_protobuf(stream, use_varint_format)
|
||||
|
||||
identify_msg = Identify()
|
||||
identify_msg.ParseFromString(data)
|
||||
|
||||
@ -117,11 +148,41 @@ def custom_identify_push_handler_for(host):
|
||||
await _update_peerstore_from_identify(peerstore, peer_id, identify_msg)
|
||||
|
||||
logger.info("Successfully processed identify/push from peer %s", peer_id)
|
||||
print(f"\nSuccessfully processed identify/push from peer {peer_id}")
|
||||
print(f"✅ Successfully processed identify/push from peer {peer_id}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error("Error processing identify/push from %s: %s", peer_id, e)
|
||||
print(f"\nError processing identify/push from {peer_id}: {e}")
|
||||
error_msg = str(e)
|
||||
logger.error(
|
||||
"Error processing identify/push from %s: %s", peer_id, error_msg
|
||||
)
|
||||
print(f"\nError processing identify/push from {peer_id}: {error_msg}")
|
||||
|
||||
# Check for specific format mismatch errors
|
||||
if (
|
||||
"Error parsing message" in error_msg
|
||||
or "DecodeError" in error_msg
|
||||
or "ParseFromString" in error_msg
|
||||
):
|
||||
print("\n" + "=" * 60)
|
||||
print("FORMAT MISMATCH DETECTED!")
|
||||
print("=" * 60)
|
||||
if use_varint_format:
|
||||
print(
|
||||
"You are using length-prefixed format (default) but the "
|
||||
"dialer is using raw protobuf format."
|
||||
)
|
||||
print("\nTo fix this, run the dialer with the --raw-format flag:")
|
||||
print(
|
||||
"identify-push-listener-dialer-demo --raw-format -d <ADDRESS>"
|
||||
)
|
||||
else:
|
||||
print("You are using raw protobuf format but the dialer")
|
||||
print("is using length-prefixed format (default).")
|
||||
print(
|
||||
"\nTo fix this, run the dialer without the --raw-format flag:"
|
||||
)
|
||||
print("identify-push-listener-dialer-demo -d <ADDRESS>")
|
||||
print("=" * 60)
|
||||
finally:
|
||||
# Close the stream after processing
|
||||
await stream.close()
|
||||
@ -129,9 +190,15 @@ def custom_identify_push_handler_for(host):
|
||||
return handle_identify_push
|
||||
|
||||
|
||||
async def run_listener(port: int) -> None:
|
||||
async def run_listener(
|
||||
port: int, use_varint_format: bool = True, raw_format_flag: bool = False
|
||||
) -> None:
|
||||
"""Run a host in listener mode."""
|
||||
print(f"\n==== Starting Identify-Push Listener on port {port} ====\n")
|
||||
format_name = "length-prefixed" if use_varint_format else "raw protobuf"
|
||||
print(
|
||||
f"\n==== Starting Identify-Push Listener on port {port} "
|
||||
f"(using {format_name} format) ====\n"
|
||||
)
|
||||
|
||||
# Create key pair for the listener
|
||||
key_pair = create_new_key_pair()
|
||||
@ -139,35 +206,58 @@ async def run_listener(port: int) -> None:
|
||||
# Create the listener host
|
||||
host = new_host(key_pair=key_pair)
|
||||
|
||||
# Set up the identify and identify/push handlers
|
||||
host.set_stream_handler(ID_IDENTIFY, identify_handler_for(host))
|
||||
host.set_stream_handler(ID_IDENTIFY_PUSH, custom_identify_push_handler_for(host))
|
||||
# Set up the identify and identify/push handlers with specified format
|
||||
host.set_stream_handler(
|
||||
ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
|
||||
)
|
||||
host.set_stream_handler(
|
||||
ID_IDENTIFY_PUSH,
|
||||
custom_identify_push_handler_for(host, use_varint_format=use_varint_format),
|
||||
)
|
||||
|
||||
# Start listening
|
||||
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
|
||||
|
||||
async with host.run([listen_addr]):
|
||||
addr = host.get_addrs()[0]
|
||||
logger.info("Listener host ready!")
|
||||
print("Listener host ready!")
|
||||
try:
|
||||
async with host.run([listen_addr]):
|
||||
addr = host.get_addrs()[0]
|
||||
logger.info("Listener host ready!")
|
||||
print("Listener host ready!")
|
||||
|
||||
logger.info(f"Listening on: {addr}")
|
||||
print(f"Listening on: {addr}")
|
||||
logger.info(f"Listening on: {addr}")
|
||||
print(f"Listening on: {addr}")
|
||||
|
||||
logger.info(f"Peer ID: {host.get_id().pretty()}")
|
||||
print(f"Peer ID: {host.get_id().pretty()}")
|
||||
logger.info(f"Peer ID: {host.get_id().pretty()}")
|
||||
print(f"Peer ID: {host.get_id().pretty()}")
|
||||
|
||||
print("\nRun dialer with command:")
|
||||
print(f"identify-push-listener-dialer-demo -d {addr}")
|
||||
print("\nWaiting for incoming connections... (Ctrl+C to exit)")
|
||||
print("\nRun dialer with command:")
|
||||
if raw_format_flag:
|
||||
print(f"identify-push-listener-dialer-demo -d {addr} --raw-format")
|
||||
else:
|
||||
print(f"identify-push-listener-dialer-demo -d {addr}")
|
||||
print("\nWaiting for incoming identify/push requests... (Ctrl+C to exit)")
|
||||
|
||||
# Keep running until interrupted
|
||||
await trio.sleep_forever()
|
||||
# Keep running until interrupted
|
||||
try:
|
||||
await trio.sleep_forever()
|
||||
except KeyboardInterrupt:
|
||||
print("\n🛑 Shutting down listener...")
|
||||
logger.info("Listener interrupted by user")
|
||||
return
|
||||
except Exception as e:
|
||||
logger.error(f"Listener error: {e}")
|
||||
raise
|
||||
|
||||
|
||||
async def run_dialer(port: int, destination: str) -> None:
|
||||
async def run_dialer(
|
||||
port: int, destination: str, use_varint_format: bool = True
|
||||
) -> None:
|
||||
"""Run a host in dialer mode that connects to a listener."""
|
||||
print(f"\n==== Starting Identify-Push Dialer on port {port} ====\n")
|
||||
format_name = "length-prefixed" if use_varint_format else "raw protobuf"
|
||||
print(
|
||||
f"\n==== Starting Identify-Push Dialer on port {port} "
|
||||
f"(using {format_name} format) ====\n"
|
||||
)
|
||||
|
||||
# Create key pair for the dialer
|
||||
key_pair = create_new_key_pair()
|
||||
@ -175,9 +265,14 @@ async def run_dialer(port: int, destination: str) -> None:
|
||||
# Create the dialer host
|
||||
host = new_host(key_pair=key_pair)
|
||||
|
||||
# Set up the identify and identify/push handlers
|
||||
host.set_stream_handler(ID_IDENTIFY, identify_handler_for(host))
|
||||
host.set_stream_handler(ID_IDENTIFY_PUSH, identify_push_handler_for(host))
|
||||
# Set up the identify and identify/push handlers with specified format
|
||||
host.set_stream_handler(
|
||||
ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
|
||||
)
|
||||
host.set_stream_handler(
|
||||
ID_IDENTIFY_PUSH,
|
||||
identify_push_handler_for(host, use_varint_format=use_varint_format),
|
||||
)
|
||||
|
||||
# Start listening on a different port
|
||||
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
|
||||
@ -198,7 +293,9 @@ async def run_dialer(port: int, destination: str) -> None:
|
||||
try:
|
||||
await host.connect(peer_info)
|
||||
logger.info("Successfully connected to listener!")
|
||||
print("Successfully connected to listener!")
|
||||
print("✅ Successfully connected to listener!")
|
||||
print(f" Connected to: {peer_info.peer_id}")
|
||||
print(f" Full address: {destination}")
|
||||
|
||||
# Push identify information to the listener
|
||||
logger.info("Pushing identify information to listener...")
|
||||
@ -206,11 +303,13 @@ async def run_dialer(port: int, destination: str) -> None:
|
||||
|
||||
try:
|
||||
# Call push_identify_to_peer which returns a boolean
|
||||
success = await push_identify_to_peer(host, peer_info.peer_id)
|
||||
success = await push_identify_to_peer(
|
||||
host, peer_info.peer_id, use_varint_format=use_varint_format
|
||||
)
|
||||
|
||||
if success:
|
||||
logger.info("Identify push completed successfully!")
|
||||
print("Identify push completed successfully!")
|
||||
print("✅ Identify push completed successfully!")
|
||||
|
||||
logger.info("Example completed successfully!")
|
||||
print("\nExample completed successfully!")
|
||||
@ -221,17 +320,57 @@ async def run_dialer(port: int, destination: str) -> None:
|
||||
logger.warning("Example completed with warnings.")
|
||||
print("Example completed with warnings.")
|
||||
except Exception as e:
|
||||
logger.error(f"Error during identify push: {str(e)}")
|
||||
print(f"\nError during identify push: {str(e)}")
|
||||
error_msg = str(e)
|
||||
logger.error(f"Error during identify push: {error_msg}")
|
||||
print(f"\nError during identify push: {error_msg}")
|
||||
|
||||
# Check for specific format mismatch errors
|
||||
if (
|
||||
"Error parsing message" in error_msg
|
||||
or "DecodeError" in error_msg
|
||||
or "ParseFromString" in error_msg
|
||||
):
|
||||
print("\n" + "=" * 60)
|
||||
print("FORMAT MISMATCH DETECTED!")
|
||||
print("=" * 60)
|
||||
if use_varint_format:
|
||||
print(
|
||||
"You are using length-prefixed format (default) but the "
|
||||
"listener is using raw protobuf format."
|
||||
)
|
||||
print(
|
||||
"\nTo fix this, run the dialer with the --raw-format flag:"
|
||||
)
|
||||
print(
|
||||
f"identify-push-listener-dialer-demo --raw-format -d "
|
||||
f"{destination}"
|
||||
)
|
||||
else:
|
||||
print("You are using raw protobuf format but the listener")
|
||||
print("is using length-prefixed format (default).")
|
||||
print(
|
||||
"\nTo fix this, run the dialer without the --raw-format "
|
||||
"flag:"
|
||||
)
|
||||
print(f"identify-push-listener-dialer-demo -d {destination}")
|
||||
print("=" * 60)
|
||||
|
||||
logger.error("Example completed with errors.")
|
||||
print("Example completed with errors.")
|
||||
# Continue execution despite the push error
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error during dialer operation: {str(e)}")
|
||||
print(f"\nError during dialer operation: {str(e)}")
|
||||
raise
|
||||
error_msg = str(e)
|
||||
if "unable to connect" in error_msg or "SwarmException" in error_msg:
|
||||
print(f"\n❌ Cannot connect to peer: {peer_info.peer_id}")
|
||||
print(f" Address: {destination}")
|
||||
print(f" Error: {error_msg}")
|
||||
print("\n💡 Make sure the peer is running and the address is correct.")
|
||||
return
|
||||
else:
|
||||
logger.error(f"Error during dialer operation: {error_msg}")
|
||||
print(f"\nError during dialer operation: {error_msg}")
|
||||
raise
|
||||
|
||||
|
||||
def main() -> None:
@ -240,34 +379,55 @@ def main() -> None:
    This program demonstrates the libp2p identify/push protocol.
    Without arguments, it runs as a listener on a random port.
    With the -d parameter, it runs as a dialer on a random port.

    Port 0 (default) means the OS will automatically assign an available port.
    This prevents port conflicts when running multiple instances.

    Use --raw-format to send raw protobuf messages (old format) instead of
    length-prefixed protobuf messages (new format, default).
    """

    example = (
        "/ip4/127.0.0.1/tcp/8000/p2p/QmQn4SwGkDZKkUEpBRBvTmheQycxAHJUNmVEnjA2v1qe8Q"
    )

    parser = argparse.ArgumentParser(description=description)
    parser.add_argument("-p", "--port", default=0, type=int, help="source port number")
    parser.add_argument(
        "-p",
        "--port",
        default=0,
        type=int,
        help="source port number (0 = random available port)",
    )
    parser.add_argument(
        "-d",
        "--destination",
        type=str,
        help=f"destination multiaddr string, e.g. {example}",
        help="destination multiaddr string",
    )
    parser.add_argument(
        "--raw-format",
        action="store_true",
        help=(
            "use raw protobuf format (old format) instead of "
            "length-prefixed (new format)"
        ),
    )

    args = parser.parse_args()

    # Determine format: raw format if --raw-format is specified, otherwise
    # length-prefixed
    use_varint_format = not args.raw_format

    try:
        if args.destination:
            # Run in dialer mode with random available port if not specified
            trio.run(run_dialer, args.port, args.destination)
            trio.run(run_dialer, args.port, args.destination, use_varint_format)
        else:
            # Run in listener mode with random available port if not specified
            trio.run(run_listener, args.port)
            trio.run(run_listener, args.port, use_varint_format, args.raw_format)
    except KeyboardInterrupt:
        print("\nInterrupted by user")
        logger.info("Interrupted by user")
        print("\n👋 Goodbye!")
        logger.info("Application interrupted by user")
    except Exception as e:
        print(f"\nError: {str(e)}")
        print(f"\n❌ Error: {str(e)}")
        logger.error("Error: %s", str(e))
        sys.exit(1)


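The --raw-format switch only changes how the identify protobuf is framed on the wire. A minimal sketch of the two framings, reusing the encode_uvarint/decode_varint_with_size helpers that this diff imports from libp2p.utils (the two function names below are illustrative, not part of the demo):

    from libp2p.utils import decode_varint_with_size, varint

    def frame_identify(payload: bytes, use_varint_format: bool) -> bytes:
        # New format: unsigned-varint length prefix followed by the protobuf bytes.
        # Old format: the raw protobuf bytes with no prefix.
        if use_varint_format:
            return varint.encode_uvarint(len(payload)) + payload
        return payload

    def unframe_identify(data: bytes) -> bytes:
        # Mirrors parse_identify_response's fallback below: try the
        # length-prefixed form first, then fall back to raw protobuf.
        length, varint_size = decode_varint_with_size(data)
        if varint_size > 0 and 0 < length <= len(data) - varint_size:
            return data[varint_size : varint_size + length]
        return data

A listener expecting one framing while the dialer sends the other is exactly what produces the DecodeError handled in run_dialer above.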
@ -151,7 +151,10 @@ async def run_node(
    host = new_host(key_pair=key_pair)
    listen_addr = Multiaddr(f"/ip4/127.0.0.1/tcp/{port}")

    async with host.run(listen_addrs=[listen_addr]):
    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
        # Start the peer-store cleanup task
        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

        peer_id = host.get_id().pretty()
        addr_str = f"/ip4/127.0.0.1/tcp/{port}/p2p/{peer_id}"
        await connect_to_bootstrap_nodes(host, bootstrap_nodes)

@ -46,7 +46,10 @@ async def run(port: int) -> None:

    logger.info("Starting peer Discovery")
    host = new_host(key_pair=key_pair, enable_mDNS=True)
    async with host.run(listen_addrs=[listen_addr]):
    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
        # Start the peer-store cleanup task
        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

        await trio.sleep_forever()



@ -59,6 +59,9 @@ async def run(port: int, destination: str) -> None:
    host = new_host(listen_addrs=[listen_addr])

    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
        # Start the peer-store cleanup task
        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

        if not destination:
            host.set_stream_handler(PING_PROTOCOL_ID, handle_ping)


@ -144,6 +144,9 @@ async def run(topic: str, destination: str | None, port: int | None) -> None:
    pubsub = Pubsub(host, gossipsub)
    termination_event = trio.Event()  # Event to signal termination
    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
        # Start the peer-store cleanup task
        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

        logger.info(f"Node started with peer ID: {host.get_id()}")
        logger.info(f"Listening on: {listen_addr}")
        logger.info("Initializing PubSub and GossipSub...")

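Each example now pairs host.run() with a trio nursery so the peerstore's periodic cleanup runs in the background. A minimal standalone sketch of that pattern (the listen address and 60-second interval are illustrative):

    import trio
    from multiaddr import Multiaddr

    from libp2p import new_host

    async def serve() -> None:
        host = new_host()
        listen_addr = Multiaddr("/ip4/127.0.0.1/tcp/0")
        # host.run() brings the listeners up; the nursery owns the background
        # cleanup task so it is cancelled cleanly when the host shuts down.
        async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
            nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
            await trio.sleep_forever()

    trio.run(serve)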
@ -251,6 +251,7 @@ def new_host(
    muxer_preference: Literal["YAMUX", "MPLEX"] | None = None,
    listen_addrs: Sequence[multiaddr.Multiaddr] | None = None,
    enable_mDNS: bool = False,
    bootstrap: list[str] | None = None,
    negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
) -> IHost:
    """
@ -264,6 +265,7 @@ def new_host(
    :param muxer_preference: optional explicit muxer preference
    :param listen_addrs: optional list of multiaddrs to listen on
    :param enable_mDNS: whether to enable mDNS discovery
    :param bootstrap: optional list of bootstrap peer addresses as strings
    :return: return a host instance
    """
    swarm = new_swarm(
@ -276,7 +278,7 @@ def new_host(
    )

    if disc_opt is not None:
        return RoutedHost(swarm, disc_opt, enable_mDNS)
    return BasicHost(network=swarm, enable_mDNS=enable_mDNS, negotitate_timeout=negotiate_timeout)
        return RoutedHost(swarm, disc_opt, enable_mDNS, bootstrap)
    return BasicHost(network=swarm, enable_mDNS=enable_mDNS, bootstrap=bootstrap, negotitate_timeout=negotiate_timeout)

__version__ = __version("libp2p")

264 libp2p/abc.py
@ -16,6 +16,7 @@ from typing import (
    TYPE_CHECKING,
    Any,
    AsyncContextManager,
    Optional,
)

from multiaddr import (
@ -41,11 +42,15 @@ from libp2p.io.abc import (
from libp2p.peer.id import (
    ID,
)
import libp2p.peer.pb.peer_record_pb2 as pb
from libp2p.peer.peerinfo import (
    PeerInfo,
)

if TYPE_CHECKING:
    from libp2p.peer.envelope import Envelope
    from libp2p.peer.peer_record import PeerRecord
    from libp2p.protocol_muxer.multiselect import Multiselect
    from libp2p.pubsub.pubsub import (
        Pubsub,
    )
@ -352,6 +357,14 @@ class INetConn(Closer):
        :return: A tuple containing instances of INetStream.
        """

    @abstractmethod
    def get_transport_addresses(self) -> list[Multiaddr]:
        """
        Retrieve the transport addresses used by this connection.

        :return: A list of multiaddresses used by the transport.
        """


# -------------------------- peermetadata interface.py --------------------------

@ -488,6 +501,71 @@ class IAddrBook(ABC):
        """


# ------------------ certified-addr-book interface.py ---------------------
class ICertifiedAddrBook(ABC):
    """
    Interface for a certified address book.

    Provides methods for managing signed peer records.
    """

    @abstractmethod
    def consume_peer_record(self, envelope: "Envelope", ttl: int) -> bool:
        """
        Accept and store a signed PeerRecord, unless it's older than
        the one already stored.

        This function:
        - Extracts the peer ID and sequence number from the envelope
        - Rejects the record if it's older (lower seq)
        - Updates the stored peer record and replaces associated
          addresses if accepted

        Parameters
        ----------
        envelope:
            Signed envelope containing a PeerRecord.
        ttl:
            Time-to-live for the included multiaddrs (in seconds).

        """

    @abstractmethod
    def get_peer_record(self, peer_id: ID) -> Optional["Envelope"]:
        """
        Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
        and is still relevant.

        First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
        Then it checks whether the peer has valid, unexpired addresses before
        returning the associated envelope.

        Parameters
        ----------
        peer_id : ID
            The peer to look up.

        """

    @abstractmethod
    def maybe_delete_peer_record(self, peer_id: ID) -> None:
        """
        Delete the signed peer record for a peer if it has no known
        (non-expired) addresses.

        This is a garbage collection mechanism: if all addresses for a peer have expired
        or been cleared, there's no point holding onto its signed `Envelope`.

        Parameters
        ----------
        peer_id : ID
            The peer whose record we may delete.

        """


# -------------------------- keybook interface.py --------------------------


@ -753,7 +831,9 @@ class IProtoBook(ABC):
# -------------------------- peerstore interface.py --------------------------


class IPeerStore(IPeerMetadata, IAddrBook, IKeyBook, IMetrics, IProtoBook):
class IPeerStore(
    IPeerMetadata, IAddrBook, ICertifiedAddrBook, IKeyBook, IMetrics, IProtoBook
):
    """
    Interface for a peer store.

@ -888,7 +968,65 @@ class IPeerStore(IPeerMetadata, IAddrBook, IKeyBook, IMetrics, IProtoBook):

        """

    # --------CERTIFIED-ADDR-BOOK----------

    @abstractmethod
    def consume_peer_record(self, envelope: "Envelope", ttl: int) -> bool:
        """
        Accept and store a signed PeerRecord, unless it's older
        than the one already stored.

        This function:
        - Extracts the peer ID and sequence number from the envelope
        - Rejects the record if it's older (lower seq)
        - Updates the stored peer record and replaces associated addresses if accepted

        Parameters
        ----------
        envelope:
            Signed envelope containing a PeerRecord.
        ttl:
            Time-to-live for the included multiaddrs (in seconds).

        """

    @abstractmethod
    def get_peer_record(self, peer_id: ID) -> Optional["Envelope"]:
        """
        Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
        and is still relevant.

        First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
        Then it checks whether the peer has valid, unexpired addresses before
        returning the associated envelope.

        Parameters
        ----------
        peer_id : ID
            The peer to look up.

        """

    @abstractmethod
    def maybe_delete_peer_record(self, peer_id: ID) -> None:
        """
        Delete the signed peer record for a peer if it has no
        known (non-expired) addresses.

        This is a garbage collection mechanism: if all addresses for a peer have expired
        or been cleared, there's no point holding onto its signed `Envelope`.

        Parameters
        ----------
        peer_id : ID
            The peer whose record we may delete.

        """
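The seq-based accept/reject rule that consume_peer_record describes fits in a few lines. A minimal sketch of that logic (the records dict and StoredRecord holder are hypothetical, purely for illustration; envelope.record() and the seq field come from the interfaces in this diff):

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class StoredRecord:
        # Hypothetical holder pairing a stored envelope with its record's seq.
        seq: int
        envelope: Any

    def consume_record(records: dict[str, StoredRecord], peer_id: str, envelope: Any) -> bool:
        record = envelope.record()  # lazily decodes the embedded PeerRecord
        existing = records.get(peer_id)
        # "Older" means a lower seq: reject it and keep what we already hold.
        if existing is not None and record.seq < existing.seq:
            return False
        records[peer_id] = StoredRecord(seq=record.seq, envelope=envelope)
        return True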

    # --------KEY-BOOK----------

    @abstractmethod
    def pubkey(self, peer_id: ID) -> PublicKey:
        """
@ -1197,6 +1335,10 @@ class IPeerStore(IPeerMetadata, IAddrBook, IKeyBook, IMetrics, IProtoBook):
    def clear_peerdata(self, peer_id: ID) -> None:
        """clear_peerdata"""

    @abstractmethod
    async def start_cleanup_task(self, cleanup_interval: int = 3600) -> None:
        """Start periodic cleanup of expired peer records and addresses."""


# -------------------------- listener interface.py --------------------------

@ -1545,9 +1687,8 @@ class IHost(ABC):

        """

    # FIXME: Replace with correct return type
    @abstractmethod
    def get_mux(self) -> Any:
    def get_mux(self) -> "Multiselect":
        """
        Retrieve the muxer instance for the host.

@ -1685,6 +1826,121 @@ class IHost(ABC):
        """


# -------------------------- peer-record interface.py --------------------------
class IPeerRecord(ABC):
    """
    Interface for a libp2p PeerRecord object.

    A PeerRecord contains metadata about a peer such as its ID, public addresses,
    and a strictly increasing sequence number for versioning.

    PeerRecords are used in signed routing Envelopes for secure peer data propagation.
    """

    @abstractmethod
    def domain(self) -> str:
        """
        Return the domain string for this record type.

        Used in envelope validation to distinguish different record types.
        """

    @abstractmethod
    def codec(self) -> bytes:
        """
        Return a binary codec prefix that identifies the PeerRecord type.

        This is prepended in signed envelopes to allow type-safe decoding.
        """

    @abstractmethod
    def to_protobuf(self) -> pb.PeerRecord:
        """
        Convert this PeerRecord into its Protobuf representation.

        :raises ValueError: if serialization fails (e.g., invalid peer ID).
        :return: A populated protobuf `PeerRecord` message.
        """

    @abstractmethod
    def marshal_record(self) -> bytes:
        """
        Serialize this PeerRecord into a byte string.

        Used when signing or sealing the record in an envelope.

        :raises ValueError: if protobuf serialization fails.
        :return: Byte-encoded PeerRecord.
        """

    @abstractmethod
    def equal(self, other: object) -> bool:
        """
        Compare this PeerRecord with another for equality.

        Two PeerRecords are considered equal if:
        - They have the same `peer_id`
        - Their `seq` numbers match
        - Their address lists are identical and ordered

        :param other: Object to compare with.
        :return: True if equal, False otherwise.
        """


# -------------------------- envelope interface.py --------------------------
class IEnvelope(ABC):
    @abstractmethod
    def marshal_envelope(self) -> bytes:
        """
        Serialize this Envelope into its protobuf wire format.

        Converts all envelope fields into a `pb.Envelope` protobuf message
        and returns the serialized bytes.

        :return: Serialized envelope as bytes.
        """

    @abstractmethod
    def validate(self, domain: str) -> None:
        """
        Verify the envelope's signature within the given domain scope.

        This ensures that the envelope has not been tampered with
        and was signed under the correct usage context.

        :param domain: Domain string that contextualizes the signature.
        :raises ValueError: If the signature is invalid.
        """

    @abstractmethod
    def record(self) -> "PeerRecord":
        """
        Lazily decode and return the embedded PeerRecord.

        This method unmarshals the payload bytes into a `PeerRecord` instance,
        using the registered codec to identify the type. The decoded result
        is cached for future use.

        :return: Decoded PeerRecord object.
        :raises Exception: If decoding fails or payload type is unsupported.
        """

    @abstractmethod
    def equal(self, other: Any) -> bool:
        """
        Compare this Envelope with another for structural equality.

        Two envelopes are considered equal if:
        - They have the same public key
        - The payload type and payload bytes match
        - Their signatures are identical

        :param other: Another object to compare.
        :return: True if equal, False otherwise.
        """


# -------------------------- peerdata interface.py --------------------------


@ -2158,6 +2414,7 @@ class IMultiselectMuxer(ABC):

    """

    @abstractmethod
    def get_protocols(self) -> tuple[TProtocol | None, ...]:
        """
        Retrieve the protocols for which handlers have been registered.
@ -2168,7 +2425,6 @@ class IMultiselectMuxer(ABC):
        A tuple of registered protocol names.

        """
        return tuple(self.handlers.keys())

    @abstractmethod
    async def negotiate(

@ -13,7 +13,7 @@ _sym_db = _symbol_database.Default()



DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/crypto/pb/crypto.proto\x12\tcrypto.pb\"?\n\tPublicKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c\"@\n\nPrivateKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c*G\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x12\x0c\n\x08\x45\x43\x43_P256\x10\x04')
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/crypto/pb/crypto.proto\x12\tcrypto.pb\"?\n\tPublicKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c\"@\n\nPrivateKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c*S\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x12\x0c\n\x08\x45\x43\x43_P256\x10\x04\x12\n\n\x06X25519\x10\x05')

_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.crypto.pb.crypto_pb2', globals())
@ -21,7 +21,7 @@ if _descriptor._USE_C_DESCRIPTORS == False:

    DESCRIPTOR._options = None
    _KEYTYPE._serialized_start=175
    _KEYTYPE._serialized_end=246
    _KEYTYPE._serialized_end=258
    _PUBLICKEY._serialized_start=44
    _PUBLICKEY._serialized_end=107
    _PRIVATEKEY._serialized_start=109

@ -28,6 +28,7 @@ class _KeyTypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTy
    Secp256k1: _KeyType.ValueType  # 2
    ECDSA: _KeyType.ValueType  # 3
    ECC_P256: _KeyType.ValueType  # 4
    X25519: _KeyType.ValueType  # 5

class KeyType(_KeyType, metaclass=_KeyTypeEnumTypeWrapper): ...

@ -36,6 +37,7 @@ Ed25519: KeyType.ValueType  # 1
Secp256k1: KeyType.ValueType  # 2
ECDSA: KeyType.ValueType  # 3
ECC_P256: KeyType.ValueType  # 4
X25519: KeyType.ValueType  # 5
global___KeyType = KeyType

@typing.final

5 libp2p/discovery/bootstrap/__init__.py Normal file
@ -0,0 +1,5 @@
"""Bootstrap peer discovery module for py-libp2p."""

from .bootstrap import BootstrapDiscovery

__all__ = ["BootstrapDiscovery"]
94 libp2p/discovery/bootstrap/bootstrap.py Normal file
@ -0,0 +1,94 @@
import logging

from multiaddr import Multiaddr
from multiaddr.resolvers import DNSResolver

from libp2p.abc import ID, INetworkService, PeerInfo
from libp2p.discovery.bootstrap.utils import validate_bootstrap_addresses
from libp2p.discovery.events.peerDiscovery import peerDiscovery
from libp2p.peer.peerinfo import info_from_p2p_addr

logger = logging.getLogger("libp2p.discovery.bootstrap")
resolver = DNSResolver()


class BootstrapDiscovery:
    """
    Bootstrap-based peer discovery for py-libp2p.
    Connects to predefined bootstrap peers and adds them to the peerstore.
    """

    def __init__(self, swarm: INetworkService, bootstrap_addrs: list[str]):
        self.swarm = swarm
        self.peerstore = swarm.peerstore
        self.bootstrap_addrs = bootstrap_addrs or []
        self.discovered_peers: set[str] = set()

    async def start(self) -> None:
        """Process bootstrap addresses and emit peer discovery events."""
        logger.debug(
            f"Starting bootstrap discovery with "
            f"{len(self.bootstrap_addrs)} bootstrap addresses"
        )

        # Validate and filter bootstrap addresses
        self.bootstrap_addrs = validate_bootstrap_addresses(self.bootstrap_addrs)

        for addr_str in self.bootstrap_addrs:
            try:
                await self._process_bootstrap_addr(addr_str)
            except Exception as e:
                logger.debug(f"Failed to process bootstrap address {addr_str}: {e}")

    def stop(self) -> None:
        """Clean up bootstrap discovery resources."""
        logger.debug("Stopping bootstrap discovery")
        self.discovered_peers.clear()

    async def _process_bootstrap_addr(self, addr_str: str) -> None:
        """Convert string address to PeerInfo and add to peerstore."""
        try:
            multiaddr = Multiaddr(addr_str)
        except Exception as e:
            logger.debug(f"Invalid multiaddr format '{addr_str}': {e}")
            return
        if self.is_dns_addr(multiaddr):
            resolved_addrs = await resolver.resolve(multiaddr)
            peer_id_str = multiaddr.get_peer_id()
            if peer_id_str is None:
                logger.warning(f"Missing peer ID in DNS address: {addr_str}")
                return
            peer_id = ID.from_base58(peer_id_str)
            addrs = [addr for addr in resolved_addrs]
            if not addrs:
                logger.warning(f"No addresses resolved for DNS address: {addr_str}")
                return
            peer_info = PeerInfo(peer_id, addrs)
            self.add_addr(peer_info)
        else:
            self.add_addr(info_from_p2p_addr(multiaddr))

    def is_dns_addr(self, addr: Multiaddr) -> bool:
        """Check if the address is a DNS address."""
        return any(protocol.name == "dnsaddr" for protocol in addr.protocols())

    def add_addr(self, peer_info: PeerInfo) -> None:
        """Add a peer to the peerstore and emit a discovery event."""
        # Skip if it's our own peer
        if peer_info.peer_id == self.swarm.get_peer_id():
            logger.debug(f"Skipping own peer ID: {peer_info.peer_id}")
            return

        # Always add addresses to peerstore (allows multiple addresses for same peer)
        self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)

        # Only emit discovery event if this is the first time we see this peer
        peer_id_str = str(peer_info.peer_id)
        if peer_id_str not in self.discovered_peers:
            # Track discovered peer
            self.discovered_peers.add(peer_id_str)
            # Emit peer discovery event
            peerDiscovery.emit_peer_discovered(peer_info)
            logger.debug(f"Peer discovered: {peer_info.peer_id}")
        else:
            logger.debug(f"Additional addresses added for peer: {peer_info.peer_id}")
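With new_host() now accepting a bootstrap list (see the libp2p/__init__.py change above), wiring BootstrapDiscovery in from application code is a one-liner. A minimal sketch, reusing the example multiaddr from the demo above as an illustrative bootstrap entry and assuming the peerstore's peer_ids() accessor:

    import trio
    from multiaddr import Multiaddr

    from libp2p import new_host

    BOOTSTRAP = [
        # Illustrative entry only; substitute the bootstrap peers you actually run.
        "/ip4/127.0.0.1/tcp/8000/p2p/QmQn4SwGkDZKkUEpBRBvTmheQycxAHJUNmVEnjA2v1qe8Q",
    ]

    async def main() -> None:
        # BasicHost starts BootstrapDiscovery inside host.run() when a
        # bootstrap list is supplied (see the basic_host changes below).
        host = new_host(bootstrap=BOOTSTRAP)
        async with host.run(listen_addrs=[Multiaddr("/ip4/127.0.0.1/tcp/0")]):
            await trio.sleep(5)  # give discovery a moment to populate the peerstore
            print(host.get_peerstore().peer_ids())

    trio.run(main)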
51 libp2p/discovery/bootstrap/utils.py Normal file
@ -0,0 +1,51 @@
"""Utility functions for bootstrap discovery."""

import logging

from multiaddr import Multiaddr

from libp2p.peer.peerinfo import InvalidAddrError, PeerInfo, info_from_p2p_addr

logger = logging.getLogger("libp2p.discovery.bootstrap.utils")


def validate_bootstrap_addresses(addrs: list[str]) -> list[str]:
    """
    Validate and filter bootstrap addresses.

    :param addrs: List of bootstrap address strings
    :return: List of valid bootstrap addresses
    """
    valid_addrs = []

    for addr_str in addrs:
        try:
            # Try to parse as multiaddr
            multiaddr = Multiaddr(addr_str)

            # Try to extract peer info (this validates the p2p component)
            info_from_p2p_addr(multiaddr)

            valid_addrs.append(addr_str)
            logger.debug(f"Valid bootstrap address: {addr_str}")

        except (InvalidAddrError, ValueError, Exception) as e:
            logger.warning(f"Invalid bootstrap address '{addr_str}': {e}")
            continue

    return valid_addrs


def parse_bootstrap_peer_info(addr_str: str) -> PeerInfo | None:
    """
    Parse a bootstrap address string into PeerInfo.

    :param addr_str: Bootstrap address string
    :return: PeerInfo object or None if parsing fails
    """
    try:
        multiaddr = Multiaddr(addr_str)
        return info_from_p2p_addr(multiaddr)
    except Exception as e:
        logger.error(f"Failed to parse bootstrap address '{addr_str}': {e}")
        return None
@ -29,6 +29,7 @@ from libp2p.custom_types import (
    StreamHandlerFn,
    TProtocol,
)
from libp2p.discovery.bootstrap.bootstrap import BootstrapDiscovery
from libp2p.discovery.mdns.mdns import MDNSDiscovery
from libp2p.host.defaults import (
    get_default_protocols,
@ -92,6 +93,7 @@ class BasicHost(IHost):
        self,
        network: INetworkService,
        enable_mDNS: bool = False,
        bootstrap: list[str] | None = None,
        default_protocols: Optional["OrderedDict[TProtocol, StreamHandlerFn]"] = None,
        negotitate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
    ) -> None:
@ -105,6 +107,8 @@ class BasicHost(IHost):
        self.multiselect_client = MultiselectClient()
        if enable_mDNS:
            self.mDNS = MDNSDiscovery(network)
        if bootstrap:
            self.bootstrap = BootstrapDiscovery(network, bootstrap)

    def get_id(self) -> ID:
        """
@ -172,11 +176,16 @@ class BasicHost(IHost):
            if hasattr(self, "mDNS") and self.mDNS is not None:
                logger.debug("Starting mDNS Discovery")
                self.mDNS.start()
            if hasattr(self, "bootstrap") and self.bootstrap is not None:
                logger.debug("Starting Bootstrap Discovery")
                await self.bootstrap.start()
            try:
                yield
            finally:
                if hasattr(self, "mDNS") and self.mDNS is not None:
                    self.mDNS.stop()
                if hasattr(self, "bootstrap") and self.bootstrap is not None:
                    self.bootstrap.stop()

        return _run()


@ -26,5 +26,8 @@ if TYPE_CHECKING:

def get_default_protocols(host: IHost) -> "OrderedDict[TProtocol, StreamHandlerFn]":
    return OrderedDict(
        ((IdentifyID, identify_handler_for(host)), (PingID, handle_ping))
        (
            (IdentifyID, identify_handler_for(host, use_varint_format=True)),
            (PingID, handle_ping),
        )
    )

@ -19,9 +19,13 @@ class RoutedHost(BasicHost):
    _router: IPeerRouting

    def __init__(
        self, network: INetworkService, router: IPeerRouting, enable_mDNS: bool = False
        self,
        network: INetworkService,
        router: IPeerRouting,
        enable_mDNS: bool = False,
        bootstrap: list[str] | None = None,
    ):
        super().__init__(network, enable_mDNS)
        super().__init__(network, enable_mDNS, bootstrap)
        self._router = router

    async def connect(self, peer_info: PeerInfo) -> None:

@ -15,8 +15,12 @@ from libp2p.custom_types import (
from libp2p.network.stream.exceptions import (
    StreamClosed,
)
from libp2p.peer.envelope import seal_record
from libp2p.peer.peer_record import PeerRecord
from libp2p.utils import (
    decode_varint_with_size,
    get_agent_version,
    varint,
)

from .pb.identify_pb2 import (
@ -59,7 +63,12 @@ def _mk_identify_protobuf(
) -> Identify:
    public_key = host.get_public_key()
    laddrs = host.get_addrs()
    protocols = host.get_mux().get_protocols()
    protocols = tuple(str(p) for p in host.get_mux().get_protocols() if p is not None)

    # Create a signed peer-record for the remote peer
    record = PeerRecord(host.get_id(), host.get_addrs())
    envelope = seal_record(record, host.get_private_key())
    protobuf = envelope.marshal_envelope()

    observed_addr = observed_multiaddr.to_bytes() if observed_multiaddr else b""
    return Identify(
@ -69,10 +78,51 @@ def _mk_identify_protobuf(
        listen_addrs=map(_multiaddr_to_bytes, laddrs),
        observed_addr=observed_addr,
        protocols=protocols,
        signedPeerRecord=protobuf,
    )


def identify_handler_for(host: IHost) -> StreamHandlerFn:
def parse_identify_response(response: bytes) -> Identify:
    """
    Parse an identify response that could be either:
    - Old format: raw protobuf
    - New format: length-prefixed protobuf

    This function provides backward and forward compatibility.
    """
    # Try new format first: length-prefixed protobuf
    if len(response) >= 1:
        length, varint_size = decode_varint_with_size(response)
        if varint_size > 0 and length > 0 and varint_size + length <= len(response):
            protobuf_data = response[varint_size : varint_size + length]
            try:
                identify_response = Identify()
                identify_response.ParseFromString(protobuf_data)
                # Sanity check: must have agent_version (protocol_version is optional)
                if identify_response.agent_version:
                    logger.debug(
                        "Parsed length-prefixed identify response (new format)"
                    )
                    return identify_response
            except Exception:
                pass  # Fall through to old format

    # Fall back to old format: raw protobuf
    try:
        identify_response = Identify()
        identify_response.ParseFromString(response)
        logger.debug("Parsed raw protobuf identify response (old format)")
        return identify_response
    except Exception as e:
        logger.error(f"Failed to parse identify response: {e}")
        logger.error(f"Response length: {len(response)}")
        logger.error(f"Response hex: {response.hex()}")
        raise


def identify_handler_for(
    host: IHost, use_varint_format: bool = True
) -> StreamHandlerFn:
    async def handle_identify(stream: INetStream) -> None:
        # get observed address from ``stream``
        peer_id = (
@ -100,7 +150,21 @@ def identify_handler_for(host: IHost) -> StreamHandlerFn:
        response = protobuf.SerializeToString()

        try:
            await stream.write(response)
            if use_varint_format:
                # Send length-prefixed protobuf message (new format)
                await stream.write(varint.encode_uvarint(len(response)))
                await stream.write(response)
                logger.debug(
                    "Sent new format (length-prefixed) identify response to %s",
                    peer_id,
                )
            else:
                # Send raw protobuf message (old format for backward compatibility)
                await stream.write(response)
                logger.debug(
                    "Sent old format (raw protobuf) identify response to %s",
                    peer_id,
                )
        except StreamClosed:
            logger.debug("Failed to respond to %s request: stream closed", peer_id)
        else:

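Because parse_identify_response accepts both framings, a dialer can exercise the compatibility path directly. A minimal sketch, assuming the function is exposed from libp2p.identity.identify.identify (the module this hunk edits):

    from libp2p.identity.identify.identify import parse_identify_response
    from libp2p.identity.identify.pb.identify_pb2 import Identify
    from libp2p.utils import varint

    msg = Identify(agent_version="py-libp2p/demo")
    raw = msg.SerializeToString()

    # Old framing: raw protobuf bytes.
    assert parse_identify_response(raw).agent_version == "py-libp2p/demo"

    # New framing: uvarint length prefix followed by the same bytes.
    framed = varint.encode_uvarint(len(raw)) + raw
    assert parse_identify_response(framed).agent_version == "py-libp2p/demo"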
@ -9,4 +9,5 @@ message Identify {
  repeated bytes listen_addrs = 2;
  optional bytes observed_addr = 4;
  repeated string protocols = 3;
  optional bytes signedPeerRecord = 8;
}

@ -13,7 +13,7 @@ _sym_db = _symbol_database.Default()



DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\x8f\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t')
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\xa9\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t\x12\x18\n\x10signedPeerRecord\x18\x08 \x01(\x0c')

_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.identity.identify.pb.identify_pb2', globals())
@ -21,5 +21,5 @@ if _descriptor._USE_C_DESCRIPTORS == False:

    DESCRIPTOR._options = None
    _IDENTIFY._serialized_start=60
    _IDENTIFY._serialized_end=203
    _IDENTIFY._serialized_end=229
# @@protoc_insertion_point(module_scope)

@ -22,10 +22,12 @@ class Identify(google.protobuf.message.Message):
    LISTEN_ADDRS_FIELD_NUMBER: builtins.int
    OBSERVED_ADDR_FIELD_NUMBER: builtins.int
    PROTOCOLS_FIELD_NUMBER: builtins.int
    SIGNEDPEERRECORD_FIELD_NUMBER: builtins.int
    protocol_version: builtins.str
    agent_version: builtins.str
    public_key: builtins.bytes
    observed_addr: builtins.bytes
    signedPeerRecord: builtins.bytes
    @property
    def listen_addrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...
    @property
@ -39,8 +41,9 @@ class Identify(google.protobuf.message.Message):
        listen_addrs: collections.abc.Iterable[builtins.bytes] | None = ...,
        observed_addr: builtins.bytes | None = ...,
        protocols: collections.abc.Iterable[builtins.str] | None = ...,
        signedPeerRecord: builtins.bytes | None = ...,
    ) -> None: ...
    def HasField(self, field_name: typing.Literal["agent_version", b"agent_version", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "public_key", b"public_key"]) -> builtins.bool: ...
    def ClearField(self, field_name: typing.Literal["agent_version", b"agent_version", "listen_addrs", b"listen_addrs", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "protocols", b"protocols", "public_key", b"public_key"]) -> None: ...
    def HasField(self, field_name: typing.Literal["agent_version", b"agent_version", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "public_key", b"public_key", "signedPeerRecord", b"signedPeerRecord"]) -> builtins.bool: ...
    def ClearField(self, field_name: typing.Literal["agent_version", b"agent_version", "listen_addrs", b"listen_addrs", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "protocols", b"protocols", "public_key", b"public_key", "signedPeerRecord", b"signedPeerRecord"]) -> None: ...

global___Identify = Identify

@ -20,11 +20,16 @@ from libp2p.custom_types import (
from libp2p.network.stream.exceptions import (
    StreamClosed,
)
from libp2p.peer.envelope import consume_envelope
from libp2p.peer.id import (
    ID,
)
from libp2p.utils import (
    get_agent_version,
    varint,
)
from libp2p.utils.varint import (
    read_length_prefixed_protobuf,
)

from ..identify.identify import (
@ -43,20 +48,28 @@ AGENT_VERSION = get_agent_version()
CONCURRENCY_LIMIT = 10


def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
def identify_push_handler_for(
    host: IHost, use_varint_format: bool = True
) -> StreamHandlerFn:
    """
    Create a handler for the identify/push protocol.

    This handler receives pushed identify messages from remote peers and updates
    the local peerstore with the new information.

    Args:
        host: The libp2p host.
        use_varint_format: True=length-prefixed, False=raw protobuf.

    """

    async def handle_identify_push(stream: INetStream) -> None:
        peer_id = stream.muxed_conn.peer_id

        try:
            # Read the identify message from the stream
            data = await stream.read()
            # Use the utility function to read the protobuf message
            data = await read_length_prefixed_protobuf(stream, use_varint_format)

            identify_msg = Identify()
            identify_msg.ParseFromString(data)

@ -66,6 +79,11 @@ def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
            )

            logger.debug("Successfully processed identify/push from peer %s", peer_id)

            # Send acknowledgment to indicate successful processing
            # This ensures the sender knows the message was received before closing
            await stream.write(b"OK")

        except StreamClosed:
            logger.debug(
                "Stream closed while processing identify/push from %s", peer_id
@ -74,7 +92,10 @@ def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
            logger.error("Error processing identify/push from %s: %s", peer_id, e)
        finally:
            # Close the stream after processing
            await stream.close()
            try:
                await stream.close()
            except Exception:
                pass  # Ignore errors when closing

    return handle_identify_push

@ -120,6 +141,19 @@ async def _update_peerstore_from_identify(
    except Exception as e:
        logger.error("Error updating protocols for peer %s: %s", peer_id, e)

    if identify_msg.HasField("signedPeerRecord"):
        try:
            # Convert the signed peer record (Envelope) from protobuf bytes
            envelope, _ = consume_envelope(
                identify_msg.signedPeerRecord, "libp2p-peer-record"
            )
            # Use a default TTL of 2 hours (7200 seconds)
            if not peerstore.consume_peer_record(envelope, 7200):
                logger.error("Updating Certified-Addr-Book was unsuccessful")
        except Exception as e:
            logger.error(
                "Error updating the certified addr book for peer %s: %s", peer_id, e
            )
    # Update observed address if present
    if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
        try:
@ -137,6 +171,7 @@ async def push_identify_to_peer(
    peer_id: ID,
    observed_multiaddr: Multiaddr | None = None,
    limit: trio.Semaphore = trio.Semaphore(CONCURRENCY_LIMIT),
    use_varint_format: bool = True,
) -> bool:
    """
    Push an identify message to a specific peer.
@ -144,10 +179,15 @@ async def push_identify_to_peer(
    This function opens a stream to the peer using the identify/push protocol,
    sends the identify message, and closes the stream.

    Returns
    -------
    bool
        True if the push was successful, False otherwise.
    Args:
        host: The libp2p host.
        peer_id: The peer ID to push to.
        observed_multiaddr: The observed multiaddress (optional).
        limit: Semaphore for concurrency control.
        use_varint_format: True=length-prefixed, False=raw protobuf.

    Returns:
        bool: True if the push was successful, False otherwise.

    """
    async with limit:
@ -159,10 +199,28 @@ async def push_identify_to_peer(
        identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
        response = identify_msg.SerializeToString()

        # Send the identify message
        await stream.write(response)
        if use_varint_format:
            # Send length-prefixed identify message
            await stream.write(varint.encode_uvarint(len(response)))
            await stream.write(response)
        else:
            # Send raw protobuf message
            await stream.write(response)

        # Close the stream
        # Wait for acknowledgment from the receiver with timeout
        # This ensures the message was processed before closing
        try:
            with trio.move_on_after(1.0):  # 1 second timeout
                ack = await stream.read(2)  # Read "OK" acknowledgment
                if ack != b"OK":
                    logger.warning(
                        "Unexpected acknowledgment from peer %s: %s", peer_id, ack
                    )
        except Exception as e:
            logger.debug("No acknowledgment received from peer %s: %s", peer_id, e)
            # Continue anyway, as the message might have been processed

        # Close the stream after acknowledgment (or timeout)
        await stream.close()

        logger.debug("Successfully pushed identify to peer %s", peer_id)
@ -176,18 +234,36 @@ async def push_identify_to_peers(
    host: IHost,
    peer_ids: set[ID] | None = None,
    observed_multiaddr: Multiaddr | None = None,
    use_varint_format: bool = True,
) -> None:
    """
    Push an identify message to multiple peers in parallel.

    If peer_ids is None, push to all connected peers.

    Args:
        host: The libp2p host.
        peer_ids: Set of peer IDs to push to (if None, push to all connected peers).
        observed_multiaddr: The observed multiaddress (optional).
        use_varint_format: True=length-prefixed, False=raw protobuf.

    """
    if peer_ids is None:
        # Get all connected peers
        peer_ids = set(host.get_connected_peers())

    # Create a single shared semaphore for concurrency control
    limit = trio.Semaphore(CONCURRENCY_LIMIT)

    # Push to each peer in parallel using a trio.Nursery
    # limiting concurrent connections to 10
    # limiting concurrent connections to CONCURRENCY_LIMIT
    async with trio.open_nursery() as nursery:
        for peer_id in peer_ids:
            nursery.start_soon(push_identify_to_peer, host, peer_id, observed_multiaddr)
            nursery.start_soon(
                push_identify_to_peer,
                host,
                peer_id,
                observed_multiaddr,
                limit,
                use_varint_format,
            )

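A minimal usage sketch of the push path (the module path is assumed from the relative imports above; connection setup is elided):

    from libp2p.identity.identify_push.identify_push import (
        push_identify_to_peer,
        push_identify_to_peers,
    )

    async def refresh_peers(host) -> None:
        # Push to individual peers and check the boolean result...
        for peer_id in host.get_connected_peers():
            ok = await push_identify_to_peer(host, peer_id, use_varint_format=True)
            if not ok:
                print(f"identify/push to {peer_id} failed")
        # ...or fan out to every connected peer; the shared semaphore caps
        # concurrent pushes at CONCURRENCY_LIMIT (10).
        await push_identify_to_peers(host)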
@ -2,10 +2,10 @@
# Generated by the protocol buffer compiler.  DO NOT EDIT!
# source: libp2p/kad_dht/pb/kademlia.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()
@ -15,19 +15,19 @@ _sym_db = _symbol_database.Default()

DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/kad_dht/pb/kademlia.proto\":\n\x06Record\x12\x0b\n\x03key\x18\x01 \x01(\x0c\x12\r\n\x05value\x18\x02 \x01(\x0c\x12\x14\n\x0ctimeReceived\x18\x05 \x01(\t\"\xca\x03\n\x07Message\x12\"\n\x04type\x18\x01 \x01(\x0e\x32\x14.Message.MessageType\x12\x17\n\x0f\x63lusterLevelRaw\x18\n \x01(\x05\x12\x0b\n\x03key\x18\x02 \x01(\x0c\x12\x17\n\x06record\x18\x03 \x01(\x0b\x32\x07.Record\x12\"\n\x0b\x63loserPeers\x18\x08 \x03(\x0b\x32\r.Message.Peer\x12$\n\rproviderPeers\x18\t \x03(\x0b\x32\r.Message.Peer\x1aN\n\x04Peer\x12\n\n\x02id\x18\x01 \x01(\x0c\x12\r\n\x05\x61\x64\x64rs\x18\x02 \x03(\x0c\x12+\n\nconnection\x18\x03 \x01(\x0e\x32\x17.Message.ConnectionType\"i\n\x0bMessageType\x12\r\n\tPUT_VALUE\x10\x00\x12\r\n\tGET_VALUE\x10\x01\x12\x10\n\x0c\x41\x44\x44_PROVIDER\x10\x02\x12\x11\n\rGET_PROVIDERS\x10\x03\x12\r\n\tFIND_NODE\x10\x04\x12\x08\n\x04PING\x10\x05\"W\n\x0e\x43onnectionType\x12\x11\n\rNOT_CONNECTED\x10\x00\x12\r\n\tCONNECTED\x10\x01\x12\x0f\n\x0b\x43\x41N_CONNECT\x10\x02\x12\x12\n\x0e\x43\x41NNOT_CONNECT\x10\x03\x62\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.kad_dht.pb.kademlia_pb2', _globals)
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.kad_dht.pb.kademlia_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:

    DESCRIPTOR._options = None
    _globals['_RECORD']._serialized_start=36
    _globals['_RECORD']._serialized_end=94
    _globals['_MESSAGE']._serialized_start=97
    _globals['_MESSAGE']._serialized_end=555
    _globals['_MESSAGE_PEER']._serialized_start=281
    _globals['_MESSAGE_PEER']._serialized_end=359
    _globals['_MESSAGE_MESSAGETYPE']._serialized_start=361
    _globals['_MESSAGE_MESSAGETYPE']._serialized_end=466
    _globals['_MESSAGE_CONNECTIONTYPE']._serialized_start=468
    _globals['_MESSAGE_CONNECTIONTYPE']._serialized_end=555
    _RECORD._serialized_start=36
    _RECORD._serialized_end=94
    _MESSAGE._serialized_start=97
    _MESSAGE._serialized_end=555
    _MESSAGE_PEER._serialized_start=281
    _MESSAGE_PEER._serialized_end=359
    _MESSAGE_MESSAGETYPE._serialized_start=361
    _MESSAGE_MESSAGETYPE._serialized_end=466
    _MESSAGE_CONNECTIONTYPE._serialized_start=468
    _MESSAGE_CONNECTIONTYPE._serialized_end=555
# @@protoc_insertion_point(module_scope)

@ -3,6 +3,7 @@ from typing import (
    TYPE_CHECKING,
)

from multiaddr import Multiaddr
import trio

from libp2p.abc import (
@ -147,6 +148,24 @@ class SwarmConn(INetConn):
    def get_streams(self) -> tuple[NetStream, ...]:
        return tuple(self.streams)

    def get_transport_addresses(self) -> list[Multiaddr]:
        """
        Retrieve the transport addresses used by this connection.

        Returns
        -------
        list[Multiaddr]
            A list of multiaddresses used by the transport.

        """
        # Return the addresses from the peerstore for this peer
        try:
            peer_id = self.muxed_conn.peer_id
            return self.swarm.peerstore.addrs(peer_id)
        except Exception as e:
            logging.warning(f"Error getting transport addresses: {e}")
            return []

    def remove_stream(self, stream: NetStream) -> None:
        if stream not in self.streams:
            return

271 libp2p/peer/envelope.py Normal file
@ -0,0 +1,271 @@
from typing import Any, cast

from libp2p.crypto.ed25519 import Ed25519PublicKey
from libp2p.crypto.keys import PrivateKey, PublicKey
from libp2p.crypto.rsa import RSAPublicKey
from libp2p.crypto.secp256k1 import Secp256k1PublicKey
import libp2p.peer.pb.crypto_pb2 as crypto_pb
import libp2p.peer.pb.envelope_pb2 as pb
import libp2p.peer.pb.peer_record_pb2 as record_pb
from libp2p.peer.peer_record import (
    PeerRecord,
    peer_record_from_protobuf,
    unmarshal_record,
)
from libp2p.utils.varint import encode_uvarint

ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_CODEC = b"\x03\x01"


class Envelope:
    """
    A signed wrapper around a serialized libp2p record.

    Envelopes are cryptographically signed by the author's private key
    and are scoped to a specific 'domain' to prevent cross-protocol replay.

    Attributes:
        public_key: The public key that can verify the envelope's signature.
        payload_type: A multicodec code identifying the type of payload inside.
        raw_payload: The raw serialized record data.
        signature: Signature over the domain-scoped payload content.

    """

    public_key: PublicKey
    payload_type: bytes
    raw_payload: bytes
    signature: bytes

    _cached_record: PeerRecord | None = None
    _unmarshal_error: Exception | None = None

    def __init__(
        self,
        public_key: PublicKey,
        payload_type: bytes,
        raw_payload: bytes,
        signature: bytes,
    ):
        self.public_key = public_key
        self.payload_type = payload_type
        self.raw_payload = raw_payload
        self.signature = signature

    def marshal_envelope(self) -> bytes:
        """
        Serialize this Envelope into its protobuf wire format.

        Converts all envelope fields into a `pb.Envelope` protobuf message
        and returns the serialized bytes.

        :return: Serialized envelope as bytes.
        """
        pb_env = pb.Envelope(
            public_key=pub_key_to_protobuf(self.public_key),
            payload_type=self.payload_type,
            payload=self.raw_payload,
            signature=self.signature,
        )
        return pb_env.SerializeToString()

    def validate(self, domain: str) -> None:
        """
        Verify the envelope's signature within the given domain scope.

        This ensures that the envelope has not been tampered with
        and was signed under the correct usage context.

        :param domain: Domain string that contextualizes the signature.
        :raises ValueError: If the signature is invalid.
        """
        unsigned = make_unsigned(domain, self.payload_type, self.raw_payload)
        if not self.public_key.verify(unsigned, self.signature):
            raise ValueError("Invalid envelope signature")

    def record(self) -> PeerRecord:
        """
        Lazily decode and return the embedded PeerRecord.

        This method unmarshals the payload bytes into a `PeerRecord` instance,
        using the registered codec to identify the type. The decoded result
        is cached for future use.

        :return: Decoded PeerRecord object.
        :raises Exception: If decoding fails or payload type is unsupported.
        """
        if self._cached_record is not None:
            return self._cached_record

        try:
            if self.payload_type != PEER_RECORD_CODEC:
                raise ValueError("Unsupported payload type in envelope")
            msg = record_pb.PeerRecord()
            msg.ParseFromString(self.raw_payload)

            self._cached_record = peer_record_from_protobuf(msg)
            return self._cached_record
        except Exception as e:
            self._unmarshal_error = e
            raise

    def equal(self, other: Any) -> bool:
        """
        Compare this Envelope with another for structural equality.

        Two envelopes are considered equal if:
        - They have the same public key
        - The payload type and payload bytes match
        - Their signatures are identical

        :param other: Another object to compare.
        :return: True if equal, False otherwise.
        """
        if isinstance(other, Envelope):
            return (
                self.public_key == other.public_key
                and self.payload_type == other.payload_type
                and self.signature == other.signature
                and self.raw_payload == other.raw_payload
            )
        return False


def pub_key_to_protobuf(pub_key: PublicKey) -> crypto_pb.PublicKey:
    """
    Convert a Python PublicKey object to its protobuf equivalent.

    :param pub_key: A libp2p-compatible PublicKey instance.
    :return: Protobuf PublicKey message.
    """
    internal_key_type = pub_key.get_type()
    key_type = cast(crypto_pb.KeyType, internal_key_type.value)
    data = pub_key.to_bytes()
    protobuf_key = crypto_pb.PublicKey(Type=key_type, Data=data)
    return protobuf_key


def pub_key_from_protobuf(pb_key: crypto_pb.PublicKey) -> PublicKey:
    """
    Parse a protobuf PublicKey message into a native libp2p PublicKey.

    Supports Ed25519, RSA, and Secp256k1 key types.

    :param pb_key: Protobuf representation of a public key.
    :return: Parsed PublicKey object.
    :raises ValueError: If the key type is unrecognized.
    """
    if pb_key.Type == crypto_pb.KeyType.Ed25519:
        return Ed25519PublicKey.from_bytes(pb_key.Data)
    elif pb_key.Type == crypto_pb.KeyType.RSA:
        return RSAPublicKey.from_bytes(pb_key.Data)
    elif pb_key.Type == crypto_pb.KeyType.Secp256k1:
        return Secp256k1PublicKey.from_bytes(pb_key.Data)
    # libp2p.crypto.ecdsa not implemented
    else:
        raise ValueError(f"Unknown key type: {pb_key.Type}")


def seal_record(record: PeerRecord, private_key: PrivateKey) -> Envelope:
    """
    Create and sign a new Envelope from a PeerRecord.

    The record is serialized and signed in the scope of its domain and codec.
    The result is a self-contained, verifiable Envelope.

    :param record: A PeerRecord to encapsulate.
    :param private_key: The signer's private key.
    :return: A signed Envelope instance.
    """
    payload = record.marshal_record()

    unsigned = make_unsigned(record.domain(), record.codec(), payload)
    signature = private_key.sign(unsigned)

    return Envelope(
        public_key=private_key.get_public_key(),
        payload_type=record.codec(),
        raw_payload=payload,
        signature=signature,
    )


def consume_envelope(data: bytes, domain: str) -> tuple[Envelope, PeerRecord]:
    """
    Parse, validate, and decode an Envelope from bytes.

    Validates the envelope's signature using the given domain and decodes
    the inner payload into a PeerRecord.

    :param data: Serialized envelope bytes.
    :param domain: Domain string to verify signature against.
    :return: Tuple of (Envelope, PeerRecord).
    :raises ValueError: If signature validation or decoding fails.
    """
    env = unmarshal_envelope(data)
    env.validate(domain)
    record = env.record()
    return env, record


def unmarshal_envelope(data: bytes) -> Envelope:
    """
    Deserialize an Envelope from its wire format.

    This parses the protobuf fields without verifying the signature.

    :param data: Serialized envelope bytes.
    :return: Parsed Envelope object.
    :raises DecodeError: If protobuf parsing fails.
    """
    pb_env = pb.Envelope()
    pb_env.ParseFromString(data)
    pk = pub_key_from_protobuf(pb_env.public_key)

    return Envelope(
        public_key=pk,
        payload_type=pb_env.payload_type,
        raw_payload=pb_env.payload,
        signature=pb_env.signature,
    )


def make_unsigned(domain: str, payload_type: bytes, payload: bytes) -> bytes:
    """
    Build a byte buffer to be signed for an Envelope.

    The unsigned byte structure is:
        varint(len(domain)) || domain ||
        varint(len(payload_type)) || payload_type ||
        varint(len(payload)) || payload

    This is the exact input used during signing and verification.

    :param domain: Domain string for signature scoping.
    :param payload_type: Identifier for the type of payload.
    :param payload: Raw serialized payload bytes.
    :return: Byte buffer to be signed or verified.
    """
    fields = [domain.encode(), payload_type, payload]
    buf = bytearray()

    for field in fields:
        buf.extend(encode_uvarint(len(field)))
        buf.extend(field)

    return bytes(buf)


def debug_dump_envelope(env: Envelope) -> None:
    print("\n=== Envelope ===")
    print(f"Payload Type: {env.payload_type!r}")
    print(f"Signature: {env.signature.hex()} ({len(env.signature)} bytes)")
    print(f"Raw Payload: {env.raw_payload.hex()} ({len(env.raw_payload)} bytes)")

    try:
        peer_record = unmarshal_record(env.raw_payload)
        print("\n=== Parsed PeerRecord ===")
        print(peer_record)
    except Exception as e:
        print("Failed to parse PeerRecord:", e)
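End to end, seal_record and consume_envelope are inverses. A minimal round-trip sketch, assuming the Ed25519 create_new_key_pair helper and ID.from_pubkey from elsewhere in py-libp2p:

    from multiaddr import Multiaddr

    from libp2p.crypto.ed25519 import create_new_key_pair
    from libp2p.peer.envelope import ENVELOPE_DOMAIN, consume_envelope, seal_record
    from libp2p.peer.id import ID
    from libp2p.peer.peer_record import PeerRecord

    key_pair = create_new_key_pair()
    peer_id = ID.from_pubkey(key_pair.public_key)

    record = PeerRecord(peer_id, [Multiaddr("/ip4/127.0.0.1/tcp/8000")])
    env = seal_record(record, key_pair.private_key)

    # The receiving side verifies the signature and decodes the record in one step.
    env2, record2 = consume_envelope(env.marshal_envelope(), ENVELOPE_DOMAIN)
    assert env2.equal(env) and record2.equal(record)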
@ -1,3 +1,4 @@
import functools
import hashlib

import base58
@ -36,25 +37,23 @@ if ENABLE_INLINING:

class ID:
    _bytes: bytes
    _xor_id: int | None = None
    _b58_str: str | None = None

    def __init__(self, peer_id_bytes: bytes) -> None:
        self._bytes = peer_id_bytes

    @property
    @functools.cached_property
    def xor_id(self) -> int:
        if not self._xor_id:
            self._xor_id = int(sha256_digest(self._bytes).hex(), 16)
        return self._xor_id
        return int(sha256_digest(self._bytes).hex(), 16)

    @functools.cached_property
    def base58(self) -> str:
        return base58.b58encode(self._bytes).decode()

    def to_bytes(self) -> bytes:
        return self._bytes

    def to_base58(self) -> str:
        if not self._b58_str:
            self._b58_str = base58.b58encode(self._bytes).decode()
        return self._b58_str
        return self.base58

    def __repr__(self) -> str:
        return f"<libp2p.peer.id.ID ({self!s})>"

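The change above swaps hand-rolled sentinel caching for functools.cached_property, which computes the value on first access and stores it on the instance. A minimal sketch of the difference (expensive() is an illustrative placeholder):

    import functools

    def expensive() -> int:
        return 42  # stands in for sha256/base58 work

    class Manual:
        _value: int | None = None

        @property
        def value(self) -> int:
            # Hand-rolled memoization: sentinel check on every access.
            if self._value is None:
                self._value = expensive()
            return self._value

    class Cached:
        @functools.cached_property
        def value(self) -> int:
            # Runs once; the result is stored in the instance __dict__,
            # so later lookups never re-enter this function.
            return expensive()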
22
libp2p/peer/pb/crypto.proto
Normal file
22
libp2p/peer/pb/crypto.proto
Normal file
@ -0,0 +1,22 @@
|
||||
syntax = "proto3";

package libp2p.peer.pb.crypto;

option go_package = "github.com/libp2p/go-libp2p/core/crypto/pb";

enum KeyType {
  RSA = 0;
  Ed25519 = 1;
  Secp256k1 = 2;
  ECDSA = 3;
}

message PublicKey {
  KeyType Type = 1;
  bytes Data = 2;
}

message PrivateKey {
  KeyType Type = 1;
  bytes Data = 2;
}
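# Sketch (assumption): round-tripping a PublicKey with the generated bindings
# that follow; the 32 zero bytes stand in for a real Ed25519 key.
#
#     from libp2p.peer.pb import crypto_pb2
#
#     pk = crypto_pb2.PublicKey(Type=crypto_pb2.Ed25519, Data=b"\x00" * 32)
#     assert crypto_pb2.PublicKey.FromString(pk.SerializeToString()) == pk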
31  libp2p/peer/pb/crypto_pb2.py  Normal file
@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/crypto.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1blibp2p/peer/pb/crypto.proto\x12\x15libp2p.peer.pb.crypto\"G\n\tPublicKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c\"H\n\nPrivateKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c*9\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x42,Z*github.com/libp2p/go-libp2p/core/crypto/pbb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.crypto_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
  _globals['DESCRIPTOR']._options = None
  _globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/crypto/pb'
  _globals['_KEYTYPE']._serialized_start=201
  _globals['_KEYTYPE']._serialized_end=258
  _globals['_PUBLICKEY']._serialized_start=54
  _globals['_PUBLICKEY']._serialized_end=125
  _globals['_PRIVATEKEY']._serialized_start=127
  _globals['_PRIVATEKEY']._serialized_end=199
# @@protoc_insertion_point(module_scope)
33  libp2p/peer/pb/crypto_pb2.pyi  Normal file
@ -0,0 +1,33 @@
from google.protobuf.internal import enum_type_wrapper as _enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Optional as _Optional, Union as _Union

DESCRIPTOR: _descriptor.FileDescriptor

class KeyType(int, metaclass=_enum_type_wrapper.EnumTypeWrapper):
    __slots__ = ()
    RSA: _ClassVar[KeyType]
    Ed25519: _ClassVar[KeyType]
    Secp256k1: _ClassVar[KeyType]
    ECDSA: _ClassVar[KeyType]
RSA: KeyType
Ed25519: KeyType
Secp256k1: KeyType
ECDSA: KeyType

class PublicKey(_message.Message):
    __slots__ = ("Type", "Data")
    TYPE_FIELD_NUMBER: _ClassVar[int]
    DATA_FIELD_NUMBER: _ClassVar[int]
    Type: KeyType
    Data: bytes
    def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...

class PrivateKey(_message.Message):
    __slots__ = ("Type", "Data")
    TYPE_FIELD_NUMBER: _ClassVar[int]
    DATA_FIELD_NUMBER: _ClassVar[int]
    Type: KeyType
    Data: bytes
    def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...
14  libp2p/peer/pb/envelope.proto  Normal file
@ -0,0 +1,14 @@
syntax = "proto3";

package libp2p.peer.pb.record;

import "libp2p/peer/pb/crypto.proto";

option go_package = "github.com/libp2p/go-libp2p/core/record/pb";

message Envelope {
  libp2p.peer.pb.crypto.PublicKey public_key = 1;
  bytes payload_type = 2;
  bytes payload = 3;
  bytes signature = 5;
}
28  libp2p/peer/pb/envelope_pb2.py  Normal file
@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/envelope.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


from libp2p.peer.pb import crypto_pb2 as libp2p_dot_peer_dot_pb_dot_crypto__pb2


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/peer/pb/envelope.proto\x12\x15libp2p.peer.pb.record\x1a\x1blibp2p/peer/pb/crypto.proto\"z\n\x08\x45nvelope\x12\x34\n\npublic_key\x18\x01 \x01(\x0b\x32 .libp2p.peer.pb.crypto.PublicKey\x12\x14\n\x0cpayload_type\x18\x02 \x01(\x0c\x12\x0f\n\x07payload\x18\x03 \x01(\x0c\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x42,Z*github.com/libp2p/go-libp2p/core/record/pbb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.envelope_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
  _globals['DESCRIPTOR']._options = None
  _globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/record/pb'
  _globals['_ENVELOPE']._serialized_start=85
  _globals['_ENVELOPE']._serialized_end=207
# @@protoc_insertion_point(module_scope)
18  libp2p/peer/pb/envelope_pb2.pyi  Normal file
@ -0,0 +1,18 @@
from libp2p.peer.pb import crypto_pb2 as _crypto_pb2
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Mapping as _Mapping, Optional as _Optional, Union as _Union

DESCRIPTOR: _descriptor.FileDescriptor

class Envelope(_message.Message):
    __slots__ = ("public_key", "payload_type", "payload", "signature")
    PUBLIC_KEY_FIELD_NUMBER: _ClassVar[int]
    PAYLOAD_TYPE_FIELD_NUMBER: _ClassVar[int]
    PAYLOAD_FIELD_NUMBER: _ClassVar[int]
    SIGNATURE_FIELD_NUMBER: _ClassVar[int]
    public_key: _crypto_pb2.PublicKey
    payload_type: bytes
    payload: bytes
    signature: bytes
    def __init__(self, public_key: _Optional[_Union[_crypto_pb2.PublicKey, _Mapping]] = ..., payload_type: _Optional[bytes] = ..., payload: _Optional[bytes] = ..., signature: _Optional[bytes] = ...) -> None: ...  # type: ignore[type-arg]
31  libp2p/peer/pb/peer_record.proto  Normal file
@ -0,0 +1,31 @@
syntax = "proto3";

package peer.pb;

option go_package = "github.com/libp2p/go-libp2p/core/peer/pb";

// PeerRecord messages contain information that is useful to share with other peers.
// Currently, a PeerRecord contains the public listen addresses for a peer, but this
// is expected to expand to include other information in the future.
//
// PeerRecords are designed to be serialized to bytes and placed inside of
// SignedEnvelopes before sharing with other peers.
// See https://github.com/libp2p/go-libp2p/blob/master/core/record/pb/envelope.proto for
// the SignedEnvelope definition.
message PeerRecord {

  // AddressInfo is a wrapper around a binary multiaddr. It is defined as a
  // separate message to allow us to add per-address metadata in the future.
  message AddressInfo {
    bytes multiaddr = 1;
  }

  // peer_id contains a libp2p peer id in its binary representation.
  bytes peer_id = 1;

  // seq contains a monotonically-increasing sequence counter to order PeerRecords in time.
  uint64 seq = 2;

  // addresses is a list of public listen addresses for the peer.
  repeated AddressInfo addresses = 3;
}
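# Sketch (assumption): building this message with the generated bindings that
# follow; the peer id bytes are placeholders.
#
#     from multiaddr import Multiaddr
#     from libp2p.peer.pb import peer_record_pb2 as pb
#
#     msg = pb.PeerRecord(peer_id=b"\x00\x01", seq=1)
#     msg.addresses.add(multiaddr=Multiaddr("/ip4/127.0.0.1/tcp/4001").to_bytes())
#     wire = msg.SerializeToString()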
29  libp2p/peer/pb/peer_record_pb2.py  Normal file
@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/peer_record.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/peer/pb/peer_record.proto\x12\x07peer.pb\"\x80\x01\n\nPeerRecord\x12\x0f\n\x07peer_id\x18\x01 \x01(\x0c\x12\x0b\n\x03seq\x18\x02 \x01(\x04\x12\x32\n\taddresses\x18\x03 \x03(\x0b\x32\x1f.peer.pb.PeerRecord.AddressInfo\x1a \n\x0b\x41\x64\x64ressInfo\x12\x11\n\tmultiaddr\x18\x01 \x01(\x0c\x42*Z(github.com/libp2p/go-libp2p/core/peer/pbb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.peer_record_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
  _globals['DESCRIPTOR']._options = None
  _globals['DESCRIPTOR']._serialized_options = b'Z(github.com/libp2p/go-libp2p/core/peer/pb'
  _globals['_PEERRECORD']._serialized_start=46
  _globals['_PEERRECORD']._serialized_end=174
  _globals['_PEERRECORD_ADDRESSINFO']._serialized_start=142
  _globals['_PEERRECORD_ADDRESSINFO']._serialized_end=174
# @@protoc_insertion_point(module_scope)
21  libp2p/peer/pb/peer_record_pb2.pyi  Normal file
@ -0,0 +1,21 @@
from google.protobuf.internal import containers as _containers
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Iterable as _Iterable, Mapping as _Mapping, Optional as _Optional, Union as _Union

DESCRIPTOR: _descriptor.FileDescriptor

class PeerRecord(_message.Message):
    __slots__ = ("peer_id", "seq", "addresses")
    class AddressInfo(_message.Message):
        __slots__ = ("multiaddr",)
        MULTIADDR_FIELD_NUMBER: _ClassVar[int]
        multiaddr: bytes
        def __init__(self, multiaddr: _Optional[bytes] = ...) -> None: ...
    PEER_ID_FIELD_NUMBER: _ClassVar[int]
    SEQ_FIELD_NUMBER: _ClassVar[int]
    ADDRESSES_FIELD_NUMBER: _ClassVar[int]
    peer_id: bytes
    seq: int
    addresses: _containers.RepeatedCompositeFieldContainer[PeerRecord.AddressInfo]
    def __init__(self, peer_id: _Optional[bytes] = ..., seq: _Optional[int] = ..., addresses: _Optional[_Iterable[_Union[PeerRecord.AddressInfo, _Mapping]]] = ...) -> None: ...  # type: ignore[type-arg]
251  libp2p/peer/peer_record.py  Normal file
@ -0,0 +1,251 @@
from collections.abc import Sequence
import threading
import time
from typing import Any

from multiaddr import Multiaddr

from libp2p.abc import IPeerRecord
from libp2p.peer.id import ID
import libp2p.peer.pb.peer_record_pb2 as pb
from libp2p.peer.peerinfo import PeerInfo

PEER_RECORD_ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_ENVELOPE_PAYLOAD_TYPE = b"\x03\x01"

_last_timestamp_lock = threading.Lock()
_last_timestamp: int = 0


class PeerRecord(IPeerRecord):
    """
    A record that contains metadata about a peer in the libp2p network.

    This includes:
    - `peer_id`: The peer's globally unique identifier.
    - `addrs`: A list of the peer's publicly reachable multiaddrs.
    - `seq`: A strictly monotonically increasing timestamp used
      to order records over time.

    PeerRecords are designed to be signed and transmitted in libp2p routing Envelopes.
    """

    peer_id: ID
    addrs: list[Multiaddr]
    seq: int

    def __init__(
        self,
        peer_id: ID | None = None,
        addrs: list[Multiaddr] | None = None,
        seq: int | None = None,
    ) -> None:
        """
        Initialize a new PeerRecord.

        If `seq` is not provided, a timestamp-based strictly increasing sequence
        number will be generated.

        :param peer_id: ID of the peer this record refers to.
        :param addrs: Public multiaddrs of the peer.
        :param seq: Monotonic sequence number.
        """
        if peer_id is not None:
            self.peer_id = peer_id
        self.addrs = addrs or []
        if seq is not None:
            self.seq = seq
        else:
            self.seq = timestamp_seq()

    def __repr__(self) -> str:
        return (
            f"PeerRecord(\n"
            f"  peer_id={self.peer_id},\n"
            f"  multiaddrs={[str(m) for m in self.addrs]},\n"
            f"  seq={self.seq}\n"
            f")"
        )

    def domain(self) -> str:
        """
        Return the domain string associated with this PeerRecord.

        Used during record signing and envelope validation to identify the record type.
        """
        return PEER_RECORD_ENVELOPE_DOMAIN

    def codec(self) -> bytes:
        """
        Return the codec identifier for PeerRecords.

        This binary prefix helps distinguish PeerRecords in serialized envelopes.
        """
        return PEER_RECORD_ENVELOPE_PAYLOAD_TYPE

    def to_protobuf(self) -> pb.PeerRecord:
        """
        Convert the current PeerRecord into a protobuf PeerRecord message.

        :raises ValueError: if peer_id serialization fails.
        :return: A protobuf-encoded PeerRecord message object.
        """
        try:
            id_bytes = self.peer_id.to_bytes()
        except Exception as e:
            raise ValueError(f"failed to marshal peer_id: {e}")

        msg = pb.PeerRecord()
        msg.peer_id = id_bytes
        msg.seq = self.seq
        msg.addresses.extend(addrs_to_protobuf(self.addrs))
        return msg

    def marshal_record(self) -> bytes:
        """
        Serialize a PeerRecord into raw bytes suitable for embedding in an Envelope.

        This is typically called during the process of signing or sealing the record.

        :raises ValueError: if serialization to protobuf fails.
        :return: Serialized PeerRecord bytes.
        """
        try:
            msg = self.to_protobuf()
            return msg.SerializeToString()
        except Exception as e:
            raise ValueError(f"failed to marshal PeerRecord: {e}")

    def equal(self, other: Any) -> bool:
        """
        Check if this PeerRecord is identical to another.

        Two PeerRecords are considered equal if:
        - Their peer IDs match.
        - Their sequence numbers are identical.
        - Their address lists are identical and in the same order.

        :param other: Another PeerRecord instance.
        :return: True if all fields match, False otherwise.
        """
        if isinstance(other, PeerRecord):
            if self.peer_id == other.peer_id:
                if self.seq == other.seq:
                    if len(self.addrs) == len(other.addrs):
                        for a1, a2 in zip(self.addrs, other.addrs):
                            if a1 == a2:
                                continue
                            else:
                                return False
                        return True
        return False
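# Sketch (assumption): `equal` compares addresses in order, so the same addrs
# in a different order are not equal. `peer_id` is a hypothetical ID.
#
#     a = Multiaddr("/ip4/1.2.3.4/tcp/1")
#     b = Multiaddr("/ip4/1.2.3.4/tcp/2")
#     assert not PeerRecord(peer_id, [a, b], 7).equal(PeerRecord(peer_id, [b, a], 7))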
def unmarshal_record(data: bytes) -> PeerRecord:
    """
    Deserialize a PeerRecord from its serialized byte representation.

    Typically used when receiving a PeerRecord inside a signed routing Envelope.

    :param data: Serialized protobuf-encoded bytes.
    :raises ValueError: if parsing or conversion fails.
    :return: A valid PeerRecord instance.
    """
    if data is None:
        raise ValueError("cannot unmarshal PeerRecord from None")

    msg = pb.PeerRecord()
    try:
        msg.ParseFromString(data)
    except Exception as e:
        raise ValueError(f"Failed to parse PeerRecord protobuf: {e}")

    try:
        record = peer_record_from_protobuf(msg)
    except Exception as e:
        raise ValueError(f"Failed to convert protobuf to PeerRecord: {e}")

    return record


def timestamp_seq() -> int:
    """
    Generate a strictly increasing timestamp-based sequence number.

    Ensures that even if multiple PeerRecords are generated in the same nanosecond,
    their `seq` values will still be strictly increasing by using a lock to track the
    last value.

    :return: A strictly increasing integer timestamp.
    """
    global _last_timestamp
    now = int(time.time_ns())
    with _last_timestamp_lock:
        if now <= _last_timestamp:
            now = _last_timestamp + 1
        _last_timestamp = now
    return now
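# Sketch (assumption): rapid calls never repeat, because the lock-guarded
# floor advances past any clock collision.
#
#     seqs = [timestamp_seq() for _ in range(1000)]
#     assert all(a < b for a, b in zip(seqs, seqs[1:]))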
def peer_record_from_peer_info(info: PeerInfo) -> PeerRecord:
    """
    Create a PeerRecord from a PeerInfo object.

    This automatically assigns a timestamp-based sequence number to the record.

    :param info: A PeerInfo instance (contains peer_id and addrs).
    :return: A PeerRecord instance.
    """
    record = PeerRecord()
    record.peer_id = info.peer_id
    record.addrs = info.addrs
    return record


def peer_record_from_protobuf(msg: pb.PeerRecord) -> PeerRecord:
    """
    Convert a protobuf PeerRecord message into a PeerRecord object.

    :param msg: Protobuf PeerRecord message.
    :raises ValueError: if the peer_id cannot be parsed.
    :return: A deserialized PeerRecord instance.
    """
    try:
        peer_id = ID(msg.peer_id)
    except Exception as e:
        raise ValueError(f"Failed to unmarshal peer_id: {e}")

    addrs = addrs_from_protobuf(msg.addresses)
    seq = msg.seq

    return PeerRecord(peer_id, addrs, seq)


def addrs_from_protobuf(addrs: Sequence[pb.PeerRecord.AddressInfo]) -> list[Multiaddr]:
    """
    Convert a list of protobuf address records to Multiaddr objects.

    :param addrs: A list of protobuf PeerRecord.AddressInfo messages.
    :return: A list of decoded Multiaddr instances (invalid ones are skipped).
    """
    out = []
    for addr_info in addrs:
        try:
            addr = Multiaddr(addr_info.multiaddr)
            out.append(addr)
        except Exception:
            continue
    return out


def addrs_to_protobuf(addrs: list[Multiaddr]) -> list[pb.PeerRecord.AddressInfo]:
    """
    Convert a list of Multiaddr objects into their protobuf representation.

    :param addrs: A list of Multiaddr instances.
    :return: A list of PeerRecord.AddressInfo protobuf messages.
    """
    out = []
    for addr in addrs:
        addr_info = pb.PeerRecord.AddressInfo()
        addr_info.multiaddr = addr.to_bytes()
        out.append(addr_info)
    return out
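# End-to-end sketch (assumption): marshal a record built from PeerInfo and
# read it back; `info` is a hypothetical PeerInfo.
#
#     record = peer_record_from_peer_info(info)
#     data = record.marshal_record()
#     assert unmarshal_record(data).equal(record)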
@ -3,9 +3,11 @@ from collections.abc import (
)
from typing import (
    Any,
    cast,
)

import multiaddr
from multiaddr.protocols import Protocol

from .id import (
    ID,
@ -42,7 +44,8 @@ def info_from_p2p_addr(addr: multiaddr.Multiaddr) -> PeerInfo:
    p2p_protocols = p2p_part.protocols()
    if not p2p_protocols:
        raise InvalidAddrError("The last part of the address has no protocols")
    last_protocol = p2p_protocols[0]
    last_protocol = cast(Protocol, p2p_part.protocols()[0])

    if last_protocol is None:
        raise InvalidAddrError("The last protocol is None")
@ -23,6 +23,7 @@ from libp2p.crypto.keys import (
    PrivateKey,
    PublicKey,
)
from libp2p.peer.envelope import Envelope

from .id import (
    ID,
@ -38,12 +39,23 @@ from .peerinfo import (
PERMANENT_ADDR_TTL = 0


class PeerRecordState:
    envelope: Envelope
    seq: int

    def __init__(self, envelope: Envelope, seq: int):
        self.envelope = envelope
        self.seq = seq


class PeerStore(IPeerStore):
    peer_data_map: dict[ID, PeerData]

    def __init__(self) -> None:
    def __init__(self, max_records: int = 10000) -> None:
        self.peer_data_map = defaultdict(PeerData)
        self.addr_update_channels: dict[ID, MemorySendChannel[Multiaddr]] = {}
        self.peer_record_map: dict[ID, PeerRecordState] = {}
        self.max_records = max_records
    def peer_info(self, peer_id: ID) -> PeerInfo:
        """
@ -64,7 +76,15 @@ class PeerStore(IPeerStore):
        return list(self.peer_data_map.keys())

    def clear_peerdata(self, peer_id: ID) -> None:
        """Clears the peer data of the peer"""
        """Clears all data associated with the given peer_id."""
        if peer_id in self.peer_data_map:
            del self.peer_data_map[peer_id]
        else:
            raise PeerStoreError("peer ID not found")

        # Clear the peer records
        if peer_id in self.peer_record_map:
            self.peer_record_map.pop(peer_id, None)

    def valid_peer_ids(self) -> list[ID]:
        """
@ -78,6 +98,38 @@ class PeerStore(IPeerStore):
        peer_data.clear_addrs()
        return valid_peer_ids
    def _enforce_record_limit(self) -> None:
        """Enforce maximum number of stored records."""
        if len(self.peer_record_map) > self.max_records:
            # Remove the oldest records, ordered by sequence number
            sorted_records = sorted(
                self.peer_record_map.items(), key=lambda x: x[1].seq
            )
            records_to_remove = len(self.peer_record_map) - self.max_records
            for peer_id, _ in sorted_records[:records_to_remove]:
                self.maybe_delete_peer_record(peer_id)
                # maybe_delete_peer_record may already have dropped the entry
                self.peer_record_map.pop(peer_id, None)

    async def start_cleanup_task(self, cleanup_interval: int = 3600) -> None:
        """Start periodic cleanup of expired peer records and addresses."""
        while True:
            await trio.sleep(cleanup_interval)
            self._cleanup_expired_records()

    def _cleanup_expired_records(self) -> None:
        """Remove expired peer records and addresses."""
        expired_peers = []

        for peer_id, peer_data in self.peer_data_map.items():
            if peer_data.is_expired():
                expired_peers.append(peer_id)

        for peer_id in expired_peers:
            self.maybe_delete_peer_record(peer_id)
            del self.peer_data_map[peer_id]

        self._enforce_record_limit()
    # --------PROTO-BOOK--------

    def get_protocols(self, peer_id: ID) -> list[str]:
@ -161,6 +213,84 @@ class PeerStore(IPeerStore):
        peer_data = self.peer_data_map[peer_id]
        peer_data.clear_metadata()
    # -----CERT-ADDR-BOOK-----

    def maybe_delete_peer_record(self, peer_id: ID) -> None:
        """
        Delete the signed peer record for a peer if it has no known
        (non-expired) addresses.

        This is a garbage collection mechanism: if all addresses for a peer have expired
        or been cleared, there's no point holding onto its signed `Envelope`.

        :param peer_id: The peer whose record we may delete.
        """
        if peer_id in self.peer_record_map:
            if not self.addrs(peer_id):
                self.peer_record_map.pop(peer_id, None)

    def consume_peer_record(self, envelope: Envelope, ttl: int) -> bool:
        """
        Accept and store a signed PeerRecord, unless it's older than
        the one already stored.

        This function:
        - Extracts the peer ID and sequence number from the envelope
        - Rejects the record if it's older (lower seq)
        - Updates the stored peer record and replaces associated addresses if accepted

        :param envelope: Signed envelope containing a PeerRecord.
        :param ttl: Time-to-live for the included multiaddrs (in seconds).
        :return: True if the record was accepted and stored; False if it was rejected.
        """
        record = envelope.record()
        peer_id = record.peer_id

        existing = self.peer_record_map.get(peer_id)
        if existing and existing.seq > record.seq:
            return False  # reject older record

        new_addrs = set(record.addrs)

        self.peer_record_map[peer_id] = PeerRecordState(envelope, record.seq)
        self.peer_data_map[peer_id].clear_addrs()
        self.add_addrs(peer_id, list(new_addrs), ttl)

        return True

    def consume_peer_records(self, envelopes: list[Envelope], ttl: int) -> list[bool]:
        """Consume multiple peer records in a single operation."""
        results = []
        for envelope in envelopes:
            results.append(self.consume_peer_record(envelope, ttl))
        return results

    def get_peer_record(self, peer_id: ID) -> Envelope | None:
        """
        Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
        and is still relevant.

        First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
        Then it checks whether the peer has valid, unexpired addresses before
        returning the associated envelope.

        :param peer_id: The peer to look up.
        :return: The signed Envelope if the peer is known and has valid
            addresses; None otherwise.
        """
        self.maybe_delete_peer_record(peer_id)

        # Check if the peer has any valid addresses
        if (
            peer_id in self.peer_data_map
            and not self.peer_data_map[peer_id].is_expired()
        ):
            state = self.peer_record_map.get(peer_id)
            if state is not None:
                return state.envelope
        return None
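    # Sketch (assumption): typical consume/lookup flow; `envelope` is a
    # hypothetical signed Envelope carrying a PeerRecord.
    #
    #     store = PeerStore()
    #     if store.consume_peer_record(envelope, ttl=7200):
    #         assert store.get_peer_record(envelope.record().peer_id) is envelope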
    # -------ADDR-BOOK--------

    def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int = 0) -> None:
@ -189,6 +319,8 @@ class PeerStore(IPeerStore):
            except trio.WouldBlock:
                pass  # Or consider logging / dropping / replacing stream

        self.maybe_delete_peer_record(peer_id)

    def addrs(self, peer_id: ID) -> list[Multiaddr]:
        """
        :param peer_id: peer ID to get addrs for
@ -212,6 +344,8 @@ class PeerStore(IPeerStore):
        if peer_id in self.peer_data_map:
            self.peer_data_map[peer_id].clear_addrs()

        self.maybe_delete_peer_record(peer_id)

    def peers_with_addrs(self) -> list[ID]:
        """
        :return: all of the peer IDs which have addrs stored in the peer store
@ -101,6 +101,18 @@ class Multiselect(IMultiselectMuxer):
        except trio.TooSlowError:
            raise MultiselectError("handshake read timeout")

    def get_protocols(self) -> tuple[TProtocol | None, ...]:
        """
        Retrieve the protocols for which handlers have been registered.

        Returns
        -------
        tuple[TProtocol, ...]
            A tuple of registered protocol names.

        """
        return tuple(self.handlers.keys())

    async def handshake(self, communicator: IMultiselectCommunicator) -> None:
        """
        Perform handshake to agree on multiselect protocol.
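    # Sketch (assumption): get_protocols mirrors handler registration order.
    # Handler names and add_handler usage below are illustrative.
    #
    #     mux = Multiselect()
    #     mux.add_handler(TProtocol("/echo/1.0.0"), echo_handler)
    #     mux.add_handler(TProtocol("/ping/1.0.0"), ping_handler)
    #     assert mux.get_protocols() == (
    #         TProtocol("/echo/1.0.0"),
    #         TProtocol("/ping/1.0.0"),
    #     )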
@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: rpc.proto
# source: libp2p/pubsub/pb/rpc.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
@ -13,39 +13,39 @@ _sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\trpc.proto\x12\tpubsub.pb\"\xb4\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"T\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\"\n\x05peers\x18\x02 \x03(\x0b\x32\x13.pubsub.pb.PeerInfo\x12\x0f\n\x07\x62\x61\x63koff\x18\x03 \x01(\x04\"4\n\x08PeerInfo\x12\x0e\n\x06peerID\x18\x01 \x01(\x0c\x12\x18\n\x10signedPeerRecord\x18\x02 \x01(\x0c\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02')
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1alibp2p/pubsub/pb/rpc.proto\x12\tpubsub.pb\"\xb4\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"T\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\"\n\x05peers\x18\x02 \x03(\x0b\x32\x13.pubsub.pb.PeerInfo\x12\x0f\n\x07\x62\x61\x63koff\x18\x03 \x01(\x04\"4\n\x08PeerInfo\x12\x0e\n\x06peerID\x18\x01 \x01(\x0c\x12\x18\n\x10signedPeerRecord\x18\x02 \x01(\x0c\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02')

_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'rpc_pb2', globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.pubsub.pb.rpc_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:

  DESCRIPTOR._options = None
  _RPC._serialized_start=25
  _RPC._serialized_end=205
  _RPC_SUBOPTS._serialized_start=160
  _RPC_SUBOPTS._serialized_end=205
  _MESSAGE._serialized_start=207
  _MESSAGE._serialized_end=312
  _CONTROLMESSAGE._serialized_start=315
  _CONTROLMESSAGE._serialized_end=491
  _CONTROLIHAVE._serialized_start=493
  _CONTROLIHAVE._serialized_end=544
  _CONTROLIWANT._serialized_start=546
  _CONTROLIWANT._serialized_end=580
  _CONTROLGRAFT._serialized_start=582
  _CONTROLGRAFT._serialized_end=613
  _CONTROLPRUNE._serialized_start=615
  _CONTROLPRUNE._serialized_end=699
  _PEERINFO._serialized_start=701
  _PEERINFO._serialized_end=753
  _TOPICDESCRIPTOR._serialized_start=756
  _TOPICDESCRIPTOR._serialized_end=1147
  _TOPICDESCRIPTOR_AUTHOPTS._serialized_start=889
  _TOPICDESCRIPTOR_AUTHOPTS._serialized_end=1013
  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_start=975
  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_end=1013
  _TOPICDESCRIPTOR_ENCOPTS._serialized_start=1016
  _TOPICDESCRIPTOR_ENCOPTS._serialized_end=1147
  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_start=1104
  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_end=1147
  _RPC._serialized_start=42
  _RPC._serialized_end=222
  _RPC_SUBOPTS._serialized_start=177
  _RPC_SUBOPTS._serialized_end=222
  _MESSAGE._serialized_start=224
  _MESSAGE._serialized_end=329
  _CONTROLMESSAGE._serialized_start=332
  _CONTROLMESSAGE._serialized_end=508
  _CONTROLIHAVE._serialized_start=510
  _CONTROLIHAVE._serialized_end=561
  _CONTROLIWANT._serialized_start=563
  _CONTROLIWANT._serialized_end=597
  _CONTROLGRAFT._serialized_start=599
  _CONTROLGRAFT._serialized_end=630
  _CONTROLPRUNE._serialized_start=632
  _CONTROLPRUNE._serialized_end=716
  _PEERINFO._serialized_start=718
  _PEERINFO._serialized_end=770
  _TOPICDESCRIPTOR._serialized_start=773
  _TOPICDESCRIPTOR._serialized_end=1164
  _TOPICDESCRIPTOR_AUTHOPTS._serialized_start=906
  _TOPICDESCRIPTOR_AUTHOPTS._serialized_end=1030
  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_start=992
  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_end=1030
  _TOPICDESCRIPTOR_ENCOPTS._serialized_start=1033
  _TOPICDESCRIPTOR_ENCOPTS._serialized_end=1164
  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_start=1121
  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_end=1164
# @@protoc_insertion_point(module_scope)
@ -102,6 +102,9 @@ class TopicValidator(NamedTuple):
    is_async: bool


MAX_CONCURRENT_VALIDATORS = 10


class Pubsub(Service, IPubsub):
    host: IHost

@ -109,6 +112,7 @@ class Pubsub(Service, IPubsub):

    peer_receive_channel: trio.MemoryReceiveChannel[ID]
    dead_peer_receive_channel: trio.MemoryReceiveChannel[ID]
    _validator_semaphore: trio.Semaphore

    seen_messages: LastSeenCache

@ -143,6 +147,7 @@ class Pubsub(Service, IPubsub):
        msg_id_constructor: Callable[
            [rpc_pb2.Message], bytes
        ] = get_peer_and_seqno_msg_id,
        max_concurrent_validator_count: int = MAX_CONCURRENT_VALIDATORS,
    ) -> None:
        """
        Construct a new Pubsub object, which is responsible for handling all
@ -168,6 +173,7 @@
        # Therefore, we can only close from the receive side.
        self.peer_receive_channel = peer_receive
        self.dead_peer_receive_channel = dead_peer_receive
        self._validator_semaphore = trio.Semaphore(max_concurrent_validator_count)
        # Register a notifee
        self.host.get_network().register_notifee(
            PubsubNotifee(peer_send, dead_peer_send)
@ -657,7 +663,11 @@ class Pubsub(Service, IPubsub):

        logger.debug("successfully published message %s", msg)

    async def validate_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
    async def validate_msg(
        self,
        msg_forwarder: ID,
        msg: rpc_pb2.Message,
    ) -> None:
        """
        Validate the received message.

@ -680,23 +690,34 @@
            if not validator(msg_forwarder, msg):
                raise ValidationError(f"Validation failed for msg={msg}")

        # TODO: Implement throttle on async validators

        if len(async_topic_validators) > 0:
            # Appends to lists are thread safe in CPython
            results = []

            async def run_async_validator(func: AsyncValidatorFn) -> None:
                result = await func(msg_forwarder, msg)
                results.append(result)
            results: list[bool] = []

            async with trio.open_nursery() as nursery:
                for async_validator in async_topic_validators:
                    nursery.start_soon(run_async_validator, async_validator)
                    nursery.start_soon(
                        self._run_async_validator,
                        async_validator,
                        msg_forwarder,
                        msg,
                        results,
                    )

            if not all(results):
                raise ValidationError(f"Validation failed for msg={msg}")

    async def _run_async_validator(
        self,
        func: AsyncValidatorFn,
        msg_forwarder: ID,
        msg: rpc_pb2.Message,
        results: list[bool],
    ) -> None:
        async with self._validator_semaphore:
            result = await func(msg_forwarder, msg)
            results.append(result)

    async def push_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
        """
        Push a pubsub message to others.
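    # Sketch (assumption): the semaphore added above bounds validator
    # concurrency; with max_concurrent_validator_count=10, an 11th async
    # validator waits inside _run_async_validator until a slot frees.
    #
    #     sem = trio.Semaphore(10)
    #     async with sem:  # acquired before awaiting the user validator
    #         result = await func(msg_forwarder, msg)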
@ -15,6 +15,10 @@ from libp2p.relay.circuit_v2 import (
    RelayLimits,
    RelayResourceManager,
    Reservation,
    DCUTR_PROTOCOL_ID,
    DCUtRProtocol,
    ReachabilityChecker,
    is_private_ip,
)

__all__ = [
@ -25,4 +29,9 @@ __all__ = [
    "RelayLimits",
    "RelayResourceManager",
    "Reservation",
    "DCUtRProtocol",
    "DCUTR_PROTOCOL_ID",
    "ReachabilityChecker",
    "is_private_ip",
]
@ -5,6 +5,16 @@ This package implements the Circuit Relay v2 protocol as specified in:
https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md
"""

from .dcutr import (
    DCUtRProtocol,
)
from .dcutr import PROTOCOL_ID as DCUTR_PROTOCOL_ID

from .nat import (
    ReachabilityChecker,
    is_private_ip,
)

from .discovery import (
    RelayDiscovery,
)
@ -29,4 +39,8 @@ __all__ = [
    "RelayResourceManager",
    "CircuitV2Transport",
    "RelayDiscovery",
    "DCUtRProtocol",
    "DCUTR_PROTOCOL_ID",
    "ReachabilityChecker",
    "is_private_ip",
]
580  libp2p/relay/circuit_v2/dcutr.py  Normal file
@ -0,0 +1,580 @@
"""
|
||||
Direct Connection Upgrade through Relay (DCUtR) protocol implementation.
|
||||
|
||||
This module implements the DCUtR protocol as specified in:
|
||||
https://github.com/libp2p/specs/blob/master/relay/DCUtR.md
|
||||
|
||||
DCUtR enables peers behind NAT to establish direct connections
|
||||
using hole punching techniques.
|
||||
"""
|
||||
|
||||
import logging
|
||||
import time
|
||||
from typing import Any
|
||||
|
||||
from multiaddr import Multiaddr
|
||||
import trio
|
||||
|
||||
from libp2p.abc import (
|
||||
IHost,
|
||||
INetConn,
|
||||
INetStream,
|
||||
)
|
||||
from libp2p.custom_types import (
|
||||
TProtocol,
|
||||
)
|
||||
from libp2p.peer.id import (
|
||||
ID,
|
||||
)
|
||||
from libp2p.peer.peerinfo import (
|
||||
PeerInfo,
|
||||
)
|
||||
from libp2p.relay.circuit_v2.nat import (
|
||||
ReachabilityChecker,
|
||||
)
|
||||
from libp2p.relay.circuit_v2.pb.dcutr_pb2 import (
|
||||
HolePunch,
|
||||
)
|
||||
from libp2p.tools.async_service import (
|
||||
Service,
|
||||
)
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Protocol ID for DCUtR
|
||||
PROTOCOL_ID = TProtocol("/libp2p/dcutr")
|
||||
|
||||
# Maximum message size for DCUtR (4KiB as per spec)
|
||||
MAX_MESSAGE_SIZE = 4 * 1024
|
||||
|
||||
# Timeouts
|
||||
STREAM_READ_TIMEOUT = 30 # seconds
|
||||
STREAM_WRITE_TIMEOUT = 30 # seconds
|
||||
DIAL_TIMEOUT = 10 # seconds
|
||||
|
||||
# Maximum number of hole punch attempts per peer
|
||||
MAX_HOLE_PUNCH_ATTEMPTS = 5
|
||||
|
||||
# Delay between retry attempts
|
||||
HOLE_PUNCH_RETRY_DELAY = 30 # seconds
|
||||
|
||||
# Maximum observed addresses to exchange
|
||||
MAX_OBSERVED_ADDRS = 20
|
||||
|
||||
|
||||
class DCUtRProtocol(Service):
    """
    DCUtRProtocol implements the Direct Connection Upgrade through Relay protocol.

    This protocol allows two NATed peers to establish direct connections through
    hole punching, after they have established an initial connection through a relay.
    """

    def __init__(self, host: IHost):
        """
        Initialize the DCUtR protocol.

        Parameters
        ----------
        host : IHost
            The libp2p host this protocol is running on

        """
        super().__init__()
        self.host = host
        self.event_started = trio.Event()
        self._hole_punch_attempts: dict[ID, int] = {}
        self._direct_connections: set[ID] = set()
        self._in_progress: set[ID] = set()
        self._reachability_checker = ReachabilityChecker(host)
        self._nursery: trio.Nursery | None = None

    async def run(self, *, task_status: Any = trio.TASK_STATUS_IGNORED) -> None:
        """Run the protocol service."""
        try:
            # Register the DCUtR protocol handler
            logger.debug("Registering DCUtR protocol handler")
            self.host.set_stream_handler(PROTOCOL_ID, self._handle_dcutr_stream)

            # Signal that we're ready
            self.event_started.set()

            # Start the service
            async with trio.open_nursery() as nursery:
                self._nursery = nursery
                task_status.started()
                logger.debug("DCUtR protocol service started")

                # Wait for service to be stopped
                await self.manager.wait_finished()
        finally:
            # Clean up
            try:
                # Use empty async handler instead of None for stream handler
                async def empty_handler(_: INetStream) -> None:
                    pass

                self.host.set_stream_handler(PROTOCOL_ID, empty_handler)
                logger.debug("DCUtR protocol handler unregistered")
            except Exception as e:
                logger.error("Error unregistering DCUtR protocol handler: %s", str(e))

            # Clear state
            self._hole_punch_attempts.clear()
            self._direct_connections.clear()
            self._in_progress.clear()
            self._nursery = None
    async def _handle_dcutr_stream(self, stream: INetStream) -> None:
        """
        Handle incoming DCUtR streams.

        Parameters
        ----------
        stream : INetStream
            The incoming stream

        """
        try:
            # Get the remote peer ID
            remote_peer_id = stream.muxed_conn.peer_id
            logger.debug("Received DCUtR stream from peer %s", remote_peer_id)

            # Check if we already have a direct connection
            if await self._have_direct_connection(remote_peer_id):
                logger.debug(
                    "Already have direct connection to %s, closing stream",
                    remote_peer_id,
                )
                await stream.close()
                return

            # Check if there's already an active hole punch attempt
            if remote_peer_id in self._in_progress:
                logger.debug("Hole punch already in progress with %s", remote_peer_id)
                # Let the existing attempt continue
                await stream.close()
                return

            # Mark as in progress
            self._in_progress.add(remote_peer_id)

            try:
                # Read the CONNECT message
                with trio.fail_after(STREAM_READ_TIMEOUT):
                    msg_bytes = await stream.read(MAX_MESSAGE_SIZE)

                # Parse the message
                connect_msg = HolePunch()
                connect_msg.ParseFromString(msg_bytes)

                # Verify it's a CONNECT message
                if connect_msg.type != HolePunch.CONNECT:
                    logger.warning("Expected CONNECT message, got %s", connect_msg.type)
                    await stream.close()
                    return

                logger.debug(
                    "Received CONNECT message from %s with %d addresses",
                    remote_peer_id,
                    len(connect_msg.ObsAddrs),
                )

                # Process observed addresses from the peer
                peer_addrs = self._decode_observed_addrs(list(connect_msg.ObsAddrs))
                logger.debug("Decoded %d valid addresses from peer", len(peer_addrs))

                # Store the addresses in the peerstore
                if peer_addrs:
                    self.host.get_peerstore().add_addrs(
                        remote_peer_id, peer_addrs, 10 * 60
                    )  # 10 minute TTL

                # Send our CONNECT message with our observed addresses
                our_addrs = await self._get_observed_addrs()
                response = HolePunch()
                response.type = HolePunch.CONNECT
                response.ObsAddrs.extend(our_addrs)

                with trio.fail_after(STREAM_WRITE_TIMEOUT):
                    await stream.write(response.SerializeToString())

                logger.debug(
                    "Sent CONNECT response to %s with %d addresses",
                    remote_peer_id,
                    len(our_addrs),
                )

                # Wait for SYNC message
                with trio.fail_after(STREAM_READ_TIMEOUT):
                    sync_bytes = await stream.read(MAX_MESSAGE_SIZE)

                # Parse the SYNC message
                sync_msg = HolePunch()
                sync_msg.ParseFromString(sync_bytes)

                # Verify it's a SYNC message
                if sync_msg.type != HolePunch.SYNC:
                    logger.warning("Expected SYNC message, got %s", sync_msg.type)
                    await stream.close()
                    return

                logger.debug("Received SYNC message from %s", remote_peer_id)

                # Perform hole punch
                success = await self._perform_hole_punch(remote_peer_id, peer_addrs)

                if success:
                    logger.info(
                        "Successfully established direct connection with %s",
                        remote_peer_id,
                    )
                else:
                    logger.warning(
                        "Failed to establish direct connection with %s", remote_peer_id
                    )

            except trio.TooSlowError:
                logger.warning("Timeout in DCUtR protocol with peer %s", remote_peer_id)
            except Exception as e:
                logger.error(
                    "Error in DCUtR protocol with peer %s: %s", remote_peer_id, str(e)
                )
            finally:
                # Clean up
                self._in_progress.discard(remote_peer_id)
                await stream.close()

        except Exception as e:
            logger.error("Error handling DCUtR stream: %s", str(e))
            await stream.close()
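    # Flow sketch (assumption, for orientation only): the responder logic
    # above mirrors the initiator below. On the wire:
    #
    #     initiator -> responder : CONNECT {ObsAddrs}
    #     responder -> initiator : CONNECT {ObsAddrs}   (initiator measures RTT)
    #     initiator -> responder : SYNC
    #     both sides then dial the exchanged addresses simultaneously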
    async def initiate_hole_punch(self, peer_id: ID) -> bool:
        """
        Initiate a hole punch with a peer.

        Parameters
        ----------
        peer_id : ID
            The peer to hole punch with

        Returns
        -------
        bool
            True if hole punch was successful, False otherwise

        """
        # Check if we already have a direct connection
        if await self._have_direct_connection(peer_id):
            logger.debug("Already have direct connection to %s", peer_id)
            return True

        # Check if there's already an active hole punch attempt
        if peer_id in self._in_progress:
            logger.debug("Hole punch already in progress with %s", peer_id)
            return False

        # Check if we've exceeded the maximum number of attempts
        attempts = self._hole_punch_attempts.get(peer_id, 0)
        if attempts >= MAX_HOLE_PUNCH_ATTEMPTS:
            logger.warning("Maximum hole punch attempts reached for peer %s", peer_id)
            return False

        # Mark as in progress and increment attempt counter
        self._in_progress.add(peer_id)
        self._hole_punch_attempts[peer_id] = attempts + 1

        try:
            # Open a DCUtR stream to the peer
            logger.debug("Opening DCUtR stream to peer %s", peer_id)
            stream = await self.host.new_stream(peer_id, [PROTOCOL_ID])
            if not stream:
                logger.warning("Failed to open DCUtR stream to peer %s", peer_id)
                return False

            try:
                # Send our CONNECT message with our observed addresses
                our_addrs = await self._get_observed_addrs()
                connect_msg = HolePunch()
                connect_msg.type = HolePunch.CONNECT
                connect_msg.ObsAddrs.extend(our_addrs)

                start_time = time.time()
                with trio.fail_after(STREAM_WRITE_TIMEOUT):
                    await stream.write(connect_msg.SerializeToString())

                logger.debug(
                    "Sent CONNECT message to %s with %d addresses",
                    peer_id,
                    len(our_addrs),
                )

                # Receive the peer's CONNECT message
                with trio.fail_after(STREAM_READ_TIMEOUT):
                    resp_bytes = await stream.read(MAX_MESSAGE_SIZE)

                # Calculate RTT
                rtt = time.time() - start_time

                # Parse the response
                resp = HolePunch()
                resp.ParseFromString(resp_bytes)

                # Verify it's a CONNECT message
                if resp.type != HolePunch.CONNECT:
                    logger.warning("Expected CONNECT message, got %s", resp.type)
                    return False

                logger.debug(
                    "Received CONNECT response from %s with %d addresses",
                    peer_id,
                    len(resp.ObsAddrs),
                )

                # Process observed addresses from the peer
                peer_addrs = self._decode_observed_addrs(list(resp.ObsAddrs))
                logger.debug("Decoded %d valid addresses from peer", len(peer_addrs))

                # Store the addresses in the peerstore
                if peer_addrs:
                    self.host.get_peerstore().add_addrs(
                        peer_id, peer_addrs, 10 * 60
                    )  # 10 minute TTL

                # Send SYNC message with timing information
                # We'll use a future time that's 2*RTT from now to ensure both sides
                # are ready
                punch_time = time.time() + (2 * rtt) + 1  # Add 1 second buffer

                sync_msg = HolePunch()
                sync_msg.type = HolePunch.SYNC

                with trio.fail_after(STREAM_WRITE_TIMEOUT):
                    await stream.write(sync_msg.SerializeToString())

                logger.debug("Sent SYNC message to %s", peer_id)

                # Perform the synchronized hole punch
                success = await self._perform_hole_punch(
                    peer_id, peer_addrs, punch_time
                )

                if success:
                    logger.info(
                        "Successfully established direct connection with %s", peer_id
                    )
                    return True
                else:
                    logger.warning(
                        "Failed to establish direct connection with %s", peer_id
                    )
                    return False

            except trio.TooSlowError:
                logger.warning("Timeout in DCUtR protocol with peer %s", peer_id)
                return False
            except Exception as e:
                logger.error(
                    "Error in DCUtR protocol with peer %s: %s", peer_id, str(e)
                )
                return False
            finally:
                await stream.close()

        except Exception as e:
            logger.error(
                "Error initiating hole punch with peer %s: %s", peer_id, str(e)
            )
            return False
        finally:
            self._in_progress.discard(peer_id)

        # This should never be reached, but add explicit return for type checking
        return False
    async def _perform_hole_punch(
        self, peer_id: ID, addrs: list[Multiaddr], punch_time: float | None = None
    ) -> bool:
        """
        Perform a hole punch attempt with a peer.

        Parameters
        ----------
        peer_id : ID
            The peer to hole punch with
        addrs : list[Multiaddr]
            List of addresses to try
        punch_time : Optional[float]
            Time to perform the punch (if None, do it immediately)

        Returns
        -------
        bool
            True if hole punch was successful

        """
        if not addrs:
            logger.warning("No addresses to try for hole punch with %s", peer_id)
            return False

        # If punch_time is specified, wait until that time
        if punch_time is not None:
            now = time.time()
            if punch_time > now:
                wait_time = punch_time - now
                logger.debug("Waiting %.2f seconds before hole punch", wait_time)
                await trio.sleep(wait_time)

        # Try to dial each address
        logger.debug(
            "Starting hole punch with peer %s using %d addresses", peer_id, len(addrs)
        )

        # Filter to only include non-relay addresses
        direct_addrs = [
            addr for addr in addrs if not str(addr).startswith("/p2p-circuit")
        ]

        if not direct_addrs:
            logger.warning("No direct addresses found for peer %s", peer_id)
            return False

        # Start dialing attempts in parallel
        async with trio.open_nursery() as nursery:
            for addr in direct_addrs[
                :5
            ]:  # Limit to 5 addresses to avoid too many connections
                nursery.start_soon(self._dial_peer, peer_id, addr)

        # Check if we established a direct connection
        return await self._have_direct_connection(peer_id)
    async def _dial_peer(self, peer_id: ID, addr: Multiaddr) -> None:
        """
        Attempt to dial a peer at a specific address.

        Parameters
        ----------
        peer_id : ID
            The peer to dial
        addr : Multiaddr
            The address to dial

        """
        try:
            logger.debug("Attempting to dial %s at %s", peer_id, addr)

            # Create peer info
            peer_info = PeerInfo(peer_id, [addr])

            # Try to connect with timeout
            with trio.fail_after(DIAL_TIMEOUT):
                await self.host.connect(peer_info)

            logger.info("Successfully connected to %s at %s", peer_id, addr)

            # Add to direct connections set
            self._direct_connections.add(peer_id)

        except trio.TooSlowError:
            logger.debug("Timeout dialing %s at %s", peer_id, addr)
        except Exception as e:
            logger.debug("Error dialing %s at %s: %s", peer_id, addr, str(e))

    async def _have_direct_connection(self, peer_id: ID) -> bool:
        """
        Check if we already have a direct connection to a peer.

        Parameters
        ----------
        peer_id : ID
            The peer to check

        Returns
        -------
        bool
            True if we have a direct connection, False otherwise

        """
        # Check our direct connections cache first
        if peer_id in self._direct_connections:
            return True

        # Check if the peer is connected
        network = self.host.get_network()
        conn_or_conns = network.connections.get(peer_id)
        if not conn_or_conns:
            return False

        # Handle both single connection and list of connections
        connections: list[INetConn] = (
            [conn_or_conns] if not isinstance(conn_or_conns, list) else conn_or_conns
        )

        # Check if any connection is direct (not relayed)
        for conn in connections:
            # Get the transport addresses
            addrs = conn.get_transport_addresses()

            # If any address doesn't start with /p2p-circuit, it's a direct connection
            if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
                # Cache this result
                self._direct_connections.add(peer_id)
                return True

        return False
async def _get_observed_addrs(self) -> list[bytes]:
|
||||
"""
|
||||
Get our observed addresses to share with the peer.
|
||||
|
||||
Returns
|
||||
-------
|
||||
List[bytes]
|
||||
List of observed addresses as bytes
|
||||
|
||||
"""
|
||||
# Get all listen addresses
|
||||
addrs = self.host.get_addrs()
|
||||
|
||||
# Filter out relay addresses
|
||||
direct_addrs = [
|
||||
addr for addr in addrs if not str(addr).startswith("/p2p-circuit")
|
||||
]
|
||||
|
||||
# Limit the number of addresses
|
||||
if len(direct_addrs) > MAX_OBSERVED_ADDRS:
|
||||
direct_addrs = direct_addrs[:MAX_OBSERVED_ADDRS]
|
||||
|
||||
# Convert to bytes
|
||||
addr_bytes = [addr.to_bytes() for addr in direct_addrs]
|
||||
|
||||
return addr_bytes
|
||||
|
||||
def _decode_observed_addrs(self, addr_bytes: list[bytes]) -> list[Multiaddr]:
|
||||
"""
|
||||
Decode observed addresses received from a peer.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
addr_bytes : List[bytes]
|
||||
The encoded addresses
|
||||
|
||||
Returns
|
||||
-------
|
||||
List[Multiaddr]
|
||||
The decoded multiaddresses
|
||||
|
||||
"""
|
||||
result = []
|
||||
|
||||
for addr_byte in addr_bytes:
|
||||
try:
|
||||
addr = Multiaddr(addr_byte)
|
||||
# Validate the address (basic check)
|
||||
if str(addr).startswith("/ip"):
|
||||
result.append(addr)
|
||||
except Exception as e:
|
||||
logger.debug("Error decoding multiaddr: %s", str(e))
|
||||
|
||||
return result
|
||||
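For illustration only (a hypothetical caller, not part of this diff): the punch_time parameter exists so that both sides can begin dialing at an agreed wall-clock moment, which is what makes a simultaneous open across NATs plausible.

import time

# Agree over the relayed connection to punch two seconds from now; both
# peers then invoke their hole-punch coordinator with the same timestamp.
# (`coordinator.hole_punch` is a hypothetical name for the method above.)
punch_at = time.time() + 2.0
# success = await coordinator.hole_punch(peer_id, addrs, punch_time=punch_at)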
@@ -234,7 +234,8 @@ class RelayDiscovery(Service):

        if not callable(proto_getter):
            return None
-
+        if peer_id not in peerstore.peer_ids():
+            return None
        try:
            # Try to get protocols
            proto_result = proto_getter(peer_id)
@@ -283,8 +284,6 @@ class RelayDiscovery(Service):
            return None

        mux = self.host.get_mux()
-        if not hasattr(mux, "protocols"):
-            return None

        peer_protocols = set()
        # Get protocols from mux with proper type safety
@@ -293,7 +292,9 @@ class RelayDiscovery(Service):
            # Get protocols with proper typing
-            available_protocols = [
-                p for p in mux.get_protocols() if p is not None
-            ]
+            mux_protocols = mux.get_protocols()
+            if isinstance(mux_protocols, (list, tuple)):
+                available_protocols = list(mux_protocols)

            for protocol in available_protocols:
                try:
@@ -313,7 +314,7 @@ class RelayDiscovery(Service):

        self._protocol_cache[peer_id] = peer_protocols
        protocol_str = str(PROTOCOL_ID)
-        for protocol in peer_protocols:
+        for protocol in map(TProtocol, peer_protocols):
            if protocol == protocol_str:
                return True
        return False
libp2p/relay/circuit_v2/nat.py (new file, 300 lines)
@@ -0,0 +1,300 @@
"""
NAT traversal utilities for libp2p.

This module provides utilities for NAT traversal and reachability detection.
"""

import ipaddress
import logging

from multiaddr import (
    Multiaddr,
)

from libp2p.abc import (
    IHost,
    INetConn,
)
from libp2p.peer.id import (
    ID,
)

logger = logging.getLogger("libp2p.relay.circuit_v2.nat")

# Timeout for reachability checks
REACHABILITY_TIMEOUT = 10  # seconds

# Define private IP ranges
PRIVATE_IP_RANGES = [
    ("10.0.0.0", "10.255.255.255"),  # Class A private network: 10.0.0.0/8
    ("172.16.0.0", "172.31.255.255"),  # Class B private network: 172.16.0.0/12
    ("192.168.0.0", "192.168.255.255"),  # Class C private network: 192.168.0.0/16
]

# Link-local address range: 169.254.0.0/16
LINK_LOCAL_RANGE = ("169.254.0.0", "169.254.255.255")

# Loopback address range: 127.0.0.0/8
LOOPBACK_RANGE = ("127.0.0.0", "127.255.255.255")


def ip_to_int(ip: str) -> int:
    """
    Convert an IP address to an integer.

    Parameters
    ----------
    ip : str
        IP address to convert

    Returns
    -------
    int
        Integer representation of the IP

    """
    try:
        return int(ipaddress.IPv4Address(ip))
    except ipaddress.AddressValueError:
        # Handle IPv6 addresses
        return int(ipaddress.IPv6Address(ip))


def is_ip_in_range(ip: str, start_range: str, end_range: str) -> bool:
    """
    Check if an IP address is within a range.

    Parameters
    ----------
    ip : str
        IP address to check
    start_range : str
        Start of the range
    end_range : str
        End of the range

    Returns
    -------
    bool
        True if the IP is in the range

    """
    try:
        ip_int = ip_to_int(ip)
        start_int = ip_to_int(start_range)
        end_int = ip_to_int(end_range)
        return start_int <= ip_int <= end_int
    except Exception:
        return False


def is_private_ip(ip: str) -> bool:
    """
    Check if an IP address is private.

    Parameters
    ----------
    ip : str
        IP address to check

    Returns
    -------
    bool
        True if IP is private

    """
    for start_range, end_range in PRIVATE_IP_RANGES:
        if is_ip_in_range(ip, start_range, end_range):
            return True

    # Check for link-local addresses
    if is_ip_in_range(ip, *LINK_LOCAL_RANGE):
        return True

    # Check for loopback addresses
    if is_ip_in_range(ip, *LOOPBACK_RANGE):
        return True

    return False


def extract_ip_from_multiaddr(addr: Multiaddr) -> str | None:
    """
    Extract the IP address from a multiaddr.

    Parameters
    ----------
    addr : Multiaddr
        Multiaddr to extract from

    Returns
    -------
    Optional[str]
        IP address or None if not found

    """
    # Convert to string representation
    addr_str = str(addr)

    # Look for IPv4 address
    ipv4_start = addr_str.find("/ip4/")
    if ipv4_start != -1:
        # Extract the IPv4 address
        ipv4_end = addr_str.find("/", ipv4_start + 5)
        if ipv4_end != -1:
            return addr_str[ipv4_start + 5 : ipv4_end]

    # Look for IPv6 address
    ipv6_start = addr_str.find("/ip6/")
    if ipv6_start != -1:
        # Extract the IPv6 address
        ipv6_end = addr_str.find("/", ipv6_start + 5)
        if ipv6_end != -1:
            return addr_str[ipv6_start + 5 : ipv6_end]

    return None


class ReachabilityChecker:
    """
    Utility class for checking peer reachability.

    This class assesses whether a peer's addresses are likely
    to be directly reachable or behind NAT.
    """

    def __init__(self, host: IHost):
        """
        Initialize the reachability checker.

        Parameters
        ----------
        host : IHost
            The libp2p host

        """
        self.host = host
        self._peer_reachability: dict[ID, bool] = {}
        self._known_public_peers: set[ID] = set()

    def is_addr_public(self, addr: Multiaddr) -> bool:
        """
        Check if an address is likely to be publicly reachable.

        Parameters
        ----------
        addr : Multiaddr
            The multiaddr to check

        Returns
        -------
        bool
            True if address is likely public

        """
        # Extract the IP address
        ip = extract_ip_from_multiaddr(addr)
        if not ip:
            return False

        # Check if it's a private IP
        return not is_private_ip(ip)

    def get_public_addrs(self, addrs: list[Multiaddr]) -> list[Multiaddr]:
        """
        Filter a list of addresses to only include likely public ones.

        Parameters
        ----------
        addrs : List[Multiaddr]
            List of addresses to filter

        Returns
        -------
        List[Multiaddr]
            List of likely public addresses

        """
        return [addr for addr in addrs if self.is_addr_public(addr)]

    async def check_peer_reachability(self, peer_id: ID) -> bool:
        """
        Check if a peer is directly reachable.

        Parameters
        ----------
        peer_id : ID
            The peer ID to check

        Returns
        -------
        bool
            True if peer is likely directly reachable

        """
        # Check if we already know
        if peer_id in self._peer_reachability:
            return self._peer_reachability[peer_id]

        # Check if the peer is connected
        network = self.host.get_network()
        connections: INetConn | list[INetConn] | None = network.connections.get(peer_id)
        if not connections:
            # Not connected, can't determine reachability
            return False

        # Check if any connection is direct (not relayed)
        if isinstance(connections, list):
            for conn in connections:
                # Get the transport addresses
                addrs = conn.get_transport_addresses()

                # If any address doesn't start with /p2p-circuit,
                # it's a direct connection
                if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
                    self._peer_reachability[peer_id] = True
                    return True
        else:
            # Handle single connection case
            addrs = connections.get_transport_addresses()
            if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
                self._peer_reachability[peer_id] = True
                return True

        # Get the peer's addresses from peerstore
        try:
            addrs = self.host.get_peerstore().addrs(peer_id)
            # Check if peer has any public addresses
            public_addrs = self.get_public_addrs(addrs)
            if public_addrs:
                self._peer_reachability[peer_id] = True
                return True
        except Exception as e:
            logger.debug("Error getting peer addresses: %s", str(e))

        # Default to not directly reachable
        self._peer_reachability[peer_id] = False
        return False

    async def check_self_reachability(self) -> tuple[bool, list[Multiaddr]]:
        """
        Check if this host is likely directly reachable.

        Returns
        -------
        Tuple[bool, List[Multiaddr]]
            Tuple of (is_reachable, public_addresses)

        """
        # Get all host addresses
        addrs = self.host.get_addrs()

        # Filter for public addresses
        public_addrs = self.get_public_addrs(addrs)

        # If we have public addresses, assume we're reachable.
        # This is a simplified assumption - real reachability would need
        # external checking.
        is_reachable = len(public_addrs) > 0

        return is_reachable, public_addrs
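As a quick illustration (a sketch using the helpers defined above; the addresses are examples), the module classifies multiaddrs like so:

from multiaddr import Multiaddr

private = Multiaddr("/ip4/192.168.1.7/tcp/4001")
ip = extract_ip_from_multiaddr(private)        # -> "192.168.1.7"
assert ip is not None and is_private_ip(ip)    # RFC 1918 range, so not public

public = Multiaddr("/ip4/203.0.113.5/tcp/4001")
assert not is_private_ip(extract_ip_from_multiaddr(public) or "")

A ReachabilityChecker built on these checks then treats a peer with at least one public address as probably dialable, which is exactly the simplification check_self_reachability notes above.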
@@ -5,6 +5,11 @@ Contains generated protobuf code for circuit_v2 relay protocol.
 """

 # Import the classes to be accessible directly from the package
+from .dcutr_pb2 import (
+    HolePunch,
+)
+
 from .circuit_pb2 import (
     HopMessage,
     Limit,
@@ -13,4 +18,4 @@ from .circuit_pb2 import (
     StopMessage,
 )

-__all__ = ["HopMessage", "Limit", "Reservation", "Status", "StopMessage"]
+__all__ = ["HopMessage", "Limit", "Reservation", "Status", "StopMessage", "HolePunch"]
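With the re-export in place, callers can import the message class straight from the package (a one-line sketch):

from libp2p.relay.circuit_v2.pb import HolePunch  # re-exported via __all__ above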
@@ -1,6 +1,5 @@
 # -*- coding: utf-8 -*-
 # Generated by the protocol buffer compiler. DO NOT EDIT!
-# NO CHECKED-IN PROTOBUF GENCODE
 # source: libp2p/relay/circuit_v2/pb/circuit.proto
 """Generated protocol buffer code."""
 from google.protobuf.internal import builder as _builder
@@ -12,11 +11,14 @@ from google.protobuf import symbol_database as _symbol_database
 _sym_db = _symbol_database.Default()




 DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n(libp2p/relay/circuit_v2/pb/circuit.proto\x12\rcircuit.pb.v2\"\xf3\x01\n\nHopMessage\x12,\n\x04type\x18\x01 \x01(\x0e\x32\x1e.circuit.pb.v2.HopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12/\n\x0breservation\x18\x03 \x01(\x0b\x32\x1a.circuit.pb.v2.Reservation\x12#\n\x05limit\x18\x04 \x01(\x0b\x32\x14.circuit.pb.v2.Limit\x12%\n\x06status\x18\x05 \x01(\x0b\x32\x15.circuit.pb.v2.Status\",\n\x04Type\x12\x0b\n\x07RESERVE\x10\x00\x12\x0b\n\x07\x43ONNECT\x10\x01\x12\n\n\x06STATUS\x10\x02\"\x92\x01\n\x0bStopMessage\x12-\n\x04type\x18\x01 \x01(\x0e\x32\x1f.circuit.pb.v2.StopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12%\n\x06status\x18\x03 \x01(\x0b\x32\x15.circuit.pb.v2.Status\"\x1f\n\x04Type\x12\x0b\n\x07\x43ONNECT\x10\x00\x12\n\n\x06STATUS\x10\x01\"A\n\x0bReservation\x12\x0f\n\x07voucher\x18\x01 \x01(\x0c\x12\x11\n\tsignature\x18\x02 \x01(\x0c\x12\x0e\n\x06\x65xpire\x18\x03 \x01(\x03\"\'\n\x05Limit\x12\x10\n\x08\x64uration\x18\x01 \x01(\x03\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x03\"\xf6\x01\n\x06Status\x12(\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x1a.circuit.pb.v2.Status.Code\x12\x0f\n\x07message\x18\x02 \x01(\t\"\xb0\x01\n\x04\x43ode\x12\x06\n\x02OK\x10\x00\x12\x17\n\x13RESERVATION_REFUSED\x10\x64\x12\x1b\n\x17RESOURCE_LIMIT_EXCEEDED\x10\x65\x12\x15\n\x11PERMISSION_DENIED\x10\x66\x12\x16\n\x11\x43ONNECTION_FAILED\x10\xc8\x01\x12\x11\n\x0c\x44IAL_REFUSED\x10\xc9\x01\x12\x10\n\x0bSTOP_FAILED\x10\xac\x02\x12\x16\n\x11MALFORMED_MESSAGE\x10\x90\x03\x62\x06proto3')

 _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
 _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.relay.circuit_v2.pb.circuit_pb2', globals())
 if _descriptor._USE_C_DESCRIPTORS == False:

   DESCRIPTOR._options = None
   _HOPMESSAGE._serialized_start=60
   _HOPMESSAGE._serialized_end=303
libp2p/relay/circuit_v2/pb/dcutr.proto
Normal file
14
libp2p/relay/circuit_v2/pb/dcutr.proto
Normal file
@ -0,0 +1,14 @@
|
syntax = "proto2";

package holepunch.pb;

message HolePunch {
  enum Type {
    CONNECT = 100;
    SYNC = 300;
  }

  required Type type = 1;

  repeated bytes ObsAddrs = 2;
}
libp2p/relay/circuit_v2/pb/dcutr_pb2.py (new file, 27 lines)
@@ -0,0 +1,27 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/relay/circuit_v2/pb/dcutr.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n&libp2p/relay/circuit_v2/pb/dcutr.proto\x12\x0cholepunch.pb\"i\n\tHolePunch\x12*\n\x04type\x18\x01 \x02(\x0e\x32\x1c.holepunch.pb.HolePunch.Type\x12\x10\n\x08ObsAddrs\x18\x02 \x03(\x0c\"\x1e\n\x04Type\x12\x0b\n\x07\x43ONNECT\x10\x64\x12\t\n\x04SYNC\x10\xac\x02')

_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.relay.circuit_v2.pb.dcutr_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:

  DESCRIPTOR._options = None
  _HOLEPUNCH._serialized_start=56
  _HOLEPUNCH._serialized_end=161
  _HOLEPUNCH_TYPE._serialized_start=131
  _HOLEPUNCH_TYPE._serialized_end=161
# @@protoc_insertion_point(module_scope)
libp2p/relay/circuit_v2/pb/dcutr_pb2.pyi (new file, 53 lines)
@@ -0,0 +1,53 @@
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file
"""

import builtins
import collections.abc
import google.protobuf.descriptor
import google.protobuf.internal.containers
import google.protobuf.internal.enum_type_wrapper
import google.protobuf.message
import sys
import typing

if sys.version_info >= (3, 10):
    import typing as typing_extensions
else:
    import typing_extensions

DESCRIPTOR: google.protobuf.descriptor.FileDescriptor

@typing.final
class HolePunch(google.protobuf.message.Message):
    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    class _Type:
        ValueType = typing.NewType("ValueType", builtins.int)
        V: typing_extensions.TypeAlias = ValueType

    class _TypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[HolePunch._Type.ValueType], builtins.type):
        DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
        CONNECT: HolePunch._Type.ValueType  # 100
        SYNC: HolePunch._Type.ValueType  # 300

    class Type(_Type, metaclass=_TypeEnumTypeWrapper): ...
    CONNECT: HolePunch.Type.ValueType  # 100
    SYNC: HolePunch.Type.ValueType  # 300

    TYPE_FIELD_NUMBER: builtins.int
    OBSADDRS_FIELD_NUMBER: builtins.int
    type: global___HolePunch.Type.ValueType
    @property
    def ObsAddrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...
    def __init__(
        self,
        *,
        type: global___HolePunch.Type.ValueType | None = ...,
        ObsAddrs: collections.abc.Iterable[builtins.bytes] | None = ...,
    ) -> None: ...
    def HasField(self, field_name: typing.Literal["type", b"type"]) -> builtins.bool: ...
    def ClearField(self, field_name: typing.Literal["ObsAddrs", b"ObsAddrs", "type", b"type"]) -> None: ...

global___HolePunch = HolePunch
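For reference, a minimal round-trip sketch with the generated bindings above (the observed-address bytes are illustrative placeholders, not real multiaddr encodings):

from libp2p.relay.circuit_v2.pb.dcutr_pb2 import HolePunch

# Build a CONNECT message carrying our observed addresses.
msg = HolePunch(type=HolePunch.CONNECT, ObsAddrs=[b"example-multiaddr-bytes"])
wire = msg.SerializeToString()

# The receiving side parses it back and switches on the type field.
parsed = HolePunch()
parsed.ParseFromString(wire)
assert parsed.type == HolePunch.CONNECT
assert list(parsed.ObsAddrs) == [b"example-multiaddr-bytes"]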
@@ -41,7 +41,8 @@ class BaseNoiseMsgReadWriter(EncryptedMsgReadWriter):
     read_writer: NoisePacketReadWriter
     noise_state: NoiseState

-    # FIXME: This prefix is added in msg#3 in Go. Check whether it's a desired behavior.
+    # NOTE: This prefix is added in msg#3 in Go.
+    # Support in py-libp2p is available but not used
     prefix: bytes = b"\x00" * 32

     def __init__(self, conn: IRawConnection, noise_state: NoiseState) -> None:
@@ -29,11 +29,6 @@ class Transport(ISecureTransport):
     early_data: bytes | None
     with_noise_pipes: bool

-    # NOTE: Implementations that support Noise Pipes must decide whether to use
-    # an XX or IK handshake based on whether they possess a cached static
-    # Noise key for the remote peer.
-    # TODO: A storage of seen noise static keys for pattern IK?
-
     def __init__(
         self,
         libp2p_keypair: KeyPair,
@@ -1,3 +1,5 @@
+from collections.abc import AsyncGenerator
+from contextlib import asynccontextmanager
 from types import (
     TracebackType,
 )
@@ -32,6 +34,72 @@ if TYPE_CHECKING:
     )


+class ReadWriteLock:
+    """
+    A read-write lock that allows multiple concurrent readers
+    or one exclusive writer, implemented using Trio primitives.
+    """
+
+    def __init__(self) -> None:
+        self._readers = 0
+        self._readers_lock = trio.Lock()  # Protects access to _readers count
+        self._writer_lock = trio.Semaphore(1)  # Allows only one writer at a time
+
+    async def acquire_read(self) -> None:
+        """Acquire a read lock. Multiple readers can hold it simultaneously."""
+        try:
+            async with self._readers_lock:
+                # The first reader takes the writer semaphore so writers block.
+                if self._readers == 0:
+                    await self._writer_lock.acquire()
+                self._readers += 1
+        except trio.Cancelled:
+            raise
+
+    async def release_read(self) -> None:
+        """Release a read lock."""
+        async with self._readers_lock:
+            # The last reader releases the writer semaphore.
+            if self._readers == 1:
+                self._writer_lock.release()
+            self._readers -= 1
+
+    async def acquire_write(self) -> None:
+        """Acquire an exclusive write lock."""
+        try:
+            await self._writer_lock.acquire()
+        except trio.Cancelled:
+            raise
+
+    def release_write(self) -> None:
+        """Release the exclusive write lock."""
+        self._writer_lock.release()
+
+    @asynccontextmanager
+    async def read_lock(self) -> AsyncGenerator[None, None]:
+        """Context manager for acquiring and releasing a read lock safely."""
+        acquired = False
+        try:
+            await self.acquire_read()
+            acquired = True
+            yield
+        finally:
+            if acquired:
+                # Shield the release from cancellation so the count stays consistent.
+                with trio.CancelScope() as scope:
+                    scope.shield = True
+                    await self.release_read()
+
+    @asynccontextmanager
+    async def write_lock(self) -> AsyncGenerator[None, None]:
+        """Context manager for acquiring and releasing a write lock safely."""
+        acquired = False
+        try:
+            await self.acquire_write()
+            acquired = True
+            yield
+        finally:
+            if acquired:
+                self.release_write()
+
+
 class MplexStream(IMuxedStream):
     """
     reference: https://github.com/libp2p/go-mplex/blob/master/stream.go
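A brief usage sketch of the new class (hypothetical driver code, assuming a ReadWriteLock shared between Trio tasks): readers proceed concurrently while a writer gets exclusive access.

import trio

async def reader(lock: ReadWriteLock, shared: list[int]) -> None:
    async with lock.read_lock():
        # Many readers may be inside this block at once.
        print("read:", list(shared))

async def writer(lock: ReadWriteLock, shared: list[int]) -> None:
    async with lock.write_lock():
        # Exclusive: no readers or other writers while appending.
        shared.append(len(shared))

async def main() -> None:
    lock = ReadWriteLock()
    shared: list[int] = []
    async with trio.open_nursery() as nursery:
        nursery.start_soon(writer, lock, shared)
        nursery.start_soon(reader, lock, shared)
        nursery.start_soon(reader, lock, shared)

trio.run(main)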
@@ -46,7 +114,7 @@ class MplexStream(IMuxedStream):
     read_deadline: int | None
     write_deadline: int | None

-    # TODO: Add lock for read/write to avoid interleaving receiving messages?
+    rw_lock: ReadWriteLock
     close_lock: trio.Lock

     # NOTE: `dataIn` is size of 8 in Go implementation.
@@ -80,6 +148,7 @@ class MplexStream(IMuxedStream):
         self.event_remote_closed = trio.Event()
         self.event_reset = trio.Event()
         self.close_lock = trio.Lock()
+        self.rw_lock = ReadWriteLock()
         self.incoming_data_channel = incoming_data_channel
         self._buf = bytearray()
@@ -113,48 +182,49 @@ class MplexStream(IMuxedStream):
         :param n: number of bytes to read
         :return: bytes actually read
         """
-        if n is not None and n < 0:
-            raise ValueError(
-                "the number of bytes to read `n` must be non-negative or "
-                f"`None` to indicate read until EOF, got n={n}"
-            )
-        if self.event_reset.is_set():
-            raise MplexStreamReset
-        if n is None:
-            return await self._read_until_eof()
-        if len(self._buf) == 0:
-            data: bytes
-            # Peek whether there is data available. If yes, we just read until there is
-            # no data, then return.
-            try:
-                data = self.incoming_data_channel.receive_nowait()
-                self._buf.extend(data)
-            except trio.EndOfChannel:
-                raise MplexStreamEOF
-            except trio.WouldBlock:
-                # We know `receive` will be blocked here. Wait for data here with
-                # `receive` and catch all kinds of errors here.
-                try:
-                    data = await self.incoming_data_channel.receive()
-                    self._buf.extend(data)
-                except trio.EndOfChannel:
-                    if self.event_reset.is_set():
-                        raise MplexStreamReset
-                    if self.event_remote_closed.is_set():
-                        raise MplexStreamEOF
-                except trio.ClosedResourceError as error:
-                    # Probably `incoming_data_channel` is closed in `reset` when we are
-                    # waiting for `receive`.
-                    if self.event_reset.is_set():
-                        raise MplexStreamReset
-                    raise Exception(
-                        "`incoming_data_channel` is closed but stream is not reset. "
-                        "This should never happen."
-                    ) from error
-        self._buf.extend(self._read_return_when_blocked())
-        payload = self._buf[:n]
-        self._buf = self._buf[len(payload) :]
-        return bytes(payload)
+        async with self.rw_lock.read_lock():
+            if n is not None and n < 0:
+                raise ValueError(
+                    "the number of bytes to read `n` must be non-negative or "
+                    f"`None` to indicate read until EOF, got n={n}"
+                )
+            if self.event_reset.is_set():
+                raise MplexStreamReset
+            if n is None:
+                return await self._read_until_eof()
+            if len(self._buf) == 0:
+                data: bytes
+                # Peek whether there is data available. If yes, we just read until
+                # there is no data, then return.
+                try:
+                    data = self.incoming_data_channel.receive_nowait()
+                    self._buf.extend(data)
+                except trio.EndOfChannel:
+                    raise MplexStreamEOF
+                except trio.WouldBlock:
+                    # We know `receive` will be blocked here. Wait for data here with
+                    # `receive` and catch all kinds of errors here.
+                    try:
+                        data = await self.incoming_data_channel.receive()
+                        self._buf.extend(data)
+                    except trio.EndOfChannel:
+                        if self.event_reset.is_set():
+                            raise MplexStreamReset
+                        if self.event_remote_closed.is_set():
+                            raise MplexStreamEOF
+                    except trio.ClosedResourceError as error:
+                        # Probably `incoming_data_channel` is closed in `reset` when
+                        # we are waiting for `receive`.
+                        if self.event_reset.is_set():
+                            raise MplexStreamReset
+                        raise Exception(
+                            "`incoming_data_channel` is closed but stream is not reset. "
+                            "This should never happen."
+                        ) from error
+            self._buf.extend(self._read_return_when_blocked())
+            payload = self._buf[:n]
+            self._buf = self._buf[len(payload) :]
+            return bytes(payload)

     async def write(self, data: bytes) -> None:
         """
@@ -162,22 +232,21 @@ class MplexStream(IMuxedStream):

         :return: number of bytes written
         """
-        if self.event_local_closed.is_set():
-            raise MplexStreamClosed(f"cannot write to closed stream: data={data!r}")
-        flag = (
-            HeaderTags.MessageInitiator
-            if self.is_initiator
-            else HeaderTags.MessageReceiver
-        )
-        await self.muxed_conn.send_message(flag, data, self.stream_id)
+        async with self.rw_lock.write_lock():
+            if self.event_local_closed.is_set():
+                raise MplexStreamClosed(f"cannot write to closed stream: data={data!r}")
+            flag = (
+                HeaderTags.MessageInitiator
+                if self.is_initiator
+                else HeaderTags.MessageReceiver
+            )
+            await self.muxed_conn.send_message(flag, data, self.stream_id)

     async def close(self) -> None:
         """
         Closing a stream closes it for writing and closes the remote end for
         reading but allows writing in the other direction.
         """
-        # TODO error handling with timeout

         async with self.close_lock:
             if self.event_local_closed.is_set():
                 return
@@ -185,8 +254,17 @@ class MplexStream(IMuxedStream):
         flag = (
             HeaderTags.CloseInitiator if self.is_initiator else HeaderTags.CloseReceiver
         )
-        # TODO: Raise when `muxed_conn.send_message` fails and `Mplex` isn't shutdown.
-        await self.muxed_conn.send_message(flag, None, self.stream_id)
+
+        try:
+            with trio.fail_after(5):  # timeout in seconds
+                await self.muxed_conn.send_message(flag, None, self.stream_id)
+        except trio.TooSlowError:
+            raise TimeoutError("Timeout while trying to close the stream")
+        except MuxedConnUnavailable:
+            if not self.muxed_conn.event_shutting_down.is_set():
+                raise RuntimeError(
+                    "Failed to send close message and Mplex isn't shutting down"
+                )

         _is_remote_closed: bool
         async with self.close_lock:
@@ -45,6 +45,9 @@ from libp2p.stream_muxer.exceptions import (
     MuxedStreamReset,
 )

+# Configure logger for this module
+logger = logging.getLogger("libp2p.stream_muxer.yamux")
+
 PROTOCOL_ID = "/yamux/1.0.0"
 TYPE_DATA = 0x0
 TYPE_WINDOW_UPDATE = 0x1
@@ -98,13 +101,13 @@ class YamuxStream(IMuxedStream):
         # Flow control: Check if we have enough send window
         total_len = len(data)
         sent = 0
-        logging.debug(f"Stream {self.stream_id}: Starts writing {total_len} bytes ")
+        logger.debug(f"Stream {self.stream_id}: Starts writing {total_len} bytes ")
         while sent < total_len:
             # Wait for available window with timeout
             timeout = False
             async with self.window_lock:
                 if self.send_window == 0:
-                    logging.debug(
+                    logger.debug(
                         f"Stream {self.stream_id}: Window is zero, waiting for update"
                     )
                     # Release lock and wait with timeout
@@ -152,12 +155,12 @@ class YamuxStream(IMuxedStream):
         """
         if increment <= 0:
             # If increment is zero or negative, skip sending update
-            logging.debug(
+            logger.debug(
                 f"Stream {self.stream_id}: Skipping window update"
                 f"(increment={increment})"
             )
             return
-        logging.debug(
+        logger.debug(
             f"Stream {self.stream_id}: Sending window update with increment={increment}"
         )

@@ -185,7 +188,7 @@ class YamuxStream(IMuxedStream):

         # If the stream is closed for receiving and the buffer is empty, raise EOF
         if self.recv_closed and not self.conn.stream_buffers.get(self.stream_id):
-            logging.debug(
+            logger.debug(
                 f"Stream {self.stream_id}: Stream closed for receiving and buffer empty"
             )
             raise MuxedStreamEOF("Stream is closed for receiving")
@@ -198,7 +201,7 @@ class YamuxStream(IMuxedStream):

         # If buffer is not available, check if stream is closed
         if buffer is None:
-            logging.debug(f"Stream {self.stream_id}: No buffer available")
+            logger.debug(f"Stream {self.stream_id}: No buffer available")
             raise MuxedStreamEOF("Stream buffer closed")

         # If we have data in buffer, process it
@@ -210,34 +213,34 @@ class YamuxStream(IMuxedStream):
                 # Send window update for the chunk we just read
                 async with self.window_lock:
                     self.recv_window += len(chunk)
-                    logging.debug(f"Stream {self.stream_id}: Update {len(chunk)}")
+                    logger.debug(f"Stream {self.stream_id}: Update {len(chunk)}")
                     await self.send_window_update(len(chunk), skip_lock=True)

             # If stream is closed (FIN received) and buffer is empty, break
             if self.recv_closed and len(buffer) == 0:
-                logging.debug(f"Stream {self.stream_id}: Closed with empty buffer")
+                logger.debug(f"Stream {self.stream_id}: Closed with empty buffer")
                 break

             # If stream was reset, raise reset error
             if self.reset_received:
-                logging.debug(f"Stream {self.stream_id}: Stream was reset")
+                logger.debug(f"Stream {self.stream_id}: Stream was reset")
                 raise MuxedStreamReset("Stream was reset")

             # Wait for more data or stream closure
-            logging.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
+            logger.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
             await self.conn.stream_events[self.stream_id].wait()
             self.conn.stream_events[self.stream_id] = trio.Event()

         # After loop exit, first check if we have data to return
         if data:
-            logging.debug(
+            logger.debug(
                 f"Stream {self.stream_id}: Returning {len(data)} bytes after loop"
             )
             return data

         # No data accumulated, now check why we exited the loop
         if self.conn.event_shutting_down.is_set():
-            logging.debug(f"Stream {self.stream_id}: Connection shutting down")
+            logger.debug(f"Stream {self.stream_id}: Connection shutting down")
             raise MuxedStreamEOF("Connection shut down")

         # Return empty data
@@ -246,7 +249,7 @@ class YamuxStream(IMuxedStream):
             data = await self.conn.read_stream(self.stream_id, n)
             async with self.window_lock:
                 self.recv_window += len(data)
-                logging.debug(
+                logger.debug(
                     f"Stream {self.stream_id}: Sending window update after read, "
                     f"increment={len(data)}"
                 )
@@ -255,7 +258,7 @@ class YamuxStream(IMuxedStream):

     async def close(self) -> None:
         if not self.send_closed:
-            logging.debug(f"Half-closing stream {self.stream_id} (local end)")
+            logger.debug(f"Half-closing stream {self.stream_id} (local end)")
             header = struct.pack(
                 YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_FIN, self.stream_id, 0
             )
@@ -271,7 +274,7 @@ class YamuxStream(IMuxedStream):

     async def reset(self) -> None:
         if not self.closed:
-            logging.debug(f"Resetting stream {self.stream_id}")
+            logger.debug(f"Resetting stream {self.stream_id}")
             header = struct.pack(
                 YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_RST, self.stream_id, 0
             )
@@ -349,7 +352,7 @@ class Yamux(IMuxedConn):
         self._nursery: Nursery | None = None

     async def start(self) -> None:
-        logging.debug(f"Starting Yamux for {self.peer_id}")
+        logger.debug(f"Starting Yamux for {self.peer_id}")
         if self.event_started.is_set():
             return
         async with trio.open_nursery() as nursery:
@@ -362,7 +365,7 @@ class Yamux(IMuxedConn):
         return self.is_initiator_value

     async def close(self, error_code: int = GO_AWAY_NORMAL) -> None:
-        logging.debug(f"Closing Yamux connection with code {error_code}")
+        logger.debug(f"Closing Yamux connection with code {error_code}")
         async with self.streams_lock:
             if not self.event_shutting_down.is_set():
                 try:
@@ -371,7 +374,7 @@ class Yamux(IMuxedConn):
                     )
                     await self.secured_conn.write(header)
                 except Exception as e:
-                    logging.debug(f"Failed to send GO_AWAY: {e}")
+                    logger.debug(f"Failed to send GO_AWAY: {e}")
             self.event_shutting_down.set()
             for stream in self.streams.values():
                 stream.closed = True
@@ -382,12 +385,12 @@ class Yamux(IMuxedConn):
             self.stream_events.clear()
         try:
             await self.secured_conn.close()
-            logging.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
+            logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
         except Exception as e:
-            logging.debug(f"Error closing secured_conn for peer {self.peer_id}: {e}")
+            logger.debug(f"Error closing secured_conn for peer {self.peer_id}: {e}")
         self.event_closed.set()
         if self.on_close:
-            logging.debug(f"Calling on_close in Yamux.close for peer {self.peer_id}")
+            logger.debug(f"Calling on_close in Yamux.close for peer {self.peer_id}")
             if inspect.iscoroutinefunction(self.on_close):
                 if self.on_close is not None:
                     await self.on_close()
@@ -416,7 +419,7 @@ class Yamux(IMuxedConn):
                 header = struct.pack(
                     YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_SYN, stream_id, 0
                 )
-                logging.debug(f"Sending SYN header for stream {stream_id}")
+                logger.debug(f"Sending SYN header for stream {stream_id}")
                 await self.secured_conn.write(header)
                 return stream
             except Exception as e:
@@ -424,32 +427,32 @@ class Yamux(IMuxedConn):
                 raise e

     async def accept_stream(self) -> IMuxedStream:
-        logging.debug("Waiting for new stream")
+        logger.debug("Waiting for new stream")
         try:
             stream = await self.new_stream_receive_channel.receive()
-            logging.debug(f"Received stream {stream.stream_id}")
+            logger.debug(f"Received stream {stream.stream_id}")
             return stream
         except trio.EndOfChannel:
             raise MuxedStreamError("No new streams available")

     async def read_stream(self, stream_id: int, n: int = -1) -> bytes:
-        logging.debug(f"Reading from stream {self.peer_id}:{stream_id}, n={n}")
+        logger.debug(f"Reading from stream {self.peer_id}:{stream_id}, n={n}")
         if n is None:
             n = -1

         while True:
             async with self.streams_lock:
                 if stream_id not in self.streams:
-                    logging.debug(f"Stream {self.peer_id}:{stream_id} unknown")
+                    logger.debug(f"Stream {self.peer_id}:{stream_id} unknown")
                     raise MuxedStreamEOF("Stream closed")
                 if self.event_shutting_down.is_set():
-                    logging.debug(
+                    logger.debug(
                         f"Stream {self.peer_id}:{stream_id}: connection shutting down"
                     )
                     raise MuxedStreamEOF("Connection shut down")
                 stream = self.streams[stream_id]
                 buffer = self.stream_buffers.get(stream_id)
-                logging.debug(
+                logger.debug(
                     f"Stream {self.peer_id}:{stream_id}: "
                     f"closed={stream.closed}, "
                     f"recv_closed={stream.recv_closed}, "
@@ -457,7 +460,7 @@ class Yamux(IMuxedConn):
                     f"buffer_len={len(buffer) if buffer else 0}"
                 )
                 if buffer is None:
-                    logging.debug(
+                    logger.debug(
                         f"Stream {self.peer_id}:{stream_id}:"
                         f"Buffer gone, assuming closed"
                     )
@@ -470,7 +473,7 @@ class Yamux(IMuxedConn):
                     else:
                         data = bytes(buffer[:n])
                         del buffer[:n]
-                    logging.debug(
+                    logger.debug(
                         f"Returning {len(data)} bytes"
                         f"from stream {self.peer_id}:{stream_id}, "
                         f"buffer_len={len(buffer)}"
@@ -478,7 +481,7 @@ class Yamux(IMuxedConn):
                     return data
                 # If reset received and buffer is empty, raise reset
                 if stream.reset_received:
-                    logging.debug(
+                    logger.debug(
                         f"Stream {self.peer_id}:{stream_id}:"
                         f"reset_received=True, raising MuxedStreamReset"
                     )
@@ -491,7 +494,7 @@ class Yamux(IMuxedConn):
                     else:
                         data = bytes(buffer[:n])
                         del buffer[:n]
-                    logging.debug(
+                    logger.debug(
                         f"Returning {len(data)} bytes"
                         f"from stream {self.peer_id}:{stream_id}, "
                         f"buffer_len={len(buffer)}"
@@ -499,21 +502,21 @@ class Yamux(IMuxedConn):
                     return data
                 # Check if stream is closed
                 if stream.closed:
-                    logging.debug(
+                    logger.debug(
                         f"Stream {self.peer_id}:{stream_id}:"
                         f"closed=True, raising MuxedStreamReset"
                     )
                     raise MuxedStreamReset("Stream is reset or closed")
                 # Check if recv_closed and buffer empty
                 if stream.recv_closed:
-                    logging.debug(
+                    logger.debug(
                         f"Stream {self.peer_id}:{stream_id}:"
                         f"recv_closed=True, buffer empty, raising EOF"
                     )
                     raise MuxedStreamEOF("Stream is closed for receiving")

             # Wait for data if stream is still open
-            logging.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}")
+            logger.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}")
             try:
                 await self.stream_events[stream_id].wait()
                 self.stream_events[stream_id] = trio.Event()
@@ -528,7 +531,7 @@ class Yamux(IMuxedConn):
             try:
                 header = await self.secured_conn.read(HEADER_SIZE)
                 if not header or len(header) < HEADER_SIZE:
-                    logging.debug(
+                    logger.debug(
                         f"Connection closed or incomplete header for peer {self.peer_id}"
                     )
                     self.event_shutting_down.set()
@@ -537,7 +540,7 @@ class Yamux(IMuxedConn):
                 version, typ, flags, stream_id, length = struct.unpack(
                     YAMUX_HEADER_FORMAT, header
                 )
-                logging.debug(
+                logger.debug(
                     f"Received header for peer {self.peer_id}:"
                     f"type={typ}, flags={flags}, stream_id={stream_id},"
                     f"length={length}"
@@ -558,7 +561,7 @@ class Yamux(IMuxedConn):
                         0,
                     )
                     await self.secured_conn.write(ack_header)
-                    logging.debug(
+                    logger.debug(
                         f"Sending stream {stream_id}"
                         f"to channel for peer {self.peer_id}"
                     )
@@ -576,7 +579,7 @@ class Yamux(IMuxedConn):
                 elif typ == TYPE_DATA and flags & FLAG_RST:
                     async with self.streams_lock:
                         if stream_id in self.streams:
-                            logging.debug(
+                            logger.debug(
                                 f"Resetting stream {stream_id} for peer {self.peer_id}"
                             )
                             self.streams[stream_id].closed = True
@@ -585,27 +588,27 @@ class Yamux(IMuxedConn):
                 elif typ == TYPE_DATA and flags & FLAG_ACK:
                     async with self.streams_lock:
                         if stream_id in self.streams:
-                            logging.debug(
+                            logger.debug(
                                 f"Received ACK for stream"
                                 f"{stream_id} for peer {self.peer_id}"
                             )
                 elif typ == TYPE_GO_AWAY:
                     error_code = length
                     if error_code == GO_AWAY_NORMAL:
-                        logging.debug(
+                        logger.debug(
                             f"Received GO_AWAY for peer"
                             f"{self.peer_id}: Normal termination"
                         )
                     elif error_code == GO_AWAY_PROTOCOL_ERROR:
-                        logging.error(
+                        logger.error(
                             f"Received GO_AWAY for peer {self.peer_id}: Protocol error"
                         )
                     elif error_code == GO_AWAY_INTERNAL_ERROR:
-                        logging.error(
+                        logger.error(
                             f"Received GO_AWAY for peer {self.peer_id}: Internal error"
                         )
                     else:
-                        logging.error(
+                        logger.error(
                             f"Received GO_AWAY for peer {self.peer_id}"
                             f"with unknown error code: {error_code}"
                         )
@@ -614,7 +617,7 @@ class Yamux(IMuxedConn):
                     break
                 elif typ == TYPE_PING:
                     if flags & FLAG_SYN:
-                        logging.debug(
+                        logger.debug(
                             f"Received ping request with value"
                             f"{length} for peer {self.peer_id}"
                         )
@@ -623,7 +626,7 @@ class Yamux(IMuxedConn):
                         )
                         await self.secured_conn.write(ping_header)
                     elif flags & FLAG_ACK:
-                        logging.debug(
+                        logger.debug(
                             f"Received ping response with value"
                             f"{length} for peer {self.peer_id}"
                         )
@@ -637,7 +640,7 @@ class Yamux(IMuxedConn):
                         self.stream_buffers[stream_id].extend(data)
                         self.stream_events[stream_id].set()
                         if flags & FLAG_FIN:
-                            logging.debug(
+                            logger.debug(
                                 f"Received FIN for stream {self.peer_id}:"
                                 f"{stream_id}, marking recv_closed"
                             )
@@ -645,7 +648,7 @@ class Yamux(IMuxedConn):
                             if self.streams[stream_id].send_closed:
                                 self.streams[stream_id].closed = True
                     except Exception as e:
-                        logging.error(f"Error reading data for stream {stream_id}: {e}")
+                        logger.error(f"Error reading data for stream {stream_id}: {e}")
                         # Mark stream as closed on read error
                         async with self.streams_lock:
                             if stream_id in self.streams:
@@ -659,7 +662,7 @@ class Yamux(IMuxedConn):
                     if stream_id in self.streams:
                         stream = self.streams[stream_id]
                         async with stream.window_lock:
-                            logging.debug(
+                            logger.debug(
                                 f"Received window update for stream"
                                 f"{self.peer_id}:{stream_id},"
                                 f" increment: {increment}"
@@ -674,7 +677,7 @@ class Yamux(IMuxedConn):
                     and details.get("requested_count") == 2
                     and details.get("received_count") == 0
                 ):
-                    logging.info(
+                    logger.info(
                         f"Stream closed cleanly for peer {self.peer_id}"
                         + f" (IncompleteReadError: {details})"
                     )
@@ -682,15 +685,32 @@ class Yamux(IMuxedConn):
                     await self._cleanup_on_error()
                     break
                 else:
-                    logging.error(
-                        f"Error in handle_incoming for peer {self.peer_id}: "
-                        + f"{type(e).__name__}: {str(e)}"
-                    )
-            else:
-                logging.error(
-                    f"Error in handle_incoming for peer {self.peer_id}: "
-                    + f"{type(e).__name__}: {str(e)}"
-                )
+                    # Handle RawConnError with more nuance
+                    if isinstance(e, RawConnError):
+                        error_msg = str(e)
+                        # If RawConnError is empty, it's likely normal cleanup
+                        if not error_msg.strip():
+                            logger.info(
+                                f"RawConnError (empty) during cleanup for peer "
+                                f"{self.peer_id} (normal connection shutdown)"
+                            )
+                        else:
+                            # Log non-empty RawConnError as warning
+                            logger.warning(
+                                f"RawConnError during connection handling for peer "
+                                f"{self.peer_id}: {error_msg}"
+                            )
+                    else:
+                        # Log all other errors normally
+                        logger.error(
+                            f"Error in handle_incoming for peer {self.peer_id}: "
+                            + f"{type(e).__name__}: {str(e)}"
+                        )
                 # Don't crash the whole connection for temporary errors
                 if self.event_shutting_down.is_set() or isinstance(
                     e, (RawConnError, OSError)
@@ -720,9 +740,9 @@ class Yamux(IMuxedConn):
         # Close the secured connection
         try:
             await self.secured_conn.close()
-            logging.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
+            logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
         except Exception as close_error:
-            logging.error(
+            logger.error(
                 f"Error closing secured_conn for peer {self.peer_id}: {close_error}"
             )

@@ -731,14 +751,14 @@ class Yamux(IMuxedConn):

         # Call on_close callback if provided
         if self.on_close:
-            logging.debug(f"Calling on_close for peer {self.peer_id}")
+            logger.debug(f"Calling on_close for peer {self.peer_id}")
             try:
                 if inspect.iscoroutinefunction(self.on_close):
                     await self.on_close()
                 else:
                     self.on_close()
             except Exception as callback_error:
-                logging.error(f"Error in on_close callback: {callback_error}")
+                logger.error(f"Error in on_close callback: {callback_error}")

         # Cancel nursery tasks
         if self._nursery:
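Since the module now logs through a named logger instead of the root logger, callers can tune yamux verbosity independently of the rest of the application; a minimal sketch:

import logging

# Enable debug output for yamux only, leaving everything else quieter.
logging.basicConfig(level=logging.WARNING)
logging.getLogger("libp2p.stream_muxer.yamux").setLevel(logging.DEBUG)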
@@ -7,6 +7,9 @@ from libp2p.utils.varint import (
     encode_varint_prefixed,
     read_delim,
     read_varint_prefixed_bytes,
+    decode_varint_from_bytes,
+    decode_varint_with_size,
+    read_length_prefixed_protobuf,
 )
 from libp2p.utils.version import (
     get_agent_version,
@@ -20,4 +23,7 @@ __all__ = [
     "get_agent_version",
     "read_delim",
     "read_varint_prefixed_bytes",
+    "decode_varint_from_bytes",
+    "decode_varint_with_size",
+    "read_length_prefixed_protobuf",
 ]
@@ -1,7 +1,9 @@
 import itertools
 import logging
 import math
+from typing import BinaryIO

+from libp2p.abc import INetStream
 from libp2p.exceptions import (
     ParseError,
 )
@@ -25,18 +27,41 @@ HIGH_MASK = 2**7
 SHIFT_64_BIT_MAX = int(math.ceil(64 / 7)) * 7


-def encode_uvarint(number: int) -> bytes:
-    """Pack `number` into varint bytes."""
-    buf = b""
-    while True:
-        towrite = number & 0x7F
-        number >>= 7
-        if number:
-            buf += bytes((towrite | 0x80,))
-        else:
-            buf += bytes((towrite,))
-            break
-    return buf
+def encode_uvarint(value: int) -> bytes:
+    """Encode an unsigned integer as a varint."""
+    if value < 0:
+        raise ValueError("Cannot encode negative value as uvarint")
+
+    result = bytearray()
+    while value >= 0x80:
+        result.append((value & 0x7F) | 0x80)
+        value >>= 7
+    result.append(value & 0x7F)
+    return bytes(result)
+
+
+def decode_uvarint(data: bytes) -> int:
+    """Decode a varint from bytes."""
+    if not data:
+        raise ParseError("Unexpected end of data")
+
+    result = 0
+    shift = 0
+
+    for byte in data:
+        result |= (byte & 0x7F) << shift
+        if (byte & 0x80) == 0:
+            break
+        shift += 7
+        if shift >= 64:
+            raise ValueError("Varint too long")
+
+    return result
+
+
+def decode_varint_from_bytes(data: bytes) -> int:
+    """Decode a varint from bytes (alias for decode_uvarint for backward comp)."""
+    return decode_uvarint(data)
@@ -44,7 +69,9 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
     res = 0
     for shift in itertools.count(0, 7):
         if shift > SHIFT_64_BIT_MAX:
-            raise ParseError("TODO: better exception msg: Integer is too large...")
+            raise ParseError(
+                "Varint decoding error: integer exceeds maximum size of 64 bits."
+            )

         byte = await read_exactly(reader, 1)
         value = byte[0]
@@ -56,9 +83,35 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
     return res


-def encode_varint_prefixed(msg_bytes: bytes) -> bytes:
-    varint_len = encode_uvarint(len(msg_bytes))
-    return varint_len + msg_bytes
+def decode_varint_with_size(data: bytes) -> tuple[int, int]:
+    """
+    Decode a varint from bytes and return both the value and the number of bytes
+    consumed.
+
+    Returns:
+        Tuple[int, int]: (value, bytes_consumed)
+
+    """
+    result = 0
+    shift = 0
+    bytes_consumed = 0
+
+    for byte in data:
+        result |= (byte & 0x7F) << shift
+        bytes_consumed += 1
+        if (byte & 0x80) == 0:
+            break
+        shift += 7
+        if shift >= 64:
+            raise ValueError("Varint too long")
+
+    return result, bytes_consumed
+
+
+def encode_varint_prefixed(data: bytes) -> bytes:
+    """Encode data with a varint length prefix."""
+    length_bytes = encode_uvarint(len(data))
+    return length_bytes + data


 async def read_varint_prefixed_bytes(reader: Reader) -> bytes:
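A round-trip sketch of the new helpers (values chosen for illustration):

encoded = encode_uvarint(300)   # b"\xac\x02": low 7 bits first, MSB marks continuation
assert decode_uvarint(encoded) == 300
assert decode_varint_with_size(encoded + b"payload") == (300, 2)

framed = encode_varint_prefixed(b"hello")
assert framed == b"\x05hello"   # a length of 5 fits in a single varint byte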
@@ -85,3 +138,95 @@ async def read_delim(reader: Reader) -> bytes:
             f'`msg_bytes` is not delimited by b"\\n": `msg_bytes`={msg_bytes!r}'
         )
     return msg_bytes[:-1]
+
+
+def read_varint_prefixed_bytes_sync(
+    stream: BinaryIO, max_length: int = 1024 * 1024
+) -> bytes:
+    """
+    Read varint-prefixed bytes from a stream.
+
+    Args:
+        stream: A stream-like object with a read() method
+        max_length: Maximum allowed data length to prevent memory exhaustion
+
+    Returns:
+        bytes: The data without the length prefix
+
+    Raises:
+        ValueError: If the length prefix is invalid or too large
+        EOFError: If the stream ends unexpectedly
+
+    """
+    # Read the varint length prefix
+    length_bytes = b""
+    while True:
+        byte_data = stream.read(1)
+        if not byte_data:
+            raise EOFError("Stream ended while reading varint length prefix")
+
+        length_bytes += byte_data
+        if byte_data[0] & 0x80 == 0:
+            break
+
+    # Decode the length
+    length = decode_uvarint(length_bytes)
+
+    if length > max_length:
+        raise ValueError(f"Data length {length} exceeds maximum allowed {max_length}")
+
+    # Read the data
+    data = stream.read(length)
+    if len(data) != length:
+        raise EOFError(f"Expected {length} bytes, got {len(data)}")
+
+    return data
+
+
+async def read_length_prefixed_protobuf(
+    stream: INetStream, use_varint_format: bool = True, max_length: int = 1024 * 1024
+) -> bytes:
+    """Read a protobuf message from a stream, handling both formats."""
+    if use_varint_format:
+        # Read a length-prefixed protobuf message from the stream.
+        # First read the varint length prefix, byte by byte.
+        length_bytes = b""
+        while True:
+            b = await stream.read(1)
+            if not b:
+                raise Exception("No length prefix received")
+
+            length_bytes += b
+            if b[0] & 0x80 == 0:
+                break
+
+        msg_length = decode_varint_from_bytes(length_bytes)
+
+        if msg_length > max_length:
+            raise Exception(
+                f"Message length {msg_length} exceeds maximum allowed {max_length}"
+            )
+
+        # Read the protobuf message
+        data = await stream.read(msg_length)
+        if len(data) != msg_length:
+            raise Exception(
+                f"Incomplete message: expected {msg_length}, got {len(data)}"
+            )
+
+        return data
+    else:
+        # Read a raw protobuf message from the stream.
+        # For the raw format, read all available data in one go.
+        data = await stream.read()
+
+        # If we got no data, raise an exception
+        if not data:
+            raise Exception("No data received in raw format")
+
+        if len(data) > max_length:
+            raise Exception(
+                f"Message length {len(data)} exceeds maximum allowed {max_length}"
+            )
+
+        return data
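The synchronous reader can be exercised against any object exposing a read() method, e.g. io.BytesIO (a sketch):

import io

framed = encode_varint_prefixed(b"hi there")
assert read_varint_prefixed_bytes_sync(io.BytesIO(framed)) == b"hi there"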
@@ -1 +0,0 @@
-Added support for ``Kademlia DHT`` in py-libp2p.

newsfragments/592.internal.rst (new file, 1 line)
@@ -0,0 +1 @@
+Remove obsolete FIXME comment; 32-byte prefix support is present but not enabled by default.

@@ -1 +0,0 @@
-Limit concurrency in `push_identify_to_peers` to prevent resource congestion under high peer counts.

@@ -1,7 +0,0 @@
-Store public key and peer ID in peerstore during handshake
-
-Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.
-
-Added test cases to verify:
-1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
-2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet

@@ -1,6 +0,0 @@
-Fixed several flow-control and concurrency issues in the `YamuxStream` class. Previously, stress-testing revealed that transferring data over `DEFAULT_WINDOW_SIZE` would break the stream due to inconsistent window update handling and lock management. The fixes include:
-
-- Removed sending of window updates during writes to maintain correct flow-control.
-- Added proper timeout handling when releasing and acquiring locks to prevent concurrency errors.
-- Corrected the `read` function to properly handle window updates for both `read_until_EOF` and `read_n_bytes`.
-- Added event logging at `send_window_updates` and `waiting_for_window_updates` for better observability.

@@ -1 +0,0 @@
-Added support for ``Multicast DNS`` in py-libp2p.

@@ -1 +0,0 @@
-Refactored gossipsub heartbeat logic to use a single helper method `_handle_topic_heartbeat` that handles both fanout and gossip heartbeats.

@@ -1 +0,0 @@
-Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity.

@@ -1,2 +0,0 @@
-Reordered the arguments to `upgrade_security` to place `is_initiator` before `peer_id`, and made `peer_id` optional.
-This allows the method to reflect the fact that peer identity is not required for inbound connections.

@@ -1 +0,0 @@
-Uses the `decapsulate` method of the `Multiaddr` class to clean up the observed address.

@@ -1 +0,0 @@
-Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic.

@@ -1 +0,0 @@
-Optimized pubsub message writing by implementing a write_msg() method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub.

@@ -1 +0,0 @@
-Added peer exchange and backoff logic as part of the Gossipsub v1.1 upgrade.

@@ -1,4 +0,0 @@
-Add timeout wrappers in:
-1. multiselect.py: `negotiate` function
-2. multiselect_client.py: `select_one_of`, `query_multistream_command` functions
-to prevent indefinite hangs when a remote peer does not respond.

@@ -1 +0,0 @@
-Align stream creation logic with the yamux specification.

@@ -1 +0,0 @@
-Fixed an issue in `Pubsub` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior.

@@ -1 +0,0 @@
-Added comprehensive tests for pubsub connection utility functions to verify degree limits are enforced, excess peers are handled correctly, and edge cases (degree=0, negative values, empty lists) are managed gracefully.

@@ -1 +0,0 @@
-Added extra tests for the identify push concurrency cap under high peer load.

newsfragments/711.feature.rst (new file, 1 line)
@@ -0,0 +1 @@
+Added `Bootstrap` peer discovery module that allows nodes to connect to predefined bootstrap peers for network discovery.

@@ -1 +0,0 @@
-Update cryptographic dependencies: pycryptodome to ≥3.19.1, pynacl to ≥1.5.0, coincurve to ≥21.0.0.

newsfragments/746.bugfix.rst (new file, 3 lines)
@@ -0,0 +1,3 @@
+Improved type safety in `get_mux()` and `get_protocols()` by returning properly typed values instead
+of `Any`. Also updated `identify.py` and `discovery.py` to handle `None` values safely and
+compare protocols correctly.

newsfragments/748.feature.rst (new file, 1 line)
@@ -0,0 +1 @@
+Add a read/write lock to avoid interleaving received messages in mplex_stream.py.

newsfragments/749.internal.rst (new file, 1 line)
@@ -0,0 +1 @@
+Add comprehensive tests for the relay_discovery method in circuit_relay_v2.

newsfragments/750.feature.rst (new file, 1 line)
@@ -0,0 +1 @@
+Add logic to the clear_peerdata method in peerstore.

newsfragments/752.internal.rst (new file, 1 line)
@@ -0,0 +1 @@
+[mplex] Add timeout and error handling during stream close.

newsfragments/753.feature.rst (new file, 2 lines)
@@ -0,0 +1,2 @@
+Added the `Certified Addr-Book` interface supported by the `Envelope` and `PeerRecord` classes.
+Integrated signed-peer-record transfer in the identify/push protocols.

newsfragments/755.performance.rst (new file, 2 lines)
@@ -0,0 +1,2 @@
+Added throttling for async topic validators in validate_msg, enforcing a
+concurrency limit to prevent resource exhaustion under heavy load.

newsfragments/757.bugfix.rst (new file, 1 line)
@@ -0,0 +1 @@
+Fixed a malformed PeerId in test_peerinfo.

newsfragments/757.internal.rst (new file, 1 line)
@@ -0,0 +1 @@
+Fixed a typecheck error using cast in peerinfo.py.

newsfragments/760.docs.rst (new file, 1 line)
@@ -0,0 +1 @@
+Improve the error message of decode_uvarint_from_stream in libp2p/utils/varint.py.

newsfragments/761.breaking.rst (new file, 1 line)
@@ -0,0 +1 @@
+The identify protocol now uses length-prefixed messages by default; use the use_varint_format param for the old raw format.

newsfragments/761.feature.rst (new file, 1 line)
@@ -0,0 +1 @@
+Add length-prefixed support to the identify protocol.

newsfragments/761.internal.rst (new file, 1 line)
@@ -0,0 +1 @@
+Fix raw-format reading in the identify/push protocol and add comprehensive test coverage for both varint and raw formats.

newsfragments/766.internal.rst (new file, 1 line)
@@ -0,0 +1 @@
+Pin the py-multiaddr dependency to git commit db8124e2321f316d3b7d2733c7df11d6ad9c03e6.

newsfragments/772.internal.rst (new file, 1 line)
@@ -0,0 +1 @@
+Replace the libp2p.peer.ID cache attributes with the functools.cached_property decorator.
Some files were not shown because too many files have changed in this diff.