5 Commits

180 changed files with 554 additions and 11533 deletions

View File

@@ -60,7 +60,6 @@ PB = libp2p/crypto/pb/crypto.proto \
	libp2p/identity/identify/pb/identify.proto \
	libp2p/host/autonat/pb/autonat.proto \
	libp2p/relay/circuit_v2/pb/circuit.proto \
-	libp2p/relay/circuit_v2/pb/dcutr.proto \
	libp2p/kad_dht/pb/kademlia.proto
PY = $(PB:.proto=_pb2.py)
@@ -69,8 +68,6 @@ PYI = $(PB:.proto=_pb2.pyi)
## Set default to `protobufs`, otherwise `format` is called when typing only `make`
all: protobufs
-.PHONY: protobufs clean-proto
protobufs: $(PY)
%_pb2.py: %.proto
@@ -79,11 +76,6 @@ protobufs: $(PY)
clean-proto:
	rm -f $(PY) $(PYI)
-# Force protobuf regeneration by making them always out of date
-$(PY): FORCE
-FORCE:
# docs commands
docs: check-docs

View File

@@ -12,13 +12,13 @@
[![Build Status](https://img.shields.io/github/actions/workflow/status/libp2p/py-libp2p/tox.yml?branch=main&label=build%20status)](https://github.com/libp2p/py-libp2p/actions/workflows/tox.yml)
[![Docs build](https://readthedocs.org/projects/py-libp2p/badge/?version=latest)](http://py-libp2p.readthedocs.io/en/latest/?badge=latest)
-> py-libp2p has moved beyond its experimental roots and is steadily progressing toward production readiness. The core features are stable, and we're focused on refining performance, expanding protocol support, and ensuring smooth interop with other libp2p implementations. We welcome contributions and real-world usage feedback to help us reach full production maturity.
+> ⚠️ **Warning:** py-libp2p is an experimental and work-in-progress repo under development. We do not yet recommend using py-libp2p in production environments.
Read more in the [documentation on ReadTheDocs](https://py-libp2p.readthedocs.io/). [View the release notes](https://py-libp2p.readthedocs.io/en/latest/release_notes.html).
## Maintainers
-Currently maintained by [@pacrob](https://github.com/pacrob), [@seetadev](https://github.com/seetadev) and [@dhuseby](https://github.com/dhuseby). Please reach out to us for collaboration or active feedback. If you have questions, feel free to open a new [discussion](https://github.com/libp2p/py-libp2p/discussions). We are also available on the libp2p Discord — join us at #py-libp2p [sub-channel](https://discord.gg/d92MEugb).
+Currently maintained by [@pacrob](https://github.com/pacrob), [@seetadev](https://github.com/seetadev) and [@dhuseby](https://github.com/dhuseby), looking for assistance!
## Feature Breakdown
@@ -34,19 +34,19 @@ ______________________________________________________________________
| -------------------------------------- | :--------: | :---------------------------------------------------------------------------------: |
| **`libp2p-tcp`** | ✅ | [source](https://github.com/libp2p/py-libp2p/blob/main/libp2p/transport/tcp/tcp.py) |
| **`libp2p-quic`** | 🌱 | |
-| **`libp2p-websocket`** | 🌱 | |
-| **`libp2p-webrtc-browser-to-server`** | 🌱 | |
-| **`libp2p-webrtc-private-to-private`** | 🌱 | |
+| **`libp2p-websocket`** | | |
+| **`libp2p-webrtc-browser-to-server`** | | |
+| **`libp2p-webrtc-private-to-private`** | | |
______________________________________________________________________
### NAT Traversal
-| **NAT Traversal** | **Status** | **Source** |
-| ----------------------------- | :--------: | :-----------------------------------------------------------------------------: |
-| **`libp2p-circuit-relay-v2`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/relay/circuit_v2) |
-| **`libp2p-autonat`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/host/autonat) |
-| **`libp2p-hole-punching`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/relay/circuit_v2) |
+| **NAT Traversal** | **Status** |
+| ----------------------------- | :--------: |
+| **`libp2p-circuit-relay-v2`** | |
+| **`libp2p-autonat`** | |
+| **`libp2p-hole-punching`** | |
______________________________________________________________________
@@ -54,27 +54,27 @@ ______________________________________________________________________
| **Secure Communication** | **Status** | **Source** |
| ------------------------ | :--------: | :---------------------------------------------------------------------------: |
-| **`libp2p-noise`** | | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/security/noise) |
-| **`libp2p-tls`** | 🌱 | |
+| **`libp2p-noise`** | 🌱 | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/security/noise) |
+| **`libp2p-tls`** | | |
______________________________________________________________________
### Discovery
-| **Discovery** | **Status** | **Source** |
-| -------------------- | :--------: | :--------------------------------------------------------------------------------: |
-| **`bootstrap`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/discovery/bootstrap) |
-| **`random-walk`** | 🌱 | |
-| **`mdns-discovery`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/discovery/mdns) |
-| **`rendezvous`** | 🌱 | |
+| **Discovery** | **Status** |
+| -------------------- | :--------: |
+| **`bootstrap`** | |
+| **`random-walk`** | |
+| **`mdns-discovery`** | |
+| **`rendezvous`** | |
______________________________________________________________________
### Peer Routing
-| **Peer Routing** | **Status** | **Source** |
-| -------------------- | :--------: | :--------------------------------------------------------------------: |
-| **`libp2p-kad-dht`** | | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/kad_dht) |
+| **Peer Routing** | **Status** |
+| -------------------- | :--------: |
+| **`libp2p-kad-dht`** | |
______________________________________________________________________
@@ -89,10 +89,10 @@ ______________________________________________________________________
### Stream Muxers
-| **Stream Muxers** | **Status** | **Source** |
-| ------------------ | :--------: | :-------------------------------------------------------------------------------: |
-| **`libp2p-yamux`** | | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/stream_muxer/yamux) |
-| **`libp2p-mplex`** | | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/stream_muxer/mplex) |
+| **Stream Muxers** | **Status** | **Source** |
+| ------------------ | :--------: | :----------------------------------------------------------------------------------------: |
+| **`libp2p-yamux`** | 🌱 | |
+| **`libp2p-mplex`** | 🛠️ | [source](https://github.com/libp2p/py-libp2p/blob/main/libp2p/stream_muxer/mplex/mplex.py) |
______________________________________________________________________
@@ -100,7 +100,7 @@ ______________________________________________________________________
| **Storage** | **Status** |
| ------------------- | :--------: |
-| **`libp2p-record`** | 🌱 |
+| **`libp2p-record`** | |
______________________________________________________________________

View File

@@ -1,131 +0,0 @@
Random Walk Example
===================
This example demonstrates the Random Walk module's peer discovery capabilities using real libp2p hosts and Kademlia DHT.
It shows how the Random Walk module automatically discovers new peers and maintains routing table health.
The Random Walk implementation performs the following key operations:
* **Automatic Peer Discovery**: Generates random peer IDs and queries the DHT network to discover new peers
* **Routing Table Maintenance**: Periodically refreshes the routing table to maintain network connectivity
* **Connection Management**: Maintains optimal connections to healthy peers in the network
* **Real-time Statistics**: Displays routing table size, connected peers, and peerstore statistics
.. code-block:: console
$ python -m pip install libp2p
Collecting libp2p
...
Successfully installed libp2p-x.x.x
$ cd examples/random_walk
$ python random_walk.py --mode server
2025-08-12 19:51:25,424 - random-walk-example - INFO - === Random Walk Example for py-libp2p ===
2025-08-12 19:51:25,424 - random-walk-example - INFO - Mode: server, Port: 0 Demo interval: 30s
2025-08-12 19:51:25,426 - random-walk-example - INFO - Starting server node on port 45123
2025-08-12 19:51:25,426 - random-walk-example - INFO - Node peer ID: 16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef
2025-08-12 19:51:25,426 - random-walk-example - INFO - Node address: /ip4/0.0.0.0/tcp/45123/p2p/16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef
2025-08-12 19:51:25,427 - random-walk-example - INFO - Initial routing table size: 0
2025-08-12 19:51:25,427 - random-walk-example - INFO - DHT service started in SERVER mode
2025-08-12 19:51:25,430 - libp2p.discovery.random_walk.rt_refresh_manager - INFO - RT Refresh Manager started
2025-08-12 19:51:55,432 - random-walk-example - INFO - --- Iteration 1 ---
2025-08-12 19:51:55,432 - random-walk-example - INFO - Routing table size: 15
2025-08-12 19:51:55,432 - random-walk-example - INFO - Connected peers: 8
2025-08-12 19:51:55,432 - random-walk-example - INFO - Peerstore size: 42
You can also run the example in client mode:
.. code-block:: console
$ python random_walk.py --mode client
2025-08-12 19:52:15,424 - random-walk-example - INFO - === Random Walk Example for py-libp2p ===
2025-08-12 19:52:15,424 - random-walk-example - INFO - Mode: client, Port: 0 Demo interval: 30s
2025-08-12 19:52:15,426 - random-walk-example - INFO - Starting client node on port 51234
2025-08-12 19:52:15,426 - random-walk-example - INFO - Node peer ID: 16Uiu2HAmAbc123xyz...
2025-08-12 19:52:15,427 - random-walk-example - INFO - DHT service started in CLIENT mode
2025-08-12 19:52:45,432 - random-walk-example - INFO - --- Iteration 1 ---
2025-08-12 19:52:45,432 - random-walk-example - INFO - Routing table size: 8
2025-08-12 19:52:45,432 - random-walk-example - INFO - Connected peers: 5
2025-08-12 19:52:45,432 - random-walk-example - INFO - Peerstore size: 25
Command Line Options
--------------------
The example supports several command-line options:
.. code-block:: console
$ python random_walk.py --help
usage: random_walk.py [-h] [--mode {server,client}] [--port PORT]
[--demo-interval DEMO_INTERVAL] [--verbose]
Random Walk Example for py-libp2p Kademlia DHT
optional arguments:
-h, --help show this help message and exit
--mode {server,client}
Node mode: server (DHT server), or client (DHT client)
--port PORT Port to listen on (0 for random)
--demo-interval DEMO_INTERVAL
Interval between random walk demonstrations in seconds
--verbose Enable verbose logging
Key Features Demonstrated
-------------------------
**Automatic Random Walk Discovery**
The example shows how the Random Walk module automatically:
* Generates random 256-bit peer IDs for discovery queries
* Performs concurrent random walks to maximize peer discovery
* Validates discovered peers and adds them to the routing table
* Maintains routing table health through periodic refreshes
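At its core, each walk is simply a DHT lookup for a key that almost certainly matches no live peer. A minimal sketch of the idea (``find_closest_peers_to`` and ``add_peer`` stand in for the module's internal calls and are illustrative, not the actual API):

.. code-block:: python

    import secrets

    async def one_random_walk(dht) -> None:
        # A random 256-bit key; the odds of colliding with a real peer ID
        # are negligible, so the lookup just walks toward that key's region.
        target = secrets.token_bytes(32)

        # Every peer contacted while converging on the target is a freshly
        # discovered candidate for the routing table.
        for peer_id in await dht.find_closest_peers_to(target):
            dht.routing_table.add_peer(peer_id)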
**Real-time Network Statistics**
The example displays live statistics every 30 seconds (configurable):
* **Routing Table Size**: Number of peers in the Kademlia routing table
* **Connected Peers**: Number of actively connected peers
* **Peerstore Size**: Total number of known peers with addresses
**Connection Management**
The example includes sophisticated connection management:
* Automatically maintains connections to healthy peers
* Filters for compatible peers (TCP + IPv4 addresses)
* Reconnects to maintain optimal network connectivity
* Handles connection failures gracefully
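The TCP + IPv4 filter amounts to a small predicate over each peer's multiaddrs; a sketch using py-multiaddr (the helper name is invented for illustration):

.. code-block:: python

    from multiaddr import Multiaddr

    def is_dialable(addr: Multiaddr) -> bool:
        # Keep only addresses this example's transport can dial directly.
        names = {proto.name for proto in addr.protocols()}
        return "ip4" in names and "tcp" in names

    assert is_dialable(Multiaddr("/ip4/127.0.0.1/tcp/4001"))
    assert not is_dialable(Multiaddr("/ip6/::1/tcp/4001"))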
**DHT Integration**
Shows seamless integration between Random Walk and Kademlia DHT:
* RT Refresh Manager coordinates with the DHT routing table
* Peer discovery feeds directly into DHT operations
* Both SERVER and CLIENT modes supported
* Bootstrap connectivity to public IPFS nodes
Understanding the Output
------------------------
When you run the example, you'll see periodic statistics that show how the Random Walk module is working:
* **Initial Phase**: Routing table starts empty and quickly discovers peers
* **Growth Phase**: Routing table size increases as more peers are discovered
* **Maintenance Phase**: Routing table size stabilizes as the system maintains optimal peer connections
The Random Walk module runs automatically in the background, performing peer discovery queries every few minutes to ensure the routing table remains populated with fresh, reachable peers.
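Conceptually, that background behavior is a periodic refresh loop; a trio-style sketch (``run_concurrent_walks`` is a stand-in for the module's internals):

.. code-block:: python

    import trio

    async def refresh_loop(run_concurrent_walks, interval: float = 300) -> None:
        # Re-populate the routing table on a fixed cadence, matching
        # REFRESH_INTERVAL from the configuration below.
        while True:
            await run_concurrent_walks()
            await trio.sleep(interval)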
Configuration
-------------
The Random Walk module can be configured through the following parameters in ``libp2p.discovery.random_walk.config``:
* ``RANDOM_WALK_ENABLED``: Enable/disable automatic random walks (default: True)
* ``REFRESH_INTERVAL``: Time between automatic refreshes in seconds (default: 300)
* ``RANDOM_WALK_CONCURRENCY``: Number of concurrent random walks (default: 3)
* ``MIN_RT_REFRESH_THRESHOLD``: Minimum routing table size before triggering refresh (default: 4)
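Because these are module-level constants, a script can adjust them before the DHT service starts; a minimal sketch, assuming the values are read at refresh time:

.. code-block:: python

    from libp2p.discovery.random_walk import config

    # Walk more aggressively than the defaults listed above.
    config.REFRESH_INTERVAL = 120        # seconds between refreshes
    config.RANDOM_WALK_CONCURRENCY = 5   # concurrent walks per refresh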
See Also
--------
* :doc:`examples.kademlia` - Kademlia DHT value storage and content routing
* :doc:`libp2p.discovery.random_walk` - Random Walk module API documentation

View File

@@ -14,4 +14,3 @@ Examples
examples.circuit_relay
examples.kademlia
examples.mDNS
-examples.random_walk

View File

@@ -1,13 +0,0 @@
libp2p.discovery.bootstrap package
==================================
Submodules
----------
Module contents
---------------
.. automodule:: libp2p.discovery.bootstrap
:members:
:undoc-members:
:show-inheritance:

View File

@@ -1,48 +0,0 @@
libp2p.discovery.random_walk package
====================================
The Random Walk module implements a peer discovery mechanism.
It performs random walks through the DHT network to discover new peers and maintain routing table health through periodic refreshes.
Submodules
----------
libp2p.discovery.random_walk.config module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: libp2p.discovery.random_walk.config
:members:
:undoc-members:
:show-inheritance:
libp2p.discovery.random_walk.exceptions module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: libp2p.discovery.random_walk.exceptions
:members:
:undoc-members:
:show-inheritance:
libp2p.discovery.random_walk.random_walk module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: libp2p.discovery.random_walk.random_walk
:members:
:undoc-members:
:show-inheritance:
libp2p.discovery.random_walk.rt_refresh_manager module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: libp2p.discovery.random_walk.rt_refresh_manager
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: libp2p.discovery.random_walk
:members:
:undoc-members:
:show-inheritance:

View File

@@ -7,10 +7,8 @@ Subpackages
.. toctree::
   :maxdepth: 4
-  libp2p.discovery.bootstrap
   libp2p.discovery.events
   libp2p.discovery.mdns
-  libp2p.discovery.random_walk
Submodules
----------

View File

@@ -3,65 +3,6 @@ Release Notes
.. towncrier release notes start
-py-libp2p v0.2.9 (2025-07-09)
------------------------------
-Breaking Changes
-~~~~~~~~~~~~~~~~
-- Reordered the arguments to ``upgrade_security`` to place ``is_initiator`` before ``peer_id``, and made ``peer_id`` optional.
-  This allows the method to reflect the fact that peer identity is not required for inbound connections. (`#681 <https://github.com/libp2p/py-libp2p/issues/681>`__)
-Bugfixes
-~~~~~~~~
-- Add timeout wrappers in:
-  1. ``multiselect.py``: ``negotiate`` function
-  2. ``multiselect_client.py``: ``select_one_of`` , ``query_multistream_command`` functions
-  to prevent indefinite hangs when a remote peer does not respond. (`#696 <https://github.com/libp2p/py-libp2p/issues/696>`__)
-- Align stream creation logic with yamux specification (`#701 <https://github.com/libp2p/py-libp2p/issues/701>`__)
-- Fixed an issue in ``Pubsub`` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior. (`#702 <https://github.com/libp2p/py-libp2p/issues/702>`__)
-Features
-~~~~~~~~
-- Added support for ``Kademlia DHT`` in py-libp2p. (`#579 <https://github.com/libp2p/py-libp2p/issues/579>`__)
-- Limit concurrency in ``push_identify_to_peers`` to prevent resource congestion under high peer counts. (`#621 <https://github.com/libp2p/py-libp2p/issues/621>`__)
-- Store public key and peer ID in peerstore during handshake
-  Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.
-  Added test cases to verify:
-  1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
-  2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet (`#631 <https://github.com/libp2p/py-libp2p/issues/631>`__)
-- Fixed several flow-control and concurrency issues in the ``YamuxStream`` class. Previously, stress-testing revealed that transferring data over ``DEFAULT_WINDOW_SIZE`` would break the stream due to inconsistent window update handling and lock management. The fixes include:
-  - Removed sending of window updates during writes to maintain correct flow-control.
-  - Added proper timeout handling when releasing and acquiring locks to prevent concurrency errors.
-  - Corrected the ``read`` function to properly handle window updates for both ``read_until_EOF`` and ``read_n_bytes``.
-  - Added event logging at ``send_window_updates`` and ``waiting_for_window_updates`` for better observability. (`#639 <https://github.com/libp2p/py-libp2p/issues/639>`__)
-- Added support for ``Multicast DNS`` in py-libp2p (`#649 <https://github.com/libp2p/py-libp2p/issues/649>`__)
-- Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic. (`#685 <https://github.com/libp2p/py-libp2p/issues/685>`__)
-- Optimized pubsub message writing by implementing a write_msg() method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub. (`#687 <https://github.com/libp2p/py-libp2p/issues/687>`__)
-- Added peer exchange and backoff logic as part of Gossipsub v1.1 upgrade (`#690 <https://github.com/libp2p/py-libp2p/issues/690>`__)
-Internal Changes - for py-libp2p Contributors
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity. (`#679 <https://github.com/libp2p/py-libp2p/issues/679>`__)
-- Added comprehensive tests for pubsub connection utility functions to verify degree limits are enforced, excess peers are handled correctly, and edge cases (degree=0, negative values, empty lists) are managed gracefully. (`#707 <https://github.com/libp2p/py-libp2p/issues/707>`__)
-- Added extra tests for identify push concurrency cap under high peer load (`#708 <https://github.com/libp2p/py-libp2p/issues/708>`__)
-Miscellaneous Changes
-~~~~~~~~~~~~~~~~~~~~~
-- `#678 <https://github.com/libp2p/py-libp2p/issues/678>`__, `#684 <https://github.com/libp2p/py-libp2p/issues/684>`__
py-libp2p v0.2.8 (2025-06-10)
-----------------------------

View File

@@ -1,63 +0,0 @@
"""
Advanced demonstration of Thin Waist address handling.
Run:
python -m examples.advanced.network_discovery
"""
from __future__ import annotations
from multiaddr import Multiaddr
try:
from libp2p.utils.address_validation import (
expand_wildcard_address,
get_available_interfaces,
get_optimal_binding_address,
)
except ImportError:
# Fallbacks if utilities are missing
def get_available_interfaces(port: int, protocol: str = "tcp"):
return [Multiaddr(f"/ip4/0.0.0.0/{protocol}/{port}")]
def expand_wildcard_address(addr: Multiaddr, port: int | None = None):
if port is None:
return [addr]
addr_str = str(addr).rsplit("/", 1)[0]
return [Multiaddr(addr_str + f"/{port}")]
def get_optimal_binding_address(port: int, protocol: str = "tcp"):
return Multiaddr(f"/ip4/0.0.0.0/{protocol}/{port}")
def main() -> None:
port = 8080
interfaces = get_available_interfaces(port)
print(f"Discovered interfaces for port {port}:")
for a in interfaces:
print(f" - {a}")
wildcard_v4 = Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
expanded_v4 = expand_wildcard_address(wildcard_v4)
print("\nExpanded IPv4 wildcard:")
for a in expanded_v4:
print(f" - {a}")
wildcard_v6 = Multiaddr(f"/ip6/::/tcp/{port}")
expanded_v6 = expand_wildcard_address(wildcard_v6)
print("\nExpanded IPv6 wildcard:")
for a in expanded_v6:
print(f" - {a}")
print("\nOptimal binding address heuristic result:")
print(f" -> {get_optimal_binding_address(port)}")
override_port = 9000
overridden = expand_wildcard_address(wildcard_v4, port=override_port)
print(f"\nPort override expansion to {override_port}:")
for a in overridden:
print(f" - {a}")
if __name__ == "__main__":
main()

View File

@@ -1,136 +0,0 @@
import argparse
import logging
import secrets

import multiaddr
import trio

from libp2p import new_host
from libp2p.abc import PeerInfo
from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.discovery.events.peerDiscovery import peerDiscovery

# Configure logging
logger = logging.getLogger("libp2p.discovery.bootstrap")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logger.addHandler(handler)

# Configure root logger to only show warnings and above to reduce noise
# This prevents verbose DEBUG messages from multiaddr, DNS, etc.
logging.getLogger().setLevel(logging.WARNING)

# Specifically silence noisy libraries
logging.getLogger("multiaddr").setLevel(logging.WARNING)
logging.getLogger("root").setLevel(logging.WARNING)


def on_peer_discovery(peer_info: PeerInfo) -> None:
    """Handler for peer discovery events."""
    logger.info(f"🔍 Discovered peer: {peer_info.peer_id}")
    logger.debug(f" Addresses: {[str(addr) for addr in peer_info.addrs]}")


# Example bootstrap peers
BOOTSTRAP_PEERS = [
    "/dnsaddr/github.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/cloudflare.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/google.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip4/128.199.219.111/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip4/104.236.76.40/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip4/178.62.158.247/tcp/4001/p2p/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
    "/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip6/2400:6180:0:d0::151:6001/tcp/4001/p2p/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
    "/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/p2p/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm",
]


async def run(port: int, bootstrap_addrs: list[str]) -> None:
    """Run the bootstrap discovery example."""
    # Generate key pair
    secret = secrets.token_bytes(32)
    key_pair = create_new_key_pair(secret)

    # Create listen address
    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")

    # Register peer discovery handler
    peerDiscovery.register_peer_discovered_handler(on_peer_discovery)

    logger.info("🚀 Starting Bootstrap Discovery Example")
    logger.info(f"📍 Listening on: {listen_addr}")
    logger.info(f"🌐 Bootstrap peers: {len(bootstrap_addrs)}")

    print("\n" + "=" * 60)
    print("Bootstrap Discovery Example")
    print("=" * 60)
    print("This example demonstrates connecting to bootstrap peers.")
    print("Watch the logs for peer discovery events!")
    print("Press Ctrl+C to exit.")
    print("=" * 60)

    # Create and run host with bootstrap discovery
    host = new_host(key_pair=key_pair, bootstrap=bootstrap_addrs)

    try:
        async with host.run(listen_addrs=[listen_addr]):
            # Keep running and log peer discovery events
            await trio.sleep_forever()
    except KeyboardInterrupt:
        logger.info("👋 Shutting down...")


def main() -> None:
    """Main entry point."""
    description = """
    Bootstrap Discovery Example for py-libp2p

    This example demonstrates how to use bootstrap peers for peer discovery.
    Bootstrap peers are predefined peers that help new nodes join the network.

    Usage:
        python bootstrap.py -p 8000
        python bootstrap.py -p 8001 --custom-bootstrap \\
            "/ip4/127.0.0.1/tcp/8000/p2p/QmYourPeerID"
    """
    parser = argparse.ArgumentParser(
        description=description, formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument(
        "-p", "--port", default=0, type=int, help="Port to listen on (default: random)"
    )
    parser.add_argument(
        "--custom-bootstrap",
        nargs="*",
        help="Custom bootstrap addresses (space-separated)",
    )
    parser.add_argument(
        "-v", "--verbose", action="store_true", help="Enable verbose output"
    )
    args = parser.parse_args()

    if args.verbose:
        logger.setLevel(logging.DEBUG)

    # Use custom bootstrap addresses if provided, otherwise use defaults
    bootstrap_addrs = (
        args.custom_bootstrap if args.custom_bootstrap else BOOTSTRAP_PEERS
    )

    try:
        trio.run(run, args.port, bootstrap_addrs)
    except KeyboardInterrupt:
        logger.info("Exiting...")


if __name__ == "__main__":
    main()

View File

@@ -43,9 +43,6 @@ async def run(port: int, destination: str) -> None:
    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
    host = new_host()
    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
        if not destination: # its the server
            async def stream_handler(stream: INetStream) -> None:

View File

@@ -1,6 +1,4 @@
import argparse
-import random
-import secrets
import multiaddr
import trio
@@ -14,71 +12,49 @@ from libp2p.crypto.secp256k1 import (
from libp2p.custom_types import (
    TProtocol,
)
-from libp2p.network.stream.exceptions import (
-    StreamEOF,
-)
from libp2p.network.stream.net_stream import (
    INetStream,
)
from libp2p.peer.peerinfo import (
    info_from_p2p_addr,
)
-from libp2p.utils.address_validation import (
-    find_free_port,
-    get_available_interfaces,
-)
PROTOCOL_ID = TProtocol("/echo/1.0.0")
MAX_READ_LEN = 2**32 - 1
async def _echo_stream_handler(stream: INetStream) -> None:
-    try:
-        peer_id = stream.muxed_conn.peer_id
-        print(f"Received connection from {peer_id}")
-        # Wait until EOF
-        msg = await stream.read(MAX_READ_LEN)
-        print(f"Echoing message: {msg.decode('utf-8')}")
-        await stream.write(msg)
-    except StreamEOF:
-        print("Stream closed by remote peer.")
-    except Exception as e:
-        print(f"Error in echo handler: {e}")
-    finally:
-        await stream.close()
+    # Wait until EOF
+    msg = await stream.read(MAX_READ_LEN)
+    await stream.write(msg)
+    await stream.close()
async def run(port: int, destination: str, seed: int | None = None) -> None:
-    if port <= 0:
-        port = find_free_port()
-    listen_addr = get_available_interfaces(port)
+    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
    if seed:
+        import random
        random.seed(seed)
        secret_number = random.getrandbits(32 * 8)
        secret = secret_number.to_bytes(length=32, byteorder="big")
    else:
+        import secrets
        secret = secrets.token_bytes(32)
    host = new_host(key_pair=create_new_key_pair(secret))
-    async with host.run(listen_addrs=listen_addr), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
+    async with host.run(listen_addrs=[listen_addr]):
        print(f"I am {host.get_id().to_string()}")
        if not destination: # its the server
            host.set_stream_handler(PROTOCOL_ID, _echo_stream_handler)
-            # Print all listen addresses with peer ID (JS parity)
-            print("Listener ready, listening on:\n")
-            peer_id = host.get_id().to_string()
-            for addr in listen_addr:
-                print(f"{addr}/p2p/{peer_id}")
            print(
-                "\nRun this from the same folder in another console:\n\n"
-                f"echo-demo -d {host.get_addrs()[0]}\n"
+                "Run this from the same folder in another console:\n\n"
+                f"echo-demo "
+                f"-d {host.get_addrs()[0]}\n"
            )
            print("Waiting for incoming connections...")
            await trio.sleep_forever()

View File

@@ -1,7 +1,6 @@
import argparse
import base64
import logging
-import sys
import multiaddr
import trio
@@ -9,13 +8,10 @@ import trio
from libp2p import (
    new_host,
)
-from libp2p.identity.identify.identify import (
-    ID as IDENTIFY_PROTOCOL_ID,
-    identify_handler_for,
-    parse_identify_response,
-)
-from libp2p.identity.identify.pb.identify_pb2 import Identify
-from libp2p.peer.envelope import debug_dump_envelope, unmarshal_envelope
+from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID
+from libp2p.identity.identify.pb.identify_pb2 import (
+    Identify,
+)
from libp2p.peer.peerinfo import (
    info_from_p2p_addr,
)
@@ -34,11 +30,10 @@ def decode_multiaddrs(raw_addrs):
    return decoded_addrs
-def print_identify_response(identify_response: Identify):
+def print_identify_response(identify_response):
    """Pretty-print Identify response."""
    public_key_b64 = base64.b64encode(identify_response.public_key).decode("utf-8")
    listen_addrs = decode_multiaddrs(identify_response.listen_addrs)
-    signed_peer_record = unmarshal_envelope(identify_response.signedPeerRecord)
    try:
        observed_addr_decoded = decode_multiaddrs([identify_response.observed_addr])
    except Exception:
@@ -54,10 +49,8 @@ def print_identify_response(identify_response: Identify):
        f" Agent Version: {identify_response.agent_version}"
    )
-    debug_dump_envelope(signed_peer_record)
-async def run(port: int, destination: str, use_varint_format: bool = True) -> None:
+async def run(port: int, destination: str) -> None:
    localhost_ip = "0.0.0.0"
    if not destination:
@@ -65,159 +58,39 @@ async def run(port: int, destination: str, use_varint_format: bool = True) -> No
        listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
        host_a = new_host()
-        # Set up identify handler with specified format
-        # Set use_varint_format = False, if want to checkout the Signed-PeerRecord
-        identify_handler = identify_handler_for(
-            host_a, use_varint_format=use_varint_format
-        )
-        host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, identify_handler)
-        async with (
-            host_a.run(listen_addrs=[listen_addr]),
-            trio.open_nursery() as nursery,
-        ):
-            # Start the peer-store cleanup task
-            nursery.start_soon(host_a.get_peerstore().start_cleanup_task, 60)
-            # Get the actual address and replace 0.0.0.0 with 127.0.0.1 for client
-            # connections
-            server_addr = str(host_a.get_addrs()[0])
-            client_addr = server_addr.replace("/ip4/0.0.0.0/", "/ip4/127.0.0.1/")
-            format_name = "length-prefixed" if use_varint_format else "raw protobuf"
-            format_flag = "--raw-format" if not use_varint_format else ""
+        async with host_a.run(listen_addrs=[listen_addr]):
            print(
-                f"First host listening (using {format_name} format). "
-                f"Run this from another console:\n\n"
-                f"identify-demo {format_flag} -d {client_addr}\n"
+                "First host listening. Run this from another console:\n\n"
+                f"identify-demo "
+                f"-d {host_a.get_addrs()[0]}\n"
            )
            print("Waiting for incoming identify request...")
+            await trio.sleep_forever()
-            # Add a custom handler to show connection events
-            async def custom_identify_handler(stream):
-                peer_id = stream.muxed_conn.peer_id
-                print(f"\n🔗 Received identify request from peer: {peer_id}")
-                # Show remote address in multiaddr format
-                try:
-                    from libp2p.identity.identify.identify import (
-                        _remote_address_to_multiaddr,
-                    )
-                    remote_address = stream.get_remote_address()
-                    if remote_address:
-                        observed_multiaddr = _remote_address_to_multiaddr(
-                            remote_address
-                        )
-                        # Add the peer ID to create a complete multiaddr
-                        complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
-                        print(f" Remote address: {complete_multiaddr}")
-                    else:
-                        print(f" Remote address: {remote_address}")
-                except Exception:
-                    print(f" Remote address: {stream.get_remote_address()}")
-                # Call the original handler
-                await identify_handler(stream)
-                print(f"✅ Successfully processed identify request from {peer_id}")
-            # Replace the handler with our custom one
-            host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, custom_identify_handler)
-            try:
-                await trio.sleep_forever()
-            except KeyboardInterrupt:
-                print("\n🛑 Shutting down listener...")
-                logger.info("Listener interrupted by user")
-                return
    else:
        # Create second host (dialer)
        listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
        host_b = new_host()
-        async with (
-            host_b.run(listen_addrs=[listen_addr]),
-            trio.open_nursery() as nursery,
-        ):
-            # Start the peer-store cleanup task
-            nursery.start_soon(host_b.get_peerstore().start_cleanup_task, 60)
+        async with host_b.run(listen_addrs=[listen_addr]):
            # Connect to the first host
            print(f"dialer (host_b) listening on {host_b.get_addrs()[0]}")
            maddr = multiaddr.Multiaddr(destination)
            info = info_from_p2p_addr(maddr)
            print(f"Second host connecting to peer: {info.peer_id}")
-            try:
-                await host_b.connect(info)
-            except Exception as e:
-                error_msg = str(e)
-                if "unable to connect" in error_msg or "SwarmException" in error_msg:
-                    print(f"\n❌ Cannot connect to peer: {info.peer_id}")
-                    print(f" Address: {destination}")
-                    print(f" Error: {error_msg}")
-                    print(
-                        "\n💡 Make sure the peer is running and the address is correct."
-                    )
-                    return
-                else:
-                    # Re-raise other exceptions
-                    raise
+            await host_b.connect(info)
            stream = await host_b.new_stream(info.peer_id, (IDENTIFY_PROTOCOL_ID,))
            try:
                print("Starting identify protocol...")
+                response = await stream.read()
-                # Read the response using the utility function
-                from libp2p.utils.varint import read_length_prefixed_protobuf
-                response = await read_length_prefixed_protobuf(
-                    stream, use_varint_format
-                )
-                full_response = response
                await stream.close()
+                identify_msg = Identify()
-                # Parse the response using the robust protocol-level function
-                # This handles both old and new formats automatically
-                identify_msg = parse_identify_response(full_response)
+                identify_msg.ParseFromString(response)
                print_identify_response(identify_msg)
            except Exception as e:
-                error_msg = str(e)
-                print(f"Identify protocol error: {error_msg}")
+                print(f"Identify protocol error: {e}")
-                # Check for specific format mismatch errors
-                if "Error parsing message" in error_msg or "DecodeError" in error_msg:
-                    print("\n" + "=" * 60)
-                    print("FORMAT MISMATCH DETECTED!")
-                    print("=" * 60)
-                    if use_varint_format:
-                        print(
-                            "You are using length-prefixed format (default) but the "
-                            "listener"
-                        )
-                        print("is using raw protobuf format.")
-                        print(
-                            "\nTo fix this, run the dialer with the --raw-format flag:"
-                        )
-                        print(f"identify-demo --raw-format -d {destination}")
-                    else:
-                        print("You are using raw protobuf format but the listener")
-                        print("is using length-prefixed format (default).")
-                        print(
-                            "\nTo fix this, run the dialer without the --raw-format "
-                            "flag:"
-                        )
-                        print(f"identify-demo -d {destination}")
-                    print("=" * 60)
-                else:
-                    import traceback
-                    traceback.print_exc()
                return
@@ -225,12 +98,9 @@ async def run(port: int, destination: str, use_varint_format: bool = True) -> No
def main() -> None:
    description = """
    This program demonstrates the libp2p identify protocol.
-    First run 'identify-demo -p <PORT> [--raw-format]' to start a listener.
+    First run 'identify-demo -p <PORT>' to start a listener.
    Then run 'identify-demo <ANOTHER_PORT> -d <DESTINATION>'
    where <DESTINATION> is the multiaddress shown by the listener.
-    Use --raw-format to send raw protobuf messages (old format) instead of
-    length-prefixed protobuf messages (new format, default).
    """
example_maddr = ( example_maddr = (
@@ -245,35 +115,12 @@ def main() -> None:
        type=str,
        help=f"destination multiaddr string, e.g. {example_maddr}",
    )
-    parser.add_argument(
-        "--raw-format",
-        action="store_true",
-        help=(
-            "use raw protobuf format (old format) instead of "
-            "length-prefixed (new format)"
-        ),
-    )
    args = parser.parse_args()
-    # Determine format: use varint (length-prefixed) if --raw-format is specified,
-    # otherwise use raw protobuf format (old format)
-    use_varint_format = args.raw_format
    try:
-        if args.destination:
-            # Run in dialer mode
-            trio.run(run, *(args.port, args.destination, use_varint_format))
-        else:
-            # Run in listener mode
-            trio.run(run, *(args.port, args.destination, use_varint_format))
+        trio.run(run, *(args.port, args.destination))
    except KeyboardInterrupt:
-        print("\n👋 Goodbye!")
-        logger.info("Application interrupted by user")
-    except Exception as e:
-        print(f"\n❌ Error: {str(e)}")
-        logger.error("Error: %s", str(e))
-        sys.exit(1)
+        pass
if __name__ == "__main__":

View File

@@ -11,26 +11,23 @@ This example shows how to:
import logging
-import multiaddr
import trio
from libp2p import (
    new_host,
)
-from libp2p.abc import (
-    INetStream,
-)
from libp2p.crypto.secp256k1 import (
    create_new_key_pair,
)
from libp2p.custom_types import (
    TProtocol,
)
-from libp2p.identity.identify.pb.identify_pb2 import (
-    Identify,
+from libp2p.identity.identify import (
+    identify_handler_for,
)
from libp2p.identity.identify_push import (
    ID_PUSH,
+    identify_push_handler_for,
    push_identify_to_peer,
)
from libp2p.peer.peerinfo import (
@@ -41,145 +38,8 @@ from libp2p.peer.peerinfo import (
logger = logging.getLogger(__name__)
-def create_custom_identify_handler(host, host_name: str):
-    """Create a custom identify handler that displays received information."""
-    async def handle_identify(stream: INetStream) -> None:
-        peer_id = stream.muxed_conn.peer_id
-        print(f"\n🔍 {host_name} received identify request from peer: {peer_id}")
-        # Get the standard identify response using the existing function
-        from libp2p.identity.identify.identify import (
-            _mk_identify_protobuf,
-            _remote_address_to_multiaddr,
-        )
-        # Get observed address
-        observed_multiaddr = None
-        try:
-            remote_address = stream.get_remote_address()
-            if remote_address:
-                observed_multiaddr = _remote_address_to_multiaddr(remote_address)
-        except Exception:
-            pass
-        # Build the identify protobuf
-        identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
-        response_data = identify_msg.SerializeToString()
-        print(f" 📋 {host_name} identify information:")
-        if identify_msg.HasField("protocol_version"):
-            print(f" Protocol Version: {identify_msg.protocol_version}")
-        if identify_msg.HasField("agent_version"):
-            print(f" Agent Version: {identify_msg.agent_version}")
-        if identify_msg.HasField("public_key"):
-            print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
-        if identify_msg.listen_addrs:
-            print(" Listen Addresses:")
-            for addr_bytes in identify_msg.listen_addrs:
-                addr = multiaddr.Multiaddr(addr_bytes)
-                print(f" - {addr}")
-        if identify_msg.protocols:
-            print(" Supported Protocols:")
-            for protocol in identify_msg.protocols:
-                print(f" - {protocol}")
-        # Send the response
-        await stream.write(response_data)
-        await stream.close()
-    return handle_identify
-def create_custom_identify_push_handler(host, host_name: str):
-    """Create a custom identify/push handler that displays received information."""
-    async def handle_identify_push(stream: INetStream) -> None:
-        peer_id = stream.muxed_conn.peer_id
-        print(f"\n📤 {host_name} received identify/push from peer: {peer_id}")
-        try:
-            # Read the identify message using the utility function
-            from libp2p.utils.varint import read_length_prefixed_protobuf
-            data = await read_length_prefixed_protobuf(stream, use_varint_format=True)
-            # Parse the identify message
-            identify_msg = Identify()
-            identify_msg.ParseFromString(data)
-            print(" 📋 Received identify information:")
-            if identify_msg.HasField("protocol_version"):
-                print(f" Protocol Version: {identify_msg.protocol_version}")
-            if identify_msg.HasField("agent_version"):
-                print(f" Agent Version: {identify_msg.agent_version}")
-            if identify_msg.HasField("public_key"):
-                print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
-            if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
-                observed_addr = multiaddr.Multiaddr(identify_msg.observed_addr)
-                print(f" Observed Address: {observed_addr}")
-            if identify_msg.listen_addrs:
-                print(" Listen Addresses:")
-                for addr_bytes in identify_msg.listen_addrs:
-                    addr = multiaddr.Multiaddr(addr_bytes)
-                    print(f" - {addr}")
-            if identify_msg.protocols:
-                print(" Supported Protocols:")
-                for protocol in identify_msg.protocols:
-                    print(f" - {protocol}")
-            # Update the peerstore with the new information
-            from libp2p.identity.identify_push.identify_push import (
-                _update_peerstore_from_identify,
-            )
-            await _update_peerstore_from_identify(
-                host.get_peerstore(), peer_id, identify_msg
-            )
-            print(f"{host_name} updated peerstore with new information")
-        except Exception as e:
-            print(f" ❌ Error processing identify/push: {e}")
-        finally:
-            await stream.close()
-    return handle_identify_push
-async def display_peerstore_info(host, host_name: str, peer_id, description: str):
-    """Display peerstore information for a specific peer."""
-    peerstore = host.get_peerstore()
-    try:
-        addrs = peerstore.addrs(peer_id)
-    except Exception:
-        addrs = []
-    try:
-        protocols = peerstore.get_protocols(peer_id)
-    except Exception:
-        protocols = []
-    print(f"\n📚 {host_name} peerstore for {description}:")
-    print(f" Peer ID: {peer_id}")
-    if addrs:
-        print(" Addresses:")
-        for addr in addrs:
-            print(f" - {addr}")
-    else:
-        print(" Addresses: None")
-    if protocols:
-        print(" Protocols:")
-        for protocol in protocols:
-            print(f" - {protocol}")
-    else:
-        print(" Protocols: None")
async def main() -> None:
-    print("\n==== Starting Enhanced Identify-Push Example ====\n")
+    print("\n==== Starting Identify-Push Example ====\n")
    # Create key pairs for the two hosts
    key_pair_1 = create_new_key_pair()
@@ -188,57 +48,45 @@ async def main() -> None:
    # Create the first host
    host_1 = new_host(key_pair=key_pair_1)
-    # Set up custom identify and identify/push handlers
-    host_1.set_stream_handler(
-        TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_1, "Host 1")
-    )
-    host_1.set_stream_handler(
-        ID_PUSH, create_custom_identify_push_handler(host_1, "Host 1")
-    )
+    # Set up the identify and identify/push handlers
+    host_1.set_stream_handler(TProtocol("/ipfs/id/1.0.0"), identify_handler_for(host_1))
+    host_1.set_stream_handler(ID_PUSH, identify_push_handler_for(host_1))
    # Create the second host
    host_2 = new_host(key_pair=key_pair_2)
-    # Set up custom identify and identify/push handlers
-    host_2.set_stream_handler(
-        TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_2, "Host 2")
-    )
-    host_2.set_stream_handler(
-        ID_PUSH, create_custom_identify_push_handler(host_2, "Host 2")
-    )
+    # Set up the identify and identify/push handlers
+    host_2.set_stream_handler(TProtocol("/ipfs/id/1.0.0"), identify_handler_for(host_2))
+    host_2.set_stream_handler(ID_PUSH, identify_push_handler_for(host_2))
    # Start listening on random ports using the run context manager
+    import multiaddr
    listen_addr_1 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
    listen_addr_2 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
-    async with (
-        host_1.run([listen_addr_1]),
-        host_2.run([listen_addr_2]),
-        trio.open_nursery() as nursery,
-    ):
-        # Start the peer-store cleanup task
-        nursery.start_soon(host_1.get_peerstore().start_cleanup_task, 60)
-        nursery.start_soon(host_2.get_peerstore().start_cleanup_task, 60)
+    async with host_1.run([listen_addr_1]), host_2.run([listen_addr_2]):
        # Get the addresses of both hosts
        addr_1 = host_1.get_addrs()[0]
+        logger.info(f"Host 1 listening on {addr_1}")
+        print(f"Host 1 listening on {addr_1}")
+        print(f"Peer ID: {host_1.get_id().pretty()}")
        addr_2 = host_2.get_addrs()[0]
+        logger.info(f"Host 2 listening on {addr_2}")
+        print(f"Host 2 listening on {addr_2}")
+        print(f"Peer ID: {host_2.get_id().pretty()}")
-        print("🏠 Host Configuration:")
-        print(f" Host 1: {addr_1}")
-        print(f" Host 1 Peer ID: {host_1.get_id().pretty()}")
-        print(f" Host 2: {addr_2}")
-        print(f" Host 2 Peer ID: {host_2.get_id().pretty()}")
-        print("\n🔗 Connecting Host 2 to Host 1...")
+        print("\nConnecting Host 2 to Host 1...")
        # Connect host_2 to host_1
        peer_info = info_from_p2p_addr(addr_1)
        await host_2.connect(peer_info)
-        print("Host 2 successfully connected to Host 1")
+        logger.info("Host 2 connected to Host 1")
+        print("Host 2 successfully connected to Host 1")
        # Run the identify protocol from host_2 to host_1
-        print("\n🔄 Running identify protocol (Host 2 → Host 1)...")
+        # (so Host 1 learns Host 2's address)
        from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID
        stream = await host_2.new_stream(host_1.get_id(), (IDENTIFY_PROTOCOL_ID,))
@@ -246,58 +94,64 @@ async def main() -> None:
        await stream.close()
        # Run the identify protocol from host_1 to host_2
-        print("\n🔄 Running identify protocol (Host 1 → Host 2)...")
+        # (so Host 2 learns Host 1's address)
        stream = await host_1.new_stream(host_2.get_id(), (IDENTIFY_PROTOCOL_ID,))
        response = await stream.read()
        await stream.close()
-        # Update Host 1's peerstore with Host 2's addresses
+        # --- NEW CODE: Update Host 1's peerstore with Host 2's addresses ---
+        from libp2p.identity.identify.pb.identify_pb2 import (
+            Identify,
+        )
        identify_msg = Identify()
        identify_msg.ParseFromString(response)
        peerstore_1 = host_1.get_peerstore()
        peer_id_2 = host_2.get_id()
        for addr_bytes in identify_msg.listen_addrs:
            maddr = multiaddr.Multiaddr(addr_bytes)
-            peerstore_1.add_addr(peer_id_2, maddr, ttl=3600)
+            # TTL can be any positive int
+            peerstore_1.add_addr(
+                peer_id_2,
+                maddr,
+                ttl=3600,
+            )
+        # --- END NEW CODE ---
-        # Display peerstore information before push
-        await display_peerstore_info(
-            host_1, "Host 1", peer_id_2, "Host 2 (before push)"
-        )
+        # Now Host 1's peerstore should have Host 2's address
+        peerstore_1 = host_1.get_peerstore()
+        peer_id_2 = host_2.get_id()
+        addrs_1_for_2 = peerstore_1.addrs(peer_id_2)
+        logger.info(
+            f"[DEBUG] Host 1 peerstore addresses for Host 2 before push: "
+            f"{addrs_1_for_2}"
+        )
+        print(
+            f"[DEBUG] Host 1 peerstore addresses for Host 2 before push: "
+            f"{addrs_1_for_2}"
+        )
        # Push identify information from host_1 to host_2
-        print("\n📤 Host 1 pushing identify information to Host 2...")
+        logger.info("Host 1 pushing identify information to Host 2")
+        print("\nHost 1 pushing identify information to Host 2...")
        try:
-            # Call push_identify_to_peer which now returns a boolean
            success = await push_identify_to_peer(host_1, host_2.get_id())
            if success:
-                print("Identify push completed successfully!")
+                logger.info("Identify push completed successfully")
+                print("Identify push completed successfully!")
            else:
-                print("⚠️ Identify push didn't complete successfully")
+                logger.warning("Identify push didn't complete successfully")
+                print("\nWarning: Identify push didn't complete successfully")
        except Exception as e:
-            print(f"Error during identify push: {str(e)}")
+            logger.error(f"Error during identify push: {str(e)}")
+            print(f"\nError during identify push: {str(e)}")
-        # Give a moment for the identify/push processing to complete
-        await trio.sleep(0.5)
+        # Add this at the end of your async with block:
+        await trio.sleep(0.5)  # Give background tasks time to finish
-        # Display peerstore information after push
-        await display_peerstore_info(host_1, "Host 1", peer_id_2, "Host 2 (after push)")
-        await display_peerstore_info(
-            host_2, "Host 2", host_1.get_id(), "Host 1 (after push)"
-        )
-        # Give more time for background tasks to finish and connections to stabilize
-        print("\n⏳ Waiting for background tasks to complete...")
-        await trio.sleep(1.0)
-        # Gracefully close connections to prevent connection errors
-        print("🔌 Closing connections...")
-        await host_2.disconnect(host_1.get_id())
-        await trio.sleep(0.2)
-        print("\n🎉 Example completed successfully!")
if __name__ == "__main__":

View File

@@ -41,9 +41,6 @@ from libp2p.identity.identify import (
    ID as ID_IDENTIFY,
    identify_handler_for,
)
-from libp2p.identity.identify.identify import (
-    _remote_address_to_multiaddr,
-)
from libp2p.identity.identify.pb.identify_pb2 import (
    Identify,
)
@@ -60,46 +57,18 @@ from libp2p.peer.peerinfo import (
logger = logging.getLogger("libp2p.identity.identify-push-example")
-def custom_identify_push_handler_for(host, use_varint_format: bool = True):
+def custom_identify_push_handler_for(host):
    """
    Create a custom handler for the identify/push protocol that logs and prints
    the identity information received from the dialer.
-    Args:
-        host: The libp2p host
-        use_varint_format: If True, expect length-prefixed format; if False, expect
-            raw protobuf
    """
    async def handle_identify_push(stream: INetStream) -> None:
        peer_id = stream.muxed_conn.peer_id
-        # Get remote address information
        try:
-            remote_address = stream.get_remote_address()
-            if remote_address:
-                observed_multiaddr = _remote_address_to_multiaddr(remote_address)
-                logger.info(
-                    "Connection from remote peer %s, address: %s, multiaddr: %s",
-                    peer_id,
-                    remote_address,
-                    observed_multiaddr,
-                )
-                print(f"\n🔗 Received identify/push request from peer: {peer_id}")
-                # Add the peer ID to create a complete multiaddr
-                complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
-                print(f" Remote address: {complete_multiaddr}")
-        except Exception as e:
-            logger.error("Error getting remote address: %s", e)
-            print(f"\n🔗 Received identify/push request from peer: {peer_id}")
-        try:
-            # Use the utility function to read the protobuf message
-            from libp2p.utils.varint import read_length_prefixed_protobuf
-            data = await read_length_prefixed_protobuf(stream, use_varint_format)
+            # Read the identify message from the stream
+            data = await stream.read()
            identify_msg = Identify()
            identify_msg.ParseFromString(data)
@@ -148,41 +117,11 @@ def custom_identify_push_handler_for(host, use_varint_format: bool = True):
            await _update_peerstore_from_identify(peerstore, peer_id, identify_msg)
            logger.info("Successfully processed identify/push from peer %s", peer_id)
-            print(f"Successfully processed identify/push from peer {peer_id}")
+            print(f"\nSuccessfully processed identify/push from peer {peer_id}")
        except Exception as e:
-            error_msg = str(e)
-            logger.error(
-                "Error processing identify/push from %s: %s", peer_id, error_msg
-            )
-            print(f"\nError processing identify/push from {peer_id}: {error_msg}")
-            # Check for specific format mismatch errors
-            if (
-                "Error parsing message" in error_msg
-                or "DecodeError" in error_msg
-                or "ParseFromString" in error_msg
-            ):
-                print("\n" + "=" * 60)
-                print("FORMAT MISMATCH DETECTED!")
-                print("=" * 60)
-                if use_varint_format:
-                    print(
-                        "You are using length-prefixed format (default) but the "
-                        "dialer is using raw protobuf format."
-                    )
-                    print("\nTo fix this, run the dialer with the --raw-format flag:")
-                    print(
-                        "identify-push-listener-dialer-demo --raw-format -d <ADDRESS>"
-                    )
-                else:
-                    print("You are using raw protobuf format but the dialer")
-                    print("is using length-prefixed format (default).")
-                    print(
-                        "\nTo fix this, run the dialer without the --raw-format flag:"
-                    )
-                    print("identify-push-listener-dialer-demo -d <ADDRESS>")
-                print("=" * 60)
+            logger.error("Error processing identify/push from %s: %s", peer_id, e)
+            print(f"\nError processing identify/push from {peer_id}: {e}")
        finally:
            # Close the stream after processing
            await stream.close()
@@ -190,15 +129,9 @@ def custom_identify_push_handler_for(host, use_varint_format: bool = True):
     return handle_identify_push
-async def run_listener(
-    port: int, use_varint_format: bool = True, raw_format_flag: bool = False
-) -> None:
+async def run_listener(port: int) -> None:
     """Run a host in listener mode."""
-    format_name = "length-prefixed" if use_varint_format else "raw protobuf"
-    print(
-        f"\n==== Starting Identify-Push Listener on port {port} "
-        f"(using {format_name} format) ====\n"
-    )
+    print(f"\n==== Starting Identify-Push Listener on port {port} ====\n")
     # Create key pair for the listener
     key_pair = create_new_key_pair()
@@ -206,58 +139,35 @@ async def run_listener(
     # Create the listener host
     host = new_host(key_pair=key_pair)
-    # Set up the identify and identify/push handlers with specified format
-    host.set_stream_handler(
-        ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
-    )
-    host.set_stream_handler(
-        ID_IDENTIFY_PUSH,
-        custom_identify_push_handler_for(host, use_varint_format=use_varint_format),
-    )
+    # Set up the identify and identify/push handlers
+    host.set_stream_handler(ID_IDENTIFY, identify_handler_for(host))
+    host.set_stream_handler(ID_IDENTIFY_PUSH, custom_identify_push_handler_for(host))
     # Start listening
     listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
-    try:
-        async with host.run([listen_addr]):
+    async with host.run([listen_addr]):
         addr = host.get_addrs()[0]
         logger.info("Listener host ready!")
         print("Listener host ready!")
         logger.info(f"Listening on: {addr}")
         print(f"Listening on: {addr}")
         logger.info(f"Peer ID: {host.get_id().pretty()}")
         print(f"Peer ID: {host.get_id().pretty()}")
         print("\nRun dialer with command:")
-        if raw_format_flag:
-            print(f"identify-push-listener-dialer-demo -d {addr} --raw-format")
-        else:
-            print(f"identify-push-listener-dialer-demo -d {addr}")
-        print("\nWaiting for incoming identify/push requests... (Ctrl+C to exit)")
+        print(f"identify-push-listener-dialer-demo -d {addr}")
+        print("\nWaiting for incoming connections... (Ctrl+C to exit)")
         # Keep running until interrupted
-        try:
-            await trio.sleep_forever()
-        except KeyboardInterrupt:
-            print("\n🛑 Shutting down listener...")
-            logger.info("Listener interrupted by user")
-            return
-    except Exception as e:
-        logger.error(f"Listener error: {e}")
-        raise
+        await trio.sleep_forever()
-async def run_dialer(
-    port: int, destination: str, use_varint_format: bool = True
-) -> None:
+async def run_dialer(port: int, destination: str) -> None:
     """Run a host in dialer mode that connects to a listener."""
-    format_name = "length-prefixed" if use_varint_format else "raw protobuf"
-    print(
-        f"\n==== Starting Identify-Push Dialer on port {port} "
-        f"(using {format_name} format) ====\n"
-    )
+    print(f"\n==== Starting Identify-Push Dialer on port {port} ====\n")
     # Create key pair for the dialer
     key_pair = create_new_key_pair()
@@ -265,14 +175,9 @@ async def run_dialer(
     # Create the dialer host
     host = new_host(key_pair=key_pair)
-    # Set up the identify and identify/push handlers with specified format
-    host.set_stream_handler(
-        ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
-    )
-    host.set_stream_handler(
-        ID_IDENTIFY_PUSH,
-        identify_push_handler_for(host, use_varint_format=use_varint_format),
-    )
+    # Set up the identify and identify/push handlers
+    host.set_stream_handler(ID_IDENTIFY, identify_handler_for(host))
+    host.set_stream_handler(ID_IDENTIFY_PUSH, identify_push_handler_for(host))
     # Start listening on a different port
     listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
@@ -293,9 +198,7 @@ async def run_dialer(
         try:
             await host.connect(peer_info)
             logger.info("Successfully connected to listener!")
             print("Successfully connected to listener!")
-            print(f" Connected to: {peer_info.peer_id}")
-            print(f" Full address: {destination}")
             # Push identify information to the listener
             logger.info("Pushing identify information to listener...")
@@ -303,13 +206,11 @@ async def run_dialer(
             try:
                 # Call push_identify_to_peer which returns a boolean
-                success = await push_identify_to_peer(
-                    host, peer_info.peer_id, use_varint_format=use_varint_format
-                )
+                success = await push_identify_to_peer(host, peer_info.peer_id)
                 if success:
                     logger.info("Identify push completed successfully!")
                     print("Identify push completed successfully!")
                     logger.info("Example completed successfully!")
                     print("\nExample completed successfully!")
@@ -320,57 +221,17 @@ async def run_dialer(
                     logger.warning("Example completed with warnings.")
                     print("Example completed with warnings.")
             except Exception as e:
-                error_msg = str(e)
-                logger.error(f"Error during identify push: {error_msg}")
-                print(f"\nError during identify push: {error_msg}")
-                # Check for specific format mismatch errors
-                if (
-                    "Error parsing message" in error_msg
-                    or "DecodeError" in error_msg
-                    or "ParseFromString" in error_msg
-                ):
-                    print("\n" + "=" * 60)
-                    print("FORMAT MISMATCH DETECTED!")
-                    print("=" * 60)
-                    if use_varint_format:
-                        print(
-                            "You are using length-prefixed format (default) but the "
-                            "listener is using raw protobuf format."
-                        )
-                        print(
-                            "\nTo fix this, run the dialer with the --raw-format flag:"
-                        )
-                        print(
-                            f"identify-push-listener-dialer-demo --raw-format -d "
-                            f"{destination}"
-                        )
-                    else:
-                        print("You are using raw protobuf format but the listener")
-                        print("is using length-prefixed format (default).")
-                        print(
-                            "\nTo fix this, run the dialer without the --raw-format "
-                            "flag:"
-                        )
-                        print(f"identify-push-listener-dialer-demo -d {destination}")
-                    print("=" * 60)
+                logger.error(f"Error during identify push: {str(e)}")
+                print(f"\nError during identify push: {str(e)}")
                 logger.error("Example completed with errors.")
                 print("Example completed with errors.")
                 # Continue execution despite the push error
         except Exception as e:
-            error_msg = str(e)
-            if "unable to connect" in error_msg or "SwarmException" in error_msg:
-                print(f"\n❌ Cannot connect to peer: {peer_info.peer_id}")
-                print(f" Address: {destination}")
-                print(f" Error: {error_msg}")
-                print("\n💡 Make sure the peer is running and the address is correct.")
-                return
-            else:
-                logger.error(f"Error during dialer operation: {error_msg}")
-                print(f"\nError during dialer operation: {error_msg}")
-                raise
+            logger.error(f"Error during dialer operation: {str(e)}")
+            print(f"\nError during dialer operation: {str(e)}")
+            raise
 def main() -> None:
@@ -379,55 +240,34 @@ def main() -> None:
     This program demonstrates the libp2p identify/push protocol.
     Without arguments, it runs as a listener on random port.
     With -d parameter, it runs as a dialer on random port.
-    Port 0 (default) means the OS will automatically assign an available port.
-    This prevents port conflicts when running multiple instances.
-    Use --raw-format to send raw protobuf messages (old format) instead of
-    length-prefixed protobuf messages (new format, default).
     """
-    parser = argparse.ArgumentParser(description=description)
-    parser.add_argument(
-        "-p",
-        "--port",
-        default=0,
-        type=int,
-        help="source port number (0 = random available port)",
-    )
+    example = (
+        "/ip4/127.0.0.1/tcp/8000/p2p/QmQn4SwGkDZKkUEpBRBvTmheQycxAHJUNmVEnjA2v1qe8Q"
+    )
+    parser = argparse.ArgumentParser(description=description)
+    parser.add_argument("-p", "--port", default=0, type=int, help="source port number")
     parser.add_argument(
         "-d",
         "--destination",
         type=str,
-        help="destination multiaddr string",
+        help=f"destination multiaddr string, e.g. {example}",
     )
-    parser.add_argument(
-        "--raw-format",
-        action="store_true",
-        help=(
-            "use raw protobuf format (old format) instead of "
-            "length-prefixed (new format)"
-        ),
-    )
     args = parser.parse_args()
-    # Determine format: raw format if --raw-format is specified, otherwise
-    # length-prefixed
-    use_varint_format = not args.raw_format
     try:
         if args.destination:
             # Run in dialer mode with random available port if not specified
-            trio.run(run_dialer, args.port, args.destination, use_varint_format)
+            trio.run(run_dialer, args.port, args.destination)
         else:
             # Run in listener mode with random available port if not specified
-            trio.run(run_listener, args.port, use_varint_format, args.raw_format)
+            trio.run(run_listener, args.port)
     except KeyboardInterrupt:
-        print("\n👋 Goodbye!")
-        logger.info("Application interrupted by user")
+        print("\nInterrupted by user")
+        logger.info("Interrupted by user")
     except Exception as e:
         print(f"\nError: {str(e)}")
         logger.error("Error: %s", str(e))
         sys.exit(1)
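
The format mismatch that the removed error handling above tried to diagnose comes down to a single framing decision. A minimal sketch of the two wire formats, with a pure-Python uvarint encoder (the real code used `libp2p.utils.varint`; `frame_length_prefixed` is an illustrative name, not a library function):

```python
def encode_uvarint(value: int) -> bytes:
    """LEB128-encode an unsigned integer, 7 bits per byte."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        out.append(byte | (0x80 if value else 0))
        if not value:
            return bytes(out)

def frame_length_prefixed(payload: bytes) -> bytes:
    """Length-prefixed format (the removed default): uvarint length, then payload."""
    return encode_uvarint(len(payload)) + payload

raw = b"\x0a\x04test"  # stand-in for Identify.SerializeToString(); raw format sends just this
assert frame_length_prefixed(raw) == b"\x06" + raw  # 6-byte payload -> one prefix byte
```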

View File

@@ -151,10 +151,7 @@ async def run_node(
     host = new_host(key_pair=key_pair)
     listen_addr = Multiaddr(f"/ip4/127.0.0.1/tcp/{port}")
-    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
+    async with host.run(listen_addrs=[listen_addr]):
         peer_id = host.get_id().pretty()
         addr_str = f"/ip4/127.0.0.1/tcp/{port}/p2p/{peer_id}"
         await connect_to_bootstrap_nodes(host, bootstrap_nodes)
@@ -227,7 +224,7 @@ async def run_node(
         # Keep the node running
         while True:
-            logger.info(
+            logger.debug(
                 "Status - Connected peers: %d,"
                 "Peers in store: %d, Values in store: %d",
                 len(dht.host.get_connected_peers()),

View File

@@ -46,10 +46,7 @@ async def run(port: int) -> None:
     logger.info("Starting peer Discovery")
     host = new_host(key_pair=key_pair, enable_mDNS=True)
-    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
+    async with host.run(listen_addrs=[listen_addr]):
         await trio.sleep_forever()

View File

@@ -59,9 +59,6 @@ async def run(port: int, destination: str) -> None:
     host = new_host(listen_addrs=[listen_addr])
     async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
         if not destination:
             host.set_stream_handler(PING_PROTOCOL_ID, handle_ping)

View File

@@ -1,5 +1,6 @@
 import argparse
 import logging
+import socket
 import base58
 import multiaddr
@@ -30,9 +31,6 @@ from libp2p.stream_muxer.mplex.mplex import (
 from libp2p.tools.async_service.trio_service import (
     background_trio_service,
 )
-from libp2p.utils.address_validation import (
-    find_free_port,
-)
 # Configure logging
 logging.basicConfig(
@@ -79,6 +77,13 @@ async def publish_loop(pubsub, topic, termination_event):
             await trio.sleep(1)  # Avoid tight loop on error
+def find_free_port():
+    """Find a free port on localhost."""
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("", 0))  # Bind to a free port provided by the OS
+        return s.getsockname()[1]
 async def monitor_peer_topics(pubsub, nursery, termination_event):
     """
     Monitor for new topics that peers are subscribed to and
@@ -139,9 +144,6 @@ async def run(topic: str, destination: str | None, port: int | None) -> None:
     pubsub = Pubsub(host, gossipsub)
     termination_event = trio.Event()  # Event to signal termination
     async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
         logger.info(f"Node started with peer ID: {host.get_id()}")
         logger.info(f"Listening on: {listen_addr}")
         logger.info("Initializing PubSub and GossipSub...")

View File

@@ -1,221 +0,0 @@
"""
Random Walk Example for py-libp2p Kademlia DHT
This example demonstrates the Random Walk module's peer discovery capabilities
using real libp2p hosts and Kademlia DHT. It shows how the Random Walk module
automatically discovers new peers and maintains routing table health.
Usage:
# Start server nodes (they will discover peers via random walk)
python3 random_walk.py --mode server
"""
import argparse
import logging
import random
import secrets
import sys
from multiaddr import Multiaddr
import trio
from libp2p import new_host
from libp2p.abc import IHost
from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.kad_dht.kad_dht import DHTMode, KadDHT
from libp2p.tools.async_service import background_trio_service
# Simple logging configuration
def setup_logging(verbose: bool = False):
"""Setup unified logging configuration."""
level = logging.DEBUG if verbose else logging.INFO
logging.basicConfig(
level=level,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
handlers=[logging.StreamHandler()],
)
# Configure key module loggers
for module in ["libp2p.discovery.random_walk", "libp2p.kad_dht"]:
logging.getLogger(module).setLevel(level)
# Suppress noisy logs
logging.getLogger("multiaddr").setLevel(logging.WARNING)
logger = logging.getLogger("random-walk-example")
# Default bootstrap nodes
DEFAULT_BOOTSTRAP_NODES = [
"/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"
]
def filter_compatible_peer_info(peer_info) -> bool:
"""Filter peer info to check if it has compatible addresses (TCP + IPv4)."""
if not hasattr(peer_info, "addrs") or not peer_info.addrs:
return False
for addr in peer_info.addrs:
addr_str = str(addr)
if "/tcp/" in addr_str and "/ip4/" in addr_str and "/quic" not in addr_str:
return True
return False
async def maintain_connections(host: IHost) -> None:
"""Maintain connections to ensure the host remains connected to healthy peers."""
while True:
try:
connected_peers = host.get_connected_peers()
list_peers = host.get_peerstore().peers_with_addrs()
if len(connected_peers) < 20:
logger.debug("Reconnecting to maintain peer connections...")
# Find compatible peers
compatible_peers = []
for peer_id in list_peers:
try:
peer_info = host.get_peerstore().peer_info(peer_id)
if filter_compatible_peer_info(peer_info):
compatible_peers.append(peer_id)
except Exception:
continue
# Connect to random subset of compatible peers
if compatible_peers:
random_peers = random.sample(
compatible_peers, min(50, len(compatible_peers))
)
for peer_id in random_peers:
if peer_id not in connected_peers:
try:
with trio.move_on_after(5):
peer_info = host.get_peerstore().peer_info(peer_id)
await host.connect(peer_info)
logger.debug(f"Connected to peer: {peer_id}")
except Exception as e:
logger.debug(f"Failed to connect to {peer_id}: {e}")
await trio.sleep(15)
except Exception as e:
logger.error(f"Error maintaining connections: {e}")
async def demonstrate_random_walk_discovery(dht: KadDHT, interval: int = 30) -> None:
"""Demonstrate Random Walk peer discovery with periodic statistics."""
iteration = 0
while True:
iteration += 1
logger.info(f"--- Iteration {iteration} ---")
logger.info(f"Routing table size: {dht.get_routing_table_size()}")
logger.info(f"Connected peers: {len(dht.host.get_connected_peers())}")
logger.info(f"Peerstore size: {len(dht.host.get_peerstore().peer_ids())}")
await trio.sleep(interval)
async def run_node(port: int, mode: str, demo_interval: int = 30) -> None:
"""Run a node that demonstrates Random Walk peer discovery."""
try:
if port <= 0:
port = random.randint(10000, 60000)
logger.info(f"Starting {mode} node on port {port}")
# Determine DHT mode
dht_mode = DHTMode.SERVER if mode == "server" else DHTMode.CLIENT
# Create host and DHT
key_pair = create_new_key_pair(secrets.token_bytes(32))
host = new_host(key_pair=key_pair, bootstrap=DEFAULT_BOOTSTRAP_NODES)
listen_addr = Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
# Start maintenance tasks
nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
nursery.start_soon(maintain_connections, host)
peer_id = host.get_id().pretty()
logger.info(f"Node peer ID: {peer_id}")
logger.info(f"Node address: /ip4/0.0.0.0/tcp/{port}/p2p/{peer_id}")
# Create and start DHT with Random Walk enabled
dht = KadDHT(host, dht_mode, enable_random_walk=True)
logger.info(f"Initial routing table size: {dht.get_routing_table_size()}")
async with background_trio_service(dht):
logger.info(f"DHT service started in {dht_mode.value} mode")
logger.info(f"Random Walk enabled: {dht.is_random_walk_enabled()}")
async with trio.open_nursery() as task_nursery:
# Start demonstration and status reporting
task_nursery.start_soon(
demonstrate_random_walk_discovery, dht, demo_interval
)
# Periodic status updates
async def status_reporter():
while True:
await trio.sleep(30)
logger.debug(
f"Connected: {len(dht.host.get_connected_peers())}, "
f"Routing table: {dht.get_routing_table_size()}, "
f"Peerstore: {len(dht.host.get_peerstore().peer_ids())}"
)
task_nursery.start_soon(status_reporter)
await trio.sleep_forever()
except Exception as e:
logger.error(f"Node error: {e}", exc_info=True)
sys.exit(1)
def parse_args():
"""Parse command line arguments."""
parser = argparse.ArgumentParser(
description="Random Walk Example for py-libp2p Kademlia DHT",
)
parser.add_argument(
"--mode",
choices=["server", "client"],
default="server",
help="Node mode: server (DHT server), or client (DHT client)",
)
parser.add_argument(
"--port", type=int, default=0, help="Port to listen on (0 for random)"
)
parser.add_argument(
"--demo-interval",
type=int,
default=30,
help="Interval between random walk demonstrations in seconds",
)
parser.add_argument("--verbose", action="store_true", help="Enable verbose logging")
return parser.parse_args()
def main():
"""Main entry point for the random walk example."""
try:
args = parse_args()
setup_logging(args.verbose)
logger.info("=== Random Walk Example for py-libp2p ===")
logger.info(
f"Mode: {args.mode}, Port: {args.port} Demo interval: {args.demo_interval}s"
)
trio.run(run_node, args.port, args.mode, args.demo_interval)
except KeyboardInterrupt:
logger.info("Received interrupt signal, shutting down...")
except Exception as e:
logger.critical(f"Example failed: {e}", exc_info=True)
sys.exit(1)
if __name__ == "__main__":
main()
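
The deleted example above reduces to this skeleton: create a host, start the DHT as a background service with random walk enabled, and sleep. A sketch using the same imports the file declared (note the `enable_random_walk` flag only exists while the random-walk module does, so this reflects the pre-revert API):

```python
import secrets

import trio
from multiaddr import Multiaddr

from libp2p import new_host
from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.kad_dht.kad_dht import DHTMode, KadDHT
from libp2p.tools.async_service import background_trio_service

async def serve(port: int) -> None:
    host = new_host(key_pair=create_new_key_pair(secrets.token_bytes(32)))
    async with host.run(listen_addrs=[Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")]):
        dht = KadDHT(host, DHTMode.SERVER, enable_random_walk=True)
        async with background_trio_service(dht):
            await trio.sleep_forever()

trio.run(serve, 9000)
```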

View File

@@ -251,7 +251,6 @@ def new_host(
     muxer_preference: Literal["YAMUX", "MPLEX"] | None = None,
     listen_addrs: Sequence[multiaddr.Multiaddr] | None = None,
     enable_mDNS: bool = False,
-    bootstrap: list[str] | None = None,
     negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
 ) -> IHost:
     """
@@ -265,7 +264,6 @@ def new_host(
     :param muxer_preference: optional explicit muxer preference
     :param listen_addrs: optional list of multiaddrs to listen on
     :param enable_mDNS: whether to enable mDNS discovery
-    :param bootstrap: optional list of bootstrap peer addresses as strings
     :return: return a host instance
     """
     swarm = new_swarm(
@@ -278,7 +276,7 @@ def new_host(
     )
     if disc_opt is not None:
-        return RoutedHost(swarm, disc_opt, enable_mDNS, bootstrap)
-    return BasicHost(network=swarm,enable_mDNS=enable_mDNS , bootstrap=bootstrap, negotitate_timeout=negotiate_timeout)
+        return RoutedHost(swarm, disc_opt, enable_mDNS)
+    return BasicHost(network=swarm,enable_mDNS=enable_mDNS , negotitate_timeout=negotiate_timeout)
 __version__ = __version("libp2p")
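
For callers, this means bootstrap addresses can no longer be passed at construction time. A hypothetical before/after (the address is the default bootstrap node listed in the deleted random-walk example above):

```python
from libp2p import new_host
from libp2p.crypto.secp256k1 import create_new_key_pair

key_pair = create_new_key_pair()

# Before: bootstrap peers could be wired in at construction time.
# host = new_host(
#     key_pair=key_pair,
#     bootstrap=["/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"],
# )

# After: the parameter is gone; connect to known peers explicitly via host.connect().
host = new_host(key_pair=key_pair)
```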

View File

@@ -16,7 +16,6 @@ from typing import (
     TYPE_CHECKING,
     Any,
     AsyncContextManager,
-    Optional,
 )
 from multiaddr import (
@@ -42,15 +41,11 @@ from libp2p.io.abc import (
 from libp2p.peer.id import (
     ID,
 )
-import libp2p.peer.pb.peer_record_pb2 as pb
 from libp2p.peer.peerinfo import (
     PeerInfo,
 )
 if TYPE_CHECKING:
-    from libp2p.peer.envelope import Envelope
-    from libp2p.peer.peer_record import PeerRecord
-    from libp2p.protocol_muxer.multiselect import Multiselect
     from libp2p.pubsub.pubsub import (
         Pubsub,
     )
@@ -357,14 +352,6 @@ class INetConn(Closer):
         :return: A tuple containing instances of INetStream.
         """
-    @abstractmethod
-    def get_transport_addresses(self) -> list[Multiaddr]:
-        """
-        Retrieve the transport addresses used by this connection.
-        :return: A list of multiaddresses used by the transport.
-        """
 # -------------------------- peermetadata interface.py --------------------------
@@ -501,71 +488,6 @@ class IAddrBook(ABC):
     """
-# ------------------ certified-addr-book interface.py ---------------------
-class ICertifiedAddrBook(ABC):
-    """
-    Interface for a certified address book.
-    Provides methods for managing signed peer records
-    """
-    @abstractmethod
-    def consume_peer_record(self, envelope: "Envelope", ttl: int) -> bool:
-        """
-        Accept and store a signed PeerRecord, unless it's older than
-        the one already stored.
-        This function:
-        - Extracts the peer ID and sequence number from the envelope
-        - Rejects the record if it's older (lower seq)
-        - Updates the stored peer record and replaces associated
-          addresses if accepted
-        Parameters
-        ----------
-        envelope:
-            Signed envelope containing a PeerRecord.
-        ttl:
-            Time-to-live for the included multiaddrs (in seconds).
-        """
-    @abstractmethod
-    def get_peer_record(self, peer_id: ID) -> Optional["Envelope"]:
-        """
-        Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
-        and is still relevant.
-        First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
-        Then it checks whether the peer has valid, unexpired addresses before
-        returning the associated envelope.
-        Parameters
-        ----------
-        peer_id : ID
-            The peer to look up.
-        """
-    @abstractmethod
-    def maybe_delete_peer_record(self, peer_id: ID) -> None:
-        """
-        Delete the signed peer record for a peer if it has no know
-        (non-expired) addresses.
-        This is a garbage collection mechanism: if all addresses for a peer have expired
-        or been cleared, there's no point holding onto its signed `Envelope`
-        Parameters
-        ----------
-        peer_id : ID
-            The peer whose record we may delete.
-        """
 # -------------------------- keybook interface.py --------------------------
@@ -831,9 +753,7 @@ class IProtoBook(ABC):
 # -------------------------- peerstore interface.py --------------------------
-class IPeerStore(
-    IPeerMetadata, IAddrBook, ICertifiedAddrBook, IKeyBook, IMetrics, IProtoBook
-):
+class IPeerStore(IPeerMetadata, IAddrBook, IKeyBook, IMetrics, IProtoBook):
     """
     Interface for a peer store.
@@ -968,65 +888,7 @@ class IPeerStore(
     """
-    # --------CERTIFIED-ADDR-BOOK----------
-    @abstractmethod
-    def consume_peer_record(self, envelope: "Envelope", ttl: int) -> bool:
-        """
-        Accept and store a signed PeerRecord, unless it's older
-        than the one already stored.
-        This function:
-        - Extracts the peer ID and sequence number from the envelope
-        - Rejects the record if it's older (lower seq)
-        - Updates the stored peer record and replaces associated addresses if accepted
-        Parameters
-        ----------
-        envelope:
-            Signed envelope containing a PeerRecord.
-        ttl:
-            Time-to-live for the included multiaddrs (in seconds).
-        """
-    @abstractmethod
-    def get_peer_record(self, peer_id: ID) -> Optional["Envelope"]:
-        """
-        Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
-        and is still relevant.
-        First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
-        Then it checks whether the peer has valid, unexpired addresses before
-        returning the associated envelope.
-        Parameters
-        ----------
-        peer_id : ID
-            The peer to look up.
-        """
-    @abstractmethod
-    def maybe_delete_peer_record(self, peer_id: ID) -> None:
-        """
-        Delete the signed peer record for a peer if it has no
-        know (non-expired) addresses.
-        This is a garbage collection mechanism: if all addresses for a peer have expired
-        or been cleared, there's no point holding onto its signed `Envelope`
-        Parameters
-        ----------
-        peer_id : ID
-            The peer whose record we may delete.
-        """
     # --------KEY-BOOK----------
     @abstractmethod
     def pubkey(self, peer_id: ID) -> PublicKey:
         """
@@ -1335,10 +1197,6 @@ class IPeerStore(
     def clear_peerdata(self, peer_id: ID) -> None:
         """clear_peerdata"""
-    @abstractmethod
-    async def start_cleanup_task(self, cleanup_interval: int = 3600) -> None:
-        """Start periodic cleanup of expired peer records and addresses."""
 # -------------------------- listener interface.py --------------------------
@@ -1687,8 +1545,9 @@ class IHost(ABC):
         """
+    # FIXME: Replace with correct return type
     @abstractmethod
-    def get_mux(self) -> "Multiselect":
+    def get_mux(self) -> Any:
         """
         Retrieve the muxer instance for the host.
@@ -1826,121 +1685,6 @@ class IHost(ABC):
         """
-# -------------------------- peer-record interface.py --------------------------
-class IPeerRecord(ABC):
-    """
-    Interface for a libp2p PeerRecord object.
-    A PeerRecord contains metadata about a peer such as its ID, public addresses,
-    and a strictly increasing sequence number for versioning.
-    PeerRecords are used in signed routing Envelopes for secure peer data propagation.
-    """
-    @abstractmethod
-    def domain(self) -> str:
-        """
-        Return the domain string for this record type.
-        Used in envelope validation to distinguish different record types.
-        """
-    @abstractmethod
-    def codec(self) -> bytes:
-        """
-        Return a binary codec prefix that identifies the PeerRecord type.
-        This is prepended in signed envelopes to allow type-safe decoding.
-        """
-    @abstractmethod
-    def to_protobuf(self) -> pb.PeerRecord:
-        """
-        Convert this PeerRecord into its Protobuf representation.
-        :raises ValueError: if serialization fails (e.g., invalid peer ID).
-        :return: A populated protobuf `PeerRecord` message.
-        """
-    @abstractmethod
-    def marshal_record(self) -> bytes:
-        """
-        Serialize this PeerRecord into a byte string.
-        Used when signing or sealing the record in an envelope.
-        :raises ValueError: if protobuf serialization fails.
-        :return: Byte-encoded PeerRecord.
-        """
-    @abstractmethod
-    def equal(self, other: object) -> bool:
-        """
-        Compare this PeerRecord with another for equality.
-        Two PeerRecords are considered equal if:
-        - They have the same `peer_id`
-        - Their `seq` numbers match
-        - Their address lists are identical and ordered
-        :param other: Object to compare with.
-        :return: True if equal, False otherwise.
-        """
-# -------------------------- envelope interface.py --------------------------
-class IEnvelope(ABC):
-    @abstractmethod
-    def marshal_envelope(self) -> bytes:
-        """
-        Serialize this Envelope into its protobuf wire format.
-        Converts all envelope fields into a `pb.Envelope` protobuf message
-        and returns the serialized bytes.
-        :return: Serialized envelope as bytes.
-        """
-    @abstractmethod
-    def validate(self, domain: str) -> None:
-        """
-        Verify the envelope's signature within the given domain scope.
-        This ensures that the envelope has not been tampered with
-        and was signed under the correct usage context.
-        :param domain: Domain string that contextualizes the signature.
-        :raises ValueError: If the signature is invalid.
-        """
-    @abstractmethod
-    def record(self) -> "PeerRecord":
-        """
-        Lazily decode and return the embedded PeerRecord.
-        This method unmarshals the payload bytes into a `PeerRecord` instance,
-        using the registered codec to identify the type. The decoded result
-        is cached for future use.
-        :return: Decoded PeerRecord object.
-        :raises Exception: If decoding fails or payload type is unsupported.
-        """
-    @abstractmethod
-    def equal(self, other: Any) -> bool:
-        """
-        Compare this Envelope with another for structural equality.
-        Two envelopes are considered equal if:
-        - They have the same public key
-        - The payload type and payload bytes match
-        - Their signatures are identical
-        :param other: Another object to compare.
-        :return: True if equal, False otherwise.
-        """
 # -------------------------- peerdata interface.py --------------------------
@@ -2414,7 +2158,6 @@ class IMultiselectMuxer(ABC):
     """
-    @abstractmethod
     def get_protocols(self) -> tuple[TProtocol | None, ...]:
         """
         Retrieve the protocols for which handlers have been registered.
@@ -2425,6 +2168,7 @@ class IMultiselectMuxer(ABC):
             A tuple of registered protocol names.
         """
+        return tuple(self.handlers.keys())
     @abstractmethod
     async def negotiate(

View File

@@ -13,7 +13,7 @@ _sym_db = _symbol_database.Default()
-DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/crypto/pb/crypto.proto\x12\tcrypto.pb\"?\n\tPublicKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c\"@\n\nPrivateKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c*S\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x12\x0c\n\x08\x45\x43\x43_P256\x10\x04\x12\n\n\x06X25519\x10\x05')
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/crypto/pb/crypto.proto\x12\tcrypto.pb\"?\n\tPublicKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c\"@\n\nPrivateKey\x12$\n\x08key_type\x18\x01 \x02(\x0e\x32\x12.crypto.pb.KeyType\x12\x0c\n\x04\x64\x61ta\x18\x02 \x02(\x0c*G\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x12\x0c\n\x08\x45\x43\x43_P256\x10\x04')
 _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
 _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.crypto.pb.crypto_pb2', globals())
@@ -21,7 +21,7 @@ if _descriptor._USE_C_DESCRIPTORS == False:
   DESCRIPTOR._options = None
   _KEYTYPE._serialized_start=175
-  _KEYTYPE._serialized_end=258
+  _KEYTYPE._serialized_end=246
   _PUBLICKEY._serialized_start=44
   _PUBLICKEY._serialized_end=107
   _PRIVATEKEY._serialized_start=109

View File

@@ -28,7 +28,6 @@ class _KeyTypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTy
     Secp256k1: _KeyType.ValueType  # 2
     ECDSA: _KeyType.ValueType  # 3
     ECC_P256: _KeyType.ValueType  # 4
-    X25519: _KeyType.ValueType  # 5
 class KeyType(_KeyType, metaclass=_KeyTypeEnumTypeWrapper): ...
@@ -37,7 +36,6 @@ Ed25519: KeyType.ValueType  # 1
 Secp256k1: KeyType.ValueType  # 2
 ECDSA: KeyType.ValueType  # 3
 ECC_P256: KeyType.ValueType  # 4
-X25519: KeyType.ValueType  # 5
 global___KeyType = KeyType
 @typing.final
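
Since `crypto.proto` declares `key_type` as a required proto2 enum, dropping X25519 narrows what the regenerated bindings will accept. A quick check against the new descriptor (a sketch, assuming the regenerated `crypto_pb2` module is importable):

```python
from libp2p.crypto.pb import crypto_pb2

assert crypto_pb2.KeyType.Value("ECC_P256") == 4
print(list(crypto_pb2.KeyType.keys()))
# ['RSA', 'Ed25519', 'Secp256k1', 'ECDSA', 'ECC_P256'] -- no 'X25519' (5) anymore,
# so a PublicKey serialized with key_type=5 by an older peer no longer parses cleanly.
```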

View File

@@ -1,5 +0,0 @@
"""Bootstrap peer discovery module for py-libp2p."""
from .bootstrap import BootstrapDiscovery
__all__ = ["BootstrapDiscovery"]

View File

@@ -1,94 +0,0 @@
import logging
from multiaddr import Multiaddr
from multiaddr.resolvers import DNSResolver
from libp2p.abc import ID, INetworkService, PeerInfo
from libp2p.discovery.bootstrap.utils import validate_bootstrap_addresses
from libp2p.discovery.events.peerDiscovery import peerDiscovery
from libp2p.peer.peerinfo import info_from_p2p_addr
logger = logging.getLogger("libp2p.discovery.bootstrap")
resolver = DNSResolver()
class BootstrapDiscovery:
"""
Bootstrap-based peer discovery for py-libp2p.
Connects to predefined bootstrap peers and adds them to peerstore.
"""
def __init__(self, swarm: INetworkService, bootstrap_addrs: list[str]):
self.swarm = swarm
self.peerstore = swarm.peerstore
self.bootstrap_addrs = bootstrap_addrs or []
self.discovered_peers: set[str] = set()
async def start(self) -> None:
"""Process bootstrap addresses and emit peer discovery events."""
logger.debug(
f"Starting bootstrap discovery with "
f"{len(self.bootstrap_addrs)} bootstrap addresses"
)
# Validate and filter bootstrap addresses
self.bootstrap_addrs = validate_bootstrap_addresses(self.bootstrap_addrs)
for addr_str in self.bootstrap_addrs:
try:
await self._process_bootstrap_addr(addr_str)
except Exception as e:
logger.debug(f"Failed to process bootstrap address {addr_str}: {e}")
def stop(self) -> None:
"""Clean up bootstrap discovery resources."""
logger.debug("Stopping bootstrap discovery")
self.discovered_peers.clear()
async def _process_bootstrap_addr(self, addr_str: str) -> None:
"""Convert string address to PeerInfo and add to peerstore."""
try:
multiaddr = Multiaddr(addr_str)
except Exception as e:
logger.debug(f"Invalid multiaddr format '{addr_str}': {e}")
return
if self.is_dns_addr(multiaddr):
resolved_addrs = await resolver.resolve(multiaddr)
peer_id_str = multiaddr.get_peer_id()
if peer_id_str is None:
logger.warning(f"Missing peer ID in DNS address: {addr_str}")
return
peer_id = ID.from_base58(peer_id_str)
addrs = [addr for addr in resolved_addrs]
if not addrs:
logger.warning(f"No addresses resolved for DNS address: {addr_str}")
return
peer_info = PeerInfo(peer_id, addrs)
self.add_addr(peer_info)
else:
self.add_addr(info_from_p2p_addr(multiaddr))
def is_dns_addr(self, addr: Multiaddr) -> bool:
"""Check if the address is a DNS address."""
return any(protocol.name == "dnsaddr" for protocol in addr.protocols())
def add_addr(self, peer_info: PeerInfo) -> None:
"""Add a peer to the peerstore and emit discovery event."""
# Skip if it's our own peer
if peer_info.peer_id == self.swarm.get_peer_id():
logger.debug(f"Skipping own peer ID: {peer_info.peer_id}")
return
# Always add addresses to peerstore (allows multiple addresses for same peer)
self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)
# Only emit discovery event if this is the first time we see this peer
peer_id_str = str(peer_info.peer_id)
if peer_id_str not in self.discovered_peers:
# Track discovered peer
self.discovered_peers.add(peer_id_str)
# Emit peer discovery event
peerDiscovery.emit_peer_discovered(peer_info)
logger.debug(f"Peer discovered: {peer_info.peer_id}")
else:
logger.debug(f"Additional addresses added for peer: {peer_info.peer_id}")

View File

@@ -1,51 +0,0 @@
"""Utility functions for bootstrap discovery."""
import logging
from multiaddr import Multiaddr
from libp2p.peer.peerinfo import InvalidAddrError, PeerInfo, info_from_p2p_addr
logger = logging.getLogger("libp2p.discovery.bootstrap.utils")
def validate_bootstrap_addresses(addrs: list[str]) -> list[str]:
"""
Validate and filter bootstrap addresses.
:param addrs: List of bootstrap address strings
:return: List of valid bootstrap addresses
"""
valid_addrs = []
for addr_str in addrs:
try:
# Try to parse as multiaddr
multiaddr = Multiaddr(addr_str)
# Try to extract peer info (this validates the p2p component)
info_from_p2p_addr(multiaddr)
valid_addrs.append(addr_str)
logger.debug(f"Valid bootstrap address: {addr_str}")
except (InvalidAddrError, ValueError, Exception) as e:
logger.warning(f"Invalid bootstrap address '{addr_str}': {e}")
continue
return valid_addrs
def parse_bootstrap_peer_info(addr_str: str) -> PeerInfo | None:
"""
Parse bootstrap address string into PeerInfo.
:param addr_str: Bootstrap address string
:return: PeerInfo object or None if parsing fails
"""
try:
multiaddr = Multiaddr(addr_str)
return info_from_p2p_addr(multiaddr)
except Exception as e:
logger.error(f"Failed to parse bootstrap address '{addr_str}': {e}")
return None
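
For reference, the two deleted helpers above composed like this (a sketch assuming `validate_bootstrap_addresses` and `parse_bootstrap_peer_info` as just defined; addresses are illustrative):

```python
addrs = [
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/127.0.0.1/tcp/4001",  # no /p2p/ peer ID -> fails validation, filtered out
]
valid = validate_bootstrap_addresses(addrs)  # -> [addrs[0]]
peer = parse_bootstrap_peer_info(valid[0])   # -> PeerInfo, or None on parse failure
```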

View File

@@ -1,17 +0,0 @@
"""Random walk discovery modules for py-libp2p."""
from .rt_refresh_manager import RTRefreshManager
from .random_walk import RandomWalk
from .exceptions import (
RoutingTableRefreshError,
RandomWalkError,
PeerValidationError,
)
__all__ = [
"RTRefreshManager",
"RandomWalk",
"RoutingTableRefreshError",
"RandomWalkError",
"PeerValidationError",
]

View File

@@ -1,16 +0,0 @@
from typing import Final
# Timing constants (matching go-libp2p)
PEER_PING_TIMEOUT: Final[float] = 10.0 # seconds
REFRESH_QUERY_TIMEOUT: Final[float] = 60.0 # seconds
REFRESH_INTERVAL: Final[float] = 300.0 # 5 minutes
SUCCESSFUL_OUTBOUND_QUERY_GRACE_PERIOD: Final[float] = 60.0 # 1 minute
# Routing table thresholds
MIN_RT_REFRESH_THRESHOLD: Final[int] = 4 # Minimum peers before triggering refresh
MAX_N_BOOTSTRAPPERS: Final[int] = 2 # Maximum bootstrap peers to try
# Random walk specific
RANDOM_WALK_CONCURRENCY: Final[int] = 3 # Number of concurrent random walks
RANDOM_WALK_ENABLED: Final[bool] = True # Enable automatic random walks
RANDOM_WALK_RT_THRESHOLD: Final[int] = 20 # RT size threshold for peerstore fallback

View File

@@ -1,19 +0,0 @@
from libp2p.exceptions import BaseLibp2pError
class RoutingTableRefreshError(BaseLibp2pError):
"""Base exception for routing table refresh operations."""
pass
class RandomWalkError(RoutingTableRefreshError):
"""Exception raised during random walk operations."""
pass
class PeerValidationError(RoutingTableRefreshError):
"""Exception raised when peer validation fails."""
pass

View File

@@ -1,218 +0,0 @@
from collections.abc import Awaitable, Callable
import logging
import secrets
import trio
from libp2p.abc import IHost
from libp2p.discovery.random_walk.config import (
RANDOM_WALK_CONCURRENCY,
RANDOM_WALK_RT_THRESHOLD,
REFRESH_QUERY_TIMEOUT,
)
from libp2p.discovery.random_walk.exceptions import RandomWalkError
from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo
logger = logging.getLogger("libp2p.discovery.random_walk")
class RandomWalk:
"""
Random Walk implementation for peer discovery in Kademlia DHT.
Generates random peer IDs and performs FIND_NODE queries to discover
new peers and populate the routing table.
"""
def __init__(
self,
host: IHost,
local_peer_id: ID,
query_function: Callable[[bytes], Awaitable[list[ID]]],
):
"""
Initialize Random Walk module.
Args:
host: The libp2p host instance
local_peer_id: Local peer ID
query_function: Function to query for closest peers given target key bytes
"""
self.host = host
self.local_peer_id = local_peer_id
self.query_function = query_function
def generate_random_peer_id(self) -> str:
"""
Generate a completely random peer ID
for random walk queries.
Returns:
Random peer ID as string
"""
# Generate 32 random bytes (256 bits) - same as go-libp2p
random_bytes = secrets.token_bytes(32)
# Convert to hex string for query
return random_bytes.hex()
async def perform_random_walk(self) -> list[PeerInfo]:
"""
Perform a single random walk operation.
Returns:
List of validated peers discovered during the walk
"""
try:
# Generate random peer ID
random_peer_id = self.generate_random_peer_id()
logger.info(f"Starting random walk for peer ID: {random_peer_id}")
# Perform FIND_NODE query
discovered_peer_ids: list[ID] = []
with trio.move_on_after(REFRESH_QUERY_TIMEOUT):
# Call the query function with target key bytes
target_key = bytes.fromhex(random_peer_id)
discovered_peer_ids = await self.query_function(target_key) or []
if not discovered_peer_ids:
logger.debug(f"No peers discovered in random walk for {random_peer_id}")
return []
logger.info(
f"Discovered {len(discovered_peer_ids)} peers in random walk "
f"for {random_peer_id[:8]}..." # Show only first 8 chars for brevity
)
# Convert peer IDs to PeerInfo objects and validate
validated_peers: list[PeerInfo] = []
for peer_id in discovered_peer_ids:
try:
# Get addresses from peerstore
addrs = self.host.get_peerstore().addrs(peer_id)
if addrs:
peer_info = PeerInfo(peer_id, addrs)
validated_peers.append(peer_info)
except Exception as e:
logger.debug(f"Failed to create PeerInfo for {peer_id}: {e}")
continue
return validated_peers
except Exception as e:
logger.error(f"Random walk failed: {e}")
raise RandomWalkError(f"Random walk operation failed: {e}") from e
async def run_concurrent_random_walks(
self, count: int = RANDOM_WALK_CONCURRENCY, current_routing_table_size: int = 0
) -> list[PeerInfo]:
"""
Run multiple random walks concurrently.
Args:
count: Number of concurrent random walks to perform
current_routing_table_size: Current size of routing table (for optimization)
Returns:
Combined list of all validated peers discovered
"""
all_validated_peers: list[PeerInfo] = []
logger.info(f"Starting {count} concurrent random walks")
# First, try to add peers from peerstore if routing table is small
if current_routing_table_size < RANDOM_WALK_RT_THRESHOLD:
try:
peerstore_peers = self._get_peerstore_peers()
if peerstore_peers:
logger.debug(
f"RT size ({current_routing_table_size}) below threshold, "
f"adding {len(peerstore_peers)} peerstore peers"
)
all_validated_peers.extend(peerstore_peers)
except Exception as e:
logger.warning(f"Error processing peerstore peers: {e}")
async def single_walk() -> None:
try:
peers = await self.perform_random_walk()
all_validated_peers.extend(peers)
except Exception as e:
logger.warning(f"Concurrent random walk failed: {e}")
return
# Run concurrent random walks
async with trio.open_nursery() as nursery:
for _ in range(count):
nursery.start_soon(single_walk)
# Remove duplicates based on peer ID
unique_peers = {}
for peer in all_validated_peers:
unique_peers[peer.peer_id] = peer
result = list(unique_peers.values())
logger.info(
f"Concurrent random walks completed: {len(result)} unique peers discovered"
)
return result
def _get_peerstore_peers(self) -> list[PeerInfo]:
"""
Get peer info objects from the host's peerstore.
Returns:
List of PeerInfo objects from peerstore
"""
try:
peerstore = self.host.get_peerstore()
peer_ids = peerstore.peers_with_addrs()
peer_infos = []
for peer_id in peer_ids:
try:
# Skip local peer
if peer_id == self.local_peer_id:
continue
peer_info = peerstore.peer_info(peer_id)
if peer_info and peer_info.addrs:
# Filter for compatible addresses (TCP + IPv4)
if self._has_compatible_addresses(peer_info):
peer_infos.append(peer_info)
except Exception as e:
logger.debug(f"Error getting peer info for {peer_id}: {e}")
return peer_infos
except Exception as e:
logger.warning(f"Error accessing peerstore: {e}")
return []
def _has_compatible_addresses(self, peer_info: PeerInfo) -> bool:
"""
Check if a peer has TCP+IPv4 compatible addresses.
Args:
peer_info: PeerInfo to check
Returns:
True if peer has compatible addresses
"""
if not peer_info.addrs:
return False
for addr in peer_info.addrs:
addr_str = str(addr)
# Check for TCP and IPv4 compatibility, avoid QUIC
if "/tcp/" in addr_str and "/ip4/" in addr_str and "/quic" not in addr_str:
return True
return False
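
Sketch of how this deleted class was driven, based on its constructor and methods above; `find_closest_peers` is a stand-in for the DHT's FIND_NODE lookup, and `host` is any running `IHost`:

```python
from libp2p.peer.id import ID

async def demo(host) -> None:
    async def find_closest_peers(target_key: bytes) -> list[ID]:
        return []  # stand-in for KadDHT's FIND_NODE query

    rw = RandomWalk(host, host.get_id(), find_closest_peers)
    peers = await rw.perform_random_walk()          # one walk
    more = await rw.run_concurrent_random_walks(3)  # three concurrent walks
    print(len(peers), len(more))
```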

View File

@@ -1,208 +0,0 @@
from collections.abc import Awaitable, Callable
import logging
import time
from typing import Protocol
import trio
from libp2p.abc import IHost
from libp2p.discovery.random_walk.config import (
MIN_RT_REFRESH_THRESHOLD,
RANDOM_WALK_CONCURRENCY,
RANDOM_WALK_ENABLED,
REFRESH_INTERVAL,
)
from libp2p.discovery.random_walk.exceptions import RoutingTableRefreshError
from libp2p.discovery.random_walk.random_walk import RandomWalk
from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo
class RoutingTableProtocol(Protocol):
"""Protocol for routing table operations needed by RT refresh manager."""
def size(self) -> int:
"""Return the current size of the routing table."""
...
async def add_peer(self, peer_obj: PeerInfo) -> bool:
"""Add a peer to the routing table."""
...
logger = logging.getLogger("libp2p.discovery.random_walk.rt_refresh_manager")
class RTRefreshManager:
"""
Routing Table Refresh Manager for py-libp2p.
Manages periodic routing table refreshes and random walk operations
to maintain routing table health and discover new peers.
"""
def __init__(
self,
host: IHost,
routing_table: RoutingTableProtocol,
local_peer_id: ID,
query_function: Callable[[bytes], Awaitable[list[ID]]],
enable_auto_refresh: bool = RANDOM_WALK_ENABLED,
refresh_interval: float = REFRESH_INTERVAL,
min_refresh_threshold: int = MIN_RT_REFRESH_THRESHOLD,
):
"""
Initialize RT Refresh Manager.
Args:
host: The libp2p host instance
routing_table: Routing table of host
local_peer_id: Local peer ID
query_function: Function to query for closest peers given target key bytes
enable_auto_refresh: Whether to enable automatic refresh
refresh_interval: Interval between refreshes in seconds
min_refresh_threshold: Minimum RT size before triggering refresh
"""
self.host = host
self.routing_table = routing_table
self.local_peer_id = local_peer_id
self.query_function = query_function
self.enable_auto_refresh = enable_auto_refresh
self.refresh_interval = refresh_interval
self.min_refresh_threshold = min_refresh_threshold
# Initialize random walk module
self.random_walk = RandomWalk(
host=host,
local_peer_id=self.local_peer_id,
query_function=query_function,
)
# Control variables
self._running = False
self._nursery: trio.Nursery | None = None
# Tracking
self._last_refresh_time = 0.0
self._refresh_done_callbacks: list[Callable[[], None]] = []
async def start(self) -> None:
"""Start the RT Refresh Manager."""
if self._running:
logger.warning("RT Refresh Manager is already running")
return
self._running = True
logger.info("Starting RT Refresh Manager")
# Start the main loop
async with trio.open_nursery() as nursery:
self._nursery = nursery
nursery.start_soon(self._main_loop)
async def stop(self) -> None:
"""Stop the RT Refresh Manager."""
if not self._running:
return
logger.info("Stopping RT Refresh Manager")
self._running = False
async def _main_loop(self) -> None:
"""Main loop for the RT Refresh Manager."""
logger.info("RT Refresh Manager main loop started")
# Initial refresh if auto-refresh is enabled
if self.enable_auto_refresh:
await self._do_refresh(force=True)
try:
while self._running:
async with trio.open_nursery() as nursery:
# Schedule periodic refresh if enabled
if self.enable_auto_refresh:
nursery.start_soon(self._periodic_refresh_task)
except Exception as e:
logger.error(f"RT Refresh Manager main loop error: {e}")
finally:
logger.info("RT Refresh Manager main loop stopped")
async def _periodic_refresh_task(self) -> None:
"""Task for periodic refreshes."""
while self._running:
await trio.sleep(self.refresh_interval)
if self._running:
await self._do_refresh()
async def _do_refresh(self, force: bool = False) -> None:
"""
Perform routing table refresh operation.
Args:
force: Whether to force refresh regardless of timing
"""
try:
current_time = time.time()
# Check if refresh is needed
if not force:
if current_time - self._last_refresh_time < self.refresh_interval:
logger.debug("Skipping refresh: interval not elapsed")
return
if self.routing_table.size() >= self.min_refresh_threshold:
logger.debug("Skipping refresh: routing table size above threshold")
return
logger.info(f"Starting routing table refresh (force={force})")
start_time = current_time
# Perform random walks to discover new peers
logger.info("Running concurrent random walks to discover new peers")
current_rt_size = self.routing_table.size()
discovered_peers = await self.random_walk.run_concurrent_random_walks(
count=RANDOM_WALK_CONCURRENCY,
current_routing_table_size=current_rt_size,
)
# Add discovered peers to routing table
added_count = 0
for peer_info in discovered_peers:
result = await self.routing_table.add_peer(peer_info)
if result:
added_count += 1
self._last_refresh_time = current_time
duration = time.time() - start_time
logger.info(
f"Routing table refresh completed: "
f"{added_count}/{len(discovered_peers)} peers added, "
f"RT size: {self.routing_table.size()}, "
f"duration: {duration:.2f}s"
)
# Notify refresh completion
for callback in self._refresh_done_callbacks:
try:
callback()
except Exception as e:
logger.warning(f"Refresh callback error: {e}")
except Exception as e:
logger.error(f"Routing table refresh failed: {e}")
raise RoutingTableRefreshError(f"Refresh operation failed: {e}") from e
def add_refresh_done_callback(self, callback: Callable[[], None]) -> None:
"""Add a callback to be called when refresh completes."""
self._refresh_done_callbacks.append(callback)
def remove_refresh_done_callback(self, callback: Callable[[], None]) -> None:
"""Remove a refresh completion callback."""
if callback in self._refresh_done_callbacks:
self._refresh_done_callbacks.remove(callback)
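
And the corresponding wiring for the deleted manager above (a sketch: `routing_table` is anything satisfying `RoutingTableProtocol`, `find_closest_peers` as in the previous snippet). Since `start()` blocks in its own nursery until `stop()` is called, callers ran it as a background task:

```python
import trio

async def run_refresh(host, routing_table, find_closest_peers) -> None:
    mgr = RTRefreshManager(
        host=host,
        routing_table=routing_table,
        local_peer_id=host.get_id(),
        query_function=find_closest_peers,
        refresh_interval=300.0,
    )
    mgr.add_refresh_done_callback(lambda: print("routing table refreshed"))
    async with trio.open_nursery() as nursery:
        nursery.start_soon(mgr.start)  # periodic refresh + random walks
        await trio.sleep(600)
        await mgr.stop()
```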

View File

@@ -29,7 +29,6 @@ from libp2p.custom_types import (
     StreamHandlerFn,
     TProtocol,
 )
-from libp2p.discovery.bootstrap.bootstrap import BootstrapDiscovery
 from libp2p.discovery.mdns.mdns import MDNSDiscovery
 from libp2p.host.defaults import (
     get_default_protocols,
@@ -93,7 +92,6 @@ class BasicHost(IHost):
         self,
         network: INetworkService,
         enable_mDNS: bool = False,
-        bootstrap: list[str] | None = None,
         default_protocols: Optional["OrderedDict[TProtocol, StreamHandlerFn]"] = None,
         negotitate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
     ) -> None:
@@ -107,8 +105,6 @@ class BasicHost(IHost):
         self.multiselect_client = MultiselectClient()
         if enable_mDNS:
             self.mDNS = MDNSDiscovery(network)
-        if bootstrap:
-            self.bootstrap = BootstrapDiscovery(network, bootstrap)
     def get_id(self) -> ID:
         """
@@ -176,16 +172,11 @@ class BasicHost(IHost):
             if hasattr(self, "mDNS") and self.mDNS is not None:
                 logger.debug("Starting mDNS Discovery")
                 self.mDNS.start()
-            if hasattr(self, "bootstrap") and self.bootstrap is not None:
-                logger.debug("Starting Bootstrap Discovery")
-                await self.bootstrap.start()
             try:
                 yield
             finally:
                 if hasattr(self, "mDNS") and self.mDNS is not None:
                     self.mDNS.stop()
-                if hasattr(self, "bootstrap") and self.bootstrap is not None:
-                    self.bootstrap.stop()
         return _run()
@@ -295,13 +286,6 @@ class BasicHost(IHost):
             )
             await net_stream.reset()
             return
-        if protocol is None:
-            logger.debug(
-                "no protocol negotiated, closing stream from peer %s",
-                net_stream.muxed_conn.peer_id,
-            )
-            await net_stream.reset()
-            return
         net_stream.set_protocol(protocol)
         if handler is None:
             logger.debug(

View File

@@ -26,8 +26,5 @@ if TYPE_CHECKING:
 def get_default_protocols(host: IHost) -> "OrderedDict[TProtocol, StreamHandlerFn]":
     return OrderedDict(
-        (
-            (IdentifyID, identify_handler_for(host, use_varint_format=True)),
-            (PingID, handle_ping),
-        )
+        ((IdentifyID, identify_handler_for(host)), (PingID, handle_ping))
     )

View File

@@ -19,13 +19,9 @@ class RoutedHost(BasicHost):
     _router: IPeerRouting
     def __init__(
-        self,
-        network: INetworkService,
-        router: IPeerRouting,
-        enable_mDNS: bool = False,
-        bootstrap: list[str] | None = None,
+        self, network: INetworkService, router: IPeerRouting, enable_mDNS: bool = False
     ):
-        super().__init__(network, enable_mDNS, bootstrap)
+        super().__init__(network, enable_mDNS)
         self._router = router
     async def connect(self, peer_info: PeerInfo) -> None:

View File

@ -15,12 +15,8 @@ from libp2p.custom_types import (
from libp2p.network.stream.exceptions import ( from libp2p.network.stream.exceptions import (
StreamClosed, StreamClosed,
) )
from libp2p.peer.envelope import seal_record
from libp2p.peer.peer_record import PeerRecord
from libp2p.utils import ( from libp2p.utils import (
decode_varint_with_size,
get_agent_version, get_agent_version,
varint,
) )
from .pb.identify_pb2 import ( from .pb.identify_pb2 import (
@ -63,12 +59,7 @@ def _mk_identify_protobuf(
) -> Identify: ) -> Identify:
public_key = host.get_public_key() public_key = host.get_public_key()
laddrs = host.get_addrs() laddrs = host.get_addrs()
protocols = tuple(str(p) for p in host.get_mux().get_protocols() if p is not None) protocols = host.get_mux().get_protocols()
# Create a signed peer-record for the remote peer
record = PeerRecord(host.get_id(), host.get_addrs())
envelope = seal_record(record, host.get_private_key())
protobuf = envelope.marshal_envelope()
observed_addr = observed_multiaddr.to_bytes() if observed_multiaddr else b"" observed_addr = observed_multiaddr.to_bytes() if observed_multiaddr else b""
return Identify( return Identify(
@ -78,51 +69,10 @@ def _mk_identify_protobuf(
listen_addrs=map(_multiaddr_to_bytes, laddrs), listen_addrs=map(_multiaddr_to_bytes, laddrs),
observed_addr=observed_addr, observed_addr=observed_addr,
protocols=protocols, protocols=protocols,
signedPeerRecord=protobuf,
) )
def parse_identify_response(response: bytes) -> Identify: def identify_handler_for(host: IHost) -> StreamHandlerFn:
"""
Parse identify response that could be either:
- Old format: raw protobuf
- New format: length-prefixed protobuf
This function provides backward and forward compatibility.
"""
# Try new format first: length-prefixed protobuf
if len(response) >= 1:
length, varint_size = decode_varint_with_size(response)
if varint_size > 0 and length > 0 and varint_size + length <= len(response):
protobuf_data = response[varint_size : varint_size + length]
try:
identify_response = Identify()
identify_response.ParseFromString(protobuf_data)
# Sanity check: must have agent_version (protocol_version is optional)
if identify_response.agent_version:
logger.debug(
"Parsed length-prefixed identify response (new format)"
)
return identify_response
except Exception:
pass # Fall through to old format
# Fall back to old format: raw protobuf
try:
identify_response = Identify()
identify_response.ParseFromString(response)
logger.debug("Parsed raw protobuf identify response (old format)")
return identify_response
except Exception as e:
logger.error(f"Failed to parse identify response: {e}")
logger.error(f"Response length: {len(response)}")
logger.error(f"Response hex: {response.hex()}")
raise
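
The dual-format parsing above rests on unsigned-varint length framing. A self-contained sketch of that framing, independent of the `libp2p.utils` helpers it mirrors:

def encode_uvarint_sketch(value: int) -> bytes:
    # Little-endian base-128: 7 payload bits per byte, MSB = continuation.
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def decode_uvarint_sketch(buf: bytes) -> tuple[int, int]:
    # Returns (value, bytes_consumed); mirrors decode_varint_with_size.
    value = shift = consumed = 0
    for b in buf:
        value |= (b & 0x7F) << shift
        consumed += 1
        if not (b & 0x80):
            return value, consumed
        shift += 7
    raise ValueError("truncated varint")

payload = b"\x0a\x0bexample"          # stand-in for a serialized Identify
framed = encode_uvarint_sketch(len(payload)) + payload   # new format
length, size = decode_uvarint_sketch(framed)
assert framed[size : size + length] == payload           # old format = bare payload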
def identify_handler_for(
host: IHost, use_varint_format: bool = True
) -> StreamHandlerFn:
async def handle_identify(stream: INetStream) -> None: async def handle_identify(stream: INetStream) -> None:
# get observed address from ``stream`` # get observed address from ``stream``
peer_id = ( peer_id = (
@ -150,21 +100,7 @@ def identify_handler_for(
response = protobuf.SerializeToString() response = protobuf.SerializeToString()
try: try:
if use_varint_format: await stream.write(response)
# Send length-prefixed protobuf message (new format)
await stream.write(varint.encode_uvarint(len(response)))
await stream.write(response)
logger.debug(
"Sent new format (length-prefixed) identify response to %s",
peer_id,
)
else:
# Send raw protobuf message (old format for backward compatibility)
await stream.write(response)
logger.debug(
"Sent old format (raw protobuf) identify response to %s",
peer_id,
)
except StreamClosed: except StreamClosed:
logger.debug("Fail to respond to %s request: stream closed", ID) logger.debug("Fail to respond to %s request: stream closed", ID)
else: else:

View File

@ -9,5 +9,4 @@ message Identify {
repeated bytes listen_addrs = 2; repeated bytes listen_addrs = 2;
optional bytes observed_addr = 4; optional bytes observed_addr = 4;
repeated string protocols = 3; repeated string protocols = 3;
optional bytes signedPeerRecord = 8;
} }

View File

@ -13,7 +13,7 @@ _sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\xa9\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t\x12\x18\n\x10signedPeerRecord\x18\x08 \x01(\x0c') DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\x8f\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.identity.identify.pb.identify_pb2', globals()) _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.identity.identify.pb.identify_pb2', globals())
@ -21,5 +21,5 @@ if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None DESCRIPTOR._options = None
_IDENTIFY._serialized_start=60 _IDENTIFY._serialized_start=60
_IDENTIFY._serialized_end=229 _IDENTIFY._serialized_end=203
# @@protoc_insertion_point(module_scope) # @@protoc_insertion_point(module_scope)

View File

@ -22,12 +22,10 @@ class Identify(google.protobuf.message.Message):
LISTEN_ADDRS_FIELD_NUMBER: builtins.int LISTEN_ADDRS_FIELD_NUMBER: builtins.int
OBSERVED_ADDR_FIELD_NUMBER: builtins.int OBSERVED_ADDR_FIELD_NUMBER: builtins.int
PROTOCOLS_FIELD_NUMBER: builtins.int PROTOCOLS_FIELD_NUMBER: builtins.int
SIGNEDPEERRECORD_FIELD_NUMBER: builtins.int
protocol_version: builtins.str protocol_version: builtins.str
agent_version: builtins.str agent_version: builtins.str
public_key: builtins.bytes public_key: builtins.bytes
observed_addr: builtins.bytes observed_addr: builtins.bytes
signedPeerRecord: builtins.bytes
@property @property
def listen_addrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ... def listen_addrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...
@property @property
@ -41,9 +39,8 @@ class Identify(google.protobuf.message.Message):
listen_addrs: collections.abc.Iterable[builtins.bytes] | None = ..., listen_addrs: collections.abc.Iterable[builtins.bytes] | None = ...,
observed_addr: builtins.bytes | None = ..., observed_addr: builtins.bytes | None = ...,
protocols: collections.abc.Iterable[builtins.str] | None = ..., protocols: collections.abc.Iterable[builtins.str] | None = ...,
signedPeerRecord: builtins.bytes | None = ...,
) -> None: ... ) -> None: ...
def HasField(self, field_name: typing.Literal["agent_version", b"agent_version", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "public_key", b"public_key", "signedPeerRecord", b"signedPeerRecord"]) -> builtins.bool: ... def HasField(self, field_name: typing.Literal["agent_version", b"agent_version", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "public_key", b"public_key"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["agent_version", b"agent_version", "listen_addrs", b"listen_addrs", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "protocols", b"protocols", "public_key", b"public_key", "signedPeerRecord", b"signedPeerRecord"]) -> None: ... def ClearField(self, field_name: typing.Literal["agent_version", b"agent_version", "listen_addrs", b"listen_addrs", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "protocols", b"protocols", "public_key", b"public_key"]) -> None: ...
global___Identify = Identify global___Identify = Identify

View File

@ -20,16 +20,11 @@ from libp2p.custom_types import (
from libp2p.network.stream.exceptions import ( from libp2p.network.stream.exceptions import (
StreamClosed, StreamClosed,
) )
from libp2p.peer.envelope import consume_envelope
from libp2p.peer.id import ( from libp2p.peer.id import (
ID, ID,
) )
from libp2p.utils import ( from libp2p.utils import (
get_agent_version, get_agent_version,
varint,
)
from libp2p.utils.varint import (
read_length_prefixed_protobuf,
) )
from ..identify.identify import ( from ..identify.identify import (
@ -48,28 +43,20 @@ AGENT_VERSION = get_agent_version()
CONCURRENCY_LIMIT = 10 CONCURRENCY_LIMIT = 10
def identify_push_handler_for( def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
host: IHost, use_varint_format: bool = True
) -> StreamHandlerFn:
""" """
Create a handler for the identify/push protocol. Create a handler for the identify/push protocol.
This handler receives pushed identify messages from remote peers and updates This handler receives pushed identify messages from remote peers and updates
the local peerstore with the new information. the local peerstore with the new information.
Args:
host: The libp2p host.
use_varint_format: True=length-prefixed, False=raw protobuf.
""" """
async def handle_identify_push(stream: INetStream) -> None: async def handle_identify_push(stream: INetStream) -> None:
peer_id = stream.muxed_conn.peer_id peer_id = stream.muxed_conn.peer_id
try: try:
# Use the utility function to read the protobuf message # Read the identify message from the stream
data = await read_length_prefixed_protobuf(stream, use_varint_format) data = await stream.read()
identify_msg = Identify() identify_msg = Identify()
identify_msg.ParseFromString(data) identify_msg.ParseFromString(data)
@ -79,11 +66,6 @@ def identify_push_handler_for(
) )
logger.debug("Successfully processed identify/push from peer %s", peer_id) logger.debug("Successfully processed identify/push from peer %s", peer_id)
# Send acknowledgment to indicate successful processing
# This ensures the sender knows the message was received before closing
await stream.write(b"OK")
except StreamClosed: except StreamClosed:
logger.debug( logger.debug(
"Stream closed while processing identify/push from %s", peer_id "Stream closed while processing identify/push from %s", peer_id
@ -92,10 +74,7 @@ def identify_push_handler_for(
logger.error("Error processing identify/push from %s: %s", peer_id, e) logger.error("Error processing identify/push from %s: %s", peer_id, e)
finally: finally:
# Close the stream after processing # Close the stream after processing
try: await stream.close()
await stream.close()
except Exception:
pass # Ignore errors when closing
return handle_identify_push return handle_identify_push
@ -141,19 +120,6 @@ async def _update_peerstore_from_identify(
except Exception as e: except Exception as e:
logger.error("Error updating protocols for peer %s: %s", peer_id, e) logger.error("Error updating protocols for peer %s: %s", peer_id, e)
if identify_msg.HasField("signedPeerRecord"):
try:
# Convert the signed peer record (Envelope) from protobuf bytes
envelope, _ = consume_envelope(
identify_msg.signedPeerRecord, "libp2p-peer-record"
)
# Use a default TTL of 2 hours (7200 seconds)
if not peerstore.consume_peer_record(envelope, 7200):
logger.error("Updating Certified-Addr-Book was unsuccessful")
except Exception as e:
logger.error(
"Error updating the certified addr book for peer %s: %s", peer_id, e
)
# Update observed address if present # Update observed address if present
if identify_msg.HasField("observed_addr") and identify_msg.observed_addr: if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
try: try:
@ -171,7 +137,6 @@ async def push_identify_to_peer(
peer_id: ID, peer_id: ID,
observed_multiaddr: Multiaddr | None = None, observed_multiaddr: Multiaddr | None = None,
limit: trio.Semaphore = trio.Semaphore(CONCURRENCY_LIMIT), limit: trio.Semaphore = trio.Semaphore(CONCURRENCY_LIMIT),
use_varint_format: bool = True,
) -> bool: ) -> bool:
""" """
Push an identify message to a specific peer. Push an identify message to a specific peer.
@ -179,15 +144,10 @@ async def push_identify_to_peer(
This function opens a stream to the peer using the identify/push protocol, This function opens a stream to the peer using the identify/push protocol,
sends the identify message, and closes the stream. sends the identify message, and closes the stream.
Args: Returns
host: The libp2p host. -------
peer_id: The peer ID to push to. bool
observed_multiaddr: The observed multiaddress (optional). True if the push was successful, False otherwise.
limit: Semaphore for concurrency control.
use_varint_format: True=length-prefixed, False=raw protobuf.
Returns:
bool: True if the push was successful, False otherwise.
""" """
async with limit: async with limit:
@ -199,28 +159,10 @@ async def push_identify_to_peer(
identify_msg = _mk_identify_protobuf(host, observed_multiaddr) identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
response = identify_msg.SerializeToString() response = identify_msg.SerializeToString()
if use_varint_format: # Send the identify message
# Send length-prefixed identify message await stream.write(response)
await stream.write(varint.encode_uvarint(len(response)))
await stream.write(response)
else:
# Send raw protobuf message
await stream.write(response)
# Wait for acknowledgment from the receiver with timeout # Close the stream
# This ensures the message was processed before closing
try:
with trio.move_on_after(1.0): # 1 second timeout
ack = await stream.read(2) # Read "OK" acknowledgment
if ack != b"OK":
logger.warning(
"Unexpected acknowledgment from peer %s: %s", peer_id, ack
)
except Exception as e:
logger.debug("No acknowledgment received from peer %s: %s", peer_id, e)
# Continue anyway, as the message might have been processed
# Close the stream after acknowledgment (or timeout)
await stream.close() await stream.close()
logger.debug("Successfully pushed identify to peer %s", peer_id) logger.debug("Successfully pushed identify to peer %s", peer_id)
@ -234,36 +176,18 @@ async def push_identify_to_peers(
host: IHost, host: IHost,
peer_ids: set[ID] | None = None, peer_ids: set[ID] | None = None,
observed_multiaddr: Multiaddr | None = None, observed_multiaddr: Multiaddr | None = None,
use_varint_format: bool = True,
) -> None: ) -> None:
""" """
Push an identify message to multiple peers in parallel. Push an identify message to multiple peers in parallel.
If peer_ids is None, push to all connected peers. If peer_ids is None, push to all connected peers.
Args:
host: The libp2p host.
peer_ids: Set of peer IDs to push to (if None, push to all connected peers).
observed_multiaddr: The observed multiaddress (optional).
use_varint_format: True=length-prefixed, False=raw protobuf.
""" """
if peer_ids is None: if peer_ids is None:
# Get all connected peers # Get all connected peers
peer_ids = set(host.get_connected_peers()) peer_ids = set(host.get_connected_peers())
# Create a single shared semaphore for concurrency control
limit = trio.Semaphore(CONCURRENCY_LIMIT)
# Push to each peer in parallel using a trio.Nursery # Push to each peer in parallel using a trio.Nursery
# limiting concurrent connections to CONCURRENCY_LIMIT # limiting concurrent connections to 10
async with trio.open_nursery() as nursery: async with trio.open_nursery() as nursery:
for peer_id in peer_ids: for peer_id in peer_ids:
nursery.start_soon( nursery.start_soon(push_identify_to_peer, host, peer_id, observed_multiaddr)
push_identify_to_peer,
host,
peer_id,
observed_multiaddr,
limit,
use_varint_format,
)
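
The removed code above used one shared semaphore across a nursery to bound the fan-out. A minimal, runnable sketch of that concurrency pattern in isolation (names are illustrative, not the library function itself):

import trio

async def fan_out(items, worker, limit_size=10):
    # One shared Semaphore so at most `limit_size` pushes run at once.
    limit = trio.Semaphore(limit_size)

    async def bounded(item):
        async with limit:
            await worker(item)

    async with trio.open_nursery() as nursery:
        for item in items:
            nursery.start_soon(bounded, item)

async def demo():
    async def worker(i):
        await trio.sleep(0.01)
        print("pushed", i)
    await fan_out(range(25), worker, limit_size=10)

trio.run(demo)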

View File

@ -5,7 +5,6 @@ This module provides a complete Distributed Hash Table (DHT)
implementation based on the Kademlia algorithm and protocol. implementation based on the Kademlia algorithm and protocol.
""" """
from collections.abc import Awaitable, Callable
from enum import ( from enum import (
Enum, Enum,
) )
@ -21,7 +20,6 @@ import varint
from libp2p.abc import ( from libp2p.abc import (
IHost, IHost,
) )
from libp2p.discovery.random_walk.rt_refresh_manager import RTRefreshManager
from libp2p.network.stream.net_stream import ( from libp2p.network.stream.net_stream import (
INetStream, INetStream,
) )
@ -75,27 +73,14 @@ class KadDHT(Service):
This class provides a DHT implementation that combines routing table management, This class provides a DHT implementation that combines routing table management,
peer discovery, content routing, and value storage. peer discovery, content routing, and value storage.
The optional Random Walk feature enhances peer discovery by automatically
performing periodic random queries to discover new peers and maintain
routing table health.
Example:
# Basic DHT without random walk (default)
dht = KadDHT(host, DHTMode.SERVER)
# DHT with random walk enabled for enhanced peer discovery
dht = KadDHT(host, DHTMode.SERVER, enable_random_walk=True)
""" """
def __init__(self, host: IHost, mode: DHTMode, enable_random_walk: bool = False): def __init__(self, host: IHost, mode: DHTMode):
""" """
Initialize a new Kademlia DHT node. Initialize a new Kademlia DHT node.
:param host: The libp2p host. :param host: The libp2p host.
:param mode: The mode of host (Client or Server) - must be DHTMode enum :param mode: The mode of host (Client or Server) - must be DHTMode enum
:param enable_random_walk: Whether to enable automatic random walk
""" """
super().__init__() super().__init__()
@ -107,7 +92,6 @@ class KadDHT(Service):
raise TypeError(f"mode must be DHTMode enum, got {type(mode)}") raise TypeError(f"mode must be DHTMode enum, got {type(mode)}")
self.mode = mode self.mode = mode
self.enable_random_walk = enable_random_walk
# Initialize the routing table # Initialize the routing table
self.routing_table = RoutingTable(self.local_peer_id, self.host) self.routing_table = RoutingTable(self.local_peer_id, self.host)
@ -124,56 +108,13 @@ class KadDHT(Service):
# Last time we republished provider records # Last time we republished provider records
self._last_provider_republish = time.time() self._last_provider_republish = time.time()
# Initialize RT Refresh Manager (only if random walk is enabled)
self.rt_refresh_manager: RTRefreshManager | None = None
if self.enable_random_walk:
self.rt_refresh_manager = RTRefreshManager(
host=self.host,
routing_table=self.routing_table,
local_peer_id=self.local_peer_id,
query_function=self._create_query_function(),
enable_auto_refresh=True,
)
# Set protocol handlers # Set protocol handlers
host.set_stream_handler(PROTOCOL_ID, self.handle_stream) host.set_stream_handler(PROTOCOL_ID, self.handle_stream)
def _create_query_function(self) -> Callable[[bytes], Awaitable[list[ID]]]:
"""
Create a query function that wraps peer_routing.find_closest_peers_network.
This function is used by the RandomWalk module to query for peers without
directly importing PeerRouting, avoiding circular import issues.
Returns:
Callable that takes target_key bytes and returns list of peer IDs
"""
async def query_function(target_key: bytes) -> list[ID]:
"""Query for closest peers to target key."""
return await self.peer_routing.find_closest_peers_network(target_key)
return query_function
async def run(self) -> None: async def run(self) -> None:
"""Run the DHT service.""" """Run the DHT service."""
logger.info(f"Starting Kademlia DHT with peer ID {self.local_peer_id}") logger.info(f"Starting Kademlia DHT with peer ID {self.local_peer_id}")
# Start the RT Refresh Manager in parallel with the main DHT service
async with trio.open_nursery() as nursery:
# Start the RT Refresh Manager only if random walk is enabled
if self.rt_refresh_manager is not None:
nursery.start_soon(self.rt_refresh_manager.start)
logger.info("RT Refresh Manager started - Random Walk is now active")
else:
logger.info("Random Walk is disabled - RT Refresh Manager not started")
# Start the main DHT service loop
nursery.start_soon(self._run_main_loop)
async def _run_main_loop(self) -> None:
"""Run the main DHT service loop."""
# Main service loop # Main service loop
while self.manager.is_running: while self.manager.is_running:
# Periodically refresh the routing table # Periodically refresh the routing table
@ -194,17 +135,6 @@ class KadDHT(Service):
# Wait before next maintenance cycle # Wait before next maintenance cycle
await trio.sleep(ROUTING_TABLE_REFRESH_INTERVAL) await trio.sleep(ROUTING_TABLE_REFRESH_INTERVAL)
async def stop(self) -> None:
"""Stop the DHT service and cleanup resources."""
logger.info("Stopping Kademlia DHT")
# Stop the RT Refresh Manager only if it was started
if self.rt_refresh_manager is not None:
await self.rt_refresh_manager.stop()
logger.info("RT Refresh Manager stopped")
else:
logger.info("RT Refresh Manager was not running (Random Walk disabled)")
async def switch_mode(self, new_mode: DHTMode) -> DHTMode: async def switch_mode(self, new_mode: DHTMode) -> DHTMode:
""" """
Switch the DHT mode. Switch the DHT mode.
@ -684,15 +614,3 @@ class KadDHT(Service):
""" """
return self.value_store.size() return self.value_store.size()
def is_random_walk_enabled(self) -> bool:
"""
Check if random walk peer discovery is enabled.
Returns
-------
bool
True if random walk is enabled, False otherwise.
"""
return self.enable_random_walk

View File

@ -2,10 +2,10 @@
# Generated by the protocol buffer compiler. DO NOT EDIT! # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/kad_dht/pb/kademlia.proto # source: libp2p/kad_dht/pb/kademlia.proto
"""Generated protocol buffer code.""" """Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports) # @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default() _sym_db = _symbol_database.Default()
@ -15,19 +15,19 @@ _sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/kad_dht/pb/kademlia.proto\":\n\x06Record\x12\x0b\n\x03key\x18\x01 \x01(\x0c\x12\r\n\x05value\x18\x02 \x01(\x0c\x12\x14\n\x0ctimeReceived\x18\x05 \x01(\t\"\xca\x03\n\x07Message\x12\"\n\x04type\x18\x01 \x01(\x0e\x32\x14.Message.MessageType\x12\x17\n\x0f\x63lusterLevelRaw\x18\n \x01(\x05\x12\x0b\n\x03key\x18\x02 \x01(\x0c\x12\x17\n\x06record\x18\x03 \x01(\x0b\x32\x07.Record\x12\"\n\x0b\x63loserPeers\x18\x08 \x03(\x0b\x32\r.Message.Peer\x12$\n\rproviderPeers\x18\t \x03(\x0b\x32\r.Message.Peer\x1aN\n\x04Peer\x12\n\n\x02id\x18\x01 \x01(\x0c\x12\r\n\x05\x61\x64\x64rs\x18\x02 \x03(\x0c\x12+\n\nconnection\x18\x03 \x01(\x0e\x32\x17.Message.ConnectionType\"i\n\x0bMessageType\x12\r\n\tPUT_VALUE\x10\x00\x12\r\n\tGET_VALUE\x10\x01\x12\x10\n\x0c\x41\x44\x44_PROVIDER\x10\x02\x12\x11\n\rGET_PROVIDERS\x10\x03\x12\r\n\tFIND_NODE\x10\x04\x12\x08\n\x04PING\x10\x05\"W\n\x0e\x43onnectionType\x12\x11\n\rNOT_CONNECTED\x10\x00\x12\r\n\tCONNECTED\x10\x01\x12\x0f\n\x0b\x43\x41N_CONNECT\x10\x02\x12\x12\n\x0e\x43\x41NNOT_CONNECT\x10\x03\x62\x06proto3') DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/kad_dht/pb/kademlia.proto\":\n\x06Record\x12\x0b\n\x03key\x18\x01 \x01(\x0c\x12\r\n\x05value\x18\x02 \x01(\x0c\x12\x14\n\x0ctimeReceived\x18\x05 \x01(\t\"\xca\x03\n\x07Message\x12\"\n\x04type\x18\x01 \x01(\x0e\x32\x14.Message.MessageType\x12\x17\n\x0f\x63lusterLevelRaw\x18\n \x01(\x05\x12\x0b\n\x03key\x18\x02 \x01(\x0c\x12\x17\n\x06record\x18\x03 \x01(\x0b\x32\x07.Record\x12\"\n\x0b\x63loserPeers\x18\x08 \x03(\x0b\x32\r.Message.Peer\x12$\n\rproviderPeers\x18\t \x03(\x0b\x32\r.Message.Peer\x1aN\n\x04Peer\x12\n\n\x02id\x18\x01 \x01(\x0c\x12\r\n\x05\x61\x64\x64rs\x18\x02 \x03(\x0c\x12+\n\nconnection\x18\x03 \x01(\x0e\x32\x17.Message.ConnectionType\"i\n\x0bMessageType\x12\r\n\tPUT_VALUE\x10\x00\x12\r\n\tGET_VALUE\x10\x01\x12\x10\n\x0c\x41\x44\x44_PROVIDER\x10\x02\x12\x11\n\rGET_PROVIDERS\x10\x03\x12\r\n\tFIND_NODE\x10\x04\x12\x08\n\x04PING\x10\x05\"W\n\x0e\x43onnectionType\x12\x11\n\rNOT_CONNECTED\x10\x00\x12\r\n\tCONNECTED\x10\x01\x12\x0f\n\x0b\x43\x41N_CONNECT\x10\x02\x12\x12\n\x0e\x43\x41NNOT_CONNECT\x10\x03\x62\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _globals = globals()
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.kad_dht.pb.kademlia_pb2', globals()) _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.kad_dht.pb.kademlia_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False: if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None DESCRIPTOR._options = None
_RECORD._serialized_start=36 _globals['_RECORD']._serialized_start=36
_RECORD._serialized_end=94 _globals['_RECORD']._serialized_end=94
_MESSAGE._serialized_start=97 _globals['_MESSAGE']._serialized_start=97
_MESSAGE._serialized_end=555 _globals['_MESSAGE']._serialized_end=555
_MESSAGE_PEER._serialized_start=281 _globals['_MESSAGE_PEER']._serialized_start=281
_MESSAGE_PEER._serialized_end=359 _globals['_MESSAGE_PEER']._serialized_end=359
_MESSAGE_MESSAGETYPE._serialized_start=361 _globals['_MESSAGE_MESSAGETYPE']._serialized_start=361
_MESSAGE_MESSAGETYPE._serialized_end=466 _globals['_MESSAGE_MESSAGETYPE']._serialized_end=466
_MESSAGE_CONNECTIONTYPE._serialized_start=468 _globals['_MESSAGE_CONNECTIONTYPE']._serialized_start=468
_MESSAGE_CONNECTIONTYPE._serialized_end=555 _globals['_MESSAGE_CONNECTIONTYPE']._serialized_end=555
# @@protoc_insertion_point(module_scope) # @@protoc_insertion_point(module_scope)

View File

@ -170,7 +170,7 @@ class PeerRouting(IPeerRouting):
# Return early if we have no peers to start with # Return early if we have no peers to start with
if not closest_peers: if not closest_peers:
logger.debug("No local peers available for network lookup") logger.warning("No local peers available for network lookup")
return [] return []
# Iterative lookup until convergence # Iterative lookup until convergence

View File

@ -8,7 +8,6 @@ from collections import (
import logging import logging
import time import time
import multihash
import trio import trio
from libp2p.abc import ( from libp2p.abc import (
@ -41,22 +40,6 @@ PEER_REFRESH_INTERVAL = 60 # Interval to refresh peers in seconds
STALE_PEER_THRESHOLD = 3600 # Time in seconds after which a peer is considered stale STALE_PEER_THRESHOLD = 3600 # Time in seconds after which a peer is considered stale
def peer_id_to_key(peer_id: ID) -> bytes:
"""
Convert a peer ID to a 256-bit key for routing table operations.
This normalizes all peer IDs to exactly 256 bits by hashing them with SHA-256.
:param peer_id: The peer ID to convert
:return: 32-byte (256-bit) key for routing table operations
"""
return multihash.digest(peer_id.to_bytes(), "sha2-256").digest
def key_to_int(key: bytes) -> int:
"""Convert a 256-bit key to an integer for range calculations."""
return int.from_bytes(key, byteorder="big")
class KBucket: class KBucket:
""" """
A k-bucket implementation for the Kademlia DHT. A k-bucket implementation for the Kademlia DHT.
@ -374,24 +357,9 @@ class KBucket:
True if the key is in range, False otherwise True if the key is in range, False otherwise
""" """
key_int = key_to_int(key) key_int = int.from_bytes(key, byteorder="big")
return self.min_range <= key_int < self.max_range return self.min_range <= key_int < self.max_range
def peer_id_in_range(self, peer_id: ID) -> bool:
"""
Check if a peer ID is in the range of this bucket.
:param peer_id: The peer ID to check
Returns
-------
bool
True if the peer ID is in range, False otherwise
"""
key = peer_id_to_key(peer_id)
return self.key_in_range(key)
def split(self) -> tuple["KBucket", "KBucket"]: def split(self) -> tuple["KBucket", "KBucket"]:
""" """
Split the bucket into two buckets. Split the bucket into two buckets.
@ -408,9 +376,8 @@ class KBucket:
# Redistribute peers # Redistribute peers
for peer_id, (peer_info, timestamp) in self.peers.items(): for peer_id, (peer_info, timestamp) in self.peers.items():
peer_key = peer_id_to_key(peer_id) peer_key = int.from_bytes(peer_id.to_bytes(), byteorder="big")
peer_key_int = key_to_int(peer_key) if peer_key < midpoint:
if peer_key_int < midpoint:
lower_bucket.peers[peer_id] = (peer_info, timestamp) lower_bucket.peers[peer_id] = (peer_info, timestamp)
else: else:
upper_bucket.peers[peer_id] = (peer_info, timestamp) upper_bucket.peers[peer_id] = (peer_info, timestamp)
@ -491,38 +458,7 @@ class RoutingTable:
success = await bucket.add_peer(peer_info) success = await bucket.add_peer(peer_info)
if success: if success:
logger.debug(f"Successfully added peer {peer_id} to routing table") logger.debug(f"Successfully added peer {peer_id} to routing table")
return True return success
# If bucket is full and couldn't add peer, try splitting the bucket
# Only split if the bucket contains our Peer ID
if self._should_split_bucket(bucket):
logger.debug(
f"Bucket is full, attempting to split bucket for peer {peer_id}"
)
split_success = self._split_bucket(bucket)
if split_success:
# After splitting,
# find the appropriate bucket for the peer and try to add it
target_bucket = self.find_bucket(peer_info.peer_id)
success = await target_bucket.add_peer(peer_info)
if success:
logger.debug(
f"Successfully added peer {peer_id} after bucket split"
)
return True
else:
logger.debug(
f"Failed to add peer {peer_id} even after bucket split"
)
return False
else:
logger.debug(f"Failed to split bucket for peer {peer_id}")
return False
else:
logger.debug(
f"Bucket is full and cannot be split, peer {peer_id} not added"
)
return False
except Exception as e: except Exception as e:
logger.debug(f"Error adding peer {peer_obj} to routing table: {e}") logger.debug(f"Error adding peer {peer_obj} to routing table: {e}")
@ -544,9 +480,9 @@ class RoutingTable:
def find_bucket(self, peer_id: ID) -> KBucket: def find_bucket(self, peer_id: ID) -> KBucket:
""" """
Find the bucket that would contain the given peer ID. Find the bucket that would contain the given peer ID or PeerInfo.
:param peer_id: The peer ID to find a bucket for :param peer_obj: Either a peer ID or a PeerInfo object
Returns Returns
------- -------
@ -554,7 +490,7 @@ class RoutingTable:
""" """
for bucket in self.buckets: for bucket in self.buckets:
if bucket.peer_id_in_range(peer_id): if bucket.key_in_range(peer_id.to_bytes()):
return bucket return bucket
return self.buckets[0] return self.buckets[0]
@ -577,11 +513,7 @@ class RoutingTable:
all_peers.extend(bucket.peer_ids()) all_peers.extend(bucket.peer_ids())
# Sort by XOR distance to the key # Sort by XOR distance to the key
def distance_to_key(peer_id: ID) -> int: all_peers.sort(key=lambda p: xor_distance(p.to_bytes(), key))
peer_key = peer_id_to_key(peer_id)
return xor_distance(peer_key, key)
all_peers.sort(key=distance_to_key)
return all_peers[:count] return all_peers[:count]
@ -659,20 +591,6 @@ class RoutingTable:
stale_peers.extend(bucket.get_stale_peers(stale_threshold_seconds)) stale_peers.extend(bucket.get_stale_peers(stale_threshold_seconds))
return stale_peers return stale_peers
def get_peer_infos(self) -> list[PeerInfo]:
"""
Get all PeerInfo objects in the routing table.
Returns
-------
List[PeerInfo]: List of all PeerInfo objects
"""
peer_infos = []
for bucket in self.buckets:
peer_infos.extend(bucket.peer_infos())
return peer_infos
def cleanup_routing_table(self) -> None: def cleanup_routing_table(self) -> None:
""" """
Cleanup the routing table by removing all data. Cleanup the routing table by removing all data.
@ -680,66 +598,3 @@ class RoutingTable:
""" """
self.buckets = [KBucket(self.host, BUCKET_SIZE)] self.buckets = [KBucket(self.host, BUCKET_SIZE)]
logger.info("Routing table cleaned up, all data removed.") logger.info("Routing table cleaned up, all data removed.")
def _should_split_bucket(self, bucket: KBucket) -> bool:
"""
Check if a bucket should be split according to Kademlia rules.
:param bucket: The bucket to check
:return: True if the bucket should be split
"""
# Check if we've reached the maximum number of buckets
if len(self.buckets) >= MAXIMUM_BUCKETS:
logger.debug("Maximum number of buckets reached, cannot split")
return False
# Check if the bucket contains our local ID
local_key = peer_id_to_key(self.local_id)
local_key_int = key_to_int(local_key)
contains_local_id = bucket.min_range <= local_key_int < bucket.max_range
logger.debug(
f"Bucket range: {bucket.min_range} - {bucket.max_range}, "
f"local_key_int: {local_key_int}, contains_local: {contains_local_id}"
)
return contains_local_id
def _split_bucket(self, bucket: KBucket) -> bool:
"""
Split a bucket into two buckets.
:param bucket: The bucket to split
:return: True if the bucket was successfully split
"""
try:
# Find the bucket index
bucket_index = self.buckets.index(bucket)
logger.debug(f"Splitting bucket at index {bucket_index}")
# Split the bucket
lower_bucket, upper_bucket = bucket.split()
# Replace the original bucket with the two new buckets
self.buckets[bucket_index] = lower_bucket
self.buckets.insert(bucket_index + 1, upper_bucket)
logger.debug(
f"Bucket split successful. New bucket count: {len(self.buckets)}"
)
logger.debug(
f"Lower bucket range: "
f"{lower_bucket.min_range} - {lower_bucket.max_range}, "
f"peers: {lower_bucket.size()}"
)
logger.debug(
f"Upper bucket range: "
f"{upper_bucket.min_range} - {upper_bucket.max_range}, "
f"peers: {upper_bucket.size()}"
)
return True
except Exception as e:
logger.error(f"Error splitting bucket: {e}")
return False
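
A toy illustration of the split arithmetic used above, assuming the midpoint convention from KBucket.split (ranges only, no peers):

# A bucket covering [min_range, max_range) splits at the midpoint, and each
# key lands in the half whose range contains it.
min_range, max_range = 0, 2**256
midpoint = (min_range + max_range) // 2

lower = (min_range, midpoint)          # keys with a leading 0 bit
upper = (midpoint, max_range)          # keys with a leading 1 bit

key = int.from_bytes(b"\x80" + b"\x00" * 31, "big")   # top bit set
assert upper[0] <= key < upper[1]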

View File

@ -3,7 +3,6 @@ from typing import (
TYPE_CHECKING, TYPE_CHECKING,
) )
from multiaddr import Multiaddr
import trio import trio
from libp2p.abc import ( from libp2p.abc import (
@ -23,8 +22,7 @@ if TYPE_CHECKING:
""" """
Reference: https://github.com/libp2p/go-libp2p-swarm/blob/ Reference: https://github.com/libp2p/go-libp2p-swarm/blob/04c86bbdafd390651cb2ee14e334f7caeedad722/swarm_conn.go
04c86bbdafd390651cb2ee14e334f7caeedad722/swarm_conn.go
""" """
@ -44,21 +42,6 @@ class SwarmConn(INetConn):
self.streams = set() self.streams = set()
self.event_closed = trio.Event() self.event_closed = trio.Event()
self.event_started = trio.Event() self.event_started = trio.Event()
# Provide back-references/hooks expected by NetStream
try:
setattr(self.muxed_conn, "swarm", self.swarm)
# NetStream expects an awaitable remove_stream hook
async def _remove_stream_hook(stream: NetStream) -> None:
self.remove_stream(stream)
setattr(self.muxed_conn, "remove_stream", _remove_stream_hook)
except Exception as e:
logging.warning(
f"Failed to set optional conveniences on muxed_conn "
f"for peer {muxed_conn.peer_id}: {e}"
)
# optional conveniences
if hasattr(muxed_conn, "on_close"): if hasattr(muxed_conn, "on_close"):
logging.debug(f"Setting on_close for peer {muxed_conn.peer_id}") logging.debug(f"Setting on_close for peer {muxed_conn.peer_id}")
setattr(muxed_conn, "on_close", self._on_muxed_conn_closed) setattr(muxed_conn, "on_close", self._on_muxed_conn_closed)
@ -164,24 +147,6 @@ class SwarmConn(INetConn):
def get_streams(self) -> tuple[NetStream, ...]: def get_streams(self) -> tuple[NetStream, ...]:
return tuple(self.streams) return tuple(self.streams)
def get_transport_addresses(self) -> list[Multiaddr]:
"""
Retrieve the transport addresses used by this connection.
Returns
-------
list[Multiaddr]
A list of multiaddresses used by the transport.
"""
# Return the addresses from the peerstore for this peer
try:
peer_id = self.muxed_conn.peer_id
return self.swarm.peerstore.addrs(peer_id)
except Exception as e:
logging.warning(f"Error getting transport addresses: {e}")
return []
def remove_stream(self, stream: NetStream) -> None: def remove_stream(self, stream: NetStream) -> None:
if stream not in self.streams: if stream not in self.streams:
return return

View File

@ -1,7 +1,3 @@
from collections.abc import (
Awaitable,
Callable,
)
import logging import logging
from multiaddr import ( from multiaddr import (
@ -249,11 +245,9 @@ class Swarm(Service, INetworkService):
# We need to wait until `self.listener_nursery` is created. # We need to wait until `self.listener_nursery` is created.
await self.event_listener_nursery_created.wait() await self.event_listener_nursery_created.wait()
success_count = 0
for maddr in multiaddrs: for maddr in multiaddrs:
if str(maddr) in self.listeners: if str(maddr) in self.listeners:
success_count += 1 return True
continue
async def conn_handler( async def conn_handler(
read_write_closer: ReadWriteCloser, maddr: Multiaddr = maddr read_write_closer: ReadWriteCloser, maddr: Multiaddr = maddr
@ -304,14 +298,13 @@ class Swarm(Service, INetworkService):
# Call notifiers since event occurred # Call notifiers since event occurred
await self.notify_listen(maddr) await self.notify_listen(maddr)
success_count += 1 return True
logger.debug("successfully started listening on: %s", maddr)
except OSError: except OSError:
# Failed. Continue looping. # Failed. Continue looping.
logger.debug("fail to listen on: %s", maddr) logger.debug("fail to listen on: %s", maddr)
# Return true if at least one address succeeded # No maddr succeeded
return success_count > 0 return False
async def close(self) -> None: async def close(self) -> None:
""" """
@ -333,16 +326,8 @@ class Swarm(Service, INetworkService):
# Close all listeners # Close all listeners
if hasattr(self, "listeners"): if hasattr(self, "listeners"):
for maddr_str, listener in self.listeners.items(): for listener in self.listeners.values():
await listener.close() await listener.close()
# Notify about listener closure
try:
multiaddr = Multiaddr(maddr_str)
await self.notify_listen_close(multiaddr)
except Exception as e:
logger.warning(
f"Failed to notify listen_close for {maddr_str}: {e}"
)
self.listeners.clear() self.listeners.clear()
# Close the transport if it exists and has a close method # Close the transport if it exists and has a close method
@ -426,17 +411,7 @@ class Swarm(Service, INetworkService):
nursery.start_soon(notifee.listen, self, multiaddr) nursery.start_soon(notifee.listen, self, multiaddr)
async def notify_closed_stream(self, stream: INetStream) -> None: async def notify_closed_stream(self, stream: INetStream) -> None:
async with trio.open_nursery() as nursery: raise NotImplementedError
for notifee in self.notifees:
nursery.start_soon(notifee.closed_stream, self, stream)
async def notify_listen_close(self, multiaddr: Multiaddr) -> None: async def notify_listen_close(self, multiaddr: Multiaddr) -> None:
async with trio.open_nursery() as nursery: raise NotImplementedError
for notifee in self.notifees:
nursery.start_soon(notifee.listen_close, self, multiaddr)
# Generic notifier used by NetStream._notify_closed
async def notify_all(self, notifier: Callable[[INotifee], Awaitable[None]]) -> None:
async with trio.open_nursery() as nursery:
for notifee in self.notifees:
nursery.start_soon(notifier, notifee)
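
A hypothetical call site for notify_all, mirroring the NetStream._notify_closed use mentioned in the comment above:

# Sketch only: fans the closed-stream event out to every registered notifee.
async def _notify_closed(self) -> None:
    async def notifier(notifee: INotifee) -> None:
        await notifee.closed_stream(self.swarm, self)

    await self.swarm.notify_all(notifier)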

View File

@ -1,271 +0,0 @@
from typing import Any, cast
from libp2p.crypto.ed25519 import Ed25519PublicKey
from libp2p.crypto.keys import PrivateKey, PublicKey
from libp2p.crypto.rsa import RSAPublicKey
from libp2p.crypto.secp256k1 import Secp256k1PublicKey
import libp2p.peer.pb.crypto_pb2 as cryto_pb
import libp2p.peer.pb.envelope_pb2 as pb
import libp2p.peer.pb.peer_record_pb2 as record_pb
from libp2p.peer.peer_record import (
PeerRecord,
peer_record_from_protobuf,
unmarshal_record,
)
from libp2p.utils.varint import encode_uvarint
ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_CODEC = b"\x03\x01"
class Envelope:
"""
A signed wrapper around a serialized libp2p record.
Envelopes are cryptographically signed by the author's private key
and are scoped to a specific 'domain' to prevent cross-protocol replay.
Attributes:
public_key: The public key that can verify the envelope's signature.
payload_type: A multicodec code identifying the type of payload inside.
raw_payload: The raw serialized record data.
signature: Signature over the domain-scoped payload content.
"""
public_key: PublicKey
payload_type: bytes
raw_payload: bytes
signature: bytes
_cached_record: PeerRecord | None = None
_unmarshal_error: Exception | None = None
def __init__(
self,
public_key: PublicKey,
payload_type: bytes,
raw_payload: bytes,
signature: bytes,
):
self.public_key = public_key
self.payload_type = payload_type
self.raw_payload = raw_payload
self.signature = signature
def marshal_envelope(self) -> bytes:
"""
Serialize this Envelope into its protobuf wire format.
Converts all envelope fields into a `pb.Envelope` protobuf message
and returns the serialized bytes.
:return: Serialized envelope as bytes.
"""
pb_env = pb.Envelope(
public_key=pub_key_to_protobuf(self.public_key),
payload_type=self.payload_type,
payload=self.raw_payload,
signature=self.signature,
)
return pb_env.SerializeToString()
def validate(self, domain: str) -> None:
"""
Verify the envelope's signature within the given domain scope.
This ensures that the envelope has not been tampered with
and was signed under the correct usage context.
:param domain: Domain string that contextualizes the signature.
:raises ValueError: If the signature is invalid.
"""
unsigned = make_unsigned(domain, self.payload_type, self.raw_payload)
if not self.public_key.verify(unsigned, self.signature):
raise ValueError("Invalid envelope signature")
def record(self) -> PeerRecord:
"""
Lazily decode and return the embedded PeerRecord.
This method unmarshals the payload bytes into a `PeerRecord` instance,
using the registered codec to identify the type. The decoded result
is cached for future use.
:return: Decoded PeerRecord object.
:raises Exception: If decoding fails or payload type is unsupported.
"""
if self._cached_record is not None:
return self._cached_record
try:
if self.payload_type != PEER_RECORD_CODEC:
raise ValueError("Unsuported payload type in envelope")
msg = record_pb.PeerRecord()
msg.ParseFromString(self.raw_payload)
self._cached_record = peer_record_from_protobuf(msg)
return self._cached_record
except Exception as e:
self._unmarshal_error = e
raise
def equal(self, other: Any) -> bool:
"""
Compare this Envelope with another for structural equality.
Two envelopes are considered equal if:
- They have the same public key
- The payload type and payload bytes match
- Their signatures are identical
:param other: Another object to compare.
:return: True if equal, False otherwise.
"""
if isinstance(other, Envelope):
return (
self.public_key.__eq__(other.public_key)
and self.payload_type == other.payload_type
and self.signature == other.signature
and self.raw_payload == other.raw_payload
)
return False
def pub_key_to_protobuf(pub_key: PublicKey) -> cryto_pb.PublicKey:
"""
Convert a Python PublicKey object to its protobuf equivalent.
:param pub_key: A libp2p-compatible PublicKey instance.
:return: Serialized protobuf PublicKey message.
"""
internal_key_type = pub_key.get_type()
key_type = cast(cryto_pb.KeyType, internal_key_type.value)
data = pub_key.to_bytes()
protobuf_key = cryto_pb.PublicKey(Type=key_type, Data=data)
return protobuf_key
def pub_key_from_protobuf(pb_key: cryto_pb.PublicKey) -> PublicKey:
"""
Parse a protobuf PublicKey message into a native libp2p PublicKey.
Supports Ed25519, RSA, and Secp256k1 key types.
:param pb_key: Protobuf representation of a public key.
:return: Parsed PublicKey object.
:raises ValueError: If the key type is unrecognized.
"""
if pb_key.Type == cryto_pb.KeyType.Ed25519:
return Ed25519PublicKey.from_bytes(pb_key.Data)
elif pb_key.Type == cryto_pb.KeyType.RSA:
return RSAPublicKey.from_bytes(pb_key.Data)
elif pb_key.Type == cryto_pb.KeyType.Secp256k1:
return Secp256k1PublicKey.from_bytes(pb_key.Data)
# libp2p.crypto.ecdsa not implemented
else:
raise ValueError(f"Unknown key type: {pb_key.Type}")
def seal_record(record: PeerRecord, private_key: PrivateKey) -> Envelope:
"""
Create and sign a new Envelope from a PeerRecord.
The record is serialized and signed in the scope of its domain and codec.
The result is a self-contained, verifiable Envelope.
:param record: A PeerRecord to encapsulate.
:param private_key: The signer's private key.
:return: A signed Envelope instance.
"""
payload = record.marshal_record()
unsigned = make_unsigned(record.domain(), record.codec(), payload)
signature = private_key.sign(unsigned)
return Envelope(
public_key=private_key.get_public_key(),
payload_type=record.codec(),
raw_payload=payload,
signature=signature,
)
def consume_envelope(data: bytes, domain: str) -> tuple[Envelope, PeerRecord]:
"""
Parse, validate, and decode an Envelope from bytes.
Validates the envelope's signature using the given domain and decodes
the inner payload into a PeerRecord.
:param data: Serialized envelope bytes.
:param domain: Domain string to verify signature against.
:return: Tuple of (Envelope, PeerRecord).
:raises ValueError: If signature validation or decoding fails.
"""
env = unmarshal_envelope(data)
env.validate(domain)
record = env.record()
return env, record
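
A sketch of the intended seal-to-verify roundtrip using only this module's API; the Ed25519 key-generation line is assumed from libp2p.crypto.ed25519:

from libp2p.crypto.ed25519 import create_new_key_pair

key_pair = create_new_key_pair()
record = PeerRecord(peer_id=ID.from_pubkey(key_pair.public_key), addrs=[])
envelope = seal_record(record, key_pair.private_key)

wire_bytes = envelope.marshal_envelope()        # what goes on the wire
env2, rec2 = consume_envelope(wire_bytes, ENVELOPE_DOMAIN)
assert rec2.peer_id == record.peer_id           # signature verified, record intact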
def unmarshal_envelope(data: bytes) -> Envelope:
"""
Deserialize an Envelope from its wire format.
This parses the protobuf fields without verifying the signature.
:param data: Serialized envelope bytes.
:return: Parsed Envelope object.
:raises DecodeError: If protobuf parsing fails.
"""
pb_env = pb.Envelope()
pb_env.ParseFromString(data)
pk = pub_key_from_protobuf(pb_env.public_key)
return Envelope(
public_key=pk,
payload_type=pb_env.payload_type,
raw_payload=pb_env.payload,
signature=pb_env.signature,
)
def make_unsigned(domain: str, payload_type: bytes, payload: bytes) -> bytes:
"""
Build a byte buffer to be signed for an Envelope.
The unsigned byte structure is:
varint(len(domain)) || domain ||
varint(len(payload_type)) || payload_type ||
varint(len(payload)) || payload
This is the exact input used during signing and verification.
:param domain: Domain string for signature scoping.
:param payload_type: Identifier for the type of payload.
:param payload: Raw serialized payload bytes.
:return: Byte buffer to be signed or verified.
"""
fields = [domain.encode(), payload_type, payload]
buf = bytearray()
for field in fields:
buf.extend(encode_uvarint(len(field)))
buf.extend(field)
return bytes(buf)
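
A worked example of that layout for short fields, where each uvarint length prefix fits in one byte:

domain = "libp2p-peer-record"       # 18 bytes -> length prefix 0x12
payload_type = b"\x03\x01"          # 2 bytes  -> length prefix 0x02
payload = b"\x0a\x00"               # illustrative 2-byte payload -> 0x02

buf = make_unsigned(domain, payload_type, payload)
assert buf == b"\x12" + domain.encode() + b"\x02" + payload_type + b"\x02" + payload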
def debug_dump_envelope(env: Envelope) -> None:
print("\n=== Envelope ===")
print(f"Payload Type: {env.payload_type!r}")
print(f"Signature: {env.signature.hex()} ({len(env.signature)} bytes)")
print(f"Raw Payload: {env.raw_payload.hex()} ({len(env.raw_payload)} bytes)")
try:
peer_record = unmarshal_record(env.raw_payload)
print("\n=== Parsed PeerRecord ===")
print(peer_record)
except Exception as e:
print("Failed to parse PeerRecord:", e)

View File

@ -1,4 +1,3 @@
import functools
import hashlib import hashlib
import base58 import base58
@ -37,23 +36,25 @@ if ENABLE_INLINING:
class ID: class ID:
_bytes: bytes _bytes: bytes
_xor_id: int | None = None
_b58_str: str | None = None
def __init__(self, peer_id_bytes: bytes) -> None: def __init__(self, peer_id_bytes: bytes) -> None:
self._bytes = peer_id_bytes self._bytes = peer_id_bytes
@functools.cached_property @property
def xor_id(self) -> int: def xor_id(self) -> int:
return int(sha256_digest(self._bytes).hex(), 16) if not self._xor_id:
self._xor_id = int(sha256_digest(self._bytes).hex(), 16)
@functools.cached_property return self._xor_id
def base58(self) -> str:
return base58.b58encode(self._bytes).decode()
def to_bytes(self) -> bytes: def to_bytes(self) -> bytes:
return self._bytes return self._bytes
def to_base58(self) -> str: def to_base58(self) -> str:
return self.base58 if not self._b58_str:
self._b58_str = base58.b58encode(self._bytes).decode()
return self._b58_str
def __repr__(self) -> str: def __repr__(self) -> str:
return f"<libp2p.peer.id.ID ({self!s})>" return f"<libp2p.peer.id.ID ({self!s})>"

View File

@ -1,22 +0,0 @@
syntax = "proto3";
package libp2p.peer.pb.crypto;
option go_package = "github.com/libp2p/go-libp2p/core/crypto/pb";
enum KeyType {
RSA = 0;
Ed25519 = 1;
Secp256k1 = 2;
ECDSA = 3;
}
message PublicKey {
KeyType Type = 1;
bytes Data = 2;
}
message PrivateKey {
KeyType Type = 1;
bytes Data = 2;
}

View File

@ -1,31 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/crypto.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1blibp2p/peer/pb/crypto.proto\x12\x15libp2p.peer.pb.crypto\"G\n\tPublicKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c\"H\n\nPrivateKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c*9\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x42,Z*github.com/libp2p/go-libp2p/core/crypto/pbb\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.crypto_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
_globals['DESCRIPTOR']._options = None
_globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/crypto/pb'
_globals['_KEYTYPE']._serialized_start=201
_globals['_KEYTYPE']._serialized_end=258
_globals['_PUBLICKEY']._serialized_start=54
_globals['_PUBLICKEY']._serialized_end=125
_globals['_PRIVATEKEY']._serialized_start=127
_globals['_PRIVATEKEY']._serialized_end=199
# @@protoc_insertion_point(module_scope)

View File

@ -1,33 +0,0 @@
from google.protobuf.internal import enum_type_wrapper as _enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Optional as _Optional, Union as _Union
DESCRIPTOR: _descriptor.FileDescriptor
class KeyType(int, metaclass=_enum_type_wrapper.EnumTypeWrapper):
__slots__ = ()
RSA: _ClassVar[KeyType]
Ed25519: _ClassVar[KeyType]
Secp256k1: _ClassVar[KeyType]
ECDSA: _ClassVar[KeyType]
RSA: KeyType
Ed25519: KeyType
Secp256k1: KeyType
ECDSA: KeyType
class PublicKey(_message.Message):
__slots__ = ("Type", "Data")
TYPE_FIELD_NUMBER: _ClassVar[int]
DATA_FIELD_NUMBER: _ClassVar[int]
Type: KeyType
Data: bytes
def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...
class PrivateKey(_message.Message):
__slots__ = ("Type", "Data")
TYPE_FIELD_NUMBER: _ClassVar[int]
DATA_FIELD_NUMBER: _ClassVar[int]
Type: KeyType
Data: bytes
def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...

View File

@ -1,14 +0,0 @@
syntax = "proto3";
package libp2p.peer.pb.record;
import "libp2p/peer/pb/crypto.proto";
option go_package = "github.com/libp2p/go-libp2p/core/record/pb";
message Envelope {
libp2p.peer.pb.crypto.PublicKey public_key = 1;
bytes payload_type = 2;
bytes payload = 3;
bytes signature = 5;
}

View File

@ -1,28 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/envelope.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from libp2p.peer.pb import crypto_pb2 as libp2p_dot_peer_dot_pb_dot_crypto__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/peer/pb/envelope.proto\x12\x15libp2p.peer.pb.record\x1a\x1blibp2p/peer/pb/crypto.proto\"z\n\x08\x45nvelope\x12\x34\n\npublic_key\x18\x01 \x01(\x0b\x32 .libp2p.peer.pb.crypto.PublicKey\x12\x14\n\x0cpayload_type\x18\x02 \x01(\x0c\x12\x0f\n\x07payload\x18\x03 \x01(\x0c\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x42,Z*github.com/libp2p/go-libp2p/core/record/pbb\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.envelope_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
_globals['DESCRIPTOR']._options = None
_globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/record/pb'
_globals['_ENVELOPE']._serialized_start=85
_globals['_ENVELOPE']._serialized_end=207
# @@protoc_insertion_point(module_scope)

View File

@ -1,18 +0,0 @@
from libp2p.peer.pb import crypto_pb2 as _crypto_pb2
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Mapping as _Mapping, Optional as _Optional, Union as _Union
DESCRIPTOR: _descriptor.FileDescriptor
class Envelope(_message.Message):
__slots__ = ("public_key", "payload_type", "payload", "signature")
PUBLIC_KEY_FIELD_NUMBER: _ClassVar[int]
PAYLOAD_TYPE_FIELD_NUMBER: _ClassVar[int]
PAYLOAD_FIELD_NUMBER: _ClassVar[int]
SIGNATURE_FIELD_NUMBER: _ClassVar[int]
public_key: _crypto_pb2.PublicKey
payload_type: bytes
payload: bytes
signature: bytes
def __init__(self, public_key: _Optional[_Union[_crypto_pb2.PublicKey, _Mapping]] = ..., payload_type: _Optional[bytes] = ..., payload: _Optional[bytes] = ..., signature: _Optional[bytes] = ...) -> None: ... # type: ignore[type-arg]

View File

@ -1,31 +0,0 @@
syntax = "proto3";
package peer.pb;
option go_package = "github.com/libp2p/go-libp2p/core/peer/pb";
// PeerRecord messages contain information that is useful to share with other peers.
// Currently, a PeerRecord contains the public listen addresses for a peer, but this
// is expected to expand to include other information in the future.
//
// PeerRecords are designed to be serialized to bytes and placed inside of
// SignedEnvelopes before sharing with other peers.
// See https://github.com/libp2p/go-libp2p/blob/master/core/record/pb/envelope.proto for
// the SignedEnvelope definition.
message PeerRecord {
// AddressInfo is a wrapper around a binary multiaddr. It is defined as a
// separate message to allow us to add per-address metadata in the future.
message AddressInfo {
bytes multiaddr = 1;
}
// peer_id contains a libp2p peer id in its binary representation.
bytes peer_id = 1;
// seq contains a monotonically-increasing sequence counter to order PeerRecords in time.
uint64 seq = 2;
// addresses is a list of public listen addresses for the peer.
repeated AddressInfo addresses = 3;
}

View File

@ -1,29 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/peer_record.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/peer/pb/peer_record.proto\x12\x07peer.pb\"\x80\x01\n\nPeerRecord\x12\x0f\n\x07peer_id\x18\x01 \x01(\x0c\x12\x0b\n\x03seq\x18\x02 \x01(\x04\x12\x32\n\taddresses\x18\x03 \x03(\x0b\x32\x1f.peer.pb.PeerRecord.AddressInfo\x1a \n\x0b\x41\x64\x64ressInfo\x12\x11\n\tmultiaddr\x18\x01 \x01(\x0c\x42*Z(github.com/libp2p/go-libp2p/core/peer/pbb\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.peer_record_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
_globals['DESCRIPTOR']._options = None
_globals['DESCRIPTOR']._serialized_options = b'Z(github.com/libp2p/go-libp2p/core/peer/pb'
_globals['_PEERRECORD']._serialized_start=46
_globals['_PEERRECORD']._serialized_end=174
_globals['_PEERRECORD_ADDRESSINFO']._serialized_start=142
_globals['_PEERRECORD_ADDRESSINFO']._serialized_end=174
# @@protoc_insertion_point(module_scope)

View File

@ -1,21 +0,0 @@
from google.protobuf.internal import containers as _containers
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Iterable as _Iterable, Mapping as _Mapping, Optional as _Optional, Union as _Union
DESCRIPTOR: _descriptor.FileDescriptor
class PeerRecord(_message.Message):
__slots__ = ("peer_id", "seq", "addresses")
class AddressInfo(_message.Message):
__slots__ = ("multiaddr",)
MULTIADDR_FIELD_NUMBER: _ClassVar[int]
multiaddr: bytes
def __init__(self, multiaddr: _Optional[bytes] = ...) -> None: ...
PEER_ID_FIELD_NUMBER: _ClassVar[int]
SEQ_FIELD_NUMBER: _ClassVar[int]
ADDRESSES_FIELD_NUMBER: _ClassVar[int]
peer_id: bytes
seq: int
addresses: _containers.RepeatedCompositeFieldContainer[PeerRecord.AddressInfo]
def __init__(self, peer_id: _Optional[bytes] = ..., seq: _Optional[int] = ..., addresses: _Optional[_Iterable[_Union[PeerRecord.AddressInfo, _Mapping]]] = ...) -> None: ... # type: ignore[type-arg]

View File

@ -1,251 +0,0 @@
from collections.abc import Sequence
import threading
import time
from typing import Any
from multiaddr import Multiaddr
from libp2p.abc import IPeerRecord
from libp2p.peer.id import ID
import libp2p.peer.pb.peer_record_pb2 as pb
from libp2p.peer.peerinfo import PeerInfo
PEER_RECORD_ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_ENVELOPE_PAYLOAD_TYPE = b"\x03\x01"
_last_timestamp_lock = threading.Lock()
_last_timestamp: int = 0
class PeerRecord(IPeerRecord):
"""
A record that contains metadata about a peer in the libp2p network.
This includes:
- `peer_id`: The peer's globally unique identifier.
- `addrs`: A list of the peer's publicly reachable multiaddrs.
- `seq`: A strictly monotonically increasing timestamp used
to order records over time.
PeerRecords are designed to be signed and transmitted in libp2p routing Envelopes.
"""
peer_id: ID
addrs: list[Multiaddr]
seq: int
def __init__(
self,
peer_id: ID | None = None,
addrs: list[Multiaddr] | None = None,
seq: int | None = None,
) -> None:
"""
Initialize a new PeerRecord.
If `seq` is not provided, a timestamp-based strictly increasing sequence
number will be generated.
:param peer_id: ID of the peer this record refers to.
:param addrs: Public multiaddrs of the peer.
:param seq: Monotonic sequence number.
"""
if peer_id is not None:
self.peer_id = peer_id
self.addrs = addrs or []
if seq is not None:
self.seq = seq
else:
self.seq = timestamp_seq()
def __repr__(self) -> str:
return (
f"PeerRecord(\n"
f" peer_id={self.peer_id},\n"
f" multiaddrs={[str(m) for m in self.addrs]},\n"
f" seq={self.seq}\n"
f")"
)
def domain(self) -> str:
"""
Return the domain string associated with this PeerRecord.
Used during record signing and envelope validation to identify the record type.
"""
return PEER_RECORD_ENVELOPE_DOMAIN
def codec(self) -> bytes:
"""
Return the codec identifier for PeerRecords.
This binary prefix helps distinguish PeerRecords in serialized envelopes.
"""
return PEER_RECORD_ENVELOPE_PAYLOAD_TYPE
def to_protobuf(self) -> pb.PeerRecord:
"""
Convert the current PeerRecord into a ProtoBuf PeerRecord message.
:raises ValueError: if peer_id serialization fails.
:return: A ProtoBuf-encoded PeerRecord message object.
"""
try:
id_bytes = self.peer_id.to_bytes()
except Exception as e:
raise ValueError(f"failed to marshal peer_id: {e}")
msg = pb.PeerRecord()
msg.peer_id = id_bytes
msg.seq = self.seq
msg.addresses.extend(addrs_to_protobuf(self.addrs))
return msg
def marshal_record(self) -> bytes:
"""
Serialize a PeerRecord into raw bytes suitable for embedding in an Envelope.
This is typically called during the process of signing or sealing the record.
:raises ValueError: if serialization to protobuf fails.
:return: Serialized PeerRecord bytes.
"""
try:
msg = self.to_protobuf()
return msg.SerializeToString()
except Exception as e:
raise ValueError(f"failed to marshal PeerRecord: {e}")
def equal(self, other: Any) -> bool:
"""
Check if this PeerRecord is identical to another.
Two PeerRecords are considered equal if:
- Their peer IDs match.
- Their sequence numbers are identical.
- Their address lists are identical and in the same order.
:param other: Another PeerRecord instance.
:return: True if all fields match, False otherwise.
"""
if not isinstance(other, PeerRecord):
    return False
return (
    self.peer_id == other.peer_id
    and self.seq == other.seq
    and self.addrs == other.addrs
)
def unmarshal_record(data: bytes) -> PeerRecord:
"""
Deserialize a PeerRecord from its serialized byte representation.
Typically used when receiving a PeerRecord inside a signed routing Envelope.
:param data: Serialized protobuf-encoded bytes.
:raises ValueError: if parsing or conversion fails.
:return: A valid PeerRecord instance.
"""
if data is None:
raise ValueError("cannot unmarshal PeerRecord from None")
msg = pb.PeerRecord()
try:
msg.ParseFromString(data)
except Exception as e:
raise ValueError(f"Failed to parse PeerRecord protobuf: {e}")
try:
record = peer_record_from_protobuf(msg)
except Exception as e:
raise ValueError(f"Failed to convert protobuf to PeerRecord: {e}")
return record
def timestamp_seq() -> int:
"""
Generate a strictly increasing timestamp-based sequence number.
Ensures that even if multiple PeerRecords are generated in the same nanosecond,
their `seq` values will still be strictly increasing by using a lock to track the
last value.
:return: A strictly increasing integer timestamp.
"""
global _last_timestamp
now = int(time.time_ns())
with _last_timestamp_lock:
if now <= _last_timestamp:
now = _last_timestamp + 1
_last_timestamp = now
return now
def peer_record_from_peer_info(info: PeerInfo) -> PeerRecord:
"""
Create a PeerRecord from a PeerInfo object.
This automatically assigns a timestamp-based sequence number to the record.
:param info: A PeerInfo instance (contains peer_id and addrs).
:return: A PeerRecord instance.
"""
record = PeerRecord()
record.peer_id = info.peer_id
record.addrs = info.addrs
return record
def peer_record_from_protobuf(msg: pb.PeerRecord) -> PeerRecord:
"""
Convert a protobuf PeerRecord message into a PeerRecord object.
:param msg: Protobuf PeerRecord message.
:raises ValueError: if the peer_id cannot be parsed.
:return: A deserialized PeerRecord instance.
"""
try:
peer_id = ID(msg.peer_id)
except Exception as e:
raise ValueError(f"Failed to unmarshal peer_id: {e}")
addrs = addrs_from_protobuf(msg.addresses)
seq = msg.seq
return PeerRecord(peer_id, addrs, seq)
def addrs_from_protobuf(addrs: Sequence[pb.PeerRecord.AddressInfo]) -> list[Multiaddr]:
"""
Convert a list of protobuf address records to Multiaddr objects.
:param addrs: A list of protobuf PeerRecord.AddressInfo messages.
:return: A list of decoded Multiaddr instances (invalid ones are skipped).
"""
out = []
for addr_info in addrs:
try:
addr = Multiaddr(addr_info.multiaddr)
out.append(addr)
except Exception:
continue
return out
def addrs_to_protobuf(addrs: list[Multiaddr]) -> list[pb.PeerRecord.AddressInfo]:
"""
Convert a list of Multiaddr objects into their protobuf representation.
:param addrs: A list of Multiaddr instances.
:return: A list of PeerRecord.AddressInfo protobuf messages.
"""
out = []
for addr in addrs:
addr_info = pb.PeerRecord.AddressInfo()
addr_info.multiaddr = addr.to_bytes()
out.append(addr_info)
return out
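For reference, a minimal round-trip sketch of the API deleted above (hedged: assumes the pre-removal module path libp2p.peer.peer_record and a valid binary peer id; the bytes below are placeholders):

from multiaddr import Multiaddr
from libp2p.peer.id import ID
from libp2p.peer.peer_record import PeerRecord, unmarshal_record

rec = PeerRecord(ID(b"placeholder-peer-id"), [Multiaddr("/ip4/127.0.0.1/tcp/8080")])
data = rec.marshal_record()  # protobuf bytes suitable for an Envelope payload
rec2 = unmarshal_record(data)  # parse back into a PeerRecord
assert rec.equal(rec2)  # same peer_id, seq, and addrs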

View File

@ -3,11 +3,9 @@ from collections.abc import (
) )
from typing import ( from typing import (
Any, Any,
cast,
) )
import multiaddr import multiaddr
from multiaddr.protocols import Protocol
from .id import ( from .id import (
ID, ID,
@ -44,8 +42,7 @@ def info_from_p2p_addr(addr: multiaddr.Multiaddr) -> PeerInfo:
p2p_protocols = p2p_part.protocols() p2p_protocols = p2p_part.protocols()
if not p2p_protocols: if not p2p_protocols:
raise InvalidAddrError("The last part of the address has no protocols") raise InvalidAddrError("The last part of the address has no protocols")
last_protocol = cast(Protocol, p2p_part.protocols()[0]) last_protocol = p2p_protocols[0]
if last_protocol is None: if last_protocol is None:
raise InvalidAddrError("The last protocol is None") raise InvalidAddrError("The last protocol is None")

View File

@ -23,7 +23,6 @@ from libp2p.crypto.keys import (
PrivateKey, PrivateKey,
PublicKey, PublicKey,
) )
from libp2p.peer.envelope import Envelope
from .id import ( from .id import (
ID, ID,
@ -39,23 +38,12 @@ from .peerinfo import (
PERMANENT_ADDR_TTL = 0 PERMANENT_ADDR_TTL = 0
class PeerRecordState:
envelope: Envelope
seq: int
def __init__(self, envelope: Envelope, seq: int):
self.envelope = envelope
self.seq = seq
class PeerStore(IPeerStore): class PeerStore(IPeerStore):
peer_data_map: dict[ID, PeerData] peer_data_map: dict[ID, PeerData]
def __init__(self, max_records: int = 10000) -> None: def __init__(self) -> None:
self.peer_data_map = defaultdict(PeerData) self.peer_data_map = defaultdict(PeerData)
self.addr_update_channels: dict[ID, MemorySendChannel[Multiaddr]] = {} self.addr_update_channels: dict[ID, MemorySendChannel[Multiaddr]] = {}
self.peer_record_map: dict[ID, PeerRecordState] = {}
self.max_records = max_records
def peer_info(self, peer_id: ID) -> PeerInfo: def peer_info(self, peer_id: ID) -> PeerInfo:
""" """
@ -76,15 +64,7 @@ class PeerStore(IPeerStore):
return list(self.peer_data_map.keys()) return list(self.peer_data_map.keys())
def clear_peerdata(self, peer_id: ID) -> None: def clear_peerdata(self, peer_id: ID) -> None:
"""Clears all data associated with the given peer_id.""" """Clears the peer data of the peer"""
if peer_id in self.peer_data_map:
del self.peer_data_map[peer_id]
else:
raise PeerStoreError("peer ID not found")
# Clear the peer records
if peer_id in self.peer_record_map:
self.peer_record_map.pop(peer_id, None)
def valid_peer_ids(self) -> list[ID]: def valid_peer_ids(self) -> list[ID]:
""" """
@ -98,38 +78,6 @@ class PeerStore(IPeerStore):
peer_data.clear_addrs() peer_data.clear_addrs()
return valid_peer_ids return valid_peer_ids
def _enforce_record_limit(self) -> None:
"""Enforce maximum number of stored records."""
if len(self.peer_record_map) > self.max_records:
# Remove the oldest records based on sequence number
sorted_records = sorted(
self.peer_record_map.items(), key=lambda x: x[1].seq
)
records_to_remove = len(self.peer_record_map) - self.max_records
for peer_id, _ in sorted_records[:records_to_remove]:
self.maybe_delete_peer_record(peer_id)
del self.peer_record_map[peer_id]
async def start_cleanup_task(self, cleanup_interval: int = 3600) -> None:
"""Start periodic cleanup of expired peer records and addresses."""
while True:
await trio.sleep(cleanup_interval)
self._cleanup_expired_records()
def _cleanup_expired_records(self) -> None:
"""Remove expired peer records and addresses"""
expired_peers = []
for peer_id, peer_data in self.peer_data_map.items():
if peer_data.is_expired():
expired_peers.append(peer_id)
for peer_id in expired_peers:
self.maybe_delete_peer_record(peer_id)
del self.peer_data_map[peer_id]
self._enforce_record_limit()
# --------PROTO-BOOK-------- # --------PROTO-BOOK--------
def get_protocols(self, peer_id: ID) -> list[str]: def get_protocols(self, peer_id: ID) -> list[str]:
@ -213,84 +161,6 @@ class PeerStore(IPeerStore):
peer_data = self.peer_data_map[peer_id] peer_data = self.peer_data_map[peer_id]
peer_data.clear_metadata() peer_data.clear_metadata()
# -----CERT-ADDR-BOOK-----
def maybe_delete_peer_record(self, peer_id: ID) -> None:
"""
Delete the signed peer record for a peer if it has no known
(non-expired) addresses.
This is a garbage collection mechanism: if all addresses for a peer have expired
or been cleared, there's no point holding onto its signed `Envelope`.
:param peer_id: The peer whose record we may delete.
"""
if peer_id in self.peer_record_map:
if not self.addrs(peer_id):
self.peer_record_map.pop(peer_id, None)
def consume_peer_record(self, envelope: Envelope, ttl: int) -> bool:
"""
Accept and store a signed PeerRecord, unless it's older than
the one already stored.
This function:
- Extracts the peer ID and sequence number from the envelope
- Rejects the record if it's older (lower seq)
- Updates the stored peer record and replaces associated addresses if accepted
:param envelope: Signed envelope containing a PeerRecord.
:param ttl: Time-to-live for the included multiaddrs (in seconds).
:return: True if the record was accepted and stored; False if it was rejected.
"""
record = envelope.record()
peer_id = record.peer_id
existing = self.peer_record_map.get(peer_id)
if existing and existing.seq > record.seq:
return False # reject older record
new_addrs = set(record.addrs)
self.peer_record_map[peer_id] = PeerRecordState(envelope, record.seq)
self.peer_data_map[peer_id].clear_addrs()
self.add_addrs(peer_id, list(new_addrs), ttl)
return True
def consume_peer_records(self, envelopes: list[Envelope], ttl: int) -> list[bool]:
"""Consume multiple peer records in a single operation."""
results = []
for envelope in envelopes:
results.append(self.consume_peer_record(envelope, ttl))
return results
def get_peer_record(self, peer_id: ID) -> Envelope | None:
"""
Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
and is still relevant.
First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
Then it checks whether the peer has valid, unexpired addresses before
returning the associated envelope.
:param peer_id: The peer to look up.
:return: The signed Envelope if the peer is known and has valid
addresses; None otherwise.
"""
self.maybe_delete_peer_record(peer_id)
# Check if the peer has any valid addresses
if (
peer_id in self.peer_data_map
and not self.peer_data_map[peer_id].is_expired()
):
state = self.peer_record_map.get(peer_id)
if state is not None:
return state.envelope
return None
# -------ADDR-BOOK-------- # -------ADDR-BOOK--------
def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int = 0) -> None: def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int = 0) -> None:
@ -319,8 +189,6 @@ class PeerStore(IPeerStore):
except trio.WouldBlock: except trio.WouldBlock:
pass # Or consider logging / dropping / replacing stream pass # Or consider logging / dropping / replacing stream
self.maybe_delete_peer_record(peer_id)
def addrs(self, peer_id: ID) -> list[Multiaddr]: def addrs(self, peer_id: ID) -> list[Multiaddr]:
""" """
:param peer_id: peer ID to get addrs for :param peer_id: peer ID to get addrs for
@ -344,8 +212,6 @@ class PeerStore(IPeerStore):
if peer_id in self.peer_data_map: if peer_id in self.peer_data_map:
self.peer_data_map[peer_id].clear_addrs() self.peer_data_map[peer_id].clear_addrs()
self.maybe_delete_peer_record(peer_id)
def peers_with_addrs(self) -> list[ID]: def peers_with_addrs(self) -> list[ID]:
""" """
:return: all of the peer IDs which have addrs stored in the peer store :return: all of the peer IDs which have addrs stored in the peer store
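A sketch of the seq-based acceptance rule deleted above (hedged: env_new and env_old stand for signed Envelopes carrying records for the same peer, with env_new holding the higher seq):

store = PeerStore()
assert store.consume_peer_record(env_new, ttl=7200)  # first record accepted
assert not store.consume_peer_record(env_old, ttl=7200)  # lower seq rejected
assert store.get_peer_record(env_new.record().peer_id) is env_new  # newest envelope retained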

View File

@ -48,11 +48,12 @@ class Multiselect(IMultiselectMuxer):
""" """
self.handlers[protocol] = handler self.handlers[protocol] = handler
# FIXME: Make TProtocol Optional[TProtocol] to keep types consistent
async def negotiate( async def negotiate(
self, self,
communicator: IMultiselectCommunicator, communicator: IMultiselectCommunicator,
negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT, negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
) -> tuple[TProtocol | None, StreamHandlerFn | None]: ) -> tuple[TProtocol, StreamHandlerFn | None]:
""" """
Negotiate performs protocol selection. Negotiate performs protocol selection.
@ -83,14 +84,14 @@ class Multiselect(IMultiselectMuxer):
raise MultiselectError() from error raise MultiselectError() from error
else: else:
protocol_to_check = None if not command else TProtocol(command) protocol = TProtocol(command)
if protocol_to_check in self.handlers: if protocol in self.handlers:
try: try:
await communicator.write(command) await communicator.write(protocol)
except MultiselectCommunicatorError as error: except MultiselectCommunicatorError as error:
raise MultiselectError() from error raise MultiselectError() from error
return protocol_to_check, self.handlers[protocol_to_check] return protocol, self.handlers[protocol]
try: try:
await communicator.write(PROTOCOL_NOT_FOUND_MSG) await communicator.write(PROTOCOL_NOT_FOUND_MSG)
except MultiselectCommunicatorError as error: except MultiselectCommunicatorError as error:
@ -100,18 +101,6 @@ class Multiselect(IMultiselectMuxer):
except trio.TooSlowError: except trio.TooSlowError:
raise MultiselectError("handshake read timeout") raise MultiselectError("handshake read timeout")
def get_protocols(self) -> tuple[TProtocol | None, ...]:
"""
Retrieve the protocols for which handlers have been registered.
Returns
-------
tuple[TProtocol, ...]
A tuple of registered protocol names.
"""
return tuple(self.handlers.keys())
async def handshake(self, communicator: IMultiselectCommunicator) -> None: async def handshake(self, communicator: IMultiselectCommunicator) -> None:
""" """
Perform handshake to agree on multiselect protocol. Perform handshake to agree on multiselect protocol.

View File

@ -134,10 +134,8 @@ class MultiselectClient(IMultiselectClient):
:raise MultiselectClientError: raised when protocol negotiation failed :raise MultiselectClientError: raised when protocol negotiation failed
:return: selected protocol :return: selected protocol
""" """
# Represent `None` protocol as an empty string.
protocol_str = protocol if protocol is not None else ""
try: try:
await communicator.write(protocol_str) await communicator.write(protocol)
except MultiselectCommunicatorError as error: except MultiselectCommunicatorError as error:
raise MultiselectClientError() from error raise MultiselectClientError() from error
@ -147,7 +145,7 @@ class MultiselectClient(IMultiselectClient):
except MultiselectCommunicatorError as error: except MultiselectCommunicatorError as error:
raise MultiselectClientError() from error raise MultiselectClientError() from error
if response == protocol_str: if response == protocol:
return protocol return protocol
if response == PROTOCOL_NOT_FOUND_MSG: if response == PROTOCOL_NOT_FOUND_MSG:
raise MultiselectClientError("protocol not supported") raise MultiselectClientError("protocol not supported")

View File

@ -30,10 +30,7 @@ class MultiselectCommunicator(IMultiselectCommunicator):
""" """
:raise MultiselectCommunicatorError: raised when failed to write to underlying reader :raise MultiselectCommunicatorError: raised when failed to write to underlying reader
""" # noqa: E501 """ # noqa: E501
if msg_str is None: msg_bytes = encode_delim(msg_str.encode())
msg_bytes = encode_delim(b"")
else:
msg_bytes = encode_delim(msg_str.encode())
try: try:
await self.read_writer.write(msg_bytes) await self.read_writer.write(msg_bytes)
except IOException as error: except IOException as error:
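For context, a sketch of the framing write() performs above (hedged: mirrors libp2p.utils.encode_delim as called in this module; each message is newline-terminated and prefixed with its unsigned-varint length):

from libp2p.utils import encode_delim

frame = encode_delim("/multistream/1.0.0".encode())
# frame == uvarint(len(b"/multistream/1.0.0\n")) + b"/multistream/1.0.0\n"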

View File

@ -775,16 +775,16 @@ class GossipSub(IPubsubRouter, Service):
# Get list of all seen (seqnos, from) from the (seqno, from) tuples in # Get list of all seen (seqnos, from) from the (seqno, from) tuples in
# seen_messages cache # seen_messages cache
seen_seqnos_and_peers = [ seen_seqnos_and_peers = [
str(seqno_and_from) seqno_and_from for seqno_and_from in self.pubsub.seen_messages.cache.keys()
for seqno_and_from in self.pubsub.seen_messages.cache.keys()
] ]
# Add all unknown message ids (ids that appear in ihave_msg but not in # Add all unknown message ids (ids that appear in ihave_msg but not in
# seen_seqnos) to list of messages we want to request # seen_seqnos) to list of messages we want to request
msg_ids_wanted: list[str] = [ # FIXME: Update type of message ID
msg_ids_wanted: list[Any] = [
msg_id msg_id
for msg_id in ihave_msg.messageIDs for msg_id in ihave_msg.messageIDs
if msg_id not in seen_seqnos_and_peers if literal_eval(msg_id) not in seen_seqnos_and_peers
] ]
# Request messages with IWANT message # Request messages with IWANT message
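A sketch of the id mismatch handled above: keys in seen_messages.cache are (seqno, from) tuples, while IHAVE carries their string form, so one side must convert before the membership test (values are illustrative):

from ast import literal_eval

seen = {(b"\x00\x01", b"peer-a")}  # tuple keys, as in seen_messages.cache
msg_id = str((b"\x00\x01", b"peer-a"))  # string form carried in ihave_msg.messageIDs
assert literal_eval(msg_id) in seen  # convert the string back to a tuple
assert msg_id in {str(k) for k in seen}  # or compare on the string side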

View File

@ -1,6 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT! # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/pubsub/pb/rpc.proto # source: rpc.proto
"""Generated protocol buffer code.""" """Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor from google.protobuf import descriptor as _descriptor
@ -13,39 +13,39 @@ _sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1alibp2p/pubsub/pb/rpc.proto\x12\tpubsub.pb\"\xb4\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"T\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\"\n\x05peers\x18\x02 \x03(\x0b\x32\x13.pubsub.pb.PeerInfo\x12\x0f\n\x07\x62\x61\x63koff\x18\x03 \x01(\x04\"4\n\x08PeerInfo\x12\x0e\n\x06peerID\x18\x01 \x01(\x0c\x12\x18\n\x10signedPeerRecord\x18\x02 \x01(\x0c\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02') DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\trpc.proto\x12\tpubsub.pb\"\xb4\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"T\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\"\n\x05peers\x18\x02 \x03(\x0b\x32\x13.pubsub.pb.PeerInfo\x12\x0f\n\x07\x62\x61\x63koff\x18\x03 
\x01(\x04\"4\n\x08PeerInfo\x12\x0e\n\x06peerID\x18\x01 \x01(\x0c\x12\x18\n\x10signedPeerRecord\x18\x02 \x01(\x0c\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.pubsub.pb.rpc_pb2', globals()) _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'rpc_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False: if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None DESCRIPTOR._options = None
_RPC._serialized_start=42 _RPC._serialized_start=25
_RPC._serialized_end=222 _RPC._serialized_end=205
_RPC_SUBOPTS._serialized_start=177 _RPC_SUBOPTS._serialized_start=160
_RPC_SUBOPTS._serialized_end=222 _RPC_SUBOPTS._serialized_end=205
_MESSAGE._serialized_start=224 _MESSAGE._serialized_start=207
_MESSAGE._serialized_end=329 _MESSAGE._serialized_end=312
_CONTROLMESSAGE._serialized_start=332 _CONTROLMESSAGE._serialized_start=315
_CONTROLMESSAGE._serialized_end=508 _CONTROLMESSAGE._serialized_end=491
_CONTROLIHAVE._serialized_start=510 _CONTROLIHAVE._serialized_start=493
_CONTROLIHAVE._serialized_end=561 _CONTROLIHAVE._serialized_end=544
_CONTROLIWANT._serialized_start=563 _CONTROLIWANT._serialized_start=546
_CONTROLIWANT._serialized_end=597 _CONTROLIWANT._serialized_end=580
_CONTROLGRAFT._serialized_start=599 _CONTROLGRAFT._serialized_start=582
_CONTROLGRAFT._serialized_end=630 _CONTROLGRAFT._serialized_end=613
_CONTROLPRUNE._serialized_start=632 _CONTROLPRUNE._serialized_start=615
_CONTROLPRUNE._serialized_end=716 _CONTROLPRUNE._serialized_end=699
_PEERINFO._serialized_start=718 _PEERINFO._serialized_start=701
_PEERINFO._serialized_end=770 _PEERINFO._serialized_end=753
_TOPICDESCRIPTOR._serialized_start=773 _TOPICDESCRIPTOR._serialized_start=756
_TOPICDESCRIPTOR._serialized_end=1164 _TOPICDESCRIPTOR._serialized_end=1147
_TOPICDESCRIPTOR_AUTHOPTS._serialized_start=906 _TOPICDESCRIPTOR_AUTHOPTS._serialized_start=889
_TOPICDESCRIPTOR_AUTHOPTS._serialized_end=1030 _TOPICDESCRIPTOR_AUTHOPTS._serialized_end=1013
_TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_start=992 _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_start=975
_TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_end=1030 _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_end=1013
_TOPICDESCRIPTOR_ENCOPTS._serialized_start=1033 _TOPICDESCRIPTOR_ENCOPTS._serialized_start=1016
_TOPICDESCRIPTOR_ENCOPTS._serialized_end=1164 _TOPICDESCRIPTOR_ENCOPTS._serialized_end=1147
_TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_start=1121 _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_start=1104
_TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_end=1164 _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_end=1147
# @@protoc_insertion_point(module_scope) # @@protoc_insertion_point(module_scope)

View File

@ -102,9 +102,6 @@ class TopicValidator(NamedTuple):
is_async: bool is_async: bool
MAX_CONCURRENT_VALIDATORS = 10
class Pubsub(Service, IPubsub): class Pubsub(Service, IPubsub):
host: IHost host: IHost
@ -112,7 +109,6 @@ class Pubsub(Service, IPubsub):
peer_receive_channel: trio.MemoryReceiveChannel[ID] peer_receive_channel: trio.MemoryReceiveChannel[ID]
dead_peer_receive_channel: trio.MemoryReceiveChannel[ID] dead_peer_receive_channel: trio.MemoryReceiveChannel[ID]
_validator_semaphore: trio.Semaphore
seen_messages: LastSeenCache seen_messages: LastSeenCache
@ -147,7 +143,6 @@ class Pubsub(Service, IPubsub):
msg_id_constructor: Callable[ msg_id_constructor: Callable[
[rpc_pb2.Message], bytes [rpc_pb2.Message], bytes
] = get_peer_and_seqno_msg_id, ] = get_peer_and_seqno_msg_id,
max_concurrent_validator_count: int = MAX_CONCURRENT_VALIDATORS,
) -> None: ) -> None:
""" """
Construct a new Pubsub object, which is responsible for handling all Construct a new Pubsub object, which is responsible for handling all
@ -173,7 +168,6 @@ class Pubsub(Service, IPubsub):
# Therefore, we can only close from the receive side. # Therefore, we can only close from the receive side.
self.peer_receive_channel = peer_receive self.peer_receive_channel = peer_receive
self.dead_peer_receive_channel = dead_peer_receive self.dead_peer_receive_channel = dead_peer_receive
self._validator_semaphore = trio.Semaphore(max_concurrent_validator_count)
# Register a notifee # Register a notifee
self.host.get_network().register_notifee( self.host.get_network().register_notifee(
PubsubNotifee(peer_send, dead_peer_send) PubsubNotifee(peer_send, dead_peer_send)
@ -663,11 +657,7 @@ class Pubsub(Service, IPubsub):
logger.debug("successfully published message %s", msg) logger.debug("successfully published message %s", msg)
async def validate_msg( async def validate_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
self,
msg_forwarder: ID,
msg: rpc_pb2.Message,
) -> None:
""" """
Validate the received message. Validate the received message.
@ -690,34 +680,23 @@ class Pubsub(Service, IPubsub):
if not validator(msg_forwarder, msg): if not validator(msg_forwarder, msg):
raise ValidationError(f"Validation failed for msg={msg}") raise ValidationError(f"Validation failed for msg={msg}")
# TODO: Implement throttle on async validators
if len(async_topic_validators) > 0: if len(async_topic_validators) > 0:
# Appends to lists are thread safe in CPython # Appends to lists are thread safe in CPython
results: list[bool] = [] results = []
async def run_async_validator(func: AsyncValidatorFn) -> None:
result = await func(msg_forwarder, msg)
results.append(result)
async with trio.open_nursery() as nursery: async with trio.open_nursery() as nursery:
for async_validator in async_topic_validators: for async_validator in async_topic_validators:
nursery.start_soon( nursery.start_soon(run_async_validator, async_validator)
self._run_async_validator,
async_validator,
msg_forwarder,
msg,
results,
)
if not all(results): if not all(results):
raise ValidationError(f"Validation failed for msg={msg}") raise ValidationError(f"Validation failed for msg={msg}")
async def _run_async_validator(
self,
func: AsyncValidatorFn,
msg_forwarder: ID,
msg: rpc_pb2.Message,
results: list[bool],
) -> None:
async with self._validator_semaphore:
result = await func(msg_forwarder, msg)
results.append(result)
async def push_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None: async def push_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
""" """
Push a pubsub message to others. Push a pubsub message to others.
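A minimal runnable sketch of the throttling pattern removed above: a trio.Semaphore caps how many async validators run concurrently (the validator body is a stand-in):

import trio

async def main() -> None:
    sem = trio.Semaphore(10)  # MAX_CONCURRENT_VALIDATORS in the deleted code
    results: list[bool] = []

    async def run_one() -> None:
        async with sem:  # at most 10 tasks inside this block at once
            await trio.sleep(0.01)  # stand-in for an async validator call
            results.append(True)

    async with trio.open_nursery() as nursery:
        for _ in range(100):
            nursery.start_soon(run_one)
    assert len(results) == 100 and all(results)

trio.run(main)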

View File

@ -15,10 +15,6 @@ from libp2p.relay.circuit_v2 import (
RelayLimits, RelayLimits,
RelayResourceManager, RelayResourceManager,
Reservation, Reservation,
DCUTR_PROTOCOL_ID,
DCUtRProtocol,
ReachabilityChecker,
is_private_ip,
) )
__all__ = [ __all__ = [
@ -29,9 +25,4 @@ __all__ = [
"RelayLimits", "RelayLimits",
"RelayResourceManager", "RelayResourceManager",
"Reservation", "Reservation",
"DCUtRProtocol",
"DCUTR_PROTOCOL_ID",
"ReachabilityChecker",
"is_private_ip"
] ]

View File

@ -5,16 +5,6 @@ This package implements the Circuit Relay v2 protocol as specified in:
https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md
""" """
from .dcutr import (
DCUtRProtocol,
)
from .dcutr import PROTOCOL_ID as DCUTR_PROTOCOL_ID
from .nat import (
ReachabilityChecker,
is_private_ip,
)
from .discovery import ( from .discovery import (
RelayDiscovery, RelayDiscovery,
) )
@ -39,8 +29,4 @@ __all__ = [
"RelayResourceManager", "RelayResourceManager",
"CircuitV2Transport", "CircuitV2Transport",
"RelayDiscovery", "RelayDiscovery",
"DCUtRProtocol",
"DCUTR_PROTOCOL_ID",
"ReachabilityChecker",
"is_private_ip",
] ]

View File

@ -1,580 +0,0 @@
"""
Direct Connection Upgrade through Relay (DCUtR) protocol implementation.
This module implements the DCUtR protocol as specified in:
https://github.com/libp2p/specs/blob/master/relay/DCUtR.md
DCUtR enables peers behind NAT to establish direct connections
using hole punching techniques.
"""
import logging
import time
from typing import Any
from multiaddr import Multiaddr
import trio
from libp2p.abc import (
IHost,
INetConn,
INetStream,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from libp2p.relay.circuit_v2.nat import (
ReachabilityChecker,
)
from libp2p.relay.circuit_v2.pb.dcutr_pb2 import (
HolePunch,
)
from libp2p.tools.async_service import (
Service,
)
logger = logging.getLogger(__name__)
# Protocol ID for DCUtR
PROTOCOL_ID = TProtocol("/libp2p/dcutr")
# Maximum message size for DCUtR (4KiB as per spec)
MAX_MESSAGE_SIZE = 4 * 1024
# Timeouts
STREAM_READ_TIMEOUT = 30 # seconds
STREAM_WRITE_TIMEOUT = 30 # seconds
DIAL_TIMEOUT = 10 # seconds
# Maximum number of hole punch attempts per peer
MAX_HOLE_PUNCH_ATTEMPTS = 5
# Delay between retry attempts
HOLE_PUNCH_RETRY_DELAY = 30 # seconds
# Maximum observed addresses to exchange
MAX_OBSERVED_ADDRS = 20
class DCUtRProtocol(Service):
"""
DCUtRProtocol implements the Direct Connection Upgrade through Relay protocol.
This protocol allows two NATed peers to establish direct connections through
hole punching, after they have established an initial connection through a relay.
"""
def __init__(self, host: IHost):
"""
Initialize the DCUtR protocol.
Parameters
----------
host : IHost
The libp2p host this protocol is running on
"""
super().__init__()
self.host = host
self.event_started = trio.Event()
self._hole_punch_attempts: dict[ID, int] = {}
self._direct_connections: set[ID] = set()
self._in_progress: set[ID] = set()
self._reachability_checker = ReachabilityChecker(host)
self._nursery: trio.Nursery | None = None
async def run(self, *, task_status: Any = trio.TASK_STATUS_IGNORED) -> None:
"""Run the protocol service."""
try:
# Register the DCUtR protocol handler
logger.debug("Registering DCUtR protocol handler")
self.host.set_stream_handler(PROTOCOL_ID, self._handle_dcutr_stream)
# Signal that we're ready
self.event_started.set()
# Start the service
async with trio.open_nursery() as nursery:
self._nursery = nursery
task_status.started()
logger.debug("DCUtR protocol service started")
# Wait for service to be stopped
await self.manager.wait_finished()
finally:
# Clean up
try:
# Use an empty async handler instead of None for the stream handler
async def empty_handler(_: INetStream) -> None:
pass
self.host.set_stream_handler(PROTOCOL_ID, empty_handler)
logger.debug("DCUtR protocol handler unregistered")
except Exception as e:
logger.error("Error unregistering DCUtR protocol handler: %s", str(e))
# Clear state
self._hole_punch_attempts.clear()
self._direct_connections.clear()
self._in_progress.clear()
self._nursery = None
async def _handle_dcutr_stream(self, stream: INetStream) -> None:
"""
Handle incoming DCUtR streams.
Parameters
----------
stream : INetStream
The incoming stream
"""
try:
# Get the remote peer ID
remote_peer_id = stream.muxed_conn.peer_id
logger.debug("Received DCUtR stream from peer %s", remote_peer_id)
# Check if we already have a direct connection
if await self._have_direct_connection(remote_peer_id):
logger.debug(
"Already have direct connection to %s, closing stream",
remote_peer_id,
)
await stream.close()
return
# Check if there's already an active hole punch attempt
if remote_peer_id in self._in_progress:
logger.debug("Hole punch already in progress with %s", remote_peer_id)
# Let the existing attempt continue
await stream.close()
return
# Mark as in progress
self._in_progress.add(remote_peer_id)
try:
# Read the CONNECT message
with trio.fail_after(STREAM_READ_TIMEOUT):
msg_bytes = await stream.read(MAX_MESSAGE_SIZE)
# Parse the message
connect_msg = HolePunch()
connect_msg.ParseFromString(msg_bytes)
# Verify it's a CONNECT message
if connect_msg.type != HolePunch.CONNECT:
logger.warning("Expected CONNECT message, got %s", connect_msg.type)
await stream.close()
return
logger.debug(
"Received CONNECT message from %s with %d addresses",
remote_peer_id,
len(connect_msg.ObsAddrs),
)
# Process observed addresses from the peer
peer_addrs = self._decode_observed_addrs(list(connect_msg.ObsAddrs))
logger.debug("Decoded %d valid addresses from peer", len(peer_addrs))
# Store the addresses in the peerstore
if peer_addrs:
self.host.get_peerstore().add_addrs(
remote_peer_id, peer_addrs, 10 * 60
) # 10 minute TTL
# Send our CONNECT message with our observed addresses
our_addrs = await self._get_observed_addrs()
response = HolePunch()
response.type = HolePunch.CONNECT
response.ObsAddrs.extend(our_addrs)
with trio.fail_after(STREAM_WRITE_TIMEOUT):
await stream.write(response.SerializeToString())
logger.debug(
"Sent CONNECT response to %s with %d addresses",
remote_peer_id,
len(our_addrs),
)
# Wait for SYNC message
with trio.fail_after(STREAM_READ_TIMEOUT):
sync_bytes = await stream.read(MAX_MESSAGE_SIZE)
# Parse the SYNC message
sync_msg = HolePunch()
sync_msg.ParseFromString(sync_bytes)
# Verify it's a SYNC message
if sync_msg.type != HolePunch.SYNC:
logger.warning("Expected SYNC message, got %s", sync_msg.type)
await stream.close()
return
logger.debug("Received SYNC message from %s", remote_peer_id)
# Perform hole punch
success = await self._perform_hole_punch(remote_peer_id, peer_addrs)
if success:
logger.info(
"Successfully established direct connection with %s",
remote_peer_id,
)
else:
logger.warning(
"Failed to establish direct connection with %s", remote_peer_id
)
except trio.TooSlowError:
logger.warning("Timeout in DCUtR protocol with peer %s", remote_peer_id)
except Exception as e:
logger.error(
"Error in DCUtR protocol with peer %s: %s", remote_peer_id, str(e)
)
finally:
# Clean up
self._in_progress.discard(remote_peer_id)
await stream.close()
except Exception as e:
logger.error("Error handling DCUtR stream: %s", str(e))
await stream.close()
async def initiate_hole_punch(self, peer_id: ID) -> bool:
"""
Initiate a hole punch with a peer.
Parameters
----------
peer_id : ID
The peer to hole punch with
Returns
-------
bool
True if hole punch was successful, False otherwise
"""
# Check if we already have a direct connection
if await self._have_direct_connection(peer_id):
logger.debug("Already have direct connection to %s", peer_id)
return True
# Check if there's already an active hole punch attempt
if peer_id in self._in_progress:
logger.debug("Hole punch already in progress with %s", peer_id)
return False
# Check if we've exceeded the maximum number of attempts
attempts = self._hole_punch_attempts.get(peer_id, 0)
if attempts >= MAX_HOLE_PUNCH_ATTEMPTS:
logger.warning("Maximum hole punch attempts reached for peer %s", peer_id)
return False
# Mark as in progress and increment attempt counter
self._in_progress.add(peer_id)
self._hole_punch_attempts[peer_id] = attempts + 1
try:
# Open a DCUtR stream to the peer
logger.debug("Opening DCUtR stream to peer %s", peer_id)
stream = await self.host.new_stream(peer_id, [PROTOCOL_ID])
if not stream:
logger.warning("Failed to open DCUtR stream to peer %s", peer_id)
return False
try:
# Send our CONNECT message with our observed addresses
our_addrs = await self._get_observed_addrs()
connect_msg = HolePunch()
connect_msg.type = HolePunch.CONNECT
connect_msg.ObsAddrs.extend(our_addrs)
start_time = time.time()
with trio.fail_after(STREAM_WRITE_TIMEOUT):
await stream.write(connect_msg.SerializeToString())
logger.debug(
"Sent CONNECT message to %s with %d addresses",
peer_id,
len(our_addrs),
)
# Receive the peer's CONNECT message
with trio.fail_after(STREAM_READ_TIMEOUT):
resp_bytes = await stream.read(MAX_MESSAGE_SIZE)
# Calculate RTT
rtt = time.time() - start_time
# Parse the response
resp = HolePunch()
resp.ParseFromString(resp_bytes)
# Verify it's a CONNECT message
if resp.type != HolePunch.CONNECT:
logger.warning("Expected CONNECT message, got %s", resp.type)
return False
logger.debug(
"Received CONNECT response from %s with %d addresses",
peer_id,
len(resp.ObsAddrs),
)
# Process observed addresses from the peer
peer_addrs = self._decode_observed_addrs(list(resp.ObsAddrs))
logger.debug("Decoded %d valid addresses from peer", len(peer_addrs))
# Store the addresses in the peerstore
if peer_addrs:
self.host.get_peerstore().add_addrs(
peer_id, peer_addrs, 10 * 60
) # 10 minute TTL
# Send SYNC message with timing information
# We'll use a future time that's 2*RTT from now to ensure both sides
# are ready
punch_time = time.time() + (2 * rtt) + 1 # Add 1 second buffer
sync_msg = HolePunch()
sync_msg.type = HolePunch.SYNC
with trio.fail_after(STREAM_WRITE_TIMEOUT):
await stream.write(sync_msg.SerializeToString())
logger.debug("Sent SYNC message to %s", peer_id)
# Perform the synchronized hole punch
success = await self._perform_hole_punch(
peer_id, peer_addrs, punch_time
)
if success:
logger.info(
"Successfully established direct connection with %s", peer_id
)
return True
else:
logger.warning(
"Failed to establish direct connection with %s", peer_id
)
return False
except trio.TooSlowError:
logger.warning("Timeout in DCUtR protocol with peer %s", peer_id)
return False
except Exception as e:
logger.error(
"Error in DCUtR protocol with peer %s: %s", peer_id, str(e)
)
return False
finally:
await stream.close()
except Exception as e:
logger.error(
"Error initiating hole punch with peer %s: %s", peer_id, str(e)
)
return False
finally:
self._in_progress.discard(peer_id)
# This should never be reached, but add explicit return for type checking
return False
async def _perform_hole_punch(
self, peer_id: ID, addrs: list[Multiaddr], punch_time: float | None = None
) -> bool:
"""
Perform a hole punch attempt with a peer.
Parameters
----------
peer_id : ID
The peer to hole punch with
addrs : list[Multiaddr]
List of addresses to try
punch_time : Optional[float]
Time to perform the punch (if None, do it immediately)
Returns
-------
bool
True if hole punch was successful
"""
if not addrs:
logger.warning("No addresses to try for hole punch with %s", peer_id)
return False
# If punch_time is specified, wait until that time
if punch_time is not None:
now = time.time()
if punch_time > now:
wait_time = punch_time - now
logger.debug("Waiting %.2f seconds before hole punch", wait_time)
await trio.sleep(wait_time)
# Try to dial each address
logger.debug(
"Starting hole punch with peer %s using %d addresses", peer_id, len(addrs)
)
# Filter to only include non-relay addresses
direct_addrs = [
addr for addr in addrs if not str(addr).startswith("/p2p-circuit")
]
if not direct_addrs:
logger.warning("No direct addresses found for peer %s", peer_id)
return False
# Start dialing attempts in parallel
async with trio.open_nursery() as nursery:
for addr in direct_addrs[
:5
]: # Limit to 5 addresses to avoid too many connections
nursery.start_soon(self._dial_peer, peer_id, addr)
# Check if we established a direct connection
return await self._have_direct_connection(peer_id)
async def _dial_peer(self, peer_id: ID, addr: Multiaddr) -> None:
"""
Attempt to dial a peer at a specific address.
Parameters
----------
peer_id : ID
The peer to dial
addr : Multiaddr
The address to dial
"""
try:
logger.debug("Attempting to dial %s at %s", peer_id, addr)
# Create peer info
peer_info = PeerInfo(peer_id, [addr])
# Try to connect with timeout
with trio.fail_after(DIAL_TIMEOUT):
await self.host.connect(peer_info)
logger.info("Successfully connected to %s at %s", peer_id, addr)
# Add to direct connections set
self._direct_connections.add(peer_id)
except trio.TooSlowError:
logger.debug("Timeout dialing %s at %s", peer_id, addr)
except Exception as e:
logger.debug("Error dialing %s at %s: %s", peer_id, addr, str(e))
async def _have_direct_connection(self, peer_id: ID) -> bool:
"""
Check if we already have a direct connection to a peer.
Parameters
----------
peer_id : ID
The peer to check
Returns
-------
bool
True if we have a direct connection, False otherwise
"""
# Check our direct connections cache first
if peer_id in self._direct_connections:
return True
# Check if the peer is connected
network = self.host.get_network()
conn_or_conns = network.connections.get(peer_id)
if not conn_or_conns:
return False
# Handle both single connection and list of connections
connections: list[INetConn] = (
[conn_or_conns] if not isinstance(conn_or_conns, list) else conn_or_conns
)
# Check if any connection is direct (not relayed)
for conn in connections:
# Get the transport addresses
addrs = conn.get_transport_addresses()
# If any address doesn't start with /p2p-circuit, it's a direct connection
if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
# Cache this result
self._direct_connections.add(peer_id)
return True
return False
async def _get_observed_addrs(self) -> list[bytes]:
"""
Get our observed addresses to share with the peer.
Returns
-------
List[bytes]
List of observed addresses as bytes
"""
# Get all listen addresses
addrs = self.host.get_addrs()
# Filter out relay addresses
direct_addrs = [
addr for addr in addrs if not str(addr).startswith("/p2p-circuit")
]
# Limit the number of addresses
if len(direct_addrs) > MAX_OBSERVED_ADDRS:
direct_addrs = direct_addrs[:MAX_OBSERVED_ADDRS]
# Convert to bytes
addr_bytes = [addr.to_bytes() for addr in direct_addrs]
return addr_bytes
def _decode_observed_addrs(self, addr_bytes: list[bytes]) -> list[Multiaddr]:
"""
Decode observed addresses received from a peer.
Parameters
----------
addr_bytes : List[bytes]
The encoded addresses
Returns
-------
List[Multiaddr]
The decoded multiaddresses
"""
result = []
for addr_byte in addr_bytes:
try:
addr = Multiaddr(addr_byte)
# Validate the address (basic check)
if str(addr).startswith("/ip"):
result.append(addr)
except Exception as e:
logger.debug("Error decoding multiaddr: %s", str(e))
return result
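A minimal sketch of how the removed protocol was driven (hedged: assumes an already-running host and the background_trio_service helper from libp2p.tools.async_service):

from libp2p.tools.async_service import background_trio_service

async def upgrade_to_direct(host, relay_peer_id) -> bool:
    dcutr = DCUtRProtocol(host)
    async with background_trio_service(dcutr):
        await dcutr.event_started.wait()  # handler registered, service running
        # CONNECT/CONNECT exchange, SYNC, then simultaneous direct dials
        return await dcutr.initiate_hole_punch(relay_peer_id)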

View File

@ -234,8 +234,7 @@ class RelayDiscovery(Service):
if not callable(proto_getter): if not callable(proto_getter):
return None return None
if peer_id not in peerstore.peer_ids():
return None
try: try:
# Try to get protocols # Try to get protocols
proto_result = proto_getter(peer_id) proto_result = proto_getter(peer_id)
@ -284,6 +283,8 @@ class RelayDiscovery(Service):
return None return None
mux = self.host.get_mux() mux = self.host.get_mux()
if not hasattr(mux, "protocols"):
return None
peer_protocols = set() peer_protocols = set()
# Get protocols from mux with proper type safety # Get protocols from mux with proper type safety
@ -292,9 +293,7 @@ class RelayDiscovery(Service):
# Get protocols with proper typing # Get protocols with proper typing
mux_protocols = mux.get_protocols() mux_protocols = mux.get_protocols()
if isinstance(mux_protocols, (list, tuple)): if isinstance(mux_protocols, (list, tuple)):
available_protocols = [ available_protocols = list(mux_protocols)
p for p in mux.get_protocols() if p is not None
]
for protocol in available_protocols: for protocol in available_protocols:
try: try:
@ -314,7 +313,7 @@ class RelayDiscovery(Service):
self._protocol_cache[peer_id] = peer_protocols self._protocol_cache[peer_id] = peer_protocols
protocol_str = str(PROTOCOL_ID) protocol_str = str(PROTOCOL_ID)
for protocol in map(TProtocol, peer_protocols): for protocol in peer_protocols:
if protocol == protocol_str: if protocol == protocol_str:
return True return True
return False return False

View File

@ -1,300 +0,0 @@
"""
NAT traversal utilities for libp2p.
This module provides utilities for NAT traversal and reachability detection.
"""
import ipaddress
import logging
from multiaddr import (
Multiaddr,
)
from libp2p.abc import (
IHost,
INetConn,
)
from libp2p.peer.id import (
ID,
)
logger = logging.getLogger("libp2p.relay.circuit_v2.nat")
# Timeout for reachability checks
REACHABILITY_TIMEOUT = 10 # seconds
# Define private IP ranges
PRIVATE_IP_RANGES = [
("10.0.0.0", "10.255.255.255"), # Class A private network: 10.0.0.0/8
("172.16.0.0", "172.31.255.255"), # Class B private network: 172.16.0.0/12
("192.168.0.0", "192.168.255.255"), # Class C private network: 192.168.0.0/16
]
# Link-local address range: 169.254.0.0/16
LINK_LOCAL_RANGE = ("169.254.0.0", "169.254.255.255")
# Loopback address range: 127.0.0.0/8
LOOPBACK_RANGE = ("127.0.0.0", "127.255.255.255")
def ip_to_int(ip: str) -> int:
"""
Convert an IP address to an integer.
Parameters
----------
ip : str
IP address to convert
Returns
-------
int
Integer representation of the IP
"""
try:
return int(ipaddress.IPv4Address(ip))
except ipaddress.AddressValueError:
# Handle IPv6 addresses
return int(ipaddress.IPv6Address(ip))
def is_ip_in_range(ip: str, start_range: str, end_range: str) -> bool:
"""
Check if an IP address is within a range.
Parameters
----------
ip : str
IP address to check
start_range : str
Start of the range
end_range : str
End of the range
Returns
-------
bool
True if the IP is in the range
"""
try:
ip_int = ip_to_int(ip)
start_int = ip_to_int(start_range)
end_int = ip_to_int(end_range)
return start_int <= ip_int <= end_int
except Exception:
return False
def is_private_ip(ip: str) -> bool:
"""
Check if an IP address is private.
Parameters
----------
ip : str
IP address to check
Returns
-------
bool
True if IP is private
"""
for start_range, end_range in PRIVATE_IP_RANGES:
if is_ip_in_range(ip, start_range, end_range):
return True
# Check for link-local addresses
if is_ip_in_range(ip, *LINK_LOCAL_RANGE):
return True
# Check for loopback addresses
if is_ip_in_range(ip, *LOOPBACK_RANGE):
return True
return False
def extract_ip_from_multiaddr(addr: Multiaddr) -> str | None:
"""
Extract the IP address from a multiaddr.
Parameters
----------
addr : Multiaddr
Multiaddr to extract from
Returns
-------
Optional[str]
IP address or None if not found
"""
# Convert to string representation
addr_str = str(addr)
# Look for IPv4 address
ipv4_start = addr_str.find("/ip4/")
if ipv4_start != -1:
# Extract the IPv4 address
ipv4_end = addr_str.find("/", ipv4_start + 5)
if ipv4_end != -1:
return addr_str[ipv4_start + 5 : ipv4_end]
# Look for IPv6 address
ipv6_start = addr_str.find("/ip6/")
if ipv6_start != -1:
# Extract the IPv6 address
ipv6_end = addr_str.find("/", ipv6_start + 5)
if ipv6_end != -1:
return addr_str[ipv6_start + 5 : ipv6_end]
return None
class ReachabilityChecker:
"""
Utility class for checking peer reachability.
This class assesses whether a peer's addresses are likely
to be directly reachable or behind NAT.
"""
def __init__(self, host: IHost):
"""
Initialize the reachability checker.
Parameters
----------
host : IHost
The libp2p host
"""
self.host = host
self._peer_reachability: dict[ID, bool] = {}
self._known_public_peers: set[ID] = set()
def is_addr_public(self, addr: Multiaddr) -> bool:
"""
Check if an address is likely to be publicly reachable.
Parameters
----------
addr : Multiaddr
The multiaddr to check
Returns
-------
bool
True if address is likely public
"""
# Extract the IP address
ip = extract_ip_from_multiaddr(addr)
if not ip:
return False
# Check if it's a private IP
return not is_private_ip(ip)
def get_public_addrs(self, addrs: list[Multiaddr]) -> list[Multiaddr]:
"""
Filter a list of addresses to only include likely public ones.
Parameters
----------
addrs : List[Multiaddr]
List of addresses to filter
Returns
-------
List[Multiaddr]
List of likely public addresses
"""
return [addr for addr in addrs if self.is_addr_public(addr)]
async def check_peer_reachability(self, peer_id: ID) -> bool:
"""
Check if a peer is directly reachable.
Parameters
----------
peer_id : ID
The peer ID to check
Returns
-------
bool
True if peer is likely directly reachable
"""
# Check if we already know
if peer_id in self._peer_reachability:
return self._peer_reachability[peer_id]
# Check if the peer is connected
network = self.host.get_network()
connections: INetConn | list[INetConn] | None = network.connections.get(peer_id)
if not connections:
# Not connected, can't determine reachability
return False
# Check if any connection is direct (not relayed)
if isinstance(connections, list):
for conn in connections:
# Get the transport addresses
addrs = conn.get_transport_addresses()
# If any address doesn't start with /p2p-circuit,
# it's a direct connection
if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
self._peer_reachability[peer_id] = True
return True
else:
# Handle single connection case
addrs = connections.get_transport_addresses()
if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
self._peer_reachability[peer_id] = True
return True
# Get the peer's addresses from peerstore
try:
addrs = self.host.get_peerstore().addrs(peer_id)
# Check if peer has any public addresses
public_addrs = self.get_public_addrs(addrs)
if public_addrs:
self._peer_reachability[peer_id] = True
return True
except Exception as e:
logger.debug("Error getting peer addresses: %s", str(e))
# Default to not directly reachable
self._peer_reachability[peer_id] = False
return False
async def check_self_reachability(self) -> tuple[bool, list[Multiaddr]]:
"""
Check if this host is likely directly reachable.
Returns
-------
Tuple[bool, List[Multiaddr]]
Tuple of (is_reachable, public_addresses)
"""
# Get all host addresses
addrs = self.host.get_addrs()
# Filter for public addresses
public_addrs = self.get_public_addrs(addrs)
# If we have public addresses, assume we're reachable
# This is a simplified assumption - real reachability would need
# external checking
is_reachable = len(public_addrs) > 0
return is_reachable, public_addrs
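For reference, the range checks above largely coincide with the stdlib classification; a quick sketch (note that ipaddress also flags the loopback and link-local ranges as private):

import ipaddress

cases = [("10.1.2.3", True), ("172.20.0.1", True), ("192.168.1.1", True),
         ("127.0.0.1", True), ("169.254.10.10", True), ("8.8.8.8", False)]
for ip, expected in cases:
    assert ipaddress.ip_address(ip).is_private == expected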

View File

@ -5,11 +5,6 @@ Contains generated protobuf code for circuit_v2 relay protocol.
""" """
# Import the classes to be accessible directly from the package # Import the classes to be accessible directly from the package
from .dcutr_pb2 import (
HolePunch,
)
from .circuit_pb2 import ( from .circuit_pb2 import (
HopMessage, HopMessage,
Limit, Limit,
@ -18,4 +13,4 @@ from .circuit_pb2 import (
StopMessage, StopMessage,
) )
__all__ = ["HopMessage", "Limit", "Reservation", "Status", "StopMessage", "HolePunch"] __all__ = ["HopMessage", "Limit", "Reservation", "Status", "StopMessage"]

View File

@ -1,5 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT! # Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: libp2p/relay/circuit_v2/pb/circuit.proto # source: libp2p/relay/circuit_v2/pb/circuit.proto
"""Generated protocol buffer code.""" """Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder from google.protobuf.internal import builder as _builder
@ -11,14 +12,11 @@ from google.protobuf import symbol_database as _symbol_database
_sym_db = _symbol_database.Default() _sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n(libp2p/relay/circuit_v2/pb/circuit.proto\x12\rcircuit.pb.v2\"\xf3\x01\n\nHopMessage\x12,\n\x04type\x18\x01 \x01(\x0e\x32\x1e.circuit.pb.v2.HopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12/\n\x0breservation\x18\x03 \x01(\x0b\x32\x1a.circuit.pb.v2.Reservation\x12#\n\x05limit\x18\x04 \x01(\x0b\x32\x14.circuit.pb.v2.Limit\x12%\n\x06status\x18\x05 \x01(\x0b\x32\x15.circuit.pb.v2.Status\",\n\x04Type\x12\x0b\n\x07RESERVE\x10\x00\x12\x0b\n\x07\x43ONNECT\x10\x01\x12\n\n\x06STATUS\x10\x02\"\x92\x01\n\x0bStopMessage\x12-\n\x04type\x18\x01 \x01(\x0e\x32\x1f.circuit.pb.v2.StopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12%\n\x06status\x18\x03 \x01(\x0b\x32\x15.circuit.pb.v2.Status\"\x1f\n\x04Type\x12\x0b\n\x07\x43ONNECT\x10\x00\x12\n\n\x06STATUS\x10\x01\"A\n\x0bReservation\x12\x0f\n\x07voucher\x18\x01 \x01(\x0c\x12\x11\n\tsignature\x18\x02 \x01(\x0c\x12\x0e\n\x06\x65xpire\x18\x03 \x01(\x03\"\'\n\x05Limit\x12\x10\n\x08\x64uration\x18\x01 \x01(\x03\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x03\"\xf6\x01\n\x06Status\x12(\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x1a.circuit.pb.v2.Status.Code\x12\x0f\n\x07message\x18\x02 \x01(\t\"\xb0\x01\n\x04\x43ode\x12\x06\n\x02OK\x10\x00\x12\x17\n\x13RESERVATION_REFUSED\x10\x64\x12\x1b\n\x17RESOURCE_LIMIT_EXCEEDED\x10\x65\x12\x15\n\x11PERMISSION_DENIED\x10\x66\x12\x16\n\x11\x43ONNECTION_FAILED\x10\xc8\x01\x12\x11\n\x0c\x44IAL_REFUSED\x10\xc9\x01\x12\x10\n\x0bSTOP_FAILED\x10\xac\x02\x12\x16\n\x11MALFORMED_MESSAGE\x10\x90\x03\x62\x06proto3') DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n(libp2p/relay/circuit_v2/pb/circuit.proto\x12\rcircuit.pb.v2\"\xf3\x01\n\nHopMessage\x12,\n\x04type\x18\x01 \x01(\x0e\x32\x1e.circuit.pb.v2.HopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12/\n\x0breservation\x18\x03 \x01(\x0b\x32\x1a.circuit.pb.v2.Reservation\x12#\n\x05limit\x18\x04 \x01(\x0b\x32\x14.circuit.pb.v2.Limit\x12%\n\x06status\x18\x05 \x01(\x0b\x32\x15.circuit.pb.v2.Status\",\n\x04Type\x12\x0b\n\x07RESERVE\x10\x00\x12\x0b\n\x07\x43ONNECT\x10\x01\x12\n\n\x06STATUS\x10\x02\"\x92\x01\n\x0bStopMessage\x12-\n\x04type\x18\x01 \x01(\x0e\x32\x1f.circuit.pb.v2.StopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12%\n\x06status\x18\x03 \x01(\x0b\x32\x15.circuit.pb.v2.Status\"\x1f\n\x04Type\x12\x0b\n\x07\x43ONNECT\x10\x00\x12\n\n\x06STATUS\x10\x01\"A\n\x0bReservation\x12\x0f\n\x07voucher\x18\x01 \x01(\x0c\x12\x11\n\tsignature\x18\x02 \x01(\x0c\x12\x0e\n\x06\x65xpire\x18\x03 \x01(\x03\"\'\n\x05Limit\x12\x10\n\x08\x64uration\x18\x01 \x01(\x03\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x03\"\xf6\x01\n\x06Status\x12(\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x1a.circuit.pb.v2.Status.Code\x12\x0f\n\x07message\x18\x02 \x01(\t\"\xb0\x01\n\x04\x43ode\x12\x06\n\x02OK\x10\x00\x12\x17\n\x13RESERVATION_REFUSED\x10\x64\x12\x1b\n\x17RESOURCE_LIMIT_EXCEEDED\x10\x65\x12\x15\n\x11PERMISSION_DENIED\x10\x66\x12\x16\n\x11\x43ONNECTION_FAILED\x10\xc8\x01\x12\x11\n\x0c\x44IAL_REFUSED\x10\xc9\x01\x12\x10\n\x0bSTOP_FAILED\x10\xac\x02\x12\x16\n\x11MALFORMED_MESSAGE\x10\x90\x03\x62\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals()) _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.relay.circuit_v2.pb.circuit_pb2', globals()) _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.relay.circuit_v2.pb.circuit_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False: if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None DESCRIPTOR._options = None
_HOPMESSAGE._serialized_start=60 _HOPMESSAGE._serialized_start=60
_HOPMESSAGE._serialized_end=303 _HOPMESSAGE._serialized_end=303
View File
@ -1,14 +0,0 @@
syntax = "proto2";
package holepunch.pb;
message HolePunch {
enum Type {
CONNECT = 100;
SYNC = 300;
}
required Type type = 1;
repeated bytes ObsAddrs = 2;
}
View File
@ -1,27 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/relay/circuit_v2/pb/dcutr.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n&libp2p/relay/circuit_v2/pb/dcutr.proto\x12\x0cholepunch.pb\"i\n\tHolePunch\x12*\n\x04type\x18\x01 \x02(\x0e\x32\x1c.holepunch.pb.HolePunch.Type\x12\x10\n\x08ObsAddrs\x18\x02 \x03(\x0c\"\x1e\n\x04Type\x12\x0b\n\x07\x43ONNECT\x10\x64\x12\t\n\x04SYNC\x10\xac\x02')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.relay.circuit_v2.pb.dcutr_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_HOLEPUNCH._serialized_start=56
_HOLEPUNCH._serialized_end=161
_HOLEPUNCH_TYPE._serialized_start=131
_HOLEPUNCH_TYPE._serialized_end=161
# @@protoc_insertion_point(module_scope)
View File
@ -1,53 +0,0 @@
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file
"""
import builtins
import collections.abc
import google.protobuf.descriptor
import google.protobuf.internal.containers
import google.protobuf.internal.enum_type_wrapper
import google.protobuf.message
import sys
import typing
if sys.version_info >= (3, 10):
import typing as typing_extensions
else:
import typing_extensions
DESCRIPTOR: google.protobuf.descriptor.FileDescriptor
@typing.final
class HolePunch(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class _Type:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _TypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[HolePunch._Type.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
CONNECT: HolePunch._Type.ValueType # 100
SYNC: HolePunch._Type.ValueType # 300
class Type(_Type, metaclass=_TypeEnumTypeWrapper): ...
CONNECT: HolePunch.Type.ValueType # 100
SYNC: HolePunch.Type.ValueType # 300
TYPE_FIELD_NUMBER: builtins.int
OBSADDRS_FIELD_NUMBER: builtins.int
type: global___HolePunch.Type.ValueType
@property
def ObsAddrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...
def __init__(
self,
*,
type: global___HolePunch.Type.ValueType | None = ...,
ObsAddrs: collections.abc.Iterable[builtins.bytes] | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["type", b"type"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["ObsAddrs", b"ObsAddrs", "type", b"type"]) -> None: ...
global___HolePunch = HolePunch
View File
@ -41,8 +41,7 @@ class BaseNoiseMsgReadWriter(EncryptedMsgReadWriter):
read_writer: NoisePacketReadWriter read_writer: NoisePacketReadWriter
noise_state: NoiseState noise_state: NoiseState
# NOTE: This prefix is added in msg#3 in Go. # FIXME: This prefix is added in msg#3 in Go. Check whether it's a desired behavior.
# Support in py-libp2p is available but not used
prefix: bytes = b"\x00" * 32 prefix: bytes = b"\x00" * 32
def __init__(self, conn: IRawConnection, noise_state: NoiseState) -> None: def __init__(self, conn: IRawConnection, noise_state: NoiseState) -> None:
View File
@ -29,6 +29,11 @@ class Transport(ISecureTransport):
early_data: bytes | None early_data: bytes | None
with_noise_pipes: bool with_noise_pipes: bool
# NOTE: Implementations that support Noise Pipes must decide whether to use
# an XX or IK handshake based on whether they possess a cached static
# Noise key for the remote peer.
# TODO: A storage of seen noise static keys for pattern IK?
def __init__( def __init__(
self, self,
libp2p_keypair: KeyPair, libp2p_keypair: KeyPair,
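The note above describes the Noise Pipes decision between the XX and IK handshake patterns. A hedged sketch of that choice, where `static_key_cache` is an assumed mapping from peer IDs to previously seen static Noise keys (not py-libp2p API):

def choose_handshake_pattern(remote_peer_id, static_key_cache):
    # Illustrative only: a cached static key lets us attempt the shorter
    # IK handshake; implementations fall back to XX if the key is stale.
    cached = static_key_cache.get(remote_peer_id)
    if cached is not None:
        return "IK", cached
    # No cached key: run the full XX handshake, which exchanges static
    # keys in-band.
    return "XX", None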
View File
@ -17,9 +17,6 @@ from libp2p.custom_types import (
from libp2p.peer.id import ( from libp2p.peer.id import (
ID, ID,
) )
from libp2p.protocol_muxer.exceptions import (
MultiselectError,
)
from libp2p.protocol_muxer.multiselect import ( from libp2p.protocol_muxer.multiselect import (
Multiselect, Multiselect,
) )
@ -107,7 +104,7 @@ class SecurityMultistream(ABC):
:param is_initiator: true if we are the initiator, false otherwise :param is_initiator: true if we are the initiator, false otherwise
:return: selected secure transport :return: selected secure transport
""" """
protocol: TProtocol | None protocol: TProtocol
communicator = MultiselectCommunicator(conn) communicator = MultiselectCommunicator(conn)
if is_initiator: if is_initiator:
# Select protocol if initiator # Select protocol if initiator
@ -117,7 +114,5 @@ class SecurityMultistream(ABC):
else: else:
# Select protocol if non-initiator # Select protocol if non-initiator
protocol, _ = await self.multiselect.negotiate(communicator) protocol, _ = await self.multiselect.negotiate(communicator)
if protocol is None:
raise MultiselectError("fail to negotiate a security protocol")
# Return transport from protocol # Return transport from protocol
return self.transports[protocol] return self.transports[protocol]
View File
@ -1,5 +1,3 @@
from collections.abc import AsyncGenerator
from contextlib import asynccontextmanager
from types import ( from types import (
TracebackType, TracebackType,
) )
@ -34,72 +32,6 @@ if TYPE_CHECKING:
) )
class ReadWriteLock:
"""
A read-write lock that allows multiple concurrent readers
or one exclusive writer, implemented using Trio primitives.
"""
def __init__(self) -> None:
self._readers = 0
self._readers_lock = trio.Lock() # Protects access to _readers count
self._writer_lock = trio.Semaphore(1) # Allows only one writer at a time
async def acquire_read(self) -> None:
"""Acquire a read lock. Multiple readers can hold it simultaneously."""
try:
async with self._readers_lock:
if self._readers == 0:
await self._writer_lock.acquire()
self._readers += 1
except trio.Cancelled:
raise
async def release_read(self) -> None:
"""Release a read lock."""
async with self._readers_lock:
if self._readers == 1:
self._writer_lock.release()
self._readers -= 1
async def acquire_write(self) -> None:
"""Acquire an exclusive write lock."""
try:
await self._writer_lock.acquire()
except trio.Cancelled:
raise
def release_write(self) -> None:
"""Release the exclusive write lock."""
self._writer_lock.release()
@asynccontextmanager
async def read_lock(self) -> AsyncGenerator[None, None]:
"""Context manager for acquiring and releasing a read lock safely."""
acquire = False
try:
await self.acquire_read()
acquire = True
yield
finally:
if acquire:
with trio.CancelScope() as scope:
scope.shield = True
await self.release_read()
@asynccontextmanager
async def write_lock(self) -> AsyncGenerator[None, None]:
"""Context manager for acquiring and releasing a write lock safely."""
acquire = False
try:
await self.acquire_write()
acquire = True
yield
finally:
if acquire:
self.release_write()
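For reference, a minimal usage sketch of the `ReadWriteLock` removed above: reads may run concurrently, writes are exclusive.

async def demo(lock: "ReadWriteLock", stream) -> None:
    async with lock.read_lock():      # shared: many readers at once
        data = await stream.read(1024)
    async with lock.write_lock():     # exclusive: blocks readers and writers
        await stream.write(data)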
class MplexStream(IMuxedStream): class MplexStream(IMuxedStream):
""" """
reference: https://github.com/libp2p/go-mplex/blob/master/stream.go reference: https://github.com/libp2p/go-mplex/blob/master/stream.go
@ -114,7 +46,7 @@ class MplexStream(IMuxedStream):
read_deadline: int | None read_deadline: int | None
write_deadline: int | None write_deadline: int | None
rw_lock: ReadWriteLock # TODO: Add lock for read/write to avoid interleaving receiving messages?
close_lock: trio.Lock close_lock: trio.Lock
# NOTE: `dataIn` is size of 8 in Go implementation. # NOTE: `dataIn` is size of 8 in Go implementation.
@ -148,7 +80,6 @@ class MplexStream(IMuxedStream):
self.event_remote_closed = trio.Event() self.event_remote_closed = trio.Event()
self.event_reset = trio.Event() self.event_reset = trio.Event()
self.close_lock = trio.Lock() self.close_lock = trio.Lock()
self.rw_lock = ReadWriteLock()
self.incoming_data_channel = incoming_data_channel self.incoming_data_channel = incoming_data_channel
self._buf = bytearray() self._buf = bytearray()
@ -182,49 +113,48 @@ class MplexStream(IMuxedStream):
:param n: number of bytes to read :param n: number of bytes to read
:return: bytes actually read :return: bytes actually read
""" """
async with self.rw_lock.read_lock(): if n is not None and n < 0:
if n is not None and n < 0: raise ValueError(
raise ValueError( "the number of bytes to read `n` must be non-negative or "
"the number of bytes to read `n` must be non-negative or " f"`None` to indicate read until EOF, got n={n}"
f"`None` to indicate read until EOF, got n={n}" )
) if self.event_reset.is_set():
if self.event_reset.is_set(): raise MplexStreamReset
raise MplexStreamReset if n is None:
if n is None: return await self._read_until_eof()
return await self._read_until_eof() if len(self._buf) == 0:
if len(self._buf) == 0: data: bytes
data: bytes # Peek whether there is data available. If yes, we just read until there is
# Peek whether there is data available. If yes, we just read until # no data, then return.
# there is no data, then return. try:
data = self.incoming_data_channel.receive_nowait()
self._buf.extend(data)
except trio.EndOfChannel:
raise MplexStreamEOF
except trio.WouldBlock:
# We know `receive` will be blocked here. Wait for data here with
# `receive` and catch all kinds of errors here.
try: try:
data = self.incoming_data_channel.receive_nowait() data = await self.incoming_data_channel.receive()
self._buf.extend(data) self._buf.extend(data)
except trio.EndOfChannel: except trio.EndOfChannel:
raise MplexStreamEOF if self.event_reset.is_set():
except trio.WouldBlock: raise MplexStreamReset
# We know `receive` will be blocked here. Wait for data here with if self.event_remote_closed.is_set():
# `receive` and catch all kinds of errors here. raise MplexStreamEOF
try: except trio.ClosedResourceError as error:
data = await self.incoming_data_channel.receive() # Probably `incoming_data_channel` is closed in `reset` when we are
self._buf.extend(data) # waiting for `receive`.
except trio.EndOfChannel: if self.event_reset.is_set():
if self.event_reset.is_set(): raise MplexStreamReset
raise MplexStreamReset raise Exception(
if self.event_remote_closed.is_set(): "`incoming_data_channel` is closed but stream is not reset. "
raise MplexStreamEOF "This should never happen."
except trio.ClosedResourceError as error: ) from error
# Probably `incoming_data_channel` is closed in `reset` when self._buf.extend(self._read_return_when_blocked())
# we are waiting for `receive`. payload = self._buf[:n]
if self.event_reset.is_set(): self._buf = self._buf[len(payload) :]
raise MplexStreamReset return bytes(payload)
raise Exception(
"`incoming_data_channel` is closed but stream is not reset."
"This should never happen."
) from error
self._buf.extend(self._read_return_when_blocked())
payload = self._buf[:n]
self._buf = self._buf[len(payload) :]
return bytes(payload)
async def write(self, data: bytes) -> None: async def write(self, data: bytes) -> None:
""" """
@ -232,21 +162,22 @@ class MplexStream(IMuxedStream):
:return: number of bytes written :return: number of bytes written
""" """
async with self.rw_lock.write_lock(): if self.event_local_closed.is_set():
if self.event_local_closed.is_set(): raise MplexStreamClosed(f"cannot write to closed stream: data={data!r}")
raise MplexStreamClosed(f"cannot write to closed stream: data={data!r}") flag = (
flag = ( HeaderTags.MessageInitiator
HeaderTags.MessageInitiator if self.is_initiator
if self.is_initiator else HeaderTags.MessageReceiver
else HeaderTags.MessageReceiver )
) await self.muxed_conn.send_message(flag, data, self.stream_id)
await self.muxed_conn.send_message(flag, data, self.stream_id)
async def close(self) -> None: async def close(self) -> None:
""" """
Closing a stream closes it for writing and closes the remote end for Closing a stream closes it for writing and closes the remote end for
reading but allows writing in the other direction. reading but allows writing in the other direction.
""" """
# TODO error handling with timeout
async with self.close_lock: async with self.close_lock:
if self.event_local_closed.is_set(): if self.event_local_closed.is_set():
return return
@ -254,17 +185,8 @@ class MplexStream(IMuxedStream):
flag = ( flag = (
HeaderTags.CloseInitiator if self.is_initiator else HeaderTags.CloseReceiver HeaderTags.CloseInitiator if self.is_initiator else HeaderTags.CloseReceiver
) )
# TODO: Raise when `muxed_conn.send_message` fails and `Mplex` isn't shutdown.
try: await self.muxed_conn.send_message(flag, None, self.stream_id)
with trio.fail_after(5): # timeout in seconds
await self.muxed_conn.send_message(flag, None, self.stream_id)
except trio.TooSlowError:
raise TimeoutError("Timeout while trying to close the stream")
except MuxedConnUnavailable:
if not self.muxed_conn.event_shutting_down.is_set():
raise RuntimeError(
"Failed to send close message and Mplex isn't shutting down"
)
_is_remote_closed: bool _is_remote_closed: bool
async with self.close_lock: async with self.close_lock:
View File
@ -17,9 +17,6 @@ from libp2p.custom_types import (
from libp2p.peer.id import ( from libp2p.peer.id import (
ID, ID,
) )
from libp2p.protocol_muxer.exceptions import (
MultiselectError,
)
from libp2p.protocol_muxer.multiselect import ( from libp2p.protocol_muxer.multiselect import (
Multiselect, Multiselect,
) )
@ -76,7 +73,7 @@ class MuxerMultistream:
:param conn: conn to choose a transport over :param conn: conn to choose a transport over
:return: selected muxer transport :return: selected muxer transport
""" """
protocol: TProtocol | None protocol: TProtocol
communicator = MultiselectCommunicator(conn) communicator = MultiselectCommunicator(conn)
if conn.is_initiator: if conn.is_initiator:
protocol = await self.multiselect_client.select_one_of( protocol = await self.multiselect_client.select_one_of(
@ -84,8 +81,6 @@ class MuxerMultistream:
) )
else: else:
protocol, _ = await self.multiselect.negotiate(communicator) protocol, _ = await self.multiselect.negotiate(communicator)
if protocol is None:
raise MultiselectError("fail to negotiate a stream muxer protocol")
return self.transports[protocol] return self.transports[protocol]
async def new_conn(self, conn: ISecureConn, peer_id: ID) -> IMuxedConn: async def new_conn(self, conn: ISecureConn, peer_id: ID) -> IMuxedConn:
View File
@ -45,9 +45,6 @@ from libp2p.stream_muxer.exceptions import (
MuxedStreamReset, MuxedStreamReset,
) )
# Configure logger for this module
logger = logging.getLogger("libp2p.stream_muxer.yamux")
PROTOCOL_ID = "/yamux/1.0.0" PROTOCOL_ID = "/yamux/1.0.0"
TYPE_DATA = 0x0 TYPE_DATA = 0x0
TYPE_WINDOW_UPDATE = 0x1 TYPE_WINDOW_UPDATE = 0x1
@ -101,13 +98,13 @@ class YamuxStream(IMuxedStream):
# Flow control: Check if we have enough send window # Flow control: Check if we have enough send window
total_len = len(data) total_len = len(data)
sent = 0 sent = 0
logger.debug(f"Stream {self.stream_id}: Starts writing {total_len} bytes ") logging.debug(f"Stream {self.stream_id}: Starts writing {total_len} bytes ")
while sent < total_len: while sent < total_len:
# Wait for available window with timeout # Wait for available window with timeout
timeout = False timeout = False
async with self.window_lock: async with self.window_lock:
if self.send_window == 0: if self.send_window == 0:
logger.debug( logging.debug(
f"Stream {self.stream_id}: Window is zero, waiting for update" f"Stream {self.stream_id}: Window is zero, waiting for update"
) )
# Release lock and wait with timeout # Release lock and wait with timeout
@ -155,12 +152,12 @@ class YamuxStream(IMuxedStream):
""" """
if increment <= 0: if increment <= 0:
# If increment is zero or negative, skip sending update # If increment is zero or negative, skip sending update
logger.debug( logging.debug(
f"Stream {self.stream_id}: Skipping window update" f"Stream {self.stream_id}: Skipping window update"
f"(increment={increment})" f"(increment={increment})"
) )
return return
logger.debug( logging.debug(
f"Stream {self.stream_id}: Sending window update with increment={increment}" f"Stream {self.stream_id}: Sending window update with increment={increment}"
) )
@ -188,7 +185,7 @@ class YamuxStream(IMuxedStream):
# If the stream is closed for receiving and the buffer is empty, raise EOF # If the stream is closed for receiving and the buffer is empty, raise EOF
if self.recv_closed and not self.conn.stream_buffers.get(self.stream_id): if self.recv_closed and not self.conn.stream_buffers.get(self.stream_id):
logger.debug( logging.debug(
f"Stream {self.stream_id}: Stream closed for receiving and buffer empty" f"Stream {self.stream_id}: Stream closed for receiving and buffer empty"
) )
raise MuxedStreamEOF("Stream is closed for receiving") raise MuxedStreamEOF("Stream is closed for receiving")
@ -201,7 +198,7 @@ class YamuxStream(IMuxedStream):
# If buffer is not available, check if stream is closed # If buffer is not available, check if stream is closed
if buffer is None: if buffer is None:
logger.debug(f"Stream {self.stream_id}: No buffer available") logging.debug(f"Stream {self.stream_id}: No buffer available")
raise MuxedStreamEOF("Stream buffer closed") raise MuxedStreamEOF("Stream buffer closed")
# If we have data in buffer, process it # If we have data in buffer, process it
@ -213,34 +210,34 @@ class YamuxStream(IMuxedStream):
# Send window update for the chunk we just read # Send window update for the chunk we just read
async with self.window_lock: async with self.window_lock:
self.recv_window += len(chunk) self.recv_window += len(chunk)
logger.debug(f"Stream {self.stream_id}: Update {len(chunk)}") logging.debug(f"Stream {self.stream_id}: Update {len(chunk)}")
await self.send_window_update(len(chunk), skip_lock=True) await self.send_window_update(len(chunk), skip_lock=True)
# If stream is closed (FIN received) and buffer is empty, break # If stream is closed (FIN received) and buffer is empty, break
if self.recv_closed and len(buffer) == 0: if self.recv_closed and len(buffer) == 0:
logger.debug(f"Stream {self.stream_id}: Closed with empty buffer") logging.debug(f"Stream {self.stream_id}: Closed with empty buffer")
break break
# If stream was reset, raise reset error # If stream was reset, raise reset error
if self.reset_received: if self.reset_received:
logger.debug(f"Stream {self.stream_id}: Stream was reset") logging.debug(f"Stream {self.stream_id}: Stream was reset")
raise MuxedStreamReset("Stream was reset") raise MuxedStreamReset("Stream was reset")
# Wait for more data or stream closure # Wait for more data or stream closure
logger.debug(f"Stream {self.stream_id}: Waiting for data or FIN") logging.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
await self.conn.stream_events[self.stream_id].wait() await self.conn.stream_events[self.stream_id].wait()
self.conn.stream_events[self.stream_id] = trio.Event() self.conn.stream_events[self.stream_id] = trio.Event()
# After loop exit, first check if we have data to return # After loop exit, first check if we have data to return
if data: if data:
logger.debug( logging.debug(
f"Stream {self.stream_id}: Returning {len(data)} bytes after loop" f"Stream {self.stream_id}: Returning {len(data)} bytes after loop"
) )
return data return data
# No data accumulated, now check why we exited the loop # No data accumulated, now check why we exited the loop
if self.conn.event_shutting_down.is_set(): if self.conn.event_shutting_down.is_set():
logger.debug(f"Stream {self.stream_id}: Connection shutting down") logging.debug(f"Stream {self.stream_id}: Connection shutting down")
raise MuxedStreamEOF("Connection shut down") raise MuxedStreamEOF("Connection shut down")
# Return empty data # Return empty data
@ -249,7 +246,7 @@ class YamuxStream(IMuxedStream):
data = await self.conn.read_stream(self.stream_id, n) data = await self.conn.read_stream(self.stream_id, n)
async with self.window_lock: async with self.window_lock:
self.recv_window += len(data) self.recv_window += len(data)
logger.debug( logging.debug(
f"Stream {self.stream_id}: Sending window update after read, " f"Stream {self.stream_id}: Sending window update after read, "
f"increment={len(data)}" f"increment={len(data)}"
) )
@ -258,7 +255,7 @@ class YamuxStream(IMuxedStream):
async def close(self) -> None: async def close(self) -> None:
if not self.send_closed: if not self.send_closed:
logger.debug(f"Half-closing stream {self.stream_id} (local end)") logging.debug(f"Half-closing stream {self.stream_id} (local end)")
header = struct.pack( header = struct.pack(
YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_FIN, self.stream_id, 0 YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_FIN, self.stream_id, 0
) )
@ -274,7 +271,7 @@ class YamuxStream(IMuxedStream):
async def reset(self) -> None: async def reset(self) -> None:
if not self.closed: if not self.closed:
logger.debug(f"Resetting stream {self.stream_id}") logging.debug(f"Resetting stream {self.stream_id}")
header = struct.pack( header = struct.pack(
YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_RST, self.stream_id, 0 YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_RST, self.stream_id, 0
) )
@ -352,7 +349,7 @@ class Yamux(IMuxedConn):
self._nursery: Nursery | None = None self._nursery: Nursery | None = None
async def start(self) -> None: async def start(self) -> None:
logger.debug(f"Starting Yamux for {self.peer_id}") logging.debug(f"Starting Yamux for {self.peer_id}")
if self.event_started.is_set(): if self.event_started.is_set():
return return
async with trio.open_nursery() as nursery: async with trio.open_nursery() as nursery:
@ -365,7 +362,7 @@ class Yamux(IMuxedConn):
return self.is_initiator_value return self.is_initiator_value
async def close(self, error_code: int = GO_AWAY_NORMAL) -> None: async def close(self, error_code: int = GO_AWAY_NORMAL) -> None:
logger.debug(f"Closing Yamux connection with code {error_code}") logging.debug(f"Closing Yamux connection with code {error_code}")
async with self.streams_lock: async with self.streams_lock:
if not self.event_shutting_down.is_set(): if not self.event_shutting_down.is_set():
try: try:
@ -374,7 +371,7 @@ class Yamux(IMuxedConn):
) )
await self.secured_conn.write(header) await self.secured_conn.write(header)
except Exception as e: except Exception as e:
logger.debug(f"Failed to send GO_AWAY: {e}") logging.debug(f"Failed to send GO_AWAY: {e}")
self.event_shutting_down.set() self.event_shutting_down.set()
for stream in self.streams.values(): for stream in self.streams.values():
stream.closed = True stream.closed = True
@ -385,12 +382,12 @@ class Yamux(IMuxedConn):
self.stream_events.clear() self.stream_events.clear()
try: try:
await self.secured_conn.close() await self.secured_conn.close()
logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}") logging.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
except Exception as e: except Exception as e:
logger.debug(f"Error closing secured_conn for peer {self.peer_id}: {e}") logging.debug(f"Error closing secured_conn for peer {self.peer_id}: {e}")
self.event_closed.set() self.event_closed.set()
if self.on_close: if self.on_close:
logger.debug(f"Calling on_close in Yamux.close for peer {self.peer_id}") logging.debug(f"Calling on_close in Yamux.close for peer {self.peer_id}")
if inspect.iscoroutinefunction(self.on_close): if inspect.iscoroutinefunction(self.on_close):
if self.on_close is not None: if self.on_close is not None:
await self.on_close() await self.on_close()
@ -419,7 +416,7 @@ class Yamux(IMuxedConn):
header = struct.pack( header = struct.pack(
YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_SYN, stream_id, 0 YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_SYN, stream_id, 0
) )
logger.debug(f"Sending SYN header for stream {stream_id}") logging.debug(f"Sending SYN header for stream {stream_id}")
await self.secured_conn.write(header) await self.secured_conn.write(header)
return stream return stream
except Exception as e: except Exception as e:
@ -427,32 +424,32 @@ class Yamux(IMuxedConn):
raise e raise e
async def accept_stream(self) -> IMuxedStream: async def accept_stream(self) -> IMuxedStream:
logger.debug("Waiting for new stream") logging.debug("Waiting for new stream")
try: try:
stream = await self.new_stream_receive_channel.receive() stream = await self.new_stream_receive_channel.receive()
logger.debug(f"Received stream {stream.stream_id}") logging.debug(f"Received stream {stream.stream_id}")
return stream return stream
except trio.EndOfChannel: except trio.EndOfChannel:
raise MuxedStreamError("No new streams available") raise MuxedStreamError("No new streams available")
async def read_stream(self, stream_id: int, n: int = -1) -> bytes: async def read_stream(self, stream_id: int, n: int = -1) -> bytes:
logger.debug(f"Reading from stream {self.peer_id}:{stream_id}, n={n}") logging.debug(f"Reading from stream {self.peer_id}:{stream_id}, n={n}")
if n is None: if n is None:
n = -1 n = -1
while True: while True:
async with self.streams_lock: async with self.streams_lock:
if stream_id not in self.streams: if stream_id not in self.streams:
logger.debug(f"Stream {self.peer_id}:{stream_id} unknown") logging.debug(f"Stream {self.peer_id}:{stream_id} unknown")
raise MuxedStreamEOF("Stream closed") raise MuxedStreamEOF("Stream closed")
if self.event_shutting_down.is_set(): if self.event_shutting_down.is_set():
logger.debug( logging.debug(
f"Stream {self.peer_id}:{stream_id}: connection shutting down" f"Stream {self.peer_id}:{stream_id}: connection shutting down"
) )
raise MuxedStreamEOF("Connection shut down") raise MuxedStreamEOF("Connection shut down")
stream = self.streams[stream_id] stream = self.streams[stream_id]
buffer = self.stream_buffers.get(stream_id) buffer = self.stream_buffers.get(stream_id)
logger.debug( logging.debug(
f"Stream {self.peer_id}:{stream_id}: " f"Stream {self.peer_id}:{stream_id}: "
f"closed={stream.closed}, " f"closed={stream.closed}, "
f"recv_closed={stream.recv_closed}, " f"recv_closed={stream.recv_closed}, "
@ -460,7 +457,7 @@ class Yamux(IMuxedConn):
f"buffer_len={len(buffer) if buffer else 0}" f"buffer_len={len(buffer) if buffer else 0}"
) )
if buffer is None: if buffer is None:
logger.debug( logging.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"Buffer gone, assuming closed" f"Buffer gone, assuming closed"
) )
@ -473,7 +470,7 @@ class Yamux(IMuxedConn):
else: else:
data = bytes(buffer[:n]) data = bytes(buffer[:n])
del buffer[:n] del buffer[:n]
logger.debug( logging.debug(
f"Returning {len(data)} bytes" f"Returning {len(data)} bytes"
f"from stream {self.peer_id}:{stream_id}, " f"from stream {self.peer_id}:{stream_id}, "
f"buffer_len={len(buffer)}" f"buffer_len={len(buffer)}"
@ -481,7 +478,7 @@ class Yamux(IMuxedConn):
return data return data
# If reset received and buffer is empty, raise reset # If reset received and buffer is empty, raise reset
if stream.reset_received: if stream.reset_received:
logger.debug( logging.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"reset_received=True, raising MuxedStreamReset" f"reset_received=True, raising MuxedStreamReset"
) )
@ -494,7 +491,7 @@ class Yamux(IMuxedConn):
else: else:
data = bytes(buffer[:n]) data = bytes(buffer[:n])
del buffer[:n] del buffer[:n]
logger.debug( logging.debug(
f"Returning {len(data)} bytes" f"Returning {len(data)} bytes"
f"from stream {self.peer_id}:{stream_id}, " f"from stream {self.peer_id}:{stream_id}, "
f"buffer_len={len(buffer)}" f"buffer_len={len(buffer)}"
@ -502,21 +499,21 @@ class Yamux(IMuxedConn):
return data return data
# Check if stream is closed # Check if stream is closed
if stream.closed: if stream.closed:
logger.debug( logging.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"closed=True, raising MuxedStreamReset" f"closed=True, raising MuxedStreamReset"
) )
raise MuxedStreamReset("Stream is reset or closed") raise MuxedStreamReset("Stream is reset or closed")
# Check if recv_closed and buffer empty # Check if recv_closed and buffer empty
if stream.recv_closed: if stream.recv_closed:
logger.debug( logging.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"recv_closed=True, buffer empty, raising EOF" f"recv_closed=True, buffer empty, raising EOF"
) )
raise MuxedStreamEOF("Stream is closed for receiving") raise MuxedStreamEOF("Stream is closed for receiving")
# Wait for data if stream is still open # Wait for data if stream is still open
logger.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}") logging.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}")
try: try:
await self.stream_events[stream_id].wait() await self.stream_events[stream_id].wait()
self.stream_events[stream_id] = trio.Event() self.stream_events[stream_id] = trio.Event()
@ -531,7 +528,7 @@ class Yamux(IMuxedConn):
try: try:
header = await self.secured_conn.read(HEADER_SIZE) header = await self.secured_conn.read(HEADER_SIZE)
if not header or len(header) < HEADER_SIZE: if not header or len(header) < HEADER_SIZE:
logger.debug( logging.debug(
f"Connection closed orincomplete header for peer {self.peer_id}" f"Connection closed orincomplete header for peer {self.peer_id}"
) )
self.event_shutting_down.set() self.event_shutting_down.set()
@ -540,7 +537,7 @@ class Yamux(IMuxedConn):
version, typ, flags, stream_id, length = struct.unpack( version, typ, flags, stream_id, length = struct.unpack(
YAMUX_HEADER_FORMAT, header YAMUX_HEADER_FORMAT, header
) )
logger.debug( logging.debug(
f"Received header for peer {self.peer_id}:" f"Received header for peer {self.peer_id}:"
f"type={typ}, flags={flags}, stream_id={stream_id}," f"type={typ}, flags={flags}, stream_id={stream_id},"
f"length={length}" f"length={length}"
@ -561,7 +558,7 @@ class Yamux(IMuxedConn):
0, 0,
) )
await self.secured_conn.write(ack_header) await self.secured_conn.write(ack_header)
logger.debug( logging.debug(
f"Sending stream {stream_id}" f"Sending stream {stream_id}"
f"to channel for peer {self.peer_id}" f"to channel for peer {self.peer_id}"
) )
@ -579,7 +576,7 @@ class Yamux(IMuxedConn):
elif typ == TYPE_DATA and flags & FLAG_RST: elif typ == TYPE_DATA and flags & FLAG_RST:
async with self.streams_lock: async with self.streams_lock:
if stream_id in self.streams: if stream_id in self.streams:
logger.debug( logging.debug(
f"Resetting stream {stream_id} for peer {self.peer_id}" f"Resetting stream {stream_id} for peer {self.peer_id}"
) )
self.streams[stream_id].closed = True self.streams[stream_id].closed = True
@ -588,27 +585,27 @@ class Yamux(IMuxedConn):
elif typ == TYPE_DATA and flags & FLAG_ACK: elif typ == TYPE_DATA and flags & FLAG_ACK:
async with self.streams_lock: async with self.streams_lock:
if stream_id in self.streams: if stream_id in self.streams:
logger.debug( logging.debug(
f"Received ACK for stream" f"Received ACK for stream"
f"{stream_id} for peer {self.peer_id}" f"{stream_id} for peer {self.peer_id}"
) )
elif typ == TYPE_GO_AWAY: elif typ == TYPE_GO_AWAY:
error_code = length error_code = length
if error_code == GO_AWAY_NORMAL: if error_code == GO_AWAY_NORMAL:
logger.debug( logging.debug(
f"Received GO_AWAY for peer" f"Received GO_AWAY for peer"
f"{self.peer_id}: Normal termination" f"{self.peer_id}: Normal termination"
) )
elif error_code == GO_AWAY_PROTOCOL_ERROR: elif error_code == GO_AWAY_PROTOCOL_ERROR:
logger.error( logging.error(
f"Received GO_AWAY for peer{self.peer_id}: Protocol error" f"Received GO_AWAY for peer{self.peer_id}: Protocol error"
) )
elif error_code == GO_AWAY_INTERNAL_ERROR: elif error_code == GO_AWAY_INTERNAL_ERROR:
logger.error( logging.error(
f"Received GO_AWAY for peer {self.peer_id}: Internal error" f"Received GO_AWAY for peer {self.peer_id}: Internal error"
) )
else: else:
logger.error( logging.error(
f"Received GO_AWAY for peer {self.peer_id}" f"Received GO_AWAY for peer {self.peer_id}"
f"with unknown error code: {error_code}" f"with unknown error code: {error_code}"
) )
@ -617,7 +614,7 @@ class Yamux(IMuxedConn):
break break
elif typ == TYPE_PING: elif typ == TYPE_PING:
if flags & FLAG_SYN: if flags & FLAG_SYN:
logger.debug( logging.debug(
f"Received ping request with value" f"Received ping request with value"
f"{length} for peer {self.peer_id}" f"{length} for peer {self.peer_id}"
) )
@ -626,7 +623,7 @@ class Yamux(IMuxedConn):
) )
await self.secured_conn.write(ping_header) await self.secured_conn.write(ping_header)
elif flags & FLAG_ACK: elif flags & FLAG_ACK:
logger.debug( logging.debug(
f"Received ping response with value" f"Received ping response with value"
f"{length} for peer {self.peer_id}" f"{length} for peer {self.peer_id}"
) )
@ -640,7 +637,7 @@ class Yamux(IMuxedConn):
self.stream_buffers[stream_id].extend(data) self.stream_buffers[stream_id].extend(data)
self.stream_events[stream_id].set() self.stream_events[stream_id].set()
if flags & FLAG_FIN: if flags & FLAG_FIN:
logger.debug( logging.debug(
f"Received FIN for stream {self.peer_id}:" f"Received FIN for stream {self.peer_id}:"
f"{stream_id}, marking recv_closed" f"{stream_id}, marking recv_closed"
) )
@ -648,7 +645,7 @@ class Yamux(IMuxedConn):
if self.streams[stream_id].send_closed: if self.streams[stream_id].send_closed:
self.streams[stream_id].closed = True self.streams[stream_id].closed = True
except Exception as e: except Exception as e:
logger.error(f"Error reading data for stream {stream_id}: {e}") logging.error(f"Error reading data for stream {stream_id}: {e}")
# Mark stream as closed on read error # Mark stream as closed on read error
async with self.streams_lock: async with self.streams_lock:
if stream_id in self.streams: if stream_id in self.streams:
@ -662,7 +659,7 @@ class Yamux(IMuxedConn):
if stream_id in self.streams: if stream_id in self.streams:
stream = self.streams[stream_id] stream = self.streams[stream_id]
async with stream.window_lock: async with stream.window_lock:
logger.debug( logging.debug(
f"Received window update for stream" f"Received window update for stream"
f"{self.peer_id}:{stream_id}," f"{self.peer_id}:{stream_id},"
f" increment: {increment}" f" increment: {increment}"
@ -677,7 +674,7 @@ class Yamux(IMuxedConn):
and details.get("requested_count") == 2 and details.get("requested_count") == 2
and details.get("received_count") == 0 and details.get("received_count") == 0
): ):
logger.info( logging.info(
f"Stream closed cleanly for peer {self.peer_id}" f"Stream closed cleanly for peer {self.peer_id}"
+ f" (IncompleteReadError: {details})" + f" (IncompleteReadError: {details})"
) )
@ -685,32 +682,15 @@ class Yamux(IMuxedConn):
await self._cleanup_on_error() await self._cleanup_on_error()
break break
else: else:
logger.error( logging.error(
f"Error in handle_incoming for peer {self.peer_id}: " f"Error in handle_incoming for peer {self.peer_id}: "
+ f"{type(e).__name__}: {str(e)}" + f"{type(e).__name__}: {str(e)}"
) )
else: else:
# Handle RawConnError with more nuance logging.error(
if isinstance(e, RawConnError): f"Error in handle_incoming for peer {self.peer_id}: "
error_msg = str(e) + f"{type(e).__name__}: {str(e)}"
# If RawConnError is empty, it's likely normal cleanup )
if not error_msg.strip():
logger.info(
f"RawConnError (empty) during cleanup for peer "
f"{self.peer_id} (normal connection shutdown)"
)
else:
# Log non-empty RawConnError as warning
logger.warning(
f"RawConnError during connection handling for peer "
f"{self.peer_id}: {error_msg}"
)
else:
# Log all other errors normally
logger.error(
f"Error in handle_incoming for peer {self.peer_id}: "
+ f"{type(e).__name__}: {str(e)}"
)
# Don't crash the whole connection for temporary errors # Don't crash the whole connection for temporary errors
if self.event_shutting_down.is_set() or isinstance( if self.event_shutting_down.is_set() or isinstance(
e, (RawConnError, OSError) e, (RawConnError, OSError)
@ -740,9 +720,9 @@ class Yamux(IMuxedConn):
# Close the secured connection # Close the secured connection
try: try:
await self.secured_conn.close() await self.secured_conn.close()
logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}") logging.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
except Exception as close_error: except Exception as close_error:
logger.error( logging.error(
f"Error closing secured_conn for peer {self.peer_id}: {close_error}" f"Error closing secured_conn for peer {self.peer_id}: {close_error}"
) )
@ -751,14 +731,14 @@ class Yamux(IMuxedConn):
# Call on_close callback if provided # Call on_close callback if provided
if self.on_close: if self.on_close:
logger.debug(f"Calling on_close for peer {self.peer_id}") logging.debug(f"Calling on_close for peer {self.peer_id}")
try: try:
if inspect.iscoroutinefunction(self.on_close): if inspect.iscoroutinefunction(self.on_close):
await self.on_close() await self.on_close()
else: else:
self.on_close() self.on_close()
except Exception as callback_error: except Exception as callback_error:
logger.error(f"Error in on_close callback: {callback_error}") logging.error(f"Error in on_close callback: {callback_error}")
# Cancel nursery tasks # Cancel nursery tasks
if self._nursery: if self._nursery:
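The left column uses a named module logger where the right column calls the root `logging` module directly. A named logger lets applications tune py-libp2p output per subsystem, e.g.:

import logging

logger = logging.getLogger("libp2p.stream_muxer.yamux")
logger.setLevel(logging.WARNING)   # quiet only yamux debug chatter
logger.debug("suppressed")         # filtered by the subsystem level
logging.debug("root-logger call")  # bypasses per-subsystem filtering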
View File
@ -7,21 +7,11 @@ from libp2p.utils.varint import (
encode_varint_prefixed, encode_varint_prefixed,
read_delim, read_delim,
read_varint_prefixed_bytes, read_varint_prefixed_bytes,
decode_varint_from_bytes,
decode_varint_with_size,
read_length_prefixed_protobuf,
) )
from libp2p.utils.version import ( from libp2p.utils.version import (
get_agent_version, get_agent_version,
) )
from libp2p.utils.address_validation import (
get_available_interfaces,
get_optimal_binding_address,
expand_wildcard_address,
find_free_port,
)
__all__ = [ __all__ = [
"decode_uvarint_from_stream", "decode_uvarint_from_stream",
"encode_delim", "encode_delim",
@ -30,11 +20,4 @@ __all__ = [
"get_agent_version", "get_agent_version",
"read_delim", "read_delim",
"read_varint_prefixed_bytes", "read_varint_prefixed_bytes",
"decode_varint_from_bytes",
"decode_varint_with_size",
"read_length_prefixed_protobuf",
"get_available_interfaces",
"get_optimal_binding_address",
"expand_wildcard_address",
"find_free_port",
] ]
View File
@ -1,160 +0,0 @@
from __future__ import annotations
import socket
from multiaddr import Multiaddr
try:
from multiaddr.utils import ( # type: ignore
get_network_addrs,
get_thin_waist_addresses,
)
_HAS_THIN_WAIST = True
except ImportError: # pragma: no cover - only executed in older environments
_HAS_THIN_WAIST = False
get_thin_waist_addresses = None # type: ignore
get_network_addrs = None # type: ignore
def _safe_get_network_addrs(ip_version: int) -> list[str]:
"""
Internal safe wrapper. Returns a list of IP addresses for the requested IP version.
Falls back to minimal defaults when Thin Waist helpers are missing.
:param ip_version: 4 or 6
"""
if _HAS_THIN_WAIST and get_network_addrs:
try:
return get_network_addrs(ip_version) or []
except Exception: # pragma: no cover - defensive
return []
# Fallback behavior (very conservative)
if ip_version == 4:
return ["127.0.0.1"]
if ip_version == 6:
return ["::1"]
return []
def find_free_port() -> int:
"""Find a free port on localhost."""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind(("", 0)) # Bind to a free port provided by the OS
return s.getsockname()[1]
def _safe_expand(addr: Multiaddr, port: int | None = None) -> list[Multiaddr]:
"""
Internal safe expansion wrapper. Returns a list of Multiaddr objects.
If Thin Waist isn't available, returns [addr] (identity).
"""
if _HAS_THIN_WAIST and get_thin_waist_addresses:
try:
if port is not None:
return get_thin_waist_addresses(addr, port=port) or []
return get_thin_waist_addresses(addr) or []
except Exception: # pragma: no cover - defensive
return [addr]
return [addr]
def get_available_interfaces(port: int, protocol: str = "tcp") -> list[Multiaddr]:
"""
Discover available network interfaces (IPv4 + IPv6 if supported) for binding.
:param port: Port number to bind to.
:param protocol: Transport protocol (e.g., "tcp" or "udp").
:return: List of Multiaddr objects representing candidate interface addresses.
"""
addrs: list[Multiaddr] = []
# IPv4 enumeration
seen_v4: set[str] = set()
for ip in _safe_get_network_addrs(4):
seen_v4.add(ip)
addrs.append(Multiaddr(f"/ip4/{ip}/{protocol}/{port}"))
# Ensure IPv4 loopback is always included when IPv4 interfaces are discovered
if seen_v4 and "127.0.0.1" not in seen_v4:
addrs.append(Multiaddr(f"/ip4/127.0.0.1/{protocol}/{port}"))
# TODO: IPv6 support temporarily disabled due to libp2p handshake issues
# IPv6 connections fail during protocol negotiation (SecurityUpgradeFailure)
# Re-enable IPv6 support once the following issues are resolved:
# - libp2p security handshake over IPv6
# - multiselect protocol over IPv6
# - connection establishment over IPv6
#
# seen_v6: set[str] = set()
# for ip in _safe_get_network_addrs(6):
# seen_v6.add(ip)
# addrs.append(Multiaddr(f"/ip6/{ip}/{protocol}/{port}"))
#
# # Always include IPv6 loopback for testing purposes when IPv6 is available
# # This ensures IPv6 functionality can be tested even without global IPv6 addresses
# if "::1" not in seen_v6:
# addrs.append(Multiaddr(f"/ip6/::1/{protocol}/{port}"))
# Fallback if nothing discovered
if not addrs:
addrs.append(Multiaddr(f"/ip4/0.0.0.0/{protocol}/{port}"))
return addrs
def expand_wildcard_address(
addr: Multiaddr, port: int | None = None
) -> list[Multiaddr]:
"""
Expand a wildcard (e.g. /ip4/0.0.0.0/tcp/0) into all concrete interfaces.
:param addr: Multiaddr to expand.
:param port: Optional override for port selection.
:return: List of concrete Multiaddr instances.
"""
expanded = _safe_expand(addr, port=port)
if not expanded: # Safety fallback
return [addr]
return expanded
def get_optimal_binding_address(port: int, protocol: str = "tcp") -> Multiaddr:
"""
Choose an optimal address for an example to bind to:
- Prefer non-loopback IPv4
- Then non-loopback IPv6
- Fallback to loopback
- Fallback to wildcard
:param port: Port number.
:param protocol: Transport protocol.
:return: A single Multiaddr chosen heuristically.
"""
candidates = get_available_interfaces(port, protocol)
def is_non_loopback(ma: Multiaddr) -> bool:
s = str(ma)
return not ("/ip4/127." in s or "/ip6/::1" in s)
for c in candidates:
if "/ip4/" in str(c) and is_non_loopback(c):
return c
for c in candidates:
if "/ip6/" in str(c) and is_non_loopback(c):
return c
for c in candidates:
if "/ip4/127." in str(c) or "/ip6/::1" in str(c):
return c
# As a final fallback, produce a wildcard
return Multiaddr(f"/ip4/0.0.0.0/{protocol}/{port}")
__all__ = [
"get_available_interfaces",
"get_optimal_binding_address",
"expand_wildcard_address",
"find_free_port",
]
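A short usage sketch of the helpers defined in the module above:

from multiaddr import Multiaddr

port = find_free_port()                       # free TCP port from the OS
candidates = get_available_interfaces(port)   # one multiaddr per interface
best = get_optimal_binding_address(port)      # prefers non-loopback IPv4
wildcard = Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
concrete = expand_wildcard_address(wildcard)  # expand to concrete addresses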
View File
@ -1,9 +1,7 @@
import itertools import itertools
import logging import logging
import math import math
from typing import BinaryIO
from libp2p.abc import INetStream
from libp2p.exceptions import ( from libp2p.exceptions import (
ParseError, ParseError,
) )
@ -27,41 +25,18 @@ HIGH_MASK = 2**7
SHIFT_64_BIT_MAX = int(math.ceil(64 / 7)) * 7 SHIFT_64_BIT_MAX = int(math.ceil(64 / 7)) * 7
def encode_uvarint(value: int) -> bytes: def encode_uvarint(number: int) -> bytes:
"""Encode an unsigned integer as a varint.""" """Pack `number` into varint bytes."""
if value < 0: buf = b""
raise ValueError("Cannot encode negative value as uvarint") while True:
towrite = number & 0x7F
result = bytearray() number >>= 7
while value >= 0x80: if number:
result.append((value & 0x7F) | 0x80) buf += bytes((towrite | 0x80,))
value >>= 7 else:
result.append(value & 0x7F) buf += bytes((towrite,))
return bytes(result)
def decode_uvarint(data: bytes) -> int:
"""Decode a varint from bytes."""
if not data:
raise ParseError("Unexpected end of data")
result = 0
shift = 0
for byte in data:
result |= (byte & 0x7F) << shift
if (byte & 0x80) == 0:
break break
shift += 7 return buf
if shift >= 64:
raise ValueError("Varint too long")
return result
def decode_varint_from_bytes(data: bytes) -> int:
"""Decode a varint from bytes (alias for decode_uvarint for backward comp)."""
return decode_uvarint(data)
async def decode_uvarint_from_stream(reader: Reader) -> int: async def decode_uvarint_from_stream(reader: Reader) -> int:
@ -69,9 +44,7 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
res = 0 res = 0
for shift in itertools.count(0, 7): for shift in itertools.count(0, 7):
if shift > SHIFT_64_BIT_MAX: if shift > SHIFT_64_BIT_MAX:
raise ParseError( raise ParseError("TODO: better exception msg: Integer is too large...")
"Varint decoding error: integer exceeds maximum size of 64 bits."
)
byte = await read_exactly(reader, 1) byte = await read_exactly(reader, 1)
value = byte[0] value = byte[0]
@ -83,35 +56,9 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
return res return res
def decode_varint_with_size(data: bytes) -> tuple[int, int]: def encode_varint_prefixed(msg_bytes: bytes) -> bytes:
""" varint_len = encode_uvarint(len(msg_bytes))
Decode a varint from bytes and return both the value and the number of bytes return varint_len + msg_bytes
consumed.
Returns:
Tuple[int, int]: (value, bytes_consumed)
"""
result = 0
shift = 0
bytes_consumed = 0
for byte in data:
result |= (byte & 0x7F) << shift
bytes_consumed += 1
if (byte & 0x80) == 0:
break
shift += 7
if shift >= 64:
raise ValueError("Varint too long")
return result, bytes_consumed
def encode_varint_prefixed(data: bytes) -> bytes:
"""Encode data with a varint length prefix."""
length_bytes = encode_uvarint(len(data))
return length_bytes + data
async def read_varint_prefixed_bytes(reader: Reader) -> bytes: async def read_varint_prefixed_bytes(reader: Reader) -> bytes:
@ -138,95 +85,3 @@ async def read_delim(reader: Reader) -> bytes:
f'`msg_bytes` is not delimited by b"\\n": `msg_bytes`={msg_bytes!r}' f'`msg_bytes` is not delimited by b"\\n": `msg_bytes`={msg_bytes!r}'
) )
return msg_bytes[:-1] return msg_bytes[:-1]
def read_varint_prefixed_bytes_sync(
stream: BinaryIO, max_length: int = 1024 * 1024
) -> bytes:
"""
Read varint-prefixed bytes from a stream.
Args:
stream: A stream-like object with a read() method
max_length: Maximum allowed data length to prevent memory exhaustion
Returns:
bytes: The data without the length prefix
Raises:
ValueError: If the length prefix is invalid or too large
EOFError: If the stream ends unexpectedly
"""
# Read the varint length prefix
length_bytes = b""
while True:
byte_data = stream.read(1)
if not byte_data:
raise EOFError("Stream ended while reading varint length prefix")
length_bytes += byte_data
if byte_data[0] & 0x80 == 0:
break
# Decode the length
length = decode_uvarint(length_bytes)
if length > max_length:
raise ValueError(f"Data length {length} exceeds maximum allowed {max_length}")
# Read the data
data = stream.read(length)
if len(data) != length:
raise EOFError(f"Expected {length} bytes, got {len(data)}")
return data
async def read_length_prefixed_protobuf(
stream: INetStream, use_varint_format: bool = True, max_length: int = 1024 * 1024
) -> bytes:
"""Read a protobuf message from a stream, handling both formats."""
if use_varint_format:
# Read length-prefixed protobuf message from the stream
# First read the varint length prefix
length_bytes = b""
while True:
b = await stream.read(1)
if not b:
raise Exception("No length prefix received")
length_bytes += b
if b[0] & 0x80 == 0:
break
msg_length = decode_varint_from_bytes(length_bytes)
if msg_length > max_length:
raise Exception(
f"Message length {msg_length} exceeds maximum allowed {max_length}"
)
# Read the protobuf message
data = await stream.read(msg_length)
if len(data) != msg_length:
raise Exception(
f"Incomplete message: expected {msg_length}, got {len(data)}"
)
return data
else:
# Read raw protobuf message from the stream
# For raw format, read all available data in one go
data = await stream.read()
# If we got no data, raise an exception
if not data:
raise Exception("No data received in raw format")
if len(data) > max_length:
raise Exception(
f"Message length {len(data)} exceeds maximum allowed {max_length}"
)
return data
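A quick worked example of the varint helpers above (`encode_uvarint`/`decode_uvarint` from the left column): each byte carries 7 payload bits, and the high bit marks continuation.

# 300 = 0b100101100: low 7 bits 0x2C with continuation bit -> 0xAC,
# remaining bits 0b10 -> 0x02 (the same encoding as SYNC = 300 in the
# dcutr protobuf blob above).
assert encode_uvarint(300) == b"\xac\x02"
assert decode_uvarint(b"\xac\x02") == 300
assert encode_varint_prefixed(b"hi") == b"\x02hi"   # length prefix + payload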
View File
@ -0,0 +1 @@
Added support for ``Kademlia DHT`` in py-libp2p.
View File
@ -1 +0,0 @@
Removed an obsolete FIXME comment: 32-byte prefix support is present in py-libp2p but not enabled by default.
View File
@ -0,0 +1 @@
Limit concurrency in `push_identify_to_peers` to prevent resource congestion under high peer counts.
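A hedged sketch of bounding that concurrency with a `trio.CapacityLimiter` (names and the limit value are illustrative, not the exact py-libp2p code):

import trio

async def push_identify_to_peers(peers, push_one, max_concurrent=10):
    limiter = trio.CapacityLimiter(max_concurrent)

    async def worker(peer):
        async with limiter:   # at most max_concurrent pushes in flight
            await push_one(peer)

    async with trio.open_nursery() as nursery:
        for peer in peers:
            nursery.start_soon(worker, peer)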
View File
@ -0,0 +1,7 @@
Store public key and peer ID in peerstore during handshake
Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available (sketched below).
Added test cases to verify:
1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet
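A minimal sketch of the described behavior, assuming the peerstore's `add_pubkey` API and py-libp2p's `ID.from_pubkey`; the surrounding names are illustrative:

# After receiving the remote public key during the insecure handshake:
if self.peerstore is not None:
    derived_id = ID.from_pubkey(remote_pubkey)
    if derived_id != claimed_peer_id:
        raise HandshakeFailure("peer ID does not match public key")
    self.peerstore.add_pubkey(derived_id, remote_pubkey)  # persist for later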
View File
@ -0,0 +1,6 @@
Fixed several flow-control and concurrency issues in the `YamuxStream` class. Previously, stress-testing revealed that transferring data over `DEFAULT_WINDOW_SIZE` would break the stream due to inconsistent window update handling and lock management. The fixes include:
- Removed sending of window updates during writes to maintain correct flow-control.
- Added proper timeout handling when acquiring and releasing locks to prevent concurrency errors (see the sketch below).
- Corrected the `read` function to properly handle window updates for both `read_until_EOF` and `read_n_bytes`.
- Added event logging at `send_window_updates` and `waiting_for_window_updates` for better observability.
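A hedged sketch of the wait-with-timeout pattern from the second bullet (illustrative names, not the exact `YamuxStream` code):

import trio

async def wait_for_send_window(stream, timeout: float = 5.0) -> None:
    # Block until the peer grants more send window, but never indefinitely.
    with trio.move_on_after(timeout) as scope:
        while stream.send_window == 0:
            await stream.window_event.wait()
    if scope.cancelled_caught:
        raise TimeoutError("timed out waiting for a yamux window update")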
View File
@ -0,0 +1 @@
Added support for ``Multicast DNS`` in py-libp2p.
View File
@ -0,0 +1 @@
Refactored gossipsub heartbeat logic to use a single helper method `_handle_topic_heartbeat` that handles both fanout and gossip heartbeats.
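A hedged sketch of what such a shared helper can look like; `_select_peers_for_topic` is a hypothetical stand-in for the real peer-selection logic:

def _handle_topic_heartbeat(self, topics, topic_peers, degree):
    # Shared by the fanout and gossip heartbeats: top up each topic's
    # peer set until it reaches the desired degree.
    for topic in topics:
        peers = topic_peers.setdefault(topic, set())
        missing = degree - len(peers)
        if missing > 0:
            peers.update(self._select_peers_for_topic(topic, missing))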
View File
@ -0,0 +1 @@
Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity.
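A hedged sketch of such a utility; `connect` stands in for the existing pubsub test helper:

import random

async def sparse_connect(hosts, degree: int = 3) -> None:
    # Connect each host to `degree` random peers instead of a full mesh,
    # yielding sparser topologies for large test networks.
    for host in hosts:
        others = [h for h in hosts if h is not host]
        for peer in random.sample(others, min(degree, len(others))):
            await connect(host, peer)   # assumed existing test helper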
View File
@ -0,0 +1,2 @@
Reordered the arguments to `upgrade_security` to place `is_initiator` before `peer_id`, and made `peer_id` optional.
This allows the method to reflect the fact that peer identity is not required for inbound connections.
View File
@ -0,0 +1 @@
Uses the `decapsulate` method of the `Multiaddr` class to clean up the observed address.
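An illustration of `decapsulate`, assuming py-multiaddr mirrors the Go semantics of stripping from the last matching component onward:

from multiaddr import Multiaddr

observed = Multiaddr("/ip4/203.0.113.7/tcp/4001")
cleaned = observed.decapsulate(Multiaddr("/tcp/4001"))
assert str(cleaned) == "/ip4/203.0.113.7"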
View File
@ -0,0 +1 @@
Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic.
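A sketch of the single-message form, assuming the standard libp2p pubsub `rpc.proto` with its repeated `topicIDs` field; `payload` is illustrative:

msg = rpc_pb2.Message(
    data=payload,
    topicIDs=["news", "metrics"],   # several topics, one message
)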
View File
@ -0,0 +1 @@
Optimized pubsub message writing by implementing a write_msg() method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub.
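A hedged sketch of the pattern: assemble the varint length prefix and payload into one buffer and issue a single write (names illustrative):

async def write_msg(stream, msg_bytes: bytes) -> None:
    # One pre-assembled buffer, one write call: no separate write
    # for the varint length prefix.
    buf = encode_uvarint(len(msg_bytes)) + msg_bytes
    await stream.write(buf)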
View File
@ -0,0 +1 @@
Added peer exchange and backoff logic as part of the Gossipsub v1.1 upgrade.
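A hedged sketch of PRUNE backoff bookkeeping in the spirit of Gossipsub v1.1 (names and the interval are illustrative):

import time

PRUNE_BACKOFF = 60.0   # seconds; illustrative value

def record_prune(backoff, topic, peer_id):
    # After a PRUNE, don't re-GRAFT this peer on this topic until the
    # backoff window has elapsed.
    backoff.setdefault(topic, {})[peer_id] = time.time() + PRUNE_BACKOFF

def may_graft(backoff, topic, peer_id) -> bool:
    return time.time() >= backoff.get(topic, {}).get(peer_id, 0.0)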
View File
@ -0,0 +1,4 @@
Add timeout wrappers in:
1. multiselect.py: `negotiate` function
2. multiselect_client.py: `select_one_of`, `query_multistream_command` functions
to prevent indefinite hangs when a remote peer does not respond (see the sketch below).
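A hedged sketch of such a wrapper with trio's `fail_after`; the timeout value is illustrative:

import trio

from libp2p.protocol_muxer.exceptions import MultiselectError

NEGOTIATE_TIMEOUT = 5   # seconds; illustrative

async def negotiate_with_timeout(negotiate, communicator):
    try:
        with trio.fail_after(NEGOTIATE_TIMEOUT):
            return await negotiate(communicator)
    except trio.TooSlowError:
        raise MultiselectError("protocol negotiation timed out") from None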
View File
@ -0,0 +1 @@
Align stream creation logic with the Yamux specification.
Some files were not shown because too many files have changed in this diff.