mirror of https://github.com/varun-r-mallya/py-libp2p.git
synced 2026-02-10 23:20:55 +00:00

Compare commits: 1 commit (d2fdf70692...concurrenc)

| Author | SHA1 | Date |
|---|---|---|
|  | 8b2268fcc9 |  |

Makefile (1 line changed)
@@ -60,7 +60,6 @@ PB = libp2p/crypto/pb/crypto.proto \
	libp2p/identity/identify/pb/identify.proto \
	libp2p/host/autonat/pb/autonat.proto \
	libp2p/relay/circuit_v2/pb/circuit.proto \
	libp2p/relay/circuit_v2/pb/dcutr.proto \
	libp2p/kad_dht/pb/kademlia.proto

PY = $(PB:.proto=_pb2.py)
README.md (48 changes)
@@ -34,19 +34,19 @@ ______________________________________________________________________
 | -------------------------------------- | :--------: | :---------------------------------------------------------------------------------: |
 | **`libp2p-tcp`** | ✅ | [source](https://github.com/libp2p/py-libp2p/blob/main/libp2p/transport/tcp/tcp.py) |
-| **`libp2p-quic`** | 🌱 | |
-| **`libp2p-websocket`** | 🌱 | |
-| **`libp2p-webrtc-browser-to-server`** | 🌱 | |
-| **`libp2p-webrtc-private-to-private`** | 🌱 | |
+| **`libp2p-websocket`** | ❌ | |
+| **`libp2p-webrtc-browser-to-server`** | ❌ | |
+| **`libp2p-webrtc-private-to-private`** | ❌ | |

 ______________________________________________________________________

 ### NAT Traversal

-| **NAT Traversal** | **Status** | **Source** |
-| ----------------------------- | :--------: | :-----------------------------------------------------------------------------: |
-| **`libp2p-circuit-relay-v2`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/relay/circuit_v2) |
-| **`libp2p-autonat`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/host/autonat) |
-| **`libp2p-hole-punching`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/relay/circuit_v2) |
+| **NAT Traversal** | **Status** |
+| ----------------------------- | :--------: |
+| **`libp2p-circuit-relay-v2`** | ❌ |
+| **`libp2p-autonat`** | ❌ |
+| **`libp2p-hole-punching`** | ❌ |

 ______________________________________________________________________
@@ -54,27 +54,27 @@ ______________________________________________________________________

 | **Secure Communication** | **Status** | **Source** |
 | ------------------------ | :--------: | :---------------------------------------------------------------------------: |
-| **`libp2p-noise`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/security/noise) |
-| **`libp2p-tls`** | 🌱 | |
+| **`libp2p-noise`** | 🌱 | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/security/noise) |
+| **`libp2p-tls`** | ❌ | |

 ______________________________________________________________________

 ### Discovery

-| **Discovery** | **Status** | **Source** |
-| -------------------- | :--------: | :--------------------------------------------------------------------------------: |
-| **`bootstrap`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/discovery/bootstrap) |
-| **`random-walk`** | 🌱 | |
-| **`mdns-discovery`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/discovery/mdns) |
-| **`rendezvous`** | 🌱 | |
+| **Discovery** | **Status** |
+| -------------------- | :--------: |
+| **`bootstrap`** | ❌ |
+| **`random-walk`** | ❌ |
+| **`mdns-discovery`** | ❌ |
+| **`rendezvous`** | ❌ |

 ______________________________________________________________________

 ### Peer Routing

-| **Peer Routing** | **Status** | **Source** |
-| -------------------- | :--------: | :--------------------------------------------------------------------: |
-| **`libp2p-kad-dht`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/kad_dht) |
+| **Peer Routing** | **Status** |
+| -------------------- | :--------: |
+| **`libp2p-kad-dht`** | ❌ |

 ______________________________________________________________________
@@ -89,10 +89,10 @@ ______________________________________________________________________

 ### Stream Muxers

-| **Stream Muxers** | **Status** | **Source** |
-| ------------------ | :--------: | :-------------------------------------------------------------------------------: |
-| **`libp2p-yamux`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/stream_muxer/yamux) |
-| **`libp2p-mplex`** | ✅ | [source](https://github.com/libp2p/py-libp2p/tree/main/libp2p/stream_muxer/mplex) |
+| **Stream Muxers** | **Status** | **Status** |
+| ------------------ | :--------: | :----------------------------------------------------------------------------------------: |
+| **`libp2p-yamux`** | 🌱 | |
+| **`libp2p-mplex`** | 🛠️ | [source](https://github.com/libp2p/py-libp2p/blob/main/libp2p/stream_muxer/mplex/mplex.py) |

 ______________________________________________________________________

@@ -100,7 +100,7 @@ ______________________________________________________________________

 | **Storage** | **Status** |
 | ------------------- | :--------: |
-| **`libp2p-record`** | 🌱 |
+| **`libp2p-record`** | ❌ |

 ______________________________________________________________________
@@ -1,64 +0,0 @@
mDNS Peer Discovery Example
===========================

This example demonstrates how to use mDNS (Multicast DNS) for peer discovery in py-libp2p.

Prerequisites
-------------

First, ensure you have py-libp2p installed and your environment is activated:

.. code-block:: console

    $ python -m pip install libp2p

Running the Example
-------------------

The mDNS demo script allows you to discover peers on your local network using mDNS. To start a peer, run:

.. code-block:: console

    $ mdns-demo

You should see output similar to:

.. code-block:: console

    Run this from another console to start another peer on a different port:

    python mdns-demo -p <ANOTHER_PORT>

    Waiting for mDNS peer discovery events...

    2025-06-20 23:28:12,052 - libp2p.example.discovery.mdns - INFO - Starting peer Discovery

To discover peers, open another terminal and run the same command with a different port:

.. code-block:: console

    $ python mdns-demo -p 9001

You should see output indicating that a new peer has been discovered:

.. code-block:: console

    Run this from the same folder in another console to start another peer on a different port:

    python mdns-demo -p <ANOTHER_PORT>

    Waiting for mDNS peer discovery events...

    2025-06-20 23:43:43,786 - libp2p.example.discovery.mdns - INFO - Starting peer Discovery
    2025-06-20 23:43:43,790 - libp2p.example.discovery.mdns - INFO - Discovered: 16Uiu2HAmGxy5NdQEjZWtrYUMrzdp3Syvg7MB2E5Lx8weA9DanYxj

When a new peer is discovered, its peer ID will be printed in the console output.

How it Works
------------

- Each node advertises itself on the local network using mDNS.
- When a new peer is discovered, the handler prints its peer ID.
- This is useful for local peer discovery without requiring a DHT or bootstrap nodes.

You can modify the script to perform additional actions when peers are discovered, such as opening streams or exchanging messages.
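The register-a-handler flow described above can be sketched in plain Python. This is not py-libp2p's implementation: the `PeerInfo` dataclass below is a hypothetical simplification of `libp2p.abc.PeerInfo`, and `PeerDiscoveryEmitter` only mirrors the `peerDiscovery.register_peer_discovered_handler` pattern used by the examples in this repository.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class PeerInfo:
    """Hypothetical stand-in for libp2p.abc.PeerInfo."""

    peer_id: str
    addrs: list = field(default_factory=list)


class PeerDiscoveryEmitter:
    """Minimal handler registry, modeled on the peerDiscovery event object."""

    def __init__(self) -> None:
        self._handlers: list[Callable[[PeerInfo], None]] = []

    def register_peer_discovered_handler(
        self, handler: Callable[[PeerInfo], None]
    ) -> None:
        self._handlers.append(handler)

    def emit_peer_discovered(self, peer_info: PeerInfo) -> None:
        # An mDNS listener would call this when a peer record shows up on the LAN.
        for handler in self._handlers:
            handler(peer_info)


emitter = PeerDiscoveryEmitter()
seen: list[str] = []
emitter.register_peer_discovered_handler(lambda info: seen.append(info.peer_id))
emitter.emit_peer_discovered(PeerInfo(peer_id="QmExamplePeer"))
print(seen)  # ['QmExamplePeer']
```

The real mDNS listener drives the emission side; user code only supplies handlers, which is why the example script stays so short.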
@@ -13,4 +13,3 @@ Examples

   examples.pubsub
   examples.circuit_relay
   examples.kademlia
-  examples.mDNS
@@ -1,13 +0,0 @@
libp2p.discovery.bootstrap package
==================================

Submodules
----------

Module contents
---------------

.. automodule:: libp2p.discovery.bootstrap
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,21 +0,0 @@
libp2p.discovery.events package
===============================

Submodules
----------

libp2p.discovery.events.peerDiscovery module
--------------------------------------------

.. automodule:: libp2p.discovery.events.peerDiscovery
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: libp2p.discovery.events
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,45 +0,0 @@
libp2p.discovery.mdns package
=============================

Submodules
----------

libp2p.discovery.mdns.broadcaster module
----------------------------------------

.. automodule:: libp2p.discovery.mdns.broadcaster
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.discovery.mdns.listener module
-------------------------------------

.. automodule:: libp2p.discovery.mdns.listener
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.discovery.mdns.mdns module
---------------------------------

.. automodule:: libp2p.discovery.mdns.mdns
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.discovery.mdns.utils module
----------------------------------

.. automodule:: libp2p.discovery.mdns.utils
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: libp2p.discovery.mdns
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,23 +0,0 @@
libp2p.discovery package
========================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   libp2p.discovery.bootstrap
   libp2p.discovery.events
   libp2p.discovery.mdns

Submodules
----------

Module contents
---------------

.. automodule:: libp2p.discovery
   :members:
   :undoc-members:
   :show-inheritance:
@@ -8,7 +8,6 @@ Subpackages
   :maxdepth: 4

   libp2p.crypto
-  libp2p.discovery
   libp2p.host
   libp2p.identity
   libp2p.io
@@ -3,65 +3,6 @@ Release Notes

.. towncrier release notes start

py-libp2p v0.2.9 (2025-07-09)
-----------------------------

Breaking Changes
~~~~~~~~~~~~~~~~

- Reordered the arguments to ``upgrade_security`` to place ``is_initiator`` before ``peer_id``, and made ``peer_id`` optional.
  This allows the method to reflect the fact that peer identity is not required for inbound connections. (`#681 <https://github.com/libp2p/py-libp2p/issues/681>`__)

Bugfixes
~~~~~~~~

- Add timeout wrappers in:

  1. ``multiselect.py``: ``negotiate`` function
  2. ``multiselect_client.py``: ``select_one_of``, ``query_multistream_command`` functions

  to prevent indefinite hangs when a remote peer does not respond. (`#696 <https://github.com/libp2p/py-libp2p/issues/696>`__)
- Align stream creation logic with the yamux specification (`#701 <https://github.com/libp2p/py-libp2p/issues/701>`__)
- Fixed an issue in ``Pubsub`` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior. (`#702 <https://github.com/libp2p/py-libp2p/issues/702>`__)
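The timeout-wrapper fix in #696 follows a standard pattern: race the negotiation coroutine against a deadline so a silent peer cannot stall the connection forever. py-libp2p itself is built on trio; the sketch below shows the same idea with stdlib asyncio, and `negotiate_with_timeout` and `NEGOTIATE_TIMEOUT` are illustrative names, not the library's API.

```python
import asyncio

NEGOTIATE_TIMEOUT = 5.0  # seconds; the library's actual constant may differ


async def negotiate_with_timeout(negotiate, timeout: float = NEGOTIATE_TIMEOUT):
    """Wrap a protocol-negotiation coroutine so it cannot hang indefinitely."""
    try:
        return await asyncio.wait_for(negotiate(), timeout)
    except asyncio.TimeoutError:
        # Surface a protocol-level error instead of hanging the caller.
        raise RuntimeError("multiselect negotiation timed out") from None


async def stalled_peer():
    # Simulates a remote peer that never answers the multistream request.
    await asyncio.sleep(3600)


try:
    asyncio.run(negotiate_with_timeout(stalled_peer, timeout=0.05))
except RuntimeError as e:
    print(e)  # multiselect negotiation timed out
```

In trio the equivalent tool would be a `trio.fail_after` cancel scope around the `negotiate` call.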

Features
~~~~~~~~

- Added support for ``Kademlia DHT`` in py-libp2p. (`#579 <https://github.com/libp2p/py-libp2p/issues/579>`__)
- Limit concurrency in ``push_identify_to_peers`` to prevent resource congestion under high peer counts. (`#621 <https://github.com/libp2p/py-libp2p/issues/621>`__)
- Store public key and peer ID in peerstore during handshake

  Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.

  Added test cases to verify:

  1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
  2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet (`#631 <https://github.com/libp2p/py-libp2p/issues/631>`__)
- Fixed several flow-control and concurrency issues in the ``YamuxStream`` class. Previously, stress-testing revealed that transferring data over ``DEFAULT_WINDOW_SIZE`` would break the stream due to inconsistent window update handling and lock management. The fixes include:

  - Removed sending of window updates during writes to maintain correct flow-control.
  - Added proper timeout handling when releasing and acquiring locks to prevent concurrency errors.
  - Corrected the ``read`` function to properly handle window updates for both ``read_until_EOF`` and ``read_n_bytes``.
  - Added event logging at ``send_window_updates`` and ``waiting_for_window_updates`` for better observability. (`#639 <https://github.com/libp2p/py-libp2p/issues/639>`__)
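The window accounting these yamux fixes concern can be pictured as a simple credit scheme: the receiver grants the sender a byte budget, decrements it as data arrives, and replenishes it with a window-update frame once the reader drains the buffer. The toy class below is not the ``YamuxStream`` code; its names are made up, though 256 KiB matches yamux's documented default window size.

```python
class ReceiveWindow:
    """Toy flow-control window illustrating the yamux credit scheme (illustrative only)."""

    DEFAULT_WINDOW_SIZE = 256 * 1024  # yamux spec default

    def __init__(self, size: int = DEFAULT_WINDOW_SIZE) -> None:
        self.capacity = size
        self.available = size

    def consume(self, n: int) -> None:
        """Account for n bytes received from the peer."""
        if n > self.available:
            raise ValueError("peer exceeded its send window")
        self.available -= n

    def delta_to_send(self) -> int:
        """Window-update credit to send once the reader has drained the buffer."""
        delta = self.capacity - self.available
        self.available = self.capacity
        return delta


w = ReceiveWindow(size=10)
w.consume(7)
print(w.available)        # 3
print(w.delta_to_send())  # 7
print(w.available)        # 10
```

The bug class described in #639 arises when updates are granted at the wrong moment (e.g. during writes) so sender and receiver disagree about `available`.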
- Added support for ``Multicast DNS`` in py-libp2p (`#649 <https://github.com/libp2p/py-libp2p/issues/649>`__)
- Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic. (`#685 <https://github.com/libp2p/py-libp2p/issues/685>`__)
- Optimized pubsub message writing by implementing a write_msg() method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub. (`#687 <https://github.com/libp2p/py-libp2p/issues/687>`__)
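The single-write framing in #687 boils down to: encode the varint length prefix, then copy prefix plus payload into one buffer so the transport sees a single write call instead of two. A stdlib sketch of that wire format (illustrative names; this is not the library's `write_msg()`):

```python
def encode_uvarint(n: int) -> bytes:
    """Unsigned LEB128 varint, the encoding libp2p uses for length prefixes."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)


def frame_message(payload: bytes) -> bytes:
    """Build prefix + payload in one pre-allocated buffer so one write() suffices."""
    prefix = encode_uvarint(len(payload))
    buf = bytearray(len(prefix) + len(payload))
    buf[: len(prefix)] = prefix
    buf[len(prefix):] = payload
    return bytes(buf)


print(frame_message(b"x" * 300)[:2].hex())  # ac02
```

Two syscalls become one, and the kernel never sees a lone prefix that could be flushed as its own tiny packet.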
- Added peer exchange and backoff logic as part of Gossipsub v1.1 upgrade (`#690 <https://github.com/libp2p/py-libp2p/issues/690>`__)

Internal Changes - for py-libp2p Contributors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity. (`#679 <https://github.com/libp2p/py-libp2p/issues/679>`__)
- Added comprehensive tests for pubsub connection utility functions to verify degree limits are enforced, excess peers are handled correctly, and edge cases (degree=0, negative values, empty lists) are managed gracefully. (`#707 <https://github.com/libp2p/py-libp2p/issues/707>`__)
- Added extra tests for identify push concurrency cap under high peer load (`#708 <https://github.com/libp2p/py-libp2p/issues/708>`__)

Miscellaneous Changes
~~~~~~~~~~~~~~~~~~~~~

- `#678 <https://github.com/libp2p/py-libp2p/issues/678>`__, `#684 <https://github.com/libp2p/py-libp2p/issues/684>`__

py-libp2p v0.2.8 (2025-06-10)
-----------------------------
@@ -1,136 +0,0 @@
import argparse
import logging
import secrets

import multiaddr
import trio

from libp2p import new_host
from libp2p.abc import PeerInfo
from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.discovery.events.peerDiscovery import peerDiscovery

# Configure logging
logger = logging.getLogger("libp2p.discovery.bootstrap")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logger.addHandler(handler)

# Configure root logger to only show warnings and above to reduce noise
# This prevents verbose DEBUG messages from multiaddr, DNS, etc.
logging.getLogger().setLevel(logging.WARNING)

# Specifically silence noisy libraries
logging.getLogger("multiaddr").setLevel(logging.WARNING)
logging.getLogger("root").setLevel(logging.WARNING)


def on_peer_discovery(peer_info: PeerInfo) -> None:
    """Handler for peer discovery events."""
    logger.info(f"🔍 Discovered peer: {peer_info.peer_id}")
    logger.debug(f"   Addresses: {[str(addr) for addr in peer_info.addrs]}")


# Example bootstrap peers
BOOTSTRAP_PEERS = [
    "/dnsaddr/github.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/cloudflare.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/google.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip4/128.199.219.111/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip4/104.236.76.40/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
    "/ip4/178.62.158.247/tcp/4001/p2p/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
    "/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
    "/ip6/2400:6180:0:d0::151:6001/tcp/4001/p2p/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
    "/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/p2p/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm",
]


async def run(port: int, bootstrap_addrs: list[str]) -> None:
    """Run the bootstrap discovery example."""
    # Generate key pair
    secret = secrets.token_bytes(32)
    key_pair = create_new_key_pair(secret)

    # Create listen address
    listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")

    # Register peer discovery handler
    peerDiscovery.register_peer_discovered_handler(on_peer_discovery)

    logger.info("🚀 Starting Bootstrap Discovery Example")
    logger.info(f"📍 Listening on: {listen_addr}")
    logger.info(f"🌐 Bootstrap peers: {len(bootstrap_addrs)}")

    print("\n" + "=" * 60)
    print("Bootstrap Discovery Example")
    print("=" * 60)
    print("This example demonstrates connecting to bootstrap peers.")
    print("Watch the logs for peer discovery events!")
    print("Press Ctrl+C to exit.")
    print("=" * 60)

    # Create and run host with bootstrap discovery
    host = new_host(key_pair=key_pair, bootstrap=bootstrap_addrs)

    try:
        async with host.run(listen_addrs=[listen_addr]):
            # Keep running and log peer discovery events
            await trio.sleep_forever()
    except KeyboardInterrupt:
        logger.info("👋 Shutting down...")


def main() -> None:
    """Main entry point."""
    description = """
    Bootstrap Discovery Example for py-libp2p

    This example demonstrates how to use bootstrap peers for peer discovery.
    Bootstrap peers are predefined peers that help new nodes join the network.

    Usage:
        python bootstrap.py -p 8000
        python bootstrap.py -p 8001 --custom-bootstrap \\
            "/ip4/127.0.0.1/tcp/8000/p2p/QmYourPeerID"
    """

    parser = argparse.ArgumentParser(
        description=description, formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument(
        "-p", "--port", default=0, type=int, help="Port to listen on (default: random)"
    )
    parser.add_argument(
        "--custom-bootstrap",
        nargs="*",
        help="Custom bootstrap addresses (space-separated)",
    )
    parser.add_argument(
        "-v", "--verbose", action="store_true", help="Enable verbose output"
    )

    args = parser.parse_args()

    if args.verbose:
        logger.setLevel(logging.DEBUG)

    # Use custom bootstrap addresses if provided, otherwise use defaults
    bootstrap_addrs = (
        args.custom_bootstrap if args.custom_bootstrap else BOOTSTRAP_PEERS
    )

    try:
        trio.run(run, args.port, bootstrap_addrs)
    except KeyboardInterrupt:
        logger.info("Exiting...")


if __name__ == "__main__":
    main()
@@ -43,9 +43,6 @@ async def run(port: int, destination: str) -> None:
     listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
     host = new_host()
-    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

         if not destination:  # its the server

             async def stream_handler(stream: INetStream) -> None:
@@ -45,10 +45,7 @@ async def run(port: int, destination: str, seed: int | None = None) -> None:
     secret = secrets.token_bytes(32)

     host = new_host(key_pair=create_new_key_pair(secret))
-    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
-        # Start the peer-store cleanup task
-        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
-
+    async with host.run(listen_addrs=[listen_addr]):
         print(f"I am {host.get_id().to_string()}")

         if not destination:  # its the server
@ -1,7 +1,6 @@
|
||||
import argparse
|
||||
import base64
|
||||
import logging
|
||||
import sys
|
||||
|
||||
import multiaddr
|
||||
import trio
|
||||
@ -9,13 +8,10 @@ import trio
|
||||
from libp2p import (
|
||||
new_host,
|
||||
)
|
||||
from libp2p.identity.identify.identify import (
|
||||
ID as IDENTIFY_PROTOCOL_ID,
|
||||
identify_handler_for,
|
||||
parse_identify_response,
|
||||
from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID
|
||||
from libp2p.identity.identify.pb.identify_pb2 import (
|
||||
Identify,
|
||||
)
|
||||
from libp2p.identity.identify.pb.identify_pb2 import Identify
|
||||
from libp2p.peer.envelope import debug_dump_envelope, unmarshal_envelope
|
||||
from libp2p.peer.peerinfo import (
|
||||
info_from_p2p_addr,
|
||||
)
|
||||
@ -34,11 +30,10 @@ def decode_multiaddrs(raw_addrs):
|
||||
return decoded_addrs
|
||||
|
||||
|
||||
def print_identify_response(identify_response: Identify):
|
||||
def print_identify_response(identify_response):
|
||||
"""Pretty-print Identify response."""
|
||||
public_key_b64 = base64.b64encode(identify_response.public_key).decode("utf-8")
|
||||
listen_addrs = decode_multiaddrs(identify_response.listen_addrs)
|
||||
signed_peer_record = unmarshal_envelope(identify_response.signedPeerRecord)
|
||||
try:
|
||||
observed_addr_decoded = decode_multiaddrs([identify_response.observed_addr])
|
||||
except Exception:
|
||||
@ -54,10 +49,8 @@ def print_identify_response(identify_response: Identify):
|
||||
f" Agent Version: {identify_response.agent_version}"
|
||||
)
|
||||
|
||||
debug_dump_envelope(signed_peer_record)
|
||||
|
||||
|
||||
async def run(port: int, destination: str, use_varint_format: bool = True) -> None:
|
||||
async def run(port: int, destination: str) -> None:
|
||||
localhost_ip = "0.0.0.0"
|
||||
|
||||
if not destination:
|
||||
@ -65,159 +58,39 @@ async def run(port: int, destination: str, use_varint_format: bool = True) -> No
|
||||
listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
|
||||
host_a = new_host()
|
||||
|
||||
# Set up identify handler with specified format
|
||||
# Set use_varint_format = False, if want to checkout the Signed-PeerRecord
|
||||
identify_handler = identify_handler_for(
|
||||
host_a, use_varint_format=use_varint_format
|
||||
)
|
||||
host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, identify_handler)
|
||||
|
||||
async with (
|
||||
host_a.run(listen_addrs=[listen_addr]),
|
||||
trio.open_nursery() as nursery,
|
||||
):
|
||||
# Start the peer-store cleanup task
|
||||
nursery.start_soon(host_a.get_peerstore().start_cleanup_task, 60)
|
||||
|
||||
# Get the actual address and replace 0.0.0.0 with 127.0.0.1 for client
|
||||
# connections
|
||||
server_addr = str(host_a.get_addrs()[0])
|
||||
client_addr = server_addr.replace("/ip4/0.0.0.0/", "/ip4/127.0.0.1/")
|
||||
|
||||
format_name = "length-prefixed" if use_varint_format else "raw protobuf"
|
||||
format_flag = "--raw-format" if not use_varint_format else ""
|
||||
async with host_a.run(listen_addrs=[listen_addr]):
|
||||
print(
|
||||
f"First host listening (using {format_name} format). "
|
||||
f"Run this from another console:\n\n"
|
||||
f"identify-demo {format_flag} -d {client_addr}\n"
|
||||
"First host listening. Run this from another console:\n\n"
|
||||
f"identify-demo "
|
||||
f"-d {host_a.get_addrs()[0]}\n"
|
||||
)
|
||||
print("Waiting for incoming identify request...")
|
||||
|
||||
# Add a custom handler to show connection events
|
||||
async def custom_identify_handler(stream):
|
||||
peer_id = stream.muxed_conn.peer_id
|
||||
print(f"\n🔗 Received identify request from peer: {peer_id}")
|
||||
|
||||
# Show remote address in multiaddr format
|
||||
try:
|
||||
from libp2p.identity.identify.identify import (
|
||||
_remote_address_to_multiaddr,
|
||||
)
|
||||
|
||||
remote_address = stream.get_remote_address()
|
||||
if remote_address:
|
||||
observed_multiaddr = _remote_address_to_multiaddr(
|
||||
remote_address
|
||||
)
|
||||
# Add the peer ID to create a complete multiaddr
|
||||
complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
|
||||
print(f" Remote address: {complete_multiaddr}")
|
||||
else:
|
||||
print(f" Remote address: {remote_address}")
|
||||
except Exception:
|
||||
print(f" Remote address: {stream.get_remote_address()}")
|
||||
|
||||
# Call the original handler
|
||||
await identify_handler(stream)
|
||||
|
||||
print(f"✅ Successfully processed identify request from {peer_id}")
|
||||
|
||||
# Replace the handler with our custom one
|
||||
host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, custom_identify_handler)
|
||||
|
||||
try:
|
||||
await trio.sleep_forever()
|
||||
except KeyboardInterrupt:
|
||||
print("\n🛑 Shutting down listener...")
|
||||
logger.info("Listener interrupted by user")
|
||||
return
|
||||
await trio.sleep_forever()
|
||||
|
||||
else:
|
||||
# Create second host (dialer)
|
||||
listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
|
||||
host_b = new_host()
|
||||
|
||||
async with (
|
||||
host_b.run(listen_addrs=[listen_addr]),
|
||||
trio.open_nursery() as nursery,
|
||||
):
|
||||
# Start the peer-store cleanup task
|
||||
nursery.start_soon(host_b.get_peerstore().start_cleanup_task, 60)
|
||||
|
||||
async with host_b.run(listen_addrs=[listen_addr]):
|
||||
# Connect to the first host
|
||||
print(f"dialer (host_b) listening on {host_b.get_addrs()[0]}")
|
||||
maddr = multiaddr.Multiaddr(destination)
|
||||
info = info_from_p2p_addr(maddr)
|
||||
print(f"Second host connecting to peer: {info.peer_id}")
|
||||
|
||||
try:
|
||||
await host_b.connect(info)
|
||||
except Exception as e:
|
||||
error_msg = str(e)
|
||||
if "unable to connect" in error_msg or "SwarmException" in error_msg:
|
||||
print(f"\n❌ Cannot connect to peer: {info.peer_id}")
|
||||
print(f" Address: {destination}")
|
||||
print(f" Error: {error_msg}")
|
||||
print(
|
||||
"\n💡 Make sure the peer is running and the address is correct."
|
||||
)
|
||||
return
|
||||
else:
|
||||
# Re-raise other exceptions
|
||||
raise
|
||||
|
||||
await host_b.connect(info)
|
||||
stream = await host_b.new_stream(info.peer_id, (IDENTIFY_PROTOCOL_ID,))
|
||||
|
||||
try:
|
||||
print("Starting identify protocol...")
|
||||
|
||||
# Read the response using the utility function
|
||||
from libp2p.utils.varint import read_length_prefixed_protobuf
|
||||
|
||||
response = await read_length_prefixed_protobuf(
|
||||
stream, use_varint_format
|
||||
)
|
||||
full_response = response
|
||||
|
||||
response = await stream.read()
|
||||
await stream.close()
|
||||
|
||||
# Parse the response using the robust protocol-level function
|
||||
# This handles both old and new formats automatically
|
||||
identify_msg = parse_identify_response(full_response)
|
||||
identify_msg = Identify()
|
||||
identify_msg.ParseFromString(response)
|
||||
print_identify_response(identify_msg)
|
||||
|
||||
except Exception as e:
|
||||
error_msg = str(e)
|
||||
print(f"Identify protocol error: {error_msg}")
|
||||
|
||||
# Check for specific format mismatch errors
|
||||
if "Error parsing message" in error_msg or "DecodeError" in error_msg:
|
||||
print("\n" + "=" * 60)
|
||||
print("FORMAT MISMATCH DETECTED!")
|
||||
print("=" * 60)
|
||||
if use_varint_format:
|
||||
print(
|
||||
"You are using length-prefixed format (default) but the "
|
||||
"listener"
|
||||
)
|
||||
print("is using raw protobuf format.")
|
||||
print(
|
||||
"\nTo fix this, run the dialer with the --raw-format flag:"
|
||||
)
|
||||
print(f"identify-demo --raw-format -d {destination}")
|
||||
else:
|
||||
print("You are using raw protobuf format but the listener")
|
||||
print("is using length-prefixed format (default).")
|
||||
print(
|
||||
"\nTo fix this, run the dialer without the --raw-format "
|
||||
"flag:"
|
||||
)
|
||||
print(f"identify-demo -d {destination}")
|
||||
print("=" * 60)
|
||||
else:
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
print(f"Identify protocol error: {e}")
|
||||
|
||||
return
|
||||
|
||||
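On the read side, the dialer above calls `libp2p.utils.varint.read_length_prefixed_protobuf`. The wire logic that helper implements — a varint length followed by exactly that many payload bytes — looks roughly like this stdlib sketch (the function names here are illustrative, not the library's):

```python
import io


def read_uvarint(stream: io.BufferedIOBase) -> int:
    """Decode an unsigned LEB128 varint from a byte stream."""
    shift = 0
    result = 0
    while True:
        b = stream.read(1)
        if not b:
            raise EOFError("stream ended mid-varint")
        byte = b[0]
        result |= (byte & 0x7F) << shift
        if not (byte & 0x80):  # high bit clear: last varint byte
            return result
        shift += 7


def read_length_prefixed(stream: io.BufferedIOBase) -> bytes:
    """Read one length-prefixed message: varint length, then that many bytes."""
    length = read_uvarint(stream)
    payload = stream.read(length)
    if len(payload) != length:
        raise EOFError("stream ended mid-message")
    return payload


# 0xac 0x02 is the varint encoding of 300
buf = io.BytesIO(b"\xac\x02" + b"A" * 300)
msg = read_length_prefixed(buf)
print(len(msg))  # 300
```

This also shows why the two formats in this example are incompatible: a raw-protobuf listener interprets the length prefix as the first bytes of the message, producing the `DecodeError` the removed error-handling code looked for.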
@ -225,12 +98,9 @@ async def run(port: int, destination: str, use_varint_format: bool = True) -> No
|
||||
def main() -> None:
|
||||
description = """
|
||||
This program demonstrates the libp2p identify protocol.
|
||||
First run 'identify-demo -p <PORT> [--raw-format]' to start a listener.
|
||||
First run identify-demo -p <PORT>' to start a listener.
|
||||
Then run 'identify-demo <ANOTHER_PORT> -d <DESTINATION>'
|
||||
where <DESTINATION> is the multiaddress shown by the listener.
|
||||
|
||||
Use --raw-format to send raw protobuf messages (old format) instead of
|
||||
length-prefixed protobuf messages (new format, default).
|
||||
"""
|
||||
|
||||
example_maddr = (
|
||||
@ -245,35 +115,12 @@ def main() -> None:
|
||||
type=str,
|
||||
help=f"destination multiaddr string, e.g. {example_maddr}",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--raw-format",
|
||||
action="store_true",
|
||||
help=(
|
||||
"use raw protobuf format (old format) instead of "
|
||||
"length-prefixed (new format)"
|
||||
),
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Determine format: use varint (length-prefixed) if --raw-format is specified,
|
||||
# otherwise use raw protobuf format (old format)
|
||||
use_varint_format = args.raw_format
|
||||
|
||||
try:
|
||||
if args.destination:
|
||||
# Run in dialer mode
|
||||
trio.run(run, *(args.port, args.destination, use_varint_format))
|
||||
else:
|
||||
# Run in listener mode
|
||||
trio.run(run, *(args.port, args.destination, use_varint_format))
|
||||
trio.run(run, *(args.port, args.destination))
|
||||
except KeyboardInterrupt:
|
||||
print("\n👋 Goodbye!")
|
||||
logger.info("Application interrupted by user")
|
||||
except Exception as e:
|
||||
print(f"\n❌ Error: {str(e)}")
|
||||
logger.error("Error: %s", str(e))
|
||||
sys.exit(1)
|
||||
pass
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
@@ -11,26 +11,23 @@ This example shows how to:

import logging

import multiaddr
import trio

from libp2p import (
new_host,
)
from libp2p.abc import (
INetStream,
)
from libp2p.crypto.secp256k1 import (
create_new_key_pair,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.identity.identify.pb.identify_pb2 import (
Identify,
from libp2p.identity.identify import (
identify_handler_for,
)
from libp2p.identity.identify_push import (
ID_PUSH,
identify_push_handler_for,
push_identify_to_peer,
)
from libp2p.peer.peerinfo import (
@@ -41,145 +38,8 @@ from libp2p.peer.peerinfo import (
logger = logging.getLogger(__name__)


def create_custom_identify_handler(host, host_name: str):
"""Create a custom identify handler that displays received information."""

async def handle_identify(stream: INetStream) -> None:
peer_id = stream.muxed_conn.peer_id
print(f"\n🔍 {host_name} received identify request from peer: {peer_id}")

# Get the standard identify response using the existing function
from libp2p.identity.identify.identify import (
_mk_identify_protobuf,
_remote_address_to_multiaddr,
)

# Get observed address
observed_multiaddr = None
try:
remote_address = stream.get_remote_address()
if remote_address:
observed_multiaddr = _remote_address_to_multiaddr(remote_address)
except Exception:
pass

# Build the identify protobuf
identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
response_data = identify_msg.SerializeToString()

print(f" 📋 {host_name} identify information:")
if identify_msg.HasField("protocol_version"):
print(f" Protocol Version: {identify_msg.protocol_version}")
if identify_msg.HasField("agent_version"):
print(f" Agent Version: {identify_msg.agent_version}")
if identify_msg.HasField("public_key"):
print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
if identify_msg.listen_addrs:
print(" Listen Addresses:")
for addr_bytes in identify_msg.listen_addrs:
addr = multiaddr.Multiaddr(addr_bytes)
print(f" - {addr}")
if identify_msg.protocols:
print(" Supported Protocols:")
for protocol in identify_msg.protocols:
print(f" - {protocol}")

# Send the response
await stream.write(response_data)
await stream.close()

return handle_identify


def create_custom_identify_push_handler(host, host_name: str):
"""Create a custom identify/push handler that displays received information."""

async def handle_identify_push(stream: INetStream) -> None:
peer_id = stream.muxed_conn.peer_id
print(f"\n📤 {host_name} received identify/push from peer: {peer_id}")

try:
# Read the identify message using the utility function
from libp2p.utils.varint import read_length_prefixed_protobuf

data = await read_length_prefixed_protobuf(stream, use_varint_format=True)

# Parse the identify message
identify_msg = Identify()
identify_msg.ParseFromString(data)

print(" 📋 Received identify information:")
if identify_msg.HasField("protocol_version"):
print(f" Protocol Version: {identify_msg.protocol_version}")
if identify_msg.HasField("agent_version"):
print(f" Agent Version: {identify_msg.agent_version}")
if identify_msg.HasField("public_key"):
print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
observed_addr = multiaddr.Multiaddr(identify_msg.observed_addr)
print(f" Observed Address: {observed_addr}")
if identify_msg.listen_addrs:
print(" Listen Addresses:")
for addr_bytes in identify_msg.listen_addrs:
addr = multiaddr.Multiaddr(addr_bytes)
print(f" - {addr}")
if identify_msg.protocols:
print(" Supported Protocols:")
for protocol in identify_msg.protocols:
print(f" - {protocol}")

# Update the peerstore with the new information
from libp2p.identity.identify_push.identify_push import (
_update_peerstore_from_identify,
)

await _update_peerstore_from_identify(
host.get_peerstore(), peer_id, identify_msg
)

print(f" ✅ {host_name} updated peerstore with new information")

except Exception as e:
print(f" ❌ Error processing identify/push: {e}")
finally:
await stream.close()

return handle_identify_push


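The push handler above delegates wire reading to `read_length_prefixed_protobuf`. As a rough illustration of what length-prefixed reading involves, here is a synchronous sketch over an in-memory stream (the real helper in `libp2p.utils.varint` is async and stream-based; the function names below are illustrative assumptions, not the library's API):

```python
import io


def read_uvarint(stream: io.BufferedIOBase) -> int:
    # Read one unsigned LEB128 varint, byte by byte, from a binary stream.
    result = shift = 0
    while True:
        b = stream.read(1)
        if not b:
            raise EOFError("stream ended mid-varint")
        byte = b[0]
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result
        shift += 7


def read_length_prefixed(stream: io.BufferedIOBase) -> bytes:
    # First the varint length, then exactly that many payload bytes.
    length = read_uvarint(stream)
    data = stream.read(length)
    if len(data) != length:
        raise EOFError("stream ended mid-message")
    return data


# Two framed messages back to back on one stream:
buf = io.BytesIO(b"\x03abc\x02hi")
assert read_length_prefixed(buf) == b"abc"
assert read_length_prefixed(buf) == b"hi"
```

The returned bytes would then be fed to `Identify.ParseFromString`, exactly as the handler above does with the data it reads.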
async def display_peerstore_info(host, host_name: str, peer_id, description: str):
"""Display peerstore information for a specific peer."""
peerstore = host.get_peerstore()

try:
addrs = peerstore.addrs(peer_id)
except Exception:
addrs = []

try:
protocols = peerstore.get_protocols(peer_id)
except Exception:
protocols = []

print(f"\n📚 {host_name} peerstore for {description}:")
print(f" Peer ID: {peer_id}")
if addrs:
print(" Addresses:")
for addr in addrs:
print(f" - {addr}")
else:
print(" Addresses: None")

if protocols:
print(" Protocols:")
for protocol in protocols:
print(f" - {protocol}")
else:
print(" Protocols: None")


async def main() -> None:
print("\n==== Starting Enhanced Identify-Push Example ====\n")
print("\n==== Starting Identify-Push Example ====\n")

# Create key pairs for the two hosts
key_pair_1 = create_new_key_pair()
@@ -188,57 +48,45 @@ async def main() -> None:
# Create the first host
host_1 = new_host(key_pair=key_pair_1)

# Set up custom identify and identify/push handlers
host_1.set_stream_handler(
TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_1, "Host 1")
)
host_1.set_stream_handler(
ID_PUSH, create_custom_identify_push_handler(host_1, "Host 1")
)
# Set up the identify and identify/push handlers
host_1.set_stream_handler(TProtocol("/ipfs/id/1.0.0"), identify_handler_for(host_1))
host_1.set_stream_handler(ID_PUSH, identify_push_handler_for(host_1))

# Create the second host
host_2 = new_host(key_pair=key_pair_2)

# Set up custom identify and identify/push handlers
host_2.set_stream_handler(
TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_2, "Host 2")
)
host_2.set_stream_handler(
ID_PUSH, create_custom_identify_push_handler(host_2, "Host 2")
)
# Set up the identify and identify/push handlers
host_2.set_stream_handler(TProtocol("/ipfs/id/1.0.0"), identify_handler_for(host_2))
host_2.set_stream_handler(ID_PUSH, identify_push_handler_for(host_2))

# Start listening on random ports using the run context manager
import multiaddr

listen_addr_1 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
listen_addr_2 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")

async with (
host_1.run([listen_addr_1]),
host_2.run([listen_addr_2]),
trio.open_nursery() as nursery,
):
# Start the peer-store cleanup task
nursery.start_soon(host_1.get_peerstore().start_cleanup_task, 60)
nursery.start_soon(host_2.get_peerstore().start_cleanup_task, 60)

async with host_1.run([listen_addr_1]), host_2.run([listen_addr_2]):
# Get the addresses of both hosts
addr_1 = host_1.get_addrs()[0]
logger.info(f"Host 1 listening on {addr_1}")
print(f"Host 1 listening on {addr_1}")
print(f"Peer ID: {host_1.get_id().pretty()}")

addr_2 = host_2.get_addrs()[0]
logger.info(f"Host 2 listening on {addr_2}")
print(f"Host 2 listening on {addr_2}")
print(f"Peer ID: {host_2.get_id().pretty()}")

print("🏠 Host Configuration:")
print(f" Host 1: {addr_1}")
print(f" Host 1 Peer ID: {host_1.get_id().pretty()}")
print(f" Host 2: {addr_2}")
print(f" Host 2 Peer ID: {host_2.get_id().pretty()}")

print("\n🔗 Connecting Host 2 to Host 1...")
print("\nConnecting Host 2 to Host 1...")

# Connect host_2 to host_1
peer_info = info_from_p2p_addr(addr_1)
await host_2.connect(peer_info)
print("✅ Host 2 successfully connected to Host 1")
logger.info("Host 2 connected to Host 1")
print("Host 2 successfully connected to Host 1")

# Run the identify protocol from host_2 to host_1
print("\n🔄 Running identify protocol (Host 2 → Host 1)...")
# (so Host 1 learns Host 2's address)
from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID

stream = await host_2.new_stream(host_1.get_id(), (IDENTIFY_PROTOCOL_ID,))
@@ -246,58 +94,64 @@ async def main() -> None:
await stream.close()

# Run the identify protocol from host_1 to host_2
print("\n🔄 Running identify protocol (Host 1 → Host 2)...")
# (so Host 2 learns Host 1's address)
stream = await host_1.new_stream(host_2.get_id(), (IDENTIFY_PROTOCOL_ID,))
response = await stream.read()
await stream.close()

# Update Host 1's peerstore with Host 2's addresses
# --- NEW CODE: Update Host 1's peerstore with Host 2's addresses ---
from libp2p.identity.identify.pb.identify_pb2 import (
Identify,
)

identify_msg = Identify()
identify_msg.ParseFromString(response)
peerstore_1 = host_1.get_peerstore()
peer_id_2 = host_2.get_id()
for addr_bytes in identify_msg.listen_addrs:
maddr = multiaddr.Multiaddr(addr_bytes)
peerstore_1.add_addr(peer_id_2, maddr, ttl=3600)
# TTL can be any positive int
peerstore_1.add_addr(
peer_id_2,
maddr,
ttl=3600,
)
# --- END NEW CODE ---

# Display peerstore information before push
await display_peerstore_info(
host_1, "Host 1", peer_id_2, "Host 2 (before push)"
# Now Host 1's peerstore should have Host 2's address
peerstore_1 = host_1.get_peerstore()
peer_id_2 = host_2.get_id()
addrs_1_for_2 = peerstore_1.addrs(peer_id_2)
logger.info(
f"[DEBUG] Host 1 peerstore addresses for Host 2 before push: "
f"{addrs_1_for_2}"
)
print(
f"[DEBUG] Host 1 peerstore addresses for Host 2 before push: "
f"{addrs_1_for_2}"
)

# Push identify information from host_1 to host_2
print("\n📤 Host 1 pushing identify information to Host 2...")
logger.info("Host 1 pushing identify information to Host 2")
print("\nHost 1 pushing identify information to Host 2...")

try:
# Call push_identify_to_peer which now returns a boolean
success = await push_identify_to_peer(host_1, host_2.get_id())

if success:
print("✅ Identify push completed successfully!")
logger.info("Identify push completed successfully")
print("Identify push completed successfully!")
else:
print("⚠️ Identify push didn't complete successfully")
logger.warning("Identify push didn't complete successfully")
print("\nWarning: Identify push didn't complete successfully")

except Exception as e:
print(f"❌ Error during identify push: {str(e)}")
logger.error(f"Error during identify push: {str(e)}")
print(f"\nError during identify push: {str(e)}")

# Give a moment for the identify/push processing to complete
await trio.sleep(0.5)

# Display peerstore information after push
await display_peerstore_info(host_1, "Host 1", peer_id_2, "Host 2 (after push)")
await display_peerstore_info(
host_2, "Host 2", host_1.get_id(), "Host 1 (after push)"
)

# Give more time for background tasks to finish and connections to stabilize
print("\n⏳ Waiting for background tasks to complete...")
await trio.sleep(1.0)

# Gracefully close connections to prevent connection errors
print("🔌 Closing connections...")
await host_2.disconnect(host_1.get_id())
await trio.sleep(0.2)

print("\n🎉 Example completed successfully!")
# Add this at the end of your async with block:
await trio.sleep(0.5)  # Give background tasks time to finish


if __name__ == "__main__":

@@ -41,9 +41,6 @@ from libp2p.identity.identify import (
ID as ID_IDENTIFY,
identify_handler_for,
)
from libp2p.identity.identify.identify import (
_remote_address_to_multiaddr,
)
from libp2p.identity.identify.pb.identify_pb2 import (
Identify,
)
@@ -60,46 +57,18 @@ from libp2p.peer.peerinfo import (
logger = logging.getLogger("libp2p.identity.identify-push-example")


def custom_identify_push_handler_for(host, use_varint_format: bool = True):
def custom_identify_push_handler_for(host):
"""
Create a custom handler for the identify/push protocol that logs and prints
the identity information received from the dialer.

Args:
host: The libp2p host
use_varint_format: If True, expect length-prefixed format; if False, expect
raw protobuf

"""

async def handle_identify_push(stream: INetStream) -> None:
peer_id = stream.muxed_conn.peer_id

# Get remote address information
try:
remote_address = stream.get_remote_address()
if remote_address:
observed_multiaddr = _remote_address_to_multiaddr(remote_address)
logger.info(
"Connection from remote peer %s, address: %s, multiaddr: %s",
peer_id,
remote_address,
observed_multiaddr,
)
print(f"\n🔗 Received identify/push request from peer: {peer_id}")
# Add the peer ID to create a complete multiaddr
complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
print(f" Remote address: {complete_multiaddr}")
except Exception as e:
logger.error("Error getting remote address: %s", e)
print(f"\n🔗 Received identify/push request from peer: {peer_id}")

try:
# Use the utility function to read the protobuf message
from libp2p.utils.varint import read_length_prefixed_protobuf

data = await read_length_prefixed_protobuf(stream, use_varint_format)

# Read the identify message from the stream
data = await stream.read()
identify_msg = Identify()
identify_msg.ParseFromString(data)

@@ -148,41 +117,11 @@ def custom_identify_push_handler_for(host, use_varint_format: bool = True):
await _update_peerstore_from_identify(peerstore, peer_id, identify_msg)

logger.info("Successfully processed identify/push from peer %s", peer_id)
print(f"✅ Successfully processed identify/push from peer {peer_id}")
print(f"\nSuccessfully processed identify/push from peer {peer_id}")

except Exception as e:
error_msg = str(e)
logger.error(
"Error processing identify/push from %s: %s", peer_id, error_msg
)
print(f"\nError processing identify/push from {peer_id}: {error_msg}")

# Check for specific format mismatch errors
if (
"Error parsing message" in error_msg
or "DecodeError" in error_msg
or "ParseFromString" in error_msg
):
print("\n" + "=" * 60)
print("FORMAT MISMATCH DETECTED!")
print("=" * 60)
if use_varint_format:
print(
"You are using length-prefixed format (default) but the "
"dialer is using raw protobuf format."
)
print("\nTo fix this, run the dialer with the --raw-format flag:")
print(
"identify-push-listener-dialer-demo --raw-format -d <ADDRESS>"
)
else:
print("You are using raw protobuf format but the dialer")
print("is using length-prefixed format (default).")
print(
"\nTo fix this, run the dialer without the --raw-format flag:"
)
print("identify-push-listener-dialer-demo -d <ADDRESS>")
print("=" * 60)
logger.error("Error processing identify/push from %s: %s", peer_id, e)
print(f"\nError processing identify/push from {peer_id}: {e}")
finally:
# Close the stream after processing
await stream.close()
@@ -190,15 +129,9 @@ def custom_identify_push_handler_for(host, use_varint_format: bool = True):
return handle_identify_push


async def run_listener(
port: int, use_varint_format: bool = True, raw_format_flag: bool = False
) -> None:
async def run_listener(port: int) -> None:
"""Run a host in listener mode."""
format_name = "length-prefixed" if use_varint_format else "raw protobuf"
print(
f"\n==== Starting Identify-Push Listener on port {port} "
f"(using {format_name} format) ====\n"
)
print(f"\n==== Starting Identify-Push Listener on port {port} ====\n")

# Create key pair for the listener
key_pair = create_new_key_pair()
@@ -206,58 +139,35 @@ async def run_listener(
# Create the listener host
host = new_host(key_pair=key_pair)

# Set up the identify and identify/push handlers with specified format
host.set_stream_handler(
ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
)
host.set_stream_handler(
ID_IDENTIFY_PUSH,
custom_identify_push_handler_for(host, use_varint_format=use_varint_format),
)
# Set up the identify and identify/push handlers
host.set_stream_handler(ID_IDENTIFY, identify_handler_for(host))
host.set_stream_handler(ID_IDENTIFY_PUSH, custom_identify_push_handler_for(host))

# Start listening
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")

try:
async with host.run([listen_addr]):
addr = host.get_addrs()[0]
logger.info("Listener host ready!")
print("Listener host ready!")
async with host.run([listen_addr]):
addr = host.get_addrs()[0]
logger.info("Listener host ready!")
print("Listener host ready!")

logger.info(f"Listening on: {addr}")
print(f"Listening on: {addr}")
logger.info(f"Listening on: {addr}")
print(f"Listening on: {addr}")

logger.info(f"Peer ID: {host.get_id().pretty()}")
print(f"Peer ID: {host.get_id().pretty()}")
logger.info(f"Peer ID: {host.get_id().pretty()}")
print(f"Peer ID: {host.get_id().pretty()}")

print("\nRun dialer with command:")
if raw_format_flag:
print(f"identify-push-listener-dialer-demo -d {addr} --raw-format")
else:
print(f"identify-push-listener-dialer-demo -d {addr}")
print("\nWaiting for incoming identify/push requests... (Ctrl+C to exit)")
print("\nRun dialer with command:")
print(f"identify-push-listener-dialer-demo -d {addr}")
print("\nWaiting for incoming connections... (Ctrl+C to exit)")

# Keep running until interrupted
try:
await trio.sleep_forever()
except KeyboardInterrupt:
print("\n🛑 Shutting down listener...")
logger.info("Listener interrupted by user")
return
except Exception as e:
logger.error(f"Listener error: {e}")
raise
# Keep running until interrupted
await trio.sleep_forever()


async def run_dialer(
port: int, destination: str, use_varint_format: bool = True
) -> None:
async def run_dialer(port: int, destination: str) -> None:
"""Run a host in dialer mode that connects to a listener."""
format_name = "length-prefixed" if use_varint_format else "raw protobuf"
print(
f"\n==== Starting Identify-Push Dialer on port {port} "
f"(using {format_name} format) ====\n"
)
print(f"\n==== Starting Identify-Push Dialer on port {port} ====\n")

# Create key pair for the dialer
key_pair = create_new_key_pair()
@@ -265,14 +175,9 @@ async def run_dialer(
# Create the dialer host
host = new_host(key_pair=key_pair)

# Set up the identify and identify/push handlers with specified format
host.set_stream_handler(
ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
)
host.set_stream_handler(
ID_IDENTIFY_PUSH,
identify_push_handler_for(host, use_varint_format=use_varint_format),
)
# Set up the identify and identify/push handlers
host.set_stream_handler(ID_IDENTIFY, identify_handler_for(host))
host.set_stream_handler(ID_IDENTIFY_PUSH, identify_push_handler_for(host))

# Start listening on a different port
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
@@ -293,9 +198,7 @@ async def run_dialer(
try:
await host.connect(peer_info)
logger.info("Successfully connected to listener!")
print("✅ Successfully connected to listener!")
print(f" Connected to: {peer_info.peer_id}")
print(f" Full address: {destination}")
print("Successfully connected to listener!")

# Push identify information to the listener
logger.info("Pushing identify information to listener...")
@@ -303,13 +206,11 @@ async def run_dialer(

try:
# Call push_identify_to_peer which returns a boolean
success = await push_identify_to_peer(
host, peer_info.peer_id, use_varint_format=use_varint_format
)
success = await push_identify_to_peer(host, peer_info.peer_id)

if success:
logger.info("Identify push completed successfully!")
print("✅ Identify push completed successfully!")
print("Identify push completed successfully!")

logger.info("Example completed successfully!")
print("\nExample completed successfully!")
@@ -320,57 +221,17 @@ async def run_dialer(
logger.warning("Example completed with warnings.")
print("Example completed with warnings.")
except Exception as e:
error_msg = str(e)
logger.error(f"Error during identify push: {error_msg}")
print(f"\nError during identify push: {error_msg}")

# Check for specific format mismatch errors
if (
"Error parsing message" in error_msg
or "DecodeError" in error_msg
or "ParseFromString" in error_msg
):
print("\n" + "=" * 60)
print("FORMAT MISMATCH DETECTED!")
print("=" * 60)
if use_varint_format:
print(
"You are using length-prefixed format (default) but the "
"listener is using raw protobuf format."
)
print(
"\nTo fix this, run the dialer with the --raw-format flag:"
)
print(
f"identify-push-listener-dialer-demo --raw-format -d "
f"{destination}"
)
else:
print("You are using raw protobuf format but the listener")
print("is using length-prefixed format (default).")
print(
"\nTo fix this, run the dialer without the --raw-format "
"flag:"
)
print(f"identify-push-listener-dialer-demo -d {destination}")
print("=" * 60)
logger.error(f"Error during identify push: {str(e)}")
print(f"\nError during identify push: {str(e)}")

logger.error("Example completed with errors.")
print("Example completed with errors.")
# Continue execution despite the push error

except Exception as e:
error_msg = str(e)
if "unable to connect" in error_msg or "SwarmException" in error_msg:
print(f"\n❌ Cannot connect to peer: {peer_info.peer_id}")
print(f" Address: {destination}")
print(f" Error: {error_msg}")
print("\n💡 Make sure the peer is running and the address is correct.")
return
else:
logger.error(f"Error during dialer operation: {error_msg}")
print(f"\nError during dialer operation: {error_msg}")
raise
logger.error(f"Error during dialer operation: {str(e)}")
print(f"\nError during dialer operation: {str(e)}")
raise


def main() -> None:
@@ -379,55 +240,34 @@ def main() -> None:
This program demonstrates the libp2p identify/push protocol.
Without arguments, it runs as a listener on random port.
With -d parameter, it runs as a dialer on random port.

Port 0 (default) means the OS will automatically assign an available port.
This prevents port conflicts when running multiple instances.

Use --raw-format to send raw protobuf messages (old format) instead of
length-prefixed protobuf messages (new format, default).
"""

parser = argparse.ArgumentParser(description=description)
parser.add_argument(
"-p",
"--port",
default=0,
type=int,
help="source port number (0 = random available port)",
example = (
"/ip4/127.0.0.1/tcp/8000/p2p/QmQn4SwGkDZKkUEpBRBvTmheQycxAHJUNmVEnjA2v1qe8Q"
)

parser = argparse.ArgumentParser(description=description)
parser.add_argument("-p", "--port", default=0, type=int, help="source port number")
parser.add_argument(
"-d",
"--destination",
type=str,
help="destination multiaddr string",
help=f"destination multiaddr string, e.g. {example}",
)
parser.add_argument(
"--raw-format",
action="store_true",
help=(
"use raw protobuf format (old format) instead of "
"length-prefixed (new format)"
),
)

args = parser.parse_args()

# Determine format: raw format if --raw-format is specified, otherwise
# length-prefixed
use_varint_format = not args.raw_format

try:
if args.destination:
# Run in dialer mode with random available port if not specified
trio.run(run_dialer, args.port, args.destination, use_varint_format)
trio.run(run_dialer, args.port, args.destination)
else:
# Run in listener mode with random available port if not specified
trio.run(run_listener, args.port, use_varint_format, args.raw_format)
trio.run(run_listener, args.port)
except KeyboardInterrupt:
print("\n👋 Goodbye!")
logger.info("Application interrupted by user")
print("\nInterrupted by user")
logger.info("Interrupted by user")
except Exception as e:
print(f"\n❌ Error: {str(e)}")
print(f"\nError: {str(e)}")
logger.error("Error: %s", str(e))
sys.exit(1)


@@ -151,10 +151,7 @@ async def run_node(
host = new_host(key_pair=key_pair)
listen_addr = Multiaddr(f"/ip4/127.0.0.1/tcp/{port}")

async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
# Start the peer-store cleanup task
nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

async with host.run(listen_addrs=[listen_addr]):
peer_id = host.get_id().pretty()
addr_str = f"/ip4/127.0.0.1/tcp/{port}/p2p/{peer_id}"
await connect_to_bootstrap_nodes(host, bootstrap_nodes)

@@ -1,77 +0,0 @@
import argparse
import logging
import secrets

import multiaddr
import trio

from libp2p import (
new_host,
)
from libp2p.abc import PeerInfo
from libp2p.crypto.secp256k1 import (
create_new_key_pair,
)
from libp2p.discovery.events.peerDiscovery import peerDiscovery

logger = logging.getLogger("libp2p.discovery.mdns")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logger.addHandler(handler)

# Set root logger to DEBUG to capture all logs from dependencies
logging.getLogger().setLevel(logging.DEBUG)


def onPeerDiscovery(peerinfo: PeerInfo):
logger.info(f"Discovered: {peerinfo.peer_id}")


async def run(port: int) -> None:
secret = secrets.token_bytes(32)
key_pair = create_new_key_pair(secret)
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")

peerDiscovery.register_peer_discovered_handler(onPeerDiscovery)

print(
"Run this from the same folder in another console to "
"start another peer on a different port:\n\n"
"mdns-demo -p <ANOTHER_PORT>\n"
)
print("Waiting for mDNS peer discovery events...\n")

logger.info("Starting peer Discovery")
host = new_host(key_pair=key_pair, enable_mDNS=True)
async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
# Start the peer-store cleanup task
nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

await trio.sleep_forever()


def main() -> None:
description = """
This program demonstrates mDNS peer discovery using libp2p.
To use it, run 'mdns-demo -p <PORT>', where <PORT> is the port number.
Start multiple peers on different ports to see discovery in action.
"""
parser = argparse.ArgumentParser(description=description)
parser.add_argument("-p", "--port", default=0, type=int, help="source port number")
parser.add_argument(
"-v", "--verbose", action="store_true", help="Enable verbose output"
)
args = parser.parse_args()
if args.verbose:
logger.setLevel(logging.DEBUG)
try:
trio.run(run, args.port)
except KeyboardInterrupt:
logger.info("Exiting...")


if __name__ == "__main__":
main()
@@ -59,9 +59,6 @@ async def run(port: int, destination: str) -> None:
host = new_host(listen_addrs=[listen_addr])

async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
# Start the peer-store cleanup task
nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

if not destination:
host.set_stream_handler(PING_PROTOCOL_ID, handle_ping)


@@ -144,9 +144,6 @@ async def run(topic: str, destination: str | None, port: int | None) -> None:
pubsub = Pubsub(host, gossipsub)
termination_event = trio.Event()  # Event to signal termination
async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
# Start the peer-store cleanup task
nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)

logger.info(f"Node started with peer ID: {host.get_id()}")
logger.info(f"Listening on: {listen_addr}")
logger.info("Initializing PubSub and GossipSub...")

@@ -32,9 +32,6 @@ from libp2p.custom_types import (
    TProtocol,
    TSecurityOptions,
)
from libp2p.discovery.mdns.mdns import (
    MDNSDiscovery,
)
from libp2p.host.basic_host import (
    BasicHost,
)
@@ -84,8 +81,6 @@ DEFAULT_MUXER = "YAMUX"
# Multiplexer options
MUXER_YAMUX = "YAMUX"
MUXER_MPLEX = "MPLEX"
DEFAULT_NEGOTIATE_TIMEOUT = 5


def set_default_muxer(muxer_name: Literal["YAMUX", "MPLEX"]) -> None:
@@ -250,9 +245,6 @@ def new_host(
    disc_opt: IPeerRouting | None = None,
    muxer_preference: Literal["YAMUX", "MPLEX"] | None = None,
    listen_addrs: Sequence[multiaddr.Multiaddr] | None = None,
    enable_mDNS: bool = False,
    bootstrap: list[str] | None = None,
    negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
) -> IHost:
    """
    Create a new libp2p host based on the given parameters.
@@ -264,8 +256,6 @@ def new_host(
    :param disc_opt: optional discovery
    :param muxer_preference: optional explicit muxer preference
    :param listen_addrs: optional list of multiaddrs to listen on
    :param enable_mDNS: whether to enable mDNS discovery
    :param bootstrap: optional list of bootstrap peer addresses as strings
    :return: return a host instance
    """
    swarm = new_swarm(
@@ -278,7 +268,8 @@ def new_host(
    )

    if disc_opt is not None:
        return RoutedHost(swarm, disc_opt, enable_mDNS, bootstrap)
    return BasicHost(network=swarm, enable_mDNS=enable_mDNS, bootstrap=bootstrap, negotitate_timeout=negotiate_timeout)
        return RoutedHost(swarm, disc_opt)
    return BasicHost(swarm)


__version__ = __version("libp2p")
libp2p/abc.py (998 changed lines): file diff suppressed because it is too large
@@ -1,5 +0,0 @@
"""Bootstrap peer discovery module for py-libp2p."""

from .bootstrap import BootstrapDiscovery

__all__ = ["BootstrapDiscovery"]
@@ -1,94 +0,0 @@
import logging

from multiaddr import Multiaddr
from multiaddr.resolvers import DNSResolver

from libp2p.abc import ID, INetworkService, PeerInfo
from libp2p.discovery.bootstrap.utils import validate_bootstrap_addresses
from libp2p.discovery.events.peerDiscovery import peerDiscovery
from libp2p.peer.peerinfo import info_from_p2p_addr

logger = logging.getLogger("libp2p.discovery.bootstrap")
resolver = DNSResolver()


class BootstrapDiscovery:
    """
    Bootstrap-based peer discovery for py-libp2p.
    Connects to predefined bootstrap peers and adds them to peerstore.
    """

    def __init__(self, swarm: INetworkService, bootstrap_addrs: list[str]):
        self.swarm = swarm
        self.peerstore = swarm.peerstore
        self.bootstrap_addrs = bootstrap_addrs or []
        self.discovered_peers: set[str] = set()

    async def start(self) -> None:
        """Process bootstrap addresses and emit peer discovery events."""
        logger.debug(
            f"Starting bootstrap discovery with "
            f"{len(self.bootstrap_addrs)} bootstrap addresses"
        )

        # Validate and filter bootstrap addresses
        self.bootstrap_addrs = validate_bootstrap_addresses(self.bootstrap_addrs)

        for addr_str in self.bootstrap_addrs:
            try:
                await self._process_bootstrap_addr(addr_str)
            except Exception as e:
                logger.debug(f"Failed to process bootstrap address {addr_str}: {e}")

    def stop(self) -> None:
        """Clean up bootstrap discovery resources."""
        logger.debug("Stopping bootstrap discovery")
        self.discovered_peers.clear()

    async def _process_bootstrap_addr(self, addr_str: str) -> None:
        """Convert string address to PeerInfo and add to peerstore."""
        try:
            multiaddr = Multiaddr(addr_str)
        except Exception as e:
            logger.debug(f"Invalid multiaddr format '{addr_str}': {e}")
            return
        if self.is_dns_addr(multiaddr):
            resolved_addrs = await resolver.resolve(multiaddr)
            peer_id_str = multiaddr.get_peer_id()
            if peer_id_str is None:
                logger.warning(f"Missing peer ID in DNS address: {addr_str}")
                return
            peer_id = ID.from_base58(peer_id_str)
            addrs = [addr for addr in resolved_addrs]
            if not addrs:
                logger.warning(f"No addresses resolved for DNS address: {addr_str}")
                return
            peer_info = PeerInfo(peer_id, addrs)
            self.add_addr(peer_info)
        else:
            self.add_addr(info_from_p2p_addr(multiaddr))

    def is_dns_addr(self, addr: Multiaddr) -> bool:
        """Check if the address is a DNS address."""
        return any(protocol.name == "dnsaddr" for protocol in addr.protocols())

    def add_addr(self, peer_info: PeerInfo) -> None:
        """Add a peer to the peerstore and emit discovery event."""
        # Skip if it's our own peer
        if peer_info.peer_id == self.swarm.get_peer_id():
            logger.debug(f"Skipping own peer ID: {peer_info.peer_id}")
            return

        # Always add addresses to peerstore (allows multiple addresses for same peer)
        self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)

        # Only emit discovery event if this is the first time we see this peer
        peer_id_str = str(peer_info.peer_id)
        if peer_id_str not in self.discovered_peers:
            # Track discovered peer
            self.discovered_peers.add(peer_id_str)
            # Emit peer discovery event
            peerDiscovery.emit_peer_discovered(peer_info)
            logger.debug(f"Peer discovered: {peer_info.peer_id}")
        else:
            logger.debug(f"Additional addresses added for peer: {peer_info.peer_id}")
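The emit-once bookkeeping in `add_addr` above (always record the address, but fire the discovery event only the first time a peer ID is seen) can be sketched standalone. This is a hypothetical `OnceOnlyDiscovery` class with plain strings in place of libp2p's `PeerInfo`/peerstore types, not the library's API:

```python
from dataclasses import dataclass, field


@dataclass
class OnceOnlyDiscovery:
    """Track addresses per peer; report a peer as 'discovered' only once."""

    addrs: dict[str, list[str]] = field(default_factory=dict)
    discovered: set[str] = field(default_factory=set)
    events: list[str] = field(default_factory=list)

    def add_addr(self, peer_id: str, addr: str) -> None:
        # Always record the address (a peer may advertise several).
        self.addrs.setdefault(peer_id, []).append(addr)
        # Emit the discovery event only on first sighting.
        if peer_id not in self.discovered:
            self.discovered.add(peer_id)
            self.events.append(peer_id)


d = OnceOnlyDiscovery()
d.add_addr("QmA", "/ip4/1.2.3.4/tcp/4001")
d.add_addr("QmA", "/ip4/5.6.7.8/tcp/4001")   # second address, no second event
d.add_addr("QmB", "/ip4/9.9.9.9/tcp/4001")
print(d.events)              # ['QmA', 'QmB']
print(len(d.addrs["QmA"]))   # 2
```

The same split (persist every address, deduplicate events by peer ID) is what keeps downstream handlers from reacting twice to one peer.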
@@ -1,51 +0,0 @@
"""Utility functions for bootstrap discovery."""

import logging

from multiaddr import Multiaddr

from libp2p.peer.peerinfo import InvalidAddrError, PeerInfo, info_from_p2p_addr

logger = logging.getLogger("libp2p.discovery.bootstrap.utils")


def validate_bootstrap_addresses(addrs: list[str]) -> list[str]:
    """
    Validate and filter bootstrap addresses.

    :param addrs: List of bootstrap address strings
    :return: List of valid bootstrap addresses
    """
    valid_addrs = []

    for addr_str in addrs:
        try:
            # Try to parse as multiaddr
            multiaddr = Multiaddr(addr_str)

            # Try to extract peer info (this validates the p2p component)
            info_from_p2p_addr(multiaddr)

            valid_addrs.append(addr_str)
            logger.debug(f"Valid bootstrap address: {addr_str}")

        except (InvalidAddrError, ValueError, Exception) as e:
            logger.warning(f"Invalid bootstrap address '{addr_str}': {e}")
            continue

    return valid_addrs


def parse_bootstrap_peer_info(addr_str: str) -> PeerInfo | None:
    """
    Parse bootstrap address string into PeerInfo.

    :param addr_str: Bootstrap address string
    :return: PeerInfo object or None if parsing fails
    """
    try:
        multiaddr = Multiaddr(addr_str)
        return info_from_p2p_addr(multiaddr)
    except Exception as e:
        logger.error(f"Failed to parse bootstrap address '{addr_str}': {e}")
        return None
@@ -1,26 +0,0 @@
from collections.abc import (
    Callable,
)

from libp2p.abc import (
    PeerInfo,
)

TTL: int = 60 * 60  # Time-to-live for discovered peers in seconds


class PeerDiscovery:
    def __init__(self) -> None:
        self._peer_discovered_handlers: list[Callable[[PeerInfo], None]] = []

    def register_peer_discovered_handler(
        self, handler: Callable[[PeerInfo], None]
    ) -> None:
        self._peer_discovered_handlers.append(handler)

    def emit_peer_discovered(self, peer_info: PeerInfo) -> None:
        for handler in self._peer_discovered_handlers:
            handler(peer_info)


peerDiscovery = PeerDiscovery()
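The `PeerDiscovery` singleton above is a plain synchronous observer: handlers register once, and `emit_peer_discovered` fans each event out to every handler in registration order. A minimal standalone sketch of the same pattern, with a hypothetical `Emitter` class and a string payload standing in for `PeerInfo`:

```python
from collections.abc import Callable


class Emitter:
    """Synchronous fan-out: every registered handler sees every event, in order."""

    def __init__(self) -> None:
        self._handlers: list[Callable[[str], None]] = []

    def register(self, handler: Callable[[str], None]) -> None:
        self._handlers.append(handler)

    def emit(self, payload: str) -> None:
        for handler in self._handlers:
            handler(payload)


seen: list[str] = []
emitter = Emitter()
emitter.register(seen.append)                        # first subscriber
emitter.register(lambda p: seen.append(p.upper()))   # second subscriber
emitter.emit("peer-1")
print(seen)  # ['peer-1', 'PEER-1']
```

Because emission is synchronous, a slow or raising handler blocks the emitter; that trade-off is acceptable here since both mDNS and bootstrap discovery call it from their own tasks.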
@@ -1,91 +0,0 @@
import logging
import socket

from zeroconf import (
    EventLoopBlocked,
    ServiceInfo,
    Zeroconf,
)

logger = logging.getLogger("libp2p.discovery.mdns.broadcaster")


class PeerBroadcaster:
    """
    Broadcasts this peer's presence on the local network using mDNS/zeroconf.
    Registers a service with the peer's ID in the TXT record as per libp2p spec.
    """

    def __init__(
        self,
        zeroconf: Zeroconf,
        service_type: str,
        service_name: str,
        peer_id: str,
        port: int,
    ):
        self.zeroconf = zeroconf
        self.service_type = service_type
        self.peer_id = peer_id
        self.port = port
        self.service_name = service_name

        # Get the local IP address
        local_ip = self._get_local_ip()
        hostname = socket.gethostname()

        self.service_info = ServiceInfo(
            type_=self.service_type,
            name=self.service_name,
            port=self.port,
            properties={b"id": self.peer_id.encode()},
            server=f"{hostname}.local.",
            addresses=[socket.inet_aton(local_ip)],
        )

    def _get_local_ip(self) -> str:
        """Get the local IP address of this machine"""
        try:
            # Connect to a remote address to determine the local IP
            # This doesn't actually send data
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.connect(("8.8.8.8", 80))
                local_ip = s.getsockname()[0]
                return local_ip
        except Exception:
            # Fallback to localhost if we can't determine the IP
            return "127.0.0.1"

    def register(self) -> None:
        """Register the peer's mDNS service on the network."""
        try:
            self.zeroconf.register_service(self.service_info)
            logger.debug(f"mDNS service registered: {self.service_name}")
        except EventLoopBlocked as e:
            logger.warning(
                "EventLoopBlocked while registering mDNS '%s': %s", self.service_name, e
            )
        except Exception as e:
            logger.error(
                "Unexpected error during mDNS registration for '%s': %r",
                self.service_name,
                e,
            )

    def unregister(self) -> None:
        """Unregister the peer's mDNS service from the network."""
        try:
            self.zeroconf.unregister_service(self.service_info)
            logger.debug(f"mDNS service unregistered: {self.service_name}")
        except EventLoopBlocked as e:
            logger.warning(
                "EventLoopBlocked while unregistering mDNS '%s': %s",
                self.service_name,
                e,
            )
        except Exception as e:
            logger.error(
                "Unexpected error during mDNS unregistration for '%s': %r",
                self.service_name,
                e,
            )
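`_get_local_ip` above relies on a common trick: calling `connect()` on a UDP socket sends no packet, but it forces the OS to select an outbound interface, whose address `getsockname()` then reveals. A standalone sketch of the same idea with the same loopback fallback (the `8.8.8.8` target is arbitrary; any routable address works since nothing is transmitted):

```python
import socket


def get_local_ip() -> str:
    """Best-effort outbound IPv4 address of this machine; falls back to loopback."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            # UDP connect() transmits nothing; it only binds a route and source IP.
            s.connect(("8.8.8.8", 80))
            return s.getsockname()[0]
    except Exception:
        # No route available (e.g. offline): advertise loopback instead.
        return "127.0.0.1"


ip = get_local_ip()
print(ip)
```

This avoids `socket.gethostbyname(socket.gethostname())`, which on many systems resolves to `127.0.0.1` via `/etc/hosts` rather than the LAN address that mDNS peers need.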
@@ -1,83 +0,0 @@
import logging
import socket

from zeroconf import (
    ServiceBrowser,
    ServiceInfo,
    ServiceListener,
    Zeroconf,
)

from libp2p.abc import IPeerStore, Multiaddr
from libp2p.discovery.events.peerDiscovery import peerDiscovery
from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo

logger = logging.getLogger("libp2p.discovery.mdns.listener")


class PeerListener(ServiceListener):
    """mDNS listener — now a true ServiceListener subclass."""

    def __init__(
        self,
        peerstore: IPeerStore,
        zeroconf: Zeroconf,
        service_type: str,
        service_name: str,
    ) -> None:
        self.peerstore = peerstore
        self.zeroconf = zeroconf
        self.service_type = service_type
        self.service_name = service_name
        self.discovered_services: dict[str, ID] = {}
        self.browser = ServiceBrowser(self.zeroconf, self.service_type, listener=self)

    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        if name == self.service_name:
            return
        logger.debug(f"Adding service: {name}")
        info = zc.get_service_info(type_, name, timeout=5000)
        if not info:
            return
        peer_info = self._extract_peer_info(info)
        if peer_info:
            self.discovered_services[name] = peer_info.peer_id
            self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)
            peerDiscovery.emit_peer_discovered(peer_info)
            logger.debug(f"Discovered Peer: {peer_info.peer_id}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        if name == self.service_name:
            return
        logger.debug(f"Removing service: {name}")
        peer_id = self.discovered_services.pop(name)
        self.peerstore.clear_addrs(peer_id)
        logger.debug(f"Removed Peer: {peer_id}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name, timeout=5000)
        if not info:
            return
        peer_info = self._extract_peer_info(info)
        if peer_info:
            self.peerstore.clear_addrs(peer_info.peer_id)
            self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)
            logger.debug(f"Updated Peer {peer_info.peer_id}")

    def _extract_peer_info(self, info: ServiceInfo) -> PeerInfo | None:
        try:
            addrs = [
                Multiaddr(f"/ip4/{socket.inet_ntoa(addr)}/tcp/{info.port}")
                for addr in info.addresses
            ]
            pid_bytes = info.properties.get(b"id")
            if not pid_bytes:
                return None
            pid = ID.from_base58(pid_bytes.decode())
            return PeerInfo(peer_id=pid, addrs=addrs)
        except Exception:
            return None

    def stop(self) -> None:
        self.browser.cancel()
@@ -1,73 +0,0 @@
"""
mDNS-based peer discovery for py-libp2p.
Conforms to https://github.com/libp2p/specs/blob/master/discovery/mdns.md
Uses zeroconf for mDNS broadcast/listen. Async operations use trio.
"""

import logging

from zeroconf import (
    Zeroconf,
)

from libp2p.abc import (
    INetworkService,
)

from .broadcaster import (
    PeerBroadcaster,
)
from .listener import (
    PeerListener,
)
from .utils import (
    stringGen,
)

logger = logging.getLogger("libp2p.discovery.mdns")

SERVICE_TYPE = "_p2p._udp.local."
MCAST_PORT = 5353
MCAST_ADDR = "224.0.0.251"


class MDNSDiscovery:
    """
    mDNS-based peer discovery for py-libp2p, using zeroconf.
    Conforms to the libp2p mDNS discovery spec.
    """

    def __init__(self, swarm: INetworkService, port: int = 8000):
        self.peer_id = str(swarm.get_peer_id())
        self.port = port
        self.zeroconf = Zeroconf()
        self.serviceName = f"{stringGen()}.{SERVICE_TYPE}"
        self.peerstore = swarm.peerstore
        self.swarm = swarm
        self.broadcaster = PeerBroadcaster(
            zeroconf=self.zeroconf,
            service_type=SERVICE_TYPE,
            service_name=self.serviceName,
            peer_id=self.peer_id,
            port=self.port,
        )
        self.listener = PeerListener(
            zeroconf=self.zeroconf,
            peerstore=self.peerstore,
            service_type=SERVICE_TYPE,
            service_name=self.serviceName,
        )

    def start(self) -> None:
        """Register this peer and start listening for others."""
        logger.debug(
            f"Starting mDNS discovery for peer {self.peer_id} on port {self.port}"
        )
        self.broadcaster.register()
        # Listener is started in constructor

    def stop(self) -> None:
        """Unregister this peer and clean up zeroconf resources."""
        logger.debug("Stopping mDNS discovery")
        self.broadcaster.unregister()
        self.zeroconf.close()
@@ -1,11 +0,0 @@
import random
import string


def stringGen(len: int = 63) -> str:
    """Generate a random string of lowercase letters and digits."""
    charset = string.ascii_lowercase + string.digits
    result = []
    for _ in range(len):
        result.append(random.choice(charset))
    return "".join(result)
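`stringGen` exists because each mDNS service instance needs a unique name on the link, and `MDNSDiscovery` composes `serviceName` as `f"{stringGen()}.{SERVICE_TYPE}"`; the 63-character default matches the maximum length of a single DNS label. A standalone restatement (note the original shadows the `len` builtin; this sketch uses `length` instead, and `string_gen` is a local name, not the library's):

```python
import random
import string


def string_gen(length: int = 63) -> str:
    """Random service-instance label of lowercase letters and digits."""
    charset = string.ascii_lowercase + string.digits
    return "".join(random.choice(charset) for _ in range(length))


name = string_gen()
print(len(name))  # 63
# Composed the way MDNSDiscovery builds its serviceName:
service_name = f"{name}._p2p._udp.local."
print(service_name.endswith("._p2p._udp.local."))  # True
```

For cryptographic uniqueness guarantees `secrets.choice` would be preferable to `random.choice`, but collision resistance at this label length is ample for link-local discovery.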
@@ -29,8 +29,6 @@ from libp2p.custom_types import (
    StreamHandlerFn,
    TProtocol,
)
from libp2p.discovery.bootstrap.bootstrap import BootstrapDiscovery
from libp2p.discovery.mdns.mdns import MDNSDiscovery
from libp2p.host.defaults import (
    get_default_protocols,
)
@@ -72,7 +70,6 @@ if TYPE_CHECKING:


logger = logging.getLogger("libp2p.network.basic_host")
DEFAULT_NEGOTIATE_TIMEOUT = 5


class BasicHost(IHost):
@@ -92,23 +89,15 @@ class BasicHost(IHost):
    def __init__(
        self,
        network: INetworkService,
        enable_mDNS: bool = False,
        bootstrap: list[str] | None = None,
        default_protocols: Optional["OrderedDict[TProtocol, StreamHandlerFn]"] = None,
        negotitate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
    ) -> None:
        self._network = network
        self._network.set_stream_handler(self._swarm_stream_handler)
        self.peerstore = self._network.peerstore
        self.negotiate_timeout = negotitate_timeout
        # Protocol muxing
        default_protocols = default_protocols or get_default_protocols(self)
        self.multiselect = Multiselect(dict(default_protocols.items()))
        self.multiselect_client = MultiselectClient()
        if enable_mDNS:
            self.mDNS = MDNSDiscovery(network)
        if bootstrap:
            self.bootstrap = BootstrapDiscovery(network, bootstrap)

    def get_id(self) -> ID:
        """
@@ -173,19 +162,7 @@ class BasicHost(IHost):
        network = self.get_network()
        async with background_trio_service(network):
            await network.listen(*listen_addrs)
            if hasattr(self, "mDNS") and self.mDNS is not None:
                logger.debug("Starting mDNS Discovery")
                self.mDNS.start()
            if hasattr(self, "bootstrap") and self.bootstrap is not None:
                logger.debug("Starting Bootstrap Discovery")
                await self.bootstrap.start()
            try:
                yield
            finally:
                if hasattr(self, "mDNS") and self.mDNS is not None:
                    self.mDNS.stop()
                if hasattr(self, "bootstrap") and self.bootstrap is not None:
                    self.bootstrap.stop()
            yield

        return _run()

@@ -201,10 +178,7 @@ class BasicHost(IHost):
        self.multiselect.add_handler(protocol_id, stream_handler)

    async def new_stream(
        self,
        peer_id: ID,
        protocol_ids: Sequence[TProtocol],
        negotitate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
        self, peer_id: ID, protocol_ids: Sequence[TProtocol]
    ) -> INetStream:
        """
        :param peer_id: peer_id that host is connecting
@@ -216,9 +190,7 @@ class BasicHost(IHost):
        # Perform protocol muxing to determine protocol to use
        try:
            selected_protocol = await self.multiselect_client.select_one_of(
                list(protocol_ids),
                MultiselectCommunicator(net_stream),
                negotitate_timeout,
                list(protocol_ids), MultiselectCommunicator(net_stream)
            )
        except MultiselectClientError as error:
            logger.debug("fail to open a stream to peer %s, error=%s", peer_id, error)
@@ -228,12 +200,7 @@ class BasicHost(IHost):
        net_stream.set_protocol(selected_protocol)
        return net_stream

    async def send_command(
        self,
        peer_id: ID,
        command: str,
        response_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
    ) -> list[str]:
    async def send_command(self, peer_id: ID, command: str) -> list[str]:
        """
        Send a multistream-select command to the specified peer and return
        the response.
@@ -247,7 +214,7 @@ class BasicHost(IHost):

        try:
            response = await self.multiselect_client.query_multistream_command(
                MultiselectCommunicator(new_stream), command, response_timeout
                MultiselectCommunicator(new_stream), command
            )
        except MultiselectClientError as error:
            logger.debug("fail to open a stream to peer %s, error=%s", peer_id, error)
@@ -286,7 +253,7 @@ class BasicHost(IHost):
        # Perform protocol muxing to determine protocol to use
        try:
            protocol, handler = await self.multiselect.negotiate(
                MultiselectCommunicator(net_stream), self.negotiate_timeout
                MultiselectCommunicator(net_stream)
            )
        except MultiselectError as error:
            peer_id = net_stream.muxed_conn.peer_id
@@ -26,8 +26,5 @@ if TYPE_CHECKING:

def get_default_protocols(host: IHost) -> "OrderedDict[TProtocol, StreamHandlerFn]":
    return OrderedDict(
        (
            (IdentifyID, identify_handler_for(host, use_varint_format=True)),
            (PingID, handle_ping),
        )
        ((IdentifyID, identify_handler_for(host)), (PingID, handle_ping))
    )
@@ -18,14 +18,8 @@ from libp2p.peer.peerinfo import (
class RoutedHost(BasicHost):
    _router: IPeerRouting

    def __init__(
        self,
        network: INetworkService,
        router: IPeerRouting,
        enable_mDNS: bool = False,
        bootstrap: list[str] | None = None,
    ):
        super().__init__(network, enable_mDNS, bootstrap)
    def __init__(self, network: INetworkService, router: IPeerRouting):
        super().__init__(network)
        self._router = router

    async def connect(self, peer_info: PeerInfo) -> None:
@@ -15,12 +15,8 @@ from libp2p.custom_types import (
from libp2p.network.stream.exceptions import (
    StreamClosed,
)
from libp2p.peer.envelope import seal_record
from libp2p.peer.peer_record import PeerRecord
from libp2p.utils import (
    decode_varint_with_size,
    get_agent_version,
    varint,
)

from .pb.identify_pb2 import (
@@ -63,12 +59,7 @@ def _mk_identify_protobuf(
) -> Identify:
    public_key = host.get_public_key()
    laddrs = host.get_addrs()
    protocols = tuple(str(p) for p in host.get_mux().get_protocols() if p is not None)

    # Create a signed peer-record for the remote peer
    record = PeerRecord(host.get_id(), host.get_addrs())
    envelope = seal_record(record, host.get_private_key())
    protobuf = envelope.marshal_envelope()
    protocols = host.get_mux().get_protocols()

    observed_addr = observed_multiaddr.to_bytes() if observed_multiaddr else b""
    return Identify(
@@ -78,51 +69,10 @@ def _mk_identify_protobuf(
        listen_addrs=map(_multiaddr_to_bytes, laddrs),
        observed_addr=observed_addr,
        protocols=protocols,
        signedPeerRecord=protobuf,
    )


def parse_identify_response(response: bytes) -> Identify:
    """
    Parse identify response that could be either:
    - Old format: raw protobuf
    - New format: length-prefixed protobuf

    This function provides backward and forward compatibility.
    """
    # Try new format first: length-prefixed protobuf
    if len(response) >= 1:
        length, varint_size = decode_varint_with_size(response)
        if varint_size > 0 and length > 0 and varint_size + length <= len(response):
            protobuf_data = response[varint_size : varint_size + length]
            try:
                identify_response = Identify()
                identify_response.ParseFromString(protobuf_data)
                # Sanity check: must have agent_version (protocol_version is optional)
                if identify_response.agent_version:
                    logger.debug(
                        "Parsed length-prefixed identify response (new format)"
                    )
                    return identify_response
            except Exception:
                pass  # Fall through to old format

    # Fall back to old format: raw protobuf
    try:
        identify_response = Identify()
        identify_response.ParseFromString(response)
        logger.debug("Parsed raw protobuf identify response (old format)")
        return identify_response
    except Exception as e:
        logger.error(f"Failed to parse identify response: {e}")
        logger.error(f"Response length: {len(response)}")
        logger.error(f"Response hex: {response.hex()}")
        raise
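`parse_identify_response` above first tries to read an unsigned-varint length prefix and only falls back to raw protobuf when that fails. The varint framing itself (7 payload bits per byte, high bit set on every byte except the last) can be sketched standalone; `encode_uvarint` and `decode_uvarint_with_size` here are local stand-ins mirroring the `varint.encode_uvarint` and `decode_varint_with_size` helpers used in the diff, not imports from the library:

```python
def encode_uvarint(value: int) -> bytes:
    """Unsigned varint: 7 payload bits per byte, MSB marks continuation."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)          # final byte
            return bytes(out)


def decode_uvarint_with_size(buf: bytes) -> tuple[int, int]:
    """Return (value, number of prefix bytes consumed); (0, 0) if truncated."""
    value = 0
    for i, byte in enumerate(buf):
        value |= (byte & 0x7F) << (7 * i)
        if not byte & 0x80:
            return value, i + 1
    return 0, 0


payload = b"\x0a\x04test"                 # stand-in for a serialized protobuf
framed = encode_uvarint(len(payload)) + payload
length, size = decode_uvarint_with_size(framed)
print(length, size)                        # 6 1
print(framed[size : size + length] == payload)  # True
```

The fallback in the parser works because a valid raw protobuf message usually fails the `varint_size + length <= len(response)` bound or the `agent_version` sanity check, so old-format peers are still understood.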
def identify_handler_for(
    host: IHost, use_varint_format: bool = True
) -> StreamHandlerFn:
def identify_handler_for(host: IHost) -> StreamHandlerFn:
    async def handle_identify(stream: INetStream) -> None:
        # get observed address from ``stream``
        peer_id = (
@@ -150,21 +100,7 @@ def identify_handler_for(
        response = protobuf.SerializeToString()

        try:
            if use_varint_format:
                # Send length-prefixed protobuf message (new format)
                await stream.write(varint.encode_uvarint(len(response)))
                await stream.write(response)
                logger.debug(
                    "Sent new format (length-prefixed) identify response to %s",
                    peer_id,
                )
            else:
                # Send raw protobuf message (old format for backward compatibility)
                await stream.write(response)
                logger.debug(
                    "Sent old format (raw protobuf) identify response to %s",
                    peer_id,
                )
            await stream.write(response)
        except StreamClosed:
            logger.debug("Fail to respond to %s request: stream closed", ID)
        else:
@@ -9,5 +9,4 @@ message Identify {
  repeated bytes listen_addrs = 2;
  optional bytes observed_addr = 4;
  repeated string protocols = 3;
  optional bytes signedPeerRecord = 8;
}
@@ -1,12 +1,11 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/identity/identify/pb/identify.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()
@@ -14,13 +13,13 @@ _sym_db = _symbol_database.Default()



DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\xa9\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t\x12\x18\n\x10signedPeerRecord\x18\x08 \x01(\x0c')
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\x8f\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.identity.identify.pb.identify_pb2', _globals)
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.identity.identify.pb.identify_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:

  DESCRIPTOR._options = None
  _globals['_IDENTIFY']._serialized_start=60
  _globals['_IDENTIFY']._serialized_end=229
  _IDENTIFY._serialized_start=60
  _IDENTIFY._serialized_end=203
# @@protoc_insertion_point(module_scope)
@@ -1,24 +1,46 @@
from google.protobuf.internal import containers as _containers
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Iterable as _Iterable, Optional as _Optional
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file
"""

DESCRIPTOR: _descriptor.FileDescriptor
import builtins
import collections.abc
import google.protobuf.descriptor
import google.protobuf.internal.containers
import google.protobuf.message
import typing

class Identify(_message.Message):
    __slots__ = ("protocol_version", "agent_version", "public_key", "listen_addrs", "observed_addr", "protocols", "signedPeerRecord")
    PROTOCOL_VERSION_FIELD_NUMBER: _ClassVar[int]
    AGENT_VERSION_FIELD_NUMBER: _ClassVar[int]
    PUBLIC_KEY_FIELD_NUMBER: _ClassVar[int]
    LISTEN_ADDRS_FIELD_NUMBER: _ClassVar[int]
    OBSERVED_ADDR_FIELD_NUMBER: _ClassVar[int]
    PROTOCOLS_FIELD_NUMBER: _ClassVar[int]
    SIGNEDPEERRECORD_FIELD_NUMBER: _ClassVar[int]
    protocol_version: str
    agent_version: str
    public_key: bytes
    listen_addrs: _containers.RepeatedScalarFieldContainer[bytes]
    observed_addr: bytes
    protocols: _containers.RepeatedScalarFieldContainer[str]
    signedPeerRecord: bytes
    def __init__(self, protocol_version: _Optional[str] = ..., agent_version: _Optional[str] = ..., public_key: _Optional[bytes] = ..., listen_addrs: _Optional[_Iterable[bytes]] = ..., observed_addr: _Optional[bytes] = ..., protocols: _Optional[_Iterable[str]] = ..., signedPeerRecord: _Optional[bytes] = ...) -> None: ...
DESCRIPTOR: google.protobuf.descriptor.FileDescriptor

@typing.final
class Identify(google.protobuf.message.Message):
    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    PROTOCOL_VERSION_FIELD_NUMBER: builtins.int
    AGENT_VERSION_FIELD_NUMBER: builtins.int
    PUBLIC_KEY_FIELD_NUMBER: builtins.int
    LISTEN_ADDRS_FIELD_NUMBER: builtins.int
    OBSERVED_ADDR_FIELD_NUMBER: builtins.int
    PROTOCOLS_FIELD_NUMBER: builtins.int
    protocol_version: builtins.str
    agent_version: builtins.str
    public_key: builtins.bytes
    observed_addr: builtins.bytes
    @property
    def listen_addrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...
    @property
    def protocols(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.str]: ...
    def __init__(
        self,
        *,
        protocol_version: builtins.str | None = ...,
        agent_version: builtins.str | None = ...,
        public_key: builtins.bytes | None = ...,
        listen_addrs: collections.abc.Iterable[builtins.bytes] | None = ...,
        observed_addr: builtins.bytes | None = ...,
        protocols: collections.abc.Iterable[builtins.str] | None = ...,
    ) -> None: ...
    def HasField(self, field_name: typing.Literal["agent_version", b"agent_version", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "public_key", b"public_key"]) -> builtins.bool: ...
    def ClearField(self, field_name: typing.Literal["agent_version", b"agent_version", "listen_addrs", b"listen_addrs", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "protocols", b"protocols", "public_key", b"public_key"]) -> None: ...

global___Identify = Identify
@@ -20,16 +20,11 @@ from libp2p.custom_types import (
from libp2p.network.stream.exceptions import (
    StreamClosed,
)
from libp2p.peer.envelope import consume_envelope
from libp2p.peer.id import (
    ID,
)
from libp2p.utils import (
    get_agent_version,
    varint,
)
from libp2p.utils.varint import (
    read_length_prefixed_protobuf,
)

from ..identify.identify import (
@@ -45,31 +40,22 @@ logger = logging.getLogger(__name__)
ID_PUSH = TProtocol("/ipfs/id/push/1.0.0")
PROTOCOL_VERSION = "ipfs/0.1.0"
AGENT_VERSION = get_agent_version()
CONCURRENCY_LIMIT = 10


def identify_push_handler_for(
    host: IHost, use_varint_format: bool = True
) -> StreamHandlerFn:
def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
    """
    Create a handler for the identify/push protocol.

    This handler receives pushed identify messages from remote peers and updates
    the local peerstore with the new information.

    Args:
        host: The libp2p host.
        use_varint_format: True=length-prefixed, False=raw protobuf.

    """

    async def handle_identify_push(stream: INetStream) -> None:
        peer_id = stream.muxed_conn.peer_id

        try:
            # Use the utility function to read the protobuf message
            data = await read_length_prefixed_protobuf(stream, use_varint_format)

            # Read the identify message from the stream
            data = await stream.read()
            identify_msg = Identify()
            identify_msg.ParseFromString(data)

@@ -79,11 +65,6 @@ def identify_push_handler_for(
            )

            logger.debug("Successfully processed identify/push from peer %s", peer_id)

            # Send acknowledgment to indicate successful processing
            # This ensures the sender knows the message was received before closing
            await stream.write(b"OK")

        except StreamClosed:
            logger.debug(
                "Stream closed while processing identify/push from %s", peer_id
@@ -92,10 +73,7 @@ def identify_push_handler_for(
            logger.error("Error processing identify/push from %s: %s", peer_id, e)
        finally:
            # Close the stream after processing
            try:
                await stream.close()
            except Exception:
                pass  # Ignore errors when closing
            await stream.close()

    return handle_identify_push
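
The handler above reads either a raw protobuf payload or one framed with an unsigned-varint length prefix. The framing used by `read_length_prefixed_protobuf` can be sketched with the standard library alone; the helper names below are illustrative, not the py-libp2p implementation:

```python
import io

def encode_uvarint(n: int) -> bytes:
    # Minimal LEB128-style unsigned-varint encoder: 7 payload bits per byte,
    # high bit set while more bytes follow.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def read_uvarint(stream: io.BytesIO) -> int:
    # Inverse of encode_uvarint: accumulate 7 bits at a time.
    shift = value = 0
    while True:
        byte = stream.read(1)[0]
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value
        shift += 7

def read_length_prefixed(stream: io.BytesIO) -> bytes:
    # Read the varint length, then exactly that many payload bytes.
    return stream.read(read_uvarint(stream))

msg = b"\x08\x01" * 100                      # stand-in for serialized Identify bytes
framed = encode_uvarint(len(msg)) + msg      # 200 -> two-byte prefix b"\xc8\x01"
assert read_length_prefixed(io.BytesIO(framed)) == msg
```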

@@ -141,19 +119,6 @@ async def _update_peerstore_from_identify(
    except Exception as e:
        logger.error("Error updating protocols for peer %s: %s", peer_id, e)

    if identify_msg.HasField("signedPeerRecord"):
        try:
            # Convert the signed peer record (Envelope) from protobuf bytes
            envelope, _ = consume_envelope(
                identify_msg.signedPeerRecord, "libp2p-peer-record"
            )
            # Use a default TTL of 2 hours (7200 seconds)
            if not peerstore.consume_peer_record(envelope, 7200):
                logger.error("Updating Certified-Addr-Book was unsuccessful")
        except Exception as e:
            logger.error(
                "Error updating the certified addr book for peer %s: %s", peer_id, e
            )
    # Update observed address if present
    if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
        try:
@@ -167,11 +132,7 @@ async def _update_peerstore_from_identify(


async def push_identify_to_peer(
    host: IHost,
    peer_id: ID,
    observed_multiaddr: Multiaddr | None = None,
    limit: trio.Semaphore = trio.Semaphore(CONCURRENCY_LIMIT),
    use_varint_format: bool = True,
    host: IHost, peer_id: ID, observed_multiaddr: Multiaddr | None = None
) -> bool:
    """
    Push an identify message to a specific peer.
@@ -179,91 +140,52 @@ async def push_identify_to_peer(
    This function opens a stream to the peer using the identify/push protocol,
    sends the identify message, and closes the stream.

    Args:
        host: The libp2p host.
        peer_id: The peer ID to push to.
        observed_multiaddr: The observed multiaddress (optional).
        limit: Semaphore for concurrency control.
        use_varint_format: True=length-prefixed, False=raw protobuf.

    Returns:
        bool: True if the push was successful, False otherwise.
    Returns
    -------
    bool
        True if the push was successful, False otherwise.

    """
    async with limit:
        try:
            # Create a new stream to the peer using the identify/push protocol
            stream = await host.new_stream(peer_id, [ID_PUSH])
    try:
        # Create a new stream to the peer using the identify/push protocol
        stream = await host.new_stream(peer_id, [ID_PUSH])

            # Create the identify message
            identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
            response = identify_msg.SerializeToString()
        # Create the identify message
        identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
        response = identify_msg.SerializeToString()

            if use_varint_format:
                # Send length-prefixed identify message
                await stream.write(varint.encode_uvarint(len(response)))
                await stream.write(response)
            else:
                # Send raw protobuf message
                await stream.write(response)
        # Send the identify message
        await stream.write(response)

            # Wait for acknowledgment from the receiver with timeout
            # This ensures the message was processed before closing
            try:
                with trio.move_on_after(1.0):  # 1 second timeout
                    ack = await stream.read(2)  # Read "OK" acknowledgment
                    if ack != b"OK":
                        logger.warning(
                            "Unexpected acknowledgment from peer %s: %s", peer_id, ack
                        )
            except Exception as e:
                logger.debug("No acknowledgment received from peer %s: %s", peer_id, e)
                # Continue anyway, as the message might have been processed
        # Close the stream
        await stream.close()

            # Close the stream after acknowledgment (or timeout)
            await stream.close()

            logger.debug("Successfully pushed identify to peer %s", peer_id)
            return True
        except Exception as e:
            logger.error("Error pushing identify to peer %s: %s", peer_id, e)
            return False
        logger.debug("Successfully pushed identify to peer %s", peer_id)
        return True
    except Exception as e:
        logger.error("Error pushing identify to peer %s: %s", peer_id, e)
        return False


async def push_identify_to_peers(
    host: IHost,
    peer_ids: set[ID] | None = None,
    observed_multiaddr: Multiaddr | None = None,
    use_varint_format: bool = True,
) -> None:
    """
    Push an identify message to multiple peers in parallel.

    If peer_ids is None, push to all connected peers.

    Args:
        host: The libp2p host.
        peer_ids: Set of peer IDs to push to (if None, push to all connected peers).
        observed_multiaddr: The observed multiaddress (optional).
        use_varint_format: True=length-prefixed, False=raw protobuf.

    """
    if peer_ids is None:
        # Get all connected peers
        peer_ids = set(host.get_connected_peers())

    # Create a single shared semaphore for concurrency control
    limit = trio.Semaphore(CONCURRENCY_LIMIT)
        peer_ids = set(host.get_peerstore().peer_ids())

    # Push to each peer in parallel using a trio.Nursery
    # limiting concurrent connections to CONCURRENCY_LIMIT
    # TODO: Consider using a bounded nursery to limit concurrency
    # and avoid overwhelming the network. This can be done by using
    # trio.open_nursery(max_concurrent=10) or similar.
    # For now, we will use an unbounded nursery for simplicity.
    async with trio.open_nursery() as nursery:
        for peer_id in peer_ids:
            nursery.start_soon(
                push_identify_to_peer,
                host,
                peer_id,
                observed_multiaddr,
                limit,
                use_varint_format,
            )
            nursery.start_soon(push_identify_to_peer, host, peer_id, observed_multiaddr)
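
The key idea in the new version is a single shared semaphore: the nursery still spawns one task per peer, but at most `CONCURRENCY_LIMIT` of them are inside the network round-trip at once. A stdlib `asyncio` sketch of the same bounding effect (hypothetical `push_one`/`push_all`, with `sleep(0)` standing in for I/O — not the trio code above):

```python
import asyncio

CONCURRENCY_LIMIT = 10

async def push_one(peer_id: int, limit: asyncio.Semaphore,
                   active: list[int], peak: list[int]) -> int:
    async with limit:                  # at most CONCURRENCY_LIMIT holders
        active[0] += 1
        peak[0] = max(peak[0], active[0])
        await asyncio.sleep(0)         # stand-in for the network round-trip
        active[0] -= 1
        return peer_id

async def push_all(peer_ids: range) -> tuple[list[int], int]:
    limit = asyncio.Semaphore(CONCURRENCY_LIMIT)   # one shared semaphore
    active, peak = [0], [0]
    results = await asyncio.gather(
        *(push_one(p, limit, active, peak) for p in peer_ids)
    )
    return results, peak[0]

results, peak = asyncio.run(push_all(range(25)))
assert results == list(range(25))      # gather preserves order
assert peak <= CONCURRENCY_LIMIT       # concurrency never exceeds the cap
```

Passing the semaphore in as an argument (rather than a default created at import time, as in the old `limit: trio.Semaphore = trio.Semaphore(...)` signature) is what makes the cap actually shared across all calls.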

@@ -3,7 +3,6 @@ from typing import (
    TYPE_CHECKING,
)

from multiaddr import Multiaddr
import trio

from libp2p.abc import (
@@ -148,24 +147,6 @@ class SwarmConn(INetConn):
    def get_streams(self) -> tuple[NetStream, ...]:
        return tuple(self.streams)

    def get_transport_addresses(self) -> list[Multiaddr]:
        """
        Retrieve the transport addresses used by this connection.

        Returns
        -------
        list[Multiaddr]
            A list of multiaddresses used by the transport.

        """
        # Return the addresses from the peerstore for this peer
        try:
            peer_id = self.muxed_conn.peer_id
            return self.swarm.peerstore.addrs(peer_id)
        except Exception as e:
            logging.warning(f"Error getting transport addresses: {e}")
            return []

    def remove_stream(self, stream: NetStream) -> None:
        if stream not in self.streams:
            return

@@ -1,271 +0,0 @@
from typing import Any, cast

from libp2p.crypto.ed25519 import Ed25519PublicKey
from libp2p.crypto.keys import PrivateKey, PublicKey
from libp2p.crypto.rsa import RSAPublicKey
from libp2p.crypto.secp256k1 import Secp256k1PublicKey
import libp2p.peer.pb.crypto_pb2 as cryto_pb
import libp2p.peer.pb.envelope_pb2 as pb
import libp2p.peer.pb.peer_record_pb2 as record_pb
from libp2p.peer.peer_record import (
    PeerRecord,
    peer_record_from_protobuf,
    unmarshal_record,
)
from libp2p.utils.varint import encode_uvarint

ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_CODEC = b"\x03\x01"


class Envelope:
    """
    A signed wrapper around a serialized libp2p record.

    Envelopes are cryptographically signed by the author's private key
    and are scoped to a specific 'domain' to prevent cross-protocol replay.

    Attributes:
        public_key: The public key that can verify the envelope's signature.
        payload_type: A multicodec code identifying the type of payload inside.
        raw_payload: The raw serialized record data.
        signature: Signature over the domain-scoped payload content.

    """

    public_key: PublicKey
    payload_type: bytes
    raw_payload: bytes
    signature: bytes

    _cached_record: PeerRecord | None = None
    _unmarshal_error: Exception | None = None

    def __init__(
        self,
        public_key: PublicKey,
        payload_type: bytes,
        raw_payload: bytes,
        signature: bytes,
    ):
        self.public_key = public_key
        self.payload_type = payload_type
        self.raw_payload = raw_payload
        self.signature = signature

    def marshal_envelope(self) -> bytes:
        """
        Serialize this Envelope into its protobuf wire format.

        Converts all envelope fields into a `pb.Envelope` protobuf message
        and returns the serialized bytes.

        :return: Serialized envelope as bytes.
        """
        pb_env = pb.Envelope(
            public_key=pub_key_to_protobuf(self.public_key),
            payload_type=self.payload_type,
            payload=self.raw_payload,
            signature=self.signature,
        )
        return pb_env.SerializeToString()

    def validate(self, domain: str) -> None:
        """
        Verify the envelope's signature within the given domain scope.

        This ensures that the envelope has not been tampered with
        and was signed under the correct usage context.

        :param domain: Domain string that contextualizes the signature.
        :raises ValueError: If the signature is invalid.
        """
        unsigned = make_unsigned(domain, self.payload_type, self.raw_payload)
        if not self.public_key.verify(unsigned, self.signature):
            raise ValueError("Invalid envelope signature")

    def record(self) -> PeerRecord:
        """
        Lazily decode and return the embedded PeerRecord.

        This method unmarshals the payload bytes into a `PeerRecord` instance,
        using the registered codec to identify the type. The decoded result
        is cached for future use.

        :return: Decoded PeerRecord object.
        :raises Exception: If decoding fails or payload type is unsupported.
        """
        if self._cached_record is not None:
            return self._cached_record

        try:
            if self.payload_type != PEER_RECORD_CODEC:
                raise ValueError("Unsupported payload type in envelope")
            msg = record_pb.PeerRecord()
            msg.ParseFromString(self.raw_payload)

            self._cached_record = peer_record_from_protobuf(msg)
            return self._cached_record
        except Exception as e:
            self._unmarshal_error = e
            raise

    def equal(self, other: Any) -> bool:
        """
        Compare this Envelope with another for structural equality.

        Two envelopes are considered equal if:
        - They have the same public key
        - The payload type and payload bytes match
        - Their signatures are identical

        :param other: Another object to compare.
        :return: True if equal, False otherwise.
        """
        if isinstance(other, Envelope):
            return (
                self.public_key.__eq__(other.public_key)
                and self.payload_type == other.payload_type
                and self.signature == other.signature
                and self.raw_payload == other.raw_payload
            )
        return False


def pub_key_to_protobuf(pub_key: PublicKey) -> cryto_pb.PublicKey:
    """
    Convert a Python PublicKey object to its protobuf equivalent.

    :param pub_key: A libp2p-compatible PublicKey instance.
    :return: Serialized protobuf PublicKey message.
    """
    internal_key_type = pub_key.get_type()
    key_type = cast(cryto_pb.KeyType, internal_key_type.value)
    data = pub_key.to_bytes()
    protobuf_key = cryto_pb.PublicKey(Type=key_type, Data=data)
    return protobuf_key


def pub_key_from_protobuf(pb_key: cryto_pb.PublicKey) -> PublicKey:
    """
    Parse a protobuf PublicKey message into a native libp2p PublicKey.

    Supports Ed25519, RSA, and Secp256k1 key types.

    :param pb_key: Protobuf representation of a public key.
    :return: Parsed PublicKey object.
    :raises ValueError: If the key type is unrecognized.
    """
    if pb_key.Type == cryto_pb.KeyType.Ed25519:
        return Ed25519PublicKey.from_bytes(pb_key.Data)
    elif pb_key.Type == cryto_pb.KeyType.RSA:
        return RSAPublicKey.from_bytes(pb_key.Data)
    elif pb_key.Type == cryto_pb.KeyType.Secp256k1:
        return Secp256k1PublicKey.from_bytes(pb_key.Data)
    # libp2p.crypto.ecdsa not implemented
    else:
        raise ValueError(f"Unknown key type: {pb_key.Type}")


def seal_record(record: PeerRecord, private_key: PrivateKey) -> Envelope:
    """
    Create and sign a new Envelope from a PeerRecord.

    The record is serialized and signed in the scope of its domain and codec.
    The result is a self-contained, verifiable Envelope.

    :param record: A PeerRecord to encapsulate.
    :param private_key: The signer's private key.
    :return: A signed Envelope instance.
    """
    payload = record.marshal_record()

    unsigned = make_unsigned(record.domain(), record.codec(), payload)
    signature = private_key.sign(unsigned)

    return Envelope(
        public_key=private_key.get_public_key(),
        payload_type=record.codec(),
        raw_payload=payload,
        signature=signature,
    )


def consume_envelope(data: bytes, domain: str) -> tuple[Envelope, PeerRecord]:
    """
    Parse, validate, and decode an Envelope from bytes.

    Validates the envelope's signature using the given domain and decodes
    the inner payload into a PeerRecord.

    :param data: Serialized envelope bytes.
    :param domain: Domain string to verify signature against.
    :return: Tuple of (Envelope, PeerRecord).
    :raises ValueError: If signature validation or decoding fails.
    """
    env = unmarshal_envelope(data)
    env.validate(domain)
    record = env.record()
    return env, record


def unmarshal_envelope(data: bytes) -> Envelope:
    """
    Deserialize an Envelope from its wire format.

    This parses the protobuf fields without verifying the signature.

    :param data: Serialized envelope bytes.
    :return: Parsed Envelope object.
    :raises DecodeError: If protobuf parsing fails.
    """
    pb_env = pb.Envelope()
    pb_env.ParseFromString(data)
    pk = pub_key_from_protobuf(pb_env.public_key)

    return Envelope(
        public_key=pk,
        payload_type=pb_env.payload_type,
        raw_payload=pb_env.payload,
        signature=pb_env.signature,
    )


def make_unsigned(domain: str, payload_type: bytes, payload: bytes) -> bytes:
    """
    Build a byte buffer to be signed for an Envelope.

    The unsigned byte structure is:
        varint(len(domain)) || domain ||
        varint(len(payload_type)) || payload_type ||
        varint(len(payload)) || payload

    This is the exact input used during signing and verification.

    :param domain: Domain string for signature scoping.
    :param payload_type: Identifier for the type of payload.
    :param payload: Raw serialized payload bytes.
    :return: Byte buffer to be signed or verified.
    """
    fields = [domain.encode(), payload_type, payload]
    buf = bytearray()

    for field in fields:
        buf.extend(encode_uvarint(len(field)))
        buf.extend(field)

    return bytes(buf)
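
To make the signing-buffer layout concrete, here is a self-contained sketch of `make_unsigned` with a minimal stand-in for `encode_uvarint` (the real one lives in `libp2p.utils.varint`). For the peer-record domain and codec, every field is short enough that each length prefix is a single byte:

```python
def encode_uvarint(n: int) -> bytes:
    # Stand-in for libp2p.utils.varint.encode_uvarint.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def make_unsigned(domain: str, payload_type: bytes, payload: bytes) -> bytes:
    # varint(len) || bytes, for domain, payload_type, and payload in order.
    buf = bytearray()
    for field in (domain.encode(), payload_type, payload):
        buf.extend(encode_uvarint(len(field)))
        buf.extend(field)
    return bytes(buf)

buf = make_unsigned("libp2p-peer-record", b"\x03\x01", b"payload")
# 1-byte prefix + 18-byte domain, 1 + 2 codec, 1 + 7 payload = 30 bytes
assert len(buf) == 30
assert buf[0] == 18 and buf[19] == 2 and buf[22] == 7
```

Because the same buffer is rebuilt on both sides, a signature made over one domain string can never validate under another, which is the replay protection the Envelope docstring describes.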


def debug_dump_envelope(env: Envelope) -> None:
    print("\n=== Envelope ===")
    print(f"Payload Type: {env.payload_type!r}")
    print(f"Signature: {env.signature.hex()} ({len(env.signature)} bytes)")
    print(f"Raw Payload: {env.raw_payload.hex()} ({len(env.raw_payload)} bytes)")

    try:
        peer_record = unmarshal_record(env.raw_payload)
        print("\n=== Parsed PeerRecord ===")
        print(peer_record)
    except Exception as e:
        print("Failed to parse PeerRecord:", e)
@@ -1,4 +1,3 @@
import functools
import hashlib

import base58
@@ -37,23 +36,25 @@ if ENABLE_INLINING:

class ID:
    _bytes: bytes
    _xor_id: int | None = None
    _b58_str: str | None = None

    def __init__(self, peer_id_bytes: bytes) -> None:
        self._bytes = peer_id_bytes

    @functools.cached_property
    @property
    def xor_id(self) -> int:
        return int(sha256_digest(self._bytes).hex(), 16)

    @functools.cached_property
    def base58(self) -> str:
        return base58.b58encode(self._bytes).decode()
        if not self._xor_id:
            self._xor_id = int(sha256_digest(self._bytes).hex(), 16)
        return self._xor_id

    def to_bytes(self) -> bytes:
        return self._bytes

    def to_base58(self) -> str:
        return self.base58
        if not self._b58_str:
            self._b58_str = base58.b58encode(self._bytes).decode()
        return self._b58_str

    def __repr__(self) -> str:
        return f"<libp2p.peer.id.ID ({self!s})>"
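
`functools.cached_property` computes the value once on first access and stores it in the instance `__dict__`, so later accesses skip the property entirely. It also sidesteps a subtle pitfall in the manual cache being replaced: `if not self._xor_id` would recompute whenever the cached value happened to be falsy. A minimal illustration (toy `ID`, with stdlib `hashlib.sha256` standing in for the module's `sha256_digest`):

```python
import functools
import hashlib

class ID:
    # Toy version of the class above, base58 handling omitted.
    def __init__(self, peer_id_bytes: bytes) -> None:
        self._bytes = peer_id_bytes

    @functools.cached_property
    def xor_id(self) -> int:
        # Runs once; the result is then stored in the instance __dict__ and
        # returned directly on later accesses, even if it were falsy.
        return int(hashlib.sha256(self._bytes).hexdigest(), 16)

peer = ID(b"peer")
first = peer.xor_id
assert "xor_id" in peer.__dict__   # cached after the first access
assert peer.xor_id == first
```

Note that `cached_property` needs a writable instance `__dict__`, so this pattern would conflict with a `__slots__` declaration.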

@@ -1,22 +0,0 @@
syntax = "proto3";

package libp2p.peer.pb.crypto;

option go_package = "github.com/libp2p/go-libp2p/core/crypto/pb";

enum KeyType {
  RSA = 0;
  Ed25519 = 1;
  Secp256k1 = 2;
  ECDSA = 3;
}

message PublicKey {
  KeyType Type = 1;
  bytes Data = 2;
}

message PrivateKey {
  KeyType Type = 1;
  bytes Data = 2;
}
@@ -1,31 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/crypto.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1blibp2p/peer/pb/crypto.proto\x12\x15libp2p.peer.pb.crypto\"G\n\tPublicKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c\"H\n\nPrivateKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c*9\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x42,Z*github.com/libp2p/go-libp2p/core/crypto/pbb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.crypto_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
  _globals['DESCRIPTOR']._options = None
  _globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/crypto/pb'
  _globals['_KEYTYPE']._serialized_start=201
  _globals['_KEYTYPE']._serialized_end=258
  _globals['_PUBLICKEY']._serialized_start=54
  _globals['_PUBLICKEY']._serialized_end=125
  _globals['_PRIVATEKEY']._serialized_start=127
  _globals['_PRIVATEKEY']._serialized_end=199
# @@protoc_insertion_point(module_scope)
@@ -1,33 +0,0 @@
from google.protobuf.internal import enum_type_wrapper as _enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Optional as _Optional, Union as _Union

DESCRIPTOR: _descriptor.FileDescriptor

class KeyType(int, metaclass=_enum_type_wrapper.EnumTypeWrapper):
    __slots__ = ()
    RSA: _ClassVar[KeyType]
    Ed25519: _ClassVar[KeyType]
    Secp256k1: _ClassVar[KeyType]
    ECDSA: _ClassVar[KeyType]
RSA: KeyType
Ed25519: KeyType
Secp256k1: KeyType
ECDSA: KeyType

class PublicKey(_message.Message):
    __slots__ = ("Type", "Data")
    TYPE_FIELD_NUMBER: _ClassVar[int]
    DATA_FIELD_NUMBER: _ClassVar[int]
    Type: KeyType
    Data: bytes
    def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...

class PrivateKey(_message.Message):
    __slots__ = ("Type", "Data")
    TYPE_FIELD_NUMBER: _ClassVar[int]
    DATA_FIELD_NUMBER: _ClassVar[int]
    Type: KeyType
    Data: bytes
    def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...
@@ -1,14 +0,0 @@
syntax = "proto3";

package libp2p.peer.pb.record;

import "libp2p/peer/pb/crypto.proto";

option go_package = "github.com/libp2p/go-libp2p/core/record/pb";

message Envelope {
  libp2p.peer.pb.crypto.PublicKey public_key = 1;
  bytes payload_type = 2;
  bytes payload = 3;
  bytes signature = 5;
}
@@ -1,28 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/envelope.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


from libp2p.peer.pb import crypto_pb2 as libp2p_dot_peer_dot_pb_dot_crypto__pb2


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/peer/pb/envelope.proto\x12\x15libp2p.peer.pb.record\x1a\x1blibp2p/peer/pb/crypto.proto\"z\n\x08\x45nvelope\x12\x34\n\npublic_key\x18\x01 \x01(\x0b\x32 .libp2p.peer.pb.crypto.PublicKey\x12\x14\n\x0cpayload_type\x18\x02 \x01(\x0c\x12\x0f\n\x07payload\x18\x03 \x01(\x0c\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x42,Z*github.com/libp2p/go-libp2p/core/record/pbb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.envelope_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
  _globals['DESCRIPTOR']._options = None
  _globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/record/pb'
  _globals['_ENVELOPE']._serialized_start=85
  _globals['_ENVELOPE']._serialized_end=207
# @@protoc_insertion_point(module_scope)
@@ -1,18 +0,0 @@
from libp2p.peer.pb import crypto_pb2 as _crypto_pb2
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Mapping as _Mapping, Optional as _Optional, Union as _Union

DESCRIPTOR: _descriptor.FileDescriptor

class Envelope(_message.Message):
    __slots__ = ("public_key", "payload_type", "payload", "signature")
    PUBLIC_KEY_FIELD_NUMBER: _ClassVar[int]
    PAYLOAD_TYPE_FIELD_NUMBER: _ClassVar[int]
    PAYLOAD_FIELD_NUMBER: _ClassVar[int]
    SIGNATURE_FIELD_NUMBER: _ClassVar[int]
    public_key: _crypto_pb2.PublicKey
    payload_type: bytes
    payload: bytes
    signature: bytes
    def __init__(self, public_key: _Optional[_Union[_crypto_pb2.PublicKey, _Mapping]] = ..., payload_type: _Optional[bytes] = ..., payload: _Optional[bytes] = ..., signature: _Optional[bytes] = ...) -> None: ...  # type: ignore[type-arg]
@@ -1,31 +0,0 @@
syntax = "proto3";

package peer.pb;

option go_package = "github.com/libp2p/go-libp2p/core/peer/pb";

// PeerRecord messages contain information that is useful to share with other peers.
// Currently, a PeerRecord contains the public listen addresses for a peer, but this
// is expected to expand to include other information in the future.
//
// PeerRecords are designed to be serialized to bytes and placed inside of
// SignedEnvelopes before sharing with other peers.
// See https://github.com/libp2p/go-libp2p/blob/master/core/record/pb/envelope.proto for
// the SignedEnvelope definition.
message PeerRecord {

  // AddressInfo is a wrapper around a binary multiaddr. It is defined as a
  // separate message to allow us to add per-address metadata in the future.
  message AddressInfo {
    bytes multiaddr = 1;
  }

  // peer_id contains a libp2p peer id in its binary representation.
  bytes peer_id = 1;

  // seq contains a monotonically-increasing sequence counter to order PeerRecords in time.
  uint64 seq = 2;

  // addresses is a list of public listen addresses for the peer.
  repeated AddressInfo addresses = 3;
}
@@ -1,29 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/peer_record.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()




DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/peer/pb/peer_record.proto\x12\x07peer.pb\"\x80\x01\n\nPeerRecord\x12\x0f\n\x07peer_id\x18\x01 \x01(\x0c\x12\x0b\n\x03seq\x18\x02 \x01(\x04\x12\x32\n\taddresses\x18\x03 \x03(\x0b\x32\x1f.peer.pb.PeerRecord.AddressInfo\x1a \n\x0b\x41\x64\x64ressInfo\x12\x11\n\tmultiaddr\x18\x01 \x01(\x0c\x42*Z(github.com/libp2p/go-libp2p/core/peer/pbb\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.peer_record_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
  _globals['DESCRIPTOR']._options = None
  _globals['DESCRIPTOR']._serialized_options = b'Z(github.com/libp2p/go-libp2p/core/peer/pb'
  _globals['_PEERRECORD']._serialized_start=46
  _globals['_PEERRECORD']._serialized_end=174
  _globals['_PEERRECORD_ADDRESSINFO']._serialized_start=142
  _globals['_PEERRECORD_ADDRESSINFO']._serialized_end=174
# @@protoc_insertion_point(module_scope)
@@ -1,21 +0,0 @@
from google.protobuf.internal import containers as _containers
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Iterable as _Iterable, Mapping as _Mapping, Optional as _Optional, Union as _Union

DESCRIPTOR: _descriptor.FileDescriptor

class PeerRecord(_message.Message):
    __slots__ = ("peer_id", "seq", "addresses")
    class AddressInfo(_message.Message):
        __slots__ = ("multiaddr",)
        MULTIADDR_FIELD_NUMBER: _ClassVar[int]
        multiaddr: bytes
        def __init__(self, multiaddr: _Optional[bytes] = ...) -> None: ...
    PEER_ID_FIELD_NUMBER: _ClassVar[int]
    SEQ_FIELD_NUMBER: _ClassVar[int]
    ADDRESSES_FIELD_NUMBER: _ClassVar[int]
    peer_id: bytes
    seq: int
    addresses: _containers.RepeatedCompositeFieldContainer[PeerRecord.AddressInfo]
    def __init__(self, peer_id: _Optional[bytes] = ..., seq: _Optional[int] = ..., addresses: _Optional[_Iterable[_Union[PeerRecord.AddressInfo, _Mapping]]] = ...) -> None: ...  # type: ignore[type-arg]
@@ -1,251 +0,0 @@
from collections.abc import Sequence
import threading
import time
from typing import Any

from multiaddr import Multiaddr

from libp2p.abc import IPeerRecord
from libp2p.peer.id import ID
import libp2p.peer.pb.peer_record_pb2 as pb
from libp2p.peer.peerinfo import PeerInfo

PEER_RECORD_ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_ENVELOPE_PAYLOAD_TYPE = b"\x03\x01"

_last_timestamp_lock = threading.Lock()
_last_timestamp: int = 0


class PeerRecord(IPeerRecord):
    """
    A record that contains metadata about a peer in the libp2p network.

    This includes:
    - `peer_id`: The peer's globally unique identifier.
    - `addrs`: A list of the peer's publicly reachable multiaddrs.
    - `seq`: A strictly monotonically increasing timestamp used
      to order records over time.

    PeerRecords are designed to be signed and transmitted in libp2p routing Envelopes.
    """

    peer_id: ID
    addrs: list[Multiaddr]
    seq: int

    def __init__(
        self,
        peer_id: ID | None = None,
        addrs: list[Multiaddr] | None = None,
        seq: int | None = None,
    ) -> None:
        """
        Initialize a new PeerRecord.
        If `seq` is not provided, a timestamp-based strictly increasing sequence
        number will be generated.

        :param peer_id: ID of the peer this record refers to.
        :param addrs: Public multiaddrs of the peer.
        :param seq: Monotonic sequence number.

        """
        if peer_id is not None:
            self.peer_id = peer_id
        self.addrs = addrs or []
        if seq is not None:
            self.seq = seq
        else:
            self.seq = timestamp_seq()
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return (
|
||||
f"PeerRecord(\n"
|
||||
f" peer_id={self.peer_id},\n"
|
||||
f" multiaddrs={[str(m) for m in self.addrs]},\n"
|
||||
f" seq={self.seq}\n"
|
||||
f")"
|
||||
)
|
||||
|
||||
def domain(self) -> str:
|
||||
"""
|
||||
Return the domain string associated with this PeerRecord.
|
||||
|
||||
Used during record signing and envelope validation to identify the record type.
|
||||
"""
|
||||
return PEER_RECORD_ENVELOPE_DOMAIN
|
||||
|
||||
def codec(self) -> bytes:
|
||||
"""
|
||||
Return the codec identifier for PeerRecords.
|
||||
|
||||
This binary perfix helps distinguish PeerRecords in serialized envelopes.
|
||||
"""
|
||||
return PEER_RECORD_ENVELOPE_PAYLOAD_TYPE
|
||||
|
||||
def to_protobuf(self) -> pb.PeerRecord:
|
||||
"""
|
||||
Convert the current PeerRecord into a ProtoBuf PeerRecord message.
|
||||
|
||||
:raises ValueError: if peer_id serialization fails.
|
||||
:return: A ProtoBuf-encoded PeerRecord message object.
|
||||
"""
|
||||
try:
|
||||
id_bytes = self.peer_id.to_bytes()
|
||||
except Exception as e:
|
||||
raise ValueError(f"failed to marshal peer_id: {e}")
|
||||
|
||||
msg = pb.PeerRecord()
|
||||
msg.peer_id = id_bytes
|
||||
msg.seq = self.seq
|
||||
msg.addresses.extend(addrs_to_protobuf(self.addrs))
|
||||
return msg
|
||||
|
||||
def marshal_record(self) -> bytes:
|
||||
"""
|
||||
Serialize a PeerRecord into raw bytes suitable for embedding in an Envelope.
|
||||
|
||||
This is typically called during the process of signing or sealing the record.
|
||||
:raises ValueError: if serialization to protobuf fails.
|
||||
:return: Serialized PeerRecord bytes.
|
||||
"""
|
||||
try:
|
||||
msg = self.to_protobuf()
|
||||
return msg.SerializeToString()
|
||||
except Exception as e:
|
||||
raise ValueError(f"failed to marshal PeerRecord: {e}")
|
||||
|
||||
def equal(self, other: Any) -> bool:
|
||||
"""
|
||||
Check if this PeerRecord is identical to another.
|
||||
|
||||
Two PeerRecords are considered equal if:
|
||||
- Their peer IDs match.
|
||||
- Their sequence numbers are identical.
|
||||
- Their address lists are identical and in the same order.
|
||||
|
||||
:param other: Another PeerRecord instance.
|
||||
:return: True if all fields mathch, False otherwise.
|
||||
"""
|
||||
if isinstance(other, PeerRecord):
|
||||
if self.peer_id == other.peer_id:
|
||||
if self.seq == other.seq:
|
||||
if len(self.addrs) == len(other.addrs):
|
||||
for a1, a2 in zip(self.addrs, other.addrs):
|
||||
if a1 == a2:
|
||||
continue
|
||||
else:
|
||||
return False
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def unmarshal_record(data: bytes) -> PeerRecord:
|
||||
"""
|
||||
Deserialize a PeerRecord from its serialized byte representation.
|
||||
|
||||
Typically used when receiveing a PeerRecord inside a signed routing Envelope.
|
||||
|
||||
:param data: Serialized protobuf-encoded bytes.
|
||||
:raises ValueError: if parsing or conversion fails.
|
||||
:reurn: A valid PeerRecord instance.
|
||||
"""
|
||||
if data is None:
|
||||
raise ValueError("cannot unmarshal PeerRecord from None")
|
||||
|
||||
msg = pb.PeerRecord()
|
||||
try:
|
||||
msg.ParseFromString(data)
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to parse PeerRecord protobuf: {e}")
|
||||
|
||||
try:
|
||||
record = peer_record_from_protobuf(msg)
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to convert protobuf to PeerRecord: {e}")
|
||||
|
||||
return record
|
||||
|
||||
|
||||
def timestamp_seq() -> int:
|
||||
"""
|
||||
Generate a strictly increasing timestamp-based sequence number.
|
||||
|
||||
Ensures that even if multiple PeerRecords are generated in the same nanosecond,
|
||||
their `seq` values will still be strictly increasing by using a lock to track the
|
||||
last value.
|
||||
|
||||
:return: A strictly increasing integer timestamp.
|
||||
"""
|
||||
global _last_timestamp
|
||||
now = int(time.time_ns())
|
||||
with _last_timestamp_lock:
|
||||
if now <= _last_timestamp:
|
||||
now = _last_timestamp + 1
|
||||
_last_timestamp = now
|
||||
return now
|
||||
|
||||
|
||||
def peer_record_from_peer_info(info: PeerInfo) -> PeerRecord:
|
||||
"""
|
||||
Create a PeerRecord from a PeerInfo object.
|
||||
|
||||
This automatically assigns a timestamp-based sequence number to the record.
|
||||
:param info: A PeerInfo instance (contains peer_id and addrs).
|
||||
:return: A PeerRecord instance.
|
||||
"""
|
||||
record = PeerRecord()
|
||||
record.peer_id = info.peer_id
|
||||
record.addrs = info.addrs
|
||||
return record
|
||||
|
||||
|
||||
def peer_record_from_protobuf(msg: pb.PeerRecord) -> PeerRecord:
|
||||
"""
|
||||
Convert a protobuf PeerRecord message into a PeerRecord object.
|
||||
|
||||
:param msg: Protobuf PeerRecord message.
|
||||
:raises ValueError: if the peer_id cannot be parsed.
|
||||
:return: A deserialized PeerRecord instance.
|
||||
"""
|
||||
try:
|
||||
peer_id = ID(msg.peer_id)
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to unmarshal peer_id: {e}")
|
||||
|
||||
addrs = addrs_from_protobuf(msg.addresses)
|
||||
seq = msg.seq
|
||||
|
||||
return PeerRecord(peer_id, addrs, seq)
|
||||
|
||||
|
||||
def addrs_from_protobuf(addrs: Sequence[pb.PeerRecord.AddressInfo]) -> list[Multiaddr]:
|
||||
"""
|
||||
Convert a list of protobuf address records to Multiaddr objects.
|
||||
|
||||
:param addrs: A list of protobuf PeerRecord.AddressInfo messages.
|
||||
:return: A list of decoded Multiaddr instances (invalid ones are skipped).
|
||||
"""
|
||||
out = []
|
||||
for addr_info in addrs:
|
||||
try:
|
||||
addr = Multiaddr(addr_info.multiaddr)
|
||||
out.append(addr)
|
||||
except Exception:
|
||||
continue
|
||||
return out
|
||||
|
||||
|
||||
def addrs_to_protobuf(addrs: list[Multiaddr]) -> list[pb.PeerRecord.AddressInfo]:
|
||||
"""
|
||||
Convert a list of Multiaddr objects into their protobuf representation.
|
||||
|
||||
:param addrs: A list of Multiaddr instances.
|
||||
:return: A list of PeerRecord.AddressInfo protobuf messages.
|
||||
"""
|
||||
out = []
|
||||
for addr in addrs:
|
||||
addr_info = pb.PeerRecord.AddressInfo()
|
||||
addr_info.multiaddr = addr.to_bytes()
|
||||
out.append(addr_info)
|
||||
return out
|
||||
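The `timestamp_seq` logic being removed above can be sanity-checked in isolation. The following is a minimal standalone reimplementation of the same technique (the `next_seq` name is mine, not from the source): a nanosecond wall-clock value, bumped by one under a lock whenever two calls land in the same tick.

```python
import threading
import time

_lock = threading.Lock()
_last = 0


def next_seq() -> int:
    # Nanosecond wall clock, bumped by one whenever two calls land in
    # the same tick, so returned values are strictly increasing even
    # under concurrent callers.
    global _last
    now = time.time_ns()
    with _lock:
        if now <= _last:
            now = _last + 1
        _last = now
        return now


seqs = [next_seq() for _ in range(1000)]
print(all(a < b for a, b in zip(seqs, seqs[1:])))  # True
```

Because the comparison and the update of `_last` happen under one lock, no two callers can ever observe the same value, which is exactly what lets a higher `seq` reliably mean "newer record".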
@@ -18,13 +18,6 @@ from libp2p.crypto.keys import (
    PublicKey,
)

"""
Latency EWMA Smoothing governs the decay of the EWMA (the speed at which
it changes). This must be a normalized (0-1) value.
1 is 100% change, 0 is no change.
"""
LATENCY_EWMA_SMOOTHING = 0.1


class PeerData(IPeerData):
    pubkey: PublicKey | None
@@ -34,7 +27,6 @@ class PeerData(IPeerData):
    addrs: list[Multiaddr]
    last_identified: int
    ttl: int  # Keep ttl=0 by default for always valid
    latmap: float

    def __init__(self) -> None:
        self.pubkey = None
@@ -44,9 +36,6 @@ class PeerData(IPeerData):
        self.addrs = []
        self.last_identified = int(time.time())
        self.ttl = 0
        self.latmap = 0

    # --------PROTO-BOOK--------

    def get_protocols(self) -> list[str]:
        """
@@ -66,37 +55,6 @@ class PeerData(IPeerData):
        """
        self.protocols = list(protocols)

    def remove_protocols(self, protocols: Sequence[str]) -> None:
        """
        :param protocols: protocols to remove
        """
        for protocol in protocols:
            if protocol in self.protocols:
                self.protocols.remove(protocol)

    def supports_protocols(self, protocols: Sequence[str]) -> list[str]:
        """
        :param protocols: protocols to check from
        :return: all supported protocols in the given list
        """
        return [proto for proto in protocols if proto in self.protocols]

    def first_supported_protocol(self, protocols: Sequence[str]) -> str:
        """
        :param protocols: protocols to check from
        :return: first supported protocol in the given list
        """
        for protocol in protocols:
            if protocol in self.protocols:
                return protocol

        return "None supported"

    def clear_protocol_data(self) -> None:
        """Clear all protocols"""
        self.protocols = []

    # -------ADDR-BOOK---------
    def add_addrs(self, addrs: Sequence[Multiaddr]) -> None:
        """
        :param addrs: multiaddresses to add
@@ -115,7 +73,6 @@ class PeerData(IPeerData):
        """Clear all addresses."""
        self.addrs = []

    # -------METADATA-----------
    def put_metadata(self, key: str, val: Any) -> None:
        """
        :param key: key in KV pair
@@ -133,11 +90,6 @@ class PeerData(IPeerData):
            return self.metadata[key]
        raise PeerDataError("key not found")

    def clear_metadata(self) -> None:
        """Clears metadata."""
        self.metadata = {}

    # -------KEY-BOOK---------------
    def add_pubkey(self, pubkey: PublicKey) -> None:
        """
        :param pubkey:
@@ -168,41 +120,9 @@ class PeerData(IPeerData):
            raise PeerDataError("private key not found")
        return self.privkey

    def clear_keydata(self) -> None:
        """Clears keydata"""
        self.pubkey = None
        self.privkey = None

    # ----------METRICS--------------
    def record_latency(self, new_latency: float) -> None:
        """
        Records a new latency measurement for the given peer
        using Exponentially Weighted Moving Average (EWMA)
        :param new_latency: the new latency value
        """
        s = LATENCY_EWMA_SMOOTHING
        if s > 1 or s < 0:
            s = 0.1

        if self.latmap == 0:
            self.latmap = new_latency
        else:
            prev = self.latmap
            updated = ((1.0 - s) * prev) + (s * new_latency)
            self.latmap = updated

    def latency_EWMA(self) -> float:
        """Returns the latency EWMA value"""
        return self.latmap

    def clear_metrics(self) -> None:
        """Clear the latency metrics"""
        self.latmap = 0
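The EWMA update in `record_latency` above reduces to a one-line recurrence. A standalone sketch (the `ewma_update` name is mine) shows how the smoothing factor weights the previous estimate against a new sample:

```python
LATENCY_EWMA_SMOOTHING = 0.1  # 0 = never changes, 1 = always the latest sample


def ewma_update(prev: float, new_latency: float, s: float = LATENCY_EWMA_SMOOTHING) -> float:
    # The first sample seeds the average; after that, each new sample
    # only shifts the estimate by the smoothing fraction `s`.
    if prev == 0:
        return new_latency
    return (1.0 - s) * prev + s * new_latency


lat = 0.0
for sample in (100.0, 200.0, 200.0):
    lat = ewma_update(lat, sample)
print(round(lat, 6))  # 119.0 — drifts from 100 toward 200: 100 -> 110 -> 119
```

With `s = 0.1`, a single latency spike moves the estimate only 10% of the way toward the new value, which is the intent of the "decay" comment above.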

    def update_last_identified(self) -> None:
        self.last_identified = int(time.time())

    # ----------TTL------------------
    def get_last_identified(self) -> int:
        """
        :return: last identified timestamp

@@ -3,11 +3,9 @@ from collections.abc import (
)
from typing import (
    Any,
    cast,
)

import multiaddr
from multiaddr.protocols import Protocol

from .id import (
    ID,
@@ -44,8 +42,7 @@ def info_from_p2p_addr(addr: multiaddr.Multiaddr) -> PeerInfo:
    p2p_protocols = p2p_part.protocols()
    if not p2p_protocols:
        raise InvalidAddrError("The last part of the address has no protocols")
    last_protocol = cast(Protocol, p2p_part.protocols()[0])

    last_protocol = p2p_protocols[0]
    if last_protocol is None:
        raise InvalidAddrError("The last protocol is None")

@@ -2,7 +2,6 @@ from collections import (
    defaultdict,
)
from collections.abc import (
    AsyncIterable,
    Sequence,
)
from typing import (
@@ -12,8 +11,6 @@ from typing import (
from multiaddr import (
    Multiaddr,
)
import trio
from trio import MemoryReceiveChannel, MemorySendChannel

from libp2p.abc import (
    IPeerStore,
@@ -23,7 +20,6 @@ from libp2p.crypto.keys import (
    PrivateKey,
    PublicKey,
)
from libp2p.peer.envelope import Envelope

from .id import (
    ID,
@@ -39,23 +35,11 @@ from .peerinfo import (
PERMANENT_ADDR_TTL = 0


class PeerRecordState:
    envelope: Envelope
    seq: int

    def __init__(self, envelope: Envelope, seq: int):
        self.envelope = envelope
        self.seq = seq


class PeerStore(IPeerStore):
    peer_data_map: dict[ID, PeerData]

    def __init__(self, max_records: int = 10000) -> None:
    def __init__(self) -> None:
        self.peer_data_map = defaultdict(PeerData)
        self.addr_update_channels: dict[ID, MemorySendChannel[Multiaddr]] = {}
        self.peer_record_map: dict[ID, PeerRecordState] = {}
        self.max_records = max_records

    def peer_info(self, peer_id: ID) -> PeerInfo:
        """
@@ -69,69 +53,6 @@ class PeerStore(IPeerStore):
            return PeerInfo(peer_id, peer_data.get_addrs())
        raise PeerStoreError("peer ID not found")

    def peer_ids(self) -> list[ID]:
        """
        :return: all of the peer IDs stored in peer store
        """
        return list(self.peer_data_map.keys())

    def clear_peerdata(self, peer_id: ID) -> None:
        """Clears all data associated with the given peer_id."""
        if peer_id in self.peer_data_map:
            del self.peer_data_map[peer_id]
        else:
            raise PeerStoreError("peer ID not found")

        # Clear the peer records
        if peer_id in self.peer_record_map:
            self.peer_record_map.pop(peer_id, None)

    def valid_peer_ids(self) -> list[ID]:
        """
        :return: all of the valid peer IDs stored in peer store
        """
        valid_peer_ids: list[ID] = []
        for peer_id, peer_data in self.peer_data_map.items():
            if not peer_data.is_expired():
                valid_peer_ids.append(peer_id)
            else:
                peer_data.clear_addrs()
        return valid_peer_ids

    def _enforce_record_limit(self) -> None:
        """Enforce maximum number of stored records."""
        if len(self.peer_record_map) > self.max_records:
            # Remove the oldest records based on sequence number
            sorted_records = sorted(
                self.peer_record_map.items(), key=lambda x: x[1].seq
            )
            records_to_remove = len(self.peer_record_map) - self.max_records
            for peer_id, _ in sorted_records[:records_to_remove]:
                self.maybe_delete_peer_record(peer_id)
                del self.peer_record_map[peer_id]
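The eviction rule in `_enforce_record_limit` above — once the cap is exceeded, drop the entries with the lowest `seq` first — can be sketched on a plain dict (the `enforce_limit` name and the dict shape are illustrative, not from the source):

```python
def enforce_limit(seq_by_peer: dict[str, int], max_records: int) -> dict[str, int]:
    # Keep the `max_records` entries with the highest sequence numbers,
    # evicting the oldest (lowest-seq) records first.
    overflow = len(seq_by_peer) - max_records
    if overflow <= 0:
        return seq_by_peer
    oldest_first = sorted(seq_by_peer.items(), key=lambda kv: kv[1])
    for peer_id, _ in oldest_first[:overflow]:
        del seq_by_peer[peer_id]
    return seq_by_peer


records = {"peerA": 5, "peerB": 1, "peerC": 9, "peerD": 3}
print(enforce_limit(records, 2))  # {'peerA': 5, 'peerC': 9}
```

Sorting by `seq` works as an age proxy here because, per `timestamp_seq` above, sequence numbers are timestamp-derived and strictly increasing.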

    async def start_cleanup_task(self, cleanup_interval: int = 3600) -> None:
        """Start periodic cleanup of expired peer records and addresses."""
        while True:
            await trio.sleep(cleanup_interval)
            self._cleanup_expired_records()

    def _cleanup_expired_records(self) -> None:
        """Remove expired peer records and addresses"""
        expired_peers = []

        for peer_id, peer_data in self.peer_data_map.items():
            if peer_data.is_expired():
                expired_peers.append(peer_id)

        for peer_id in expired_peers:
            self.maybe_delete_peer_record(peer_id)
            del self.peer_data_map[peer_id]

        self._enforce_record_limit()

    # --------PROTO-BOOK--------

    def get_protocols(self, peer_id: ID) -> list[str]:
        """
        :param peer_id: peer ID to get protocols for
@@ -158,31 +79,23 @@ class PeerStore(IPeerStore):
        peer_data = self.peer_data_map[peer_id]
        peer_data.set_protocols(list(protocols))

    def remove_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
        """
        :param peer_id: peer ID to get info for
        :param protocols: unsupported protocols to remove
        """
        peer_data = self.peer_data_map[peer_id]
        peer_data.remove_protocols(protocols)

    def supports_protocols(self, peer_id: ID, protocols: Sequence[str]) -> list[str]:
    def peer_ids(self) -> list[ID]:
        """
        :return: all of the peer IDs stored in peer store
        """
        peer_data = self.peer_data_map[peer_id]
        return peer_data.supports_protocols(protocols)
        return list(self.peer_data_map.keys())

    def first_supported_protocol(self, peer_id: ID, protocols: Sequence[str]) -> str:
        peer_data = self.peer_data_map[peer_id]
        return peer_data.first_supported_protocol(protocols)

    def clear_protocol_data(self, peer_id: ID) -> None:
        """Clears protocol data"""
        peer_data = self.peer_data_map[peer_id]
        peer_data.clear_protocol_data()

    # ------METADATA---------
    def valid_peer_ids(self) -> list[ID]:
        """
        :return: all of the valid peer IDs stored in peer store
        """
        valid_peer_ids: list[ID] = []
        for peer_id, peer_data in self.peer_data_map.items():
            if not peer_data.is_expired():
                valid_peer_ids.append(peer_id)
            else:
                peer_data.clear_addrs()
        return valid_peer_ids

    def get(self, peer_id: ID, key: str) -> Any:
        """
@@ -208,91 +121,6 @@ class PeerStore(IPeerStore):
        peer_data = self.peer_data_map[peer_id]
        peer_data.put_metadata(key, val)

    def clear_metadata(self, peer_id: ID) -> None:
        """Clears metadata"""
        peer_data = self.peer_data_map[peer_id]
        peer_data.clear_metadata()

    # -----CERT-ADDR-BOOK-----

    def maybe_delete_peer_record(self, peer_id: ID) -> None:
        """
        Delete the signed peer record for a peer if it has no known
        (non-expired) addresses.

        This is a garbage collection mechanism: if all addresses for a peer have expired
        or been cleared, there's no point holding onto its signed `Envelope`.

        :param peer_id: The peer whose record we may delete.
        """
        if peer_id in self.peer_record_map:
            if not self.addrs(peer_id):
                self.peer_record_map.pop(peer_id, None)

    def consume_peer_record(self, envelope: Envelope, ttl: int) -> bool:
        """
        Accept and store a signed PeerRecord, unless it's older than
        the one already stored.

        This function:
        - Extracts the peer ID and sequence number from the envelope
        - Rejects the record if it's older (lower seq)
        - Updates the stored peer record and replaces associated addresses if accepted

        :param envelope: Signed envelope containing a PeerRecord.
        :param ttl: Time-to-live for the included multiaddrs (in seconds).
        :return: True if the record was accepted and stored; False if it was rejected.
        """
        record = envelope.record()
        peer_id = record.peer_id

        existing = self.peer_record_map.get(peer_id)
        if existing and existing.seq > record.seq:
            return False  # reject older record

        new_addrs = set(record.addrs)

        self.peer_record_map[peer_id] = PeerRecordState(envelope, record.seq)
        self.peer_data_map[peer_id].clear_addrs()
        self.add_addrs(peer_id, list(new_addrs), ttl)

        return True
    def consume_peer_records(self, envelopes: list[Envelope], ttl: int) -> list[bool]:
        """Consume multiple peer records in a single operation."""
        results = []
        for envelope in envelopes:
            results.append(self.consume_peer_record(envelope, ttl))
        return results

    def get_peer_record(self, peer_id: ID) -> Envelope | None:
        """
        Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
        and is still relevant.

        First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
        Then it checks whether the peer has valid, unexpired addresses before
        returning the associated envelope.

        :param peer_id: The peer to look up.
        :return: The signed Envelope if the peer is known and has valid
            addresses; None otherwise.
        """
        self.maybe_delete_peer_record(peer_id)

        # Check if the peer has any valid addresses
        if (
            peer_id in self.peer_data_map
            and not self.peer_data_map[peer_id].is_expired()
        ):
            state = self.peer_record_map.get(peer_id)
            if state is not None:
                return state.envelope
        return None

    # -------ADDR-BOOK--------

    def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int = 0) -> None:
        """
        :param peer_id: peer ID to add address for
@@ -312,15 +140,6 @@ class PeerStore(IPeerStore):
        peer_data.set_ttl(ttl)
        peer_data.update_last_identified()

        if peer_id in self.addr_update_channels:
            for addr in addrs:
                try:
                    self.addr_update_channels[peer_id].send_nowait(addr)
                except trio.WouldBlock:
                    pass  # Or consider logging / dropping / replacing stream

        self.maybe_delete_peer_record(peer_id)

    def addrs(self, peer_id: ID) -> list[Multiaddr]:
        """
        :param peer_id: peer ID to get addrs for
@@ -344,11 +163,9 @@ class PeerStore(IPeerStore):
        if peer_id in self.peer_data_map:
            self.peer_data_map[peer_id].clear_addrs()

        self.maybe_delete_peer_record(peer_id)

    def peers_with_addrs(self) -> list[ID]:
        """
        :return: all of the peer IDs which has addrsfloat stored in peer store
        :return: all of the peer IDs which has addrs stored in peer store
        """
        # Add all peers with at least 1 addr to output
        output: list[ID] = []
@@ -362,27 +179,6 @@ class PeerStore(IPeerStore):
            peer_data.clear_addrs()
        return output

    async def addr_stream(self, peer_id: ID) -> AsyncIterable[Multiaddr]:
        """
        Returns an async stream of newly added addresses for the given peer.

        This function allows consumers to subscribe to address updates for a peer
        and receive each new address as it is added via `add_addr` or `add_addrs`.

        :param peer_id: The ID of the peer to monitor address updates for.
        :return: An async iterator yielding Multiaddr instances as they are added.
        """
        send: MemorySendChannel[Multiaddr]
        receive: MemoryReceiveChannel[Multiaddr]

        send, receive = trio.open_memory_channel(0)
        self.addr_update_channels[peer_id] = send

        async for addr in receive:
            yield addr

    # -------KEY-BOOK---------

    def add_pubkey(self, peer_id: ID, pubkey: PublicKey) -> None:
        """
        :param peer_id: peer ID to add public key for
@@ -443,45 +239,6 @@ class PeerStore(IPeerStore):
        self.add_pubkey(peer_id, key_pair.public_key)
        self.add_privkey(peer_id, key_pair.private_key)

    def peer_with_keys(self) -> list[ID]:
        """Returns the peer_ids for which keys are stored"""
        return [
            peer_id
            for peer_id, pdata in self.peer_data_map.items()
            if pdata.pubkey is not None
        ]

    def clear_keydata(self, peer_id: ID) -> None:
        """Clears the keys of the peer"""
        peer_data = self.peer_data_map[peer_id]
        peer_data.clear_keydata()

    # --------METRICS--------

    def record_latency(self, peer_id: ID, RTT: float) -> None:
        """
        Records a new latency measurement for the given peer
        using Exponentially Weighted Moving Average (EWMA)

        :param peer_id: peer ID to record latency for
        :param RTT: the new latency value (round trip time)
        """
        peer_data = self.peer_data_map[peer_id]
        peer_data.record_latency(RTT)

    def latency_EWMA(self, peer_id: ID) -> float:
        """
        :param peer_id: peer ID to get the latency EWMA for
        :return: The latency EWMA value for that peer
        """
        peer_data = self.peer_data_map[peer_id]
        return peer_data.latency_EWMA()

    def clear_metrics(self, peer_id: ID) -> None:
        """Clear the latency metrics"""
        peer_data = self.peer_data_map[peer_id]
        peer_data.clear_metrics()


class PeerStoreError(KeyError):
    """Raised when peer ID is not found in peer store."""

@@ -1,5 +1,3 @@
import trio

from libp2p.abc import (
    IMultiselectCommunicator,
    IMultiselectMuxer,
@@ -16,7 +14,6 @@ from .exceptions import (

MULTISELECT_PROTOCOL_ID = "/multistream/1.0.0"
PROTOCOL_NOT_FOUND_MSG = "na"
DEFAULT_NEGOTIATE_TIMEOUT = 5


class Multiselect(IMultiselectMuxer):
@@ -50,68 +47,47 @@ class Multiselect(IMultiselectMuxer):

    # FIXME: Make TProtocol Optional[TProtocol] to keep types consistent
    async def negotiate(
        self,
        communicator: IMultiselectCommunicator,
        negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
        self, communicator: IMultiselectCommunicator
    ) -> tuple[TProtocol, StreamHandlerFn | None]:
        """
        Negotiate performs protocol selection.

        :param stream: stream to negotiate on
        :param negotiate_timeout: timeout for negotiation
        :return: selected protocol name, handler function
        :raise MultiselectError: raised when negotiation failed
        """
        try:
            with trio.fail_after(negotiate_timeout):
                await self.handshake(communicator)
        await self.handshake(communicator)

        while True:
                while True:
                    try:
                        command = await communicator.read()
                    except MultiselectCommunicatorError as error:
                        raise MultiselectError() from error

            if command == "ls":
                supported_protocols = [p for p in self.handlers.keys() if p is not None]
                response = "\n".join(supported_protocols) + "\n"

                try:
                    await communicator.write(response)
                except MultiselectCommunicatorError as error:
                    raise MultiselectError() from error

            else:
                protocol = TProtocol(command)
                if protocol in self.handlers:
                    try:
                        command = await communicator.read()
                        await communicator.write(protocol)
                    except MultiselectCommunicatorError as error:
                        raise MultiselectError() from error

                    if command == "ls":
                        supported_protocols = [
                            p for p in self.handlers.keys() if p is not None
                        ]
                        response = "\n".join(supported_protocols) + "\n"
                    return protocol, self.handlers[protocol]
                try:
                    await communicator.write(PROTOCOL_NOT_FOUND_MSG)
                except MultiselectCommunicatorError as error:
                    raise MultiselectError() from error

                        try:
                            await communicator.write(response)
                        except MultiselectCommunicatorError as error:
                            raise MultiselectError() from error

                    else:
                        protocol = TProtocol(command)
                        if protocol in self.handlers:
                            try:
                                await communicator.write(protocol)
                            except MultiselectCommunicatorError as error:
                                raise MultiselectError() from error

                            return protocol, self.handlers[protocol]
                        try:
                            await communicator.write(PROTOCOL_NOT_FOUND_MSG)
                        except MultiselectCommunicatorError as error:
                            raise MultiselectError() from error

        raise MultiselectError("Negotiation failed: no matching protocol")
        except trio.TooSlowError:
            raise MultiselectError("handshake read timeout")

    def get_protocols(self) -> tuple[TProtocol | None, ...]:
        """
        Retrieve the protocols for which handlers have been registered.

        Returns
        -------
        tuple[TProtocol, ...]
            A tuple of registered protocol names.

        """
        return tuple(self.handlers.keys())
        raise MultiselectError("Negotiation failed: no matching protocol")

    async def handshake(self, communicator: IMultiselectCommunicator) -> None:
        """

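Both versions of the negotiate loop above implement the same multistream-select server rule. Stripped of I/O and error handling, it reduces to this standalone sketch (the `respond` name and dict shape are illustrative):

```python
PROTOCOL_NOT_FOUND_MSG = "na"


def respond(handlers: dict, command: str) -> str:
    # "ls" lists every registered protocol; a known protocol name is
    # echoed back to confirm selection; anything else is answered "na".
    if command == "ls":
        return "\n".join(p for p in handlers if p is not None) + "\n"
    if command in handlers:
        return command
    return PROTOCOL_NOT_FOUND_MSG


handlers = {"/echo/1.0.0": None, "/chat/1.0.0": None}
print(respond(handlers, "ls"))           # "/echo/1.0.0\n/chat/1.0.0\n"
print(respond(handlers, "/echo/1.0.0"))  # "/echo/1.0.0"
print(respond(handlers, "/nope/1.0.0"))  # "na"
```

Echoing the protocol name back is what lets the dialer treat any reply other than its offer (or `na`) as a negotiation failure.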
@ -2,8 +2,6 @@ from collections.abc import (
|
||||
Sequence,
|
||||
)
|
||||
|
||||
import trio
|
||||
|
||||
from libp2p.abc import (
|
||||
IMultiselectClient,
|
||||
IMultiselectCommunicator,
|
||||
@ -19,7 +17,6 @@ from .exceptions import (
|
||||
|
||||
MULTISELECT_PROTOCOL_ID = "/multistream/1.0.0"
|
||||
PROTOCOL_NOT_FOUND_MSG = "na"
|
||||
DEFAULT_NEGOTIATE_TIMEOUT = 5
|
||||
|
||||
|
||||
class MultiselectClient(IMultiselectClient):
|
||||
@ -43,7 +40,6 @@ class MultiselectClient(IMultiselectClient):
|
||||
|
||||
try:
|
||||
handshake_contents = await communicator.read()
|
||||
|
||||
except MultiselectCommunicatorError as error:
|
||||
raise MultiselectClientError() from error
|
||||
|
||||
@ -51,10 +47,7 @@ class MultiselectClient(IMultiselectClient):
|
||||
raise MultiselectClientError("multiselect protocol ID mismatch")
|
||||
|
||||
async def select_one_of(
|
||||
self,
|
||||
protocols: Sequence[TProtocol],
|
||||
communicator: IMultiselectCommunicator,
negotitate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
self, protocols: Sequence[TProtocol], communicator: IMultiselectCommunicator
) -> TProtocol:
"""
For each protocol, send message to multiselect selecting protocol and
@@ -63,32 +56,22 @@ class MultiselectClient(IMultiselectClient):

:param protocol: protocol to select
:param communicator: communicator to use to communicate with counterparty
:param negotiate_timeout: timeout for negotiation
:return: selected protocol
:raise MultiselectClientError: raised when protocol negotiation failed
"""
try:
with trio.fail_after(negotitate_timeout):
await self.handshake(communicator)
await self.handshake(communicator)

for protocol in protocols:
try:
selected_protocol = await self.try_select(
communicator, protocol
)
return selected_protocol
except MultiselectClientError:
pass
for protocol in protocols:
try:
selected_protocol = await self.try_select(communicator, protocol)
return selected_protocol
except MultiselectClientError:
pass

raise MultiselectClientError("protocols not supported")
except trio.TooSlowError:
raise MultiselectClientError("response timed out")
raise MultiselectClientError("protocols not supported")

async def query_multistream_command(
self,
communicator: IMultiselectCommunicator,
command: str,
response_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
self, communicator: IMultiselectCommunicator, command: str
) -> list[str]:
"""
Send a multistream-select command over the given communicator and return
@@ -96,32 +79,26 @@ class MultiselectClient(IMultiselectClient):

:param communicator: communicator to use to communicate with counterparty
:param command: supported multistream-select command (e.g., ls)
:param negotiate_timeout: timeout for negotiation
:raise MultiselectClientError: If the communicator fails to process data.
:return: list of strings representing the response from peer.
"""
await self.handshake(communicator)

if command == "ls":
try:
await communicator.write("ls")
except MultiselectCommunicatorError as error:
raise MultiselectClientError() from error
else:
raise ValueError("Command not supported")

try:
with trio.fail_after(response_timeout):
await self.handshake(communicator)
response = await communicator.read()
response_list = response.strip().splitlines()
except MultiselectCommunicatorError as error:
raise MultiselectClientError() from error

if command == "ls":
try:
await communicator.write("ls")
except MultiselectCommunicatorError as error:
raise MultiselectClientError() from error
else:
raise ValueError("Command not supported")

try:
response = await communicator.read()
response_list = response.strip().splitlines()

except MultiselectCommunicatorError as error:
raise MultiselectClientError() from error

return response_list
except trio.TooSlowError:
raise MultiselectClientError("command response timed out")
return response_list

async def try_select(
self, communicator: IMultiselectCommunicator, protocol: TProtocol
@@ -141,7 +118,6 @@ class MultiselectClient(IMultiselectClient):

try:
response = await communicator.read()

except MultiselectCommunicatorError as error:
raise MultiselectClientError() from error

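The reads and writes above go through the `IMultiselectCommunicator`, which frames each multistream-select message as a varint-length-prefixed, newline-terminated string. A minimal stdlib-only sketch of that wire framing, assuming the standard multistream-select format (function names here are illustrative, not the py-libp2p API):

```python
def encode_uvarint(n: int) -> bytes:
    # Unsigned LEB128 varint: 7 bits per byte, MSB set on continuation bytes
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)


def encode_msg(text: str) -> bytes:
    # Each multistream-select message is newline-terminated, then length-prefixed
    payload = text.encode() + b"\n"
    return encode_uvarint(len(payload)) + payload


def decode_msg(buf: bytes) -> tuple[str, bytes]:
    # Decode the varint length, then split off one message; return the rest
    shift = n = i = 0
    while True:
        b = buf[i]
        i += 1
        n |= (b & 0x7F) << shift
        if not (b & 0x80):
            break
        shift += 7
    payload, rest = buf[i:i + n], buf[i + n:]
    return payload[:-1].decode(), rest
```

For example, the handshake string `/multistream/1.0.0` plus its newline is 19 bytes, so it is framed with the single prefix byte `0x13`.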
@@ -12,9 +12,15 @@ from libp2p.abc import (
from libp2p.custom_types import (
TProtocol,
)
from libp2p.network.stream.exceptions import (
StreamClosed,
)
from libp2p.peer.id import (
ID,
)
from libp2p.utils import (
encode_varint_prefixed,
)

from .exceptions import (
PubsubRouterError,
@@ -114,7 +120,13 @@ class FloodSub(IPubsubRouter):
if peer_id not in pubsub.peers:
continue
stream = pubsub.peers[peer_id]
await pubsub.write_msg(stream, rpc_msg)
# FIXME: We should add a `WriteMsg` similar to write delimited messages.
# Ref: https://github.com/libp2p/go-libp2p-pubsub/blob/master/comm.go#L107
try:
await stream.write(encode_varint_prefixed(rpc_msg.SerializeToString()))
except StreamClosed:
logger.debug("Fail to publish message to %s: stream closed", peer_id)
pubsub._handle_dead_peer(peer_id)

async def join(self, topic: str) -> None:
"""

@@ -24,6 +24,9 @@ from libp2p.abc import (
from libp2p.custom_types import (
TProtocol,
)
from libp2p.network.stream.exceptions import (
StreamClosed,
)
from libp2p.peer.id import (
ID,
)
@@ -41,6 +44,9 @@ from libp2p.pubsub import (
from libp2p.tools.async_service import (
Service,
)
from libp2p.utils import (
encode_varint_prefixed,
)

from .exceptions import (
NoPubsubAttached,
@@ -261,10 +267,13 @@ class GossipSub(IPubsubRouter, Service):
if peer_id not in self.pubsub.peers:
continue
stream = self.pubsub.peers[peer_id]

# TODO: Go use `sendRPC`, which possibly piggybacks gossip/control messages.
await self.pubsub.write_msg(stream, rpc_msg)

# FIXME: We should add a `WriteMsg` similar to write delimited messages.
# Ref: https://github.com/libp2p/go-libp2p-pubsub/blob/master/comm.go#L107
try:
await stream.write(encode_varint_prefixed(rpc_msg.SerializeToString()))
except StreamClosed:
logger.debug("Fail to publish message to %s: stream closed", peer_id)
self.pubsub._handle_dead_peer(peer_id)
for topic in pubsub_msg.topicIDs:
self.time_since_last_publish[topic] = int(time.time())

@@ -820,6 +829,8 @@ class GossipSub(IPubsubRouter, Service):

packet.publish.extend(msgs_to_forward)

# 2) Serialize that packet
rpc_msg: bytes = packet.SerializeToString()
if self.pubsub is None:
raise NoPubsubAttached

@@ -833,7 +844,14 @@ class GossipSub(IPubsubRouter, Service):
peer_stream = self.pubsub.peers[sender_peer_id]

# 4) And write the packet to the stream
await self.pubsub.write_msg(peer_stream, packet)
try:
await peer_stream.write(encode_varint_prefixed(rpc_msg))
except StreamClosed:
logger.debug(
"Fail to responed to iwant request from %s: stream closed",
sender_peer_id,
)
self.pubsub._handle_dead_peer(sender_peer_id)

async def handle_graft(
self, graft_msg: rpc_pb2.ControlGraft, sender_peer_id: ID
@@ -975,6 +993,8 @@ class GossipSub(IPubsubRouter, Service):
packet: rpc_pb2.RPC = rpc_pb2.RPC()
packet.control.CopyFrom(control_msg)

rpc_msg: bytes = packet.SerializeToString()

# Get stream for peer from pubsub
if to_peer not in self.pubsub.peers:
logger.debug(
@@ -984,4 +1004,8 @@ class GossipSub(IPubsubRouter, Service):
peer_stream = self.pubsub.peers[to_peer]

# Write rpc to stream
await self.pubsub.write_msg(peer_stream, packet)
try:
await peer_stream.write(encode_varint_prefixed(rpc_msg))
except StreamClosed:
logger.debug("Fail to emit control message to %s: stream closed", to_peer)
self.pubsub._handle_dead_peer(to_peer)

@@ -66,7 +66,6 @@ from libp2p.utils import (
encode_varint_prefixed,
read_varint_prefixed_bytes,
)
from libp2p.utils.varint import encode_uvarint

from .pb import (
rpc_pb2,
@@ -102,9 +101,6 @@ class TopicValidator(NamedTuple):
is_async: bool


MAX_CONCURRENT_VALIDATORS = 10


class Pubsub(Service, IPubsub):
host: IHost

@@ -112,7 +108,6 @@ class Pubsub(Service, IPubsub):

peer_receive_channel: trio.MemoryReceiveChannel[ID]
dead_peer_receive_channel: trio.MemoryReceiveChannel[ID]
_validator_semaphore: trio.Semaphore

seen_messages: LastSeenCache

@@ -147,7 +142,6 @@ class Pubsub(Service, IPubsub):
msg_id_constructor: Callable[
[rpc_pb2.Message], bytes
] = get_peer_and_seqno_msg_id,
max_concurrent_validator_count: int = MAX_CONCURRENT_VALIDATORS,
) -> None:
"""
Construct a new Pubsub object, which is responsible for handling all
@@ -173,7 +167,6 @@ class Pubsub(Service, IPubsub):
# Therefore, we can only close from the receive side.
self.peer_receive_channel = peer_receive
self.dead_peer_receive_channel = dead_peer_receive
self._validator_semaphore = trio.Semaphore(max_concurrent_validator_count)
# Register a notifee
self.host.get_network().register_notifee(
PubsubNotifee(peer_send, dead_peer_send)
@@ -663,11 +656,7 @@ class Pubsub(Service, IPubsub):

logger.debug("successfully published message %s", msg)

async def validate_msg(
self,
msg_forwarder: ID,
msg: rpc_pb2.Message,
) -> None:
async def validate_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
"""
Validate the received message.

@@ -690,34 +679,23 @@ class Pubsub(Service, IPubsub):
if not validator(msg_forwarder, msg):
raise ValidationError(f"Validation failed for msg={msg}")

# TODO: Implement throttle on async validators

if len(async_topic_validators) > 0:
# Appends to lists are thread safe in CPython
results: list[bool] = []
results = []

async def run_async_validator(func: AsyncValidatorFn) -> None:
result = await func(msg_forwarder, msg)
results.append(result)

async with trio.open_nursery() as nursery:
for async_validator in async_topic_validators:
nursery.start_soon(
self._run_async_validator,
async_validator,
msg_forwarder,
msg,
results,
)
nursery.start_soon(run_async_validator, async_validator)

if not all(results):
raise ValidationError(f"Validation failed for msg={msg}")

async def _run_async_validator(
self,
func: AsyncValidatorFn,
msg_forwarder: ID,
msg: rpc_pb2.Message,
results: list[bool],
) -> None:
async with self._validator_semaphore:
result = await func(msg_forwarder, msg)
results.append(result)

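The hunk above caps validator concurrency with a shared `trio.Semaphore`: every validator task acquires the semaphore before running, so at most `max_concurrent_validator_count` run at once while results accumulate in a shared list. The source is trio-based; the same bounded-concurrency pattern can be sketched with stdlib `asyncio` (names here are illustrative, not the py-libp2p API):

```python
import asyncio

MAX_CONCURRENT_VALIDATORS = 10


async def run_validators(validators, msg, limit=MAX_CONCURRENT_VALIDATORS):
    # Shared semaphore: at most `limit` validators execute concurrently
    sem = asyncio.Semaphore(limit)
    results: list[bool] = []

    async def run_one(validator):
        async with sem:
            results.append(await validator(msg))

    # Like the trio nursery: start every task, wait for all to finish
    await asyncio.gather(*(run_one(v) for v in validators))
    return all(results)
```

As in `validate_msg`, the message is rejected if any validator returns `False`; appending to a shared list from tasks on one event loop is safe because only one task runs at a time.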
async def push_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
"""
Push a pubsub message to others.
@@ -800,43 +778,3 @@ class Pubsub(Service, IPubsub):

def _is_subscribed_to_msg(self, msg: rpc_pb2.Message) -> bool:
return any(topic in self.topic_ids for topic in msg.topicIDs)

async def write_msg(self, stream: INetStream, rpc_msg: rpc_pb2.RPC) -> bool:
"""
Write an RPC message to a stream with proper error handling.

Implements WriteMsg similar to go-msgio which is used in go-libp2p
Ref: https://github.com/libp2p/go-msgio/blob/master/protoio/uvarint_writer.go#L56

:param stream: stream to write the message to
:param rpc_msg: RPC message to write
:return: True if successful, False if stream was closed
"""
try:
# Calculate message size first
msg_bytes = rpc_msg.SerializeToString()
msg_size = len(msg_bytes)

# Calculate varint size and allocate exact buffer size needed
varint_bytes = encode_uvarint(msg_size)
varint_size = len(varint_bytes)

# Allocate buffer with exact size (like Go's pool.Get())
buf = bytearray(varint_size + msg_size)

# Write varint length prefix to buffer (like Go's binary.PutUvarint())
buf[:varint_size] = varint_bytes

# Write serialized message after varint (like Go's rpc.MarshalTo())
buf[varint_size:] = msg_bytes

# Single write operation (like Go's s.Write(buf))
await stream.write(bytes(buf))
return True
except StreamClosed:
peer_id = stream.muxed_conn.peer_id
logger.debug("Fail to write message to %s: stream closed", peer_id)
self._handle_dead_peer(peer_id)
return False

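The buffer assembly in the removed `write_msg` above can be sketched standalone: compute the payload, prepend its length as an unsigned varint, and issue one exact-size write. `encode_uvarint` below is a hand-rolled stand-in for `libp2p.utils.varint.encode_uvarint`, shown for illustration:

```python
def encode_uvarint(n: int) -> bytes:
    # Unsigned LEB128 varint: low 7 bits per byte, MSB flags continuation
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)


def frame_message(msg_bytes: bytes) -> bytes:
    # Mirror of write_msg: one exact-size buffer, varint prefix then payload,
    # so the transport sees a single write instead of two
    varint = encode_uvarint(len(msg_bytes))
    buf = bytearray(len(varint) + len(msg_bytes))
    buf[: len(varint)] = varint
    buf[len(varint):] = msg_bytes
    return bytes(buf)
```

A 300-byte payload, for instance, gets the two-byte prefix `0xAC 0x02` (300 = 0b10_0101100), for a 302-byte frame.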
@@ -15,10 +15,6 @@ from libp2p.relay.circuit_v2 import (
RelayLimits,
RelayResourceManager,
Reservation,
DCUTR_PROTOCOL_ID,
DCUtRProtocol,
ReachabilityChecker,
is_private_ip,
)

__all__ = [
@@ -29,9 +25,4 @@ __all__ = [
"RelayLimits",
"RelayResourceManager",
"Reservation",
"DCUtRProtocol",
"DCUTR_PROTOCOL_ID",
"ReachabilityChecker",
"is_private_ip"

]

@@ -5,16 +5,6 @@ This package implements the Circuit Relay v2 protocol as specified in:
https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md
"""

from .dcutr import (
DCUtRProtocol,
)
from .dcutr import PROTOCOL_ID as DCUTR_PROTOCOL_ID

from .nat import (
ReachabilityChecker,
is_private_ip,
)

from .discovery import (
RelayDiscovery,
)
@@ -39,8 +29,4 @@ __all__ = [
"RelayResourceManager",
"CircuitV2Transport",
"RelayDiscovery",
"DCUtRProtocol",
"DCUTR_PROTOCOL_ID",
"ReachabilityChecker",
"is_private_ip",
]

@@ -1,580 +0,0 @@
"""
Direct Connection Upgrade through Relay (DCUtR) protocol implementation.

This module implements the DCUtR protocol as specified in:
https://github.com/libp2p/specs/blob/master/relay/DCUtR.md

DCUtR enables peers behind NAT to establish direct connections
using hole punching techniques.
"""

import logging
import time
from typing import Any

from multiaddr import Multiaddr
import trio

from libp2p.abc import (
IHost,
INetConn,
INetStream,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from libp2p.relay.circuit_v2.nat import (
ReachabilityChecker,
)
from libp2p.relay.circuit_v2.pb.dcutr_pb2 import (
HolePunch,
)
from libp2p.tools.async_service import (
Service,
)

logger = logging.getLogger(__name__)

# Protocol ID for DCUtR
PROTOCOL_ID = TProtocol("/libp2p/dcutr")

# Maximum message size for DCUtR (4KiB as per spec)
MAX_MESSAGE_SIZE = 4 * 1024

# Timeouts
STREAM_READ_TIMEOUT = 30  # seconds
STREAM_WRITE_TIMEOUT = 30  # seconds
DIAL_TIMEOUT = 10  # seconds

# Maximum number of hole punch attempts per peer
MAX_HOLE_PUNCH_ATTEMPTS = 5

# Delay between retry attempts
HOLE_PUNCH_RETRY_DELAY = 30  # seconds

# Maximum observed addresses to exchange
MAX_OBSERVED_ADDRS = 20


class DCUtRProtocol(Service):
"""
DCUtRProtocol implements the Direct Connection Upgrade through Relay protocol.

This protocol allows two NATed peers to establish direct connections through
hole punching, after they have established an initial connection through a relay.
"""

def __init__(self, host: IHost):
"""
Initialize the DCUtR protocol.

Parameters
----------
host : IHost
The libp2p host this protocol is running on

"""
super().__init__()
self.host = host
self.event_started = trio.Event()
self._hole_punch_attempts: dict[ID, int] = {}
self._direct_connections: set[ID] = set()
self._in_progress: set[ID] = set()
self._reachability_checker = ReachabilityChecker(host)
self._nursery: trio.Nursery | None = None

async def run(self, *, task_status: Any = trio.TASK_STATUS_IGNORED) -> None:
"""Run the protocol service."""
try:
# Register the DCUtR protocol handler
logger.debug("Registering DCUtR protocol handler")
self.host.set_stream_handler(PROTOCOL_ID, self._handle_dcutr_stream)

# Signal that we're ready
self.event_started.set()

# Start the service
async with trio.open_nursery() as nursery:
self._nursery = nursery
task_status.started()
logger.debug("DCUtR protocol service started")

# Wait for service to be stopped
await self.manager.wait_finished()
finally:
# Clean up
try:
# Use empty async lambda instead of None for stream handler
async def empty_handler(_: INetStream) -> None:
pass

self.host.set_stream_handler(PROTOCOL_ID, empty_handler)
logger.debug("DCUtR protocol handler unregistered")
except Exception as e:
logger.error("Error unregistering DCUtR protocol handler: %s", str(e))

# Clear state
self._hole_punch_attempts.clear()
self._direct_connections.clear()
self._in_progress.clear()
self._nursery = None

async def _handle_dcutr_stream(self, stream: INetStream) -> None:
"""
Handle incoming DCUtR streams.

Parameters
----------
stream : INetStream
The incoming stream

"""
try:
# Get the remote peer ID
remote_peer_id = stream.muxed_conn.peer_id
logger.debug("Received DCUtR stream from peer %s", remote_peer_id)

# Check if we already have a direct connection
if await self._have_direct_connection(remote_peer_id):
logger.debug(
"Already have direct connection to %s, closing stream",
remote_peer_id,
)
await stream.close()
return

# Check if there's already an active hole punch attempt
if remote_peer_id in self._in_progress:
logger.debug("Hole punch already in progress with %s", remote_peer_id)
# Let the existing attempt continue
await stream.close()
return

# Mark as in progress
self._in_progress.add(remote_peer_id)

try:
# Read the CONNECT message
with trio.fail_after(STREAM_READ_TIMEOUT):
msg_bytes = await stream.read(MAX_MESSAGE_SIZE)

# Parse the message
connect_msg = HolePunch()
connect_msg.ParseFromString(msg_bytes)

# Verify it's a CONNECT message
if connect_msg.type != HolePunch.CONNECT:
logger.warning("Expected CONNECT message, got %s", connect_msg.type)
await stream.close()
return

logger.debug(
"Received CONNECT message from %s with %d addresses",
remote_peer_id,
len(connect_msg.ObsAddrs),
)

# Process observed addresses from the peer
peer_addrs = self._decode_observed_addrs(list(connect_msg.ObsAddrs))
logger.debug("Decoded %d valid addresses from peer", len(peer_addrs))

# Store the addresses in the peerstore
if peer_addrs:
self.host.get_peerstore().add_addrs(
remote_peer_id, peer_addrs, 10 * 60
)  # 10 minute TTL

# Send our CONNECT message with our observed addresses
our_addrs = await self._get_observed_addrs()
response = HolePunch()
response.type = HolePunch.CONNECT
response.ObsAddrs.extend(our_addrs)

with trio.fail_after(STREAM_WRITE_TIMEOUT):
await stream.write(response.SerializeToString())

logger.debug(
"Sent CONNECT response to %s with %d addresses",
remote_peer_id,
len(our_addrs),
)

# Wait for SYNC message
with trio.fail_after(STREAM_READ_TIMEOUT):
sync_bytes = await stream.read(MAX_MESSAGE_SIZE)

# Parse the SYNC message
sync_msg = HolePunch()
sync_msg.ParseFromString(sync_bytes)

# Verify it's a SYNC message
if sync_msg.type != HolePunch.SYNC:
logger.warning("Expected SYNC message, got %s", sync_msg.type)
await stream.close()
return

logger.debug("Received SYNC message from %s", remote_peer_id)

# Perform hole punch
success = await self._perform_hole_punch(remote_peer_id, peer_addrs)

if success:
logger.info(
"Successfully established direct connection with %s",
remote_peer_id,
)
else:
logger.warning(
"Failed to establish direct connection with %s", remote_peer_id
)

except trio.TooSlowError:
logger.warning("Timeout in DCUtR protocol with peer %s", remote_peer_id)
except Exception as e:
logger.error(
"Error in DCUtR protocol with peer %s: %s", remote_peer_id, str(e)
)
finally:
# Clean up
self._in_progress.discard(remote_peer_id)
await stream.close()

except Exception as e:
logger.error("Error handling DCUtR stream: %s", str(e))
await stream.close()

async def initiate_hole_punch(self, peer_id: ID) -> bool:
"""
Initiate a hole punch with a peer.

Parameters
----------
peer_id : ID
The peer to hole punch with

Returns
-------
bool
True if hole punch was successful, False otherwise

"""
# Check if we already have a direct connection
if await self._have_direct_connection(peer_id):
logger.debug("Already have direct connection to %s", peer_id)
return True

# Check if there's already an active hole punch attempt
if peer_id in self._in_progress:
logger.debug("Hole punch already in progress with %s", peer_id)
return False

# Check if we've exceeded the maximum number of attempts
attempts = self._hole_punch_attempts.get(peer_id, 0)
if attempts >= MAX_HOLE_PUNCH_ATTEMPTS:
logger.warning("Maximum hole punch attempts reached for peer %s", peer_id)
return False

# Mark as in progress and increment attempt counter
self._in_progress.add(peer_id)
self._hole_punch_attempts[peer_id] = attempts + 1

try:
# Open a DCUtR stream to the peer
logger.debug("Opening DCUtR stream to peer %s", peer_id)
stream = await self.host.new_stream(peer_id, [PROTOCOL_ID])
if not stream:
logger.warning("Failed to open DCUtR stream to peer %s", peer_id)
return False

try:
# Send our CONNECT message with our observed addresses
our_addrs = await self._get_observed_addrs()
connect_msg = HolePunch()
connect_msg.type = HolePunch.CONNECT
connect_msg.ObsAddrs.extend(our_addrs)

start_time = time.time()
with trio.fail_after(STREAM_WRITE_TIMEOUT):
await stream.write(connect_msg.SerializeToString())

logger.debug(
"Sent CONNECT message to %s with %d addresses",
peer_id,
len(our_addrs),
)

# Receive the peer's CONNECT message
with trio.fail_after(STREAM_READ_TIMEOUT):
resp_bytes = await stream.read(MAX_MESSAGE_SIZE)

# Calculate RTT
rtt = time.time() - start_time

# Parse the response
resp = HolePunch()
resp.ParseFromString(resp_bytes)

# Verify it's a CONNECT message
if resp.type != HolePunch.CONNECT:
logger.warning("Expected CONNECT message, got %s", resp.type)
return False

logger.debug(
"Received CONNECT response from %s with %d addresses",
peer_id,
len(resp.ObsAddrs),
)

# Process observed addresses from the peer
peer_addrs = self._decode_observed_addrs(list(resp.ObsAddrs))
logger.debug("Decoded %d valid addresses from peer", len(peer_addrs))

# Store the addresses in the peerstore
if peer_addrs:
self.host.get_peerstore().add_addrs(
peer_id, peer_addrs, 10 * 60
)  # 10 minute TTL

# Send SYNC message with timing information
# We'll use a future time that's 2*RTT from now to ensure both sides
# are ready
punch_time = time.time() + (2 * rtt) + 1  # Add 1 second buffer

sync_msg = HolePunch()
sync_msg.type = HolePunch.SYNC

with trio.fail_after(STREAM_WRITE_TIMEOUT):
await stream.write(sync_msg.SerializeToString())

logger.debug("Sent SYNC message to %s", peer_id)

# Perform the synchronized hole punch
success = await self._perform_hole_punch(
peer_id, peer_addrs, punch_time
)

if success:
logger.info(
"Successfully established direct connection with %s", peer_id
)
return True
else:
logger.warning(
"Failed to establish direct connection with %s", peer_id
)
return False

except trio.TooSlowError:
logger.warning("Timeout in DCUtR protocol with peer %s", peer_id)
return False
except Exception as e:
logger.error(
"Error in DCUtR protocol with peer %s: %s", peer_id, str(e)
)
return False
finally:
await stream.close()

except Exception as e:
logger.error(
"Error initiating hole punch with peer %s: %s", peer_id, str(e)
)
return False
finally:
self._in_progress.discard(peer_id)

# This should never be reached, but add explicit return for type checking
return False

async def _perform_hole_punch(
self, peer_id: ID, addrs: list[Multiaddr], punch_time: float | None = None
) -> bool:
"""
Perform a hole punch attempt with a peer.

Parameters
----------
peer_id : ID
The peer to hole punch with
addrs : list[Multiaddr]
List of addresses to try
punch_time : Optional[float]
Time to perform the punch (if None, do it immediately)

Returns
-------
bool
True if hole punch was successful

"""
if not addrs:
logger.warning("No addresses to try for hole punch with %s", peer_id)
return False

# If punch_time is specified, wait until that time
if punch_time is not None:
now = time.time()
if punch_time > now:
wait_time = punch_time - now
logger.debug("Waiting %.2f seconds before hole punch", wait_time)
await trio.sleep(wait_time)

# Try to dial each address
logger.debug(
"Starting hole punch with peer %s using %d addresses", peer_id, len(addrs)
)

# Filter to only include non-relay addresses
direct_addrs = [
addr for addr in addrs if not str(addr).startswith("/p2p-circuit")
]

if not direct_addrs:
logger.warning("No direct addresses found for peer %s", peer_id)
return False

# Start dialing attempts in parallel
async with trio.open_nursery() as nursery:
for addr in direct_addrs[
:5
]:  # Limit to 5 addresses to avoid too many connections
nursery.start_soon(self._dial_peer, peer_id, addr)

# Check if we established a direct connection
return await self._have_direct_connection(peer_id)
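The synchronization logic in the removed code boils down to two pure computations: dropping relay addresses before dialing, and scheduling the punch 2×RTT plus a one-second buffer into the future (as `initiate_hole_punch` does above). A sketch with plain strings standing in for `Multiaddr` objects, illustrative only:

```python
def filter_direct_addrs(addrs: list[str], limit: int = 5) -> list[str]:
    # Keep only non-relay addresses, capped like the parallel dial loop above
    return [a for a in addrs if not a.startswith("/p2p-circuit")][:limit]


def punch_time(now: float, rtt: float, buffer_s: float = 1.0) -> float:
    # Schedule the punch 2*RTT plus a buffer after the CONNECT exchange,
    # giving both sides time to process the SYNC before dialing
    return now + 2 * rtt + buffer_s
```

For example, with a measured RTT of 0.5 s the initiator waits 2 s before dialing the filtered addresses in parallel.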

async def _dial_peer(self, peer_id: ID, addr: Multiaddr) -> None:
"""
Attempt to dial a peer at a specific address.

Parameters
----------
peer_id : ID
The peer to dial
addr : Multiaddr
The address to dial

"""
try:
logger.debug("Attempting to dial %s at %s", peer_id, addr)

# Create peer info
peer_info = PeerInfo(peer_id, [addr])

# Try to connect with timeout
with trio.fail_after(DIAL_TIMEOUT):
await self.host.connect(peer_info)

logger.info("Successfully connected to %s at %s", peer_id, addr)

# Add to direct connections set
self._direct_connections.add(peer_id)

except trio.TooSlowError:
logger.debug("Timeout dialing %s at %s", peer_id, addr)
except Exception as e:
logger.debug("Error dialing %s at %s: %s", peer_id, addr, str(e))

async def _have_direct_connection(self, peer_id: ID) -> bool:
"""
Check if we already have a direct connection to a peer.

Parameters
----------
peer_id : ID
The peer to check

Returns
-------
bool
True if we have a direct connection, False otherwise

"""
# Check our direct connections cache first
if peer_id in self._direct_connections:
return True

# Check if the peer is connected
network = self.host.get_network()
conn_or_conns = network.connections.get(peer_id)
if not conn_or_conns:
return False

# Handle both single connection and list of connections
connections: list[INetConn] = (
[conn_or_conns] if not isinstance(conn_or_conns, list) else conn_or_conns
)

# Check if any connection is direct (not relayed)
for conn in connections:
# Get the transport addresses
addrs = conn.get_transport_addresses()

# If any address doesn't start with /p2p-circuit, it's a direct connection
if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
# Cache this result
self._direct_connections.add(peer_id)
return True

return False

async def _get_observed_addrs(self) -> list[bytes]:
"""
Get our observed addresses to share with the peer.

Returns
-------
List[bytes]
List of observed addresses as bytes

"""
# Get all listen addresses
addrs = self.host.get_addrs()

# Filter out relay addresses
direct_addrs = [
addr for addr in addrs if not str(addr).startswith("/p2p-circuit")
]

# Limit the number of addresses
if len(direct_addrs) > MAX_OBSERVED_ADDRS:
direct_addrs = direct_addrs[:MAX_OBSERVED_ADDRS]

# Convert to bytes
addr_bytes = [addr.to_bytes() for addr in direct_addrs]

return addr_bytes

def _decode_observed_addrs(self, addr_bytes: list[bytes]) -> list[Multiaddr]:
"""
Decode observed addresses received from a peer.

Parameters
----------
addr_bytes : List[bytes]
The encoded addresses

Returns
-------
List[Multiaddr]
The decoded multiaddresses

"""
result = []

for addr_byte in addr_bytes:
try:
addr = Multiaddr(addr_byte)
# Validate the address (basic check)
if str(addr).startswith("/ip"):
result.append(addr)
except Exception as e:
logger.debug("Error decoding multiaddr: %s", str(e))

return result
@@ -234,8 +234,7 @@ class RelayDiscovery(Service):

if not callable(proto_getter):
return None
if peer_id not in peerstore.peer_ids():
return None

try:
# Try to get protocols
proto_result = proto_getter(peer_id)
@@ -284,6 +283,8 @@ class RelayDiscovery(Service):
return None

mux = self.host.get_mux()
if not hasattr(mux, "protocols"):
return None

peer_protocols = set()
# Get protocols from mux with proper type safety
@@ -292,9 +293,7 @@ class RelayDiscovery(Service):
# Get protocols with proper typing
mux_protocols = mux.get_protocols()
if isinstance(mux_protocols, (list, tuple)):
available_protocols = [
p for p in mux.get_protocols() if p is not None
]
available_protocols = list(mux_protocols)

for protocol in available_protocols:
try:
@@ -314,7 +313,7 @@ class RelayDiscovery(Service):

self._protocol_cache[peer_id] = peer_protocols
protocol_str = str(PROTOCOL_ID)
for protocol in map(TProtocol, peer_protocols):
for protocol in peer_protocols:
if protocol == protocol_str:
return True
return False

@@ -1,300 +0,0 @@
"""
NAT traversal utilities for libp2p.

This module provides utilities for NAT traversal and reachability detection.
"""

import ipaddress
import logging

from multiaddr import (
Multiaddr,
)

from libp2p.abc import (
IHost,
INetConn,
)
from libp2p.peer.id import (
ID,
)

logger = logging.getLogger("libp2p.relay.circuit_v2.nat")

# Timeout for reachability checks
REACHABILITY_TIMEOUT = 10  # seconds

# Define private IP ranges
PRIVATE_IP_RANGES = [
("10.0.0.0", "10.255.255.255"),  # Class A private network: 10.0.0.0/8
("172.16.0.0", "172.31.255.255"),  # Class B private network: 172.16.0.0/12
("192.168.0.0", "192.168.255.255"),  # Class C private network: 192.168.0.0/16
]

# Link-local address range: 169.254.0.0/16
LINK_LOCAL_RANGE = ("169.254.0.0", "169.254.255.255")

# Loopback address range: 127.0.0.0/8
LOOPBACK_RANGE = ("127.0.0.0", "127.255.255.255")


def ip_to_int(ip: str) -> int:
"""
Convert an IP address to an integer.

Parameters
----------
ip : str
IP address to convert

Returns
-------
int
Integer representation of the IP

"""
try:
return int(ipaddress.IPv4Address(ip))
except ipaddress.AddressValueError:
# Handle IPv6 addresses
return int(ipaddress.IPv6Address(ip))


def is_ip_in_range(ip: str, start_range: str, end_range: str) -> bool:
"""
Check if an IP address is within a range.

Parameters
----------
ip : str
IP address to check
start_range : str
Start of the range
end_range : str
End of the range

Returns
-------
bool
True if the IP is in the range

"""
try:
ip_int = ip_to_int(ip)
start_int = ip_to_int(start_range)
end_int = ip_to_int(end_range)
return start_int <= ip_int <= end_int
except Exception:
return False

def is_private_ip(ip: str) -> bool:
|
||||
"""
|
||||
Check if an IP address is private.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
ip : str
|
||||
IP address to check
|
||||
|
||||
Returns
|
||||
-------
|
||||
bool
|
||||
True if IP is private
|
||||
|
||||
"""
|
||||
for start_range, end_range in PRIVATE_IP_RANGES:
|
||||
if is_ip_in_range(ip, start_range, end_range):
|
||||
return True
|
||||
|
||||
# Check for link-local addresses
|
||||
if is_ip_in_range(ip, *LINK_LOCAL_RANGE):
|
||||
return True
|
||||
|
||||
# Check for loopback addresses
|
||||
if is_ip_in_range(ip, *LOOPBACK_RANGE):
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
|
||||
def extract_ip_from_multiaddr(addr: Multiaddr) -> str | None:
|
||||
"""
|
||||
Extract the IP address from a multiaddr.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
addr : Multiaddr
|
||||
Multiaddr to extract from
|
||||
|
||||
Returns
|
||||
-------
|
||||
Optional[str]
|
||||
IP address or None if not found
|
||||
|
||||
"""
|
||||
# Convert to string representation
|
||||
addr_str = str(addr)
|
||||
|
||||
# Look for IPv4 address
|
||||
ipv4_start = addr_str.find("/ip4/")
|
||||
if ipv4_start != -1:
|
||||
# Extract the IPv4 address
|
||||
ipv4_end = addr_str.find("/", ipv4_start + 5)
|
||||
if ipv4_end != -1:
|
||||
return addr_str[ipv4_start + 5 : ipv4_end]
|
||||
|
||||
# Look for IPv6 address
|
||||
ipv6_start = addr_str.find("/ip6/")
|
||||
if ipv6_start != -1:
|
||||
# Extract the IPv6 address
|
||||
ipv6_end = addr_str.find("/", ipv6_start + 5)
|
||||
if ipv6_end != -1:
|
||||
return addr_str[ipv6_start + 5 : ipv6_end]
|
||||
|
||||
return None
|
||||
|
||||
|
||||
class ReachabilityChecker:
|
||||
"""
|
||||
Utility class for checking peer reachability.
|
||||
|
||||
This class assesses whether a peer's addresses are likely
|
||||
to be directly reachable or behind NAT.
|
||||
"""
|
||||
|
||||
def __init__(self, host: IHost):
|
||||
"""
|
||||
Initialize the reachability checker.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
host : IHost
|
||||
The libp2p host
|
||||
|
||||
"""
|
||||
self.host = host
|
||||
self._peer_reachability: dict[ID, bool] = {}
|
||||
self._known_public_peers: set[ID] = set()
|
||||
|
||||
def is_addr_public(self, addr: Multiaddr) -> bool:
|
||||
"""
|
||||
Check if an address is likely to be publicly reachable.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
addr : Multiaddr
|
||||
The multiaddr to check
|
||||
|
||||
Returns
|
||||
-------
|
||||
bool
|
||||
True if address is likely public
|
||||
|
||||
"""
|
||||
# Extract the IP address
|
||||
ip = extract_ip_from_multiaddr(addr)
|
||||
if not ip:
|
||||
return False
|
||||
|
||||
# Check if it's a private IP
|
||||
return not is_private_ip(ip)
|
||||
|
||||
def get_public_addrs(self, addrs: list[Multiaddr]) -> list[Multiaddr]:
|
||||
"""
|
||||
Filter a list of addresses to only include likely public ones.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
addrs : List[Multiaddr]
|
||||
List of addresses to filter
|
||||
|
||||
Returns
|
||||
-------
|
||||
List[Multiaddr]
|
||||
List of likely public addresses
|
||||
|
||||
"""
|
||||
return [addr for addr in addrs if self.is_addr_public(addr)]
|
||||
|
||||
async def check_peer_reachability(self, peer_id: ID) -> bool:
|
||||
"""
|
||||
Check if a peer is directly reachable.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
peer_id : ID
|
||||
The peer ID to check
|
||||
|
||||
Returns
|
||||
-------
|
||||
bool
|
||||
True if peer is likely directly reachable
|
||||
|
||||
"""
|
||||
# Check if we already know
|
||||
if peer_id in self._peer_reachability:
|
||||
return self._peer_reachability[peer_id]
|
||||
|
||||
# Check if the peer is connected
|
||||
network = self.host.get_network()
|
||||
connections: INetConn | list[INetConn] | None = network.connections.get(peer_id)
|
||||
if not connections:
|
||||
# Not connected, can't determine reachability
|
||||
return False
|
||||
|
||||
# Check if any connection is direct (not relayed)
|
||||
if isinstance(connections, list):
|
||||
for conn in connections:
|
||||
# Get the transport addresses
|
||||
addrs = conn.get_transport_addresses()
|
||||
|
||||
# If any address doesn't start with /p2p-circuit,
|
||||
# it's a direct connection
|
||||
if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
|
||||
self._peer_reachability[peer_id] = True
|
||||
return True
|
||||
else:
|
||||
# Handle single connection case
|
||||
addrs = connections.get_transport_addresses()
|
||||
if any(not str(addr).startswith("/p2p-circuit") for addr in addrs):
|
||||
self._peer_reachability[peer_id] = True
|
||||
return True
|
||||
|
||||
# Get the peer's addresses from peerstore
|
||||
try:
|
||||
addrs = self.host.get_peerstore().addrs(peer_id)
|
||||
# Check if peer has any public addresses
|
||||
public_addrs = self.get_public_addrs(addrs)
|
||||
if public_addrs:
|
||||
self._peer_reachability[peer_id] = True
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.debug("Error getting peer addresses: %s", str(e))
|
||||
|
||||
# Default to not directly reachable
|
||||
self._peer_reachability[peer_id] = False
|
||||
return False
|
||||
|
||||
async def check_self_reachability(self) -> tuple[bool, list[Multiaddr]]:
|
||||
"""
|
||||
Check if this host is likely directly reachable.
|
||||
|
||||
Returns
|
||||
-------
|
||||
Tuple[bool, List[Multiaddr]]
|
||||
Tuple of (is_reachable, public_addresses)
|
||||
|
||||
"""
|
||||
# Get all host addresses
|
||||
addrs = self.host.get_addrs()
|
||||
|
||||
# Filter for public addresses
|
||||
public_addrs = self.get_public_addrs(addrs)
|
||||
|
||||
# If we have public addresses, assume we're reachable
|
||||
# This is a simplified assumption - real reachability would need
|
||||
# external checking
|
||||
is_reachable = len(public_addrs) > 0
|
||||
|
||||
return is_reachable, public_addrs
|
||||
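The deleted `nat.py` hand-rolls its private-range table (RFC 1918 blocks, link-local, loopback). For reference, the same classification is available from Python's stdlib `ipaddress` module; a minimal sketch of an equivalent check (the function name here mirrors the deleted helper, the implementation is a hypothetical stdlib-based replacement):

```python
import ipaddress

def is_private_ip(ip: str) -> bool:
    # ipaddress's is_private covers the RFC 1918 ranges (10/8, 172.16/12,
    # 192.168/16), loopback (127.0.0.0/8) and link-local (169.254.0.0/16),
    # i.e. the same ranges the deleted module enumerated by hand.
    try:
        return ipaddress.ip_address(ip).is_private
    except ValueError:
        # Mirror the deleted code's behavior of treating unparsable input
        # as "not private" rather than raising.
        return False

print(is_private_ip("10.1.2.3"), is_private_ip("8.8.8.8"))
```

This also handles IPv6 (e.g. `::1`, `fe80::/10`) without a separate code path.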
@@ -5,11 +5,6 @@ Contains generated protobuf code for circuit_v2 relay protocol.
"""

# Import the classes to be accessible directly from the package

from .dcutr_pb2 import (
    HolePunch,
)

from .circuit_pb2 import (
    HopMessage,
    Limit,
@@ -18,4 +13,4 @@ from .circuit_pb2 import (
    StopMessage,
)

__all__ = ["HopMessage", "Limit", "Reservation", "Status", "StopMessage", "HolePunch"]
__all__ = ["HopMessage", "Limit", "Reservation", "Status", "StopMessage"]

@@ -1,14 +0,0 @@
syntax = "proto2";

package holepunch.pb;

message HolePunch {
  enum Type {
    CONNECT = 100;
    SYNC = 300;
  }

  required Type type = 1;

  repeated bytes ObsAddrs = 2;
}
@ -1,26 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Generated by the protocol buffer compiler. DO NOT EDIT!
|
||||
# source: libp2p/relay/circuit_v2/pb/dcutr.proto
|
||||
"""Generated protocol buffer code."""
|
||||
from google.protobuf.internal import builder as _builder
|
||||
from google.protobuf import descriptor as _descriptor
|
||||
from google.protobuf import descriptor_pool as _descriptor_pool
|
||||
from google.protobuf import symbol_database as _symbol_database
|
||||
# @@protoc_insertion_point(imports)
|
||||
|
||||
_sym_db = _symbol_database.Default()
|
||||
|
||||
|
||||
|
||||
|
||||
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n&libp2p/relay/circuit_v2/pb/dcutr.proto\x12\x0cholepunch.pb\"\x69\n\tHolePunch\x12*\n\x04type\x18\x01 \x02(\x0e\x32\x1c.holepunch.pb.HolePunch.Type\x12\x10\n\x08ObsAddrs\x18\x02 \x03(\x0c\"\x1e\n\x04Type\x12\x0b\n\x07CONNECT\x10\x64\x12\t\n\x04SYNC\x10\xac\x02')
|
||||
|
||||
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
|
||||
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.relay.circuit_v2.pb.dcutr_pb2', globals())
|
||||
if _descriptor._USE_C_DESCRIPTORS == False:
|
||||
DESCRIPTOR._options = None
|
||||
_HOLEPUNCH._serialized_start=56
|
||||
_HOLEPUNCH._serialized_end=161
|
||||
_HOLEPUNCH_TYPE._serialized_start=131
|
||||
_HOLEPUNCH_TYPE._serialized_end=161
|
||||
# @@protoc_insertion_point(module_scope)
|
||||
@@ -1,54 +0,0 @@
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file
"""

import builtins
import collections.abc
import google.protobuf.descriptor
import google.protobuf.internal.containers
import google.protobuf.message
import typing

DESCRIPTOR: google.protobuf.descriptor.FileDescriptor

@typing.final
class HolePunch(google.protobuf.message.Message):
    """HolePunch message for the DCUtR protocol."""

    DESCRIPTOR: google.protobuf.descriptor.Descriptor

    class Type(builtins.int):
        """Message types for HolePunch"""
        @builtins.classmethod
        def Name(cls, number: builtins.int) -> builtins.str: ...
        @builtins.classmethod
        def Value(cls, name: builtins.str) -> 'HolePunch.Type': ...
        @builtins.classmethod
        def keys(cls) -> typing.List[builtins.str]: ...
        @builtins.classmethod
        def values(cls) -> typing.List['HolePunch.Type']: ...
        @builtins.classmethod
        def items(cls) -> typing.List[typing.Tuple[builtins.str, 'HolePunch.Type']]: ...

    CONNECT: HolePunch.Type  # 100
    SYNC: HolePunch.Type  # 300

    TYPE_FIELD_NUMBER: builtins.int
    OBSADDRS_FIELD_NUMBER: builtins.int
    type: HolePunch.Type

    @property
    def ObsAddrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...

    def __init__(
        self,
        *,
        type: HolePunch.Type = ...,
        ObsAddrs: collections.abc.Iterable[builtins.bytes] = ...,
    ) -> None: ...

    def HasField(self, field_name: typing.Literal["type", b"type"]) -> builtins.bool: ...
    def ClearField(self, field_name: typing.Literal["ObsAddrs", b"ObsAddrs", "type", b"type"]) -> None: ...

global___HolePunch = HolePunch
@@ -41,8 +41,7 @@ class BaseNoiseMsgReadWriter(EncryptedMsgReadWriter):
    read_writer: NoisePacketReadWriter
    noise_state: NoiseState

    # NOTE: This prefix is added in msg#3 in Go.
    # Support in py-libp2p is available but not used
    # FIXME: This prefix is added in msg#3 in Go. Check whether it's a desired behavior.
    prefix: bytes = b"\x00" * 32

    def __init__(self, conn: IRawConnection, noise_state: NoiseState) -> None:

@@ -29,6 +29,11 @@ class Transport(ISecureTransport):
    early_data: bytes | None
    with_noise_pipes: bool

    # NOTE: Implementations that support Noise Pipes must decide whether to use
    # an XX or IK handshake based on whether they possess a cached static
    # Noise key for the remote peer.
    # TODO: A storage of seen noise static keys for pattern IK?

    def __init__(
        self,
        libp2p_keypair: KeyPair,

@@ -1,5 +1,3 @@
from collections.abc import AsyncGenerator
from contextlib import asynccontextmanager
from types import (
    TracebackType,
)
@@ -34,72 +32,6 @@ if TYPE_CHECKING:
    )


class ReadWriteLock:
    """
    A read-write lock that allows multiple concurrent readers
    or one exclusive writer, implemented using Trio primitives.
    """

    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = trio.Lock()  # Protects access to _readers count
        self._writer_lock = trio.Semaphore(1)  # Allows only one writer at a time

    async def acquire_read(self) -> None:
        """Acquire a read lock. Multiple readers can hold it simultaneously."""
        try:
            async with self._readers_lock:
                if self._readers == 0:
                    await self._writer_lock.acquire()
                self._readers += 1
        except trio.Cancelled:
            raise

    async def release_read(self) -> None:
        """Release a read lock."""
        async with self._readers_lock:
            if self._readers == 1:
                self._writer_lock.release()
            self._readers -= 1

    async def acquire_write(self) -> None:
        """Acquire an exclusive write lock."""
        try:
            await self._writer_lock.acquire()
        except trio.Cancelled:
            raise

    def release_write(self) -> None:
        """Release the exclusive write lock."""
        self._writer_lock.release()

    @asynccontextmanager
    async def read_lock(self) -> AsyncGenerator[None, None]:
        """Context manager for acquiring and releasing a read lock safely."""
        acquire = False
        try:
            await self.acquire_read()
            acquire = True
            yield
        finally:
            if acquire:
                with trio.CancelScope() as scope:
                    scope.shield = True
                    await self.release_read()

    @asynccontextmanager
    async def write_lock(self) -> AsyncGenerator[None, None]:
        """Context manager for acquiring and releasing a write lock safely."""
        acquire = False
        try:
            await self.acquire_write()
            acquire = True
            yield
        finally:
            if acquire:
                self.release_write()


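The `ReadWriteLock` being removed above follows the classic pattern: the first reader to arrive acquires the writer semaphore, the last reader to leave releases it, so readers share while any writer is excluded. The same bookkeeping can be sketched with stdlib threading primitives (this is an illustrative model, not the Trio code from the diff):

```python
import threading

class ReadWriteLock:
    """First reader acquires the writer lock; last reader releases it."""

    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = threading.Lock()       # protects _readers count
        self._writer_lock = threading.Semaphore(1)  # one writer at a time

    def acquire_read(self) -> None:
        with self._readers_lock:
            if self._readers == 0:
                self._writer_lock.acquire()  # block out writers
            self._readers += 1

    def release_read(self) -> None:
        with self._readers_lock:
            if self._readers == 1:
                self._writer_lock.release()  # last reader lets writers in
            self._readers -= 1

lock = ReadWriteLock()
lock.acquire_read()
lock.acquire_read()  # a second reader does not block
# A writer cannot get the semaphore while readers hold it:
writer_free = lock._writer_lock.acquire(blocking=False)
lock.release_read()
lock.release_read()
print(writer_free)  # False: writer was excluded while readers were active
```

Note the same caveat applies as in the Trio version: writers can starve under a steady stream of readers, since readers never wait once the semaphore is held.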
class MplexStream(IMuxedStream):
    """
    reference: https://github.com/libp2p/go-mplex/blob/master/stream.go
@@ -114,7 +46,7 @@ class MplexStream(IMuxedStream):
    read_deadline: int | None
    write_deadline: int | None

    rw_lock: ReadWriteLock
    # TODO: Add lock for read/write to avoid interleaving receiving messages?
    close_lock: trio.Lock

    # NOTE: `dataIn` is size of 8 in Go implementation.
@@ -148,7 +80,6 @@ class MplexStream(IMuxedStream):
        self.event_remote_closed = trio.Event()
        self.event_reset = trio.Event()
        self.close_lock = trio.Lock()
        self.rw_lock = ReadWriteLock()
        self.incoming_data_channel = incoming_data_channel
        self._buf = bytearray()

@@ -182,49 +113,48 @@ class MplexStream(IMuxedStream):
        :param n: number of bytes to read
        :return: bytes actually read
        """
        async with self.rw_lock.read_lock():
            if n is not None and n < 0:
                raise ValueError(
                    "the number of bytes to read `n` must be non-negative or "
                    f"`None` to indicate read until EOF, got n={n}"
                )
            if self.event_reset.is_set():
                raise MplexStreamReset
            if n is None:
                return await self._read_until_eof()
            if len(self._buf) == 0:
                data: bytes
                # Peek whether there is data available. If yes, we just read until
                # there is no data, then return.
        if n is not None and n < 0:
            raise ValueError(
                "the number of bytes to read `n` must be non-negative or "
                f"`None` to indicate read until EOF, got n={n}"
            )
        if self.event_reset.is_set():
            raise MplexStreamReset
        if n is None:
            return await self._read_until_eof()
        if len(self._buf) == 0:
            data: bytes
            # Peek whether there is data available. If yes, we just read until there is
            # no data, then return.
            try:
                data = self.incoming_data_channel.receive_nowait()
                self._buf.extend(data)
            except trio.EndOfChannel:
                raise MplexStreamEOF
            except trio.WouldBlock:
                # We know `receive` will be blocked here. Wait for data here with
                # `receive` and catch all kinds of errors here.
                try:
                    data = self.incoming_data_channel.receive_nowait()
                    data = await self.incoming_data_channel.receive()
                    self._buf.extend(data)
                except trio.EndOfChannel:
                    raise MplexStreamEOF
                except trio.WouldBlock:
                    # We know `receive` will be blocked here. Wait for data here with
                    # `receive` and catch all kinds of errors here.
                    try:
                        data = await self.incoming_data_channel.receive()
                        self._buf.extend(data)
                    except trio.EndOfChannel:
                        if self.event_reset.is_set():
                            raise MplexStreamReset
                        if self.event_remote_closed.is_set():
                            raise MplexStreamEOF
                    except trio.ClosedResourceError as error:
                        # Probably `incoming_data_channel` is closed in `reset` when
                        # we are waiting for `receive`.
                        if self.event_reset.is_set():
                            raise MplexStreamReset
                        raise Exception(
                            "`incoming_data_channel` is closed but stream is not reset."
                            "This should never happen."
                        ) from error
                self._buf.extend(self._read_return_when_blocked())
                payload = self._buf[:n]
                self._buf = self._buf[len(payload) :]
                return bytes(payload)
                if self.event_reset.is_set():
                    raise MplexStreamReset
                if self.event_remote_closed.is_set():
                    raise MplexStreamEOF
            except trio.ClosedResourceError as error:
                # Probably `incoming_data_channel` is closed in `reset` when we are
                # waiting for `receive`.
                if self.event_reset.is_set():
                    raise MplexStreamReset
                raise Exception(
                    "`incoming_data_channel` is closed but stream is not reset. "
                    "This should never happen."
                ) from error
        self._buf.extend(self._read_return_when_blocked())
        payload = self._buf[:n]
        self._buf = self._buf[len(payload) :]
        return bytes(payload)
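Both versions of `read` above share the same buffering scheme: non-blockingly drain whatever frames have already arrived into `_buf`, then hand back at most `n` bytes and keep the remainder for the next call. A hypothetical stand-alone model of that flow (a `deque` stands in for the trio memory channel; names mirror the diff for readability only):

```python
from collections import deque

# Frames as delivered by the muxer, possibly split at arbitrary boundaries.
incoming = deque([b"hello ", b"world"])
_buf = bytearray()

def read(n: int) -> bytes:
    # Mirrors the receive_nowait() loop: drain only what is already available.
    while incoming:
        _buf.extend(incoming.popleft())
    # Return at most n bytes; the rest stays buffered for the next read().
    payload = _buf[:n]
    del _buf[:len(payload)]
    return bytes(payload)

first = read(8)
second = read(8)
print(first, second)  # b'hello wo' b'rld'
```

Frame boundaries are invisible to the caller: `read(8)` crosses the `b"hello "` / `b"world"` split, which is exactly the byte-stream semantics the mplex stream provides.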
    async def write(self, data: bytes) -> None:
        """
@@ -232,21 +162,22 @@ class MplexStream(IMuxedStream):

        :return: number of bytes written
        """
        async with self.rw_lock.write_lock():
            if self.event_local_closed.is_set():
                raise MplexStreamClosed(f"cannot write to closed stream: data={data!r}")
            flag = (
                HeaderTags.MessageInitiator
                if self.is_initiator
                else HeaderTags.MessageReceiver
            )
            await self.muxed_conn.send_message(flag, data, self.stream_id)
        if self.event_local_closed.is_set():
            raise MplexStreamClosed(f"cannot write to closed stream: data={data!r}")
        flag = (
            HeaderTags.MessageInitiator
            if self.is_initiator
            else HeaderTags.MessageReceiver
        )
        await self.muxed_conn.send_message(flag, data, self.stream_id)

    async def close(self) -> None:
        """
        Closing a stream closes it for writing and closes the remote end for
        reading but allows writing in the other direction.
        """
        # TODO error handling with timeout

        async with self.close_lock:
            if self.event_local_closed.is_set():
                return
@@ -254,17 +185,8 @@ class MplexStream(IMuxedStream):
        flag = (
            HeaderTags.CloseInitiator if self.is_initiator else HeaderTags.CloseReceiver
        )

        try:
            with trio.fail_after(5):  # timeout in seconds
                await self.muxed_conn.send_message(flag, None, self.stream_id)
        except trio.TooSlowError:
            raise TimeoutError("Timeout while trying to close the stream")
        except MuxedConnUnavailable:
            if not self.muxed_conn.event_shutting_down.is_set():
                raise RuntimeError(
                    "Failed to send close message and Mplex isn't shutting down"
                )
        # TODO: Raise when `muxed_conn.send_message` fails and `Mplex` isn't shutdown.
        await self.muxed_conn.send_message(flag, None, self.stream_id)

        _is_remote_closed: bool
        async with self.close_lock:

@@ -31,6 +31,9 @@ from libp2p.stream_muxer.yamux.yamux import (
    Yamux,
)

# FIXME: add negotiate timeout to `MuxerMultistream`
DEFAULT_NEGOTIATE_TIMEOUT = 60


class MuxerMultistream:
    """

@@ -45,9 +45,6 @@ from libp2p.stream_muxer.exceptions import (
    MuxedStreamReset,
)

# Configure logger for this module
logger = logging.getLogger("libp2p.stream_muxer.yamux")

PROTOCOL_ID = "/yamux/1.0.0"
TYPE_DATA = 0x0
TYPE_WINDOW_UPDATE = 0x1
@@ -101,31 +98,15 @@ class YamuxStream(IMuxedStream):
        # Flow control: Check if we have enough send window
        total_len = len(data)
        sent = 0
        logger.debug(f"Stream {self.stream_id}: Starts writing {total_len} bytes ")
        while sent < total_len:
            # Wait for available window with timeout
            timeout = False
            async with self.window_lock:
                if self.send_window == 0:
                    logger.debug(
                        f"Stream {self.stream_id}: Window is zero, waiting for update"
                    )
                    # Release lock and wait with timeout
                    self.window_lock.release()
                    # To avoid re-acquiring the lock immediately,
                    with trio.move_on_after(5.0) as cancel_scope:
                        while self.send_window == 0 and not self.closed:
                            await trio.sleep(0.01)
                    # If we timed out, cancel the scope
                    timeout = cancel_scope.cancelled_caught
                    # Re-acquire lock
                    await self.window_lock.acquire()

                # If we timed out waiting for window update, raise an error
                if timeout:
                    raise MuxedStreamError(
                        "Timed out waiting for window update after 5 seconds."
                    )
        while sent < total_len:
            async with self.window_lock:
                # Wait for available window
                while self.send_window == 0 and not self.closed:
                    # Release lock while waiting
                    self.window_lock.release()
                    await trio.sleep(0.01)
                    await self.window_lock.acquire()

                if self.closed:
                    raise MuxedStreamError("Stream is closed")
@@ -142,45 +123,25 @@ class YamuxStream(IMuxedStream):
                await self.conn.secured_conn.write(header + chunk)
                sent += to_send

    async def send_window_update(self, increment: int, skip_lock: bool = False) -> None:
        """
        Send a window update to peer.
                # If window is getting low, consider updating
                if self.send_window < DEFAULT_WINDOW_SIZE // 2:
                    await self.send_window_update()

    async def send_window_update(self, increment: int | None = None) -> None:
        """Send a window update to peer."""
        if increment is None:
            increment = DEFAULT_WINDOW_SIZE - self.recv_window

        param:increment: The amount to increment the window size by.
            If None, uses the difference between DEFAULT_WINDOW_SIZE
            and current receive window.
        param:skip_lock (bool): If True, skips acquiring window_lock.
            This should only be used when calling from a context
            that already holds the lock.
        """
        if increment <= 0:
            # If increment is zero or negative, skip sending update
            logger.debug(
                f"Stream {self.stream_id}: Skipping window update"
                f"(increment={increment})"
            )
            return
        logger.debug(
            f"Stream {self.stream_id}: Sending window update with increment={increment}"
        )

        async def _do_window_update() -> None:
            async with self.window_lock:
                self.recv_window += increment
                header = struct.pack(
                    YAMUX_HEADER_FORMAT,
                    0,
                    TYPE_WINDOW_UPDATE,
                    0,
                    self.stream_id,
                    increment,
                    YAMUX_HEADER_FORMAT, 0, TYPE_WINDOW_UPDATE, 0, self.stream_id, increment
                )
                await self.conn.secured_conn.write(header)

        if skip_lock:
            await _do_window_update()
        else:
            async with self.window_lock:
                await _do_window_update()

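In both variants of `send_window_update` above, the frame on the wire is a plain yamux header whose length field carries the window delta. Assuming the format string follows the yamux spec's big-endian 12-byte header (version, type, flags, stream id, length) - the constant name mirrors the diff, the `"!BBHII"` layout is an assumption - the packing can be sketched as:

```python
import struct

# Assumed layout per the yamux spec's 12-byte big-endian header:
# version (1B), type (1B), flags (2B), stream id (4B), length/window delta (4B).
YAMUX_HEADER_FORMAT = "!BBHII"
TYPE_WINDOW_UPDATE = 0x1

def window_update(stream_id: int, increment: int) -> bytes:
    # Same field order as the struct.pack call in the diff.
    return struct.pack(
        YAMUX_HEADER_FORMAT, 0, TYPE_WINDOW_UPDATE, 0, stream_id, increment
    )

frame = window_update(3, 256 * 1024)
print(len(frame))  # 12
version, typ, flags, sid, delta = struct.unpack(YAMUX_HEADER_FORMAT, frame)
print(typ, sid, delta)  # 1 3 262144
```

A window-update frame has no payload; the receiver adds `delta` to its view of the sender's receive window, which is what unblocks the `send_window == 0` wait loop in `write`.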
async def read(self, n: int | None = -1) -> bytes:
|
||||
# Handle None value for n by converting it to -1
|
||||
if n is None:
|
||||
@ -188,77 +149,64 @@ class YamuxStream(IMuxedStream):
|
||||
|
||||
# If the stream is closed for receiving and the buffer is empty, raise EOF
|
||||
if self.recv_closed and not self.conn.stream_buffers.get(self.stream_id):
|
||||
logger.debug(
|
||||
logging.debug(
|
||||
f"Stream {self.stream_id}: Stream closed for receiving and buffer empty"
|
||||
)
|
||||
raise MuxedStreamEOF("Stream is closed for receiving")
|
||||
|
||||
# If reading until EOF (n == -1), block until stream is closed
|
||||
if n == -1:
|
||||
data = b""
|
||||
while not self.conn.event_shutting_down.is_set():
|
||||
while not self.recv_closed and not self.conn.event_shutting_down.is_set():
|
||||
# Check if there's data in the buffer
|
||||
buffer = self.conn.stream_buffers.get(self.stream_id)
|
||||
if buffer and len(buffer) > 0:
|
||||
# Wait for closure even if data is available
|
||||
logging.debug(
|
||||
f"Stream {self.stream_id}:Waiting for FIN before returning data"
|
||||
)
|
||||
await self.conn.stream_events[self.stream_id].wait()
|
||||
self.conn.stream_events[self.stream_id] = trio.Event()
|
||||
else:
|
||||
# No data, wait for data or closure
|
||||
logging.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
|
||||
await self.conn.stream_events[self.stream_id].wait()
|
||||
self.conn.stream_events[self.stream_id] = trio.Event()
|
||||
|
||||
# If buffer is not available, check if stream is closed
|
||||
# After loop, check if stream is closed or shutting down
|
||||
async with self.conn.streams_lock:
|
||||
if self.conn.event_shutting_down.is_set():
|
||||
logging.debug(f"Stream {self.stream_id}: Connection shutting down")
|
||||
raise MuxedStreamEOF("Connection shut down")
|
||||
if self.closed:
|
||||
if self.reset_received:
|
||||
logging.debug(f"Stream {self.stream_id}: Stream was reset")
|
||||
raise MuxedStreamReset("Stream was reset")
|
||||
else:
|
||||
logging.debug(
|
||||
f"Stream {self.stream_id}: Stream closed cleanly (EOF)"
|
||||
)
|
||||
raise MuxedStreamEOF("Stream closed cleanly (EOF)")
|
||||
buffer = self.conn.stream_buffers.get(self.stream_id)
|
||||
if buffer is None:
|
||||
logger.debug(f"Stream {self.stream_id}: No buffer available")
|
||||
logging.debug(
|
||||
f"Stream {self.stream_id}: Buffer gone, assuming closed"
|
||||
)
|
||||
raise MuxedStreamEOF("Stream buffer closed")
|
||||
|
||||
# If we have data in buffer, process it
|
||||
if len(buffer) > 0:
|
||||
chunk = bytes(buffer)
|
||||
buffer.clear()
|
||||
data += chunk
|
||||
|
||||
# Send window update for the chunk we just read
|
||||
async with self.window_lock:
|
||||
self.recv_window += len(chunk)
|
||||
logger.debug(f"Stream {self.stream_id}: Update {len(chunk)}")
|
||||
await self.send_window_update(len(chunk), skip_lock=True)
|
||||
|
||||
# If stream is closed (FIN received) and buffer is empty, break
|
||||
if self.recv_closed and len(buffer) == 0:
|
||||
logger.debug(f"Stream {self.stream_id}: Closed with empty buffer")
|
||||
break
|
||||
|
||||
# If stream was reset, raise reset error
|
||||
if self.reset_received:
|
||||
logger.debug(f"Stream {self.stream_id}: Stream was reset")
|
||||
raise MuxedStreamReset("Stream was reset")
|
||||
|
||||
# Wait for more data or stream closure
|
||||
logger.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
|
||||
await self.conn.stream_events[self.stream_id].wait()
|
||||
self.conn.stream_events[self.stream_id] = trio.Event()
|
||||
|
||||
# After loop exit, first check if we have data to return
|
||||
if data:
|
||||
logger.debug(
|
||||
f"Stream {self.stream_id}: Returning {len(data)} bytes after loop"
|
||||
)
|
||||
logging.debug(f"Stream {self.stream_id}: EOF reached")
|
||||
raise MuxedStreamEOF("Stream is closed for receiving")
|
||||
# Return all buffered data
|
||||
data = bytes(buffer)
|
||||
buffer.clear()
|
||||
logging.debug(f"Stream {self.stream_id}: Returning {len(data)} bytes")
|
||||
return data
|
||||
|
||||
# No data accumulated, now check why we exited the loop
|
||||
if self.conn.event_shutting_down.is_set():
|
||||
logger.debug(f"Stream {self.stream_id}: Connection shutting down")
|
||||
raise MuxedStreamEOF("Connection shut down")
|
||||
|
||||
# Return empty data
|
||||
return b""
|
||||
else:
|
||||
data = await self.conn.read_stream(self.stream_id, n)
|
||||
async with self.window_lock:
|
||||
self.recv_window += len(data)
|
||||
logger.debug(
|
||||
f"Stream {self.stream_id}: Sending window update after read, "
|
||||
f"increment={len(data)}"
|
||||
)
|
||||
await self.send_window_update(len(data), skip_lock=True)
|
||||
return data
|
||||
# For specific size read (n > 0), return available data immediately
|
||||
return await self.conn.read_stream(self.stream_id, n)
|
||||
|
||||
async def close(self) -> None:
|
||||
if not self.send_closed:
|
||||
logger.debug(f"Half-closing stream {self.stream_id} (local end)")
|
||||
logging.debug(f"Half-closing stream {self.stream_id} (local end)")
|
||||
header = struct.pack(
|
||||
YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_FIN, self.stream_id, 0
|
||||
)
|
||||
@ -274,7 +222,7 @@ class YamuxStream(IMuxedStream):
|
||||
|
||||
async def reset(self) -> None:
|
||||
if not self.closed:
|
||||
logger.debug(f"Resetting stream {self.stream_id}")
|
||||
logging.debug(f"Resetting stream {self.stream_id}")
|
||||
header = struct.pack(
|
||||
YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_RST, self.stream_id, 0
|
||||
)
|
||||
@ -352,7 +300,7 @@ class Yamux(IMuxedConn):
|
||||
self._nursery: Nursery | None = None
|
||||
|
||||
async def start(self) -> None:
|
||||
logger.debug(f"Starting Yamux for {self.peer_id}")
|
||||
logging.debug(f"Starting Yamux for {self.peer_id}")
|
||||
if self.event_started.is_set():
|
||||
return
|
||||
async with trio.open_nursery() as nursery:
|
||||
@ -365,7 +313,7 @@ class Yamux(IMuxedConn):
|
||||
return self.is_initiator_value
|
||||
|
||||
async def close(self, error_code: int = GO_AWAY_NORMAL) -> None:
|
||||
logger.debug(f"Closing Yamux connection with code {error_code}")
|
||||
logging.debug(f"Closing Yamux connection with code {error_code}")
|
||||
async with self.streams_lock:
|
||||
if not self.event_shutting_down.is_set():
|
||||
try:
|
||||
@@ -374,7 +322,7 @@ class Yamux(IMuxedConn):
                     )
                     await self.secured_conn.write(header)
                 except Exception as e:
-                    logger.debug(f"Failed to send GO_AWAY: {e}")
+                    logging.debug(f"Failed to send GO_AWAY: {e}")
             self.event_shutting_down.set()
             for stream in self.streams.values():
                 stream.closed = True
@@ -385,12 +333,12 @@ class Yamux(IMuxedConn):
             self.stream_events.clear()
         try:
             await self.secured_conn.close()
-            logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
+            logging.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
         except Exception as e:
-            logger.debug(f"Error closing secured_conn for peer {self.peer_id}: {e}")
+            logging.debug(f"Error closing secured_conn for peer {self.peer_id}: {e}")
         self.event_closed.set()
         if self.on_close:
-            logger.debug(f"Calling on_close in Yamux.close for peer {self.peer_id}")
+            logging.debug(f"Calling on_close in Yamux.close for peer {self.peer_id}")
-            if inspect.iscoroutinefunction(self.on_close):
+            if self.on_close is not None:
                 await self.on_close()
@@ -419,7 +367,7 @@ class Yamux(IMuxedConn):
                 header = struct.pack(
                     YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_SYN, stream_id, 0
                 )
-                logger.debug(f"Sending SYN header for stream {stream_id}")
+                logging.debug(f"Sending SYN header for stream {stream_id}")
                 await self.secured_conn.write(header)
             return stream
         except Exception as e:
@@ -427,32 +375,32 @@ class Yamux(IMuxedConn):
             raise e
 
     async def accept_stream(self) -> IMuxedStream:
-        logger.debug("Waiting for new stream")
+        logging.debug("Waiting for new stream")
         try:
             stream = await self.new_stream_receive_channel.receive()
-            logger.debug(f"Received stream {stream.stream_id}")
+            logging.debug(f"Received stream {stream.stream_id}")
             return stream
         except trio.EndOfChannel:
             raise MuxedStreamError("No new streams available")
 
     async def read_stream(self, stream_id: int, n: int = -1) -> bytes:
-        logger.debug(f"Reading from stream {self.peer_id}:{stream_id}, n={n}")
+        logging.debug(f"Reading from stream {self.peer_id}:{stream_id}, n={n}")
         if n is None:
             n = -1
 
         while True:
             async with self.streams_lock:
                 if stream_id not in self.streams:
-                    logger.debug(f"Stream {self.peer_id}:{stream_id} unknown")
+                    logging.debug(f"Stream {self.peer_id}:{stream_id} unknown")
                     raise MuxedStreamEOF("Stream closed")
                 if self.event_shutting_down.is_set():
-                    logger.debug(
+                    logging.debug(
                         f"Stream {self.peer_id}:{stream_id}: connection shutting down"
                     )
                     raise MuxedStreamEOF("Connection shut down")
                 stream = self.streams[stream_id]
                 buffer = self.stream_buffers.get(stream_id)
-                logger.debug(
+                logging.debug(
                     f"Stream {self.peer_id}:{stream_id}: "
                     f"closed={stream.closed}, "
                     f"recv_closed={stream.recv_closed}, "
@@ -460,7 +408,7 @@ class Yamux(IMuxedConn):
                     f"buffer_len={len(buffer) if buffer else 0}"
                 )
                 if buffer is None:
-                    logger.debug(
+                    logging.debug(
                         f"Stream {self.peer_id}:{stream_id}: "
                         f"Buffer gone, assuming closed"
                     )
@@ -473,7 +421,7 @@ class Yamux(IMuxedConn):
                     else:
                         data = bytes(buffer[:n])
                         del buffer[:n]
-                    logger.debug(
+                    logging.debug(
                         f"Returning {len(data)} bytes "
                         f"from stream {self.peer_id}:{stream_id}, "
                         f"buffer_len={len(buffer)}"
@@ -481,7 +429,7 @@ class Yamux(IMuxedConn):
                     return data
                 # If reset received and buffer is empty, raise reset
                 if stream.reset_received:
-                    logger.debug(
+                    logging.debug(
                         f"Stream {self.peer_id}:{stream_id}: "
                         f"reset_received=True, raising MuxedStreamReset"
                     )
@@ -494,7 +442,7 @@ class Yamux(IMuxedConn):
                     else:
                         data = bytes(buffer[:n])
                         del buffer[:n]
-                    logger.debug(
+                    logging.debug(
                         f"Returning {len(data)} bytes "
                         f"from stream {self.peer_id}:{stream_id}, "
                         f"buffer_len={len(buffer)}"
@@ -502,21 +450,21 @@ class Yamux(IMuxedConn):
                     return data
                 # Check if stream is closed
                 if stream.closed:
-                    logger.debug(
+                    logging.debug(
                         f"Stream {self.peer_id}:{stream_id}: "
                         f"closed=True, raising MuxedStreamReset"
                     )
                     raise MuxedStreamReset("Stream is reset or closed")
                 # Check if recv_closed and buffer empty
                 if stream.recv_closed:
-                    logger.debug(
+                    logging.debug(
                         f"Stream {self.peer_id}:{stream_id}: "
                         f"recv_closed=True, buffer empty, raising EOF"
                     )
                     raise MuxedStreamEOF("Stream is closed for receiving")
 
             # Wait for data if stream is still open
-            logger.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}")
+            logging.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}")
             try:
                 await self.stream_events[stream_id].wait()
                 self.stream_events[stream_id] = trio.Event()
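The wait-then-replace pattern at the end of this hunk (block on `stream_events[stream_id]`, then install a fresh `trio.Event()` so the next read blocks again) can be illustrated with a synchronous analogue. This is a sketch only: `threading.Event` stands in for `trio.Event`, and the `Notifier` class is hypothetical, not part of the module:

```python
import threading


class Notifier:
    """One-shot notification that is re-armed after each wakeup,
    mirroring the `wait(); stream_events[id] = trio.Event()` idiom above."""

    def __init__(self) -> None:
        self._event = threading.Event()

    def notify(self) -> None:
        """Wake the current waiter (called when new data is buffered)."""
        self._event.set()

    def wait_once(self, timeout: float = 1.0) -> bool:
        """Block until notified, then swap in a fresh event for the next wait."""
        ev = self._event
        fired = ev.wait(timeout)
        if fired:
            self._event = threading.Event()  # re-arm
        return fired


n = Notifier()
n.notify()
assert n.wait_once() is True          # consumes the notification
assert n.wait_once(timeout=0.01) is False  # re-armed: blocks until next notify
```

Re-arming is what prevents a reader from spinning: once the buffered data has been drained, the replaced event is unset, so the loop parks until `handle_incoming` calls `set()` again.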
@@ -531,7 +479,7 @@ class Yamux(IMuxedConn):
             try:
                 header = await self.secured_conn.read(HEADER_SIZE)
                 if not header or len(header) < HEADER_SIZE:
-                    logger.debug(
+                    logging.debug(
                         f"Connection closed or incomplete header for peer {self.peer_id}"
                     )
                     self.event_shutting_down.set()
@@ -540,12 +488,12 @@ class Yamux(IMuxedConn):
                 version, typ, flags, stream_id, length = struct.unpack(
                     YAMUX_HEADER_FORMAT, header
                 )
-                logger.debug(
+                logging.debug(
                     f"Received header for peer {self.peer_id}: "
                     f"type={typ}, flags={flags}, stream_id={stream_id}, "
                     f"length={length}"
                 )
-                if (typ == TYPE_DATA or typ == TYPE_WINDOW_UPDATE) and flags & FLAG_SYN:
+                if typ == TYPE_DATA and flags & FLAG_SYN:
                     async with self.streams_lock:
                         if stream_id not in self.streams:
                             stream = YamuxStream(stream_id, self, False)
@@ -561,7 +509,7 @@ class Yamux(IMuxedConn):
                                 0,
                             )
                             await self.secured_conn.write(ack_header)
-                            logger.debug(
+                            logging.debug(
                                 f"Sending stream {stream_id} "
                                 f"to channel for peer {self.peer_id}"
                             )
@@ -579,7 +527,7 @@ class Yamux(IMuxedConn):
                 elif typ == TYPE_DATA and flags & FLAG_RST:
                     async with self.streams_lock:
                         if stream_id in self.streams:
-                            logger.debug(
+                            logging.debug(
                                 f"Resetting stream {stream_id} for peer {self.peer_id}"
                             )
                             self.streams[stream_id].closed = True
@@ -588,27 +536,27 @@ class Yamux(IMuxedConn):
                 elif typ == TYPE_DATA and flags & FLAG_ACK:
                     async with self.streams_lock:
                         if stream_id in self.streams:
-                            logger.debug(
+                            logging.debug(
                                 f"Received ACK for stream "
                                 f"{stream_id} for peer {self.peer_id}"
                             )
                 elif typ == TYPE_GO_AWAY:
                     error_code = length
                     if error_code == GO_AWAY_NORMAL:
-                        logger.debug(
+                        logging.debug(
                             f"Received GO_AWAY for peer "
                             f"{self.peer_id}: Normal termination"
                         )
                     elif error_code == GO_AWAY_PROTOCOL_ERROR:
-                        logger.error(
+                        logging.error(
                             f"Received GO_AWAY for peer {self.peer_id}: Protocol error"
                         )
                     elif error_code == GO_AWAY_INTERNAL_ERROR:
-                        logger.error(
+                        logging.error(
                             f"Received GO_AWAY for peer {self.peer_id}: Internal error"
                         )
                     else:
-                        logger.error(
+                        logging.error(
                             f"Received GO_AWAY for peer {self.peer_id} "
                             f"with unknown error code: {error_code}"
                         )
@@ -617,7 +565,7 @@ class Yamux(IMuxedConn):
                     break
                 elif typ == TYPE_PING:
                     if flags & FLAG_SYN:
-                        logger.debug(
+                        logging.debug(
                             f"Received ping request with value "
                             f"{length} for peer {self.peer_id}"
                         )
@@ -626,7 +574,7 @@ class Yamux(IMuxedConn):
                         )
                         await self.secured_conn.write(ping_header)
                     elif flags & FLAG_ACK:
-                        logger.debug(
+                        logging.debug(
                             f"Received ping response with value "
                             f"{length} for peer {self.peer_id}"
                         )
@@ -640,7 +588,7 @@ class Yamux(IMuxedConn):
                     self.stream_buffers[stream_id].extend(data)
                     self.stream_events[stream_id].set()
                     if flags & FLAG_FIN:
-                        logger.debug(
+                        logging.debug(
                             f"Received FIN for stream {self.peer_id}:"
                             f"{stream_id}, marking recv_closed"
                         )
@@ -648,7 +596,7 @@ class Yamux(IMuxedConn):
                         if self.streams[stream_id].send_closed:
                             self.streams[stream_id].closed = True
             except Exception as e:
-                logger.error(f"Error reading data for stream {stream_id}: {e}")
+                logging.error(f"Error reading data for stream {stream_id}: {e}")
                 # Mark stream as closed on read error
                 async with self.streams_lock:
                     if stream_id in self.streams:
@@ -662,7 +610,7 @@ class Yamux(IMuxedConn):
                 if stream_id in self.streams:
                     stream = self.streams[stream_id]
                     async with stream.window_lock:
-                        logger.debug(
+                        logging.debug(
                             f"Received window update for stream "
                             f"{self.peer_id}:{stream_id}, "
                             f"increment: {increment}"
@@ -677,7 +625,7 @@ class Yamux(IMuxedConn):
                     and details.get("requested_count") == 2
                     and details.get("received_count") == 0
                 ):
-                    logger.info(
+                    logging.info(
                         f"Stream closed cleanly for peer {self.peer_id}"
                         + f" (IncompleteReadError: {details})"
                     )
@@ -685,32 +633,15 @@ class Yamux(IMuxedConn):
                     await self._cleanup_on_error()
                     break
                 else:
-                    logger.error(
+                    logging.error(
                         f"Error in handle_incoming for peer {self.peer_id}: "
                         + f"{type(e).__name__}: {str(e)}"
                     )
             else:
-                # Handle RawConnError with more nuance
-                if isinstance(e, RawConnError):
-                    error_msg = str(e)
-                    # If RawConnError is empty, it's likely normal cleanup
-                    if not error_msg.strip():
-                        logger.info(
-                            f"RawConnError (empty) during cleanup for peer "
-                            f"{self.peer_id} (normal connection shutdown)"
-                        )
-                    else:
-                        # Log non-empty RawConnError as warning
-                        logger.warning(
-                            f"RawConnError during connection handling for peer "
-                            f"{self.peer_id}: {error_msg}"
-                        )
-                else:
-                    # Log all other errors normally
-                    logger.error(
-                        f"Error in handle_incoming for peer {self.peer_id}: "
-                        + f"{type(e).__name__}: {str(e)}"
-                    )
+                logging.error(
+                    f"Error in handle_incoming for peer {self.peer_id}: "
+                    + f"{type(e).__name__}: {str(e)}"
+                )
                 # Don't crash the whole connection for temporary errors
                 if self.event_shutting_down.is_set() or isinstance(
                     e, (RawConnError, OSError)
@@ -740,9 +671,9 @@ class Yamux(IMuxedConn):
         # Close the secured connection
         try:
             await self.secured_conn.close()
-            logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
+            logging.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
         except Exception as close_error:
-            logger.error(
+            logging.error(
                 f"Error closing secured_conn for peer {self.peer_id}: {close_error}"
             )
 
@@ -751,14 +682,14 @@ class Yamux(IMuxedConn):
 
         # Call on_close callback if provided
         if self.on_close:
-            logger.debug(f"Calling on_close for peer {self.peer_id}")
+            logging.debug(f"Calling on_close for peer {self.peer_id}")
             try:
                 if inspect.iscoroutinefunction(self.on_close):
                     await self.on_close()
                 else:
                     self.on_close()
             except Exception as callback_error:
-                logger.error(f"Error in on_close callback: {callback_error}")
+                logging.error(f"Error in on_close callback: {callback_error}")
 
         # Cancel nursery tasks
         if self._nursery:
@@ -7,9 +7,6 @@ from libp2p.utils.varint import (
     encode_varint_prefixed,
     read_delim,
     read_varint_prefixed_bytes,
-    decode_varint_from_bytes,
-    decode_varint_with_size,
-    read_length_prefixed_protobuf,
 )
 from libp2p.utils.version import (
     get_agent_version,
@@ -23,7 +20,4 @@ __all__ = [
     "get_agent_version",
     "read_delim",
     "read_varint_prefixed_bytes",
-    "decode_varint_from_bytes",
-    "decode_varint_with_size",
-    "read_length_prefixed_protobuf",
 ]
@@ -1,9 +1,7 @@
 import itertools
 import logging
 import math
-from typing import BinaryIO
 
-from libp2p.abc import INetStream
 from libp2p.exceptions import (
     ParseError,
 )
@@ -27,41 +25,18 @@ HIGH_MASK = 2**7
 SHIFT_64_BIT_MAX = int(math.ceil(64 / 7)) * 7
 
 
-def encode_uvarint(value: int) -> bytes:
-    """Encode an unsigned integer as a varint."""
-    if value < 0:
-        raise ValueError("Cannot encode negative value as uvarint")
-
-    result = bytearray()
-    while value >= 0x80:
-        result.append((value & 0x7F) | 0x80)
-        value >>= 7
-    result.append(value & 0x7F)
-    return bytes(result)
-
-
-def decode_uvarint(data: bytes) -> int:
-    """Decode a varint from bytes."""
-    if not data:
-        raise ParseError("Unexpected end of data")
-
-    result = 0
-    shift = 0
-
-    for byte in data:
-        result |= (byte & 0x7F) << shift
-        if (byte & 0x80) == 0:
+def encode_uvarint(number: int) -> bytes:
+    """Pack `number` into varint bytes."""
+    buf = b""
+    while True:
+        towrite = number & 0x7F
+        number >>= 7
+        if number:
+            buf += bytes((towrite | 0x80,))
+        else:
+            buf += bytes((towrite,))
             break
-        shift += 7
-        if shift >= 64:
-            raise ValueError("Varint too long")
-
-    return result
-
-
-def decode_varint_from_bytes(data: bytes) -> int:
-    """Decode a varint from bytes (alias for decode_uvarint for backward comp)."""
-    return decode_uvarint(data)
+    return buf
 
 
 async def decode_uvarint_from_stream(reader: Reader) -> int:
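Both sides of this hunk implement the same unsigned LEB128-style varint: 7 payload bits per byte, least-significant group first, high bit set on every byte except the last. A minimal self-contained sketch of the scheme (the `decode_uvarint` here is a simplified illustration, not the module's streaming decoder):

```python
def encode_uvarint(number: int) -> bytes:
    """Pack `number` into varint bytes, 7 bits per byte, LSB group first."""
    buf = b""
    while True:
        towrite = number & 0x7F
        number >>= 7
        if number:
            buf += bytes((towrite | 0x80,))  # continuation bit set
        else:
            buf += bytes((towrite,))         # final byte: high bit clear
            break
    return buf


def decode_uvarint(data: bytes) -> int:
    """Inverse of encode_uvarint: accumulate 7-bit groups until a byte
    with the continuation bit clear."""
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (shift * 7)
        if byte & 0x80 == 0:
            break
    return result


assert encode_uvarint(1) == b"\x01"
assert encode_uvarint(300) == b"\xac\x02"
for v in (0, 127, 128, 16384, 2**32 - 1):
    assert decode_uvarint(encode_uvarint(v)) == v
```

The boundary cases are the interesting ones: 127 fits in one byte, 128 spills into two, which is exactly the continuation-bit check that the streaming decoder below performs one byte at a time.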
@@ -69,9 +44,7 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
     res = 0
     for shift in itertools.count(0, 7):
         if shift > SHIFT_64_BIT_MAX:
-            raise ParseError(
-                "Varint decoding error: integer exceeds maximum size of 64 bits."
-            )
+            raise ParseError("TODO: better exception msg: Integer is too large...")
 
         byte = await read_exactly(reader, 1)
         value = byte[0]
@@ -83,35 +56,9 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
     return res
 
 
-def decode_varint_with_size(data: bytes) -> tuple[int, int]:
-    """
-    Decode a varint from bytes and return both the value and the number of bytes
-    consumed.
-
-    Returns:
-        Tuple[int, int]: (value, bytes_consumed)
-
-    """
-    result = 0
-    shift = 0
-    bytes_consumed = 0
-
-    for byte in data:
-        result |= (byte & 0x7F) << shift
-        bytes_consumed += 1
-        if (byte & 0x80) == 0:
-            break
-        shift += 7
-        if shift >= 64:
-            raise ValueError("Varint too long")
-
-    return result, bytes_consumed
-
-
-def encode_varint_prefixed(data: bytes) -> bytes:
-    """Encode data with a varint length prefix."""
-    length_bytes = encode_uvarint(len(data))
-    return length_bytes + data
+def encode_varint_prefixed(msg_bytes: bytes) -> bytes:
+    varint_len = encode_uvarint(len(msg_bytes))
+    return varint_len + msg_bytes
 
 
 async def read_varint_prefixed_bytes(reader: Reader) -> bytes:
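`encode_varint_prefixed` is the framing half of the wire format: each message goes out as `<uvarint length><payload>`. A sketch showing both framing and the counterpart parsing (tracking bytes consumed, which is what the removed `decode_varint_with_size` provided); `split_frames` is an illustrative helper, not a py-libp2p API:

```python
def encode_uvarint(number: int) -> bytes:
    """7-bits-per-byte varint, continuation bit on all but the last byte."""
    buf = b""
    while True:
        towrite = number & 0x7F
        number >>= 7
        if not number:
            return buf + bytes((towrite,))
        buf += bytes((towrite | 0x80,))


def encode_varint_prefixed(msg: bytes) -> bytes:
    """Frame a message as <varint length><payload>."""
    return encode_uvarint(len(msg)) + msg


def split_frames(buf: bytes) -> list[bytes]:
    """Parse back-to-back varint-prefixed frames out of one buffer."""
    out, i = [], 0
    while i < len(buf):
        # Decode the length prefix in place, tracking bytes consumed.
        length, shift = 0, 0
        while True:
            byte = buf[i]
            i += 1
            length |= (byte & 0x7F) << shift
            if byte & 0x80 == 0:
                break
            shift += 7
        out.append(buf[i:i + length])
        i += length
    return out


wire = encode_varint_prefixed(b"hello") + encode_varint_prefixed(b"libp2p")
assert split_frames(wire) == [b"hello", b"libp2p"]
```

Because the prefix is self-delimiting, frames can be concatenated on the wire and split again without any out-of-band framing, which is what `read_varint_prefixed_bytes` does incrementally from a reader.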
@@ -138,95 +85,3 @@ async def read_delim(reader: Reader) -> bytes:
             f'`msg_bytes` is not delimited by b"\\n": `msg_bytes`={msg_bytes!r}'
         )
     return msg_bytes[:-1]
-
-
-def read_varint_prefixed_bytes_sync(
-    stream: BinaryIO, max_length: int = 1024 * 1024
-) -> bytes:
-    """
-    Read varint-prefixed bytes from a stream.
-
-    Args:
-        stream: A stream-like object with a read() method
-        max_length: Maximum allowed data length to prevent memory exhaustion
-
-    Returns:
-        bytes: The data without the length prefix
-
-    Raises:
-        ValueError: If the length prefix is invalid or too large
-        EOFError: If the stream ends unexpectedly
-
-    """
-    # Read the varint length prefix
-    length_bytes = b""
-    while True:
-        byte_data = stream.read(1)
-        if not byte_data:
-            raise EOFError("Stream ended while reading varint length prefix")
-
-        length_bytes += byte_data
-        if byte_data[0] & 0x80 == 0:
-            break
-
-    # Decode the length
-    length = decode_uvarint(length_bytes)
-
-    if length > max_length:
-        raise ValueError(f"Data length {length} exceeds maximum allowed {max_length}")
-
-    # Read the data
-    data = stream.read(length)
-    if len(data) != length:
-        raise EOFError(f"Expected {length} bytes, got {len(data)}")
-
-    return data
-
-
-async def read_length_prefixed_protobuf(
-    stream: INetStream, use_varint_format: bool = True, max_length: int = 1024 * 1024
-) -> bytes:
-    """Read a protobuf message from a stream, handling both formats."""
-    if use_varint_format:
-        # Read length-prefixed protobuf message from the stream
-        # First read the varint length prefix
-        length_bytes = b""
-        while True:
-            b = await stream.read(1)
-            if not b:
-                raise Exception("No length prefix received")
-
-            length_bytes += b
-            if b[0] & 0x80 == 0:
-                break
-
-        msg_length = decode_varint_from_bytes(length_bytes)
-
-        if msg_length > max_length:
-            raise Exception(
-                f"Message length {msg_length} exceeds maximum allowed {max_length}"
-            )
-
-        # Read the protobuf message
-        data = await stream.read(msg_length)
-        if len(data) != msg_length:
-            raise Exception(
-                f"Incomplete message: expected {msg_length}, got {len(data)}"
-            )
-
-        return data
-    else:
-        # Read raw protobuf message from the stream
-        # For raw format, read all available data in one go
-        data = await stream.read()
-
-        # If we got no data, raise an exception
-        if not data:
-            raise Exception("No data received in raw format")
-
-        if len(data) > max_length:
-            raise Exception(
-                f"Message length {len(data)} exceeds maximum allowed {max_length}"
-            )
-
-        return data
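The removed `read_varint_prefixed_bytes_sync` reads the prefix one byte at a time (since it cannot know the prefix length up front) and enforces a size cap before allocating. The same logic can be sketched against any file-like object; this re-implementation with hypothetical names (`read_uvarint`, `read_prefixed`) mirrors the deleted helper but is not the module's code:

```python
import io


def read_uvarint(stream) -> int:
    """Read a varint one byte at a time from a file-like object."""
    result, shift = 0, 0
    while True:
        b = stream.read(1)
        if not b:
            raise EOFError("stream ended inside varint prefix")
        result |= (b[0] & 0x7F) << shift
        if b[0] & 0x80 == 0:
            return result
        shift += 7


def read_prefixed(stream, max_length: int = 1 << 20) -> bytes:
    """Read one varint-length-prefixed message, enforcing a size cap
    before reading the payload (guards against memory exhaustion)."""
    length = read_uvarint(stream)
    if length > max_length:
        raise ValueError(f"length {length} exceeds cap {max_length}")
    data = stream.read(length)
    if len(data) != length:
        raise EOFError(f"expected {length} bytes, got {len(data)}")
    return data


buf = io.BytesIO(b"\x05hello\x03foo")
assert read_prefixed(buf) == b"hello"
assert read_prefixed(buf) == b"foo"
```

Checking the cap before `stream.read(length)` is the important ordering: a hostile peer can otherwise declare a multi-gigabyte length and force the reader to allocate it.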
newsfragments/579.feature.rst (new file)
@@ -0,0 +1 @@
+Added support for ``Kademlia DHT`` in py-libp2p.

(deleted newsfragment)
@@ -1 +0,0 @@
-remove FIXME comment since it's obsolete and 32-byte prefix support is there but not enabled by default

newsfragments/631.feature.rst (new file)
@@ -0,0 +1,7 @@
+Store public key and peer ID in peerstore during handshake
+
+Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.
+
+Added test cases to verify:
+1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
+2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet

newsfragments/678.misc.rst (new file)
@@ -0,0 +1 @@
+Refactored gossipsub heartbeat logic to use a single helper method `_handle_topic_heartbeat` that handles both fanout and gossip heartbeats.

newsfragments/679.feature.rst (new file)
@@ -0,0 +1 @@
+Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity.

newsfragments/681.breaking.rst (new file)
@@ -0,0 +1,2 @@
+Reordered the arguments to `upgrade_security` to place `is_initiator` before `peer_id`, and made `peer_id` optional.
+This allows the method to reflect the fact that peer identity is not required for inbound connections.

newsfragments/684.misc.rst (new file)
@@ -0,0 +1 @@
+Uses the `decapsulate` method of the `Multiaddr` class to clean up the observed address.

newsfragments/685.feature.rst (new file)
@@ -0,0 +1 @@
+Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic.

newsfragments/690.feature.rst (new file)
@@ -0,0 +1 @@
+Added peer exchange and backoff logic as part of the Gossipsub v1.1 upgrade.

newsfragments/702.bugfix.rst (new file)
@@ -0,0 +1 @@
+Fixed an issue in `Pubsub` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior.

(deleted newsfragment)
@@ -1 +0,0 @@
-Added `Bootstrap` peer discovery module that allows nodes to connect to predefined bootstrap peers for network discovery.

(deleted newsfragment)
@@ -1,3 +0,0 @@
-Improved type safety in `get_mux()` and `get_protocols()` by returning properly typed values instead
-of `Any`. Also updated `identify.py` and `discovery.py` to handle `None` values safely and
-compare protocols correctly.

(deleted newsfragment)
@@ -1 +0,0 @@
-Add lock for read/write to avoid interleaving receiving messages in mplex_stream.py

(deleted newsfragment)
@@ -1 +0,0 @@
-Add comprehensive tests for the relay_discovery method in circuit_relay_v2

(deleted newsfragment)
@@ -1 +0,0 @@
-Add logic to the clear_peerdata method in peerstore

(deleted newsfragment)
@@ -1 +0,0 @@
-[mplex] Add timeout and error handling during stream close

(deleted newsfragment)
@@ -1,2 +0,0 @@
-Added the `Certified Addr-Book` interface supported by the `Envelope` and `PeerRecord` classes.
-Integrated the signed-peer-record transfer in the identify/push protocols.

(deleted newsfragment)
@@ -1,2 +0,0 @@
-Added throttling for async topic validators in validate_msg, enforcing a
-concurrency limit to prevent resource exhaustion under heavy load.

(deleted newsfragment)
@@ -1 +0,0 @@
-Fixed malformed PeerId in test_peerinfo

(deleted newsfragment)
@@ -1 +0,0 @@
-Fixed a typecheck error using cast in peerinfo.py

(deleted newsfragment)
@@ -1 +0,0 @@
-Improve the error message in decode_uvarint_from_stream in libp2p/utils/varint.py

(deleted newsfragment)
@@ -1 +0,0 @@
-The identify protocol now uses length-prefixed messages by default; use the use_varint_format param for the old raw format.