Merge remote-tracking branch 'upstream/main'

Akash Mondal
2025-09-02 16:16:21 +00:00
31 changed files with 1678 additions and 436 deletions

.gitignore

@ -178,3 +178,6 @@ env.bak/
#lockfiles
uv.lock
poetry.lock
# Sphinx documentation build
_build/


@ -0,0 +1,194 @@
Multiple Connections Per Peer
=============================

This example demonstrates how to use the multiple connections per peer feature in py-libp2p.

Overview
--------

The multiple connections per peer feature allows a libp2p node to maintain multiple network connections to the same peer. This provides several benefits:

- **Improved reliability**: If one connection fails, others remain available
- **Better performance**: Load can be distributed across multiple connections
- **Enhanced throughput**: Multiple streams can be created in parallel
- **Fault tolerance**: Redundant connections provide backup paths

Configuration
-------------

The feature is configured through the `ConnectionConfig` class:

.. code-block:: python

    from libp2p.network.swarm import ConnectionConfig

    # Default configuration
    config = ConnectionConfig()
    print(f"Max connections per peer: {config.max_connections_per_peer}")
    print(f"Load balancing strategy: {config.load_balancing_strategy}")

    # Custom configuration
    custom_config = ConnectionConfig(
        max_connections_per_peer=5,
        connection_timeout=60.0,
        load_balancing_strategy="least_loaded",
    )

Load Balancing Strategies
-------------------------

Two load balancing strategies are available:

**Round Robin** (default)
    Cycles through connections in order, distributing load evenly.

**Least Loaded**
    Selects the connection with the fewest active streams.
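
For intuition, here is a minimal sketch of the two selection rules (mirroring the `_select_connection` helper added in this change; the `connections` list and `rr_index` counter are illustrative stand-ins for the swarm's internal state):

.. code-block:: python

    def select(connections, rr_index, strategy):
        """Pick one connection; return (connection, next rr_index)."""
        if strategy == "round_robin":
            # Cycle through connections in order
            conn = connections[rr_index % len(connections)]
            return conn, rr_index + 1
        # "least_loaded": the connection with the fewest active streams wins
        return min(connections, key=lambda c: len(c.get_streams())), rr_index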

API Usage
---------

The new API provides direct access to multiple connections:

.. code-block:: python

    from libp2p import new_swarm

    # Create swarm with multiple connections support
    swarm = new_swarm()

    # Dial a peer - returns list of connections
    connections = await swarm.dial_peer(peer_id)
    print(f"Established {len(connections)} connections")

    # Get all connections to a peer
    peer_connections = swarm.get_connections(peer_id)

    # Get all connections (across all peers)
    all_connections = swarm.get_connections()

    # Get the complete connections map
    connections_map = swarm.get_connections_map()

    # Backward compatibility - get single connection
    single_conn = swarm.get_connection(peer_id)

Backward Compatibility
----------------------

Existing code continues to work through backward compatibility features:

.. code-block:: python

    # Legacy 1:1 mapping (returns first connection for each peer)
    legacy_connections = swarm.connections_legacy

    # Single connection access (returns first available connection)
    conn = swarm.get_connection(peer_id)

Example
-------

A complete working example is available in the `examples/doc-examples/multiple_connections_example.py` file.

Production Configuration
------------------------

For production use, consider these settings:

**RetryConfig Parameters**

The `RetryConfig` class controls connection retry behavior with exponential backoff:

- **max_retries**: Maximum number of retry attempts before giving up (default: 3)
- **initial_delay**: Initial delay in seconds before the first retry (default: 0.1s)
- **max_delay**: Maximum delay cap to prevent excessive wait times (default: 30.0s)
- **backoff_multiplier**: Exponential backoff multiplier; each retry multiplies the delay by this factor (default: 2.0)
- **jitter_factor**: Random jitter (0.0-1.0) to prevent synchronized retries (default: 0.1)

**ConnectionConfig Parameters**

The `ConnectionConfig` class manages multi-connection behavior:

- **max_connections_per_peer**: Maximum connections allowed to a single peer (default: 3)
- **connection_timeout**: Timeout for establishing new connections in seconds (default: 30.0s)
- **load_balancing_strategy**: Strategy for distributing streams ("round_robin" or "least_loaded")

**Load Balancing Strategies Explained**

- **round_robin**: Cycles through connections in order, distributing load evenly. Simple and predictable.
- **least_loaded**: Selects the connection with the fewest active streams. Better for performance but more complex.

.. code-block:: python

    from libp2p import new_swarm
    from libp2p.network.swarm import ConnectionConfig, RetryConfig

    # Production-ready configuration
    retry_config = RetryConfig(
        max_retries=3,  # Maximum retry attempts before giving up
        initial_delay=0.1,  # Start with 100ms delay
        max_delay=30.0,  # Cap exponential backoff at 30 seconds
        backoff_multiplier=2.0,  # Double delay each retry (100ms -> 200ms -> 400ms)
        jitter_factor=0.1,  # Add 10% random jitter to prevent thundering herd
    )

    connection_config = ConnectionConfig(
        max_connections_per_peer=3,  # Allow up to 3 connections per peer
        connection_timeout=30.0,  # 30 second timeout for new connections
        load_balancing_strategy="round_robin",  # Simple, predictable distribution
    )

    swarm = new_swarm(
        retry_config=retry_config,
        connection_config=connection_config,
    )

**How RetryConfig Works in Practice**

With the configuration above (`max_retries=3`), connection attempts follow this pattern:

1. **Attempt 1**: Immediate connection attempt
2. **Attempt 2**: Wait 100ms ± 10ms jitter, then retry
3. **Attempt 3**: Wait 200ms ± 20ms jitter, then retry
4. **Attempt 4**: Wait 400ms ± 40ms jitter, then retry
5. **Give up**: After the initial attempt plus 3 retries, the dial fails

With a higher `max_retries`, the delay keeps doubling (800ms, 1.6s, 3.2s, ...) until it reaches the 30-second `max_delay` cap, after which every further retry waits the capped 30.0s ± 3.0s jitter. The jitter prevents multiple clients from retrying simultaneously, reducing server load.
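
As a sanity check on the arithmetic above, a minimal sketch of the delay computation (it mirrors the `_calculate_backoff_delay` helper added in this change):

.. code-block:: python

    import random

    def backoff_delay(
        attempt: int,
        initial_delay: float = 0.1,
        backoff_multiplier: float = 2.0,
        max_delay: float = 30.0,
        jitter_factor: float = 0.1,
    ) -> float:
        # Exponential growth from initial_delay, capped at max_delay
        delay = min(initial_delay * (backoff_multiplier**attempt), max_delay)
        # Add +/- (jitter_factor * delay) to de-synchronize retrying clients
        jitter = delay * jitter_factor
        return delay + random.uniform(-jitter, jitter)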

**Parameter Tuning Guidelines**

**For Development/Testing:**

- Use lower `max_retries` (1-2) and shorter delays for faster feedback
- Example: `RetryConfig(max_retries=2, initial_delay=0.01, max_delay=0.1)`

**For Production:**

- Use moderate `max_retries` (3-5) with reasonable delays for reliability
- Example: `RetryConfig(max_retries=5, initial_delay=0.1, max_delay=60.0)`

**For High-Latency Networks:**

- Use higher `max_retries` (5-10) with longer delays
- Example: `RetryConfig(max_retries=8, initial_delay=0.5, max_delay=120.0)`

**For Load Balancing:**

- Use `round_robin` for simple, predictable behavior
- Use `least_loaded` when you need optimal performance and can handle complexity
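
For convenience, the suggested presets collected as code (values taken verbatim from the guidelines above; the variable names are illustrative):

.. code-block:: python

    from libp2p.network.swarm import RetryConfig

    dev_retry = RetryConfig(max_retries=2, initial_delay=0.01, max_delay=0.1)
    prod_retry = RetryConfig(max_retries=5, initial_delay=0.1, max_delay=60.0)
    high_latency_retry = RetryConfig(max_retries=8, initial_delay=0.5, max_delay=120.0)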

Architecture
------------

The implementation follows the same architectural patterns as the Go and JavaScript reference implementations:

- **Core data structure**: `dict[ID, list[INetConn]]` for 1:many mapping
- **API consistency**: Methods like `get_connections()` match reference implementations
- **Load balancing**: Integrated at the API level for optimal performance
- **Backward compatibility**: Maintains existing interfaces for gradual migration

This design ensures consistency across libp2p implementations while providing the benefits of multiple connections per peer.
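
To make the core structure concrete, a condensed sketch of how the 1:many mapping backs the `get_connections()` views (adapted from the `Swarm` implementation in this change):

.. code-block:: python

    # connections: dict[ID, list[INetConn]]
    def get_connections(connections, peer_id=None):
        if peer_id is not None:
            # All connections to one peer (empty list if none)
            return connections.get(peer_id, [])
        # Flatten: every connection across all peers
        return [conn for conns in connections.values() for conn in conns]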


@ -15,3 +15,4 @@ Examples
examples.kademlia
examples.mDNS
examples.random_walk
examples.multiple_connections


@ -0,0 +1,170 @@
#!/usr/bin/env python3
"""
Example demonstrating multiple connections per peer support in libp2p.

This example shows how to:
1. Configure multiple connections per peer
2. Use different load balancing strategies
3. Access multiple connections through the new API
4. Maintain backward compatibility
"""

import logging

import trio

from libp2p import new_swarm
from libp2p.network.swarm import ConnectionConfig, RetryConfig

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


async def example_basic_multiple_connections() -> None:
    """Example of basic multiple connections per peer usage."""
    logger.info("Creating swarm with multiple connections support...")

    # Create swarm with default configuration
    swarm = new_swarm()
    default_connection = ConnectionConfig()

    logger.info(f"Swarm created with peer ID: {swarm.get_peer_id()}")
    logger.info(
        f"Connection config: max_connections_per_peer="
        f"{default_connection.max_connections_per_peer}"
    )

    await swarm.close()
    logger.info("Basic multiple connections example completed")


async def example_custom_connection_config() -> None:
    """Example of custom connection configuration."""
    logger.info("Creating swarm with custom connection configuration...")

    # Custom connection configuration for high-performance scenarios
    connection_config = ConnectionConfig(
        max_connections_per_peer=5,  # More connections per peer
        connection_timeout=60.0,  # Longer timeout
        load_balancing_strategy="least_loaded",  # Use least loaded strategy
    )

    # Create swarm with custom connection config
    swarm = new_swarm(connection_config=connection_config)

    logger.info("Custom connection config applied:")
    logger.info(
        f" Max connections per peer: {connection_config.max_connections_per_peer}"
    )
    logger.info(f" Connection timeout: {connection_config.connection_timeout}s")
    logger.info(
        f" Load balancing strategy: {connection_config.load_balancing_strategy}"
    )

    await swarm.close()
    logger.info("Custom connection config example completed")


async def example_multiple_connections_api() -> None:
    """Example of using the new multiple connections API."""
    logger.info("Demonstrating multiple connections API...")

    connection_config = ConnectionConfig(
        max_connections_per_peer=3, load_balancing_strategy="round_robin"
    )
    swarm = new_swarm(connection_config=connection_config)

    logger.info("Multiple connections API features:")
    logger.info(" - dial_peer() returns list[INetConn]")
    logger.info(" - get_connections(peer_id) returns list[INetConn]")
    logger.info(" - get_connections_map() returns dict[ID, list[INetConn]]")
    logger.info(
        " - get_connection(peer_id) returns INetConn | None (backward compatibility)"
    )

    await swarm.close()
    logger.info("Multiple connections API example completed")


async def example_backward_compatibility() -> None:
    """Example of backward compatibility features."""
    logger.info("Demonstrating backward compatibility...")

    swarm = new_swarm()

    logger.info("Backward compatibility features:")
    logger.info(" - connections_legacy property provides 1:1 mapping")
    logger.info(" - get_connection() method for single connection access")
    logger.info(" - Existing code continues to work")

    await swarm.close()
    logger.info("Backward compatibility example completed")


async def example_production_ready_config() -> None:
    """Example of production-ready configuration."""
    logger.info("Creating swarm with production-ready configuration...")

    # Production-ready retry configuration
    retry_config = RetryConfig(
        max_retries=3,  # Reasonable retry limit
        initial_delay=0.1,  # Quick initial retry
        max_delay=30.0,  # Cap exponential backoff
        backoff_multiplier=2.0,  # Standard exponential backoff
        jitter_factor=0.1,  # Small jitter to prevent thundering herd
    )

    # Production-ready connection configuration
    connection_config = ConnectionConfig(
        max_connections_per_peer=3,  # Balance between performance and resource usage
        connection_timeout=30.0,  # Reasonable timeout
        load_balancing_strategy="round_robin",  # Simple, predictable strategy
    )

    # Create swarm with production config
    swarm = new_swarm(retry_config=retry_config, connection_config=connection_config)

    logger.info("Production-ready configuration applied:")
    logger.info(
        f" Retry: {retry_config.max_retries} retries, "
        f"{retry_config.max_delay}s max delay"
    )
    logger.info(f" Connections: {connection_config.max_connections_per_peer} per peer")
    logger.info(f" Load balancing: {connection_config.load_balancing_strategy}")

    await swarm.close()
    logger.info("Production-ready configuration example completed")


async def main() -> None:
    """Run all examples."""
    logger.info("Multiple Connections Per Peer Examples")
    logger.info("=" * 50)

    try:
        await example_basic_multiple_connections()
        logger.info("-" * 30)

        await example_custom_connection_config()
        logger.info("-" * 30)

        await example_multiple_connections_api()
        logger.info("-" * 30)

        await example_backward_compatibility()
        logger.info("-" * 30)

        await example_production_ready_config()
        logger.info("-" * 30)

        logger.info("All examples completed successfully!")
    except Exception as e:
        logger.error(f"Example failed: {e}")
        raise


if __name__ == "__main__":
    trio.run(main)


@ -11,6 +11,7 @@ from collections.abc import (
from importlib.metadata import version as __version
from typing import (
Literal,
Optional,
)
import multiaddr
@ -34,9 +35,6 @@ from libp2p.custom_types import (
TProtocol,
TSecurityOptions,
)
from libp2p.discovery.mdns.mdns import (
MDNSDiscovery,
)
from libp2p.host.basic_host import (
BasicHost,
)
@ -44,6 +42,8 @@ from libp2p.host.routed_host import (
RoutedHost,
)
from libp2p.network.swarm import (
ConnectionConfig,
RetryConfig,
Swarm,
)
from libp2p.peer.id import (
@ -57,17 +57,19 @@ from libp2p.security.insecure.transport import (
PLAINTEXT_PROTOCOL_ID,
InsecureTransport,
)
from libp2p.security.noise.transport import PROTOCOL_ID as NOISE_PROTOCOL_ID
from libp2p.security.noise.transport import Transport as NoiseTransport
from libp2p.security.noise.transport import (
PROTOCOL_ID as NOISE_PROTOCOL_ID,
Transport as NoiseTransport,
)
import libp2p.security.secio.transport as secio
from libp2p.stream_muxer.mplex.mplex import (
MPLEX_PROTOCOL_ID,
Mplex,
)
from libp2p.stream_muxer.yamux.yamux import (
PROTOCOL_ID as YAMUX_PROTOCOL_ID,
Yamux,
)
from libp2p.stream_muxer.yamux.yamux import PROTOCOL_ID as YAMUX_PROTOCOL_ID
from libp2p.transport.tcp.tcp import (
TCP,
)
@ -166,7 +168,8 @@ def new_swarm(
muxer_preference: Literal["YAMUX", "MPLEX"] | None = None,
listen_addrs: Sequence[multiaddr.Multiaddr] | None = None,
enable_quic: bool = False,
quic_transport_opt: QUICTransportConfig | None = None,
retry_config: Optional["RetryConfig"] = None,
connection_config: ConnectionConfig | QUICTransportConfig | None = None,
) -> INetworkService:
"""
Create a swarm instance based on the parameters.
@ -186,10 +189,6 @@ def new_swarm(
Mplex (/mplex/6.7.0) is retained for backward compatibility
but may be deprecated in the future.
"""
if not enable_quic and quic_transport_opt is not None:
logger.warning(f"QUIC config provided but QUIC not enabled, ignoring QUIC config")
quic_transport_opt = None
if key_pair is None:
key_pair = generate_new_rsa_identity()
@ -199,6 +198,7 @@ def new_swarm(
if listen_addrs is None:
if enable_quic:
quic_transport_opt = connection_config if isinstance(connection_config, QUICTransportConfig) else None
transport = QUICTransport(key_pair.private_key, config=quic_transport_opt)
else:
transport = TCP()
@ -208,6 +208,7 @@ def new_swarm(
if addr.__contains__("tcp"):
transport = TCP()
elif is_quic:
quic_transport_opt = connection_config if isinstance(connection_config, QUICTransportConfig) else None
transport = QUICTransport(key_pair.private_key, config=quic_transport_opt)
else:
raise ValueError(f"Unknown transport in listen_addrs: {listen_addrs}")
@ -255,7 +256,14 @@ def new_swarm(
# Store our key pair in peerstore
peerstore.add_key_pair(id_opt, key_pair)
return Swarm(id_opt, peerstore, upgrader, transport)
return Swarm(
id_opt,
peerstore,
upgrader,
transport,
retry_config=retry_config,
connection_config=connection_config
)
def new_host(
@ -300,11 +308,17 @@ def new_host(
peerstore_opt=peerstore_opt,
muxer_preference=muxer_preference,
listen_addrs=listen_addrs,
quic_transport_opt=quic_transport_opt if enable_quic else None
connection_config=quic_transport_opt if enable_quic else None
)
if disc_opt is not None:
return RoutedHost(swarm, disc_opt, enable_mDNS, bootstrap)
return BasicHost(network=swarm,enable_mDNS=enable_mDNS , bootstrap=bootstrap, negotitate_timeout=negotiate_timeout)
return BasicHost(
network=swarm,
enable_mDNS=enable_mDNS,
bootstrap=bootstrap,
negotitate_timeout=negotiate_timeout
)
__version__ = __version("libp2p")


@ -1412,15 +1412,16 @@ class INetwork(ABC):
----------
peerstore : IPeerStore
The peer store for managing peer information.
connections : dict[ID, INetConn]
A mapping of peer IDs to network connections.
connections : dict[ID, list[INetConn]]
A mapping of peer IDs to lists of network connections
(multiple connections per peer).
listeners : dict[str, IListener]
A mapping of listener identifiers to listener instances.
"""
peerstore: IPeerStore
connections: dict[ID, INetConn]
connections: dict[ID, list[INetConn]]
listeners: dict[str, IListener]
@abstractmethod
@ -1436,9 +1437,56 @@ class INetwork(ABC):
"""
@abstractmethod
async def dial_peer(self, peer_id: ID) -> INetConn:
def get_connections(self, peer_id: ID | None = None) -> list[INetConn]:
"""
Create a connection to the specified peer.
Get connections for peer (like JS getConnections, Go ConnsToPeer).
Parameters
----------
peer_id : ID | None
The peer ID to get connections for. If None, returns all connections.
Returns
-------
list[INetConn]
List of connections to the specified peer, or all connections
if peer_id is None.
"""
@abstractmethod
def get_connections_map(self) -> dict[ID, list[INetConn]]:
"""
Get all connections map (like JS getConnectionsMap).
Returns
-------
dict[ID, list[INetConn]]
The complete mapping of peer IDs to their connection lists.
"""
@abstractmethod
def get_connection(self, peer_id: ID) -> INetConn | None:
"""
Get single connection for backward compatibility.
Parameters
----------
peer_id : ID
The peer ID to get a connection for.
Returns
-------
INetConn | None
The first available connection, or None if no connections exist.
"""
@abstractmethod
async def dial_peer(self, peer_id: ID) -> list[INetConn]:
"""
Create connections to the specified peer with load balancing.
Parameters
----------
@ -1447,8 +1495,8 @@ class INetwork(ABC):
Returns
-------
INetConn
The network connection instance to the specified peer.
list[INetConn]
List of established connections to the peer.
Raises
------


@ -297,6 +297,11 @@ class BasicHost(IHost):
protocol, handler = await self.multiselect.negotiate(
MultiselectCommunicator(net_stream), self.negotiate_timeout
)
if protocol is None:
await net_stream.reset()
raise StreamFailure(
"Failed to negotiate protocol: no protocol selected"
)
except MultiselectError as error:
peer_id = net_stream.muxed_conn.peer_id
logger.debug(
@ -338,7 +343,7 @@ class BasicHost(IHost):
:param peer_id: ID of the peer to check
:return: True if peer has an active connection, False otherwise
"""
return peer_id in self._network.connections
return len(self._network.get_connections(peer_id)) > 0
def get_peer_connection_info(self, peer_id: ID) -> INetConn | None:
"""
@ -347,4 +352,4 @@ class BasicHost(IHost):
:param peer_id: ID of the peer to get info for
:return: Connection object if peer is connected, None otherwise
"""
return self._network.connections.get(peer_id)
return self._network.get_connection(peer_id)


@ -2,8 +2,9 @@ from collections.abc import (
Awaitable,
Callable,
)
from dataclasses import dataclass
import logging
import sys
import random
from typing import cast
from multiaddr import (
@ -41,6 +42,7 @@ from libp2p.transport.exceptions import (
OpenConnectionError,
SecurityUpgradeFailure,
)
from libp2p.transport.quic.config import QUICTransportConfig
from libp2p.transport.quic.connection import QUICConnection
from libp2p.transport.quic.transport import QUICTransport
from libp2p.transport.upgrader import (
@ -60,13 +62,62 @@ from .exceptions import (
SwarmException,
)
logging.basicConfig(
format="%(asctime)s [%(levelname)s] %(message)s",
handlers=[logging.StreamHandler(sys.stdout)],
)
logger = logging.getLogger("libp2p.network.swarm")
@dataclass
class RetryConfig:
"""
Configuration for retry logic with exponential backoff.
This configuration controls how connection attempts are retried when they fail.
The retry mechanism uses exponential backoff with jitter to prevent thundering
herd problems in distributed systems.
Attributes:
max_retries: Maximum number of retry attempts before giving up.
Default: 3 attempts
initial_delay: Initial delay in seconds before the first retry.
Default: 0.1 seconds (100ms)
max_delay: Maximum delay cap in seconds to prevent excessive wait times.
Default: 30.0 seconds
backoff_multiplier: Multiplier for exponential backoff (each retry multiplies
the delay by this factor). Default: 2.0 (doubles each time)
jitter_factor: Random jitter factor (0.0-1.0) to add randomness to delays
and prevent synchronized retries. Default: 0.1 (10% jitter)
"""
max_retries: int = 3
initial_delay: float = 0.1
max_delay: float = 30.0
backoff_multiplier: float = 2.0
jitter_factor: float = 0.1
@dataclass
class ConnectionConfig:
"""
Configuration for multi-connection support.
This configuration controls how multiple connections per peer are managed,
including connection limits, timeouts, and load balancing strategies.
Attributes:
max_connections_per_peer: Maximum number of connections allowed to a single
peer. Default: 3 connections
connection_timeout: Timeout in seconds for establishing new connections.
Default: 30.0 seconds
load_balancing_strategy: Strategy for distributing streams across connections.
Options: "round_robin" (default) or "least_loaded"
"""
max_connections_per_peer: int = 3
connection_timeout: float = 30.0
load_balancing_strategy: str = "round_robin" # or "least_loaded"
def create_default_stream_handler(network: INetworkService) -> StreamHandlerFn:
async def stream_handler(stream: INetStream) -> None:
await network.get_manager().wait_finished()
@ -79,9 +130,7 @@ class Swarm(Service, INetworkService):
peerstore: IPeerStore
upgrader: TransportUpgrader
transport: ITransport
# TODO: Connection and `peer_id` are 1-1 mapping in our implementation,
# whereas in Go one `peer_id` may point to multiple connections.
connections: dict[ID, INetConn]
connections: dict[ID, list[INetConn]]
listeners: dict[str, IListener]
common_stream_handler: StreamHandlerFn
listener_nursery: trio.Nursery | None
@ -89,18 +138,31 @@ class Swarm(Service, INetworkService):
notifees: list[INotifee]
# Enhanced: New configuration
retry_config: RetryConfig
connection_config: ConnectionConfig | QUICTransportConfig
_round_robin_index: dict[ID, int]
def __init__(
self,
peer_id: ID,
peerstore: IPeerStore,
upgrader: TransportUpgrader,
transport: ITransport,
retry_config: RetryConfig | None = None,
connection_config: ConnectionConfig | QUICTransportConfig | None = None,
):
self.self_id = peer_id
self.peerstore = peerstore
self.upgrader = upgrader
self.transport = transport
self.connections = dict()
# Enhanced: Initialize retry and connection configuration
self.retry_config = retry_config or RetryConfig()
self.connection_config = connection_config or ConnectionConfig()
# Enhanced: Initialize connections as 1:many mapping
self.connections = {}
self.listeners = dict()
# Create Notifee array
@ -111,6 +173,9 @@ class Swarm(Service, INetworkService):
self.listener_nursery = None
self.event_listener_nursery_created = trio.Event()
# Load balancing state
self._round_robin_index = {}
async def run(self) -> None:
async with trio.open_nursery() as nursery:
# Create a nursery for listener tasks.
@ -135,18 +200,74 @@ class Swarm(Service, INetworkService):
def set_stream_handler(self, stream_handler: StreamHandlerFn) -> None:
self.common_stream_handler = stream_handler
async def dial_peer(self, peer_id: ID) -> INetConn:
def get_connections(self, peer_id: ID | None = None) -> list[INetConn]:
"""
Try to create a connection to peer_id.
Get connections for peer (like JS getConnections, Go ConnsToPeer).
Parameters
----------
peer_id : ID | None
The peer ID to get connections for. If None, returns all connections.
Returns
-------
list[INetConn]
List of connections to the specified peer, or all connections
if peer_id is None.
"""
if peer_id is not None:
return self.connections.get(peer_id, [])
# Return all connections from all peers
all_conns = []
for conns in self.connections.values():
all_conns.extend(conns)
return all_conns
def get_connections_map(self) -> dict[ID, list[INetConn]]:
"""
Get all connections map (like JS getConnectionsMap).
Returns
-------
dict[ID, list[INetConn]]
The complete mapping of peer IDs to their connection lists.
"""
return self.connections.copy()
def get_connection(self, peer_id: ID) -> INetConn | None:
"""
Get single connection for backward compatibility.
Parameters
----------
peer_id : ID
The peer ID to get a connection for.
Returns
-------
INetConn | None
The first available connection, or None if no connections exist.
"""
conns = self.get_connections(peer_id)
return conns[0] if conns else None
async def dial_peer(self, peer_id: ID) -> list[INetConn]:
"""
Try to create connections to peer_id with enhanced retry logic.
:param peer_id: peer if we want to dial
:raises SwarmException: raised when an error occurs
:return: muxed connection
:return: list of muxed connections
"""
if peer_id in self.connections:
# If muxed connection already exists for peer_id,
# set muxed connection equal to existing muxed connection
return self.connections[peer_id]
# Check if we already have connections
existing_connections = self.get_connections(peer_id)
if existing_connections:
logger.debug(f"Reusing existing connections to peer {peer_id}")
return existing_connections
logger.debug("attempting to dial peer %s", peer_id)
@ -159,12 +280,19 @@ class Swarm(Service, INetworkService):
if not addrs:
raise SwarmException(f"No known addresses to peer {peer_id}")
connections = []
exceptions: list[SwarmException] = []
# Try all known addresses
# Enhanced: Try all known addresses with retry logic
for multiaddr in addrs:
try:
return await self.dial_addr(multiaddr, peer_id)
connection = await self._dial_with_retry(multiaddr, peer_id)
connections.append(connection)
# Limit number of connections per peer
if len(connections) >= self.connection_config.max_connections_per_peer:
break
except SwarmException as e:
exceptions.append(e)
logger.debug(
@ -174,15 +302,74 @@ class Swarm(Service, INetworkService):
exc_info=e,
)
# Tried all addresses, raising exception.
raise SwarmException(
f"unable to connect to {peer_id}, no addresses established a successful "
"connection (with exceptions)"
) from MultiError(exceptions)
if not connections:
# Tried all addresses, raising exception.
raise SwarmException(
f"unable to connect to {peer_id}, no addresses established a "
"successful connection (with exceptions)"
) from MultiError(exceptions)
async def dial_addr(self, addr: Multiaddr, peer_id: ID) -> INetConn:
return connections
async def _dial_with_retry(self, addr: Multiaddr, peer_id: ID) -> INetConn:
"""
Try to create a connection to peer_id with addr.
Enhanced: Dial with retry logic and exponential backoff.
:param addr: the address to dial
:param peer_id: the peer we want to connect to
:raises SwarmException: raised when all retry attempts fail
:return: network connection
"""
last_exception = None
for attempt in range(self.retry_config.max_retries + 1):
try:
return await self._dial_addr_single_attempt(addr, peer_id)
except Exception as e:
last_exception = e
if attempt < self.retry_config.max_retries:
delay = self._calculate_backoff_delay(attempt)
logger.debug(
f"Connection attempt {attempt + 1} failed, "
f"retrying in {delay:.2f}s: {e}"
)
await trio.sleep(delay)
else:
logger.debug(f"All {self.retry_config.max_retries + 1} connection attempts failed")
# Convert the last exception to SwarmException for consistency
if last_exception is not None:
if isinstance(last_exception, SwarmException):
raise last_exception
else:
raise SwarmException(
f"Failed to connect after {self.retry_config.max_retries} attempts"
) from last_exception
# This should never be reached, but mypy requires it
raise SwarmException("Unexpected error in retry logic")
def _calculate_backoff_delay(self, attempt: int) -> float:
"""
Enhanced: Calculate backoff delay with jitter to prevent thundering herd.
:param attempt: the current attempt number (0-based)
:return: delay in seconds
"""
delay = min(
self.retry_config.initial_delay
* (self.retry_config.backoff_multiplier**attempt),
self.retry_config.max_delay,
)
# Add jitter to prevent synchronized retries
jitter = delay * self.retry_config.jitter_factor
return delay + random.uniform(-jitter, jitter)
async def _dial_addr_single_attempt(self, addr: Multiaddr, peer_id: ID) -> INetConn:
"""
Enhanced: Single attempt to dial an address (extracted from original dial_addr).
:param addr: the address we want to connect with
:param peer_id: the peer we want to connect to
:raises SwarmException: raised when an error occurs
@ -236,25 +423,102 @@ class Swarm(Service, INetworkService):
logger.debug("successfully dialed peer %s", peer_id)
return swarm_conn
async def dial_addr(self, addr: Multiaddr, peer_id: ID) -> INetConn:
"""
Enhanced: Try to create a connection to peer_id with addr using retry logic.
:param addr: the address we want to connect with
:param peer_id: the peer we want to connect to
:raises SwarmException: raised when an error occurs
:return: network connection
"""
return await self._dial_with_retry(addr, peer_id)
async def new_stream(self, peer_id: ID) -> INetStream:
"""
Enhanced: Create a new stream with load balancing across multiple connections.
:param peer_id: peer_id of destination
:raises SwarmException: raised when an error occurs
:return: net stream instance
"""
logger.debug("attempting to open a stream to peer %s", peer_id)
# Get existing connections or dial new ones
connections = self.get_connections(peer_id)
if not connections:
connections = await self.dial_peer(peer_id)
if (
isinstance(self.transport, QUICTransport)
and self.connections[peer_id] is not None
):
conn = cast(SwarmConn, self.connections[peer_id])
# Load balancing strategy at interface level
connection = self._select_connection(connections, peer_id)
if isinstance(self.transport, QUICTransport) and connection is not None:
conn = cast(SwarmConn, connection)
return await conn.new_stream()
swarm_conn = await self.dial_peer(peer_id)
net_stream = await swarm_conn.new_stream()
logger.debug("successfully opened a stream to peer %s", peer_id)
return net_stream
try:
net_stream = await connection.new_stream()
logger.debug("successfully opened a stream to peer %s", peer_id)
return net_stream
except Exception as e:
logger.debug(f"Failed to create stream on connection: {e}")
# Try other connections if available
for other_conn in connections:
if other_conn != connection:
try:
net_stream = await other_conn.new_stream()
logger.debug(
f"Successfully opened a stream to peer {peer_id} "
"using alternative connection"
)
return net_stream
except Exception:
continue
# All connections failed, raise exception
raise SwarmException(f"Failed to create stream to peer {peer_id}") from e
def _select_connection(self, connections: list[INetConn], peer_id: ID) -> INetConn:
"""
Select connection based on load balancing strategy.
Parameters
----------
connections : list[INetConn]
List of available connections.
peer_id : ID
The peer ID for round-robin tracking.
Returns
-------
INetConn
Selected connection.
"""
if not connections:
raise ValueError("No connections available")
strategy = self.connection_config.load_balancing_strategy
if strategy == "round_robin":
# Simple round-robin selection
if peer_id not in self._round_robin_index:
self._round_robin_index[peer_id] = 0
index = self._round_robin_index[peer_id] % len(connections)
self._round_robin_index[peer_id] += 1
return connections[index]
elif strategy == "least_loaded":
# Find connection with least streams
return min(connections, key=lambda c: len(c.get_streams()))
else:
# Default to first connection
return connections[0]
async def listen(self, *multiaddrs: Multiaddr) -> bool:
"""
@ -367,9 +631,9 @@ class Swarm(Service, INetworkService):
# Perform alternative cleanup if the manager isn't initialized
# Close all connections manually
if hasattr(self, "connections"):
for conn_id in list(self.connections.keys()):
conn = self.connections[conn_id]
await conn.close()
for peer_id, conns in list(self.connections.items()):
for conn in conns:
await conn.close()
# Clear connection tracking dictionary
self.connections.clear()
@ -399,12 +663,28 @@ class Swarm(Service, INetworkService):
logger.debug("swarm successfully closed")
async def close_peer(self, peer_id: ID) -> None:
if peer_id not in self.connections:
"""
Close all connections to the specified peer.
Parameters
----------
peer_id : ID
The peer ID to close connections for.
"""
connections = self.get_connections(peer_id)
if not connections:
return
connection = self.connections[peer_id]
# NOTE: `connection.close` will delete `peer_id` from `self.connections`
# and `notify_disconnected` for us.
await connection.close()
# Close all connections
for connection in connections:
try:
await connection.close()
except Exception as e:
logger.warning(f"Error closing connection to {peer_id}: {e}")
# Remove from connections dict
self.connections.pop(peer_id, None)
logger.debug("successfully closed all connections to peer %s", peer_id)
@ -424,21 +704,71 @@ class Swarm(Service, INetworkService):
logger.debug("Swarm::add_conn | starting swarm connection")
self.manager.run_task(swarm_conn.start)
await swarm_conn.event_started.wait()
# Store muxed_conn with peer id
self.connections[muxed_conn.peer_id] = swarm_conn
# Add to connections dict with deduplication
peer_id = muxed_conn.peer_id
if peer_id not in self.connections:
self.connections[peer_id] = []
# Check for duplicate connections by comparing the underlying muxed connection
for existing_conn in self.connections[peer_id]:
if existing_conn.muxed_conn == muxed_conn:
logger.debug(f"Connection already exists for peer {peer_id}")
# existing_conn is a SwarmConn since it's stored in the connections list
return existing_conn # type: ignore[return-value]
self.connections[peer_id].append(swarm_conn)
# Trim if we exceed max connections
max_conns = self.connection_config.max_connections_per_peer
if len(self.connections[peer_id]) > max_conns:
self._trim_connections(peer_id)
# Call notifiers since event occurred
await self.notify_connected(swarm_conn)
return swarm_conn
def _trim_connections(self, peer_id: ID) -> None:
"""
Remove oldest connections when limit is exceeded.
"""
connections = self.connections[peer_id]
if len(connections) <= self.connection_config.max_connections_per_peer:
return
# Sort by creation time and remove oldest
# For now, just keep the most recent connections
max_conns = self.connection_config.max_connections_per_peer
connections_to_remove = connections[:-max_conns]
for conn in connections_to_remove:
logger.debug(f"Trimming old connection for peer {peer_id}")
trio.lowlevel.spawn_system_task(self._close_connection_async, conn)
# Keep only the most recent connections
max_conns = self.connection_config.max_connections_per_peer
self.connections[peer_id] = connections[-max_conns:]
async def _close_connection_async(self, connection: INetConn) -> None:
"""Close a connection asynchronously."""
try:
await connection.close()
except Exception as e:
logger.warning(f"Error closing connection: {e}")
def remove_conn(self, swarm_conn: SwarmConn) -> None:
"""
Simply remove the connection from Swarm's records, without closing
the connection.
"""
peer_id = swarm_conn.muxed_conn.peer_id
if peer_id not in self.connections:
return
del self.connections[peer_id]
if peer_id in self.connections:
self.connections[peer_id] = [
conn for conn in self.connections[peer_id] if conn != swarm_conn
]
if not self.connections[peer_id]:
del self.connections[peer_id]
# Notifee
@ -484,3 +814,21 @@ class Swarm(Service, INetworkService):
async with trio.open_nursery() as nursery:
for notifee in self.notifees:
nursery.start_soon(notifier, notifee)
# Backward compatibility properties
@property
def connections_legacy(self) -> dict[ID, INetConn]:
"""
Legacy 1:1 mapping for backward compatibility.
Returns
-------
dict[ID, INetConn]
Legacy mapping with only the first connection per peer.
"""
legacy_conns = {}
for peer_id, conns in self.connections.items():
if conns:
legacy_conns[peer_id] = conns[0]
return legacy_conns


@ -15,6 +15,7 @@ from libp2p.custom_types import (
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerstore import env_to_send_in_RPC
from .exceptions import (
PubsubRouterError,
@ -103,6 +104,11 @@ class FloodSub(IPubsubRouter):
)
rpc_msg = rpc_pb2.RPC(publish=[pubsub_msg])
# Add the senderRecord of the peer in the RPC msg
if isinstance(self.pubsub, Pubsub):
envelope_bytes, _ = env_to_send_in_RPC(self.pubsub.host)
rpc_msg.senderRecord = envelope_bytes
logger.debug("publishing message %s", pubsub_msg)
if self.pubsub is None:


@ -34,10 +34,12 @@ from libp2p.peer.peerinfo import (
)
from libp2p.peer.peerstore import (
PERMANENT_ADDR_TTL,
env_to_send_in_RPC,
)
from libp2p.pubsub import (
floodsub,
)
from libp2p.pubsub.utils import maybe_consume_signed_record
from libp2p.tools.async_service import (
Service,
)
@ -226,6 +228,12 @@ class GossipSub(IPubsubRouter, Service):
:param rpc: RPC message
:param sender_peer_id: id of the peer who sent the message
"""
# Process the senderRecord if sent
if isinstance(self.pubsub, Pubsub):
if not maybe_consume_signed_record(rpc, self.pubsub.host, sender_peer_id):
logger.error("Received an invalid-signed-record, ignoring the message")
return
control_message = rpc.control
# Relay each rpc control message to the appropriate handler
@ -253,6 +261,11 @@ class GossipSub(IPubsubRouter, Service):
)
rpc_msg = rpc_pb2.RPC(publish=[pubsub_msg])
# Add the senderRecord of the peer in the RPC msg
if isinstance(self.pubsub, Pubsub):
envelope_bytes, _ = env_to_send_in_RPC(self.pubsub.host)
rpc_msg.senderRecord = envelope_bytes
logger.debug("publishing message %s", pubsub_msg)
for peer_id in peers_gen:
@ -818,6 +831,13 @@ class GossipSub(IPubsubRouter, Service):
# 1) Package these messages into a single packet
packet: rpc_pb2.RPC = rpc_pb2.RPC()
# Here an RPC message is being created and published in response
# to the iwant control msg, so we will send a freshly created senderRecord
# with the RPC msg
if isinstance(self.pubsub, Pubsub):
envelope_bytes, _ = env_to_send_in_RPC(self.pubsub.host)
packet.senderRecord = envelope_bytes
packet.publish.extend(msgs_to_forward)
if self.pubsub is None:
@ -973,6 +993,12 @@ class GossipSub(IPubsubRouter, Service):
raise NoPubsubAttached
# Add control message to packet
packet: rpc_pb2.RPC = rpc_pb2.RPC()
# Add the sender's peer-record in the RPC msg
if isinstance(self.pubsub, Pubsub):
envelope_bytes, _ = env_to_send_in_RPC(self.pubsub.host)
packet.senderRecord = envelope_bytes
packet.control.CopyFrom(control_msg)
# Get stream for peer from pubsub


@ -14,6 +14,7 @@ message RPC {
}
optional ControlMessage control = 3;
optional bytes senderRecord = 4;
}
message Message {


@ -1,11 +1,12 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/pubsub/pb/rpc.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
@ -13,39 +14,39 @@ _sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1alibp2p/pubsub/pb/rpc.proto\x12\tpubsub.pb\"\xb4\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"T\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\"\n\x05peers\x18\x02 \x03(\x0b\x32\x13.pubsub.pb.PeerInfo\x12\x0f\n\x07\x62\x61\x63koff\x18\x03 \x01(\x04\"4\n\x08PeerInfo\x12\x0e\n\x06peerID\x18\x01 \x01(\x0c\x12\x18\n\x10signedPeerRecord\x18\x02 \x01(\x0c\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02')
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1alibp2p/pubsub/pb/rpc.proto\x12\tpubsub.pb\"\xca\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x12\x14\n\x0csenderRecord\x18\x04 \x01(\x0c\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"T\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\"\n\x05peers\x18\x02 \x03(\x0b\x32\x13.pubsub.pb.PeerInfo\x12\x0f\n\x07\x62\x61\x63koff\x18\x03 \x01(\x04\"4\n\x08PeerInfo\x12\x0e\n\x06peerID\x18\x01 \x01(\x0c\x12\x18\n\x10signedPeerRecord\x18\x02 \x01(\x0c\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.pubsub.pb.rpc_pb2', globals())
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.pubsub.pb.rpc_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_RPC._serialized_start=42
_RPC._serialized_end=222
_RPC_SUBOPTS._serialized_start=177
_RPC_SUBOPTS._serialized_end=222
_MESSAGE._serialized_start=224
_MESSAGE._serialized_end=329
_CONTROLMESSAGE._serialized_start=332
_CONTROLMESSAGE._serialized_end=508
_CONTROLIHAVE._serialized_start=510
_CONTROLIHAVE._serialized_end=561
_CONTROLIWANT._serialized_start=563
_CONTROLIWANT._serialized_end=597
_CONTROLGRAFT._serialized_start=599
_CONTROLGRAFT._serialized_end=630
_CONTROLPRUNE._serialized_start=632
_CONTROLPRUNE._serialized_end=716
_PEERINFO._serialized_start=718
_PEERINFO._serialized_end=770
_TOPICDESCRIPTOR._serialized_start=773
_TOPICDESCRIPTOR._serialized_end=1164
_TOPICDESCRIPTOR_AUTHOPTS._serialized_start=906
_TOPICDESCRIPTOR_AUTHOPTS._serialized_end=1030
_TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_start=992
_TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_end=1030
_TOPICDESCRIPTOR_ENCOPTS._serialized_start=1033
_TOPICDESCRIPTOR_ENCOPTS._serialized_end=1164
_TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_start=1121
_TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_end=1164
_globals['_RPC']._serialized_start=42
_globals['_RPC']._serialized_end=244
_globals['_RPC_SUBOPTS']._serialized_start=199
_globals['_RPC_SUBOPTS']._serialized_end=244
_globals['_MESSAGE']._serialized_start=246
_globals['_MESSAGE']._serialized_end=351
_globals['_CONTROLMESSAGE']._serialized_start=354
_globals['_CONTROLMESSAGE']._serialized_end=530
_globals['_CONTROLIHAVE']._serialized_start=532
_globals['_CONTROLIHAVE']._serialized_end=583
_globals['_CONTROLIWANT']._serialized_start=585
_globals['_CONTROLIWANT']._serialized_end=619
_globals['_CONTROLGRAFT']._serialized_start=621
_globals['_CONTROLGRAFT']._serialized_end=652
_globals['_CONTROLPRUNE']._serialized_start=654
_globals['_CONTROLPRUNE']._serialized_end=738
_globals['_PEERINFO']._serialized_start=740
_globals['_PEERINFO']._serialized_end=792
_globals['_TOPICDESCRIPTOR']._serialized_start=795
_globals['_TOPICDESCRIPTOR']._serialized_end=1186
_globals['_TOPICDESCRIPTOR_AUTHOPTS']._serialized_start=928
_globals['_TOPICDESCRIPTOR_AUTHOPTS']._serialized_end=1052
_globals['_TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE']._serialized_start=1014
_globals['_TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE']._serialized_end=1052
_globals['_TOPICDESCRIPTOR_ENCOPTS']._serialized_start=1055
_globals['_TOPICDESCRIPTOR_ENCOPTS']._serialized_end=1186
_globals['_TOPICDESCRIPTOR_ENCOPTS_ENCMODE']._serialized_start=1143
_globals['_TOPICDESCRIPTOR_ENCOPTS_ENCMODE']._serialized_end=1186
# @@protoc_insertion_point(module_scope)


@ -1,323 +1,132 @@
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file
Modified from https://github.com/libp2p/go-libp2p-pubsub/blob/master/pb/rpc.proto"""
from google.protobuf.internal import containers as _containers
from google.protobuf.internal import enum_type_wrapper as _enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Iterable as _Iterable, Mapping as _Mapping, Optional as _Optional, Union as _Union
import builtins
import collections.abc
import google.protobuf.descriptor
import google.protobuf.internal.containers
import google.protobuf.internal.enum_type_wrapper
import google.protobuf.message
import sys
import typing
DESCRIPTOR: _descriptor.FileDescriptor
if sys.version_info >= (3, 10):
import typing as typing_extensions
else:
import typing_extensions
class RPC(_message.Message):
__slots__ = ("subscriptions", "publish", "control", "senderRecord")
class SubOpts(_message.Message):
__slots__ = ("subscribe", "topicid")
SUBSCRIBE_FIELD_NUMBER: _ClassVar[int]
TOPICID_FIELD_NUMBER: _ClassVar[int]
subscribe: bool
topicid: str
def __init__(self, subscribe: bool = ..., topicid: _Optional[str] = ...) -> None: ...
SUBSCRIPTIONS_FIELD_NUMBER: _ClassVar[int]
PUBLISH_FIELD_NUMBER: _ClassVar[int]
CONTROL_FIELD_NUMBER: _ClassVar[int]
SENDERRECORD_FIELD_NUMBER: _ClassVar[int]
subscriptions: _containers.RepeatedCompositeFieldContainer[RPC.SubOpts]
publish: _containers.RepeatedCompositeFieldContainer[Message]
control: ControlMessage
senderRecord: bytes
def __init__(self, subscriptions: _Optional[_Iterable[_Union[RPC.SubOpts, _Mapping]]] = ..., publish: _Optional[_Iterable[_Union[Message, _Mapping]]] = ..., control: _Optional[_Union[ControlMessage, _Mapping]] = ..., senderRecord: _Optional[bytes] = ...) -> None: ... # type: ignore
DESCRIPTOR: google.protobuf.descriptor.FileDescriptor
class Message(_message.Message):
__slots__ = ("from_id", "data", "seqno", "topicIDs", "signature", "key")
FROM_ID_FIELD_NUMBER: _ClassVar[int]
DATA_FIELD_NUMBER: _ClassVar[int]
SEQNO_FIELD_NUMBER: _ClassVar[int]
TOPICIDS_FIELD_NUMBER: _ClassVar[int]
SIGNATURE_FIELD_NUMBER: _ClassVar[int]
KEY_FIELD_NUMBER: _ClassVar[int]
from_id: bytes
data: bytes
seqno: bytes
topicIDs: _containers.RepeatedScalarFieldContainer[str]
signature: bytes
key: bytes
def __init__(self, from_id: _Optional[bytes] = ..., data: _Optional[bytes] = ..., seqno: _Optional[bytes] = ..., topicIDs: _Optional[_Iterable[str]] = ..., signature: _Optional[bytes] = ..., key: _Optional[bytes] = ...) -> None: ...
@typing.final
class RPC(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class ControlMessage(_message.Message):
__slots__ = ("ihave", "iwant", "graft", "prune")
IHAVE_FIELD_NUMBER: _ClassVar[int]
IWANT_FIELD_NUMBER: _ClassVar[int]
GRAFT_FIELD_NUMBER: _ClassVar[int]
PRUNE_FIELD_NUMBER: _ClassVar[int]
ihave: _containers.RepeatedCompositeFieldContainer[ControlIHave]
iwant: _containers.RepeatedCompositeFieldContainer[ControlIWant]
graft: _containers.RepeatedCompositeFieldContainer[ControlGraft]
prune: _containers.RepeatedCompositeFieldContainer[ControlPrune]
def __init__(self, ihave: _Optional[_Iterable[_Union[ControlIHave, _Mapping]]] = ..., iwant: _Optional[_Iterable[_Union[ControlIWant, _Mapping]]] = ..., graft: _Optional[_Iterable[_Union[ControlGraft, _Mapping]]] = ..., prune: _Optional[_Iterable[_Union[ControlPrune, _Mapping]]] = ...) -> None: ... # type: ignore
@typing.final
class SubOpts(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class ControlIHave(_message.Message):
__slots__ = ("topicID", "messageIDs")
TOPICID_FIELD_NUMBER: _ClassVar[int]
MESSAGEIDS_FIELD_NUMBER: _ClassVar[int]
topicID: str
messageIDs: _containers.RepeatedScalarFieldContainer[str]
def __init__(self, topicID: _Optional[str] = ..., messageIDs: _Optional[_Iterable[str]] = ...) -> None: ...
SUBSCRIBE_FIELD_NUMBER: builtins.int
TOPICID_FIELD_NUMBER: builtins.int
subscribe: builtins.bool
"""subscribe or unsubscribe"""
topicid: builtins.str
def __init__(
self,
*,
subscribe: builtins.bool | None = ...,
topicid: builtins.str | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["subscribe", b"subscribe", "topicid", b"topicid"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["subscribe", b"subscribe", "topicid", b"topicid"]) -> None: ...
class ControlIWant(_message.Message):
__slots__ = ("messageIDs",)
MESSAGEIDS_FIELD_NUMBER: _ClassVar[int]
messageIDs: _containers.RepeatedScalarFieldContainer[str]
def __init__(self, messageIDs: _Optional[_Iterable[str]] = ...) -> None: ...
SUBSCRIPTIONS_FIELD_NUMBER: builtins.int
PUBLISH_FIELD_NUMBER: builtins.int
CONTROL_FIELD_NUMBER: builtins.int
@property
def subscriptions(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___RPC.SubOpts]: ...
@property
def publish(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Message]: ...
@property
def control(self) -> global___ControlMessage: ...
def __init__(
self,
*,
subscriptions: collections.abc.Iterable[global___RPC.SubOpts] | None = ...,
publish: collections.abc.Iterable[global___Message] | None = ...,
control: global___ControlMessage | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["control", b"control"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["control", b"control", "publish", b"publish", "subscriptions", b"subscriptions"]) -> None: ...
class ControlGraft(_message.Message):
__slots__ = ("topicID",)
TOPICID_FIELD_NUMBER: _ClassVar[int]
topicID: str
def __init__(self, topicID: _Optional[str] = ...) -> None: ...
global___RPC = RPC
class ControlPrune(_message.Message):
__slots__ = ("topicID", "peers", "backoff")
TOPICID_FIELD_NUMBER: _ClassVar[int]
PEERS_FIELD_NUMBER: _ClassVar[int]
BACKOFF_FIELD_NUMBER: _ClassVar[int]
topicID: str
peers: _containers.RepeatedCompositeFieldContainer[PeerInfo]
backoff: int
def __init__(self, topicID: _Optional[str] = ..., peers: _Optional[_Iterable[_Union[PeerInfo, _Mapping]]] = ..., backoff: _Optional[int] = ...) -> None: ... # type: ignore
@typing.final
class Message(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class PeerInfo(_message.Message):
__slots__ = ("peerID", "signedPeerRecord")
PEERID_FIELD_NUMBER: _ClassVar[int]
SIGNEDPEERRECORD_FIELD_NUMBER: _ClassVar[int]
peerID: bytes
signedPeerRecord: bytes
def __init__(self, peerID: _Optional[bytes] = ..., signedPeerRecord: _Optional[bytes] = ...) -> None: ...
FROM_ID_FIELD_NUMBER: builtins.int
DATA_FIELD_NUMBER: builtins.int
SEQNO_FIELD_NUMBER: builtins.int
TOPICIDS_FIELD_NUMBER: builtins.int
SIGNATURE_FIELD_NUMBER: builtins.int
KEY_FIELD_NUMBER: builtins.int
from_id: builtins.bytes
data: builtins.bytes
seqno: builtins.bytes
signature: builtins.bytes
key: builtins.bytes
@property
def topicIDs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.str]: ...
def __init__(
self,
*,
from_id: builtins.bytes | None = ...,
data: builtins.bytes | None = ...,
seqno: builtins.bytes | None = ...,
topicIDs: collections.abc.Iterable[builtins.str] | None = ...,
signature: builtins.bytes | None = ...,
key: builtins.bytes | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["data", b"data", "from_id", b"from_id", "key", b"key", "seqno", b"seqno", "signature", b"signature"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["data", b"data", "from_id", b"from_id", "key", b"key", "seqno", b"seqno", "signature", b"signature", "topicIDs", b"topicIDs"]) -> None: ...
global___Message = Message
@typing.final
class ControlMessage(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
IHAVE_FIELD_NUMBER: builtins.int
IWANT_FIELD_NUMBER: builtins.int
GRAFT_FIELD_NUMBER: builtins.int
PRUNE_FIELD_NUMBER: builtins.int
@property
def ihave(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ControlIHave]: ...
@property
def iwant(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ControlIWant]: ...
@property
def graft(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ControlGraft]: ...
@property
def prune(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___ControlPrune]: ...
def __init__(
self,
*,
ihave: collections.abc.Iterable[global___ControlIHave] | None = ...,
iwant: collections.abc.Iterable[global___ControlIWant] | None = ...,
graft: collections.abc.Iterable[global___ControlGraft] | None = ...,
prune: collections.abc.Iterable[global___ControlPrune] | None = ...,
) -> None: ...
def ClearField(self, field_name: typing.Literal["graft", b"graft", "ihave", b"ihave", "iwant", b"iwant", "prune", b"prune"]) -> None: ...
global___ControlMessage = ControlMessage
@typing.final
class ControlIHave(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
TOPICID_FIELD_NUMBER: builtins.int
MESSAGEIDS_FIELD_NUMBER: builtins.int
topicID: builtins.str
@property
def messageIDs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.str]: ...
def __init__(
self,
*,
topicID: builtins.str | None = ...,
messageIDs: collections.abc.Iterable[builtins.str] | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["topicID", b"topicID"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["messageIDs", b"messageIDs", "topicID", b"topicID"]) -> None: ...
global___ControlIHave = ControlIHave
@typing.final
class ControlIWant(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
MESSAGEIDS_FIELD_NUMBER: builtins.int
@property
def messageIDs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.str]: ...
def __init__(
self,
*,
messageIDs: collections.abc.Iterable[builtins.str] | None = ...,
) -> None: ...
def ClearField(self, field_name: typing.Literal["messageIDs", b"messageIDs"]) -> None: ...
global___ControlIWant = ControlIWant
@typing.final
class ControlGraft(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
TOPICID_FIELD_NUMBER: builtins.int
topicID: builtins.str
def __init__(
self,
*,
topicID: builtins.str | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["topicID", b"topicID"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["topicID", b"topicID"]) -> None: ...
global___ControlGraft = ControlGraft
@typing.final
class ControlPrune(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
TOPICID_FIELD_NUMBER: builtins.int
PEERS_FIELD_NUMBER: builtins.int
BACKOFF_FIELD_NUMBER: builtins.int
topicID: builtins.str
backoff: builtins.int
@property
def peers(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___PeerInfo]: ...
def __init__(
self,
*,
topicID: builtins.str | None = ...,
peers: collections.abc.Iterable[global___PeerInfo] | None = ...,
backoff: builtins.int | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["backoff", b"backoff", "topicID", b"topicID"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["backoff", b"backoff", "peers", b"peers", "topicID", b"topicID"]) -> None: ...
global___ControlPrune = ControlPrune
@typing.final
class PeerInfo(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
PEERID_FIELD_NUMBER: builtins.int
SIGNEDPEERRECORD_FIELD_NUMBER: builtins.int
peerID: builtins.bytes
signedPeerRecord: builtins.bytes
def __init__(
self,
*,
peerID: builtins.bytes | None = ...,
signedPeerRecord: builtins.bytes | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["peerID", b"peerID", "signedPeerRecord", b"signedPeerRecord"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["peerID", b"peerID", "signedPeerRecord", b"signedPeerRecord"]) -> None: ...
global___PeerInfo = PeerInfo
@typing.final
class TopicDescriptor(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
@typing.final
class AuthOpts(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class _AuthMode:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _AuthModeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[TopicDescriptor.AuthOpts._AuthMode.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
NONE: TopicDescriptor.AuthOpts._AuthMode.ValueType # 0
"""no authentication, anyone can publish"""
KEY: TopicDescriptor.AuthOpts._AuthMode.ValueType # 1
"""only messages signed by keys in the topic descriptor are accepted"""
WOT: TopicDescriptor.AuthOpts._AuthMode.ValueType # 2
"""web of trust, certificates can allow publisher set to grow"""
class AuthMode(_AuthMode, metaclass=_AuthModeEnumTypeWrapper): ...
NONE: TopicDescriptor.AuthOpts.AuthMode.ValueType # 0
"""no authentication, anyone can publish"""
KEY: TopicDescriptor.AuthOpts.AuthMode.ValueType # 1
"""only messages signed by keys in the topic descriptor are accepted"""
WOT: TopicDescriptor.AuthOpts.AuthMode.ValueType # 2
"""web of trust, certificates can allow publisher set to grow"""
MODE_FIELD_NUMBER: builtins.int
KEYS_FIELD_NUMBER: builtins.int
mode: global___TopicDescriptor.AuthOpts.AuthMode.ValueType
@property
def keys(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]:
"""root keys to trust"""
def __init__(
self,
*,
mode: global___TopicDescriptor.AuthOpts.AuthMode.ValueType | None = ...,
keys: collections.abc.Iterable[builtins.bytes] | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["mode", b"mode"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["keys", b"keys", "mode", b"mode"]) -> None: ...
@typing.final
class EncOpts(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class _EncMode:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _EncModeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[TopicDescriptor.EncOpts._EncMode.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
NONE: TopicDescriptor.EncOpts._EncMode.ValueType # 0
"""no encryption, anyone can read"""
SHAREDKEY: TopicDescriptor.EncOpts._EncMode.ValueType # 1
"""messages are encrypted with shared key"""
WOT: TopicDescriptor.EncOpts._EncMode.ValueType # 2
"""web of trust, certificates can allow publisher set to grow"""
class EncMode(_EncMode, metaclass=_EncModeEnumTypeWrapper): ...
NONE: TopicDescriptor.EncOpts.EncMode.ValueType # 0
"""no encryption, anyone can read"""
SHAREDKEY: TopicDescriptor.EncOpts.EncMode.ValueType # 1
"""messages are encrypted with shared key"""
WOT: TopicDescriptor.EncOpts.EncMode.ValueType # 2
"""web of trust, certificates can allow publisher set to grow"""
MODE_FIELD_NUMBER: builtins.int
KEYHASHES_FIELD_NUMBER: builtins.int
mode: global___TopicDescriptor.EncOpts.EncMode.ValueType
@property
def keyHashes(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]:
"""the hashes of the shared keys used (salted)"""
def __init__(
self,
*,
mode: global___TopicDescriptor.EncOpts.EncMode.ValueType | None = ...,
keyHashes: collections.abc.Iterable[builtins.bytes] | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["mode", b"mode"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["keyHashes", b"keyHashes", "mode", b"mode"]) -> None: ...
NAME_FIELD_NUMBER: builtins.int
AUTH_FIELD_NUMBER: builtins.int
ENC_FIELD_NUMBER: builtins.int
name: builtins.str
@property
def auth(self) -> global___TopicDescriptor.AuthOpts: ...
@property
def enc(self) -> global___TopicDescriptor.EncOpts: ...
def __init__(
self,
*,
name: builtins.str | None = ...,
auth: global___TopicDescriptor.AuthOpts | None = ...,
enc: global___TopicDescriptor.EncOpts | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["auth", b"auth", "enc", b"enc", "name", b"name"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["auth", b"auth", "enc", b"enc", "name", b"name"]) -> None: ...
global___TopicDescriptor = TopicDescriptor
class TopicDescriptor(_message.Message):
__slots__ = ("name", "auth", "enc")
class AuthOpts(_message.Message):
__slots__ = ("mode", "keys")
class AuthMode(int, metaclass=_enum_type_wrapper.EnumTypeWrapper):
__slots__ = ()
NONE: _ClassVar[TopicDescriptor.AuthOpts.AuthMode]
KEY: _ClassVar[TopicDescriptor.AuthOpts.AuthMode]
WOT: _ClassVar[TopicDescriptor.AuthOpts.AuthMode]
NONE: TopicDescriptor.AuthOpts.AuthMode
KEY: TopicDescriptor.AuthOpts.AuthMode
WOT: TopicDescriptor.AuthOpts.AuthMode
MODE_FIELD_NUMBER: _ClassVar[int]
KEYS_FIELD_NUMBER: _ClassVar[int]
mode: TopicDescriptor.AuthOpts.AuthMode
keys: _containers.RepeatedScalarFieldContainer[bytes]
def __init__(self, mode: _Optional[_Union[TopicDescriptor.AuthOpts.AuthMode, str]] = ..., keys: _Optional[_Iterable[bytes]] = ...) -> None: ...
class EncOpts(_message.Message):
__slots__ = ("mode", "keyHashes")
class EncMode(int, metaclass=_enum_type_wrapper.EnumTypeWrapper):
__slots__ = ()
NONE: _ClassVar[TopicDescriptor.EncOpts.EncMode]
SHAREDKEY: _ClassVar[TopicDescriptor.EncOpts.EncMode]
WOT: _ClassVar[TopicDescriptor.EncOpts.EncMode]
NONE: TopicDescriptor.EncOpts.EncMode
SHAREDKEY: TopicDescriptor.EncOpts.EncMode
WOT: TopicDescriptor.EncOpts.EncMode
MODE_FIELD_NUMBER: _ClassVar[int]
KEYHASHES_FIELD_NUMBER: _ClassVar[int]
mode: TopicDescriptor.EncOpts.EncMode
keyHashes: _containers.RepeatedScalarFieldContainer[bytes]
def __init__(self, mode: _Optional[_Union[TopicDescriptor.EncOpts.EncMode, str]] = ..., keyHashes: _Optional[_Iterable[bytes]] = ...) -> None: ...
NAME_FIELD_NUMBER: _ClassVar[int]
AUTH_FIELD_NUMBER: _ClassVar[int]
ENC_FIELD_NUMBER: _ClassVar[int]
name: str
auth: TopicDescriptor.AuthOpts
enc: TopicDescriptor.EncOpts
def __init__(self, name: _Optional[str] = ..., auth: _Optional[_Union[TopicDescriptor.AuthOpts, _Mapping]] = ..., enc: _Optional[_Union[TopicDescriptor.EncOpts, _Mapping]] = ...) -> None: ... # type: ignore

View File

@ -56,6 +56,8 @@ from libp2p.peer.id import (
from libp2p.peer.peerdata import (
PeerDataError,
)
from libp2p.peer.peerstore import env_to_send_in_RPC
from libp2p.pubsub.utils import maybe_consume_signed_record
from libp2p.tools.async_service import (
Service,
)
@ -247,6 +249,10 @@ class Pubsub(Service, IPubsub):
packet.subscriptions.extend(
[rpc_pb2.RPC.SubOpts(subscribe=True, topicid=topic_id)]
)
# Add the sender's signed peer record to the RPC message
envelope_bytes, _ = env_to_send_in_RPC(self.host)
packet.senderRecord = envelope_bytes
return packet
async def continuously_read_stream(self, stream: INetStream) -> None:
@ -263,6 +269,14 @@ class Pubsub(Service, IPubsub):
incoming: bytes = await read_varint_prefixed_bytes(stream)
rpc_incoming: rpc_pb2.RPC = rpc_pb2.RPC()
rpc_incoming.ParseFromString(incoming)
# Process the sender's signed-record if sent
if not maybe_consume_signed_record(rpc_incoming, self.host, peer_id):
logger.error(
"Received an invalid-signed-record, ignoring the incoming msg"
)
continue
if rpc_incoming.publish:
# deal with RPC.publish
for msg in rpc_incoming.publish:
@ -572,6 +586,9 @@ class Pubsub(Service, IPubsub):
[rpc_pb2.RPC.SubOpts(subscribe=True, topicid=topic_id)]
)
# Add the peer's senderRecord to the RPC message
envelope_bytes, _ = env_to_send_in_RPC(self.host)
packet.senderRecord = envelope_bytes
# Send out subscribe message to all peers
await self.message_all_peers(packet.SerializeToString())
@ -604,6 +621,9 @@ class Pubsub(Service, IPubsub):
packet.subscriptions.extend(
[rpc_pb2.RPC.SubOpts(subscribe=False, topicid=topic_id)]
)
# Add the peer's senderRecord to the RPC message
envelope_bytes, _ = env_to_send_in_RPC(self.host)
packet.senderRecord = envelope_bytes
# Send out unsubscribe message to all peers
await self.message_all_peers(packet.SerializeToString())

50
libp2p/pubsub/utils.py Normal file
View File

@ -0,0 +1,50 @@
import logging
from libp2p.abc import IHost
from libp2p.peer.envelope import consume_envelope
from libp2p.peer.id import ID
from libp2p.pubsub.pb.rpc_pb2 import RPC
logger = logging.getLogger("pubsub-example.utils")
def maybe_consume_signed_record(msg: RPC, host: IHost, peer_id: ID) -> bool:
"""
Attempt to parse and store a signed-peer-record (Envelope) received during
PubSub communication. If the record is invalid, the peer-id does not match, or
updating the peerstore fails, the function logs an error and returns False.
Parameters
----------
msg : RPC
The protobuf message received during PubSub communication.
host : IHost
The local host instance, providing access to the peerstore for storing
verified peer records.
peer_id : ID
The expected peer ID for record validation; the peer ID inside the
record must match this value.
Returns
-------
bool
True if a valid signed peer record was successfully consumed and stored,
False otherwise.
"""
if msg.HasField("senderRecord"):
try:
# Convert the signed-peer-record(Envelope) from
# protobuf bytes
envelope, record = consume_envelope(msg.senderRecord, "libp2p-peer-record")
if record.peer_id != peer_id:
return False
# Use the default TTL of 2 hours (7200 seconds)
if not host.get_peerstore().consume_peer_record(envelope, 7200):
logger.error("Failed to update the Certified-Addr-Book")
return False
except Exception as e:
logger.error("Failed to update the Certified-Addr-Book: %s", e)
return False
return True
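
A hedged sketch (not part of the diff) of how the sender and receiver sides of this exchange fit together, using only names introduced in the hunks above; `host` and `peer_id` are assumed to be in scope:

from libp2p.peer.peerstore import env_to_send_in_RPC
from libp2p.pubsub.pb import rpc_pb2
from libp2p.pubsub.utils import maybe_consume_signed_record

# Sender side: attach this host's signed peer record to an outgoing RPC.
packet = rpc_pb2.RPC()
envelope_bytes, _ = env_to_send_in_RPC(host)
packet.senderRecord = envelope_bytes

# Receiver side: validate and store the record before handling the RPC.
rpc_incoming = rpc_pb2.RPC()
rpc_incoming.ParseFromString(packet.SerializeToString())
if not maybe_consume_signed_record(rpc_incoming, host, peer_id):
    pass  # drop the message: the signed record failed validation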

View File

@ -118,6 +118,8 @@ class SecurityMultistream(ABC):
# Select protocol if non-initiator
protocol, _ = await self.multiselect.negotiate(communicator)
if protocol is None:
raise MultiselectError("fail to negotiate a security protocol")
raise MultiselectError(
"Failed to negotiate a security protocol: no protocol selected"
)
# Return transport from protocol
return self.transports[protocol]

View File

@ -85,7 +85,9 @@ class MuxerMultistream:
else:
protocol, _ = await self.multiselect.negotiate(communicator)
if protocol is None:
raise MultiselectError("fail to negotiate a stream muxer protocol")
raise MultiselectError(
"Fail to negotiate a stream muxer protocol: no protocol selected"
)
return self.transports[protocol]
async def new_conn(self, conn: ISecureConn, peer_id: ID) -> IMuxedConn:

View File

@ -51,6 +51,8 @@ class QUICTransportConfig:
"""Configuration for QUIC transport."""
# Connection settings
max_connections_per_peer: int = 3
load_balancing_strategy: str = "round_robin"
idle_timeout: float = 30.0 # Seconds before an idle connection is closed.
max_datagram_size: int = (
1200 # Maximum size of UDP datagrams to avoid IP fragmentation.
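
For illustration, the new connection settings might be supplied like this (a sketch; the import path is an assumption, and the values mirror the defaults in the hunk above):

from libp2p.transport.quic.config import QUICTransportConfig  # path assumed

quic_config = QUICTransportConfig(
    max_connections_per_peer=3,             # cap on parallel connections per peer
    load_balancing_strategy="round_robin",  # or "least_loaded"
    idle_timeout=30.0,                      # seconds before an idle connection closes
)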

View File

@ -1,9 +1,7 @@
from libp2p.abc import (
IListener,
IMuxedConn,
IRawConnection,
ISecureConn,
ITransport,
)
from libp2p.custom_types import (
TMuxerOptions,
@ -43,10 +41,6 @@ class TransportUpgrader:
self.security_multistream = SecurityMultistream(secure_transports_by_protocol)
self.muxer_multistream = MuxerMultistream(muxer_transports_by_protocol)
def upgrade_listener(self, transport: ITransport, listeners: IListener) -> None:
"""Upgrade multiaddr listeners to libp2p-transport listeners."""
# TODO: Figure out what to do with this function.
async def upgrade_security(
self,
raw_conn: IRawConnection,

View File

@ -0,0 +1 @@
Added type consistency to the multiselect negotiate method and updated all of its usages.
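
The caller-side pattern this enforces looks roughly like the following sketch; `multiselect` and `communicator` are assumed to be in scope, as in the two hunks above:

from libp2p.protocol_muxer.exceptions import MultiselectError  # path assumed

protocol, handler = await multiselect.negotiate(communicator)
if protocol is None:
    # negotiate() is now typed to return an optional protocol, so every
    # caller must handle the "no protocol selected" case explicitly.
    raise MultiselectError("Failed to negotiate a protocol: no protocol selected")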

View File

@ -0,0 +1 @@
Enhanced Swarm networking with retry logic, exponential backoff, and multi-connection support. Added configurable retry mechanisms that automatically recover from transient connection failures using exponential backoff with jitter to prevent thundering herd problems. Introduced connection pooling that allows multiple concurrent connections per peer for improved performance and fault tolerance. Added load balancing across connections and automatic connection health management. All enhancements are fully backward compatible and can be configured through new RetryConfig and ConnectionConfig classes.
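
To make the retry behaviour concrete, here is a minimal illustrative sketch of exponential backoff with jitter using the documented RetryConfig defaults; the actual Swarm implementation may differ in detail:

import random

def backoff_delay(
    attempt: int,
    initial_delay: float = 0.1,
    backoff_multiplier: float = 2.0,
    max_delay: float = 30.0,
    jitter_factor: float = 0.1,
) -> float:
    # Exponential growth capped at max_delay, plus random jitter to
    # de-synchronize retries across peers (avoids thundering herd).
    delay = min(initial_delay * (backoff_multiplier ** attempt), max_delay)
    return delay + delay * jitter_factor * random.random()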

View File

@ -0,0 +1,5 @@
Remove unused upgrade_listener function from transport upgrader
- Remove unused `upgrade_listener` function from `libp2p/transport/upgrader.py` (Issue 2 from #726)
- Clean up unused imports related to the removed function
- Improve code maintainability by removing dead code

View File

@ -0,0 +1 @@
PubSub routers now include signed-peer-records in RPC messages for secure peer-info exchange.

View File

@ -1,3 +1,10 @@
from unittest.mock import (
AsyncMock,
MagicMock,
)
import pytest
from libp2p import (
new_swarm,
)
@ -10,6 +17,9 @@ from libp2p.host.basic_host import (
from libp2p.host.defaults import (
get_default_protocols,
)
from libp2p.host.exceptions import (
StreamFailure,
)
def test_default_protocols():
@ -22,3 +32,30 @@ def test_default_protocols():
# NOTE: comparing keys for equality as handlers may be closures that do not compare
# in the way this test is concerned with
assert handlers.keys() == get_default_protocols(host).keys()
@pytest.mark.trio
async def test_swarm_stream_handler_no_protocol_selected(monkeypatch):
key_pair = create_new_key_pair()
swarm = new_swarm(key_pair)
host = BasicHost(swarm)
# Create a mock net_stream
net_stream = MagicMock()
net_stream.reset = AsyncMock()
net_stream.muxed_conn.peer_id = "peer-test"
# Monkeypatch negotiate to simulate "no protocol selected"
async def fake_negotiate(comm, timeout):
return None, None
monkeypatch.setattr(host.multiselect, "negotiate", fake_negotiate)
# Now run the handler and expect StreamFailure
with pytest.raises(
StreamFailure, match="Failed to negotiate protocol: no protocol selected"
):
await host._swarm_stream_handler(net_stream)
# Ensure reset was called since negotiation failed
net_stream.reset.assert_awaited()

View File

@ -164,8 +164,8 @@ async def test_live_peers_unexpected_drop(security_protocol):
assert peer_a_id in host_b.get_live_peers()
# Simulate unexpected connection drop by directly closing the connection
conns = host_a.get_network().connections[peer_b_id]
await conns[0].muxed_conn.close()
# Allow for connection cleanup
await trio.sleep(0.1)

View File

@ -0,0 +1,325 @@
import time
from typing import cast
from unittest.mock import Mock
import pytest
from multiaddr import Multiaddr
import trio
from libp2p.abc import INetConn, INetStream
from libp2p.network.exceptions import SwarmException
from libp2p.network.swarm import (
ConnectionConfig,
RetryConfig,
Swarm,
)
from libp2p.peer.id import ID
class MockConnection(INetConn):
"""Mock connection for testing."""
def __init__(self, peer_id: ID, is_closed: bool = False):
self.peer_id = peer_id
self._is_closed = is_closed
self.streams = set() # Track streams properly
# Mock the muxed_conn attribute that Swarm expects
self.muxed_conn = Mock()
self.muxed_conn.peer_id = peer_id
# Required by INetConn interface
self.event_started = trio.Event()
async def close(self):
self._is_closed = True
@property
def is_closed(self) -> bool:
return self._is_closed
async def new_stream(self) -> INetStream:
# Create a mock stream and add it to the connection's stream set
mock_stream = Mock(spec=INetStream)
self.streams.add(mock_stream)
return mock_stream
def get_streams(self) -> tuple[INetStream, ...]:
"""Return all streams associated with this connection."""
return tuple(self.streams)
def get_transport_addresses(self) -> list[Multiaddr]:
"""Mock implementation of get_transport_addresses."""
return []
class MockNetStream(INetStream):
"""Mock network stream for testing."""
def __init__(self, peer_id: ID):
self.peer_id = peer_id
@pytest.mark.trio
async def test_retry_config_defaults():
"""Test RetryConfig default values."""
config = RetryConfig()
assert config.max_retries == 3
assert config.initial_delay == 0.1
assert config.max_delay == 30.0
assert config.backoff_multiplier == 2.0
assert config.jitter_factor == 0.1
@pytest.mark.trio
async def test_connection_config_defaults():
"""Test ConnectionConfig default values."""
config = ConnectionConfig()
assert config.max_connections_per_peer == 3
assert config.connection_timeout == 30.0
assert config.load_balancing_strategy == "round_robin"
@pytest.mark.trio
async def test_enhanced_swarm_constructor():
"""Test enhanced Swarm constructor with new configuration."""
# Create mock dependencies
peer_id = ID(b"QmTest")
peerstore = Mock()
upgrader = Mock()
transport = Mock()
# Test with default config
swarm = Swarm(peer_id, peerstore, upgrader, transport)
assert swarm.retry_config.max_retries == 3
assert swarm.connection_config.max_connections_per_peer == 3
assert isinstance(swarm.connections, dict)
# Test with custom config
custom_retry = RetryConfig(max_retries=5, initial_delay=0.5)
custom_conn = ConnectionConfig(max_connections_per_peer=5)
swarm = Swarm(peer_id, peerstore, upgrader, transport, custom_retry, custom_conn)
assert swarm.retry_config.max_retries == 5
assert swarm.retry_config.initial_delay == 0.5
assert swarm.connection_config.max_connections_per_peer == 5
@pytest.mark.trio
async def test_swarm_backoff_calculation():
"""Test exponential backoff calculation with jitter."""
peer_id = ID(b"QmTest")
peerstore = Mock()
upgrader = Mock()
transport = Mock()
retry_config = RetryConfig(
initial_delay=0.1, max_delay=1.0, backoff_multiplier=2.0, jitter_factor=0.1
)
swarm = Swarm(peer_id, peerstore, upgrader, transport, retry_config)
# Test backoff calculation
delay1 = swarm._calculate_backoff_delay(0)
delay2 = swarm._calculate_backoff_delay(1)
delay3 = swarm._calculate_backoff_delay(2)
# Should increase exponentially
assert delay2 > delay1
assert delay3 > delay2
# Should respect max delay
assert delay1 <= 1.0
assert delay2 <= 1.0
assert delay3 <= 1.0
# Should have jitter
assert delay1 != 0.1 # Should have jitter added
@pytest.mark.trio
async def test_swarm_retry_logic():
"""Test retry logic in dial operations."""
peer_id = ID(b"QmTest")
peerstore = Mock()
upgrader = Mock()
transport = Mock()
# Configure for fast testing
retry_config = RetryConfig(
max_retries=2,
initial_delay=0.01, # Very short for testing
max_delay=0.1,
)
swarm = Swarm(peer_id, peerstore, upgrader, transport, retry_config)
# Mock the single attempt method to fail twice then succeed
attempt_count = [0]
async def mock_single_attempt(addr, peer_id):
attempt_count[0] += 1
if attempt_count[0] < 3:
raise SwarmException(f"Attempt {attempt_count[0]} failed")
return MockConnection(peer_id)
swarm._dial_addr_single_attempt = mock_single_attempt
# Test retry logic
start_time = time.time()
result = await swarm._dial_with_retry(Mock(spec=Multiaddr), peer_id)
end_time = time.time()
# Should have succeeded after 3 attempts
assert attempt_count[0] == 3
assert isinstance(result, MockConnection)
assert end_time - start_time > 0.01 # Should have some delay
@pytest.mark.trio
async def test_swarm_load_balancing_strategies():
"""Test load balancing strategies."""
peer_id = ID(b"QmTest")
peerstore = Mock()
upgrader = Mock()
transport = Mock()
swarm = Swarm(peer_id, peerstore, upgrader, transport)
# Create mock connections with different stream counts
conn1 = MockConnection(peer_id)
conn2 = MockConnection(peer_id)
conn3 = MockConnection(peer_id)
# Add some streams to simulate load
await conn1.new_stream()
await conn1.new_stream()
await conn2.new_stream()
connections = [conn1, conn2, conn3]
# Test round-robin strategy
swarm.connection_config.load_balancing_strategy = "round_robin"
# Cast to satisfy type checker
connections_cast = cast("list[INetConn]", connections)
selected1 = swarm._select_connection(connections_cast, peer_id)
selected2 = swarm._select_connection(connections_cast, peer_id)
selected3 = swarm._select_connection(connections_cast, peer_id)
# Should cycle through connections
assert selected1 in connections
assert selected2 in connections
assert selected3 in connections
# Test least loaded strategy
swarm.connection_config.load_balancing_strategy = "least_loaded"
least_loaded = swarm._select_connection(connections_cast, peer_id)
# conn3 has 0 streams, conn2 has 1 stream, conn1 has 2 streams
# So conn3 should be selected as least loaded
assert least_loaded == conn3
# Test default strategy (first connection)
swarm.connection_config.load_balancing_strategy = "unknown"
default_selected = swarm._select_connection(connections_cast, peer_id)
assert default_selected == conn1
@pytest.mark.trio
async def test_swarm_multiple_connections_api():
"""Test the new multiple connections API methods."""
peer_id = ID(b"QmTest")
peerstore = Mock()
upgrader = Mock()
transport = Mock()
swarm = Swarm(peer_id, peerstore, upgrader, transport)
# Test empty connections
assert swarm.get_connections() == []
assert swarm.get_connections(peer_id) == []
assert swarm.get_connection(peer_id) is None
assert swarm.get_connections_map() == {}
# Add some connections
conn1 = MockConnection(peer_id)
conn2 = MockConnection(peer_id)
swarm.connections[peer_id] = [conn1, conn2]
# Test get_connections with peer_id
peer_connections = swarm.get_connections(peer_id)
assert len(peer_connections) == 2
assert conn1 in peer_connections
assert conn2 in peer_connections
# Test get_connections without peer_id (all connections)
all_connections = swarm.get_connections()
assert len(all_connections) == 2
assert conn1 in all_connections
assert conn2 in all_connections
# Test get_connection (backward compatibility)
single_conn = swarm.get_connection(peer_id)
assert single_conn in [conn1, conn2]
# Test get_connections_map
connections_map = swarm.get_connections_map()
assert peer_id in connections_map
assert connections_map[peer_id] == [conn1, conn2]
@pytest.mark.trio
async def test_swarm_connection_trimming():
"""Test connection trimming when limit is exceeded."""
peer_id = ID(b"QmTest")
peerstore = Mock()
upgrader = Mock()
transport = Mock()
# Set max connections to 2
connection_config = ConnectionConfig(max_connections_per_peer=2)
swarm = Swarm(
peer_id, peerstore, upgrader, transport, connection_config=connection_config
)
# Add 3 connections
conn1 = MockConnection(peer_id)
conn2 = MockConnection(peer_id)
conn3 = MockConnection(peer_id)
swarm.connections[peer_id] = [conn1, conn2, conn3]
# Trigger trimming
swarm._trim_connections(peer_id)
# Should have only 2 connections
assert len(swarm.connections[peer_id]) == 2
# The most recent connections should remain
remaining = swarm.connections[peer_id]
assert conn2 in remaining
assert conn3 in remaining
@pytest.mark.trio
async def test_swarm_backward_compatibility():
"""Test backward compatibility features."""
peer_id = ID(b"QmTest")
peerstore = Mock()
upgrader = Mock()
transport = Mock()
swarm = Swarm(peer_id, peerstore, upgrader, transport)
# Add connections
conn1 = MockConnection(peer_id)
conn2 = MockConnection(peer_id)
swarm.connections[peer_id] = [conn1, conn2]
# Test connections_legacy property
legacy_connections = swarm.connections_legacy
assert peer_id in legacy_connections
# Should return first connection
assert legacy_connections[peer_id] in [conn1, conn2]
if __name__ == "__main__":
pytest.main([__file__])

View File

@ -51,14 +51,19 @@ async def test_swarm_dial_peer(security_protocol):
for addr in transport.get_addrs()
)
swarms[0].peerstore.add_addrs(swarms[1].get_peer_id(), addrs, 10000)
# New: dial_peer now returns list of connections
connections = await swarms[0].dial_peer(swarms[1].get_peer_id())
assert len(connections) > 0
# Verify connections are established in both directions
assert swarms[0].get_peer_id() in swarms[1].connections
assert swarms[1].get_peer_id() in swarms[0].connections
# Test: Reuse connections when we already have ones with a peer.
existing_connections = swarms[0].get_connections(swarms[1].get_peer_id())
new_connections = await swarms[0].dial_peer(swarms[1].get_peer_id())
assert new_connections == existing_connections
@pytest.mark.trio
@ -107,7 +112,8 @@ async def test_swarm_close_peer(security_protocol):
@pytest.mark.trio
async def test_swarm_remove_conn(swarm_pair):
swarm_0, swarm_1 = swarm_pair
# Get the first connection from the list
conn_0 = swarm_0.connections[swarm_1.get_peer_id()][0]
swarm_0.remove_conn(conn_0)
assert swarm_1.get_peer_id() not in swarm_0.connections
# Test: Remove twice. There should not be errors.
@ -115,6 +121,67 @@ async def test_swarm_remove_conn(swarm_pair):
assert swarm_1.get_peer_id() not in swarm_0.connections
@pytest.mark.trio
async def test_swarm_multiple_connections(security_protocol):
"""Test multiple connections per peer functionality."""
async with SwarmFactory.create_batch_and_listen(
2, security_protocol=security_protocol
) as swarms:
# Setup multiple addresses for peer
addrs = tuple(
addr
for transport in swarms[1].listeners.values()
for addr in transport.get_addrs()
)
swarms[0].peerstore.add_addrs(swarms[1].get_peer_id(), addrs, 10000)
# Dial peer - should return list of connections
connections = await swarms[0].dial_peer(swarms[1].get_peer_id())
assert len(connections) > 0
# Test get_connections method
peer_connections = swarms[0].get_connections(swarms[1].get_peer_id())
assert len(peer_connections) == len(connections)
# Test get_connections_map method
connections_map = swarms[0].get_connections_map()
assert swarms[1].get_peer_id() in connections_map
assert len(connections_map[swarms[1].get_peer_id()]) == len(connections)
# Test get_connection method (backward compatibility)
single_conn = swarms[0].get_connection(swarms[1].get_peer_id())
assert single_conn is not None
assert single_conn in connections
@pytest.mark.trio
async def test_swarm_load_balancing(security_protocol):
"""Test load balancing across multiple connections."""
async with SwarmFactory.create_batch_and_listen(
2, security_protocol=security_protocol
) as swarms:
# Setup connection
addrs = tuple(
addr
for transport in swarms[1].listeners.values()
for addr in transport.get_addrs()
)
swarms[0].peerstore.add_addrs(swarms[1].get_peer_id(), addrs, 10000)
# Create multiple streams - should use load balancing
streams = []
for _ in range(5):
stream = await swarms[0].new_stream(swarms[1].get_peer_id())
streams.append(stream)
# Verify streams were created successfully
assert len(streams) == 5
# Clean up
for stream in streams:
await stream.close()
@pytest.mark.trio
async def test_swarm_multiaddr(security_protocol):
async with SwarmFactory.create_batch_and_listen(

View File

@ -8,8 +8,10 @@ from typing import (
from unittest.mock import patch
import pytest
import multiaddr
import trio
from libp2p.crypto.rsa import create_new_key_pair
from libp2p.custom_types import AsyncValidatorFn
from libp2p.exceptions import (
ValidationError,
@ -17,9 +19,11 @@ from libp2p.exceptions import (
from libp2p.network.stream.exceptions import (
StreamEOF,
)
from libp2p.peer.envelope import Envelope, seal_record
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peer_record import PeerRecord
from libp2p.pubsub.pb import (
rpc_pb2,
)
@ -87,6 +91,45 @@ async def test_re_unsubscribe():
assert TESTING_TOPIC not in pubsubs_fsub[0].topic_ids
@pytest.mark.trio
async def test_reissue_when_listen_addrs_change():
async with PubsubFactory.create_batch_with_floodsub(2) as pubsubs_fsub:
await connect(pubsubs_fsub[0].host, pubsubs_fsub[1].host)
await pubsubs_fsub[0].subscribe(TESTING_TOPIC)
# Yield to let 0 notify 1
await trio.sleep(1)
assert pubsubs_fsub[0].my_id in pubsubs_fsub[1].peer_topics[TESTING_TOPIC]
# Check whether signed records were transferred properly in the subscribe call
envelope_b_sub = (
pubsubs_fsub[1]
.host.get_peerstore()
.get_peer_record(pubsubs_fsub[0].host.get_id())
)
assert isinstance(envelope_b_sub, Envelope)
# Simulate pubsubs_fsub[1].host listen addrs changing (different port)
new_addr = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/123")
# Patch just for the duration we force A to unsubscribe
with patch.object(pubsubs_fsub[0].host, "get_addrs", return_value=[new_addr]):
# Unsubscribe from A's side so that a new_record is issued
await pubsubs_fsub[0].unsubscribe(TESTING_TOPIC)
await trio.sleep(1)
# B should be holding A's new record with bumped seq
envelope_b_unsub = (
pubsubs_fsub[1]
.host.get_peerstore()
.get_peer_record(pubsubs_fsub[0].host.get_id())
)
assert isinstance(envelope_b_unsub, Envelope)
# This proves that a freshly signed record was issued rather than
# the latest cached one being re-used.
assert envelope_b_sub.record().seq < envelope_b_unsub.record().seq
@pytest.mark.trio
async def test_peers_subscribe():
async with PubsubFactory.create_batch_with_floodsub(2) as pubsubs_fsub:
@ -95,11 +138,71 @@ async def test_peers_subscribe():
# Yield to let 0 notify 1
await trio.sleep(1)
assert pubsubs_fsub[0].my_id in pubsubs_fsub[1].peer_topics[TESTING_TOPIC]
# Check whether signed records were transferred properly in the subscribe call
envelope_b_sub = (
pubsubs_fsub[1]
.host.get_peerstore()
.get_peer_record(pubsubs_fsub[0].host.get_id())
)
assert isinstance(envelope_b_sub, Envelope)
await pubsubs_fsub[0].unsubscribe(TESTING_TOPIC)
# Yield to let 0 notify 1
await trio.sleep(1)
assert pubsubs_fsub[0].my_id not in pubsubs_fsub[1].peer_topics[TESTING_TOPIC]
envelope_b_unsub = (
pubsubs_fsub[1]
.host.get_peerstore()
.get_peer_record(pubsubs_fsub[0].host.get_id())
)
assert isinstance(envelope_b_unsub, Envelope)
# This proves that the latest cached record was re-sent rather than
# a fresh one being created.
assert envelope_b_sub.record().seq == envelope_b_unsub.record().seq
@pytest.mark.trio
async def test_peer_subscribe_fail_upon_invalid_record_transfer():
async with PubsubFactory.create_batch_with_floodsub(2) as pubsubs_fsub:
await connect(pubsubs_fsub[0].host, pubsubs_fsub[1].host)
# Corrupt host_a's local peer record
envelope = pubsubs_fsub[0].host.get_peerstore().get_local_record()
if envelope is not None:
true_record = envelope.record()
key_pair = create_new_key_pair()
envelope.public_key = key_pair.public_key
pubsubs_fsub[0].host.get_peerstore().set_local_record(envelope)
await pubsubs_fsub[0].subscribe(TESTING_TOPIC)
# Yield to let 0 notify 1
await trio.sleep(1)
assert pubsubs_fsub[0].my_id not in pubsubs_fsub[1].peer_topics.get(
TESTING_TOPIC, set()
)
# Create a corrupt envelope with correct signature but false peer-id
false_record = PeerRecord(
ID.from_pubkey(key_pair.public_key), true_record.addrs
)
false_envelope = seal_record(
false_record, pubsubs_fsub[0].host.get_private_key()
)
pubsubs_fsub[0].host.get_peerstore().set_local_record(false_envelope)
await pubsubs_fsub[0].subscribe(TESTING_TOPIC)
# Yield to let 0 notify 1
await trio.sleep(1)
assert pubsubs_fsub[0].my_id not in pubsubs_fsub[1].peer_topics.get(
TESTING_TOPIC, set()
)
@pytest.mark.trio
async def test_get_hello_packet():

View File

@ -51,6 +51,9 @@ async def perform_simple_test(assertion_func, security_protocol):
# Extract the secured connection from either Mplex or Yamux implementation
def get_secured_conn(conn):
# conn is now a list, get the first connection
if isinstance(conn, list):
conn = conn[0]
muxed_conn = conn.muxed_conn
# Direct attribute access for known implementations
has_secured_conn = hasattr(muxed_conn, "secured_conn")

View File

@ -74,7 +74,8 @@ async def test_multiplexer_preference_parameter(muxer_preference):
assert len(connections) > 0, "Connection not established"
# Get the first connection
conns = list(connections.values())[0]
conn = conns[0] # Get first connection from the list
muxed_conn = conn.muxed_conn
# Define a simple echo protocol
@ -150,7 +151,8 @@ async def test_explicit_muxer_options(muxer_option_func, expected_stream_class):
assert len(connections) > 0, "Connection not established"
# Get the first connection
conns = list(connections.values())[0]
conn = conns[0] # Get first connection from the list
muxed_conn = conn.muxed_conn
# Define a simple echo protocol
@ -219,7 +221,8 @@ async def test_global_default_muxer(global_default):
assert len(connections) > 0, "Connection not established"
# Get the first connection
conns = list(connections.values())[0]
conn = conns[0] # Get first connection from the list
muxed_conn = conn.muxed_conn
# Define a simple echo protocol

View File

@ -669,8 +669,8 @@ async def swarm_conn_pair_factory(
async with swarm_pair_factory(
security_protocol=security_protocol, muxer_opt=muxer_opt
) as swarms:
conn_0 = swarms[0].connections[swarms[1].get_peer_id()][0]
conn_1 = swarms[1].connections[swarms[0].get_peer_id()][0]
yield cast(SwarmConn, conn_0), cast(SwarmConn, conn_1)