157 Commits

Author SHA1 Message Date
58b33ba2e8 Merge branch 'main' into dependency-chore 2025-07-06 21:21:27 -07:00
0679efb299 Merge pull request #648 from lla-dane/feat/match-peerstore
WIP: Matching `py-libp2p <-> go-libp2p` PeerStore Implementation
2025-07-06 21:20:00 -07:00
b21591f8d5 remove redundant code 2025-07-06 14:45:42 +05:30
d1c31483bd Implemented addr_stream in the peerstore 2025-07-06 14:45:42 +05:30
51c08de1bc test added: clear protocol data 2025-07-06 14:45:42 +05:30
faeacf686a fix typos 2025-07-06 14:45:42 +05:30
9943697054 Added docstrings 2025-07-06 14:45:42 +05:30
ff966bbfa0 Metadata: added test 2025-07-06 14:45:42 +05:30
1b025e552c Key-Book: added tests 2025-07-06 14:45:42 +05:30
4e53327079 Metrics: added tests 2025-07-06 14:45:42 +05:30
3d369bc142 Proto-Book: added tests 2025-07-06 14:45:42 +05:30
5de458482c refactor after rebase 2025-07-06 14:45:42 +05:30
f3d8cbf968 feat: Matching go-libp2p PeerStore implementation 2025-07-06 14:45:42 +05:30
e6f96d32e2 Merge pull request #640 from kaneki003/main
Identifying & resolving race conditions in YamuxStream
2025-07-05 14:49:35 -07:00
51313a5909 Merge branch 'main' into main 2025-07-05 14:39:31 -07:00
8d2b889605 Merge pull request #708 from lla-dane/todo/bounded-nursery
Added tests for identify push concurrency cap under high peer load
2025-07-05 14:28:31 -07:00
bfe3dee781 updated newsfragment 2025-07-04 17:32:48 +05:30
a7d122a0f9 added extra tests for identify push for concurrency cap 2025-07-04 17:28:44 +05:30
8bfd4bde94 made concurrency limit configurable 2025-07-04 16:56:35 +05:30
383d7cb722 added tests 2025-07-04 16:56:20 +05:30
a89ba8ef81 added newsfragment 2025-07-04 16:55:56 +05:30
31b6a6f237 todo/bounded nursery in identify-push 2025-07-04 16:55:55 +05:30
5ac4fc1aba separated tests for better understanding 2025-07-03 22:20:35 +05:30
00ba846f7b Merge branch 'main' into dependency-chore 2025-07-03 21:02:31 +05:30
f96fe0c1b6 Merge branch 'main' into main 2025-07-03 01:43:12 -07:00
3403689495 Merge pull request #626 from sukhman-sukh/limit_concurrency
Limit concurrency to push identify message to peers
2025-07-03 01:27:23 -07:00
2ee23fdec1 Fix ci 2025-07-03 11:12:05 +05:30
96c41773ea Add newsfragment 2025-07-03 10:45:15 +05:30
83b42b7850 Merge branch 'main' into limit_concurrency 2025-07-02 10:22:21 -07:00
5a95212697 Merge branch 'main' into main 2025-07-02 10:22:01 -07:00
007527ef75 Merge branch 'main' into dependency-chore 2025-07-02 10:17:57 -07:00
975ea1bd8e Merge pull request #696 from lla-dane/fix/negotiate_timeout
fix: added negotiate timeout to MuxerMultistream
2025-07-02 10:11:36 -07:00
88db4ceb21 Fix lint 2025-07-02 13:26:52 +05:30
ad0b5505ba make limit configurable in push_identify_to_peers 2025-07-02 13:12:30 +05:30
572c6915f6 added tests for negotiate/response timeout 2025-07-01 23:27:19 +05:30
98438916ad Merge branch 'main' into dependency-chore 2025-07-01 09:36:09 -07:00
966cef58de fix: update dependencies to latest compatible versions
Signed-off-by: varun-r-mallya <varunrmallya@gmail.com>
2025-07-01 21:05:45 +05:30
01319638cd rebase with latest commit 2025-07-01 18:37:46 +05:30
a7eb9b5fbd negotiate timeout configurable in application code 2025-07-01 18:35:17 +05:30
fee4208d89 fix docstrings 2025-07-01 18:33:26 +05:30
4df454ebdc fix docstrings 2025-07-01 18:33:26 +05:30
715e528a56 DEFAULT_NEGOTIATE_TIMEOUT configurable 2025-07-01 18:33:26 +05:30
d0e73f5438 Updated newsfragment 2025-07-01 18:33:26 +05:30
8753024add Updated the timeout wrapper for read/write operations 2025-07-01 18:33:26 +05:30
621ea321ab Set default-negotiate-timeout = 5 sec 2025-07-01 18:33:26 +05:30
6d8f695778 updated multiselect.py and newsfragment 2025-07-01 18:33:26 +05:30
ba231dab79 added newsfragment 2025-07-01 18:33:26 +05:30
83d11db852 fix: added negotiate timeout to MuxerMultistream 2025-07-01 18:33:26 +05:30
cbe2b4bd99 Merge branch 'main' into limit_concurrency 2025-06-30 07:47:12 -07:00
0038ef99d4 Merge branch 'main' into main 2025-06-30 07:43:32 -07:00
4c739f6259 Merge pull request #707 from guha-rahul/add_degree
feat: adds degree to connect_some
2025-06-30 07:33:48 -07:00
5306bdc8cb add newsfragment 2025-06-30 16:10:21 +05:30
647034221a add edge cases 2025-06-30 16:07:55 +05:30
ee2c979e71 Merge branch 'main' into add_degree 2025-06-29 22:19:43 -07:00
ad87e50eb7 move concurrency_limit to identify_push
Signed-off-by: sukhman <sukhmansinghsaluja@gmail.com>
2025-06-30 10:13:02 +05:30
92c9ba7e46 Merge pull request #689 from guha-rahul/write_msg_pubsub
feat: Implement WriteMsg method for efficient RPC message writing
2025-06-29 14:30:26 -07:00
ac51d87046 Merge branch 'main' into write_msg_pubsub 2025-06-29 10:12:16 -07:00
520e555b82 Merge pull request #649 from sumanjeet0012/feature/mDNS
feat: Implement Multicast DNS
2025-06-29 09:55:50 -07:00
6be05639e4 Merge branch 'main' into feature/mDNS 2025-06-29 09:43:03 -07:00
e8e0cf74d1 docs: add mDNS discovery option to new_host function docs 2025-06-29 16:38:52 +05:30
211e951678 fix: improve async validator handling in Pubsub class (#705)
Signed-off-by: varun-r-mallya <varunrmallya@gmail.com>
2025-06-29 12:32:00 +02:00
ef16f3c993 fix: accept new streams for both DATA and WINDOW_UPDATE frames with the SYN flag (#702)
* fix: accept new streams for both DATA and WINDOW_UPDATE frames with the SYN flag

* doc: newsfragment

---------

Co-authored-by: Manu Sheel Gupta <manusheel.edu@gmail.com>
2025-06-29 10:50:17 +02:00
2201d9e8d2 update link 2025-06-27 13:53:06 +05:30
7ed33e9a55 smol fix, adds degree 2025-06-27 04:41:52 +05:30
4eff928a6d fix: update logging messages 2025-06-27 00:09:17 +05:30
644fc77b6c Merge branch 'main' into limit_concurrency 2025-06-26 07:46:35 -07:00
4947578139 add newsfragment 2025-06-26 19:39:41 +05:30
460f502bb9 Merge branch 'main' into write_msg_pubsub 2025-06-26 19:35:11 +05:30
6a92fa26eb Merge pull request #704 from sukhman-sukh/piggyback-gossipsub
Remove piggyback TODO from gossipsub
2025-06-26 06:53:32 -07:00
e8a484f8e4 Merge branch 'main' into piggyback-gossipsub 2025-06-26 06:44:03 -07:00
74134e9b63 Remove piggyback TODO from gossipsub
Signed-off-by: sukhman <sukhmansinghsaluja@gmail.com>
2025-06-26 15:31:16 +05:30
983a4a001c Fix pre-commit checks 2025-06-26 15:26:09 +05:30
1fb3f9c72b Fix failing ci
Signed-off-by: sukhman <sukhmansinghsaluja@gmail.com>
2025-06-26 15:26:09 +05:30
8afb99c5b1 add test for bounded concurrency
Signed-off-by: sukhman <sukhmansinghsaluja@gmail.com>
2025-06-26 15:26:09 +05:30
ae16909f79 Test: Connected Peers Receive Pushes 2025-06-26 15:26:09 +05:30
b7d62c0f85 Limit concurrency to push identify message to peers
Signed-off-by: sukhman <sukhmansinghsaluja@gmail.com>
2025-06-26 15:26:09 +05:30
2277d822f1 Merge pull request #690 from mystical-prog/px-backoff
Peer Exchange and Back Off
2025-06-25 22:37:33 -07:00
b736cfa333 Merge branch 'main' into px-backoff 2025-06-25 22:28:41 -07:00
73ebd27c59 added isolated_topics_test and stress_test 2025-06-26 01:28:21 +05:30
c914818f48 fix: enhanced logging to show dependencies logs 2025-06-26 01:15:10 +05:30
5262566f6a fix: check for mDNS attribute before accessing it in BasicHost 2025-06-26 00:36:59 +05:30
f274d20715 feat: attached mdns instance with host 2025-06-25 23:44:32 +05:30
f12ca4e9c1 Merge branch 'main' into write_msg_pubsub 2025-06-24 14:42:48 -07:00
4d8afa6448 Merge branch 'main' into main 2025-06-24 14:34:35 -07:00
5a3adad093 Merge pull request #631 from Winter-Soren/feat/619-store-pubkey-peerid-peerstore
feat: store pubkey and peerid in peerstore
2025-06-24 14:20:22 -07:00
e50f9fc8e5 Merge branch 'libp2p:main' into main 2025-06-24 18:55:10 +05:30
724375e1fa updated doc-string and reverted mplex-changes 2025-06-24 18:05:15 +05:30
28d0e5759a removed redundant function and added try catch block 2025-06-24 14:25:47 +05:30
9adf9aa499 refactor: improve test structure in mDNS tests 2025-06-24 14:25:47 +05:30
dcc8bbb619 feat: add unit and integration tests for mDNS. 2025-06-24 14:25:46 +05:30
b258ff3ea2 fix: correct logger name typo and update protocol in peer info extraction 2025-06-24 14:25:46 +05:30
31b694aa29 fix: ensure newline at end of file in newsfragments/649.feature.rst 2025-06-24 14:25:45 +05:30
293087bd06 feat: added newsfragment for mDNS 2025-06-24 14:25:45 +05:30
35248f8167 fix: ensure newline at end of file in libp2p.discovery.events and libp2p.discovery.mdns documentation 2025-06-24 14:25:44 +05:30
e018af09ae feat: add documentation for libp2p.discovery.events and libp2p.discovery.mdns packages 2025-06-24 14:25:44 +05:30
7135e6cd4d fix: ensure newline at end of file in libp2p.discovery documentation 2025-06-24 14:25:43 +05:30
77a9788a69 feat: add initial documentation for libp2p.discovery package 2025-06-24 14:25:43 +05:30
555e389109 fix: correct heading formatting in mDNS example documentation 2025-06-24 14:25:42 +05:30
8f0762f95c fix: remove unnecessary blank lines in mDNS example documentation 2025-06-24 14:25:42 +05:30
67bcad1674 Refactored mDNS example and added script for example 2025-06-24 14:25:41 +05:30
3b53120092 fixed some errors during rebase 2025-06-24 14:25:40 +05:30
89ed86d903 feat: add logging for mDNS peer discovery and update dependencies 2025-06-24 14:25:40 +05:30
387f4879d1 fix lint 2025-06-24 14:25:39 +05:30
e2f95f4df3 feat: emitted event from demo file 2025-06-24 14:25:39 +05:30
f43e7e367a refactored code 2025-06-24 14:25:38 +05:30
3262749db7 added event emitter 2025-06-24 14:25:38 +05:30
cd7eaba4a4 feat: implement mDNS discovery with PeerListener 2025-06-24 14:25:37 +05:30
6add1cb685 feat: implement broadcasting in mdns 2025-06-24 14:25:37 +05:30
742bc7bca3 feat: add stringGen function to generate random strings 2025-06-24 14:25:36 +05:30
cbd4f9b502 feat: init mDNS discovery module 2025-06-24 14:25:35 +05:30
4e2be87c73 Merge pull request #695 from LVivona/patch-1
chore(kad_dht): centralize shared values in common.py file
2025-06-23 08:55:21 -07:00
fbee0ba2ab added newsfragment 2025-06-23 01:00:46 +05:30
ea6eef6ed5 test px and backoff 2025-06-23 00:41:13 +05:30
fd818d9102 test: added tests to ensure handshake adds pubkey to existing peer ID without one; peerstore unchanged on ID mismatch 2025-06-22 16:01:02 +05:30
2c0a6c0adb Merge branch 'main' into feat/619-store-pubkey-peerid-peerstore 2025-06-22 15:26:51 +05:30
3a4338e1df chore: eliminate self.protocol_id attribute in PeerRouting 2025-06-22 00:25:48 -04:00
feb8db6655 style: enforce multiline import style 2025-06-22 00:15:44 -04:00
ebdde7b5aa style: enforce multiline import style for consistency 2025-06-21 15:08:11 -04:00
24e73207d2 fixed failing demo
Co-authored-by: Khwahish Patel <khwahish.p1@ahduni.edu.in>
2025-06-21 18:54:17 +05:30
303bf3060a implemented peer exchange 2025-06-21 18:54:17 +05:30
788b4cf51a added complete back_off implementation 2025-06-21 18:54:17 +05:30
b78468ca32 added params for peer exchange and back off 2025-06-21 18:54:17 +05:30
c48618825d updated protobuf for prune message 2025-06-21 18:54:16 +05:30
d7cdae8a0f integrated n == -1 case in read() 2025-06-21 17:51:27 +05:30
df17788ec3 resolving build failures 2025-06-21 14:10:09 +05:30
209deffc8a resolved recv_window updates, added support for read_EOF 2025-06-21 13:40:12 +05:30
0a7e13f0ed Merge branch 'libp2p:main' into main 2025-06-21 13:39:38 +05:30
811c217ee6 style: isort fix ordering of imports 2025-06-20 16:01:11 -04:00
d03ca45bd8 style: fix flake8 linting errors 2025-06-20 11:57:50 -04:00
8bddbfb9bb Merge branch 'main' into write_msg_pubsub 2025-06-20 07:29:56 -07:00
79ac01308c remove: unused custom_types TProtocol import 2025-06-19 21:38:02 -04:00
dfc0bb4ec8 chore(kad_dht): centralize shared values in common.py 2025-06-19 21:24:39 -04:00
09b4c846a4 feat: add support for sparse connect (#680)
* init

* add newsfragment

* fix
2025-06-19 06:18:45 -06:00
66bd027161 Feat/587-circuit-relay (#611)
* feat: implemented setup of circuit relay and test cases

* chore: remove test files to be rewritten

* added 1 test suite for protocol

* added 1 test suite for discovery

* fixed protocol timeouts and message types to handle reservations and stream operations.

* Resolved merge conflict in libp2p/tools/utils.py by combining timeout approach with retry mechanism

* fix: linting issues

* docs: updated documentation with circuit-relay

* chore: added enums, improved typing, security and examples

* fix: created proper __init__ file to ensure importability

* fix: replace transport_opt with listen_addrs in examples, fixed typing and improved code

* fix type checking issues across relay module and test suite

* regenerated circuit_pb2 file protobuf version 3

* fixed circuit relay example and moved imports to top in test_security_multistream

* chore: moved imports to the top

* chore: fixed linting of test_circuit_v2_transport.py

---------

Co-authored-by: Manu Sheel Gupta <manusheel.edu@gmail.com>
2025-06-18 15:39:39 -06:00
8c16b316ac added newsfragment and tests that would fail without these changes but pass with them 2025-06-18 23:32:48 +05:30
d4ed859b19 Merge branch 'main' into feat/619-store-pubkey-peerid-peerstore 2025-06-18 22:45:33 +05:30
79094d70d3 Optimize pubsub publishing to support multiple topics in single RPC message (#686)
* init

* add newsfragment

* lint

---------

Co-authored-by: Manu Sheel Gupta <manusheel.edu@gmail.com>
2025-06-17 15:23:03 -06:00
2ed2587fc9 fix: removed dummy ID(b) from upgrade_security for inbound connections (#681)
* fix: removed dummy ID(b) from upgrade_security for inbound connections

* added newsfragment

* updated newsfragment
2025-06-17 06:25:50 -06:00
d61bca78ab Kademlia DHT implementation in py-libp2p (#579)
* initialise the module

* added content routing

* added routing module

* added peer routing

* added value store

* added utilities functions

* added main kademlia file

* fixed create_key_from_binary function

* example to test kademlia dht

* added protocol ID and enhanced logging for peer store size in provider and consumer nodes

* refactor: specify stream type in handle_stream method and add peer in routing table

* removed content routing

* added default value of count for finding closest peers

* added functions to find close peers

* refactor: remove content routing and enhance peer discovery

* added put value function

* added get value function

* fix: improve logging and handle key encoding in get_value method

* refactor: remove ContentRouting import from __init__.py

* refactor: improved basic kademlia example

* added protobuf files

* replaced json with protobuf

* refactor: enhance peer discovery and routing logic in KadDHT

* refactor: enhance Kademlia routing table to use PeerInfo objects and improve peer management

* refactor: enhance peer addition logic to utilize PeerInfo objects in routing table

* feat: implement content provider functionality in Kademlia DHT

* refactor: update value store to use datetime for validity management

* refactor: update RoutingTable initialization to include host reference

* refactor: enhance KBucket and RoutingTable for improved peer management and functionality

* refactor: streamline peer discovery and value storage methods in KadDHT

* refactor: update KadDHT and related classes for async peer management and enhanced value storage

* refactor: enhance ProviderStore initialization and improve peer routing integration

* test: add tests for Kademlia DHT functionality

* fix linting issues

* pydocstyle issues fixed

* CICD pipeline issues solved

* fix: update docstring format for find_peer method

* refactor: improve logging and remove unused code in DHT implementation

* refactor: clean up logging and remove unused imports in DHT and test files

* Refactor logging setup and improve DHT stream handling with varint length prefixes

* Update bootstrap peer handling in basic_dht example and refactor peer routing to accept string addresses

* Enhance peer querying in Kademlia DHT by implementing parallel queries using Trio.

* Enhance peer querying by adding deduplication checks

* Refactor DHT implementation to use varint for length prefixes and enhance logging for better traceability

* Add base58 encoding for value storage and enhance logging in basic_dht example

* Refactor Kademlia DHT to support server/client modes

* Added unit tests

* Refactor documentation to fix some warnings

* Add unit tests and remove outdated tests

* Fixed precommit errors

* Refactor error handling test to raise StringParseError for invalid bootstrap addresses

* Add libp2p.kad_dht to the list of subpackages in documentation

* Fix expiration and republish checks to use inclusive comparison

* Add __init__.py file to libp2p.kad_dht.pb package

* Refactor get value and put value to run in parallel with query timeout

* Refactor provider message handling to use parallel processing with timeout

* Add methods for provider store in KadDHT class

* Refactor KadDHT and ProviderStore methods to improve type hints and enhance parallel processing

* Add documentation for libp2p.kad_dht.pb module.

* Update documentation for libp2p.kad_dht package to include subpackages and correct formatting

* Fix formatting in documentation for libp2p.kad_dht package by correcting the subpackage reference

* Fix header formatting in libp2p.kad_dht.pb documentation

* Change log level from info to debug for various logging statements.

* fix CICD issues (post revamp)

* fixed value store unit test

* Refactored kademlia example

* Refactor Kademlia example: enhance logging, improve bootstrap node connection, and streamline server address handling

* removed bootstrap module

* Refactor Kademlia DHT example and core modules: enhance logging, remove unused code, and improve peer handling

* Added docs of kad dht example

* Update server address log file path to use the script's directory

* Refactor: Introduce DHTMode enum for clearer mode management

* moved xor_distance function to utils.py

* Enhance logging in ValueStore and KadDHT: include decoded value in debug logs and update parameter description for validity

* Add handling for closest peers in GET_VALUE response when value is not found

* Handled failure scenario for PUT_VALUE

* Remove kademlia demo from project scripts and contributing documentation

* spelling and logging

---------

Co-authored-by: pacrob <5199899+pacrob@users.noreply.github.com>
2025-06-16 14:46:40 -06:00
a3492cf82f Merge branch 'main' into feat/619-store-pubkey-peerid-peerstore 2025-06-16 22:06:47 +05:30
733ef86e62 refactor(gossipsub.py): Add helper function to fanout and gossipsub (#678)
* fanout and gossipsub helper

* add newsfragment

* remove duplicate fanout check
2025-06-16 07:23:31 -06:00
0caf8647c5 Merge pull request #684 from guha-rahul/use_decapsulate
fix: replace complex logic with decapsulate
2025-06-16 04:37:14 -07:00
c33ab32c33 init 2025-06-16 02:50:40 +05:30
193e8f9cb8 add newsfragment 2025-06-15 19:58:52 +05:30
10b39dad1c replace complex logic with decapsulate 2025-06-15 19:51:55 +05:30
2248108b54 fixed types and improved code 2025-06-11 23:51:57 +05:30
a762df6042 Merge branch 'main' into feat/619-store-pubkey-peerid-peerstore 2025-06-11 23:34:12 +05:30
01b9e89e83 Merge branch 'main' into main 2025-06-11 19:39:06 +05:30
d2825af045 fix(examples/echo/echo.py): Add max message length to stream.read (#671)
Signed-off-by: varun-r-mallya <varunrmallya@gmail.com>
2025-06-10 11:44:07 -06:00
0f483dd744 Bump version: 0.2.7 → 0.2.8 2025-06-10 11:31:46 -06:00
0197b515c1 Compile release notes for v0.2.8 2025-06-10 11:31:25 -06:00
f27f4ddd85 remove references to removed setup.py (#674) 2025-06-10 11:24:34 -06:00
7e377ede36 Merge branch 'main' into feat/619-store-pubkey-peerid-peerstore 2025-06-10 21:12:28 +05:30
d733b78dba Merge branch 'libp2p:main' into main 2025-06-10 20:17:55 +05:30
e397ce25a6 Updated Yamux impl., added tests for yamux and mplex 2025-06-10 20:12:19 +05:30
630aac703d add make pr (#672) 2025-06-10 08:34:22 -06:00
30b5811d39 feat: store pubkey and peerid in peerstore 2025-05-29 20:07:48 +05:30
134 changed files with 14490 additions and 516 deletions

Makefile

@ -14,6 +14,7 @@ help:
 	@echo "package-test - build package and install it in a venv for manual testing"
 	@echo "notes - consume towncrier newsfragments and update release notes in docs - requires bump to be set"
 	@echo "release - package and upload a release (does not run notes target) - requires bump to be set"
+	@echo "pr - run clean, fix, lint, typecheck, and test i.e basically everything you need to do before creating a PR"

 clean-build:
 	rm -fr build/
@ -47,6 +48,8 @@ typecheck:
 test:
 	python -m pytest tests -n auto

+pr: clean fix lint typecheck test

 # protobufs management
 PB = libp2p/crypto/pb/crypto.proto \
@ -55,7 +58,10 @@ PB = libp2p/crypto/pb/crypto.proto \
 	libp2p/security/secio/pb/spipe.proto \
 	libp2p/security/noise/pb/noise.proto \
 	libp2p/identity/identify/pb/identify.proto \
-	libp2p/host/autonat/pb/autonat.proto
+	libp2p/host/autonat/pb/autonat.proto \
+	libp2p/relay/circuit_v2/pb/circuit.proto \
+	libp2p/kad_dht/pb/kademlia.proto

 PY = $(PB:.proto=_pb2.py)
 PYI = $(PB:.proto=_pb2.pyi)
@ -87,7 +93,7 @@ validate-newsfragments:
 check-docs: build-docs validate-newsfragments

 build-docs:
-	sphinx-apidoc -o docs/ . setup.py "*conftest*" tests/
+	sphinx-apidoc -o docs/ . "*conftest*" tests/
 	$(MAKE) -C docs clean
 	$(MAKE) -C docs html
 	$(MAKE) -C docs doctest
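The new ``pr`` target simply chains the existing targets, so the whole pre-PR pipeline runs with one command:

.. code-block:: console

    $ make pr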

docs/examples.circuit_relay.rst

@ -0,0 +1,499 @@
Circuit Relay v2 Example
========================

This example demonstrates how to use Circuit Relay v2 in py-libp2p. It includes three components:

1. A relay node that provides relay services
2. A destination node that accepts relayed connections
3. A source node that connects to the destination through the relay

Prerequisites
-------------

First, ensure you have py-libp2p installed:

.. code-block:: console

    $ python -m pip install libp2p
    Collecting libp2p
    ...
    Successfully installed libp2p-x.x.x

Relay Node
----------

Create a file named ``relay_node.py`` with the following content:

.. code-block:: python

    import trio
    import logging
    import multiaddr
    import traceback

    from libp2p import new_host
    from libp2p.relay.circuit_v2.protocol import CircuitV2Protocol
    from libp2p.relay.circuit_v2.transport import CircuitV2Transport
    from libp2p.relay.circuit_v2.config import RelayConfig
    from libp2p.tools.async_service import background_trio_service

    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("relay_node")

    async def run_relay():
        listen_addr = multiaddr.Multiaddr("/ip4/0.0.0.0/tcp/9000")
        host = new_host()

        config = RelayConfig(
            enable_hop=True,  # Act as a relay
            enable_stop=True,  # Accept relayed connections
            enable_client=False,  # Don't use other relays
            max_circuit_duration=3600,  # 1 hour
            max_circuit_bytes=1024 * 1024 * 10,  # 10MB
        )

        # Initialize the relay protocol with allow_hop=True to act as a relay
        protocol = CircuitV2Protocol(host, limits=config.limits, allow_hop=True)
        print(f"Created relay protocol with hop enabled: {protocol.allow_hop}")

        # Start the protocol service
        async with host.run(listen_addrs=[listen_addr]):
            peer_id = host.get_id()
            print("\n" + "="*50)
            print(f"Relay node started with ID: {peer_id}")
            print(f"Relay node multiaddr: /ip4/127.0.0.1/tcp/9000/p2p/{peer_id}")
            print("="*50 + "\n")
            print(f"Listening on: {host.get_addrs()}")

            try:
                async with background_trio_service(protocol):
                    print("Protocol service started")
                    transport = CircuitV2Transport(host, protocol, config)
                    print("Relay service started successfully")
                    print(f"Relay limits: {protocol.limits}")
                    while True:
                        await trio.sleep(10)
                        print("Relay node still running...")
                        print(f"Active connections: {len(host.get_network().connections)}")
            except Exception as e:
                print(f"Error in relay service: {e}")
                traceback.print_exc()

    if __name__ == "__main__":
        try:
            trio.run(run_relay)
        except Exception as e:
            print(f"Error running relay: {e}")
            traceback.print_exc()
Destination Node
----------------

Create a file named ``destination_node.py`` with the following content:

.. code-block:: python

    import trio
    import logging
    import multiaddr
    import traceback
    import sys

    from libp2p import new_host
    from libp2p.relay.circuit_v2.protocol import CircuitV2Protocol
    from libp2p.relay.circuit_v2.transport import CircuitV2Transport
    from libp2p.relay.circuit_v2.config import RelayConfig
    from libp2p.peer.peerinfo import info_from_p2p_addr
    from libp2p.tools.async_service import background_trio_service

    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("destination_node")

    async def handle_echo_stream(stream):
        """Handle incoming stream by echoing received data."""
        try:
            print(f"New echo stream from: {stream.get_protocol()}")
            while True:
                data = await stream.read(1024)
                if not data:
                    print("Stream closed by remote")
                    break
                message = data.decode('utf-8')
                print(f"Received: {message}")
                response = f"Echo: {message}".encode('utf-8')
                await stream.write(response)
                print(f"Sent response: Echo: {message}")
        except Exception as e:
            print(f"Error handling stream: {e}")
            traceback.print_exc()
        finally:
            await stream.close()
            print("Stream closed")

    async def run_destination(relay_peer_id=None):
        """
        Run a simple destination node that accepts connections.

        This is a simplified version that doesn't use the relay functionality.
        """
        listen_addr = multiaddr.Multiaddr("/ip4/0.0.0.0/tcp/9001")
        host = new_host()

        # Configure as a relay receiver (stop)
        config = RelayConfig(
            enable_stop=True,  # Accept relayed connections
            enable_client=True,  # Use relays for outbound connections
            max_circuit_duration=3600,  # 1 hour
            max_circuit_bytes=1024 * 1024 * 10,  # 10MB
        )

        # Initialize the relay protocol
        protocol = CircuitV2Protocol(host, limits=config.limits, allow_hop=False)

        async with host.run(listen_addrs=[listen_addr]):
            # Print host information
            dest_peer_id = host.get_id()
            print("\n" + "="*50)
            print(f"Destination node started with ID: {dest_peer_id}")
            print(f"Use this ID in the source node: {dest_peer_id}")
            print("="*50 + "\n")
            print(f"Listening on: {host.get_addrs()}")

            # Set stream handler for the echo protocol
            host.set_stream_handler("/echo/1.0.0", handle_echo_stream)
            print("Registered echo protocol handler")

            # Start the protocol service in the background
            async with background_trio_service(protocol):
                print("Protocol service started")

                # Create and register the transport
                transport = CircuitV2Transport(host, protocol, config)
                print("Transport created")

                # Create a listener for relayed connections
                listener = transport.create_listener(handle_echo_stream)
                print("Created relay listener")

                # Start listening for relayed connections
                async with trio.open_nursery() as nursery:
                    await listener.listen("/p2p-circuit", nursery)
                    print("Destination node ready to accept relayed connections")

                    if not relay_peer_id:
                        print("No relay peer ID provided. Please enter the relay's peer ID:")
                        print("Waiting for relay peer ID input...")
                        while True:
                            if sys.stdin.isatty():  # Only try to read from stdin if it's a terminal
                                try:
                                    relay_peer_id = input("Enter relay peer ID: ").strip()
                                    if relay_peer_id:
                                        break
                                except EOFError:
                                    await trio.sleep(5)
                            else:
                                print("No terminal detected. Waiting for relay peer ID as command line argument.")
                                await trio.sleep(10)
                                continue

                    # Connect to the relay node with the provided relay peer ID
                    relay_addr_str = f"/ip4/127.0.0.1/tcp/9000/p2p/{relay_peer_id}"
                    print(f"Connecting to relay at {relay_addr_str}")
                    try:
                        # Convert string address to multiaddr, then to peer info
                        relay_maddr = multiaddr.Multiaddr(relay_addr_str)
                        relay_peer_info = info_from_p2p_addr(relay_maddr)
                        await host.connect(relay_peer_info)
                        print("Connected to relay successfully")

                        # Add the relay to the transport's discovery
                        transport.discovery._add_relay(relay_peer_info.peer_id)
                        print(f"Added relay {relay_peer_info.peer_id} to discovery")

                        # Keep the node running
                        while True:
                            await trio.sleep(10)
                            print("Destination node still running...")
                    except Exception as e:
                        print(f"Failed to connect to relay: {e}")
                        traceback.print_exc()

    if __name__ == "__main__":
        print("Starting destination node...")
        relay_id = None
        if len(sys.argv) > 1:
            relay_id = sys.argv[1]
            print(f"Using provided relay ID: {relay_id}")
        trio.run(run_destination, relay_id)
Source Node
-----------

Create a file named ``source_node.py`` with the following content:

.. code-block:: python

    import trio
    import logging
    import multiaddr
    import traceback
    import sys

    from libp2p import new_host
    from libp2p.peer.peerinfo import PeerInfo
    from libp2p.peer.id import ID
    from libp2p.relay.circuit_v2.protocol import CircuitV2Protocol
    from libp2p.relay.circuit_v2.transport import CircuitV2Transport
    from libp2p.relay.circuit_v2.config import RelayConfig
    from libp2p.peer.peerinfo import info_from_p2p_addr
    from libp2p.tools.async_service import background_trio_service
    from libp2p.relay.circuit_v2.discovery import RelayInfo

    # Configure logging
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("source_node")

    async def run_source(relay_peer_id=None, destination_peer_id=None):
        # Create a libp2p host
        listen_addr = multiaddr.Multiaddr("/ip4/0.0.0.0/tcp/9002")
        host = new_host()

        # Configure as a relay client
        config = RelayConfig(
            enable_client=True,  # Use relays for outbound connections
            max_circuit_duration=3600,  # 1 hour
            max_circuit_bytes=1024 * 1024 * 10,  # 10MB
        )

        # Initialize the relay protocol
        protocol = CircuitV2Protocol(host, limits=config.limits, allow_hop=False)

        # Start the protocol service
        async with host.run(listen_addrs=[listen_addr]):
            # Print host information
            print(f"Source node started with ID: {host.get_id()}")
            print(f"Listening on: {host.get_addrs()}")

            # Start the protocol service in the background
            async with background_trio_service(protocol):
                print("Protocol service started")

                # Create and register the transport
                transport = CircuitV2Transport(host, protocol, config)

                # Get relay peer ID if not provided
                if not relay_peer_id:
                    print("No relay peer ID provided. Please enter the relay's peer ID:")
                    while True:
                        if sys.stdin.isatty():  # Only try to read from stdin if it's a terminal
                            try:
                                relay_peer_id = input("Enter relay peer ID: ").strip()
                                if relay_peer_id:
                                    break
                            except EOFError:
                                await trio.sleep(5)
                        else:
                            print("No terminal detected. Waiting for relay peer ID as command line argument.")
                            await trio.sleep(10)
                            continue

                # Connect to the relay node with the provided relay peer ID
                relay_addr_str = f"/ip4/127.0.0.1/tcp/9000/p2p/{relay_peer_id}"
                print(f"Connecting to relay at {relay_addr_str}")
                try:
                    # Convert string address to multiaddr, then to peer info
                    relay_maddr = multiaddr.Multiaddr(relay_addr_str)
                    relay_peer_info = info_from_p2p_addr(relay_maddr)
                    await host.connect(relay_peer_info)
                    print("Connected to relay successfully")

                    # Manually add the relay to the discovery service
                    relay_id = relay_peer_info.peer_id
                    now = trio.current_time()

                    # Create relay info and add it to discovery
                    relay_info = RelayInfo(
                        peer_id=relay_id,
                        discovered_at=now,
                        last_seen=now
                    )
                    transport.discovery._discovered_relays[relay_id] = relay_info
                    print(f"Added relay {relay_id} to discovery")

                    # Start relay discovery in the background
                    async with background_trio_service(transport.discovery):
                        print("Relay discovery started")

                        # Wait for relay discovery
                        await trio.sleep(5)
                        print("Relay discovery completed")

                        # Get destination peer ID if not provided
                        if not destination_peer_id:
                            print("No destination peer ID provided. Please enter the destination's peer ID:")
                            while True:
                                if sys.stdin.isatty():  # Only try to read from stdin if it's a terminal
                                    try:
                                        destination_peer_id = input("Enter destination peer ID: ").strip()
                                        if destination_peer_id:
                                            break
                                    except EOFError:
                                        await trio.sleep(5)
                                else:
                                    print("No terminal detected. Waiting for destination peer ID as command line argument.")
                                    await trio.sleep(10)
                                    continue

                        print(f"Attempting to connect to {destination_peer_id} via relay")

                        # Check if we have any discovered relays
                        discovered_relays = list(transport.discovery._discovered_relays.keys())
                        print(f"Discovered relays: {discovered_relays}")

                        try:
                            # Create a circuit relay multiaddr for the destination
                            dest_id = ID.from_base58(destination_peer_id)

                            # Create a circuit multiaddr that includes the relay
                            # Format: /ip4/127.0.0.1/tcp/9000/p2p/RELAY_ID/p2p-circuit/p2p/DEST_ID
                            circuit_addr = multiaddr.Multiaddr(f"{relay_addr_str}/p2p-circuit/p2p/{destination_peer_id}")
                            print(f"Created circuit address: {circuit_addr}")

                            # Dial using the circuit address
                            connection = await transport.dial(circuit_addr)
                            print("Connection established through relay!")

                            # Open a stream using the echo protocol
                            stream = await connection.new_stream("/echo/1.0.0")

                            # Send messages periodically
                            for i in range(5):
                                message = f"Hello from source, message {i+1}"
                                print(f"Sending: {message}")
                                await stream.write(message.encode('utf-8'))
                                response = await stream.read(1024)
                                print(f"Received: {response.decode('utf-8')}")
                                await trio.sleep(1)

                            # Close the stream
                            await stream.close()
                            print("Stream closed")
                        except Exception as e:
                            print(f"Error connecting through relay: {e}")
                            print("Detailed error:")
                            traceback.print_exc()

                        # Keep the node running for a while
                        await trio.sleep(30)
                        print("Source node shutting down")
                except Exception as e:
                    print(f"Error: {e}")
                    traceback.print_exc()

    if __name__ == "__main__":
        relay_id = None
        dest_id = None

        # Parse command line arguments if provided
        if len(sys.argv) > 1:
            relay_id = sys.argv[1]
            print(f"Using provided relay ID: {relay_id}")
        if len(sys.argv) > 2:
            dest_id = sys.argv[2]
            print(f"Using provided destination ID: {dest_id}")

        trio.run(run_source, relay_id, dest_id)
Running the Example
-------------------

1. First, start the relay node:

   .. code-block:: console

       $ python relay_node.py
       Created relay protocol with hop enabled: True
       ==================================================
       Relay node started with ID: QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx
       Relay node multiaddr: /ip4/127.0.0.1/tcp/9000/p2p/QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx
       ==================================================
       Listening on: [<Multiaddr /ip4/0.0.0.0/tcp/9000/p2p/QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx>]
       Protocol service started
       Relay service started successfully
       Relay limits: RelayLimits(duration=3600, data=10485760, max_circuit_conns=8, max_reservations=4)

   Note the relay node's peer ID (in this example: ``QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx``). You'll need this for the other nodes.

2. Next, start the destination node:

   .. code-block:: console

       $ python destination_node.py
       Starting destination node...
       ==================================================
       Destination node started with ID: QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s
       Use this ID in the source node: QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s
       ==================================================
       Listening on: [<Multiaddr /ip4/0.0.0.0/tcp/9001/p2p/QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s>]
       Registered echo protocol handler
       Protocol service started
       Transport created
       Created relay listener
       Destination node ready to accept relayed connections
       No relay peer ID provided. Please enter the relay's peer ID:
       Waiting for relay peer ID input...
       Enter relay peer ID: QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx
       Connecting to relay at /ip4/127.0.0.1/tcp/9000/p2p/QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx
       Connected to relay successfully
       Added relay QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx to discovery
       Destination node still running...

   Note the destination node's peer ID (in this example: ``QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s``). You'll need this for the source node.

3. Finally, start the source node:

   .. code-block:: console

       $ python source_node.py
       Source node started with ID: QmPyM56cgmFoHTgvMgGfDWRdVRQznmxCDDDg2dJ8ygVXj3
       Listening on: [<Multiaddr /ip4/0.0.0.0/tcp/9002/p2p/QmPyM56cgmFoHTgvMgGfDWRdVRQznmxCDDDg2dJ8ygVXj3>]
       Protocol service started
       No relay peer ID provided. Please enter the relay's peer ID:
       Enter relay peer ID: QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx
       Connecting to relay at /ip4/127.0.0.1/tcp/9000/p2p/QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx
       Connected to relay successfully
       Added relay QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx to discovery
       Relay discovery started
       Relay discovery completed
       No destination peer ID provided. Please enter the destination's peer ID:
       Enter destination peer ID: QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s
       Attempting to connect to QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s via relay
       Discovered relays: [<libp2p.peer.id.ID (QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx)>]
       Created circuit address: /ip4/127.0.0.1/tcp/9000/p2p/QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx/p2p-circuit/p2p/QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s

   At this point, the source node will establish a connection through the relay to the destination node and start sending messages.

4. Alternatively, you can provide the peer IDs as command-line arguments:

   .. code-block:: console

       # For the destination node (provide relay ID)
       $ python destination_node.py QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx

       # For the source node (provide both relay and destination IDs)
       $ python source_node.py QmaUigQJ9nJERa6GaZuyfaiX91QjYwoQJ46JS3k7ys7SLx QmPBr38KeQG2ibyL4fxq6yJWpfoVNCqJMHBdNyn1Qe4h5s

This example demonstrates how to use Circuit Relay v2 to establish connections between peers that cannot connect directly. The peer IDs are dynamically generated for each node, and the relay facilitates communication between the source and destination nodes.

docs/examples.kademlia.rst

@ -0,0 +1,124 @@
Kademlia DHT Demo
=================

This example demonstrates a Kademlia Distributed Hash Table (DHT) implementation with both value storage/retrieval and content provider advertisement/discovery functionality.

.. code-block:: console

    $ python -m pip install libp2p
    Collecting libp2p
    ...
    Successfully installed libp2p-x.x.x
    $ cd examples/kademlia
    $ python kademlia.py --mode server
    2025-06-13 19:51:25,424 - kademlia-example - INFO - Running in server mode on port 0
    2025-06-13 19:51:25,426 - kademlia-example - INFO - Connected to bootstrap nodes: []
    2025-06-13 19:51:25,426 - kademlia-example - INFO - To connect to this node, use: --bootstrap /ip4/127.0.0.1/tcp/28910/p2p/16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef
    2025-06-13 19:51:25,426 - kademlia-example - INFO - Saved server address to log: /ip4/127.0.0.1/tcp/28910/p2p/16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef
    2025-06-13 19:51:25,427 - kademlia-example - INFO - DHT service started in SERVER mode
    2025-06-13 19:51:25,427 - kademlia-example - INFO - Stored value 'Hello message from Sumanjeet' with key: FVDjasarSFDoLPMdgnp1dHSbW2ZAfN8NU2zNbCQeczgP
    2025-06-13 19:51:25,427 - kademlia-example - INFO - Successfully advertised as server for content: 361f2ed1183bca491b8aec11f0b9e5c06724759b0f7480ae7fb4894901993bc8

Copy the line that starts with ``--bootstrap``, open a new terminal in the same folder and run the client:

.. code-block:: console

    $ python kademlia.py --mode client --bootstrap /ip4/127.0.0.1/tcp/28910/p2p/16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef
    2025-06-13 19:51:37,022 - kademlia-example - INFO - Running in client mode on port 0
    2025-06-13 19:51:37,026 - kademlia-example - INFO - Connected to bootstrap nodes: [<libp2p.peer.id.ID (16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef)>]
    2025-06-13 19:51:37,027 - kademlia-example - INFO - DHT service started in CLIENT mode
    2025-06-13 19:51:37,027 - kademlia-example - INFO - Looking up key: FVDjasarSFDoLPMdgnp1dHSbW2ZAfN8NU2zNbCQeczgP
    2025-06-13 19:51:37,031 - kademlia-example - INFO - Retrieved value: Hello message from Sumanjeet
    2025-06-13 19:51:37,031 - kademlia-example - INFO - Looking for servers of content: 361f2ed1183bca491b8aec11f0b9e5c06724759b0f7480ae7fb4894901993bc8
    2025-06-13 19:51:37,035 - kademlia-example - INFO - Found 1 servers for content: ['16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef']

Alternatively, if you run the server first, the client can automatically extract the bootstrap address from the server log file:

.. code-block:: console

    $ python kademlia.py --mode client
    2025-06-13 19:51:37,022 - kademlia-example - INFO - Running in client mode on port 0
    2025-06-13 19:51:37,026 - kademlia-example - INFO - Connected to bootstrap nodes: [<libp2p.peer.id.ID (16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef)>]
    2025-06-13 19:51:37,027 - kademlia-example - INFO - DHT service started in CLIENT mode
    2025-06-13 19:51:37,027 - kademlia-example - INFO - Looking up key: FVDjasarSFDoLPMdgnp1dHSbW2ZAfN8NU2zNbCQeczgP
    2025-06-13 19:51:37,031 - kademlia-example - INFO - Retrieved value: Hello message from Sumanjeet
    2025-06-13 19:51:37,031 - kademlia-example - INFO - Looking for servers of content: 361f2ed1183bca491b8aec11f0b9e5c06724759b0f7480ae7fb4894901993bc8
    2025-06-13 19:51:37,035 - kademlia-example - INFO - Found 1 servers for content: ['16Uiu2HAm7EsNv5vvjPAehGAVfChjYjD63ZHyWogQRdzntSbAg9ef']

The demo showcases key DHT operations (a minimal API sketch follows the list):

- **Value Storage & Retrieval**: The server stores a value, and the client retrieves it
- **Content Provider Discovery**: The server advertises content, and the client finds providers
- **Peer Discovery**: Automatic bootstrap and peer routing using the Kademlia algorithm
- **Network Resilience**: Distributed storage across multiple nodes (when available)
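The value flow above maps onto the DHT's put/get API. A minimal sketch; the ``put_value``/``get_value`` coroutine names are assumptions drawn from the commit history in this changeset, not a verified public API:

.. code-block:: python

    # Hedged sketch of the store/retrieve flow from the demo.
    # put_value / get_value names are assumed from the commit log.
    from libp2p.kad_dht.kad_dht import KadDHT
    from libp2p.kad_dht.utils import create_key_from_binary

    async def store_and_fetch(server_dht: KadDHT, client_dht: KadDHT) -> None:
        key = create_key_from_binary(b"py-libp2p kademlia example value")
        # Server side: publish the record into the DHT
        await server_dht.put_value(key, b"Hello message from Sumanjeet")
        # Client side: resolve the same key through the network
        value = await client_dht.get_value(key)
        assert value == b"Hello message from Sumanjeet"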
Command Line Options
--------------------

The Kademlia demo supports several command line options for customization:

.. code-block:: console

    $ python kademlia.py --help
    usage: kademlia.py [-h] [--mode MODE] [--port PORT] [--bootstrap [BOOTSTRAP ...]] [--verbose]

    Kademlia DHT example with content server functionality

    options:
      -h, --help            show this help message and exit
      --mode MODE           Run as a server or client node (default: server)
      --port PORT           Port to listen on (0 for random) (default: 0)
      --bootstrap [BOOTSTRAP ...]
                            Multiaddrs of bootstrap nodes. Provide a space-separated list of addresses.
                            This is required for client mode.
      --verbose             Enable verbose logging

**Examples:**

Start server on a specific port:

.. code-block:: console

    $ python kademlia.py --mode server --port 8000

Start client with verbose logging:

.. code-block:: console

    $ python kademlia.py --mode client --verbose

Connect to multiple bootstrap nodes:

.. code-block:: console

    $ python kademlia.py --mode client --bootstrap /ip4/127.0.0.1/tcp/8000/p2p/... /ip4/127.0.0.1/tcp/8001/p2p/...
How It Works
------------

The Kademlia DHT implementation demonstrates several key concepts (a short construction sketch follows the two mode lists):

**Server Mode:**

- Stores key-value pairs in the distributed hash table
- Advertises itself as a content provider for specific content
- Handles incoming DHT requests from other nodes
- Maintains a routing table with known peers

**Client Mode:**

- Connects to bootstrap nodes to join the network
- Retrieves values by their keys from the DHT
- Discovers content providers for specific content
- Performs network lookups using the Kademlia algorithm
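The mode is just the ``DHTMode`` enum passed when the DHT is constructed, mirroring the full example source linked below:

.. code-block:: python

    from libp2p.kad_dht.kad_dht import DHTMode, KadDHT

    # SERVER nodes answer queries and store records; CLIENT nodes only
    # issue lookups. `host` is a previously created libp2p host.
    dht = KadDHT(host, DHTMode.SERVER)  # or DHTMode.CLIENT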
**Key Components:**

- **Routing Table**: Organizes peers in k-buckets based on XOR distance (illustrated in the sketch below)
- **Value Store**: Manages key-value storage with TTL (time-to-live)
- **Provider Store**: Tracks which peers provide specific content
- **Peer Routing**: Implements iterative lookups to find closest peers
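XOR distance is the metric behind the k-bucket layout. The real helper lives in ``libp2p.kad_dht.utils`` according to the commit log; the standalone version below is only a sketch for intuition:

.. code-block:: python

    def xor_distance(a: bytes, b: bytes) -> int:
        """Kademlia distance: the XOR of two IDs, read as a big-endian integer."""
        return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")

    # IDs sharing a longer common prefix with the target are "closer" and
    # therefore fall into nearer k-buckets.
    assert xor_distance(b"\x00\x01", b"\x00\x03") == 2
    assert xor_distance(b"\x80\x00", b"\x00\x00") == 32768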
The full source code for this example is below:

.. literalinclude:: ../examples/kademlia/kademlia.py
   :language: python
   :linenos:

docs/examples.mDNS.rst

@ -0,0 +1,64 @@
mDNS Peer Discovery Example
===========================

This example demonstrates how to use mDNS (Multicast DNS) for peer discovery in py-libp2p.

Prerequisites
-------------

First, ensure you have py-libp2p installed and your environment is activated:

.. code-block:: console

    $ python -m pip install libp2p

Running the Example
-------------------

The mDNS demo script allows you to discover peers on your local network using mDNS. To start a peer, run:

.. code-block:: console

    $ mdns-demo

You should see output similar to:

.. code-block:: console

    Run this from another console to start another peer on a different port:
    python mdns-demo -p <ANOTHER_PORT>
    Waiting for mDNS peer discovery events...
    2025-06-20 23:28:12,052 - libp2p.example.discovery.mdns - INFO - Starting peer Discovery

To discover peers, open another terminal and run the same command with a different port:

.. code-block:: console

    $ python mdns-demo -p 9001

You should see output indicating that a new peer has been discovered:

.. code-block:: console

    Run this from the same folder in another console to start another peer on a different port:
    python mdns-demo -p <ANOTHER_PORT>
    Waiting for mDNS peer discovery events...
    2025-06-20 23:43:43,786 - libp2p.example.discovery.mdns - INFO - Starting peer Discovery
    2025-06-20 23:43:43,790 - libp2p.example.discovery.mdns - INFO - Discovered: 16Uiu2HAmGxy5NdQEjZWtrYUMrzdp3Syvg7MB2E5Lx8weA9DanYxj

When a new peer is discovered, its peer ID will be printed in the console output.

How it Works
------------

- Each node advertises itself on the local network using mDNS.
- When a new peer is discovered, the handler prints its peer ID.
- This is useful for local peer discovery without requiring a DHT or bootstrap nodes.

You can modify the script to perform additional actions when peers are discovered, such as opening streams or exchanging messages.
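For reference, wiring mDNS into your own program amounts to enabling the discovery option on the host. A minimal sketch, assuming the ``enable_mDNS`` flag on ``new_host`` referenced in the commit log ("add mDNS discovery option to new_host function docs"); treat the exact parameter name as an assumption:

.. code-block:: python

    import multiaddr
    import trio

    from libp2p import new_host

    async def main() -> None:
        # enable_mDNS is the option referenced in the commit log;
        # the exact name is an assumption here.
        host = new_host(enable_mDNS=True)
        async with host.run(listen_addrs=[multiaddr.Multiaddr("/ip4/0.0.0.0/tcp/0")]):
            # The host now advertises itself and hears peers on the LAN.
            await trio.sleep_forever()

    trio.run(main)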

docs/examples.rst

@ -11,3 +11,6 @@ Examples
    examples.echo
    examples.ping
    examples.pubsub
+   examples.circuit_relay
+   examples.kademlia
+   examples.mDNS

docs/index.rst

@ -12,10 +12,6 @@ The Python implementation of the libp2p networking stack
    getting_started
    release_notes

-.. toctree::
-   :maxdepth: 1
-   :caption: Community
-
 .. toctree::
    :maxdepth: 1
    :caption: py-libp2p

docs/libp2p.discovery.events.rst

@ -0,0 +1,21 @@
libp2p.discovery.events package
===============================

Submodules
----------

libp2p.discovery.events.peerDiscovery module
--------------------------------------------

.. automodule:: libp2p.discovery.events.peerDiscovery
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: libp2p.discovery.events
   :members:
   :undoc-members:
   :show-inheritance:

docs/libp2p.discovery.mdns.rst

@ -0,0 +1,45 @@
libp2p.discovery.mdns package
=============================

Submodules
----------

libp2p.discovery.mdns.broadcaster module
----------------------------------------

.. automodule:: libp2p.discovery.mdns.broadcaster
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.discovery.mdns.listener module
-------------------------------------

.. automodule:: libp2p.discovery.mdns.listener
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.discovery.mdns.mdns module
---------------------------------

.. automodule:: libp2p.discovery.mdns.mdns
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.discovery.mdns.utils module
----------------------------------

.. automodule:: libp2p.discovery.mdns.utils
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: libp2p.discovery.mdns
   :members:
   :undoc-members:
   :show-inheritance:

docs/libp2p.discovery.rst

@ -0,0 +1,22 @@
libp2p.discovery package
========================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   libp2p.discovery.events
   libp2p.discovery.mdns

Submodules
----------

Module contents
---------------

.. automodule:: libp2p.discovery
   :members:
   :undoc-members:
   :show-inheritance:

docs/libp2p.kad_dht.pb.rst

@ -0,0 +1,22 @@
libp2p.kad\_dht.pb package
==========================

Submodules
----------

libp2p.kad_dht.pb.kademlia_pb2 module
-------------------------------------

.. automodule:: libp2p.kad_dht.pb.kademlia_pb2
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: libp2p.kad_dht.pb
   :no-index:
   :members:
   :undoc-members:
   :show-inheritance:

docs/libp2p.kad_dht.rst

@ -0,0 +1,77 @@
libp2p.kad\_dht package
=======================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   libp2p.kad_dht.pb

Submodules
----------

libp2p.kad\_dht.kad\_dht module
-------------------------------

.. automodule:: libp2p.kad_dht.kad_dht
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.kad\_dht.peer\_routing module
------------------------------------

.. automodule:: libp2p.kad_dht.peer_routing
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.kad\_dht.provider\_store module
--------------------------------------

.. automodule:: libp2p.kad_dht.provider_store
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.kad\_dht.routing\_table module
-------------------------------------

.. automodule:: libp2p.kad_dht.routing_table
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.kad\_dht.utils module
----------------------------

.. automodule:: libp2p.kad_dht.utils
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.kad\_dht.value\_store module
-----------------------------------

.. automodule:: libp2p.kad_dht.value_store
   :members:
   :undoc-members:
   :show-inheritance:

libp2p.kad\_dht.pb
------------------

.. automodule:: libp2p.kad_dht.pb
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: libp2p.kad_dht
   :members:
   :undoc-members:
   :show-inheritance:

docs/libp2p.relay.circuit_v2.pb.rst

@ -0,0 +1,22 @@
libp2p.relay.circuit_v2.pb package
==================================

Submodules
----------

libp2p.relay.circuit_v2.pb.circuit_pb2 module
---------------------------------------------

.. automodule:: libp2p.relay.circuit_v2.pb.circuit_pb2
   :members:
   :show-inheritance:
   :undoc-members:

Module contents
---------------

.. automodule:: libp2p.relay.circuit_v2.pb
   :members:
   :show-inheritance:
   :undoc-members:
   :no-index:

docs/libp2p.relay.circuit_v2.rst

@ -0,0 +1,70 @@
libp2p.relay.circuit_v2 package
===============================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   libp2p.relay.circuit_v2.pb

Submodules
----------

libp2p.relay.circuit_v2.protocol module
---------------------------------------

.. automodule:: libp2p.relay.circuit_v2.protocol
   :members:
   :show-inheritance:
   :undoc-members:

libp2p.relay.circuit_v2.transport module
----------------------------------------

.. automodule:: libp2p.relay.circuit_v2.transport
   :members:
   :show-inheritance:
   :undoc-members:

libp2p.relay.circuit_v2.discovery module
----------------------------------------

.. automodule:: libp2p.relay.circuit_v2.discovery
   :members:
   :show-inheritance:
   :undoc-members:

libp2p.relay.circuit_v2.resources module
----------------------------------------

.. automodule:: libp2p.relay.circuit_v2.resources
   :members:
   :show-inheritance:
   :undoc-members:

libp2p.relay.circuit_v2.config module
-------------------------------------

.. automodule:: libp2p.relay.circuit_v2.config
   :members:
   :show-inheritance:
   :undoc-members:

libp2p.relay.circuit_v2.protocol_buffer module
----------------------------------------------

.. automodule:: libp2p.relay.circuit_v2.protocol_buffer
   :members:
   :show-inheritance:
   :undoc-members:

Module contents
---------------

.. automodule:: libp2p.relay.circuit_v2
   :members:
   :show-inheritance:
   :undoc-members:
   :no-index:

docs/libp2p.relay.rst

@ -0,0 +1,19 @@
libp2p.relay package
====================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   libp2p.relay.circuit_v2

Module contents
---------------

.. automodule:: libp2p.relay
   :members:
   :show-inheritance:
   :undoc-members:
   :no-index:

docs/libp2p.rst

@ -8,13 +8,16 @@ Subpackages
    :maxdepth: 4

    libp2p.crypto
+   libp2p.discovery
    libp2p.host
    libp2p.identity
    libp2p.io
+   libp2p.kad_dht
    libp2p.network
    libp2p.peer
    libp2p.protocol_muxer
    libp2p.pubsub
+   libp2p.relay
    libp2p.security
    libp2p.stream_muxer
    libp2p.tools

docs/release_notes.rst

@ -3,6 +3,51 @@ Release Notes

 .. towncrier release notes start

+py-libp2p v0.2.8 (2025-06-10)
+-----------------------------
+
+Breaking Changes
+~~~~~~~~~~~~~~~~
+
+- The `NetStream.state` property is now async and requires `await`. Update any direct state access to use `await stream.state`. (`#300 <https://github.com/libp2p/py-libp2p/issues/300>`__)
+
+Bugfixes
+~~~~~~~~
+
+- Added proper state management and resource cleanup to `NetStream`, fixing memory leaks and improved error handling. (`#300 <https://github.com/libp2p/py-libp2p/issues/300>`__)
+
+Improved Documentation
+~~~~~~~~~~~~~~~~~~~~~~
+
+- Updated examples to automatically use random port, when `-p` flag is not given (`#661 <https://github.com/libp2p/py-libp2p/issues/661>`__)
+
+Features
+~~~~~~~~
+
+- Allow passing `listen_addrs` to `new_swarm` to customize swarm listening behavior. (`#616 <https://github.com/libp2p/py-libp2p/issues/616>`__)
+- Feature: Support for sending `ls` command over `multistream-select` to list supported protocols from remote peer.
+  This allows inspecting which protocol handlers a peer supports at runtime. (`#622 <https://github.com/libp2p/py-libp2p/issues/622>`__)
+- implement AsyncContextManager for IMuxedStream to support async with (`#629 <https://github.com/libp2p/py-libp2p/issues/629>`__)
+- feat: add method to compute time since last message published by a peer and remove fanout peers based on ttl. (`#636 <https://github.com/libp2p/py-libp2p/issues/636>`__)
+- implement blacklist management for `pubsub.Pubsub` with methods to get, add, remove, check, and clear blacklisted peer IDs. (`#641 <https://github.com/libp2p/py-libp2p/issues/641>`__)
+- fix: remove expired peers from peerstore based on TTL (`#650 <https://github.com/libp2p/py-libp2p/issues/650>`__)
+
+Internal Changes - for py-libp2p Contributors
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Modernizes several aspects of the project, notably using ``pyproject.toml`` for project info instead of ``setup.py``, using ``ruff`` to replace several separate linting tools, and ``pyrefly`` in addition to ``mypy`` for typing. Also includes changes across the codebase to conform to new linting and typing rules. (`#618 <https://github.com/libp2p/py-libp2p/issues/618>`__)
+
+Removals
+~~~~~~~~
+
+- Removes support for python 3.9 and updates some code conventions, notably using ``|`` operator in typing instead of ``Optional`` or ``Union`` (`#618 <https://github.com/libp2p/py-libp2p/issues/618>`__)
+
 py-libp2p v0.2.7 (2025-05-22)
 -----------------------------


@ -24,6 +24,12 @@ async def main():
     insecure_transport = InsecureTransport(
         # local_key_pair: The key pair used for libp2p identity
         local_key_pair=key_pair,
+        # secure_bytes_provider: Optional function to generate secure random bytes
+        # (defaults to secrets.token_bytes)
+        secure_bytes_provider=None,  # Use default implementation
+        # peerstore: Optional peerstore to store peer IDs and public keys
+        # (defaults to None)
+        peerstore=None,
     )

     # Create a security options dictionary mapping protocol ID to transport
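Since ``secure_bytes_provider`` defaults to ``secrets.token_bytes``, passing ``None`` is equivalent to supplying it explicitly. A small illustrative sketch; the import path and the surrounding ``key_pair`` variable are assumed from the example context:

.. code-block:: python

    import secrets

    from libp2p.security.insecure.transport import InsecureTransport  # path assumed

    insecure_transport = InsecureTransport(
        local_key_pair=key_pair,  # key_pair as created earlier in the example
        secure_bytes_provider=secrets.token_bytes,  # same as the documented default
        peerstore=None,
    )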

examples/kademlia/kademlia.py

@ -0,0 +1,300 @@
#!/usr/bin/env python
"""
A basic example of using the Kademlia DHT implementation, with all setup logic inlined.
This example demonstrates both value storage/retrieval and content server
advertisement/discovery.
"""
import argparse
import logging
import os
import random
import secrets
import sys
import base58
from multiaddr import (
Multiaddr,
)
import trio
from libp2p import (
new_host,
)
from libp2p.abc import (
IHost,
)
from libp2p.crypto.secp256k1 import (
create_new_key_pair,
)
from libp2p.kad_dht.kad_dht import (
DHTMode,
KadDHT,
)
from libp2p.kad_dht.utils import (
create_key_from_binary,
)
from libp2p.tools.async_service import (
background_trio_service,
)
from libp2p.tools.utils import (
info_from_p2p_addr,
)
# Configure logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
handlers=[logging.StreamHandler()],
)
logger = logging.getLogger("kademlia-example")
# Configure DHT module loggers to inherit from the parent logger
# This ensures all kademlia-example.* loggers use the same configuration
# Get the directory where this script is located
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
SERVER_ADDR_LOG = os.path.join(SCRIPT_DIR, "server_node_addr.txt")
# Set the level for all child loggers
for module in [
"kad_dht",
"value_store",
"peer_routing",
"routing_table",
"provider_store",
]:
child_logger = logging.getLogger(f"kademlia-example.{module}")
child_logger.setLevel(logging.INFO)
child_logger.propagate = True # Allow propagation to parent
# Bootstrap nodes to connect to at startup
bootstrap_nodes = []
# Helper that connects the host to each bootstrap node in the given list
async def connect_to_bootstrap_nodes(host: IHost, bootstrap_addrs: list[str]) -> None:
"""
Connect to the bootstrap nodes provided in the list.
Parameters
----------
host : IHost
The host instance that dials the bootstrap nodes.
bootstrap_addrs : list[str]
List of bootstrap node addresses.
Returns
-------
None
"""
for addr in bootstrap_addrs:
try:
peerInfo = info_from_p2p_addr(Multiaddr(addr))
host.get_peerstore().add_addrs(peerInfo.peer_id, peerInfo.addrs, 3600)
await host.connect(peerInfo)
except Exception as e:
logger.error(f"Failed to connect to bootstrap node {addr}: {e}")
def save_server_addr(addr: str) -> None:
"""Append the server's multiaddress to the log file."""
try:
with open(SERVER_ADDR_LOG, "w") as f:
f.write(addr + "\n")
logger.info(f"Saved server address to log: {addr}")
except Exception as e:
logger.error(f"Failed to save server address: {e}")
def load_server_addrs() -> list[str]:
"""Load all server multiaddresses from the log file."""
if not os.path.exists(SERVER_ADDR_LOG):
return []
try:
with open(SERVER_ADDR_LOG) as f:
return [line.strip() for line in f if line.strip()]
except Exception as e:
logger.error(f"Failed to load server addresses: {e}")
return []
async def run_node(
port: int, mode: str, bootstrap_addrs: list[str] | None = None
) -> None:
"""Run a node that serves content in the DHT with setup inlined."""
try:
if port <= 0:
port = random.randint(10000, 60000)
logger.debug(f"Using port: {port}")
# Convert string mode to DHTMode enum
if mode is None or mode.upper() == "CLIENT":
dht_mode = DHTMode.CLIENT
elif mode.upper() == "SERVER":
dht_mode = DHTMode.SERVER
else:
logger.error(f"Invalid mode: {mode}. Must be 'client' or 'server'")
sys.exit(1)
# Load server addresses for client mode
if dht_mode == DHTMode.CLIENT:
server_addrs = load_server_addrs()
if server_addrs:
logger.info(f"Loaded {len(server_addrs)} server addresses from log")
bootstrap_nodes.append(server_addrs[0]) # Use the first server address
else:
logger.warning("No server addresses found in log file")
if bootstrap_addrs:
for addr in bootstrap_addrs:
bootstrap_nodes.append(addr)
key_pair = create_new_key_pair(secrets.token_bytes(32))
host = new_host(key_pair=key_pair)
listen_addr = Multiaddr(f"/ip4/127.0.0.1/tcp/{port}")
async with host.run(listen_addrs=[listen_addr]):
peer_id = host.get_id().pretty()
addr_str = f"/ip4/127.0.0.1/tcp/{port}/p2p/{peer_id}"
await connect_to_bootstrap_nodes(host, bootstrap_nodes)
dht = KadDHT(host, dht_mode)
# Seed the DHT routing table with every peer already in the peerstore
for peer_id in host.get_peerstore().peer_ids():
await dht.routing_table.add_peer(peer_id)
logger.info(f"Connected to bootstrap nodes: {host.get_connected_peers()}")
bootstrap_cmd = f"--bootstrap {addr_str}"
logger.info("To connect to this node, use: %s", bootstrap_cmd)
# Save server address in server mode
if dht_mode == DHTMode.SERVER:
save_server_addr(addr_str)
# Start the DHT service
async with background_trio_service(dht):
logger.info(f"DHT service started in {dht_mode.value} mode")
val_key = create_key_from_binary(b"py-libp2p kademlia example value")
content = b"Hello from python node "
content_key = create_key_from_binary(content)
if dht_mode == DHTMode.SERVER:
# Store a value in the DHT
msg = "Hello message from Sumanjeet"
val_data = msg.encode()
await dht.put_value(val_key, val_data)
logger.info(
f"Stored value '{val_data.decode()}'"
f"with key: {base58.b58encode(val_key).decode()}"
)
# Advertise as content server
success = await dht.provider_store.provide(content_key)
if success:
logger.info(
"Successfully advertised as server"
f"for content: {content_key.hex()}"
)
else:
logger.warning("Failed to advertise as content server")
else:
# retrieve the value
logger.info(
"Looking up key: %s", base58.b58encode(val_key).decode()
)
val_data = await dht.get_value(val_key)
if val_data:
try:
logger.info(f"Retrieved value: {val_data.decode()}")
except UnicodeDecodeError:
logger.info(f"Retrieved value (bytes): {val_data!r}")
else:
logger.warning("Failed to retrieve value")
# Also check if we can find servers for our own content
logger.info("Looking for servers of content: %s", content_key.hex())
providers = await dht.provider_store.find_providers(content_key)
if providers:
logger.info(
"Found %d servers for content: %s",
len(providers),
[p.peer_id.pretty() for p in providers],
)
else:
logger.warning(
"No servers found for content %s", content_key.hex()
)
# Keep the node running
while True:
logger.debug(
"Status - Connected peers: %d,"
"Peers in store: %d, Values in store: %d",
len(dht.host.get_connected_peers()),
len(dht.host.get_peerstore().peer_ids()),
len(dht.value_store.store),
)
await trio.sleep(10)
except Exception as e:
logger.error(f"Server node error: {e}", exc_info=True)
sys.exit(1)
def parse_args():
"""Parse command line arguments."""
parser = argparse.ArgumentParser(
description="Kademlia DHT example with content server functionality"
)
parser.add_argument(
"--mode",
default="server",
help="Run as a server or client node",
)
parser.add_argument(
"--port",
type=int,
default=0,
help="Port to listen on (0 for random)",
)
parser.add_argument(
"--bootstrap",
type=str,
nargs="*",
help=(
"Multiaddrs of bootstrap nodes. "
"Provide a space-separated list of addresses. "
"This is required for client mode."
),
)
# add option to use verbose logging
parser.add_argument(
"--verbose",
action="store_true",
help="Enable verbose logging",
)
args = parser.parse_args()
# Set logging level based on verbosity
if args.verbose:
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.getLogger().setLevel(logging.INFO)
return args
def main():
"""Main entry point for the kademlia demo."""
try:
args = parse_args()
logger.info(
"Running in %s mode on port %d",
args.mode,
args.port,
)
trio.run(run_node, args.port, args.mode, args.bootstrap)
except Exception as e:
logger.critical(f"Script failed: {e}", exc_info=True)
sys.exit(1)
if __name__ == "__main__":
main()

74
examples/mDNS/mDNS.py Normal file
View File

@ -0,0 +1,74 @@
import argparse
import logging
import secrets
import multiaddr
import trio
from libp2p import (
new_host,
)
from libp2p.abc import PeerInfo
from libp2p.crypto.secp256k1 import (
create_new_key_pair,
)
from libp2p.discovery.events.peerDiscovery import peerDiscovery
logger = logging.getLogger("libp2p.discovery.mdns")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logger.addHandler(handler)
# Set root logger to DEBUG to capture all logs from dependencies
logging.getLogger().setLevel(logging.DEBUG)
def onPeerDiscovery(peerinfo: PeerInfo):
logger.info(f"Discovered: {peerinfo.peer_id}")
async def run(port: int) -> None:
secret = secrets.token_bytes(32)
key_pair = create_new_key_pair(secret)
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
peerDiscovery.register_peer_discovered_handler(onPeerDiscovery)
print(
"Run this from the same folder in another console to "
"start another peer on a different port:\n\n"
"mdns-demo -p <ANOTHER_PORT>\n"
)
print("Waiting for mDNS peer discovery events...\n")
logger.info("Starting peer Discovery")
host = new_host(key_pair=key_pair, enable_mDNS=True)
async with host.run(listen_addrs=[listen_addr]):
await trio.sleep_forever()
def main() -> None:
description = """
This program demonstrates mDNS peer discovery using libp2p.
To use it, run 'mdns-demo -p <PORT>', where <PORT> is the port number.
Start multiple peers on different ports to see discovery in action.
"""
parser = argparse.ArgumentParser(description=description)
parser.add_argument("-p", "--port", default=0, type=int, help="source port number")
parser.add_argument(
"-v", "--verbose", action="store_true", help="Enable verbose output"
)
args = parser.parse_args()
if args.verbose:
logger.setLevel(logging.DEBUG)
try:
trio.run(run, args.port)
except KeyboardInterrupt:
logger.info("Exiting...")
if __name__ == "__main__":
main()

View File

@ -32,6 +32,9 @@ from libp2p.custom_types import (
TProtocol,
TSecurityOptions,
)
from libp2p.discovery.mdns.mdns import (
MDNSDiscovery,
)
from libp2p.host.basic_host import (
BasicHost,
)
@ -81,6 +84,8 @@ DEFAULT_MUXER = "YAMUX"
# Multiplexer options
MUXER_YAMUX = "YAMUX"
MUXER_MPLEX = "MPLEX"
DEFAULT_NEGOTIATE_TIMEOUT = 5
def set_default_muxer(muxer_name: Literal["YAMUX", "MPLEX"]) -> None:
@ -200,7 +205,9 @@ def new_swarm(
key_pair, noise_privkey=noise_key_pair.private_key
),
TProtocol(secio.ID): secio.Transport(key_pair),
TProtocol(PLAINTEXT_PROTOCOL_ID): InsecureTransport(
key_pair, peerstore=peerstore_opt
),
}
# Use given muxer preference if provided, otherwise use global default
@ -243,6 +250,8 @@ def new_host(
disc_opt: IPeerRouting | None = None,
muxer_preference: Literal["YAMUX", "MPLEX"] | None = None,
listen_addrs: Sequence[multiaddr.Multiaddr] | None = None,
enable_mDNS: bool = False,
negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
) -> IHost:
"""
Create a new libp2p host based on the given parameters.
@ -254,6 +263,7 @@ def new_host(
:param disc_opt: optional discovery
:param muxer_preference: optional explicit muxer preference
:param listen_addrs: optional list of multiaddrs to listen on
:param enable_mDNS: whether to enable mDNS discovery
:param negotiate_timeout: seconds allowed for protocol negotiation
:return: return a host instance
"""
swarm = new_swarm(
@ -266,8 +276,7 @@ def new_host(
)
if disc_opt is not None:
return RoutedHost(swarm, disc_opt, enable_mDNS)
return BasicHost(
network=swarm, enable_mDNS=enable_mDNS, negotiate_timeout=negotiate_timeout
)
__version__ = __version("libp2p")
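Putting the new parameters together, a minimal host-construction sketch based on the signature above (listen address and timeout value are illustrative):

import secrets

import multiaddr
import trio

from libp2p import new_host
from libp2p.crypto.secp256k1 import create_new_key_pair

async def main() -> None:
    host = new_host(
        key_pair=create_new_key_pair(secrets.token_bytes(32)),
        enable_mDNS=True,  # start mDNS discovery when the host runs
        negotiate_timeout=10,  # loosen the 5-second default
    )
    async with host.run(listen_addrs=[multiaddr.Multiaddr("/ip4/0.0.0.0/tcp/0")]):
        await trio.sleep_forever()

trio.run(main)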

View File

@ -385,6 +385,18 @@ class IPeerMetadata(ABC):
:raises Exception: If the operation is unsuccessful.
"""
@abstractmethod
def clear_metadata(self, peer_id: ID) -> None:
"""
Remove all stored metadata for the specified peer.
Parameters
----------
peer_id : ID
The peer identifier whose metadata are to be removed.
"""
# -------------------------- addrbook interface.py --------------------------
@ -476,10 +488,272 @@
"""
# -------------------------- keybook interface.py --------------------------
class IKeyBook(ABC):
"""
Interface for a key book.
Provides methods for managing cryptographic keys.
"""
@abstractmethod
def pubkey(self, peer_id: ID) -> PublicKey:
"""
Returns the public key of the specified peer
Parameters
----------
peer_id : ID
The peer identifier whose public key is to be returned.
"""
@abstractmethod
def privkey(self, peer_id: ID) -> PrivateKey:
"""
Returns the private key of the specified peer
Parameters
----------
peer_id : ID
The peer identifier whose private key is to be returned.
"""
@abstractmethod
def add_pubkey(self, peer_id: ID, pubkey: PublicKey) -> None:
"""
Adds the public key for a specified peer
Parameters
----------
peer_id : ID
The peer identifier whose public key is to be added
pubkey: PublicKey
The public key of the peer
"""
@abstractmethod
def add_privkey(self, peer_id: ID, privkey: PrivateKey) -> None:
"""
Adds the private key for a specified peer
Parameters
----------
peer_id : ID
The peer identifier whose private key is to be added
privkey: PrivateKey
The private key of the peer
"""
@abstractmethod
def add_key_pair(self, peer_id: ID, key_pair: KeyPair) -> None:
"""
Adds the key pair for a specified peer
Parameters
----------
peer_id : ID
The peer identifier whose key pair is to be added
key_pair: KeyPair
The key pair of the peer
"""
@abstractmethod
def peer_with_keys(self) -> list[ID]:
"""Returns all the peer IDs stored in the AddrBook"""
@abstractmethod
def clear_keydata(self, peer_id: ID) -> None:
"""
Remove all stored keydata for the specified peer.
Parameters
----------
peer_id : ID
The peer identifier whose keys are to be removed.
"""
# -------------------------- metrics interface.py --------------------------
class IMetrics(ABC):
"""
Interface for metrics of peer interaction.
Provides methods for managing the metrics.
"""
@abstractmethod
def record_latency(self, peer_id: ID, RTT: float) -> None:
"""
Records a new round-trip time (RTT) latency value for the specified peer
using Exponentially Weighted Moving Average (EWMA).
Parameters
----------
peer_id : ID
The identifier of the peer for which latency is being recorded.
RTT : float
The round-trip time latency value to record.
"""
@abstractmethod
def latency_EWMA(self, peer_id: ID) -> float:
"""
Returns the current latency value for the specified peer using
Exponentially Weighted Moving Average (EWMA).
Parameters
----------
peer_id : ID
The identifier of the peer whose latency EWMA is to be returned.
"""
@abstractmethod
def clear_metrics(self, peer_id: ID) -> None:
"""
Clears the stored latency metrics for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer whose latency metrics are to be cleared.
"""
# -------------------------- protobook interface.py --------------------------
class IProtoBook(ABC):
"""
Interface for a protocol book.
Provides methods for managing the list of supported protocols.
"""
@abstractmethod
def get_protocols(self, peer_id: ID) -> list[str]:
"""
Returns the list of protocols associated with the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer whose supported protocols are to be returned.
"""
@abstractmethod
def add_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Adds the given protocols to the specified peer's protocol list.
Parameters
----------
peer_id : ID
The identifier of the peer to which protocols will be added.
protocols : Sequence[str]
A sequence of protocol strings to add.
"""
@abstractmethod
def set_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Replaces the existing protocols of the specified peer with the given list.
Parameters
----------
peer_id : ID
The identifier of the peer whose protocols are to be set.
protocols : Sequence[str]
A sequence of protocol strings to assign.
"""
@abstractmethod
def remove_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Removes the specified protocols from the peer's protocol list.
Parameters
----------
peer_id : ID
The identifier of the peer from which protocols will be removed.
protocols : Sequence[str]
A sequence of protocol strings to remove.
"""
@abstractmethod
def supports_protocols(self, peer_id: ID, protocols: Sequence[str]) -> list[str]:
"""
Returns the list of protocols from the input sequence that the peer supports.
Parameters
----------
peer_id : ID
The identifier of the peer to check for protocol support.
protocols : Sequence[str]
A sequence of protocol strings to check against the peer's
supported protocols.
"""
@abstractmethod
def first_supported_protocol(self, peer_id: ID, protocols: Sequence[str]) -> str:
"""
Returns the first protocol from the input list that the peer supports.
Parameters
----------
peer_id : ID
The identifier of the peer to check for supported protocols.
protocols : Sequence[str]
A sequence of protocol strings to check.
Returns
-------
str
The first matching protocol string, or an empty string
if none are supported.
"""
@abstractmethod
def clear_protocol_data(self, peer_id: ID) -> None:
"""
Clears all protocol data associated with the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer whose protocol data will be cleared.
"""
# -------------------------- peerstore interface.py --------------------------
class IPeerStore(IPeerMetadata, IAddrBook, IKeyBook, IMetrics, IProtoBook):
"""
Interface for a peer store.
@ -487,85 +761,7 @@ class IPeerStore(IAddrBook, IPeerMetadata):
management, protocol handling, and key storage.
"""
# -------METADATA---------
def peer_info(self, peer_id: ID) -> PeerInfo:
"""
Retrieve the peer information for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
Returns
-------
PeerInfo
The peer information object for the given peer.
"""
@abstractmethod
def get_protocols(self, peer_id: ID) -> list[str]:
"""
Retrieve the protocols associated with the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
Returns
-------
list[str]
A list of protocol identifiers.
Raises
------
PeerStoreError
If the peer ID is not found.
"""
@abstractmethod
def add_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Add additional protocols for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
protocols : Sequence[str]
The protocols to add.
"""
@abstractmethod
def set_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Set the protocols for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
protocols : Sequence[str]
The protocols to set.
"""
@abstractmethod
def peer_ids(self) -> list[ID]:
"""
Retrieve all peer identifiers stored in the peer store.
Returns
-------
list[ID]
A list of all peer IDs in the store.
"""
@abstractmethod
def get(self, peer_id: ID, key: str) -> Any:
"""
@ -606,6 +802,19 @@ class IPeerStore(IAddrBook, IPeerMetadata):
""" """
@abstractmethod
def clear_metadata(self, peer_id: ID) -> None:
"""
Remove all stored metadata for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer whose metadata are to be removed.
"""
# --------ADDR-BOOK---------
@abstractmethod
def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int) -> None:
"""
@ -679,25 +888,7 @@ class IPeerStore(IAddrBook, IPeerMetadata):
""" """
@abstractmethod # --------KEY-BOOK----------
def add_pubkey(self, peer_id: ID, pubkey: PublicKey) -> None:
"""
Add a public key for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
pubkey : PublicKey
The public key to add.
Raises
------
PeerStoreError
If the peer already has a public key set.
"""
@abstractmethod
def pubkey(self, peer_id: ID) -> PublicKey:
"""
@ -720,25 +911,6 @@ class IPeerStore(IAddrBook, IPeerMetadata):
""" """
@abstractmethod
def add_privkey(self, peer_id: ID, privkey: PrivateKey) -> None:
"""
Add a private key for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
privkey : PrivateKey
The private key to add.
Raises
------
PeerStoreError
If the peer already has a private key set.
"""
@abstractmethod
def privkey(self, peer_id: ID) -> PrivateKey:
"""
@ -761,6 +933,44 @@ class IPeerStore(IAddrBook, IPeerMetadata):
""" """
@abstractmethod
def add_pubkey(self, peer_id: ID, pubkey: PublicKey) -> None:
"""
Add a public key for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
pubkey : PublicKey
The public key to add.
Raises
------
PeerStoreError
If the peer already has a public key set.
"""
@abstractmethod
def add_privkey(self, peer_id: ID, privkey: PrivateKey) -> None:
"""
Add a private key for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
privkey : PrivateKey
The private key to add.
Raises
------
PeerStoreError
If the peer already has a private key set.
"""
@abstractmethod
def add_key_pair(self, peer_id: ID, key_pair: KeyPair) -> None:
"""
@ -780,6 +990,213 @@ class IPeerStore(IAddrBook, IPeerMetadata):
""" """
@abstractmethod
def peer_with_keys(self) -> list[ID]:
"""Returns all the peer IDs stored in the AddrBook"""
@abstractmethod
def clear_keydata(self, peer_id: ID) -> None:
"""
Remove all stored keydata for the specified peer.
Parameters
----------
peer_id : ID
The peer identifier whose keys are to be removed.
"""
# -------METRICS---------
@abstractmethod
def record_latency(self, peer_id: ID, RTT: float) -> None:
"""
Records a new round-trip time (RTT) latency value for the specified peer
using Exponentially Weighted Moving Average (EWMA).
Parameters
----------
peer_id : ID
The identifier of the peer for which latency is being recorded.
RTT : float
The round-trip time latency value to record.
"""
@abstractmethod
def latency_EWMA(self, peer_id: ID) -> float:
"""
Returns the current latency value for the specified peer using
Exponentially Weighted Moving Average (EWMA).
Parameters
----------
peer_id : ID
The identifier of the peer whose latency EWMA is to be returned.
"""
@abstractmethod
def clear_metrics(self, peer_id: ID) -> None:
"""
Clears the stored latency metrics for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer whose latency metrics are to be cleared.
"""
# --------PROTO-BOOK----------
@abstractmethod
def get_protocols(self, peer_id: ID) -> list[str]:
"""
Retrieve the protocols associated with the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
Returns
-------
list[str]
A list of protocol identifiers.
Raises
------
PeerStoreError
If the peer ID is not found.
"""
@abstractmethod
def add_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Add additional protocols for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
protocols : Sequence[str]
The protocols to add.
"""
@abstractmethod
def set_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Set the protocols for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
protocols : Sequence[str]
The protocols to set.
"""
@abstractmethod
def remove_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
"""
Removes the specified protocols from the peer's protocol list.
Parameters
----------
peer_id : ID
The identifier of the peer from which protocols will be removed.
protocols : Sequence[str]
A sequence of protocol strings to remove.
"""
@abstractmethod
def supports_protocols(self, peer_id: ID, protocols: Sequence[str]) -> list[str]:
"""
Returns the list of protocols from the input sequence that the peer supports.
Parameters
----------
peer_id : ID
The identifier of the peer to check for protocol support.
protocols : Sequence[str]
A sequence of protocol strings to check against the peer's
supported protocols.
"""
@abstractmethod
def first_supported_protocol(self, peer_id: ID, protocols: Sequence[str]) -> str:
"""
Returns the first protocol from the input list that the peer supports.
Parameters
----------
peer_id : ID
The identifier of the peer to check for supported protocols.
protocols : Sequence[str]
A sequence of protocol strings to check.
Returns
-------
str
The first matching protocol string, or an empty string
if none are supported.
"""
@abstractmethod
def clear_protocol_data(self, peer_id: ID) -> None:
"""
Clears all protocol data associated with the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer whose protocol data will be cleared.
"""
# --------PEER-STORE--------
@abstractmethod
def peer_info(self, peer_id: ID) -> PeerInfo:
"""
Retrieve the peer information for the specified peer.
Parameters
----------
peer_id : ID
The identifier of the peer.
Returns
-------
PeerInfo
The peer information object for the given peer.
"""
@abstractmethod
def peer_ids(self) -> list[ID]:
"""
Retrieve all peer identifiers stored in the peer store.
Returns
-------
list[ID]
A list of all peer IDs in the store.
"""
@abstractmethod
def clear_peerdata(self, peer_id: ID) -> None:
"""clear_peerdata"""
# -------------------------- listener interface.py --------------------------
@ -1315,6 +1732,60 @@ class IPeerData(ABC):
""" """
@abstractmethod
def remove_protocols(self, protocols: Sequence[str]) -> None:
"""
Removes the specified protocols from this peer's list of supported protocols.
Parameters
----------
protocols : Sequence[str]
A sequence of protocol strings to be removed.
"""
@abstractmethod
def supports_protocols(self, protocols: Sequence[str]) -> list[str]:
"""
Returns the list of protocols from the input sequence that are supported
by this peer.
Parameters
----------
protocols : Sequence[str]
A sequence of protocol strings to check against this peer's supported
protocols.
Returns
-------
list[str]
A list of protocol strings that are supported.
"""
@abstractmethod
def first_supported_protocol(self, protocols: Sequence[str]) -> str:
"""
Returns the first protocol from the input list that this peer supports.
Parameters
----------
protocols : Sequence[str]
A sequence of protocol strings to check for support.
Returns
-------
str
The first matching protocol, or an empty string if none are supported.
"""
@abstractmethod
def clear_protocol_data(self) -> None:
"""
Clears all protocol data associated with this peer.
"""
@abstractmethod
def add_addrs(self, addrs: Sequence[Multiaddr]) -> None:
"""
@ -1324,6 +1795,8 @@ class IPeerData(ABC):
----------
addrs : Sequence[Multiaddr]
A sequence of multiaddresses to add.
ttl : int
Time-to-live for the peer record.
"""
@ -1382,6 +1855,12 @@ class IPeerData(ABC):
""" """
@abstractmethod
def clear_metadata(self) -> None:
"""
Clears all metadata entries associated with this peer.
"""
@abstractmethod
def add_pubkey(self, pubkey: PublicKey) -> None:
"""
@ -1440,6 +1919,45 @@ class IPeerData(ABC):
""" """
@abstractmethod
def clear_keydata(self) -> None:
"""
Clears all cryptographic key data associated with this peer,
including both public and private keys.
"""
@abstractmethod
def record_latency(self, new_latency: float) -> None:
"""
Records a new latency measurement using
Exponentially Weighted Moving Average (EWMA).
Parameters
----------
new_latency : float
The new round-trip time (RTT) latency value to incorporate
into the EWMA calculation.
"""
@abstractmethod
def latency_EWMA(self) -> float:
"""
Returns the current EWMA value of the recorded latency.
Returns
-------
float
The current latency estimate based on EWMA.
"""
@abstractmethod
def clear_metrics(self) -> None:
"""
Clears all latency-related metrics and resets the internal state.
"""
@abstractmethod
def update_last_identified(self) -> None:
"""
@ -2130,14 +2648,14 @@ class IPubsub(ServiceAPI):
...
@abstractmethod
async def publish(self, topic_id: str | list[str], data: bytes) -> None:
"""
Publish a message to a topic or multiple topics.
Parameters
----------
topic_id : str | list[str]
The identifier of the topic (str) or topics (list[str]).
data : bytes
The data to publish.

View File

@ -0,0 +1,26 @@
from collections.abc import (
Callable,
)
from libp2p.abc import (
PeerInfo,
)
TTL: int = 60 * 60 # Time-to-live for discovered peers in seconds
class PeerDiscovery:
def __init__(self) -> None:
self._peer_discovered_handlers: list[Callable[[PeerInfo], None]] = []
def register_peer_discovered_handler(
self, handler: Callable[[PeerInfo], None]
) -> None:
self._peer_discovered_handlers.append(handler)
def emit_peer_discovered(self, peer_info: PeerInfo) -> None:
for handler in self._peer_discovered_handlers:
handler(peer_info)
peerDiscovery = PeerDiscovery()
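A short usage sketch of this emitter (handler name illustrative): callers register a callback, and the mDNS listener's ``emit_peer_discovered`` call fans out to every registered handler:

from libp2p.abc import PeerInfo
from libp2p.discovery.events.peerDiscovery import peerDiscovery

def on_peer(info: PeerInfo) -> None:
    print("discovered", info.peer_id)

peerDiscovery.register_peer_discovered_handler(on_peer)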

View File

@ -0,0 +1,91 @@
import logging
import socket
from zeroconf import (
EventLoopBlocked,
ServiceInfo,
Zeroconf,
)
logger = logging.getLogger("libp2p.discovery.mdns.broadcaster")
class PeerBroadcaster:
"""
Broadcasts this peer's presence on the local network using mDNS/zeroconf.
Registers a service with the peer's ID in the TXT record as per libp2p spec.
"""
def __init__(
self,
zeroconf: Zeroconf,
service_type: str,
service_name: str,
peer_id: str,
port: int,
):
self.zeroconf = zeroconf
self.service_type = service_type
self.peer_id = peer_id
self.port = port
self.service_name = service_name
# Get the local IP address
local_ip = self._get_local_ip()
hostname = socket.gethostname()
self.service_info = ServiceInfo(
type_=self.service_type,
name=self.service_name,
port=self.port,
properties={b"id": self.peer_id.encode()},
server=f"{hostname}.local.",
addresses=[socket.inet_aton(local_ip)],
)
def _get_local_ip(self) -> str:
"""Get the local IP address of this machine"""
try:
# Connect to a remote address to determine the local IP
# This doesn't actually send data
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
s.connect(("8.8.8.8", 80))
local_ip = s.getsockname()[0]
return local_ip
except Exception:
# Fallback to localhost if we can't determine the IP
return "127.0.0.1"
def register(self) -> None:
"""Register the peer's mDNS service on the network."""
try:
self.zeroconf.register_service(self.service_info)
logger.debug(f"mDNS service registered: {self.service_name}")
except EventLoopBlocked as e:
logger.warning(
"EventLoopBlocked while registering mDNS '%s': %s", self.service_name, e
)
except Exception as e:
logger.error(
"Unexpected error during mDNS registration for '%s': %r",
self.service_name,
e,
)
def unregister(self) -> None:
"""Unregister the peer's mDNS service from the network."""
try:
self.zeroconf.unregister_service(self.service_info)
logger.debug(f"mDNS service unregistered: {self.service_name}")
except EventLoopBlocked as e:
logger.warning(
"EventLoopBlocked while unregistering mDNS '%s': %s",
self.service_name,
e,
)
except Exception as e:
logger.error(
"Unexpected error during mDNS unregistration for '%s': %r",
self.service_name,
e,
)

View File

@ -0,0 +1,83 @@
import logging
import socket
from zeroconf import (
ServiceBrowser,
ServiceInfo,
ServiceListener,
Zeroconf,
)
from libp2p.abc import IPeerStore, Multiaddr
from libp2p.discovery.events.peerDiscovery import peerDiscovery
from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo
logger = logging.getLogger("libp2p.discovery.mdns.listener")
class PeerListener(ServiceListener):
"""mDNS listener — now a true ServiceListener subclass."""
def __init__(
self,
peerstore: IPeerStore,
zeroconf: Zeroconf,
service_type: str,
service_name: str,
) -> None:
self.peerstore = peerstore
self.zeroconf = zeroconf
self.service_type = service_type
self.service_name = service_name
self.discovered_services: dict[str, ID] = {}
self.browser = ServiceBrowser(self.zeroconf, self.service_type, listener=self)
def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
if name == self.service_name:
return
logger.debug(f"Adding service: {name}")
info = zc.get_service_info(type_, name, timeout=5000)
if not info:
return
peer_info = self._extract_peer_info(info)
if peer_info:
self.discovered_services[name] = peer_info.peer_id
self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)
peerDiscovery.emit_peer_discovered(peer_info)
logger.debug(f"Discovered Peer: {peer_info.peer_id}")
def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
if name == self.service_name:
return
logger.debug(f"Removing service: {name}")
peer_id = self.discovered_services.pop(name)
self.peerstore.clear_addrs(peer_id)
logger.debug(f"Removed Peer: {peer_id}")
def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
info = zc.get_service_info(type_, name, timeout=5000)
if not info:
return
peer_info = self._extract_peer_info(info)
if peer_info:
self.peerstore.clear_addrs(peer_info.peer_id)
self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)
logger.debug(f"Updated Peer {peer_info.peer_id}")
def _extract_peer_info(self, info: ServiceInfo) -> PeerInfo | None:
try:
addrs = [
Multiaddr(f"/ip4/{socket.inet_ntoa(addr)}/tcp/{info.port}")
for addr in info.addresses
]
pid_bytes = info.properties.get(b"id")
if not pid_bytes:
return None
pid = ID.from_base58(pid_bytes.decode())
return PeerInfo(peer_id=pid, addrs=addrs)
except Exception:
return None
def stop(self) -> None:
self.browser.cancel()

View File

@ -0,0 +1,73 @@
"""
mDNS-based peer discovery for py-libp2p.
Conforms to https://github.com/libp2p/specs/blob/master/discovery/mdns.md
Uses zeroconf for mDNS broadcast/listen. Async operations use trio.
"""
import logging
from zeroconf import (
Zeroconf,
)
from libp2p.abc import (
INetworkService,
)
from .broadcaster import (
PeerBroadcaster,
)
from .listener import (
PeerListener,
)
from .utils import (
stringGen,
)
logger = logging.getLogger("libp2p.discovery.mdns")
SERVICE_TYPE = "_p2p._udp.local."
MCAST_PORT = 5353
MCAST_ADDR = "224.0.0.251"
class MDNSDiscovery:
"""
mDNS-based peer discovery for py-libp2p, using zeroconf.
Conforms to the libp2p mDNS discovery spec.
"""
def __init__(self, swarm: INetworkService, port: int = 8000):
self.peer_id = str(swarm.get_peer_id())
self.port = port
self.zeroconf = Zeroconf()
self.serviceName = f"{stringGen()}.{SERVICE_TYPE}"
self.peerstore = swarm.peerstore
self.swarm = swarm
self.broadcaster = PeerBroadcaster(
zeroconf=self.zeroconf,
service_type=SERVICE_TYPE,
service_name=self.serviceName,
peer_id=self.peer_id,
port=self.port,
)
self.listener = PeerListener(
zeroconf=self.zeroconf,
peerstore=self.peerstore,
service_type=SERVICE_TYPE,
service_name=self.serviceName,
)
def start(self) -> None:
"""Register this peer and start listening for others."""
logger.debug(
f"Starting mDNS discovery for peer {self.peer_id} on port {self.port}"
)
self.broadcaster.register()
# Listener is started in constructor
def stop(self) -> None:
"""Unregister this peer and clean up zeroconf resources."""
logger.debug("Stopping mDNS discovery")
self.broadcaster.unregister()
self.zeroconf.close()

View File

@ -0,0 +1,11 @@
import random
import string
def stringGen(length: int = 63) -> str:
"""Generate a random string of lowercase letters and digits."""
charset = string.ascii_lowercase + string.digits
return "".join(random.choice(charset) for _ in range(length))

View File

@ -29,6 +29,7 @@ from libp2p.custom_types import (
StreamHandlerFn,
TProtocol,
)
from libp2p.discovery.mdns.mdns import MDNSDiscovery
from libp2p.host.defaults import (
get_default_protocols,
)
@ -70,6 +71,7 @@ if TYPE_CHECKING:
logger = logging.getLogger("libp2p.network.basic_host")
DEFAULT_NEGOTIATE_TIMEOUT = 5
class BasicHost(IHost):
@ -89,15 +91,20 @@ class BasicHost(IHost):
def __init__(
self,
network: INetworkService,
enable_mDNS: bool = False,
default_protocols: Optional["OrderedDict[TProtocol, StreamHandlerFn]"] = None,
negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
) -> None:
self._network = network
self._network.set_stream_handler(self._swarm_stream_handler)
self.peerstore = self._network.peerstore
self.negotiate_timeout = negotiate_timeout
# Protocol muxing
default_protocols = default_protocols or get_default_protocols(self)
self.multiselect = Multiselect(dict(default_protocols.items()))
self.multiselect_client = MultiselectClient()
if enable_mDNS:
self.mDNS = MDNSDiscovery(network)
def get_id(self) -> ID: def get_id(self) -> ID:
""" """
@ -162,7 +169,14 @@ class BasicHost(IHost):
network = self.get_network()
async with background_trio_service(network):
await network.listen(*listen_addrs)
if hasattr(self, "mDNS") and self.mDNS is not None:
logger.debug("Starting mDNS Discovery")
self.mDNS.start()
try:
yield
finally:
if hasattr(self, "mDNS") and self.mDNS is not None:
self.mDNS.stop()
return _run()
@ -178,7 +192,10 @@ class BasicHost(IHost):
self.multiselect.add_handler(protocol_id, stream_handler)
async def new_stream(
self,
peer_id: ID,
protocol_ids: Sequence[TProtocol],
negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
) -> INetStream:
""" """
:param peer_id: peer_id that host is connecting :param peer_id: peer_id that host is connecting
@ -190,7 +207,9 @@ class BasicHost(IHost):
# Perform protocol muxing to determine protocol to use
try:
selected_protocol = await self.multiselect_client.select_one_of(
list(protocol_ids),
MultiselectCommunicator(net_stream),
negotiate_timeout,
)
except MultiselectClientError as error:
logger.debug("fail to open a stream to peer %s, error=%s", peer_id, error)
@ -200,7 +219,12 @@ class BasicHost(IHost):
net_stream.set_protocol(selected_protocol)
return net_stream
async def send_command(
self,
peer_id: ID,
command: str,
response_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
) -> list[str]:
""" """
Send a multistream-select command to the specified peer and return Send a multistream-select command to the specified peer and return
the response. the response.
@ -214,7 +238,7 @@ class BasicHost(IHost):
try:
response = await self.multiselect_client.query_multistream_command(
MultiselectCommunicator(new_stream), command, response_timeout
)
except MultiselectClientError as error:
logger.debug("fail to open a stream to peer %s, error=%s", peer_id, error)
@ -253,7 +277,7 @@ class BasicHost(IHost):
# Perform protocol muxing to determine protocol to use
try:
protocol, handler = await self.multiselect.negotiate(
MultiselectCommunicator(net_stream), self.negotiate_timeout
)
except MultiselectError as error:
peer_id = net_stream.muxed_conn.peer_id
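With the timeout threaded through, a caller can bound negotiation per stream. A hedged sketch assuming an already-connected host and peer (the protocol ID is illustrative):

from libp2p.custom_types import TProtocol

async def open_echo_stream(host, peer_id):
    # 2-second cap on multistream negotiation for this stream only
    return await host.new_stream(peer_id, [TProtocol("/echo/1.0.0")], 2)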

View File

@ -18,8 +18,10 @@ from libp2p.peer.peerinfo import (
class RoutedHost(BasicHost):
_router: IPeerRouting
def __init__(
self, network: INetworkService, router: IPeerRouting, enable_mDNS: bool = False
):
super().__init__(network, enable_mDNS)
self._router = router
async def connect(self, peer_info: PeerInfo) -> None:

View File

@ -40,6 +40,7 @@ logger = logging.getLogger(__name__)
ID_PUSH = TProtocol("/ipfs/id/push/1.0.0")
PROTOCOL_VERSION = "ipfs/0.1.0"
AGENT_VERSION = get_agent_version()
CONCURRENCY_LIMIT = 10
def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
@ -132,7 +133,10 @@ async def _update_peerstore_from_identify(
async def push_identify_to_peer(
host: IHost,
peer_id: ID,
observed_multiaddr: Multiaddr | None = None,
limit: trio.Semaphore = trio.Semaphore(CONCURRENCY_LIMIT),
) -> bool:
"""
Push an identify message to a specific peer.
@ -146,25 +150,26 @@ async def push_identify_to_peer(
True if the push was successful, False otherwise.
"""
async with limit:
try:
# Create a new stream to the peer using the identify/push protocol
stream = await host.new_stream(peer_id, [ID_PUSH])
# Create the identify message
identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
response = identify_msg.SerializeToString()
# Send the identify message
await stream.write(response)
# Close the stream
await stream.close()
logger.debug("Successfully pushed identify to peer %s", peer_id)
return True
except Exception as e:
logger.error("Error pushing identify to peer %s: %s", peer_id, e)
return False
async def push_identify_to_peers(
@ -179,13 +184,10 @@ async def push_identify_to_peers(
""" """
if peer_ids is None: if peer_ids is None:
# Get all connected peers # Get all connected peers
peer_ids = set(host.get_peerstore().peer_ids()) peer_ids = set(host.get_connected_peers())
# Push to each peer in parallel using a trio.Nursery # Push to each peer in parallel using a trio.Nursery
# TODO: Consider using a bounded nursery to limit concurrency # limiting concurrent connections to 10
# and avoid overwhelming the network. This can be done by using
# trio.open_nursery(max_concurrent=10) or similar.
# For now, we will use an unbounded nursery for simplicity.
async with trio.open_nursery() as nursery: async with trio.open_nursery() as nursery:
for peer_id in peer_ids: for peer_id in peer_ids:
nursery.start_soon(push_identify_to_peer, host, peer_id, observed_multiaddr) nursery.start_soon(push_identify_to_peer, host, peer_id, observed_multiaddr)
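The semaphore pattern above, reduced to a standalone trio sketch (integer peer numbers stand in for real peer IDs):

import trio

CONCURRENCY_LIMIT = 10

async def push_one(n: int, limit: trio.Semaphore) -> None:
    async with limit:  # at most CONCURRENCY_LIMIT bodies run at once
        await trio.sleep(0.1)  # stand-in for the identify/push round-trip
        print(f"pushed to peer {n}")

async def main() -> None:
    limit = trio.Semaphore(CONCURRENCY_LIMIT)
    async with trio.open_nursery() as nursery:
        for n in range(50):
            nursery.start_soon(push_one, n, limit)

trio.run(main)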

View File

@ -0,0 +1,30 @@
"""
Kademlia DHT implementation for py-libp2p.
This module provides a Distributed Hash Table (DHT) implementation
based on the Kademlia protocol.
"""
from .kad_dht import (
KadDHT,
)
from .peer_routing import (
PeerRouting,
)
from .routing_table import (
RoutingTable,
)
from .utils import (
create_key_from_binary,
)
from .value_store import (
ValueStore,
)
__all__ = [
"KadDHT",
"RoutingTable",
"PeerRouting",
"ValueStore",
"create_key_from_binary",
]

14
libp2p/kad_dht/common.py Normal file
View File

@ -0,0 +1,14 @@
"""
Shared constants and protocol parameters for the Kademlia DHT.
"""
from libp2p.custom_types import (
TProtocol,
)
# Constants for the Kademlia algorithm
ALPHA = 3 # Concurrency parameter
PROTOCOL_ID = TProtocol("/ipfs/kad/1.0.0")
QUERY_TIMEOUT = 10
TTL = DEFAULT_TTL = 24 * 60 * 60 # 24 hours in seconds
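ALPHA is the Kademlia fan-out: the lookup and store paths in kad_dht.py below contact peers in parallel batches of ALPHA. A toy illustration of the batching:

ALPHA = 3  # concurrency parameter, as above

peers = [f"peer-{i}" for i in range(8)]
for i in range(0, len(peers), ALPHA):
    batch = peers[i : i + ALPHA]
    # in the DHT, every member of a batch is queried in parallel in a nursery
    print(batch)  # batches of 3, 3, then 2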

616
libp2p/kad_dht/kad_dht.py Normal file
View File

@ -0,0 +1,616 @@
"""
Kademlia DHT implementation for py-libp2p.
This module provides a complete Distributed Hash Table (DHT)
implementation based on the Kademlia algorithm and protocol.
"""
from enum import (
Enum,
)
import logging
import time
from multiaddr import (
Multiaddr,
)
import trio
import varint
from libp2p.abc import (
IHost,
)
from libp2p.network.stream.net_stream import (
INetStream,
)
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from libp2p.tools.async_service import (
Service,
)
from .common import (
ALPHA,
PROTOCOL_ID,
QUERY_TIMEOUT,
)
from .pb.kademlia_pb2 import (
Message,
)
from .peer_routing import (
PeerRouting,
)
from .provider_store import (
ProviderStore,
)
from .routing_table import (
RoutingTable,
)
from .value_store import (
ValueStore,
)
logger = logging.getLogger("kademlia-example.kad_dht")
# logger = logging.getLogger("libp2p.kademlia")
# Default parameters
ROUTING_TABLE_REFRESH_INTERVAL = 60 # 1 min in seconds for testing
class DHTMode(Enum):
"""DHT operation modes."""
CLIENT = "CLIENT"
SERVER = "SERVER"
class KadDHT(Service):
"""
Kademlia DHT implementation for libp2p.
This class provides a DHT implementation that combines routing table management,
peer discovery, content routing, and value storage.
"""
def __init__(self, host: IHost, mode: DHTMode):
"""
Initialize a new Kademlia DHT node.
:param host: The libp2p host.
:param mode: The mode of host (Client or Server) - must be DHTMode enum
"""
super().__init__()
self.host = host
self.local_peer_id = host.get_id()
# Validate that mode is a DHTMode enum
if not isinstance(mode, DHTMode):
raise TypeError(f"mode must be DHTMode enum, got {type(mode)}")
self.mode = mode
# Initialize the routing table
self.routing_table = RoutingTable(self.local_peer_id, self.host)
# Initialize peer routing
self.peer_routing = PeerRouting(host, self.routing_table)
# Initialize value store
self.value_store = ValueStore(host=host, local_peer_id=self.local_peer_id)
# Initialize provider store with host and peer_routing references
self.provider_store = ProviderStore(host=host, peer_routing=self.peer_routing)
# Last time we republished provider records
self._last_provider_republish = time.time()
# Set protocol handlers
host.set_stream_handler(PROTOCOL_ID, self.handle_stream)
async def run(self) -> None:
"""Run the DHT service."""
logger.info(f"Starting Kademlia DHT with peer ID {self.local_peer_id}")
# Main service loop
while self.manager.is_running:
# Periodically refresh the routing table
await self.refresh_routing_table()
# Check if it's time to republish provider records
current_time = time.time()
# await self._republish_provider_records()
self._last_provider_republish = current_time
# Clean up expired values and provider records
expired_values = self.value_store.cleanup_expired()
if expired_values > 0:
logger.debug(f"Cleaned up {expired_values} expired values")
self.provider_store.cleanup_expired()
# Wait before next maintenance cycle
await trio.sleep(ROUTING_TABLE_REFRESH_INTERVAL)
async def switch_mode(self, new_mode: DHTMode) -> DHTMode:
"""
Switch the DHT mode.
:param new_mode: The new mode - must be DHTMode enum
:return: The new mode as DHTMode enum
"""
# Validate that new_mode is a DHTMode enum
if not isinstance(new_mode, DHTMode):
raise TypeError(f"new_mode must be DHTMode enum, got {type(new_mode)}")
if new_mode == DHTMode.CLIENT:
self.routing_table.cleanup_routing_table()
self.mode = new_mode
logger.info(f"Switched to {new_mode.value} mode")
return self.mode
async def handle_stream(self, stream: INetStream) -> None:
"""
Handle an incoming DHT stream using varint length prefixes.
"""
if self.mode == DHTMode.CLIENT:
await stream.close()
return
peer_id = stream.muxed_conn.peer_id
logger.debug(f"Received DHT stream from peer {peer_id}")
await self.add_peer(peer_id)
logger.debug(f"Added peer {peer_id} to routing table")
try:
# Read varint-prefixed length for the message
length_prefix = b""
while True:
byte = await stream.read(1)
if not byte:
logger.warning("Stream closed while reading varint length")
await stream.close()
return
length_prefix += byte
if byte[0] & 0x80 == 0:
break
msg_length = varint.decode_bytes(length_prefix)
# Read the message bytes
msg_bytes = await stream.read(msg_length)
if len(msg_bytes) < msg_length:
logger.warning("Failed to read full message from stream")
await stream.close()
return
try:
# Parse as protobuf
message = Message()
message.ParseFromString(msg_bytes)
logger.debug(
f"Received DHT message from {peer_id}, type: {message.type}"
)
# Handle FIND_NODE message
if message.type == Message.MessageType.FIND_NODE:
# Get target key directly from protobuf
target_key = message.key
# Find closest peers to the target key
closest_peers = self.routing_table.find_local_closest_peers(
target_key, 20
)
logger.debug(f"Found {len(closest_peers)} peers close to target")
# Build response message with protobuf
response = Message()
response.type = Message.MessageType.FIND_NODE
# Add closest peers to response
for peer in closest_peers:
# Skip if the peer is the requester
if peer == peer_id:
continue
# Add peer to closerPeers field
peer_proto = response.closerPeers.add()
peer_proto.id = peer.to_bytes()
peer_proto.connection = Message.ConnectionType.CAN_CONNECT
# Add addresses if available
try:
addrs = self.host.get_peerstore().addrs(peer)
if addrs:
for addr in addrs:
peer_proto.addrs.append(addr.to_bytes())
except Exception:
pass
# Serialize and send response
response_bytes = response.SerializeToString()
await stream.write(varint.encode(len(response_bytes)))
await stream.write(response_bytes)
logger.debug(
f"Sent FIND_NODE response with{len(response.closerPeers)} peers"
)
# Handle ADD_PROVIDER message
elif message.type == Message.MessageType.ADD_PROVIDER:
# Process ADD_PROVIDER
key = message.key
logger.debug(f"Received ADD_PROVIDER for key {key.hex()}")
# Extract provider information
for provider_proto in message.providerPeers:
try:
# Validate that the provider is the sender
provider_id = ID(provider_proto.id)
if provider_id != peer_id:
logger.warning(
f"Provider ID {provider_id} doesn't"
f"match sender {peer_id}, ignoring"
)
continue
# Convert addresses to Multiaddr
addrs = []
for addr_bytes in provider_proto.addrs:
try:
addrs.append(Multiaddr(addr_bytes))
except Exception as e:
logger.warning(f"Failed to parse address: {e}")
# Add to provider store
provider_info = PeerInfo(provider_id, addrs)
self.provider_store.add_provider(key, provider_info)
logger.debug(
f"Added provider {provider_id} for key {key.hex()}"
)
except Exception as e:
logger.warning(f"Failed to process provider info: {e}")
# Send acknowledgement
response = Message()
response.type = Message.MessageType.ADD_PROVIDER
response.key = key
response_bytes = response.SerializeToString()
await stream.write(varint.encode(len(response_bytes)))
await stream.write(response_bytes)
logger.debug("Sent ADD_PROVIDER acknowledgement")
# Handle GET_PROVIDERS message
elif message.type == Message.MessageType.GET_PROVIDERS:
# Process GET_PROVIDERS
key = message.key
logger.debug(f"Received GET_PROVIDERS request for key {key.hex()}")
# Find providers for the key
providers = self.provider_store.get_providers(key)
logger.debug(
f"Found {len(providers)} providers for key {key.hex()}"
)
# Create response
response = Message()
response.type = Message.MessageType.GET_PROVIDERS
response.key = key
# Add provider information to response
for provider_info in providers:
provider_proto = response.providerPeers.add()
provider_proto.id = provider_info.peer_id.to_bytes()
provider_proto.connection = Message.ConnectionType.CAN_CONNECT
# Add addresses if available
for addr in provider_info.addrs:
provider_proto.addrs.append(addr.to_bytes())
# Also include closest peers if we don't have providers
if not providers:
closest_peers = self.routing_table.find_local_closest_peers(
key, 20
)
logger.debug(
f"No providers found, including {len(closest_peers)}"
"closest peers"
)
for peer in closest_peers:
# Skip if peer is the requester
if peer == peer_id:
continue
peer_proto = response.closerPeers.add()
peer_proto.id = peer.to_bytes()
peer_proto.connection = Message.ConnectionType.CAN_CONNECT
# Add addresses if available
try:
addrs = self.host.get_peerstore().addrs(peer)
for addr in addrs:
peer_proto.addrs.append(addr.to_bytes())
except Exception:
pass
# Serialize and send response
response_bytes = response.SerializeToString()
await stream.write(varint.encode(len(response_bytes)))
await stream.write(response_bytes)
logger.debug("Sent GET_PROVIDERS response")
# Handle GET_VALUE message
elif message.type == Message.MessageType.GET_VALUE:
# Process GET_VALUE
key = message.key
logger.debug(f"Received GET_VALUE request for key {key.hex()}")
value = self.value_store.get(key)
if value:
logger.debug(f"Found value for key {key.hex()}")
# Create response using protobuf
response = Message()
response.type = Message.MessageType.GET_VALUE
# Create record
response.key = key
response.record.key = key
response.record.value = value
response.record.timeReceived = str(time.time())
# Serialize and send response
response_bytes = response.SerializeToString()
await stream.write(varint.encode(len(response_bytes)))
await stream.write(response_bytes)
logger.debug("Sent GET_VALUE response")
else:
logger.debug(f"No value found for key {key.hex()}")
# Create response with closest peers when no value is found
response = Message()
response.type = Message.MessageType.GET_VALUE
response.key = key
# Add closest peers to key
closest_peers = self.routing_table.find_local_closest_peers(
key, 20
)
logger.debug(
"No value found,"
f"including {len(closest_peers)} closest peers"
)
for peer in closest_peers:
# Skip if peer is the requester
if peer == peer_id:
continue
peer_proto = response.closerPeers.add()
peer_proto.id = peer.to_bytes()
peer_proto.connection = Message.ConnectionType.CAN_CONNECT
# Add addresses if available
try:
addrs = self.host.get_peerstore().addrs(peer)
for addr in addrs:
peer_proto.addrs.append(addr.to_bytes())
except Exception:
pass
# Serialize and send response
response_bytes = response.SerializeToString()
await stream.write(varint.encode(len(response_bytes)))
await stream.write(response_bytes)
logger.debug("Sent GET_VALUE response with closest peers")
# Handle PUT_VALUE message
elif message.type == Message.MessageType.PUT_VALUE and message.HasField(
"record"
):
# Process PUT_VALUE
key = message.record.key
value = message.record.value
success = False
try:
if not (key and value):
raise ValueError(
"Missing key or value in PUT_VALUE message"
)
self.value_store.put(key, value)
logger.debug(f"Stored value {value.hex()} for key {key.hex()}")
success = True
except Exception as e:
logger.warning(
f"Failed to store value {value.hex()} for key "
f"{key.hex()}: {e}"
)
finally:
# Send acknowledgement
response = Message()
response.type = Message.MessageType.PUT_VALUE
if success:
response.key = key
response_bytes = response.SerializeToString()
await stream.write(varint.encode(len(response_bytes)))
await stream.write(response_bytes)
logger.debug("Sent PUT_VALUE acknowledgement")
except Exception as proto_err:
logger.warning(f"Failed to parse protobuf message: {proto_err}")
await stream.close()
except Exception as e:
logger.error(f"Error handling DHT stream: {e}")
await stream.close()
async def refresh_routing_table(self) -> None:
"""Refresh the routing table."""
logger.debug("Refreshing routing table")
await self.peer_routing.refresh_routing_table()
# Peer routing methods
async def find_peer(self, peer_id: ID) -> PeerInfo | None:
"""
Find a peer with the given ID.
"""
logger.debug(f"Finding peer: {peer_id}")
return await self.peer_routing.find_peer(peer_id)
# Value storage and retrieval methods
async def put_value(self, key: bytes, value: bytes) -> None:
"""
Store a value in the DHT.
"""
logger.debug(f"Storing value for key {key.hex()}")
# 1. Store locally first
self.value_store.put(key, value)
try:
decoded_value = value.decode("utf-8")
except UnicodeDecodeError:
decoded_value = value.hex()
logger.debug(
f"Stored value locally for key {key.hex()} with value {decoded_value}"
)
# 2. Get closest peers, excluding self
closest_peers = [
peer
for peer in self.routing_table.find_local_closest_peers(key)
if peer != self.local_peer_id
]
logger.debug(f"Found {len(closest_peers)} peers to store value at")
# 3. Store at remote peers in batches of ALPHA, in parallel
stored_count = 0
for i in range(0, len(closest_peers), ALPHA):
batch = closest_peers[i : i + ALPHA]
batch_results = [False] * len(batch)
async def store_one(idx: int, peer: ID) -> None:
try:
with trio.move_on_after(QUERY_TIMEOUT):
success = await self.value_store._store_at_peer(
peer, key, value
)
batch_results[idx] = success
if success:
logger.debug(f"Stored value at peer {peer}")
else:
logger.debug(f"Failed to store value at peer {peer}")
except Exception as e:
logger.debug(f"Error storing value at peer {peer}: {e}")
async with trio.open_nursery() as nursery:
for idx, peer in enumerate(batch):
nursery.start_soon(store_one, idx, peer)
stored_count += sum(batch_results)
logger.info(f"Successfully stored value at {stored_count} peers")
async def get_value(self, key: bytes) -> bytes | None:
logger.debug(f"Getting value for key: {key.hex()}")
# 1. Check local store first
value = self.value_store.get(key)
if value:
logger.debug("Found value locally")
return value
# 2. Get closest peers, excluding self
closest_peers = [
peer
for peer in self.routing_table.find_local_closest_peers(key)
if peer != self.local_peer_id
]
logger.debug(f"Searching {len(closest_peers)} peers for value")
# 3. Query ALPHA peers at a time in parallel
for i in range(0, len(closest_peers), ALPHA):
batch = closest_peers[i : i + ALPHA]
found_value = None
async def query_one(peer: ID) -> None:
nonlocal found_value
try:
with trio.move_on_after(QUERY_TIMEOUT):
value = await self.value_store._get_from_peer(peer, key)
if value is not None and found_value is None:
found_value = value
logger.debug(f"Found value at peer {peer}")
except Exception as e:
logger.debug(f"Error querying peer {peer}: {e}")
async with trio.open_nursery() as nursery:
for peer in batch:
nursery.start_soon(query_one, peer)
if found_value is not None:
self.value_store.put(key, found_value)
logger.info("Successfully retrieved value from network")
return found_value
# 4. Not found
logger.warning(f"Value not found for key {key.hex()}")
return None
# Utility methods
async def add_peer(self, peer_id: ID) -> bool:
"""
Add a peer to the routing table.
:param peer_id: The peer ID to add.
Returns
-------
bool
True if peer was added or updated, False otherwise.
"""
return await self.routing_table.add_peer(peer_id)
async def provide(self, key: bytes) -> bool:
"""
Reference to provider_store.provide for convenience.
"""
return await self.provider_store.provide(key)
async def find_providers(self, key: bytes, count: int = 20) -> list[PeerInfo]:
"""
Reference to provider_store.find_providers for convenience.
"""
return await self.provider_store.find_providers(key, count)
def get_routing_table_size(self) -> int:
"""
Get the number of peers in the routing table.
Returns
-------
int
Number of peers.
"""
return self.routing_table.size()
def get_value_store_size(self) -> int:
"""
Get the number of items in the value store.
Returns
-------
int
Number of items.
"""
return self.value_store.size()
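
Taken together, the methods above form the DHT's public surface. A minimal usage sketch follows; it assumes a started, bootstrapped DHT instance (here just called `dht`) and borrows `create_key_from_binary` from the utils module shown further below — the driver itself is illustrative, not part of this patch.

async def demo(dht) -> None:
    # Derive a SHA-256 multihash key for the content, as the DHT expects.
    key = create_key_from_binary(b"example-content")
    # Store locally, then at the closest peers in parallel batches of ALPHA.
    await dht.put_value(key, b"example-value")
    # Reads check the local store first, then fan out to the network.
    assert await dht.get_value(key) == b"example-value"
    # Advertise this node as a provider, then look providers back up.
    if await dht.provide(key):
        providers = await dht.find_providers(key, count=20)
        print(f"{len(providers)} providers, {dht.get_routing_table_size()} peers")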

View File

@ -0,0 +1,38 @@
syntax = "proto3";
message Record {
bytes key = 1;
bytes value = 2;
string timeReceived = 5;
};
message Message {
enum MessageType {
PUT_VALUE = 0;
GET_VALUE = 1;
ADD_PROVIDER = 2;
GET_PROVIDERS = 3;
FIND_NODE = 4;
PING = 5;
}
enum ConnectionType {
NOT_CONNECTED = 0;
CONNECTED = 1;
CAN_CONNECT = 2;
CANNOT_CONNECT = 3;
}
message Peer {
bytes id = 1;
repeated bytes addrs = 2;
ConnectionType connection = 3;
}
MessageType type = 1;
int32 clusterLevelRaw = 10;
bytes key = 2;
Record record = 3;
repeated Peer closerPeers = 8;
repeated Peer providerPeers = 9;
}
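
For a concrete view of the wire format this schema implies, the sketch below builds a FIND_NODE request with the generated Python bindings and frames it with the varint length prefix used by the handlers in this patch (standalone helper, illustrative only):

import varint

from libp2p.kad_dht.pb.kademlia_pb2 import Message

def frame_find_node(target_key: bytes) -> bytes:
    # A FIND_NODE request carries only the message type and the raw key.
    msg = Message()
    msg.type = Message.MessageType.FIND_NODE
    msg.key = target_key
    payload = msg.SerializeToString()
    # Varint length prefix followed by the serialized protobuf bytes.
    return varint.encode(len(payload)) + payload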

View File

@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/kad_dht/pb/kademlia.proto
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/kad_dht/pb/kademlia.proto\":\n\x06Record\x12\x0b\n\x03key\x18\x01 \x01(\x0c\x12\r\n\x05value\x18\x02 \x01(\x0c\x12\x14\n\x0ctimeReceived\x18\x05 \x01(\t\"\xca\x03\n\x07Message\x12\"\n\x04type\x18\x01 \x01(\x0e\x32\x14.Message.MessageType\x12\x17\n\x0f\x63lusterLevelRaw\x18\n \x01(\x05\x12\x0b\n\x03key\x18\x02 \x01(\x0c\x12\x17\n\x06record\x18\x03 \x01(\x0b\x32\x07.Record\x12\"\n\x0b\x63loserPeers\x18\x08 \x03(\x0b\x32\r.Message.Peer\x12$\n\rproviderPeers\x18\t \x03(\x0b\x32\r.Message.Peer\x1aN\n\x04Peer\x12\n\n\x02id\x18\x01 \x01(\x0c\x12\r\n\x05\x61\x64\x64rs\x18\x02 \x03(\x0c\x12+\n\nconnection\x18\x03 \x01(\x0e\x32\x17.Message.ConnectionType\"i\n\x0bMessageType\x12\r\n\tPUT_VALUE\x10\x00\x12\r\n\tGET_VALUE\x10\x01\x12\x10\n\x0c\x41\x44\x44_PROVIDER\x10\x02\x12\x11\n\rGET_PROVIDERS\x10\x03\x12\r\n\tFIND_NODE\x10\x04\x12\x08\n\x04PING\x10\x05\"W\n\x0e\x43onnectionType\x12\x11\n\rNOT_CONNECTED\x10\x00\x12\r\n\tCONNECTED\x10\x01\x12\x0f\n\x0b\x43\x41N_CONNECT\x10\x02\x12\x12\n\x0e\x43\x41NNOT_CONNECT\x10\x03\x62\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.kad_dht.pb.kademlia_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_globals['_RECORD']._serialized_start=36
_globals['_RECORD']._serialized_end=94
_globals['_MESSAGE']._serialized_start=97
_globals['_MESSAGE']._serialized_end=555
_globals['_MESSAGE_PEER']._serialized_start=281
_globals['_MESSAGE_PEER']._serialized_end=359
_globals['_MESSAGE_MESSAGETYPE']._serialized_start=361
_globals['_MESSAGE_MESSAGETYPE']._serialized_end=466
_globals['_MESSAGE_CONNECTIONTYPE']._serialized_start=468
_globals['_MESSAGE_CONNECTIONTYPE']._serialized_end=555
# @@protoc_insertion_point(module_scope)

View File

@ -0,0 +1,133 @@
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file
"""
import builtins
import collections.abc
import google.protobuf.descriptor
import google.protobuf.internal.containers
import google.protobuf.internal.enum_type_wrapper
import google.protobuf.message
import sys
import typing
if sys.version_info >= (3, 10):
import typing as typing_extensions
else:
import typing_extensions
DESCRIPTOR: google.protobuf.descriptor.FileDescriptor
@typing.final
class Record(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
KEY_FIELD_NUMBER: builtins.int
VALUE_FIELD_NUMBER: builtins.int
TIMERECEIVED_FIELD_NUMBER: builtins.int
key: builtins.bytes
value: builtins.bytes
timeReceived: builtins.str
def __init__(
self,
*,
key: builtins.bytes = ...,
value: builtins.bytes = ...,
timeReceived: builtins.str = ...,
) -> None: ...
def ClearField(self, field_name: typing.Literal["key", b"key", "timeReceived", b"timeReceived", "value", b"value"]) -> None: ...
global___Record = Record
@typing.final
class Message(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class _MessageType:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _MessageTypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[Message._MessageType.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
PUT_VALUE: Message._MessageType.ValueType # 0
GET_VALUE: Message._MessageType.ValueType # 1
ADD_PROVIDER: Message._MessageType.ValueType # 2
GET_PROVIDERS: Message._MessageType.ValueType # 3
FIND_NODE: Message._MessageType.ValueType # 4
PING: Message._MessageType.ValueType # 5
class MessageType(_MessageType, metaclass=_MessageTypeEnumTypeWrapper): ...
PUT_VALUE: Message.MessageType.ValueType # 0
GET_VALUE: Message.MessageType.ValueType # 1
ADD_PROVIDER: Message.MessageType.ValueType # 2
GET_PROVIDERS: Message.MessageType.ValueType # 3
FIND_NODE: Message.MessageType.ValueType # 4
PING: Message.MessageType.ValueType # 5
class _ConnectionType:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _ConnectionTypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[Message._ConnectionType.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
NOT_CONNECTED: Message._ConnectionType.ValueType # 0
CONNECTED: Message._ConnectionType.ValueType # 1
CAN_CONNECT: Message._ConnectionType.ValueType # 2
CANNOT_CONNECT: Message._ConnectionType.ValueType # 3
class ConnectionType(_ConnectionType, metaclass=_ConnectionTypeEnumTypeWrapper): ...
NOT_CONNECTED: Message.ConnectionType.ValueType # 0
CONNECTED: Message.ConnectionType.ValueType # 1
CAN_CONNECT: Message.ConnectionType.ValueType # 2
CANNOT_CONNECT: Message.ConnectionType.ValueType # 3
@typing.final
class Peer(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
ID_FIELD_NUMBER: builtins.int
ADDRS_FIELD_NUMBER: builtins.int
CONNECTION_FIELD_NUMBER: builtins.int
id: builtins.bytes
connection: global___Message.ConnectionType.ValueType
@property
def addrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...
def __init__(
self,
*,
id: builtins.bytes = ...,
addrs: collections.abc.Iterable[builtins.bytes] | None = ...,
connection: global___Message.ConnectionType.ValueType = ...,
) -> None: ...
def ClearField(self, field_name: typing.Literal["addrs", b"addrs", "connection", b"connection", "id", b"id"]) -> None: ...
TYPE_FIELD_NUMBER: builtins.int
CLUSTERLEVELRAW_FIELD_NUMBER: builtins.int
KEY_FIELD_NUMBER: builtins.int
RECORD_FIELD_NUMBER: builtins.int
CLOSERPEERS_FIELD_NUMBER: builtins.int
PROVIDERPEERS_FIELD_NUMBER: builtins.int
type: global___Message.MessageType.ValueType
clusterLevelRaw: builtins.int
key: builtins.bytes
@property
def record(self) -> global___Record: ...
@property
def closerPeers(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Message.Peer]: ...
@property
def providerPeers(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___Message.Peer]: ...
def __init__(
self,
*,
type: global___Message.MessageType.ValueType = ...,
clusterLevelRaw: builtins.int = ...,
key: builtins.bytes = ...,
record: global___Record | None = ...,
closerPeers: collections.abc.Iterable[global___Message.Peer] | None = ...,
providerPeers: collections.abc.Iterable[global___Message.Peer] | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["record", b"record"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["closerPeers", b"closerPeers", "clusterLevelRaw", b"clusterLevelRaw", "key", b"key", "providerPeers", b"providerPeers", "record", b"record", "type", b"type"]) -> None: ...
global___Message = Message

View File

@ -0,0 +1,415 @@
"""
Peer routing implementation for Kademlia DHT.
This module implements the peer routing interface using Kademlia's algorithm
to efficiently locate peers in a distributed network.
"""
import logging
import trio
import varint
from libp2p.abc import (
IHost,
INetStream,
IPeerRouting,
)
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from .common import (
ALPHA,
PROTOCOL_ID,
)
from .pb.kademlia_pb2 import (
Message,
)
from .routing_table import (
RoutingTable,
)
from .utils import (
sort_peer_ids_by_distance,
)
# logger = logging.getLogger("libp2p.kademlia.peer_routing")
logger = logging.getLogger("kademlia-example.peer_routing")
MAX_PEER_LOOKUP_ROUNDS = 20 # Maximum number of rounds in peer lookup
class PeerRouting(IPeerRouting):
"""
Implementation of peer routing using the Kademlia algorithm.
This class provides methods to find peers in the DHT network
and helps maintain the routing table.
"""
def __init__(self, host: IHost, routing_table: RoutingTable):
"""
Initialize the peer routing service.
:param host: The libp2p host
:param routing_table: The Kademlia routing table
"""
self.host = host
self.routing_table = routing_table
async def find_peer(self, peer_id: ID) -> PeerInfo | None:
"""
Find a peer with the given ID.
:param peer_id: The ID of the peer to find
Returns
-------
Optional[PeerInfo]
The peer information if found, None otherwise
"""
# Check if this is actually our peer ID
if peer_id == self.host.get_id():
try:
# Return our own peer info
return PeerInfo(peer_id, self.host.get_addrs())
except Exception:
logger.exception("Error getting our own peer info")
return None
# First check if the peer is in our routing table
peer_info = self.routing_table.get_peer_info(peer_id)
if peer_info:
logger.debug(f"Found peer {peer_id} in routing table")
return peer_info
# Then check if the peer is in our peerstore
try:
addrs = self.host.get_peerstore().addrs(peer_id)
if addrs:
logger.debug(f"Found peer {peer_id} in peerstore")
return PeerInfo(peer_id, addrs)
except Exception:
pass
# If not found locally, search the network
try:
closest_peers = await self.find_closest_peers_network(peer_id.to_bytes())
logger.info(f"Closest peers found: {closest_peers}")
# Check if we found the peer we're looking for
for found_peer in closest_peers:
if found_peer == peer_id:
try:
addrs = self.host.get_peerstore().addrs(found_peer)
if addrs:
return PeerInfo(found_peer, addrs)
except Exception:
pass
except Exception as e:
logger.error(f"Error searching for peer {peer_id}: {e}")
# Not found
logger.info(f"Peer {peer_id} not found")
return None
async def _query_single_peer_for_closest(
self, peer: ID, target_key: bytes, new_peers: list[ID]
) -> None:
"""
Query a single peer for closest peers and append results to the shared list.
:param peer: The peer to query
:param target_key: The target key to find closest peers for
:param new_peers: Shared list to append results to
"""
try:
result = await self._query_peer_for_closest(peer, target_key)
# Deduplicate while appending to the shared results list
unique_count = 0
for peer_id in result:
if peer_id not in new_peers:
new_peers.append(peer_id)
unique_count += 1
logger.debug(
"Queried peer %s for closest peers, got %d results (%d unique)",
peer,
len(result),
unique_count,
)
except Exception as e:
logger.debug(f"Query to peer {peer} failed: {e}")
async def find_closest_peers_network(
self, target_key: bytes, count: int = 20
) -> list[ID]:
"""
Find the closest peers to a target key in the entire network.
Performs an iterative lookup by querying peers for their closest peers.
Returns
-------
list[ID]
Closest peer IDs
"""
# Start with closest peers from our routing table
closest_peers = self.routing_table.find_local_closest_peers(target_key, count)
logger.debug("Local closest peers: %d found", len(closest_peers))
queried_peers: set[ID] = set()
rounds = 0
# Return early if we have no peers to start with
if not closest_peers:
logger.warning("No local peers available for network lookup")
return []
# Iterative lookup until convergence
while rounds < MAX_PEER_LOOKUP_ROUNDS:
rounds += 1
logger.debug(f"Lookup round {rounds}/{MAX_PEER_LOOKUP_ROUNDS}")
# Find peers we haven't queried yet
peers_to_query = [p for p in closest_peers if p not in queried_peers]
if not peers_to_query:
logger.debug("No more unqueried peers available, ending lookup")
break # No more peers to query
# Query these peers for their closest peers to target
peers_batch = peers_to_query[:ALPHA] # Limit to ALPHA peers at a time
# Mark these peers as queried before we actually query them
for peer in peers_batch:
queried_peers.add(peer)
# Run queries in parallel for this batch using trio nursery
new_peers: list[ID] = [] # Shared array to collect all results
async with trio.open_nursery() as nursery:
for peer in peers_batch:
nursery.start_soon(
self._query_single_peer_for_closest, peer, target_key, new_peers
)
# If we got no new peers, we're done
if not new_peers:
logger.debug("No new peers discovered in this round, ending lookup")
break
# Update our list of closest peers
all_candidates = closest_peers + new_peers
old_closest_peers = closest_peers[:]
closest_peers = sort_peer_ids_by_distance(target_key, all_candidates)[
:count
]
logger.debug(f"Updated closest peers count: {len(closest_peers)}")
# Check if we made any progress (found closer peers)
if closest_peers == old_closest_peers:
logger.debug("No improvement in closest peers, ending lookup")
break
logger.info(
f"Network lookup completed after {rounds} rounds, "
f"found {len(closest_peers)} peers"
)
return closest_peers
async def _query_peer_for_closest(self, peer: ID, target_key: bytes) -> list[ID]:
"""
Query a peer for their closest peers
to the target key using varint length prefix
"""
stream = None
results = []
try:
# Add the peer to our routing table regardless of query outcome
try:
addrs = self.host.get_peerstore().addrs(peer)
if addrs:
peer_info = PeerInfo(peer, addrs)
await self.routing_table.add_peer(peer_info)
except Exception as e:
logger.debug(f"Failed to add peer {peer} to routing table: {e}")
# Open a stream to the peer using the Kademlia protocol
logger.debug(f"Opening stream to {peer} for closest peers query")
try:
stream = await self.host.new_stream(peer, [PROTOCOL_ID])
logger.debug(f"Stream opened to {peer}")
except Exception as e:
logger.warning(f"Failed to open stream to {peer}: {e}")
return []
# Create and send FIND_NODE request using protobuf
find_node_msg = Message()
find_node_msg.type = Message.MessageType.FIND_NODE
find_node_msg.key = target_key # Set target key directly as bytes
# Serialize and send the protobuf message with varint length prefix
proto_bytes = find_node_msg.SerializeToString()
logger.debug(
f"Sending FIND_NODE: {proto_bytes.hex()} (len={len(proto_bytes)})"
)
await stream.write(varint.encode(len(proto_bytes)))
await stream.write(proto_bytes)
# Read varint-prefixed response length
length_bytes = b""
while True:
b = await stream.read(1)
if not b:
logger.warning(
"Error reading varint length from stream: connection closed"
)
return []
length_bytes += b
if b[0] & 0x80 == 0:
break
response_length = varint.decode_bytes(length_bytes)
# Read response data
response_bytes = b""
remaining = response_length
while remaining > 0:
chunk = await stream.read(remaining)
if not chunk:
logger.debug(f"Connection closed by peer {peer} while reading data")
return []
response_bytes += chunk
remaining -= len(chunk)
# Parse the protobuf response
response_msg = Message()
response_msg.ParseFromString(response_bytes)
logger.debug(
"Received response from %s with %d peers",
peer,
len(response_msg.closerPeers),
)
# Process closest peers from response
if response_msg.type == Message.MessageType.FIND_NODE:
for peer_data in response_msg.closerPeers:
new_peer_id = ID(peer_data.id)
if new_peer_id not in results:
results.append(new_peer_id)
if peer_data.addrs:
from multiaddr import (
Multiaddr,
)
addrs = [Multiaddr(addr) for addr in peer_data.addrs]
self.host.get_peerstore().add_addrs(new_peer_id, addrs, 3600)
except Exception as e:
logger.debug(f"Error querying peer {peer} for closest: {e}")
finally:
if stream:
await stream.close()
return results
async def _handle_kad_stream(self, stream: INetStream) -> None:
"""
Handle incoming Kademlia protocol streams.
:param stream: The incoming stream
Returns
-------
None
"""
try:
# Read message length
length_bytes = await stream.read(4)
if not length_bytes:
return
message_length = int.from_bytes(length_bytes, byteorder="big")
# Read message
message_bytes = await stream.read(message_length)
if not message_bytes:
return
# Parse protobuf message
kad_message = Message()
try:
kad_message.ParseFromString(message_bytes)
if kad_message.type == Message.MessageType.FIND_NODE:
# Get target key directly from protobuf message
target_key = kad_message.key
# Find closest peers to target
closest_peers = self.routing_table.find_local_closest_peers(
target_key, 20
)
# Create protobuf response
response = Message()
response.type = Message.MessageType.FIND_NODE
# Add peer information to response
for peer_id in closest_peers:
peer_proto = response.closerPeers.add()
peer_proto.id = peer_id.to_bytes()
peer_proto.connection = Message.ConnectionType.CAN_CONNECT
# Add addresses if available
try:
addrs = self.host.get_peerstore().addrs(peer_id)
if addrs:
for addr in addrs:
peer_proto.addrs.append(addr.to_bytes())
except Exception:
pass
# Send response
response_bytes = response.SerializeToString()
await stream.write(len(response_bytes).to_bytes(4, byteorder="big"))
await stream.write(response_bytes)
except Exception as parse_err:
logger.error(f"Failed to parse protocol buffer message: {parse_err}")
except Exception as e:
logger.debug(f"Error handling Kademlia stream: {e}")
finally:
await stream.close()
async def refresh_routing_table(self) -> None:
"""
Refresh the routing table by performing lookups for random keys.
Returns
-------
None
"""
logger.info("Refreshing routing table")
# Perform a lookup for ourselves to populate the routing table
local_id = self.host.get_id()
closest_peers = await self.find_closest_peers_network(local_id.to_bytes())
# Add discovered peers to routing table
for peer_id in closest_peers:
try:
addrs = self.host.get_peerstore().addrs(peer_id)
if addrs:
peer_info = PeerInfo(peer_id, addrs)
await self.routing_table.add_peer(peer_info)
except Exception as e:
logger.debug(f"Failed to add discovered peer {peer_id}: {e}")

View File

@ -0,0 +1,577 @@
"""
Provider record storage for Kademlia DHT.
This module implements the storage for content provider records in the Kademlia DHT.
"""
import logging
import time
from typing import (
Any,
)
from multiaddr import (
Multiaddr,
)
import trio
import varint
from libp2p.abc import (
IHost,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from .common import (
ALPHA,
PROTOCOL_ID,
QUERY_TIMEOUT,
)
from .pb.kademlia_pb2 import (
Message,
)
# logger = logging.getLogger("libp2p.kademlia.provider_store")
logger = logging.getLogger("kademlia-example.provider_store")
# Constants for provider records (based on IPFS standards)
PROVIDER_RECORD_REPUBLISH_INTERVAL = 22 * 60 * 60 # 22 hours in seconds
PROVIDER_RECORD_EXPIRATION_INTERVAL = 48 * 60 * 60 # 48 hours in seconds
PROVIDER_ADDRESS_TTL = 30 * 60 # 30 minutes in seconds
class ProviderRecord:
"""
A record for a content provider in the DHT.
Contains the peer information and timestamp.
"""
def __init__(
self,
provider_info: PeerInfo,
timestamp: float | None = None,
) -> None:
"""
Initialize a new provider record.
:param provider_info: The provider's peer information
:param timestamp: Time this record was created/updated
(defaults to current time)
"""
self.provider_info = provider_info
self.timestamp = timestamp or time.time()
def is_expired(self) -> bool:
"""
Check if this provider record has expired.
Returns
-------
bool
True if the record has expired
"""
current_time = time.time()
return (current_time - self.timestamp) >= PROVIDER_RECORD_EXPIRATION_INTERVAL
def should_republish(self) -> bool:
"""
Check if this provider record should be republished.
Returns
-------
bool
True if the record should be republished
"""
current_time = time.time()
return (current_time - self.timestamp) >= PROVIDER_RECORD_REPUBLISH_INTERVAL
@property
def peer_id(self) -> ID:
"""Get the provider's peer ID."""
return self.provider_info.peer_id
@property
def addresses(self) -> list[Multiaddr]:
"""Get the provider's addresses."""
return self.provider_info.addrs
class ProviderStore:
"""
Store for content provider records in the Kademlia DHT.
Maps content keys to provider records, with support for expiration.
"""
def __init__(self, host: IHost, peer_routing: Any = None) -> None:
"""
Initialize a new provider store.
:param host: The libp2p host instance (optional)
:param peer_routing: The peer routing instance (optional)
"""
# Maps content keys to a dict of provider records (peer_id -> record)
self.providers: dict[bytes, dict[str, ProviderRecord]] = {}
self.host = host
self.peer_routing = peer_routing
self.providing_keys: set[bytes] = set()
self.local_peer_id = host.get_id()
async def _republish_provider_records(self) -> None:
"""Republish all provider records for content this node is providing."""
# First, republish keys we're actively providing
for key in self.providing_keys:
logger.debug(f"Republishing provider record for key {key.hex()}")
await self.provide(key)
# Also check for any records that should be republished
for key, providers in self.providers.items():
for peer_id_str, record in providers.items():
# Only republish records for our own peer
if self.local_peer_id and str(self.local_peer_id) == peer_id_str:
if record.should_republish():
logger.debug(
f"Republishing old provider record for key {key.hex()}"
)
await self.provide(key)
async def provide(self, key: bytes) -> bool:
"""
Advertise that this node can provide a piece of content.
Finds the k closest peers to the key and sends them ADD_PROVIDER messages.
:param key: The content key (multihash) to advertise
Returns
-------
bool
True if the advertisement was successful
"""
if not self.host or not self.peer_routing:
logger.error("Host or peer_routing not initialized, cannot provide content")
return False
# Add to local provider store
local_addrs = []
for addr in self.host.get_addrs():
local_addrs.append(addr)
local_peer_info = PeerInfo(self.host.get_id(), local_addrs)
self.add_provider(key, local_peer_info)
# Track that we're providing this key
self.providing_keys.add(key)
# Find the k closest peers to the key
closest_peers = await self.peer_routing.find_closest_peers_network(key)
logger.debug(
"Found %d peers close to key %s for provider advertisement",
len(closest_peers),
key.hex(),
)
# Send ADD_PROVIDER messages to these peers in batches of ALPHA, in parallel.
success_count = 0
for i in range(0, len(closest_peers), ALPHA):
batch = closest_peers[i : i + ALPHA]
results: list[bool] = [False] * len(batch)
async def send_one(
idx: int, peer_id: ID, results: list[bool] = results
) -> None:
if peer_id == self.local_peer_id:
return
try:
with trio.move_on_after(QUERY_TIMEOUT):
success = await self._send_add_provider(peer_id, key)
results[idx] = success
if not success:
logger.warning(f"Failed to send ADD_PROVIDER to {peer_id}")
except Exception as e:
logger.warning(f"Error sending ADD_PROVIDER to {peer_id}: {e}")
async with trio.open_nursery() as nursery:
for idx, peer_id in enumerate(batch):
nursery.start_soon(send_one, idx, peer_id, results)
success_count += sum(results)
logger.info(f"Successfully advertised to {success_count} peers")
return success_count > 0
async def _send_add_provider(self, peer_id: ID, key: bytes) -> bool:
"""
Send ADD_PROVIDER message to a specific peer.
:param peer_id: The peer to send the message to
:param key: The content key being provided
Returns
-------
bool
True if the message was successfully sent and acknowledged
"""
result = False
stream = None
try:
# Open a stream to the peer
stream = await self.host.new_stream(peer_id, [TProtocol(PROTOCOL_ID)])
# Get our addresses to include in the message
addrs = []
for addr in self.host.get_addrs():
addrs.append(addr.to_bytes())
# Create the ADD_PROVIDER message
message = Message()
message.type = Message.MessageType.ADD_PROVIDER
message.key = key
# Add our provider info
provider = message.providerPeers.add()
provider.id = self.local_peer_id.to_bytes()
provider.addrs.extend(addrs)
# Serialize and send the message
proto_bytes = message.SerializeToString()
await stream.write(varint.encode(len(proto_bytes)))
await stream.write(proto_bytes)
logger.debug(f"Sent ADD_PROVIDER to {peer_id} for key {key.hex()}")
# Read response length prefix
length_bytes = b""
while True:
logger.debug("Reading response length prefix in add provider")
b = await stream.read(1)
if not b:
return False
length_bytes += b
if b[0] & 0x80 == 0:
break
response_length = varint.decode_bytes(length_bytes)
# Read response data
response_bytes = b""
remaining = response_length
while remaining > 0:
chunk = await stream.read(remaining)
if not chunk:
return False
response_bytes += chunk
remaining -= len(chunk)
# Parse response
response = Message()
response.ParseFromString(response_bytes)
# Check response type
if response.type == Message.MessageType.ADD_PROVIDER:
result = True
except Exception as e:
logger.warning(f"Error sending ADD_PROVIDER to {peer_id}: {e}")
finally:
if stream:
await stream.close()
return result
async def find_providers(self, key: bytes, count: int = 20) -> list[PeerInfo]:
"""
Find content providers for a given key.
:param key: The content key to look for
:param count: Maximum number of providers to return
Returns
-------
List[PeerInfo]
List of content providers
"""
if not self.host or not self.peer_routing:
logger.error("Host or peer_routing not initialized, cannot find providers")
return []
# Check local provider store first
local_providers = self.get_providers(key)
if local_providers:
logger.debug(
f"Found {len(local_providers)} providers locally for {key.hex()}"
)
return local_providers[:count]
logger.debug("local providers are %s", local_providers)
# Find the closest peers to the key
closest_peers = await self.peer_routing.find_closest_peers_network(key)
logger.debug(
f"Searching {len(closest_peers)} peers for providers of {key.hex()}"
)
# Query these peers for providers in batches of ALPHA, in parallel, with timeout
all_providers = []
for i in range(0, len(closest_peers), ALPHA):
batch = closest_peers[i : i + ALPHA]
batch_results: list[list[PeerInfo]] = [[] for _ in batch]
async def get_one(
idx: int,
peer_id: ID,
batch_results: list[list[PeerInfo]] = batch_results,
) -> None:
if peer_id == self.local_peer_id:
return
try:
with trio.move_on_after(QUERY_TIMEOUT):
providers = await self._get_providers_from_peer(peer_id, key)
if providers:
for provider in providers:
self.add_provider(key, provider)
batch_results[idx] = providers
else:
logger.debug(f"No providers found at peer {peer_id}")
except Exception as e:
logger.warning(f"Failed to get providers from {peer_id}: {e}")
async with trio.open_nursery() as nursery:
for idx, peer_id in enumerate(batch):
nursery.start_soon(get_one, idx, peer_id, batch_results)
for providers in batch_results:
all_providers.extend(providers)
if len(all_providers) >= count:
return all_providers[:count]
return all_providers[:count]
async def _get_providers_from_peer(self, peer_id: ID, key: bytes) -> list[PeerInfo]:
"""
Get content providers from a specific peer.
:param peer_id: The peer to query
:param key: The content key to look for
Returns
-------
List[PeerInfo]
List of provider information
"""
providers: list[PeerInfo] = []
try:
# Open a stream to the peer
stream = await self.host.new_stream(peer_id, [TProtocol(PROTOCOL_ID)])
try:
# Create the GET_PROVIDERS message
message = Message()
message.type = Message.MessageType.GET_PROVIDERS
message.key = key
# Serialize and send the message
proto_bytes = message.SerializeToString()
await stream.write(varint.encode(len(proto_bytes)))
await stream.write(proto_bytes)
# Read response length prefix
length_bytes = b""
while True:
b = await stream.read(1)
if not b:
return []
length_bytes += b
if b[0] & 0x80 == 0:
break
response_length = varint.decode_bytes(length_bytes)
# Read response data
response_bytes = b""
remaining = response_length
while remaining > 0:
chunk = await stream.read(remaining)
if not chunk:
return []
response_bytes += chunk
remaining -= len(chunk)
# Parse response
response = Message()
response.ParseFromString(response_bytes)
# Check response type
if response.type != Message.MessageType.GET_PROVIDERS:
return []
# Extract provider information
providers = []
for provider_proto in response.providerPeers:
try:
# Create peer ID from bytes
provider_id = ID(provider_proto.id)
# Convert addresses to Multiaddr
addrs = []
for addr_bytes in provider_proto.addrs:
try:
addrs.append(Multiaddr(addr_bytes))
except Exception:
pass # Skip invalid addresses
# Create PeerInfo and add to result
providers.append(PeerInfo(provider_id, addrs))
except Exception as e:
logger.warning(f"Failed to parse provider info: {e}")
finally:
await stream.close()
return providers
except Exception as e:
logger.warning(f"Error getting providers from {peer_id}: {e}")
return []
def add_provider(self, key: bytes, provider: PeerInfo) -> None:
"""
Add a provider for a given content key.
:param key: The content key
:param provider: The provider's peer information
Returns
-------
None
"""
# Initialize providers for this key if needed
if key not in self.providers:
self.providers[key] = {}
# Add or update the provider record
peer_id_str = str(provider.peer_id) # Use string representation as dict key
self.providers[key][peer_id_str] = ProviderRecord(
provider_info=provider, timestamp=time.time()
)
logger.debug(f"Added provider {provider.peer_id} for key {key.hex()}")
def get_providers(self, key: bytes) -> list[PeerInfo]:
"""
Get all providers for a given content key.
:param key: The content key
Returns
-------
List[PeerInfo]
List of providers for the key
"""
if key not in self.providers:
return []
# Collect valid provider records (not expired)
result = []
current_time = time.time()
expired_peers = []
for peer_id_str, record in self.providers[key].items():
# Check if the record has expired
if current_time - record.timestamp > PROVIDER_RECORD_EXPIRATION_INTERVAL:
expired_peers.append(peer_id_str)
continue
# Use addresses only if they haven't expired
addresses = []
if current_time - record.timestamp <= PROVIDER_ADDRESS_TTL:
addresses = record.addresses
# Create PeerInfo and add to results
result.append(PeerInfo(record.peer_id, addresses))
# Clean up expired records
for peer_id in expired_peers:
del self.providers[key][peer_id]
# Remove the key if no providers left
if not self.providers[key]:
del self.providers[key]
return result
def cleanup_expired(self) -> None:
"""Remove expired provider records."""
current_time = time.time()
expired_keys = []
for key, providers in self.providers.items():
expired_providers = []
for peer_id_str, record in providers.items():
if (
current_time - record.timestamp
> PROVIDER_RECORD_EXPIRATION_INTERVAL
):
expired_providers.append(peer_id_str)
logger.debug(
f"Removing expired provider {peer_id_str} for key {key.hex()}"
)
# Remove expired providers
for peer_id in expired_providers:
del providers[peer_id]
# Track empty keys for removal
if not providers:
expired_keys.append(key)
# Remove empty keys
for key in expired_keys:
del self.providers[key]
logger.debug(f"Removed key with no providers: {key.hex()}")
def get_provided_keys(self, peer_id: ID) -> list[bytes]:
"""
Get all content keys provided by a specific peer.
:param peer_id: The peer ID to look for
Returns
-------
List[bytes]
List of content keys provided by the peer
"""
peer_id_str = str(peer_id)
result = []
for key, providers in self.providers.items():
if peer_id_str in providers:
result.append(key)
return result
def size(self) -> int:
"""
Get the total number of provider records in the store.
Returns
-------
int
Total number of provider records across all keys
"""
total = 0
for providers in self.providers.values():
total += len(providers)
return total
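
For orientation, the store's advertise/lookup round trip can be driven as in the sketch below (hypothetical driver; it assumes the ProviderStore was constructed with a live host and peer_routing instance):

async def advertise_and_lookup(store: ProviderStore, key: bytes) -> None:
    # provide() records us locally, then fans ADD_PROVIDER out to the
    # closest peers in parallel batches of ALPHA under QUERY_TIMEOUT.
    if await store.provide(key):
        # find_providers() prefers local records and falls back to
        # GET_PROVIDERS queries against the closest peers.
        for info in await store.find_providers(key, count=20):
            print(info.peer_id, [str(addr) for addr in info.addrs])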

View File

@ -0,0 +1,600 @@
"""
Kademlia DHT routing table implementation.
"""
from collections import (
OrderedDict,
)
import logging
import time
import trio
from libp2p.abc import (
IHost,
)
from libp2p.kad_dht.utils import (
xor_distance,
)
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from .common import (
PROTOCOL_ID,
)
from .pb.kademlia_pb2 import (
Message,
)
# logger = logging.getLogger("libp2p.kademlia.routing_table")
logger = logging.getLogger("kademlia-example.routing_table")
# Default parameters
BUCKET_SIZE = 20 # k in the Kademlia paper
MAXIMUM_BUCKETS = 256 # Maximum number of buckets (for 256-bit keys)
PEER_REFRESH_INTERVAL = 60 # Interval to refresh peers in seconds
STALE_PEER_THRESHOLD = 3600 # Time in seconds after which a peer is considered stale
class KBucket:
"""
A k-bucket implementation for the Kademlia DHT.
Each k-bucket stores up to k (BUCKET_SIZE) peers, sorted by least-recently seen.
"""
def __init__(
self,
host: IHost,
bucket_size: int = BUCKET_SIZE,
min_range: int = 0,
max_range: int = 2**256,
):
"""
Initialize a new k-bucket.
:param host: The host this bucket belongs to
:param bucket_size: Maximum number of peers to store in the bucket
:param min_range: Lower boundary of the bucket's key range (inclusive)
:param max_range: Upper boundary of the bucket's key range (exclusive)
"""
self.bucket_size = bucket_size
self.host = host
self.min_range = min_range
self.max_range = max_range
# Store PeerInfo objects along with last-seen timestamp
self.peers: OrderedDict[ID, tuple[PeerInfo, float]] = OrderedDict()
def peer_ids(self) -> list[ID]:
"""Get all peer IDs in the bucket."""
return list(self.peers.keys())
def peer_infos(self) -> list[PeerInfo]:
"""Get all PeerInfo objects in the bucket."""
return [info for info, _ in self.peers.values()]
def get_oldest_peer(self) -> ID | None:
"""Get the least-recently seen peer."""
if not self.peers:
return None
return next(iter(self.peers.keys()))
async def add_peer(self, peer_info: PeerInfo) -> bool:
"""
Add a peer to the bucket. Returns True if the peer was added or updated,
False if the bucket is full.
"""
current_time = time.time()
peer_id = peer_info.peer_id
# If peer is already in the bucket, move it to the end (most recently seen)
if peer_id in self.peers:
self.refresh_peer_last_seen(peer_id)
return True
# If bucket has space, add the peer
if len(self.peers) < self.bucket_size:
self.peers[peer_id] = (peer_info, current_time)
return True
# If bucket is full, we need to replace the least-recently seen peer
# Get the least-recently seen peer
oldest_peer_id = self.get_oldest_peer()
if oldest_peer_id is None:
logger.warning("No oldest peer found when bucket is full")
return False
# Check if the old peer is responsive to ping request
try:
# Try to ping the oldest peer, not the new peer
response = await self._ping_peer(oldest_peer_id)
if response:
# If the old peer is still alive, we will not add the new peer
logger.debug(
"Old peer %s is still alive, cannot add new peer %s",
oldest_peer_id,
peer_id,
)
return False
except Exception as e:
# If the old peer is unresponsive, we can replace it with the new peer
logger.debug(
"Old peer %s is unresponsive, replacing with new peer %s: %s",
oldest_peer_id,
peer_id,
str(e),
)
self.peers.popitem(last=False) # Remove oldest peer
self.peers[peer_id] = (peer_info, current_time)
return True
# If we got here, the oldest peer responded but we couldn't add the new peer
return False
def remove_peer(self, peer_id: ID) -> bool:
"""
Remove a peer from the bucket.
Returns True if the peer was in the bucket, False otherwise.
"""
if peer_id in self.peers:
del self.peers[peer_id]
return True
return False
def has_peer(self, peer_id: ID) -> bool:
"""Check if the peer is in the bucket."""
return peer_id in self.peers
def get_peer_info(self, peer_id: ID) -> PeerInfo | None:
"""Get the PeerInfo for a given peer ID if it exists in the bucket."""
if peer_id in self.peers:
return self.peers[peer_id][0]
return None
def size(self) -> int:
"""Get the number of peers in the bucket."""
return len(self.peers)
def get_stale_peers(self, stale_threshold_seconds: int = 3600) -> list[ID]:
"""
Get peers that haven't been pinged recently.
:param stale_threshold_seconds: Time in seconds after which a peer
is considered stale
Returns
-------
list[ID]
List of peer IDs that need to be refreshed
"""
current_time = time.time()
stale_peers = []
for peer_id, (_, last_seen) in self.peers.items():
if current_time - last_seen > stale_threshold_seconds:
stale_peers.append(peer_id)
return stale_peers
async def _periodic_peer_refresh(self) -> None:
"""Background task to periodically refresh peers"""
try:
while True:
await trio.sleep(PEER_REFRESH_INTERVAL) # Check every minute
# Find stale peers (not pinged in last hour)
stale_peers = self.get_stale_peers(
stale_threshold_seconds=STALE_PEER_THRESHOLD
)
if stale_peers:
logger.debug(f"Found {len(stale_peers)} stale peers to refresh")
for peer_id in stale_peers:
try:
# Try to ping the peer
logger.debug("Pinging stale peer %s", peer_id)
response = await self._ping_peer(peer_id)
if response:
# Update the last seen time
self.refresh_peer_last_seen(peer_id)
logger.debug(f"Refreshed peer {peer_id}")
else:
# If ping fails, remove the peer
logger.debug(f"Failed to ping peer {peer_id}")
self.remove_peer(peer_id)
logger.info(f"Removed unresponsive peer {peer_id}")
logger.debug(f"Successfully refreshed peer {peer_id}")
except Exception as e:
# If ping fails, remove the peer
logger.debug(
"Failed to ping peer %s: %s",
peer_id,
e,
)
self.remove_peer(peer_id)
logger.info(f"Removed unresponsive peer {peer_id}")
except trio.Cancelled:
logger.debug("Peer refresh task cancelled")
except Exception as e:
logger.error(f"Error in peer refresh task: {e}", exc_info=True)
async def _ping_peer(self, peer_id: ID) -> bool:
"""
Ping a peer using protobuf message to check
if it's still alive and update last seen time.
:param peer_id: The ID of the peer to ping
Returns
-------
bool
True if ping successful, False otherwise
"""
result = False
# Get peer info directly from the bucket
peer_info = self.get_peer_info(peer_id)
if not peer_info:
raise ValueError(f"Peer {peer_id} not in bucket")
try:
# Open a stream to the peer with the DHT protocol
stream = await self.host.new_stream(peer_id, [PROTOCOL_ID])
try:
# Create ping protobuf message
ping_msg = Message()
ping_msg.type = Message.PING # Use correct enum
# Serialize and send with length prefix (4 bytes big-endian)
msg_bytes = ping_msg.SerializeToString()
logger.debug(
f"Sending PING message to {peer_id}, size: {len(msg_bytes)} bytes"
)
await stream.write(len(msg_bytes).to_bytes(4, byteorder="big"))
await stream.write(msg_bytes)
# Wait for response with timeout
with trio.move_on_after(2): # 2 second timeout
# Read response length (4 bytes)
length_bytes = await stream.read(4)
if not length_bytes or len(length_bytes) < 4:
logger.warning(f"Peer {peer_id} disconnected during ping")
return False
msg_len = int.from_bytes(length_bytes, byteorder="big")
if (
msg_len <= 0 or msg_len > 1024 * 1024
): # Sanity check on message size
logger.warning(
f"Invalid message length from {peer_id}: {msg_len}"
)
return False
logger.debug(
f"Receiving response from {peer_id}, size: {msg_len} bytes"
)
# Read full message
response_bytes = await stream.read(msg_len)
if not response_bytes:
logger.warning(f"Failed to read response from {peer_id}")
return False
# Parse protobuf response
response = Message()
try:
response.ParseFromString(response_bytes)
except Exception as e:
logger.warning(
f"Failed to parse protobuf response from {peer_id}: {e}"
)
return False
if response.type == Message.PING:
# Update the last seen timestamp for this peer
logger.debug(f"Successfully pinged peer {peer_id}")
result = True
return result
else:
logger.warning(
f"Unexpected response type from {peer_id}: {response.type}"
)
return False
# If we get here, the ping timed out
logger.warning(f"Ping to peer {peer_id} timed out")
return False
finally:
await stream.close()
return result
except Exception as e:
logger.error(f"Error pinging peer {peer_id}: {str(e)}")
return False
def refresh_peer_last_seen(self, peer_id: ID) -> bool:
"""
Update the last-seen timestamp for a peer in the bucket.
:param peer_id: The ID of the peer to refresh
Returns
-------
bool
True if the peer was found and refreshed, False otherwise
"""
if peer_id in self.peers:
# Get current peer info and update the timestamp
peer_info, _ = self.peers[peer_id]
current_time = time.time()
self.peers[peer_id] = (peer_info, current_time)
# Move to end of ordered dict to mark as most recently seen
self.peers.move_to_end(peer_id)
return True
return False
def key_in_range(self, key: bytes) -> bool:
"""
Check if a key is in the range of this bucket.
:param key: The key to check (bytes)
Returns
-------
bool
True if the key is in range, False otherwise
"""
key_int = int.from_bytes(key, byteorder="big")
return self.min_range <= key_int < self.max_range
def split(self) -> tuple["KBucket", "KBucket"]:
"""
Split the bucket into two buckets.
Returns
-------
tuple
(lower_bucket, upper_bucket)
"""
midpoint = (self.min_range + self.max_range) // 2
lower_bucket = KBucket(self.host, self.bucket_size, self.min_range, midpoint)
upper_bucket = KBucket(self.host, self.bucket_size, midpoint, self.max_range)
# Redistribute peers
for peer_id, (peer_info, timestamp) in self.peers.items():
peer_key = int.from_bytes(peer_id.to_bytes(), byteorder="big")
if peer_key < midpoint:
lower_bucket.peers[peer_id] = (peer_info, timestamp)
else:
upper_bucket.peers[peer_id] = (peer_info, timestamp)
return lower_bucket, upper_bucket
class RoutingTable:
"""
The Kademlia routing table maintains information on which peers to contact for any
given peer ID in the network.
"""
def __init__(self, local_id: ID, host: IHost) -> None:
"""
Initialize the routing table.
:param local_id: The ID of the local node.
:param host: The host this routing table belongs to.
"""
self.local_id = local_id
self.host = host
self.buckets = [KBucket(host, BUCKET_SIZE)]
async def add_peer(self, peer_obj: PeerInfo | ID) -> bool:
"""
Add a peer to the routing table.
:param peer_obj: Either PeerInfo object or peer ID to add
Returns
-------
bool: True if the peer was added or updated, False otherwise
"""
peer_id = None
peer_info = None
try:
# Handle different types of input
if isinstance(peer_obj, PeerInfo):
# Already have PeerInfo object
peer_info = peer_obj
peer_id = peer_obj.peer_id
else:
# Assume it's a peer ID
peer_id = peer_obj
# Try to get addresses from the peerstore if available
try:
addrs = self.host.get_peerstore().addrs(peer_id)
if addrs:
# Create PeerInfo object
peer_info = PeerInfo(peer_id, addrs)
else:
logger.debug(
"No addresses found for peer %s in peerstore, skipping",
peer_id,
)
return False
except Exception as peerstore_error:
# Handle case where peer is not in peerstore yet
logger.debug(
"Peer %s not found in peerstore: %s, skipping",
peer_id,
str(peerstore_error),
)
return False
# Don't add ourselves
if peer_id == self.local_id:
return False
# Find the right bucket for this peer
bucket = self.find_bucket(peer_id)
# Try to add to the bucket
success = await bucket.add_peer(peer_info)
if success:
logger.debug(f"Successfully added peer {peer_id} to routing table")
return success
except Exception as e:
logger.debug(f"Error adding peer {peer_obj} to routing table: {e}")
return False
def remove_peer(self, peer_id: ID) -> bool:
"""
Remove a peer from the routing table.
:param peer_id: The ID of the peer to remove
Returns
-------
bool: True if the peer was removed, False otherwise
"""
bucket = self.find_bucket(peer_id)
return bucket.remove_peer(peer_id)
def find_bucket(self, peer_id: ID) -> KBucket:
"""
Find the bucket that would contain the given peer ID.
:param peer_id: The peer ID to locate a bucket for
Returns
-------
KBucket: The bucket for this peer
"""
for bucket in self.buckets:
if bucket.key_in_range(peer_id.to_bytes()):
return bucket
return self.buckets[0]
def find_local_closest_peers(self, key: bytes, count: int = 20) -> list[ID]:
"""
Find the closest peers to a given key.
:param key: The key to find closest peers to (bytes)
:param count: Maximum number of peers to return
Returns
-------
List[ID]: List of peer IDs closest to the key
"""
# Get all peers from all buckets
all_peers = []
for bucket in self.buckets:
all_peers.extend(bucket.peer_ids())
# Sort by XOR distance to the key
all_peers.sort(key=lambda p: xor_distance(p.to_bytes(), key))
return all_peers[:count]
def get_peer_ids(self) -> list[ID]:
"""
Get all peer IDs in the routing table.
Returns
-------
List[ID]: List of all peer IDs
"""
peers = []
for bucket in self.buckets:
peers.extend(bucket.peer_ids())
return peers
def get_peer_info(self, peer_id: ID) -> PeerInfo | None:
"""
Get the peer info for a specific peer.
:param peer_id: The ID of the peer to get info for
Returns
-------
PeerInfo: The peer info, or None if not found
"""
bucket = self.find_bucket(peer_id)
return bucket.get_peer_info(peer_id)
def peer_in_table(self, peer_id: ID) -> bool:
"""
Check if a peer is in the routing table.
:param peer_id: The ID of the peer to check
Returns
-------
bool: True if the peer is in the routing table, False otherwise
"""
bucket = self.find_bucket(peer_id)
return bucket.has_peer(peer_id)
def size(self) -> int:
"""
Get the number of peers in the routing table.
Returns
-------
int: Number of peers
"""
count = 0
for bucket in self.buckets:
count += bucket.size()
return count
def get_stale_peers(self, stale_threshold_seconds: int = 3600) -> list[ID]:
"""
Get all stale peers from all buckets
:param stale_threshold_seconds: Time in seconds after which a peer
is considered stale
Returns
-------
list[ID]
List of stale peer IDs
"""
stale_peers = []
for bucket in self.buckets:
stale_peers.extend(bucket.get_stale_peers(stale_threshold_seconds))
return stale_peers
def cleanup_routing_table(self) -> None:
"""
Cleanup the routing table by removing all data.
This is useful for resetting the routing table during tests or reinitialization.
"""
self.buckets = [KBucket(self.host, BUCKET_SIZE)]
logger.info("Routing table cleaned up, all data removed.")

libp2p/kad_dht/utils.py
View File

@ -0,0 +1,117 @@
"""
Utility functions for Kademlia DHT implementation.
"""
import base58
import multihash
from libp2p.peer.id import (
ID,
)
def create_key_from_binary(binary_data: bytes) -> bytes:
"""
Creates a key for the DHT by hashing binary data with SHA-256.
:param binary_data: The binary data to hash.
Returns
-------
bytes: The resulting key.
"""
return multihash.digest(binary_data, "sha2-256").digest
def xor_distance(key1: bytes, key2: bytes) -> int:
"""
Calculate the XOR distance between two keys.
:param key1: First key (bytes)
:param key2: Second key (bytes)
Returns
-------
int: The XOR distance between the keys
"""
# Ensure the inputs are bytes
if not isinstance(key1, bytes) or not isinstance(key2, bytes):
raise TypeError("Both key1 and key2 must be bytes objects")
# Convert to integers
k1 = int.from_bytes(key1, byteorder="big")
k2 = int.from_bytes(key2, byteorder="big")
# Calculate XOR distance
return k1 ^ k2
def bytes_to_base58(data: bytes) -> str:
"""
Convert bytes to base58 encoded string.
:param data: Input bytes
Returns
-------
str: Base58 encoded string
"""
return base58.b58encode(data).decode("utf-8")
def sort_peer_ids_by_distance(target_key: bytes, peer_ids: list[ID]) -> list[ID]:
"""
Sort a list of peer IDs by their distance to the target key.
:param target_key: The target key to measure distance from
:param peer_ids: List of peer IDs to sort
Returns
-------
List[ID]: Sorted list of peer IDs from closest to furthest
"""
def get_distance(peer_id: ID) -> int:
# Hash the peer ID bytes to get a key for distance calculation
peer_hash = multihash.digest(peer_id.to_bytes(), "sha2-256").digest
return xor_distance(target_key, peer_hash)
return sorted(peer_ids, key=get_distance)
def shared_prefix_len(first: bytes, second: bytes) -> int:
"""
Calculate the number of prefix bits shared by two byte sequences.
:param first: First byte sequence
:param second: Second byte sequence
Returns
-------
int: Number of shared prefix bits
"""
# Compare each byte to find the first bit difference
common_length = 0
for i in range(min(len(first), len(second))):
byte_first = first[i]
byte_second = second[i]
if byte_first == byte_second:
common_length += 8
else:
# Find specific bit where they differ
xor = byte_first ^ byte_second
# Count leading zeros in the xor result
for j in range(7, -1, -1):
if (xor >> j) & 1 == 1:
return common_length + (7 - j)
# This shouldn't be reached if xor != 0
return common_length + 8
return common_length
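
A small worked example of the distance helpers, with values chosen purely for illustration:

# Keys are compared as big-endian integers: 0x0f ^ 0x0a == 0x05.
assert xor_distance(b"\x0f", b"\x0a") == 5
# 00001111 and 00001010 agree on their first five bits, then differ.
assert shared_prefix_len(b"\x0f", b"\x0a") == 5
# sort_peer_ids_by_distance hashes each peer ID with SHA-256 first, so
# distances are measured in the same key space as create_key_from_binary.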

View File

@ -0,0 +1,393 @@
"""
Value store implementation for Kademlia DHT.
Provides a way to store and retrieve key-value pairs with optional expiration.
"""
import logging
import time
import varint
from libp2p.abc import (
IHost,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.peer.id import (
ID,
)
from .common import (
DEFAULT_TTL,
PROTOCOL_ID,
)
from .pb.kademlia_pb2 import (
Message,
)
# logger = logging.getLogger("libp2p.kademlia.value_store")
logger = logging.getLogger("kademlia-example.value_store")
class ValueStore:
"""
Store for key-value pairs in a Kademlia DHT.
Values are stored with a timestamp and optional expiration time.
"""
def __init__(self, host: IHost, local_peer_id: ID):
"""
Initialize an empty value store.
:param host: The libp2p host instance.
:param local_peer_id: The local peer ID to ignore in peer requests.
"""
# Store format: {key: (value, validity)}
self.store: dict[bytes, tuple[bytes, float]] = {}
# Store references to the host and local peer ID for making requests
self.host = host
self.local_peer_id = local_peer_id
def put(self, key: bytes, value: bytes, validity: float = 0.0) -> None:
"""
Store a value in the DHT.
:param key: The key to store the value under
:param value: The value to store
:param validity: Absolute expiry timestamp (seconds since the epoch).
Defaults to `time.time() + DEFAULT_TTL` when set to 0.0.
Returns
-------
None
"""
if validity == 0.0:
validity = time.time() + DEFAULT_TTL
logger.debug(
"Storing value for key %s... with validity %s", key.hex(), validity
)
self.store[key] = (value, validity)
logger.debug(f"Stored value for key {key.hex()}")
async def _store_at_peer(self, peer_id: ID, key: bytes, value: bytes) -> bool:
"""
Store a value at a specific peer.
:param peer_id: The ID of the peer to store the value at
:param key: The key to store
:param value: The value to store
Returns
-------
bool
True if the value was successfully stored, False otherwise
"""
result = False
stream = None
try:
# Don't try to store at ourselves
if self.local_peer_id and peer_id == self.local_peer_id:
result = True
return result
if not self.host:
logger.error("Host not initialized, cannot store value at peer")
return False
logger.debug(f"Storing value for key {key.hex()} at peer {peer_id}")
# Open a stream to the peer
stream = await self.host.new_stream(peer_id, [PROTOCOL_ID])
logger.debug(f"Opened stream to peer {peer_id}")
# Create the PUT_VALUE message with protobuf
message = Message()
message.type = Message.MessageType.PUT_VALUE
# Set message fields
message.key = key
message.record.key = key
message.record.value = value
message.record.timeReceived = str(time.time())
# Serialize and send the protobuf message with length prefix
proto_bytes = message.SerializeToString()
await stream.write(varint.encode(len(proto_bytes)))
await stream.write(proto_bytes)
logger.debug("Sent PUT_VALUE protobuf message with varint length")
# Read varint-prefixed response length
length_bytes = b""
while True:
logger.debug("Reading varint length prefix for response...")
b = await stream.read(1)
if not b:
logger.warning("Connection closed while reading varint length")
return False
length_bytes += b
if b[0] & 0x80 == 0:
break
logger.debug(f"Received varint length bytes: {length_bytes.hex()}")
response_length = varint.decode_bytes(length_bytes)
logger.debug("Response length: %d bytes", response_length)
# Read response data
response_bytes = b""
remaining = response_length
while remaining > 0:
chunk = await stream.read(remaining)
if not chunk:
logger.debug(
f"Connection closed by peer {peer_id} while reading data"
)
return False
response_bytes += chunk
remaining -= len(chunk)
# Parse protobuf response
response = Message()
response.ParseFromString(response_bytes)
# Check if response is valid
if response.type == Message.MessageType.PUT_VALUE:
if response.key:
result = True
return result
except Exception as e:
logger.warning(f"Failed to store value at peer {peer_id}: {e}")
return False
finally:
if stream:
await stream.close()
return result
def get(self, key: bytes) -> bytes | None:
"""
Retrieve a value from the DHT.
:param key: The key to look up
Returns
-------
Optional[bytes]
The stored value, or None if not found or expired
"""
logger.debug("Retrieving value for key %s...", key.hex()[:8])
if key not in self.store:
return None
value, validity = self.store[key]
logger.debug(
"Found value for key %s... with validity %s",
key.hex(),
validity,
)
# Check if the value has expired
if validity is not None and validity < time.time():
logger.debug(
"Value for key %s... has expired, removing it",
key.hex()[:8],
)
self.remove(key)
return None
return value
async def _get_from_peer(self, peer_id: ID, key: bytes) -> bytes | None:
"""
Retrieve a value from a specific peer.
        :param peer_id: The ID of the peer to retrieve the value from
        :param key: The key to retrieve
Returns
-------
Optional[bytes]
The value if found, None otherwise
"""
stream = None
try:
# Don't try to get from ourselves
if peer_id == self.local_peer_id:
return None
logger.debug(f"Getting value for key {key.hex()} from peer {peer_id}")
# Open a stream to the peer
stream = await self.host.new_stream(peer_id, [TProtocol(PROTOCOL_ID)])
logger.debug(f"Opened stream to peer {peer_id} for GET_VALUE")
# Create the GET_VALUE message using protobuf
message = Message()
message.type = Message.MessageType.GET_VALUE
message.key = key
# Serialize and send the protobuf message
proto_bytes = message.SerializeToString()
await stream.write(varint.encode(len(proto_bytes)))
await stream.write(proto_bytes)
# Read response length
length_bytes = b""
while True:
b = await stream.read(1)
if not b:
logger.warning("Connection closed while reading length")
return None
length_bytes += b
if b[0] & 0x80 == 0:
break
response_length = varint.decode_bytes(length_bytes)
# Read response data
response_bytes = b""
remaining = response_length
while remaining > 0:
chunk = await stream.read(remaining)
if not chunk:
logger.debug(
f"Connection closed by peer {peer_id} while reading data"
)
return None
response_bytes += chunk
remaining -= len(chunk)
# Parse protobuf response
try:
response = Message()
response.ParseFromString(response_bytes)
logger.debug(
f"Received protobuf response from peer"
f" {peer_id}, type: {response.type}"
)
# Process protobuf response
if (
response.type == Message.MessageType.GET_VALUE
and response.HasField("record")
and response.record.value
):
logger.debug(
f"Received value for key {key.hex()} from peer {peer_id}"
)
return response.record.value
# Handle case where value is not found but peer infos are returned
else:
logger.debug(
f"Value not found for key {key.hex()} from peer {peer_id},"
f" received {len(response.closerPeers)} closer peers"
)
return None
except Exception as proto_err:
logger.warning(f"Failed to parse as protobuf: {proto_err}")
return None
except Exception as e:
logger.warning(f"Failed to get value from peer {peer_id}: {e}")
return None
finally:
if stream:
await stream.close()
def remove(self, key: bytes) -> bool:
"""
Remove a value from the DHT.
        :param key: The key to remove
Returns
-------
bool
True if the key was found and removed, False otherwise
"""
if key in self.store:
del self.store[key]
logger.debug(f"Removed value for key {key.hex()[:8]}...")
return True
return False
def has(self, key: bytes) -> bool:
"""
Check if a key exists in the store and hasn't expired.
        :param key: The key to check
Returns
-------
bool
True if the key exists and hasn't expired, False otherwise
"""
if key not in self.store:
return False
_, validity = self.store[key]
if validity is not None and time.time() > validity:
self.remove(key)
return False
return True
def cleanup_expired(self) -> int:
"""
Remove all expired values from the store.
Returns
-------
int
The number of expired values that were removed
"""
current_time = time.time()
expired_keys = [
key for key, (_, validity) in self.store.items() if current_time > validity
]
for key in expired_keys:
del self.store[key]
if expired_keys:
logger.debug(f"Cleaned up {len(expired_keys)} expired values")
return len(expired_keys)
def get_keys(self) -> list[bytes]:
"""
Get all non-expired keys in the store.
Returns
-------
list[bytes]
List of keys
"""
# Clean up expired values first
self.cleanup_expired()
return list(self.store.keys())
def size(self) -> int:
"""
Get the number of items in the store (after removing expired entries).
Returns
-------
int
Number of items
"""
self.cleanup_expired()
return len(self.store)
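A minimal usage sketch of the local store API above. The enclosing class name (`ValueStore`) and the acceptability of `None` for the host and local peer ID are assumptions for illustration; `DEFAULT_TTL` comes from the surrounding module:

import time

store = ValueStore(host=None, local_peer_id=None)  # hypothetical local-only construction
store.put(b"key", b"value")  # expires at time.time() + DEFAULT_TTL
store.put(b"tmp", b"v", validity=time.time() + 60)  # explicit absolute expiry
assert store.get(b"key") == b"value"
assert store.has(b"tmp")
removed = store.cleanup_expired()  # drops expired entries, returns the count
print(store.size(), store.get_keys(), removed)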

View File: libp2p/network/swarm.py

@@ -187,7 +187,7 @@ class Swarm(Service, INetworkService):
         # Per, https://discuss.libp2p.io/t/multistream-security/130, we first secure
         # the conn and then mux the conn
         try:
-            secured_conn = await self.upgrader.upgrade_security(raw_conn, peer_id, True)
+            secured_conn = await self.upgrader.upgrade_security(raw_conn, True, peer_id)
         except SecurityUpgradeFailure as error:
             logger.debug("failed to upgrade security for peer %s", peer_id)
             await raw_conn.close()
@@ -257,10 +257,7 @@ class Swarm(Service, INetworkService):
             # Per, https://discuss.libp2p.io/t/multistream-security/130, we first
             # secure the conn and then mux the conn
             try:
-                # FIXME: This dummy `ID(b"")` for the remote peer is useless.
-                secured_conn = await self.upgrader.upgrade_security(
-                    raw_conn, ID(b""), False
-                )
+                secured_conn = await self.upgrader.upgrade_security(raw_conn, False)
             except SecurityUpgradeFailure as error:
                 logger.debug("failed to upgrade security for peer at %s", maddr)
                 await raw_conn.close()
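The two call sites above imply a reordered `upgrade_security` where the initiator flag comes before an optional remote peer ID, so the listener path no longer needs the dummy `ID(b"")`. A hedged sketch of the implied signature (parameter names, defaults, and return type are assumptions, not shown in this diff):

# Assumed shape of TransportUpgrader.upgrade_security after this change;
# only the call sites above are authoritative.
async def upgrade_security(
    self,
    raw_conn: IRawConnection,
    is_initiator: bool,
    peer_id: ID | None = None,
) -> ISecureConn:
    ...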

View File: libp2p/peer/peerdata.py

@@ -18,6 +18,13 @@ from libp2p.crypto.keys import (
     PublicKey,
 )
 
+"""
+Latency EWMA Smoothing governs the decay of the EWMA (the speed at which
+it changes). This must be a normalized (0-1) value.
+1 is 100% change, 0 is no change.
+"""
+LATENCY_EWMA_SMOOTHING = 0.1
+
 
 class PeerData(IPeerData):
     pubkey: PublicKey | None
@@ -27,6 +34,7 @@ class PeerData(IPeerData):
     addrs: list[Multiaddr]
     last_identified: int
     ttl: int  # Keep ttl=0 by default for always valid
+    latmap: float
 
     def __init__(self) -> None:
         self.pubkey = None
@@ -36,6 +44,9 @@ class PeerData(IPeerData):
         self.addrs = []
         self.last_identified = int(time.time())
         self.ttl = 0
+        self.latmap = 0
+
+    # --------PROTO-BOOK--------
 
     def get_protocols(self) -> list[str]:
         """
@@ -55,6 +66,37 @@ class PeerData(IPeerData):
         """
         self.protocols = list(protocols)
 
+    def remove_protocols(self, protocols: Sequence[str]) -> None:
+        """
+        :param protocols: protocols to remove
+        """
+        for protocol in protocols:
+            if protocol in self.protocols:
+                self.protocols.remove(protocol)
+
+    def supports_protocols(self, protocols: Sequence[str]) -> list[str]:
+        """
+        :param protocols: protocols to check from
+        :return: all supported protocols in the given list
+        """
+        return [proto for proto in protocols if proto in self.protocols]
+
+    def first_supported_protocol(self, protocols: Sequence[str]) -> str:
+        """
+        :param protocols: protocols to check from
+        :return: first supported protocol in the given list
+        """
+        for protocol in protocols:
+            if protocol in self.protocols:
+                return protocol
+        return "None supported"
+
+    def clear_protocol_data(self) -> None:
+        """Clear all protocols"""
+        self.protocols = []
+
+    # -------ADDR-BOOK---------
+
     def add_addrs(self, addrs: Sequence[Multiaddr]) -> None:
         """
         :param addrs: multiaddresses to add
@@ -73,6 +115,7 @@ class PeerData(IPeerData):
         """Clear all addresses."""
         self.addrs = []
 
+    # -------METADATA-----------
     def put_metadata(self, key: str, val: Any) -> None:
         """
         :param key: key in KV pair
@@ -90,6 +133,11 @@ class PeerData(IPeerData):
             return self.metadata[key]
         raise PeerDataError("key not found")
 
+    def clear_metadata(self) -> None:
+        """Clears metadata."""
+        self.metadata = {}
+
+    # -------KEY-BOOK---------------
     def add_pubkey(self, pubkey: PublicKey) -> None:
         """
         :param pubkey:
@@ -120,9 +168,41 @@ class PeerData(IPeerData):
             raise PeerDataError("private key not found")
         return self.privkey
 
+    def clear_keydata(self) -> None:
+        """Clears keydata"""
+        self.pubkey = None
+        self.privkey = None
+
+    # ----------METRICS--------------
+
+    def record_latency(self, new_latency: float) -> None:
+        """
+        Records a new latency measurement for the given peer
+        using Exponentially Weighted Moving Average (EWMA)
+
+        :param new_latency: the new latency value
+        """
+        s = LATENCY_EWMA_SMOOTHING
+        if s > 1 or s < 0:
+            s = 0.1
+        if self.latmap == 0:
+            self.latmap = new_latency
+        else:
+            prev = self.latmap
+            updated = ((1.0 - s) * prev) + (s * new_latency)
+            self.latmap = updated
+
+    def latency_EWMA(self) -> float:
+        """Returns the latency EWMA value"""
+        return self.latmap
+
+    def clear_metrics(self) -> None:
+        """Clear the latency metrics"""
+        self.latmap = 0
+
     def update_last_identified(self) -> None:
         self.last_identified = int(time.time())
 
+    # ----------TTL------------------
     def get_last_identified(self) -> int:
         """
         :return: last identified timestamp
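The EWMA update above keeps (1 - s) of the previous estimate and mixes in s of the new sample, so with s = 0.1 a single outlier barely moves the estimate. A small worked example mirroring record_latency:

s = 0.1  # LATENCY_EWMA_SMOOTHING
ewma = 0.0
for sample in (100.0, 200.0, 200.0):  # RTT samples in ms
    ewma = sample if ewma == 0 else (1.0 - s) * ewma + s * sample
    print(ewma)  # 100.0, then 110.0, then 119.0 — drifting slowly toward 200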

View File: libp2p/peer/peerinfo.py

@@ -66,5 +66,23 @@ def info_from_p2p_addr(addr: multiaddr.Multiaddr) -> PeerInfo:
     return PeerInfo(peer_id, [addr])
 
 
+def peer_info_to_bytes(peer_info: PeerInfo) -> bytes:
+    lines = [str(peer_info.peer_id)] + [str(addr) for addr in peer_info.addrs]
+    return "\n".join(lines).encode("utf-8")
+
+
+def peer_info_from_bytes(data: bytes) -> PeerInfo:
+    try:
+        lines = data.decode("utf-8").splitlines()
+        if not lines:
+            raise InvalidAddrError("no data to decode PeerInfo")
+        peer_id = ID.from_base58(lines[0])
+        addrs = [multiaddr.Multiaddr(addr_str) for addr_str in lines[1:]]
+        return PeerInfo(peer_id, addrs)
+    except Exception as e:
+        raise InvalidAddrError(f"failed to decode PeerInfo: {e}")
+
+
 class InvalidAddrError(ValueError):
     pass
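A round-trip sketch for the two helpers added above. The import path matches the `libp2p.peer.peerinfo` imports used later in this changeset; the peer ID string and address are illustrative:

import multiaddr
from libp2p.peer.id import ID
from libp2p.peer.peerinfo import PeerInfo, peer_info_from_bytes, peer_info_to_bytes

info = PeerInfo(
    ID.from_base58("QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N"),
    [multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/4001")],
)
blob = peer_info_to_bytes(info)  # "<peer-id>\n<addr>\n..." encoded as UTF-8
restored = peer_info_from_bytes(blob)
assert restored.peer_id == info.peer_id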

View File: libp2p/peer/peerstore.py

@@ -2,6 +2,7 @@ from collections import (
     defaultdict,
 )
 from collections.abc import (
+    AsyncIterable,
     Sequence,
 )
 from typing import (
@@ -11,6 +12,8 @@ from typing import (
 from multiaddr import (
     Multiaddr,
 )
+import trio
+from trio import MemoryReceiveChannel, MemorySendChannel
 
 from libp2p.abc import (
     IPeerStore,
@@ -40,6 +43,7 @@ class PeerStore(IPeerStore):
     def __init__(self) -> None:
         self.peer_data_map = defaultdict(PeerData)
+        self.addr_update_channels: dict[ID, MemorySendChannel[Multiaddr]] = {}
 
     def peer_info(self, peer_id: ID) -> PeerInfo:
         """
@@ -53,6 +57,29 @@ class PeerStore(IPeerStore):
             return PeerInfo(peer_id, peer_data.get_addrs())
         raise PeerStoreError("peer ID not found")
 
+    def peer_ids(self) -> list[ID]:
+        """
+        :return: all of the peer IDs stored in peer store
+        """
+        return list(self.peer_data_map.keys())
+
+    def clear_peerdata(self, peer_id: ID) -> None:
+        """Clears the peer data of the peer"""
+
+    def valid_peer_ids(self) -> list[ID]:
+        """
+        :return: all of the valid peer IDs stored in peer store
+        """
+        valid_peer_ids: list[ID] = []
+        for peer_id, peer_data in self.peer_data_map.items():
+            if not peer_data.is_expired():
+                valid_peer_ids.append(peer_id)
+            else:
+                peer_data.clear_addrs()
+        return valid_peer_ids
+
+    # --------PROTO-BOOK--------
+
     def get_protocols(self, peer_id: ID) -> list[str]:
         """
         :param peer_id: peer ID to get protocols for
@@ -79,23 +106,31 @@ class PeerStore(IPeerStore):
         peer_data = self.peer_data_map[peer_id]
         peer_data.set_protocols(list(protocols))
 
-    def peer_ids(self) -> list[ID]:
-        """
-        :return: all of the peer IDs stored in peer store
-        """
-        return list(self.peer_data_map.keys())
-
-    def valid_peer_ids(self) -> list[ID]:
-        """
-        :return: all of the valid peer IDs stored in peer store
-        """
-        valid_peer_ids: list[ID] = []
-        for peer_id, peer_data in self.peer_data_map.items():
-            if not peer_data.is_expired():
-                valid_peer_ids.append(peer_id)
-            else:
-                peer_data.clear_addrs()
-        return valid_peer_ids
+    def remove_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
+        """
+        :param peer_id: peer ID to remove protocols for
+        :param protocols: unsupported protocols to remove
+        """
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.remove_protocols(protocols)
+
+    def supports_protocols(self, peer_id: ID, protocols: Sequence[str]) -> list[str]:
+        """
+        :return: all supported protocols in the given list
+        """
+        peer_data = self.peer_data_map[peer_id]
+        return peer_data.supports_protocols(protocols)
+
+    def first_supported_protocol(self, peer_id: ID, protocols: Sequence[str]) -> str:
+        peer_data = self.peer_data_map[peer_id]
+        return peer_data.first_supported_protocol(protocols)
+
+    def clear_protocol_data(self, peer_id: ID) -> None:
+        """Clears protocol data"""
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.clear_protocol_data()
+
+    # ------METADATA---------
 
     def get(self, peer_id: ID, key: str) -> Any:
         """
@@ -121,6 +156,13 @@ class PeerStore(IPeerStore):
         peer_data = self.peer_data_map[peer_id]
         peer_data.put_metadata(key, val)
 
+    def clear_metadata(self, peer_id: ID) -> None:
+        """Clears metadata"""
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.clear_metadata()
+
+    # -------ADDR-BOOK--------
+
     def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int = 0) -> None:
         """
         :param peer_id: peer ID to add address for
@@ -140,6 +182,13 @@ class PeerStore(IPeerStore):
         peer_data.set_ttl(ttl)
         peer_data.update_last_identified()
 
+        if peer_id in self.addr_update_channels:
+            for addr in addrs:
+                try:
+                    self.addr_update_channels[peer_id].send_nowait(addr)
+                except trio.WouldBlock:
+                    pass  # Or consider logging / dropping / replacing stream
+
     def addrs(self, peer_id: ID) -> list[Multiaddr]:
         """
         :param peer_id: peer ID to get addrs for
@@ -165,7 +214,7 @@ class PeerStore(IPeerStore):
     def peers_with_addrs(self) -> list[ID]:
         """
         :return: all of the peer IDs which have addrs stored in peer store
         """
         # Add all peers with at least 1 addr to output
         output: list[ID] = []
@@ -179,6 +228,27 @@ class PeerStore(IPeerStore):
                 peer_data.clear_addrs()
         return output
 
+    async def addr_stream(self, peer_id: ID) -> AsyncIterable[Multiaddr]:
+        """
+        Returns an async stream of newly added addresses for the given peer.
+
+        This function allows consumers to subscribe to address updates for a peer
+        and receive each new address as it is added via `add_addr` or `add_addrs`.
+
+        :param peer_id: The ID of the peer to monitor address updates for.
+        :return: An async iterator yielding Multiaddr instances as they are added.
+        """
+        send: MemorySendChannel[Multiaddr]
+        receive: MemoryReceiveChannel[Multiaddr]
+        send, receive = trio.open_memory_channel(0)
+        self.addr_update_channels[peer_id] = send
+
+        async for addr in receive:
+            yield addr
+
+    # -------KEY-BOOK---------
+
     def add_pubkey(self, peer_id: ID, pubkey: PublicKey) -> None:
         """
         :param peer_id: peer ID to add public key for
@@ -239,6 +309,45 @@ class PeerStore(IPeerStore):
         self.add_pubkey(peer_id, key_pair.public_key)
         self.add_privkey(peer_id, key_pair.private_key)
 
+    def peer_with_keys(self) -> list[ID]:
+        """Returns the peer_ids for which keys are stored"""
+        return [
+            peer_id
+            for peer_id, pdata in self.peer_data_map.items()
+            if pdata.pubkey is not None
+        ]
+
+    def clear_keydata(self, peer_id: ID) -> None:
+        """Clears the keys of the peer"""
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.clear_keydata()
+
+    # --------METRICS--------
+
+    def record_latency(self, peer_id: ID, RTT: float) -> None:
+        """
+        Records a new latency measurement for the given peer
+        using Exponentially Weighted Moving Average (EWMA)
+
+        :param peer_id: peer ID to record latency for
+        :param RTT: the new latency value (round trip time)
+        """
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.record_latency(RTT)
+
+    def latency_EWMA(self, peer_id: ID) -> float:
+        """
+        :param peer_id: peer ID to get the latency EWMA for
+        :return: The latency EWMA value for that peer
+        """
+        peer_data = self.peer_data_map[peer_id]
+        return peer_data.latency_EWMA()
+
+    def clear_metrics(self, peer_id: ID) -> None:
+        """Clear the latency metrics"""
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.clear_metrics()
+
 
 class PeerStoreError(KeyError):
     """Raised when peer ID is not found in peer store."""

View File: libp2p/protocol_muxer/multiselect.py

@@ -1,3 +1,5 @@
+import trio
+
 from libp2p.abc import (
     IMultiselectCommunicator,
     IMultiselectMuxer,
@@ -14,6 +16,7 @@ from .exceptions import (
 
 MULTISELECT_PROTOCOL_ID = "/multistream/1.0.0"
 PROTOCOL_NOT_FOUND_MSG = "na"
+DEFAULT_NEGOTIATE_TIMEOUT = 5
 
 
 class Multiselect(IMultiselectMuxer):
@@ -47,47 +50,56 @@ class Multiselect(IMultiselectMuxer):
     # FIXME: Make TProtocol Optional[TProtocol] to keep types consistent
     async def negotiate(
-        self, communicator: IMultiselectCommunicator
+        self,
+        communicator: IMultiselectCommunicator,
+        negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
     ) -> tuple[TProtocol, StreamHandlerFn | None]:
         """
         Negotiate performs protocol selection.
 
         :param communicator: communicator to negotiate over
+        :param negotiate_timeout: timeout for negotiation
         :return: selected protocol name, handler function
         :raise MultiselectError: raised when negotiation failed
         """
-        await self.handshake(communicator)
-
-        while True:
-            try:
-                command = await communicator.read()
-            except MultiselectCommunicatorError as error:
-                raise MultiselectError() from error
-
-            if command == "ls":
-                supported_protocols = [p for p in self.handlers.keys() if p is not None]
-                response = "\n".join(supported_protocols) + "\n"
-                try:
-                    await communicator.write(response)
-                except MultiselectCommunicatorError as error:
-                    raise MultiselectError() from error
-            else:
-                protocol = TProtocol(command)
-                if protocol in self.handlers:
-                    try:
-                        await communicator.write(protocol)
-                    except MultiselectCommunicatorError as error:
-                        raise MultiselectError() from error
-                    return protocol, self.handlers[protocol]
-                try:
-                    await communicator.write(PROTOCOL_NOT_FOUND_MSG)
-                except MultiselectCommunicatorError as error:
-                    raise MultiselectError() from error
-        raise MultiselectError("Negotiation failed: no matching protocol")
+        try:
+            with trio.fail_after(negotiate_timeout):
+                await self.handshake(communicator)
+
+                while True:
+                    try:
+                        command = await communicator.read()
+                    except MultiselectCommunicatorError as error:
+                        raise MultiselectError() from error
+
+                    if command == "ls":
+                        supported_protocols = [
+                            p for p in self.handlers.keys() if p is not None
+                        ]
+                        response = "\n".join(supported_protocols) + "\n"
+                        try:
+                            await communicator.write(response)
+                        except MultiselectCommunicatorError as error:
+                            raise MultiselectError() from error
+                    else:
+                        protocol = TProtocol(command)
+                        if protocol in self.handlers:
+                            try:
+                                await communicator.write(protocol)
+                            except MultiselectCommunicatorError as error:
+                                raise MultiselectError() from error
+                            return protocol, self.handlers[protocol]
+                        try:
+                            await communicator.write(PROTOCOL_NOT_FOUND_MSG)
+                        except MultiselectCommunicatorError as error:
+                            raise MultiselectError() from error
+                raise MultiselectError("Negotiation failed: no matching protocol")
+        except trio.TooSlowError:
+            raise MultiselectError("handshake read timeout")
 
     async def handshake(self, communicator: IMultiselectCommunicator) -> None:
         """

View File: libp2p/protocol_muxer/multiselect_client.py

@@ -2,6 +2,8 @@ from collections.abc import (
     Sequence,
 )
 
+import trio
+
 from libp2p.abc import (
     IMultiselectClient,
     IMultiselectCommunicator,
@@ -17,6 +19,7 @@ from .exceptions import (
 
 MULTISELECT_PROTOCOL_ID = "/multistream/1.0.0"
 PROTOCOL_NOT_FOUND_MSG = "na"
+DEFAULT_NEGOTIATE_TIMEOUT = 5
 
 
 class MultiselectClient(IMultiselectClient):
@@ -40,6 +43,7 @@ class MultiselectClient(IMultiselectClient):
         try:
             handshake_contents = await communicator.read()
         except MultiselectCommunicatorError as error:
             raise MultiselectClientError() from error
@@ -47,7 +51,10 @@ class MultiselectClient(IMultiselectClient):
             raise MultiselectClientError("multiselect protocol ID mismatch")
 
     async def select_one_of(
-        self, protocols: Sequence[TProtocol], communicator: IMultiselectCommunicator
+        self,
+        protocols: Sequence[TProtocol],
+        communicator: IMultiselectCommunicator,
+        negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
     ) -> TProtocol:
         """
         For each protocol, send message to multiselect selecting protocol and
@@ -56,22 +63,32 @@ class MultiselectClient(IMultiselectClient):
 
         :param protocols: protocols to select from
         :param communicator: communicator to use to communicate with counterparty
+        :param negotiate_timeout: timeout for negotiation
         :return: selected protocol
         :raise MultiselectClientError: raised when protocol negotiation failed
         """
-        await self.handshake(communicator)
-
-        for protocol in protocols:
-            try:
-                selected_protocol = await self.try_select(communicator, protocol)
-                return selected_protocol
-            except MultiselectClientError:
-                pass
-
-        raise MultiselectClientError("protocols not supported")
+        try:
+            with trio.fail_after(negotiate_timeout):
+                await self.handshake(communicator)
+
+                for protocol in protocols:
+                    try:
+                        selected_protocol = await self.try_select(
+                            communicator, protocol
+                        )
+                        return selected_protocol
+                    except MultiselectClientError:
+                        pass
+
+                raise MultiselectClientError("protocols not supported")
+        except trio.TooSlowError:
+            raise MultiselectClientError("response timed out")
 
     async def query_multistream_command(
-        self, communicator: IMultiselectCommunicator, command: str
+        self,
+        communicator: IMultiselectCommunicator,
+        command: str,
+        response_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
     ) -> list[str]:
         """
         Send a multistream-select command over the given communicator and return
@@ -79,26 +96,32 @@ class MultiselectClient(IMultiselectClient):
 
         :param communicator: communicator to use to communicate with counterparty
         :param command: supported multistream-select command(e.g., ls)
+        :param response_timeout: timeout for the response
         :raise MultiselectClientError: If the communicator fails to process data.
         :return: list of strings representing the response from peer.
         """
-        await self.handshake(communicator)
-
-        if command == "ls":
-            try:
-                await communicator.write("ls")
-            except MultiselectCommunicatorError as error:
-                raise MultiselectClientError() from error
-        else:
-            raise ValueError("Command not supported")
-
-        try:
-            response = await communicator.read()
-            response_list = response.strip().splitlines()
-        except MultiselectCommunicatorError as error:
-            raise MultiselectClientError() from error
-
-        return response_list
+        try:
+            with trio.fail_after(response_timeout):
+                await self.handshake(communicator)
+
+                if command == "ls":
+                    try:
+                        await communicator.write("ls")
+                    except MultiselectCommunicatorError as error:
+                        raise MultiselectClientError() from error
+                else:
+                    raise ValueError("Command not supported")
+
+                try:
+                    response = await communicator.read()
+                    response_list = response.strip().splitlines()
+                except MultiselectCommunicatorError as error:
+                    raise MultiselectClientError() from error
+
+                return response_list
+        except trio.TooSlowError:
+            raise MultiselectClientError("command response timed out")
 
     async def try_select(
         self, communicator: IMultiselectCommunicator, protocol: TProtocol
@@ -118,6 +141,7 @@ class MultiselectClient(IMultiselectClient):
         try:
             response = await communicator.read()
         except MultiselectCommunicatorError as error:
             raise MultiselectClientError() from error
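A hedged usage sketch of the client side; `client` and `communicator` are assumed to be constructed by the surrounding code, and `TProtocol` comes from `libp2p.custom_types`:

selected = await client.select_one_of(
    [TProtocol("/echo/1.0.0")],
    communicator,
    negotiate_timeout=10,  # override DEFAULT_NEGOTIATE_TIMEOUT from application code
)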

View File: libp2p/pubsub/floodsub.py

@@ -12,15 +12,9 @@ from libp2p.abc import (
 from libp2p.custom_types import (
     TProtocol,
 )
-from libp2p.network.stream.exceptions import (
-    StreamClosed,
-)
 from libp2p.peer.id import (
     ID,
 )
-from libp2p.utils import (
-    encode_varint_prefixed,
-)
 
 from .exceptions import (
     PubsubRouterError,
@@ -120,13 +114,7 @@ class FloodSub(IPubsubRouter):
             if peer_id not in pubsub.peers:
                 continue
 
             stream = pubsub.peers[peer_id]
 
-            # FIXME: We should add a `WriteMsg` similar to write delimited messages.
-            # Ref: https://github.com/libp2p/go-libp2p-pubsub/blob/master/comm.go#L107
-            try:
-                await stream.write(encode_varint_prefixed(rpc_msg.SerializeToString()))
-            except StreamClosed:
-                logger.debug("Fail to publish message to %s: stream closed", peer_id)
-                pubsub._handle_dead_peer(peer_id)
+            await pubsub.write_msg(stream, rpc_msg)
 
     async def join(self, topic: str) -> None:
         """

View File: libp2p/pubsub/gossipsub.py

@@ -24,14 +24,13 @@ from libp2p.abc import (
 from libp2p.custom_types import (
     TProtocol,
 )
-from libp2p.network.stream.exceptions import (
-    StreamClosed,
-)
 from libp2p.peer.id import (
     ID,
 )
 from libp2p.peer.peerinfo import (
     PeerInfo,
+    peer_info_from_bytes,
+    peer_info_to_bytes,
 )
 from libp2p.peer.peerstore import (
     PERMANENT_ADDR_TTL,
@@ -42,9 +41,6 @@ from libp2p.pubsub import (
 from libp2p.tools.async_service import (
     Service,
 )
-from libp2p.utils import (
-    encode_varint_prefixed,
-)
 
 from .exceptions import (
     NoPubsubAttached,
@@ -92,6 +88,12 @@ class GossipSub(IPubsubRouter, Service):
     direct_connect_initial_delay: float
     direct_connect_interval: int
 
+    do_px: bool
+    px_peers_count: int
+    back_off: dict[str, dict[ID, int]]
+    prune_back_off: int
+    unsubscribe_back_off: int
+
     def __init__(
         self,
         protocols: Sequence[TProtocol],
@@ -106,6 +108,10 @@ class GossipSub(IPubsubRouter, Service):
         heartbeat_interval: int = 120,
         direct_connect_initial_delay: float = 0.1,
         direct_connect_interval: int = 300,
+        do_px: bool = False,
+        px_peers_count: int = 16,
+        prune_back_off: int = 60,
+        unsubscribe_back_off: int = 10,
     ) -> None:
         self.protocols = list(protocols)
         self.pubsub = None
@@ -140,6 +146,12 @@ class GossipSub(IPubsubRouter, Service):
         self.direct_connect_initial_delay = direct_connect_initial_delay
         self.time_since_last_publish = {}
 
+        self.do_px = do_px
+        self.px_peers_count = px_peers_count
+        self.back_off = dict()
+        self.prune_back_off = prune_back_off
+        self.unsubscribe_back_off = unsubscribe_back_off
+
     async def run(self) -> None:
         self.manager.run_daemon_task(self.heartbeat)
         if len(self.direct_peers) > 0:
@@ -249,14 +261,10 @@ class GossipSub(IPubsubRouter, Service):
             if peer_id not in self.pubsub.peers:
                 continue
 
             stream = self.pubsub.peers[peer_id]
 
-            # FIXME: We should add a `WriteMsg` similar to write delimited messages.
-            # Ref: https://github.com/libp2p/go-libp2p-pubsub/blob/master/comm.go#L107
             # TODO: Go use `sendRPC`, which possibly piggybacks gossip/control messages.
-            try:
-                await stream.write(encode_varint_prefixed(rpc_msg.SerializeToString()))
-            except StreamClosed:
-                logger.debug("Fail to publish message to %s: stream closed", peer_id)
-                self.pubsub._handle_dead_peer(peer_id)
+            await self.pubsub.write_msg(stream, rpc_msg)
 
         for topic in pubsub_msg.topicIDs:
             self.time_since_last_publish[topic] = int(time.time())
@@ -334,15 +342,22 @@ class GossipSub(IPubsubRouter, Service):
             self.mesh[topic] = set()
 
         topic_in_fanout: bool = topic in self.fanout
-        fanout_peers: set[ID] = self.fanout[topic] if topic_in_fanout else set()
+        fanout_peers: set[ID] = set()
+        if topic_in_fanout:
+            for peer in self.fanout[topic]:
+                if self._check_back_off(peer, topic):
+                    continue
+                fanout_peers.add(peer)
+
         fanout_size = len(fanout_peers)
         if not topic_in_fanout or (topic_in_fanout and fanout_size < self.degree):
             # There are less than D peers (let this number be x)
             # in the fanout for a topic (or the topic is not in the fanout).
             # Selects the remaining number of peers (D-x) from peers.gossipsub[topic].
-            if topic in self.pubsub.peer_topics:
+            if self.pubsub is not None and topic in self.pubsub.peer_topics:
                 selected_peers = self._get_in_topic_gossipsub_peers_from_minus(
-                    topic, self.degree - fanout_size, fanout_peers
+                    topic, self.degree - fanout_size, fanout_peers, True
                 )
                 # Combine fanout peers with selected peers
                 fanout_peers.update(selected_peers)
@@ -369,7 +384,8 @@ class GossipSub(IPubsubRouter, Service):
             return
         # Notify the peers in mesh[topic] with a PRUNE(topic) message
         for peer in self.mesh[topic]:
-            await self.emit_prune(topic, peer)
+            await self.emit_prune(topic, peer, self.do_px, True)
+            self._add_back_off(peer, topic, True)
 
         # Forget mesh[topic]
         self.mesh.pop(topic, None)
@@ -459,8 +475,8 @@ class GossipSub(IPubsubRouter, Service):
                 self.fanout_heartbeat()
                 # Get the peers to send IHAVE to
                 peers_to_gossip = self.gossip_heartbeat()
-                # Pack GRAFT, PRUNE and IHAVE for the same peer into one control message and
-                # send it
+                # Pack(piggyback) GRAFT, PRUNE and IHAVE for the same peer into
+                # one control message and send it
                 await self._emit_control_msgs(
                     peers_to_graft, peers_to_prune, peers_to_gossip
                 )
@@ -505,7 +521,7 @@ class GossipSub(IPubsubRouter, Service):
             if num_mesh_peers_in_topic < self.degree_low:
                 # Select D - |mesh[topic]| peers from peers.gossipsub[topic] - mesh[topic]  # noqa: E501
                 selected_peers = self._get_in_topic_gossipsub_peers_from_minus(
-                    topic, self.degree - num_mesh_peers_in_topic, self.mesh[topic]
+                    topic, self.degree - num_mesh_peers_in_topic, self.mesh[topic], True
                 )
 
                 for peer in selected_peers:
@@ -528,82 +544,97 @@ class GossipSub(IPubsubRouter, Service):
                     peers_to_prune[peer].append(topic)
         return peers_to_graft, peers_to_prune
 
-    def fanout_heartbeat(self) -> None:
-        # Note: the comments here are the exact pseudocode from the spec
-        for topic in list(self.fanout):
-            if (
-                self.pubsub is not None
-                and topic not in self.pubsub.peer_topics
-                and self.time_since_last_publish.get(topic, 0) + self.time_to_live
-                < int(time.time())
-            ):
-                # Remove topic from fanout
-                del self.fanout[topic]
-            else:
-                # Check if fanout peers are still in the topic and remove the ones that are not  # noqa: E501
-                # ref: https://github.com/libp2p/go-libp2p-pubsub/blob/01b9825fbee1848751d90a8469e3f5f43bac8466/gossipsub.go#L498-L504  # noqa: E501
-                in_topic_fanout_peers: list[ID] = []
-                if self.pubsub is not None:
-                    in_topic_fanout_peers = [
-                        peer
-                        for peer in self.fanout[topic]
-                        if peer in self.pubsub.peer_topics[topic]
-                    ]
-                self.fanout[topic] = set(in_topic_fanout_peers)
-                num_fanout_peers_in_topic = len(self.fanout[topic])
-
-                # If |fanout[topic]| < D
-                if num_fanout_peers_in_topic < self.degree:
-                    # Select D - |fanout[topic]| peers from peers.gossipsub[topic] - fanout[topic]  # noqa: E501
-                    selected_peers = self._get_in_topic_gossipsub_peers_from_minus(
-                        topic,
-                        self.degree - num_fanout_peers_in_topic,
-                        self.fanout[topic],
-                    )
-                    # Add the peers to fanout[topic]
-                    self.fanout[topic].update(selected_peers)
+    def _handle_topic_heartbeat(
+        self,
+        topic: str,
+        current_peers: set[ID],
+        is_fanout: bool = False,
+        peers_to_gossip: DefaultDict[ID, dict[str, list[str]]] | None = None,
+    ) -> tuple[set[ID], bool]:
+        """
+        Helper method to handle heartbeat for a single topic,
+        supporting both fanout and gossip.
+
+        :param topic: The topic to handle
+        :param current_peers: Current set of peers in the topic
+        :param is_fanout: Whether this is a fanout topic (affects expiration check)
+        :param peers_to_gossip: Optional dictionary to store peers to gossip to
+        :return: Tuple of (updated_peers, should_remove_topic)
+        """
+        if self.pubsub is None:
+            raise NoPubsubAttached
+
+        # Skip if no peers have subscribed to the topic
+        if topic not in self.pubsub.peer_topics:
+            return current_peers, False
+
+        # For fanout topics, check if we should remove the topic
+        if is_fanout:
+            if self.time_since_last_publish.get(topic, 0) + self.time_to_live < int(
+                time.time()
+            ):
+                return set(), True
+
+        # Check if peers are still in the topic and remove the ones that are not
+        in_topic_peers: set[ID] = {
+            peer for peer in current_peers if peer in self.pubsub.peer_topics[topic]
+        }
+
+        # If we need more peers to reach target degree
+        if len(in_topic_peers) < self.degree:
+            # Select additional peers from peers.gossipsub[topic]
+            selected_peers = self._get_in_topic_gossipsub_peers_from_minus(
+                topic, self.degree - len(in_topic_peers), in_topic_peers, True
+            )
+            # Add the selected peers
+            in_topic_peers.update(selected_peers)
+
+        # Handle gossip if requested
+        if peers_to_gossip is not None:
+            msg_ids = self.mcache.window(topic)
+            if msg_ids:
+                # Select D peers from peers.gossipsub[topic] excluding current peers
+                peers_to_emit_ihave_to = self._get_in_topic_gossipsub_peers_from_minus(
+                    topic, self.degree, current_peers, True
+                )
+                msg_id_strs = [str(msg_id) for msg_id in msg_ids]
+                for peer in peers_to_emit_ihave_to:
+                    peers_to_gossip[peer][topic] = msg_id_strs
+
+        return in_topic_peers, False
+
+    def fanout_heartbeat(self) -> None:
+        """
+        Maintain fanout topics by:
+        1. Removing expired topics
+        2. Removing peers that are no longer in the topic
+        3. Adding new peers if needed to maintain the target degree
+        """
+        for topic in list(self.fanout):
+            updated_peers, should_remove = self._handle_topic_heartbeat(
+                topic, self.fanout[topic], is_fanout=True
+            )
+            if should_remove:
+                del self.fanout[topic]
+            else:
+                self.fanout[topic] = updated_peers
 
     def gossip_heartbeat(self) -> DefaultDict[ID, dict[str, list[str]]]:
         peers_to_gossip: DefaultDict[ID, dict[str, list[str]]] = defaultdict(dict)
+        # Handle mesh topics
         for topic in self.mesh:
-            msg_ids = self.mcache.window(topic)
-            if msg_ids:
-                if self.pubsub is None:
-                    raise NoPubsubAttached
-                # Get all pubsub peers in a topic and only add them if they are
-                # gossipsub peers too
-                if topic in self.pubsub.peer_topics:
-                    # Select D peers from peers.gossipsub[topic]
-                    peers_to_emit_ihave_to = (
-                        self._get_in_topic_gossipsub_peers_from_minus(
-                            topic, self.degree, self.mesh[topic]
-                        )
-                    )
-
-                    msg_id_strs = [str(msg_id) for msg_id in msg_ids]
-                    for peer in peers_to_emit_ihave_to:
-                        peers_to_gossip[peer][topic] = msg_id_strs
-
-        # TODO: Refactor and Dedup. This section is the roughly the same as the above.
-        # Do the same for fanout, for all topics not already hit in mesh
+            self._handle_topic_heartbeat(
+                topic, self.mesh[topic], peers_to_gossip=peers_to_gossip
+            )
+
+        # Handle fanout topics that aren't in mesh
         for topic in self.fanout:
-            msg_ids = self.mcache.window(topic)
-            if msg_ids:
-                if self.pubsub is None:
-                    raise NoPubsubAttached
-                # Get all pubsub peers in topic and only add if they are
-                # gossipsub peers also
-                if topic in self.pubsub.peer_topics:
-                    # Select D peers from peers.gossipsub[topic]
-                    peers_to_emit_ihave_to = (
-                        self._get_in_topic_gossipsub_peers_from_minus(
-                            topic, self.degree, self.fanout[topic]
-                        )
-                    )
-
-                    msg_id_strs = [str(msg) for msg in msg_ids]
-                    for peer in peers_to_emit_ihave_to:
-                        peers_to_gossip[peer][topic] = msg_id_strs
+            if topic not in self.mesh:
+                self._handle_topic_heartbeat(
+                    topic, self.fanout[topic], peers_to_gossip=peers_to_gossip
+                )
 
         return peers_to_gossip
 
     @staticmethod
@@ -638,7 +669,11 @@ class GossipSub(IPubsubRouter, Service):
         return selection
 
     def _get_in_topic_gossipsub_peers_from_minus(
-        self, topic: str, num_to_select: int, minus: Iterable[ID]
+        self,
+        topic: str,
+        num_to_select: int,
+        minus: Iterable[ID],
+        backoff_check: bool = False,
     ) -> list[ID]:
         if self.pubsub is None:
             raise NoPubsubAttached
@@ -647,8 +682,88 @@ class GossipSub(IPubsubRouter, Service):
             for peer_id in self.pubsub.peer_topics[topic]
             if self.peer_protocol[peer_id] == PROTOCOL_ID
         }
+        if backoff_check:
+            # filter out peers that are in back off for this topic
+            gossipsub_peers_in_topic = {
+                peer_id
+                for peer_id in gossipsub_peers_in_topic
+                if self._check_back_off(peer_id, topic) is False
+            }
         return self.select_from_minus(num_to_select, gossipsub_peers_in_topic, minus)
 
+    def _add_back_off(
+        self, peer: ID, topic: str, is_unsubscribe: bool, backoff_duration: int = 0
+    ) -> None:
+        """
+        Add back off for a peer in a topic.
+
+        :param peer: peer to add back off for
+        :param topic: topic to add back off for
+        :param is_unsubscribe: whether this is an unsubscribe operation
+        :param backoff_duration: duration of back off in seconds, if 0, use default
+        """
+        if topic not in self.back_off:
+            self.back_off[topic] = dict()
+
+        backoff_till = int(time.time())
+        if backoff_duration > 0:
+            backoff_till += backoff_duration
+        else:
+            if is_unsubscribe:
+                backoff_till += self.unsubscribe_back_off
+            else:
+                backoff_till += self.prune_back_off
+
+        if peer not in self.back_off[topic]:
+            self.back_off[topic][peer] = backoff_till
+        else:
+            self.back_off[topic][peer] = max(self.back_off[topic][peer], backoff_till)
+
+    def _check_back_off(self, peer: ID, topic: str) -> bool:
+        """
+        Check if a peer is in back off for a topic and cleanup expired back off entries.
+
+        :param peer: peer to check
+        :param topic: topic to check
+        :return: True if the peer is in back off, False otherwise
+        """
+        if topic not in self.back_off or peer not in self.back_off[topic]:
+            return False
+        if self.back_off[topic].get(peer, 0) > int(time.time()):
+            return True
+        else:
+            del self.back_off[topic][peer]
+            return False
+
+    async def _do_px(self, px_peers: list[rpc_pb2.PeerInfo]) -> None:
+        if len(px_peers) > self.px_peers_count:
+            px_peers = px_peers[: self.px_peers_count]
+
+        for peer in px_peers:
+            peer_id: ID = ID(peer.peerID)
+            if self.pubsub and peer_id in self.pubsub.peers:
+                continue
+
+            try:
+                peer_info = peer_info_from_bytes(peer.signedPeerRecord)
+                try:
+                    if self.pubsub is None:
+                        raise NoPubsubAttached
+                    await self.pubsub.host.connect(peer_info)
+                except Exception as e:
+                    logger.warning(
+                        "failed to connect to px peer %s: %s",
+                        peer_id,
+                        e,
+                    )
+                    continue
+            except Exception as e:
+                logger.warning(
+                    "failed to parse peer info from px peer %s: %s",
+                    peer_id,
+                    e,
+                )
+                continue
+
     # RPC handlers
 
     async def handle_ihave(
@@ -705,8 +820,6 @@ class GossipSub(IPubsubRouter, Service):
 
         packet.publish.extend(msgs_to_forward)
 
-        # 2) Serialize that packet
-        rpc_msg: bytes = packet.SerializeToString()
 
         if self.pubsub is None:
             raise NoPubsubAttached
@@ -720,14 +833,7 @@ class GossipSub(IPubsubRouter, Service):
         peer_stream = self.pubsub.peers[sender_peer_id]
 
         # 4) And write the packet to the stream
-        try:
-            await peer_stream.write(encode_varint_prefixed(rpc_msg))
-        except StreamClosed:
-            logger.debug(
-                "Fail to respond to iwant request from %s: stream closed",
-                sender_peer_id,
-            )
-            self.pubsub._handle_dead_peer(sender_peer_id)
+        await self.pubsub.write_msg(peer_stream, packet)
 
     async def handle_graft(
         self, graft_msg: rpc_pb2.ControlGraft, sender_peer_id: ID
@@ -741,24 +847,46 @@ class GossipSub(IPubsubRouter, Service):
                 logger.warning(
                     "GRAFT: ignoring request from direct peer %s", sender_peer_id
                 )
-                await self.emit_prune(topic, sender_peer_id)
+                await self.emit_prune(topic, sender_peer_id, False, False)
                 return
 
+            if self._check_back_off(sender_peer_id, topic):
+                logger.warning(
+                    "GRAFT: ignoring request from %s, back off until %d",
+                    sender_peer_id,
+                    self.back_off[topic][sender_peer_id],
+                )
+                self._add_back_off(sender_peer_id, topic, False)
+                await self.emit_prune(topic, sender_peer_id, False, False)
+                return
+
             if sender_peer_id not in self.mesh[topic]:
                 self.mesh[topic].add(sender_peer_id)
         else:
             # Respond with PRUNE if not subscribed to the topic
-            await self.emit_prune(topic, sender_peer_id)
+            await self.emit_prune(topic, sender_peer_id, self.do_px, False)
 
     async def handle_prune(
         self, prune_msg: rpc_pb2.ControlPrune, sender_peer_id: ID
     ) -> None:
         topic: str = prune_msg.topicID
+        backoff_till: int = prune_msg.backoff
+        px_peers: list[rpc_pb2.PeerInfo] = []
+        for peer in prune_msg.peers:
+            px_peers.append(peer)
 
         # Remove peer from mesh for topic
         if topic in self.mesh:
+            if backoff_till > 0:
+                self._add_back_off(sender_peer_id, topic, False, backoff_till)
+            else:
+                self._add_back_off(sender_peer_id, topic, False)
+
             self.mesh[topic].discard(sender_peer_id)
 
+        if px_peers:
+            await self._do_px(px_peers)
+
     # RPC emitters
 
     def pack_control_msgs(
@@ -807,15 +935,36 @@ class GossipSub(IPubsubRouter, Service):
 
         await self.emit_control_message(control_msg, id)
 
-    async def emit_prune(self, topic: str, id: ID) -> None:
+    async def emit_prune(
+        self, topic: str, to_peer: ID, do_px: bool, is_unsubscribe: bool
+    ) -> None:
         """Emit prune message, sent to to_peer, for topic."""
         prune_msg: rpc_pb2.ControlPrune = rpc_pb2.ControlPrune()
         prune_msg.topicID = topic
 
+        back_off_duration = self.prune_back_off
+        if is_unsubscribe:
+            back_off_duration = self.unsubscribe_back_off
+        prune_msg.backoff = back_off_duration
+
+        if do_px:
+            exchange_peers = self._get_in_topic_gossipsub_peers_from_minus(
+                topic, self.px_peers_count, [to_peer]
+            )
+            for peer in exchange_peers:
+                if self.pubsub is None:
+                    raise NoPubsubAttached
+                peer_info = self.pubsub.host.get_peerstore().peer_info(peer)
+                signed_peer_record: rpc_pb2.PeerInfo = rpc_pb2.PeerInfo()
+                signed_peer_record.peerID = peer.to_bytes()
+                signed_peer_record.signedPeerRecord = peer_info_to_bytes(peer_info)
+                prune_msg.peers.append(signed_peer_record)
+
         control_msg: rpc_pb2.ControlMessage = rpc_pb2.ControlMessage()
         control_msg.prune.extend([prune_msg])
 
-        await self.emit_control_message(control_msg, id)
+        await self.emit_control_message(control_msg, to_peer)
 
     async def emit_control_message(
         self, control_msg: rpc_pb2.ControlMessage, to_peer: ID
@@ -826,8 +975,6 @@ class GossipSub(IPubsubRouter, Service):
         packet: rpc_pb2.RPC = rpc_pb2.RPC()
         packet.control.CopyFrom(control_msg)
 
-        rpc_msg: bytes = packet.SerializeToString()
-
         # Get stream for peer from pubsub
         if to_peer not in self.pubsub.peers:
             logger.debug(
@@ -837,8 +984,4 @@ class GossipSub(IPubsubRouter, Service):
         peer_stream = self.pubsub.peers[to_peer]
 
         # Write rpc to stream
-        try:
-            await peer_stream.write(encode_varint_prefixed(rpc_msg))
-        except StreamClosed:
-            logger.debug("Fail to emit control message to %s: stream closed", to_peer)
-            self.pubsub._handle_dead_peer(to_peer)
+        await self.pubsub.write_msg(peer_stream, packet)
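A hedged construction sketch showing the new knobs; parameter names match the diff above, the values are illustrative, `PROTOCOL_ID` is the gossipsub protocol constant from the surrounding module, and the remaining constructor arguments keep their defaults:

gossipsub = GossipSub(
    protocols=[PROTOCOL_ID],
    degree=6,
    degree_low=4,
    degree_high=12,
    do_px=True,  # attach peer-exchange records to outgoing PRUNEs
    px_peers_count=16,  # cap on peers exchanged per PRUNE
    prune_back_off=60,  # seconds a pruned peer must wait before re-GRAFTing
    unsubscribe_back_off=10,  # shorter back-off when we unsubscribed ourselves
)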

View File: libp2p/pubsub/pb/rpc.proto

@@ -47,6 +47,13 @@ message ControlGraft {
 
 message ControlPrune {
   optional string topicID = 1;
+  repeated PeerInfo peers = 2;
+  optional uint64 backoff = 3;
+}
+
+message PeerInfo {
+  optional bytes peerID = 1;
+  optional bytes signedPeerRecord = 2;
 }
 
 message TopicDescriptor {
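A short sketch of populating the extended ControlPrune from Python; the import paths match the generated module and the peerinfo helpers above, while `peer_id` and `peer_info` are assumed to be in scope:

from libp2p.pubsub.pb import rpc_pb2
from libp2p.peer.peerinfo import peer_info_to_bytes

prune = rpc_pb2.ControlPrune()
prune.topicID = "my-topic"
prune.backoff = 60  # seconds the receiver should back off before re-GRAFTing
px = prune.peers.add()  # appends and returns a new rpc_pb2.PeerInfo
px.peerID = peer_id.to_bytes()  # assumed ID instance
px.signedPeerRecord = peer_info_to_bytes(peer_info)  # assumed PeerInfo instance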

View File: libp2p/pubsub/pb/rpc_pb2.py

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 # Generated by the protocol buffer compiler. DO NOT EDIT!
-# source: libp2p/pubsub/pb/rpc.proto
+# source: rpc.proto
 """Generated protocol buffer code."""
 from google.protobuf.internal import builder as _builder
 from google.protobuf import descriptor as _descriptor
@@ -13,37 +13,39 @@ _sym_db = _symbol_database.Default()
 
-DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1alibp2p/pubsub/pb/rpc.proto\x12\tpubsub.pb\"\xb4\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"\x1f\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02')
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\trpc.proto\x12\tpubsub.pb\"\xb4\x01\n\x03RPC\x12-\n\rsubscriptions\x18\x01 \x03(\x0b\x32\x16.pubsub.pb.RPC.SubOpts\x12#\n\x07publish\x18\x02 \x03(\x0b\x32\x12.pubsub.pb.Message\x12*\n\x07\x63ontrol\x18\x03 \x01(\x0b\x32\x19.pubsub.pb.ControlMessage\x1a-\n\x07SubOpts\x12\x11\n\tsubscribe\x18\x01 \x01(\x08\x12\x0f\n\x07topicid\x18\x02 \x01(\t\"i\n\x07Message\x12\x0f\n\x07\x66rom_id\x18\x01 \x01(\x0c\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x12\r\n\x05seqno\x18\x03 \x01(\x0c\x12\x10\n\x08topicIDs\x18\x04 \x03(\t\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x12\x0b\n\x03key\x18\x06 \x01(\x0c\"\xb0\x01\n\x0e\x43ontrolMessage\x12&\n\x05ihave\x18\x01 \x03(\x0b\x32\x17.pubsub.pb.ControlIHave\x12&\n\x05iwant\x18\x02 \x03(\x0b\x32\x17.pubsub.pb.ControlIWant\x12&\n\x05graft\x18\x03 \x03(\x0b\x32\x17.pubsub.pb.ControlGraft\x12&\n\x05prune\x18\x04 \x03(\x0b\x32\x17.pubsub.pb.ControlPrune\"3\n\x0c\x43ontrolIHave\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\x12\n\nmessageIDs\x18\x02 \x03(\t\"\"\n\x0c\x43ontrolIWant\x12\x12\n\nmessageIDs\x18\x01 \x03(\t\"\x1f\n\x0c\x43ontrolGraft\x12\x0f\n\x07topicID\x18\x01 \x01(\t\"T\n\x0c\x43ontrolPrune\x12\x0f\n\x07topicID\x18\x01 \x01(\t\x12\"\n\x05peers\x18\x02 \x03(\x0b\x32\x13.pubsub.pb.PeerInfo\x12\x0f\n\x07\x62\x61\x63koff\x18\x03 \x01(\x04\"4\n\x08PeerInfo\x12\x0e\n\x06peerID\x18\x01 \x01(\x0c\x12\x18\n\x10signedPeerRecord\x18\x02 \x01(\x0c\"\x87\x03\n\x0fTopicDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x31\n\x04\x61uth\x18\x02 \x01(\x0b\x32#.pubsub.pb.TopicDescriptor.AuthOpts\x12/\n\x03\x65nc\x18\x03 \x01(\x0b\x32\".pubsub.pb.TopicDescriptor.EncOpts\x1a|\n\x08\x41uthOpts\x12:\n\x04mode\x18\x01 \x01(\x0e\x32,.pubsub.pb.TopicDescriptor.AuthOpts.AuthMode\x12\x0c\n\x04keys\x18\x02 \x03(\x0c\"&\n\x08\x41uthMode\x12\x08\n\x04NONE\x10\x00\x12\x07\n\x03KEY\x10\x01\x12\x07\n\x03WOT\x10\x02\x1a\x83\x01\n\x07\x45ncOpts\x12\x38\n\x04mode\x18\x01 \x01(\x0e\x32*.pubsub.pb.TopicDescriptor.EncOpts.EncMode\x12\x11\n\tkeyHashes\x18\x02 \x03(\x0c\"+\n\x07\x45ncMode\x12\x08\n\x04NONE\x10\x00\x12\r\n\tSHAREDKEY\x10\x01\x12\x07\n\x03WOT\x10\x02')
 
 _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
-_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.pubsub.pb.rpc_pb2', globals())
+_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'rpc_pb2', globals())
 if _descriptor._USE_C_DESCRIPTORS == False:
 
   DESCRIPTOR._options = None
-  _RPC._serialized_start=42
-  _RPC._serialized_end=222
-  _RPC_SUBOPTS._serialized_start=177
-  _RPC_SUBOPTS._serialized_end=222
-  _MESSAGE._serialized_start=224
-  _MESSAGE._serialized_end=329
-  _CONTROLMESSAGE._serialized_start=332
-  _CONTROLMESSAGE._serialized_end=508
-  _CONTROLIHAVE._serialized_start=510
-  _CONTROLIHAVE._serialized_end=561
-  _CONTROLIWANT._serialized_start=563
-  _CONTROLIWANT._serialized_end=597
-  _CONTROLGRAFT._serialized_start=599
-  _CONTROLGRAFT._serialized_end=630
-  _CONTROLPRUNE._serialized_start=632
-  _CONTROLPRUNE._serialized_end=663
-  _TOPICDESCRIPTOR._serialized_start=666
-  _TOPICDESCRIPTOR._serialized_end=1057
-  _TOPICDESCRIPTOR_AUTHOPTS._serialized_start=799
-  _TOPICDESCRIPTOR_AUTHOPTS._serialized_end=923
-  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_start=885
-  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_end=923
-  _TOPICDESCRIPTOR_ENCOPTS._serialized_start=926
-  _TOPICDESCRIPTOR_ENCOPTS._serialized_end=1057
-  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_start=1014
-  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_end=1057
+  _RPC._serialized_start=25
+  _RPC._serialized_end=205
+  _RPC_SUBOPTS._serialized_start=160
+  _RPC_SUBOPTS._serialized_end=205
+  _MESSAGE._serialized_start=207
+  _MESSAGE._serialized_end=312
+  _CONTROLMESSAGE._serialized_start=315
+  _CONTROLMESSAGE._serialized_end=491
+  _CONTROLIHAVE._serialized_start=493
+  _CONTROLIHAVE._serialized_end=544
+  _CONTROLIWANT._serialized_start=546
+  _CONTROLIWANT._serialized_end=580
+  _CONTROLGRAFT._serialized_start=582
+  _CONTROLGRAFT._serialized_end=613
+  _CONTROLPRUNE._serialized_start=615
+  _CONTROLPRUNE._serialized_end=699
+  _PEERINFO._serialized_start=701
+  _PEERINFO._serialized_end=753
+  _TOPICDESCRIPTOR._serialized_start=756
+  _TOPICDESCRIPTOR._serialized_end=1147
+  _TOPICDESCRIPTOR_AUTHOPTS._serialized_start=889
+  _TOPICDESCRIPTOR_AUTHOPTS._serialized_end=1013
+  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_start=975
+  _TOPICDESCRIPTOR_AUTHOPTS_AUTHMODE._serialized_end=1013
+  _TOPICDESCRIPTOR_ENCOPTS._serialized_start=1016
+  _TOPICDESCRIPTOR_ENCOPTS._serialized_end=1147
+  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_start=1104
+  _TOPICDESCRIPTOR_ENCOPTS_ENCMODE._serialized_end=1147
 # @@protoc_insertion_point(module_scope)

View File: libp2p/pubsub/pb/rpc_pb2.pyi

@@ -179,17 +179,43 @@ class ControlPrune(google.protobuf.message.Message):
     DESCRIPTOR: google.protobuf.descriptor.Descriptor
     TOPICID_FIELD_NUMBER: builtins.int
+    PEERS_FIELD_NUMBER: builtins.int
+    BACKOFF_FIELD_NUMBER: builtins.int
     topicID: builtins.str
+    backoff: builtins.int
+    @property
+    def peers(self) -> google.protobuf.internal.containers.RepeatedCompositeFieldContainer[global___PeerInfo]: ...
     def __init__(
         self,
         *,
         topicID: builtins.str | None = ...,
+        peers: collections.abc.Iterable[global___PeerInfo] | None = ...,
+        backoff: builtins.int | None = ...,
     ) -> None: ...
-    def HasField(self, field_name: typing.Literal["topicID", b"topicID"]) -> builtins.bool: ...
+    def HasField(self, field_name: typing.Literal["backoff", b"backoff", "topicID", b"topicID"]) -> builtins.bool: ...
-    def ClearField(self, field_name: typing.Literal["topicID", b"topicID"]) -> None: ...
+    def ClearField(self, field_name: typing.Literal["backoff", b"backoff", "peers", b"peers", "topicID", b"topicID"]) -> None: ...
 global___ControlPrune = ControlPrune
+@typing.final
+class PeerInfo(google.protobuf.message.Message):
+    DESCRIPTOR: google.protobuf.descriptor.Descriptor
+    PEERID_FIELD_NUMBER: builtins.int
+    SIGNEDPEERRECORD_FIELD_NUMBER: builtins.int
+    peerID: builtins.bytes
+    signedPeerRecord: builtins.bytes
+    def __init__(
+        self,
+        *,
+        peerID: builtins.bytes | None = ...,
+        signedPeerRecord: builtins.bytes | None = ...,
+    ) -> None: ...
+    def HasField(self, field_name: typing.Literal["peerID", b"peerID", "signedPeerRecord", b"signedPeerRecord"]) -> builtins.bool: ...
+    def ClearField(self, field_name: typing.Literal["peerID", b"peerID", "signedPeerRecord", b"signedPeerRecord"]) -> None: ...
+global___PeerInfo = PeerInfo
 @typing.final
 class TopicDescriptor(google.protobuf.message.Message):
     DESCRIPTOR: google.protobuf.descriptor.Descriptor
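A quick sketch of what the new ControlPrune fields look like in use (illustrative only; it assumes the generated module from this PR is importable as libp2p.pubsub.pb.rpc_pb2):

from libp2p.pubsub.pb import rpc_pb2

# PeerInfo records let a pruning peer suggest alternates to reconnect through,
# and backoff tells the pruned peer how long to wait before re-grafting.
prune = rpc_pb2.ControlPrune(
    topicID="example-topic",
    peers=[rpc_pb2.PeerInfo(peerID=b"example-peer", signedPeerRecord=b"")],
    backoff=60,
)
assert prune.HasField("backoff") and len(prune.peers) == 1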

libp2p/pubsub/pubsub.py
View File

@@ -66,6 +66,7 @@ from libp2p.utils import (
     encode_varint_prefixed,
     read_varint_prefixed_bytes,
 )
+from libp2p.utils.varint import encode_uvarint
 
 from .pb import (
     rpc_pb2,
@@ -620,16 +621,22 @@ class Pubsub(Service, IPubsub):
             logger.debug("Fail to message peer %s: stream closed", peer_id)
             self._handle_dead_peer(peer_id)
 
-    async def publish(self, topic_id: str, data: bytes) -> None:
+    async def publish(self, topic_id: str | list[str], data: bytes) -> None:
         """
-        Publish data to a topic.
+        Publish data to a topic or multiple topics.
 
-        :param topic_id: topic which we are going to publish the data to
+        :param topic_id: topic (str) or topics (list[str]) to publish the data to
         :param data: data which we are publishing
         """
+        # Handle both single topic (str) and multiple topics (list[str])
+        if isinstance(topic_id, str):
+            topic_ids = [topic_id]
+        else:
+            topic_ids = topic_id
+
         msg = rpc_pb2.Message(
             data=data,
-            topicIDs=[topic_id],
+            topicIDs=topic_ids,
             # Origin is ourself.
             from_id=self.my_id.to_bytes(),
             seqno=self._next_seqno(),
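The change keeps the single-topic call intact while also accepting a list; a minimal usage sketch, assuming a running Pubsub instance named pubsub:

async def announce(pubsub) -> None:
    # Single topic: unchanged behavior
    await pubsub.publish("topic-a", b"hello")
    # New: one message published to several topics at once
    await pubsub.publish(["topic-a", "topic-b"], b"hello")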
@@ -676,19 +683,18 @@ class Pubsub(Service, IPubsub):
         # TODO: Implement throttle on async validators
         if len(async_topic_validators) > 0:
-            # TODO: Use a better pattern
-            final_result: bool = True
+            # Appends to lists are thread safe in CPython
+            results = []
 
             async def run_async_validator(func: AsyncValidatorFn) -> None:
-                nonlocal final_result
                 result = await func(msg_forwarder, msg)
-                final_result = final_result and result
+                results.append(result)
 
             async with trio.open_nursery() as nursery:
                 for async_validator in async_topic_validators:
                     nursery.start_soon(run_async_validator, async_validator)
 
-            if not final_result:
+            if not all(results):
                 raise ValidationError(f"Validation failed for msg={msg}")
 
     async def push_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
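The same collect-then-check pattern, reduced to a standalone sketch (trio only, no libp2p types) for readers who want the shape of it in isolation:

import trio

async def run_all(checks) -> bool:
    results: list[bool] = []

    async def run_one(check) -> None:
        # Each task only appends; there is no shared read-modify-write as with
        # the old nonlocal flag, so results cannot be clobbered concurrently.
        results.append(await check())

    async with trio.open_nursery() as nursery:
        for check in checks:
            nursery.start_soon(run_one, check)
    # The nursery has joined all tasks here, so results is complete.
    return all(results)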
@@ -773,3 +779,43 @@ class Pubsub(Service, IPubsub):
     def _is_subscribed_to_msg(self, msg: rpc_pb2.Message) -> bool:
         return any(topic in self.topic_ids for topic in msg.topicIDs)
async def write_msg(self, stream: INetStream, rpc_msg: rpc_pb2.RPC) -> bool:
"""
Write an RPC message to a stream with proper error handling.
Implements WriteMsg in the style of go-msgio, which is used in go-libp2p.
Ref: https://github.com/libp2p/go-msgio/blob/master/protoio/uvarint_writer.go#L56
:param stream: stream to write the message to
:param rpc_msg: RPC message to write
:return: True if successful, False if stream was closed
"""
try:
# Calculate message size first
msg_bytes = rpc_msg.SerializeToString()
msg_size = len(msg_bytes)
# Calculate varint size and allocate exact buffer size needed
varint_bytes = encode_uvarint(msg_size)
varint_size = len(varint_bytes)
# Allocate buffer with exact size (like Go's pool.Get())
buf = bytearray(varint_size + msg_size)
# Write varint length prefix to buffer (like Go's binary.PutUvarint())
buf[:varint_size] = varint_bytes
# Write serialized message after varint (like Go's rpc.MarshalTo())
buf[varint_size:] = msg_bytes
# Single write operation (like Go's s.Write(buf))
await stream.write(bytes(buf))
return True
except StreamClosed:
peer_id = stream.muxed_conn.peer_id
logger.debug("Fail to write message to %s: stream closed", peer_id)
self._handle_dead_peer(peer_id)
return False
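For symmetry, the matching read path is short, using the read_varint_prefixed_bytes helper already imported at the top of this file. A sketch only, assuming that helper consumes exactly the uvarint length prefix that write_msg emits:

async def read_msg(self, stream: INetStream) -> rpc_pb2.RPC:
    # Mirror of write_msg: strip the uvarint length prefix, then parse.
    msg_bytes = await read_varint_prefixed_bytes(stream)
    rpc_msg = rpc_pb2.RPC()
    rpc_msg.ParseFromString(msg_bytes)
    return rpc_msg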

libp2p/relay/__init__.py Normal file (28 lines)
View File

@@ -0,0 +1,28 @@
"""
Relay module for libp2p.
This package includes implementations of circuit relay protocols
for enabling connectivity between peers behind NATs or firewalls.
"""
# Import the circuit_v2 module to make it accessible
# through the relay package
from libp2p.relay.circuit_v2 import (
PROTOCOL_ID,
CircuitV2Protocol,
CircuitV2Transport,
RelayDiscovery,
RelayLimits,
RelayResourceManager,
Reservation,
)
__all__ = [
"CircuitV2Protocol",
"CircuitV2Transport",
"PROTOCOL_ID",
"RelayDiscovery",
"RelayLimits",
"RelayResourceManager",
"Reservation",
]

libp2p/relay/circuit_v2/__init__.py Normal file
View File

@@ -0,0 +1,32 @@
"""
Circuit Relay v2 implementation for libp2p.
This package implements the Circuit Relay v2 protocol as specified in:
https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md
"""
from .discovery import (
RelayDiscovery,
)
from .protocol import (
PROTOCOL_ID,
CircuitV2Protocol,
)
from .resources import (
RelayLimits,
RelayResourceManager,
Reservation,
)
from .transport import (
CircuitV2Transport,
)
__all__ = [
"CircuitV2Protocol",
"PROTOCOL_ID",
"RelayLimits",
"Reservation",
"RelayResourceManager",
"CircuitV2Transport",
"RelayDiscovery",
]

libp2p/relay/circuit_v2/config.py Normal file
View File

@@ -0,0 +1,92 @@
"""
Configuration management for Circuit Relay v2.
This module handles configuration for relay roles, resource limits,
and discovery settings.
"""
from dataclasses import (
dataclass,
field,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from .resources import (
RelayLimits,
)
@dataclass
class RelayConfig:
"""Configuration for Circuit Relay v2."""
# Role configuration
enable_hop: bool = False # Whether to act as a relay (hop)
enable_stop: bool = True # Whether to accept relayed connections (stop)
enable_client: bool = True # Whether to use relays for dialing
# Resource limits
limits: RelayLimits | None = None
# Discovery configuration
bootstrap_relays: list[PeerInfo] = field(default_factory=list)
min_relays: int = 3
max_relays: int = 20
discovery_interval: int = 300 # seconds
# Connection configuration
reservation_ttl: int = 3600 # seconds
max_circuit_duration: int = 3600 # seconds
max_circuit_bytes: int = 1024 * 1024 * 1024 # 1GB
def __post_init__(self) -> None:
"""Initialize default values."""
if self.limits is None:
self.limits = RelayLimits(
duration=self.max_circuit_duration,
data=self.max_circuit_bytes,
max_circuit_conns=8,
max_reservations=4,
)
@dataclass
class HopConfig:
"""Configuration specific to relay (hop) nodes."""
# Resource limits per IP
max_reservations_per_ip: int = 8
max_circuits_per_ip: int = 16
# Rate limiting
reservation_rate_per_ip: int = 4 # per minute
circuit_rate_per_ip: int = 8 # per minute
# Resource quotas
max_circuits_total: int = 64
max_reservations_total: int = 32
# Bandwidth limits
max_bandwidth_per_circuit: int = 1024 * 1024 # 1MB/s
max_bandwidth_total: int = 10 * 1024 * 1024 # 10MB/s
@dataclass
class ClientConfig:
"""Configuration specific to relay clients."""
# Relay selection
min_relay_score: float = 0.5
max_relay_latency: float = 1.0 # seconds
# Auto-relay settings
enable_auto_relay: bool = True
auto_relay_timeout: int = 30 # seconds
max_auto_relay_attempts: int = 3
# Reservation management
reservation_refresh_threshold: float = 0.8 # Refresh at 80% of TTL
max_concurrent_reservations: int = 2
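Illustrative wiring of the dataclasses above; field names are taken directly from RelayConfig, values are arbitrary:

config = RelayConfig(
    enable_hop=True,         # also act as a relay for other peers
    min_relays=2,
    discovery_interval=120,  # rediscover relays every two minutes
)
# __post_init__ derives limits from max_circuit_duration/max_circuit_bytes
assert config.limits is not None
assert config.limits.max_circuit_conns == 8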

libp2p/relay/circuit_v2/discovery.py Normal file
View File

@@ -0,0 +1,537 @@
"""
Discovery module for Circuit Relay v2.
This module handles discovering and tracking relay nodes in the network.
"""
from dataclasses import (
dataclass,
)
import logging
import time
from typing import (
Any,
Protocol as TypingProtocol,
cast,
runtime_checkable,
)
import trio
from libp2p.abc import (
IHost,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.peer.id import (
ID,
)
from libp2p.tools.async_service import (
Service,
)
from .pb.circuit_pb2 import (
HopMessage,
)
from .protocol import (
PROTOCOL_ID,
)
from .protocol_buffer import (
StatusCode,
)
logger = logging.getLogger("libp2p.relay.circuit_v2.discovery")
# Constants
MAX_RELAYS_TO_TRACK = 10
DEFAULT_DISCOVERY_INTERVAL = 60 # seconds
STREAM_TIMEOUT = 10 # seconds
# Extended interfaces for type checking
@runtime_checkable
class IHostWithMultiselect(TypingProtocol):
"""Extended host interface with multiselect attribute."""
@property
def multiselect(self) -> Any:
"""Get the multiselect component."""
...
@dataclass
class RelayInfo:
"""Information about a discovered relay."""
peer_id: ID
discovered_at: float
last_seen: float
has_reservation: bool = False
reservation_expires_at: float | None = None
reservation_data_limit: int | None = None
class RelayDiscovery(Service):
"""
Discovery service for Circuit Relay v2 nodes.
This service discovers and keeps track of available relay nodes, and optionally
makes reservations with them.
"""
def __init__(
self,
host: IHost,
auto_reserve: bool = False,
discovery_interval: int = DEFAULT_DISCOVERY_INTERVAL,
max_relays: int = MAX_RELAYS_TO_TRACK,
) -> None:
"""
Initialize the discovery service.
Parameters
----------
host : IHost
The libp2p host this discovery service is running on
auto_reserve : bool
Whether to automatically make reservations with discovered relays
discovery_interval : int
How often to run discovery, in seconds
max_relays : int
Maximum number of relays to track
"""
super().__init__()
self.host = host
self.auto_reserve = auto_reserve
self.discovery_interval = discovery_interval
self.max_relays = max_relays
self._discovered_relays: dict[ID, RelayInfo] = {}
self._protocol_cache: dict[
ID, set[str]
] = {} # Cache protocol info to reduce queries
self.event_started = trio.Event()
self.is_running = False
async def run(self, *, task_status: Any = trio.TASK_STATUS_IGNORED) -> None:
"""Run the discovery service."""
try:
self.is_running = True
self.event_started.set()
task_status.started()
# Main discovery loop
async with trio.open_nursery() as nursery:
# Run initial discovery
nursery.start_soon(self.discover_relays)
# Set up periodic discovery
while True:
await trio.sleep(self.discovery_interval)
if not self.manager.is_running:
break
nursery.start_soon(self.discover_relays)
# Cleanup expired relays and reservations
await self._cleanup_expired()
finally:
self.is_running = False
async def discover_relays(self) -> None:
r"""
Discover relay nodes in the network.
This method queries the network for peers that support the
Circuit Relay v2 protocol.
"""
logger.debug("Starting relay discovery")
try:
# Get connected peers
connected_peers = self.host.get_connected_peers()
logger.debug(
"Checking %d connected peers for relay support", len(connected_peers)
)
# Check each peer if they support the relay protocol
for peer_id in connected_peers:
if peer_id == self.host.get_id():
continue # Skip ourselves
if peer_id in self._discovered_relays:
# Update last seen time for existing relay
self._discovered_relays[peer_id].last_seen = time.time()
continue
# Check if peer supports the relay protocol
with trio.move_on_after(5): # Don't wait too long for protocol info
if await self._supports_relay_protocol(peer_id):
await self._add_relay(peer_id)
# Limit number of relays we track
if len(self._discovered_relays) > self.max_relays:
# Sort by last seen time and keep only the most recent ones
sorted_relays = sorted(
self._discovered_relays.items(),
key=lambda x: x[1].last_seen,
reverse=True,
)
to_remove = sorted_relays[self.max_relays :]
for peer_id, _ in to_remove:
del self._discovered_relays[peer_id]
logger.debug(
"Discovery completed, tracking %d relays", len(self._discovered_relays)
)
except Exception as e:
logger.error("Error during relay discovery: %s", str(e))
async def _supports_relay_protocol(self, peer_id: ID) -> bool:
"""
Check if a peer supports the relay protocol.
Parameters
----------
peer_id : ID
The ID of the peer to check
Returns
-------
bool
True if the peer supports the relay protocol, False otherwise
"""
# Check cache first
if peer_id in self._protocol_cache:
return PROTOCOL_ID in self._protocol_cache[peer_id]
# Method 1: Try peerstore
result = await self._check_via_peerstore(peer_id)
if result is not None:
return result
# Method 2: Try direct stream connection
result = await self._check_via_direct_connection(peer_id)
if result is not None:
return result
# Method 3: Try protocols from mux
result = await self._check_via_mux(peer_id)
if result is not None:
return result
# Default: Cannot determine, assume false
return False
async def _check_via_peerstore(self, peer_id: ID) -> bool | None:
"""Check protocol support via peerstore."""
try:
peerstore = self.host.get_peerstore()
proto_getter = peerstore.get_protocols
if not callable(proto_getter):
return None
try:
# Try to get protocols
proto_result = proto_getter(peer_id)
# Get protocols list
protocols_list = []
if hasattr(proto_result, "__await__"):
protocols_list = await cast(Any, proto_result)
else:
protocols_list = proto_result
# Check result
if protocols_list is not None:
protocols = set(protocols_list)
self._protocol_cache[peer_id] = protocols
return PROTOCOL_ID in protocols
return False
except Exception as e:
logger.debug("Error getting protocols: %s", str(e))
return None
except Exception as e:
logger.debug("Error accessing peerstore: %s", str(e))
return None
async def _check_via_direct_connection(self, peer_id: ID) -> bool | None:
"""Check protocol support via direct connection."""
try:
with trio.fail_after(STREAM_TIMEOUT):
stream = await self.host.new_stream(peer_id, [PROTOCOL_ID])
if stream:
await stream.close()
self._protocol_cache[peer_id] = {PROTOCOL_ID}
return True
return False
except Exception as e:
logger.debug(
"Failed to open relay protocol stream to %s: %s", peer_id, str(e)
)
return None
async def _check_via_mux(self, peer_id: ID) -> bool | None:
"""Check protocol support via mux protocols."""
try:
if not (hasattr(self.host, "get_mux") and self.host.get_mux() is not None):
return None
mux = self.host.get_mux()
if not hasattr(mux, "protocols"):
return None
peer_protocols = set()
# Get protocols from mux with proper type safety
available_protocols = []
if hasattr(mux, "get_protocols"):
# Get protocols with proper typing
mux_protocols = mux.get_protocols()
if isinstance(mux_protocols, (list, tuple)):
available_protocols = list(mux_protocols)
for protocol in available_protocols:
try:
with trio.fail_after(2): # Quick check
# Ensure we have a proper protocol object
# Use string representation since we can't use isinstance
is_tprotocol = str(type(protocol)) == str(type(TProtocol))
protocol_obj = (
protocol if is_tprotocol else TProtocol(str(protocol))
)
stream = await self.host.new_stream(peer_id, [protocol_obj])
if stream:
peer_protocols.add(str(protocol_obj))
await stream.close()
except Exception:
pass # Ignore errors when closing the stream
self._protocol_cache[peer_id] = peer_protocols
protocol_str = str(PROTOCOL_ID)
for protocol in peer_protocols:
if protocol == protocol_str:
return True
return False
except Exception as e:
logger.debug("Error checking protocols via mux: %s", str(e))
return None
async def _add_relay(self, peer_id: ID) -> None:
"""
Add a peer as a relay and optionally make a reservation.
Parameters
----------
peer_id : ID
The ID of the peer to add as a relay
"""
now = time.time()
relay_info = RelayInfo(
peer_id=peer_id,
discovered_at=now,
last_seen=now,
)
self._discovered_relays[peer_id] = relay_info
logger.debug("Added relay %s to discovered relays", peer_id)
# If auto-reserve is enabled, make a reservation with this relay
if self.auto_reserve:
await self.make_reservation(peer_id)
async def make_reservation(self, peer_id: ID) -> bool:
"""
Make a reservation with a relay.
Parameters
----------
peer_id : ID
The ID of the relay to make a reservation with
Returns
-------
bool
True if reservation succeeded, False otherwise
"""
if peer_id not in self._discovered_relays:
logger.error("Cannot make reservation with unknown relay %s", peer_id)
return False
stream = None
try:
logger.debug("Making reservation with relay %s", peer_id)
# Open a stream to the relay with timeout
try:
with trio.fail_after(STREAM_TIMEOUT):
stream = await self.host.new_stream(peer_id, [PROTOCOL_ID])
if not stream:
logger.error("Failed to open stream to relay %s", peer_id)
return False
except trio.TooSlowError:
logger.error("Timeout opening stream to relay %s", peer_id)
return False
try:
# Create and send reservation request
request = HopMessage(
type=HopMessage.RESERVE,
peer=self.host.get_id().to_bytes(),
)
with trio.fail_after(STREAM_TIMEOUT):
await stream.write(request.SerializeToString())
# Wait for response
response_bytes = await stream.read()
if not response_bytes:
logger.error("No response received from relay %s", peer_id)
return False
# Parse response
response = HopMessage()
response.ParseFromString(response_bytes)
# Check if reservation was successful
if response.type == HopMessage.RESERVE and response.HasField(
"status"
):
# Access status code directly from protobuf object
status_code = getattr(response.status, "code", StatusCode.OK)
if status_code == StatusCode.OK:
# Update relay info with reservation details
relay_info = self._discovered_relays[peer_id]
relay_info.has_reservation = True
if response.HasField("reservation") and response.HasField(
"limit"
):
relay_info.reservation_expires_at = (
response.reservation.expire
)
relay_info.reservation_data_limit = response.limit.data
logger.debug(
"Successfully made reservation with relay %s", peer_id
)
return True
# Reservation failed
error_message = "Unknown error"
if response.HasField("status"):
# Access message directly from protobuf object
error_message = getattr(response.status, "message", "")
logger.warning(
"Reservation request rejected by relay %s: %s",
peer_id,
error_message,
)
return False
except trio.TooSlowError:
logger.error(
"Timeout during reservation process with relay %s", peer_id
)
return False
except Exception as e:
logger.error("Error making reservation with relay %s: %s", peer_id, str(e))
return False
finally:
# Always close the stream
if stream:
try:
await stream.close()
except Exception:
pass # Ignore errors when closing the stream
return False
async def _cleanup_expired(self) -> None:
"""Clean up expired relays and reservations."""
now = time.time()
to_remove = []
for peer_id, relay_info in self._discovered_relays.items():
# Check if relay hasn't been seen in a while (3x discovery interval)
if now - relay_info.last_seen > self.discovery_interval * 3:
to_remove.append(peer_id)
continue
# Check if reservation has expired
if (
relay_info.has_reservation
and relay_info.reservation_expires_at
and now > relay_info.reservation_expires_at
):
relay_info.has_reservation = False
relay_info.reservation_expires_at = None
relay_info.reservation_data_limit = None
# If auto-reserve is enabled, try to renew
if self.auto_reserve:
await self.make_reservation(peer_id)
# Remove expired relays
for peer_id in to_remove:
del self._discovered_relays[peer_id]
if peer_id in self._protocol_cache:
del self._protocol_cache[peer_id]
def get_relays(self) -> list[ID]:
"""
Get a list of discovered relay peer IDs.
Returns
-------
list[ID]
List of discovered relay peer IDs
"""
return list(self._discovered_relays.keys())
def get_relay_info(self, peer_id: ID) -> RelayInfo | None:
"""
Get information about a specific relay.
Parameters
----------
peer_id : ID
The ID of the relay to get information about
Returns
-------
Optional[RelayInfo]
Information about the relay, or None if not found
"""
return self._discovered_relays.get(peer_id)
def get_relay(self) -> ID | None:
"""
Get a single relay peer ID for connection purposes.
Prioritizes relays with active reservations.
Returns
-------
Optional[ID]
ID of a discovered relay, or None if no relays found
"""
if not self._discovered_relays:
return None
# First try to find a relay with an active reservation
for peer_id, relay_info in self._discovered_relays.items():
if relay_info and relay_info.has_reservation:
return peer_id
return next(iter(self._discovered_relays.keys()), None)
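A hedged usage sketch for the service above; it assumes background_trio_service from libp2p.tools.async_service (the Service runner used elsewhere in py-libp2p) and an already-started host:

import trio
from libp2p.tools.async_service import background_trio_service

async def track_relays(host) -> None:
    discovery = RelayDiscovery(host, auto_reserve=True, discovery_interval=30)
    async with background_trio_service(discovery):
        await discovery.event_started.wait()
        await trio.sleep(35)  # allow at least one discovery pass to complete
        print("known relays:", discovery.get_relays())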

libp2p/relay/circuit_v2/pb/__init__.py Normal file
View File

@@ -0,0 +1,16 @@
"""
Protocol buffer package for circuit_v2.
Contains generated protobuf code for circuit_v2 relay protocol.
"""
# Import the classes to be accessible directly from the package
from .circuit_pb2 import (
HopMessage,
Limit,
Reservation,
Status,
StopMessage,
)
__all__ = ["HopMessage", "Limit", "Reservation", "Status", "StopMessage"]

libp2p/relay/circuit_v2/pb/circuit.proto Normal file
View File

@@ -0,0 +1,55 @@
syntax = "proto3";
package circuit.pb.v2;
// Circuit v2 message types
message HopMessage {
enum Type {
RESERVE = 0;
CONNECT = 1;
STATUS = 2;
}
Type type = 1;
bytes peer = 2;
Reservation reservation = 3;
Limit limit = 4;
Status status = 5;
}
message StopMessage {
enum Type {
CONNECT = 0;
STATUS = 1;
}
Type type = 1;
bytes peer = 2;
Status status = 3;
}
message Reservation {
bytes voucher = 1;
bytes signature = 2;
int64 expire = 3;
}
message Limit {
int64 duration = 1;
int64 data = 2;
}
message Status {
enum Code {
OK = 0;
RESERVATION_REFUSED = 100;
RESOURCE_LIMIT_EXCEEDED = 101;
PERMISSION_DENIED = 102;
CONNECTION_FAILED = 200;
DIAL_REFUSED = 201;
STOP_FAILED = 300;
MALFORMED_MESSAGE = 400;
}
Code code = 1;
string message = 2;
}
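What the schema above looks like from Python once compiled, as a sketch; the import path matches the BuildTopDescriptorsAndMessages call in the generated file that follows:

from libp2p.relay.circuit_v2.pb.circuit_pb2 import HopMessage

reserve = HopMessage(type=HopMessage.RESERVE, peer=b"example-peer-id")
wire = reserve.SerializeToString()

parsed = HopMessage()
parsed.ParseFromString(wire)
assert parsed.type == HopMessage.RESERVE and parsed.peer == b"example-peer-id"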

libp2p/relay/circuit_v2/pb/circuit_pb2.py Normal file
View File

@@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: libp2p/relay/circuit_v2/pb/circuit.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n(libp2p/relay/circuit_v2/pb/circuit.proto\x12\rcircuit.pb.v2\"\xf3\x01\n\nHopMessage\x12,\n\x04type\x18\x01 \x01(\x0e\x32\x1e.circuit.pb.v2.HopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12/\n\x0breservation\x18\x03 \x01(\x0b\x32\x1a.circuit.pb.v2.Reservation\x12#\n\x05limit\x18\x04 \x01(\x0b\x32\x14.circuit.pb.v2.Limit\x12%\n\x06status\x18\x05 \x01(\x0b\x32\x15.circuit.pb.v2.Status\",\n\x04Type\x12\x0b\n\x07RESERVE\x10\x00\x12\x0b\n\x07\x43ONNECT\x10\x01\x12\n\n\x06STATUS\x10\x02\"\x92\x01\n\x0bStopMessage\x12-\n\x04type\x18\x01 \x01(\x0e\x32\x1f.circuit.pb.v2.StopMessage.Type\x12\x0c\n\x04peer\x18\x02 \x01(\x0c\x12%\n\x06status\x18\x03 \x01(\x0b\x32\x15.circuit.pb.v2.Status\"\x1f\n\x04Type\x12\x0b\n\x07\x43ONNECT\x10\x00\x12\n\n\x06STATUS\x10\x01\"A\n\x0bReservation\x12\x0f\n\x07voucher\x18\x01 \x01(\x0c\x12\x11\n\tsignature\x18\x02 \x01(\x0c\x12\x0e\n\x06\x65xpire\x18\x03 \x01(\x03\"\'\n\x05Limit\x12\x10\n\x08\x64uration\x18\x01 \x01(\x03\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x03\"\xf6\x01\n\x06Status\x12(\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x1a.circuit.pb.v2.Status.Code\x12\x0f\n\x07message\x18\x02 \x01(\t\"\xb0\x01\n\x04\x43ode\x12\x06\n\x02OK\x10\x00\x12\x17\n\x13RESERVATION_REFUSED\x10\x64\x12\x1b\n\x17RESOURCE_LIMIT_EXCEEDED\x10\x65\x12\x15\n\x11PERMISSION_DENIED\x10\x66\x12\x16\n\x11\x43ONNECTION_FAILED\x10\xc8\x01\x12\x11\n\x0c\x44IAL_REFUSED\x10\xc9\x01\x12\x10\n\x0bSTOP_FAILED\x10\xac\x02\x12\x16\n\x11MALFORMED_MESSAGE\x10\x90\x03\x62\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.relay.circuit_v2.pb.circuit_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_HOPMESSAGE._serialized_start=60
_HOPMESSAGE._serialized_end=303
_HOPMESSAGE_TYPE._serialized_start=259
_HOPMESSAGE_TYPE._serialized_end=303
_STOPMESSAGE._serialized_start=306
_STOPMESSAGE._serialized_end=452
_STOPMESSAGE_TYPE._serialized_start=421
_STOPMESSAGE_TYPE._serialized_end=452
_RESERVATION._serialized_start=454
_RESERVATION._serialized_end=519
_LIMIT._serialized_start=521
_LIMIT._serialized_end=560
_STATUS._serialized_start=563
_STATUS._serialized_end=809
_STATUS_CODE._serialized_start=633
_STATUS_CODE._serialized_end=809
# @@protoc_insertion_point(module_scope)

libp2p/relay/circuit_v2/pb/circuit_pb2.pyi Normal file
View File

@@ -0,0 +1,184 @@
"""
@generated by mypy-protobuf. Do not edit manually!
isort:skip_file
"""
import builtins
import google.protobuf.descriptor
import google.protobuf.internal.enum_type_wrapper
import google.protobuf.message
import sys
import typing
if sys.version_info >= (3, 10):
import typing as typing_extensions
else:
import typing_extensions
DESCRIPTOR: google.protobuf.descriptor.FileDescriptor
@typing.final
class HopMessage(google.protobuf.message.Message):
"""Circuit v2 message types"""
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class _Type:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _TypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[HopMessage._Type.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
RESERVE: HopMessage._Type.ValueType # 0
CONNECT: HopMessage._Type.ValueType # 1
STATUS: HopMessage._Type.ValueType # 2
class Type(_Type, metaclass=_TypeEnumTypeWrapper): ...
RESERVE: HopMessage.Type.ValueType # 0
CONNECT: HopMessage.Type.ValueType # 1
STATUS: HopMessage.Type.ValueType # 2
TYPE_FIELD_NUMBER: builtins.int
PEER_FIELD_NUMBER: builtins.int
RESERVATION_FIELD_NUMBER: builtins.int
LIMIT_FIELD_NUMBER: builtins.int
STATUS_FIELD_NUMBER: builtins.int
type: global___HopMessage.Type.ValueType
peer: builtins.bytes
@property
def reservation(self) -> global___Reservation: ...
@property
def limit(self) -> global___Limit: ...
@property
def status(self) -> global___Status: ...
def __init__(
self,
*,
type: global___HopMessage.Type.ValueType = ...,
peer: builtins.bytes = ...,
reservation: global___Reservation | None = ...,
limit: global___Limit | None = ...,
status: global___Status | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["limit", b"limit", "reservation", b"reservation", "status", b"status"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["limit", b"limit", "peer", b"peer", "reservation", b"reservation", "status", b"status", "type", b"type"]) -> None: ...
global___HopMessage = HopMessage
@typing.final
class StopMessage(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class _Type:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _TypeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[StopMessage._Type.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
CONNECT: StopMessage._Type.ValueType # 0
STATUS: StopMessage._Type.ValueType # 1
class Type(_Type, metaclass=_TypeEnumTypeWrapper): ...
CONNECT: StopMessage.Type.ValueType # 0
STATUS: StopMessage.Type.ValueType # 1
TYPE_FIELD_NUMBER: builtins.int
PEER_FIELD_NUMBER: builtins.int
STATUS_FIELD_NUMBER: builtins.int
type: global___StopMessage.Type.ValueType
peer: builtins.bytes
@property
def status(self) -> global___Status: ...
def __init__(
self,
*,
type: global___StopMessage.Type.ValueType = ...,
peer: builtins.bytes = ...,
status: global___Status | None = ...,
) -> None: ...
def HasField(self, field_name: typing.Literal["status", b"status"]) -> builtins.bool: ...
def ClearField(self, field_name: typing.Literal["peer", b"peer", "status", b"status", "type", b"type"]) -> None: ...
global___StopMessage = StopMessage
@typing.final
class Reservation(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
VOUCHER_FIELD_NUMBER: builtins.int
SIGNATURE_FIELD_NUMBER: builtins.int
EXPIRE_FIELD_NUMBER: builtins.int
voucher: builtins.bytes
signature: builtins.bytes
expire: builtins.int
def __init__(
self,
*,
voucher: builtins.bytes = ...,
signature: builtins.bytes = ...,
expire: builtins.int = ...,
) -> None: ...
def ClearField(self, field_name: typing.Literal["expire", b"expire", "signature", b"signature", "voucher", b"voucher"]) -> None: ...
global___Reservation = Reservation
@typing.final
class Limit(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
DURATION_FIELD_NUMBER: builtins.int
DATA_FIELD_NUMBER: builtins.int
duration: builtins.int
data: builtins.int
def __init__(
self,
*,
duration: builtins.int = ...,
data: builtins.int = ...,
) -> None: ...
def ClearField(self, field_name: typing.Literal["data", b"data", "duration", b"duration"]) -> None: ...
global___Limit = Limit
@typing.final
class Status(google.protobuf.message.Message):
DESCRIPTOR: google.protobuf.descriptor.Descriptor
class _Code:
ValueType = typing.NewType("ValueType", builtins.int)
V: typing_extensions.TypeAlias = ValueType
class _CodeEnumTypeWrapper(google.protobuf.internal.enum_type_wrapper._EnumTypeWrapper[Status._Code.ValueType], builtins.type):
DESCRIPTOR: google.protobuf.descriptor.EnumDescriptor
OK: Status._Code.ValueType # 0
RESERVATION_REFUSED: Status._Code.ValueType # 100
RESOURCE_LIMIT_EXCEEDED: Status._Code.ValueType # 101
PERMISSION_DENIED: Status._Code.ValueType # 102
CONNECTION_FAILED: Status._Code.ValueType # 200
DIAL_REFUSED: Status._Code.ValueType # 201
STOP_FAILED: Status._Code.ValueType # 300
MALFORMED_MESSAGE: Status._Code.ValueType # 400
class Code(_Code, metaclass=_CodeEnumTypeWrapper): ...
OK: Status.Code.ValueType # 0
RESERVATION_REFUSED: Status.Code.ValueType # 100
RESOURCE_LIMIT_EXCEEDED: Status.Code.ValueType # 101
PERMISSION_DENIED: Status.Code.ValueType # 102
CONNECTION_FAILED: Status.Code.ValueType # 200
DIAL_REFUSED: Status.Code.ValueType # 201
STOP_FAILED: Status.Code.ValueType # 300
MALFORMED_MESSAGE: Status.Code.ValueType # 400
CODE_FIELD_NUMBER: builtins.int
MESSAGE_FIELD_NUMBER: builtins.int
code: global___Status.Code.ValueType
message: builtins.str
def __init__(
self,
*,
code: global___Status.Code.ValueType = ...,
message: builtins.str = ...,
) -> None: ...
def ClearField(self, field_name: typing.Literal["code", b"code", "message", b"message"]) -> None: ...
global___Status = Status

libp2p/relay/circuit_v2/protocol.py Normal file
View File

@@ -0,0 +1,800 @@
"""
Circuit Relay v2 protocol implementation.
This module implements the Circuit Relay v2 protocol as specified in:
https://github.com/libp2p/specs/blob/master/relay/circuit-v2.md
"""
import logging
import time
from typing import (
Any,
Protocol as TypingProtocol,
cast,
runtime_checkable,
)
import trio
from libp2p.abc import (
IHost,
INetStream,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.io.abc import (
ReadWriteCloser,
)
from libp2p.peer.id import (
ID,
)
from libp2p.stream_muxer.mplex.exceptions import (
MplexStreamEOF,
MplexStreamReset,
)
from libp2p.tools.async_service import (
Service,
)
from .pb.circuit_pb2 import (
HopMessage,
Limit,
Reservation,
Status as PbStatus,
StopMessage,
)
from .protocol_buffer import (
StatusCode,
create_status,
)
from .resources import (
RelayLimits,
RelayResourceManager,
)
logger = logging.getLogger("libp2p.relay.circuit_v2")
PROTOCOL_ID = TProtocol("/libp2p/circuit/relay/2.0.0")
STOP_PROTOCOL_ID = TProtocol("/libp2p/circuit/relay/2.0.0/stop")
# Default limits for relay resources
DEFAULT_RELAY_LIMITS = RelayLimits(
duration=60 * 60, # 1 hour
data=1024 * 1024 * 1024, # 1GB
max_circuit_conns=8,
max_reservations=4,
)
# Stream operation timeouts
STREAM_READ_TIMEOUT = 15 # seconds
STREAM_WRITE_TIMEOUT = 15 # seconds
STREAM_CLOSE_TIMEOUT = 10 # seconds
MAX_READ_RETRIES = 5 # Maximum number of read retries
# Extended interfaces for type checking
@runtime_checkable
class IHostWithStreamHandlers(TypingProtocol):
"""Extended host interface with stream handler methods."""
def remove_stream_handler(self, protocol_id: TProtocol) -> None:
"""Remove a stream handler for a protocol."""
...
@runtime_checkable
class INetStreamWithExtras(TypingProtocol):
"""Extended net stream interface with additional methods."""
def get_remote_peer_id(self) -> ID:
"""Get the remote peer ID."""
...
def is_open(self) -> bool:
"""Check if the stream is open."""
...
def is_closed(self) -> bool:
"""Check if the stream is closed."""
...
class CircuitV2Protocol(Service):
"""
CircuitV2Protocol implements the Circuit Relay v2 protocol.
This protocol allows peers to establish connections through relay nodes
when direct connections are not possible (e.g., due to NAT).
"""
def __init__(
self,
host: IHost,
limits: RelayLimits | None = None,
allow_hop: bool = False,
) -> None:
"""
Initialize a Circuit Relay v2 protocol instance.
Parameters
----------
host : IHost
The libp2p host instance
limits : RelayLimits | None
Resource limits for the relay
allow_hop : bool
Whether to allow this node to act as a relay
"""
self.host = host
self.limits = limits or DEFAULT_RELAY_LIMITS
self.allow_hop = allow_hop
self.resource_manager = RelayResourceManager(self.limits)
self._active_relays: dict[ID, tuple[INetStream, INetStream | None]] = {}
self.event_started = trio.Event()
async def run(self, *, task_status: Any = trio.TASK_STATUS_IGNORED) -> None:
"""Run the protocol service."""
try:
# Register protocol handlers
if self.allow_hop:
logger.debug("Registering stream handlers for relay protocol")
self.host.set_stream_handler(PROTOCOL_ID, self._handle_hop_stream)
self.host.set_stream_handler(STOP_PROTOCOL_ID, self._handle_stop_stream)
logger.debug("Stream handlers registered successfully")
# Signal that we're ready
self.event_started.set()
task_status.started()
logger.debug("Protocol service started")
# Wait for service to be stopped
await self.manager.wait_finished()
finally:
# Clean up any active relay connections
for src_stream, dst_stream in self._active_relays.values():
await self._close_stream(src_stream)
await self._close_stream(dst_stream)
self._active_relays.clear()
# Unregister protocol handlers
if self.allow_hop:
try:
# Cast host to extended interface with remove_stream_handler
host_with_handlers = cast(IHostWithStreamHandlers, self.host)
host_with_handlers.remove_stream_handler(PROTOCOL_ID)
host_with_handlers.remove_stream_handler(STOP_PROTOCOL_ID)
except Exception as e:
logger.error("Error unregistering stream handlers: %s", str(e))
async def _close_stream(self, stream: INetStream | None) -> None:
"""Helper function to safely close a stream."""
if stream is None:
return
try:
with trio.fail_after(STREAM_CLOSE_TIMEOUT):
await stream.close()
except Exception:
try:
await stream.reset()
except Exception:
pass
async def _read_stream_with_retry(
self,
stream: INetStream,
max_retries: int = MAX_READ_RETRIES,
) -> bytes | None:
"""
Helper function to read from a stream with retries.
Parameters
----------
stream : INetStream
The stream to read from
max_retries : int
Maximum number of read retries
Returns
-------
Optional[bytes]
The data read from the stream, or None if the stream is closed/reset
Raises
------
trio.TooSlowError
If read timeout occurs after all retries
Exception
For other unexpected errors
"""
retries = 0
last_error: Any = None
backoff_time = 0.2 # Base backoff time in seconds
while retries < max_retries:
try:
with trio.fail_after(STREAM_READ_TIMEOUT):
# Try reading with timeout
logger.debug(
"Attempting to read from stream (attempt %d/%d)",
retries + 1,
max_retries,
)
data = await stream.read()
if not data: # EOF
logger.debug("Stream EOF detected")
return None
logger.debug("Successfully read %d bytes from stream", len(data))
return data
except trio.WouldBlock:
# Just retry immediately if we would block
retries += 1
logger.debug(
"Stream would block (attempt %d/%d), retrying...",
retries,
max_retries,
)
await trio.sleep(backoff_time * retries) # Increased backoff time
continue
except (MplexStreamEOF, MplexStreamReset):
# Stream closed/reset - no point retrying
logger.debug("Stream closed/reset during read")
return None
except trio.TooSlowError as e:
last_error = e
retries += 1
logger.debug(
"Read timeout (attempt %d/%d), retrying...", retries, max_retries
)
if retries < max_retries:
# Wait longer before retry with increasing backoff
await trio.sleep(backoff_time * retries) # Increased backoff
continue
except Exception as e:
logger.error("Unexpected error reading from stream: %s", str(e))
last_error = e
retries += 1
if retries < max_retries:
await trio.sleep(backoff_time * retries) # Increased backoff
continue
raise
if last_error:
if isinstance(last_error, trio.TooSlowError):
logger.error("Read timed out after %d retries", max_retries)
raise last_error
return None
async def _handle_hop_stream(self, stream: INetStream) -> None:
"""
Handle incoming HOP streams.
This handler processes relay requests from other peers.
"""
try:
# Try to get peer ID first
try:
# Cast to extended interface with get_remote_peer_id
stream_with_peer_id = cast(INetStreamWithExtras, stream)
remote_peer_id = stream_with_peer_id.get_remote_peer_id()
remote_id = str(remote_peer_id)
except Exception:
# Fall back to address if peer ID not available
remote_addr = stream.get_remote_address()
remote_id = f"peer at {remote_addr}" if remote_addr else "unknown peer"
logger.debug("Handling hop stream from %s", remote_id)
# First, handle the read timeout gracefully
try:
with trio.fail_after(
STREAM_READ_TIMEOUT * 2
): # Double the timeout for reading
msg_bytes = await stream.read()
if not msg_bytes:
logger.error(
"Empty read from stream from %s",
remote_id,
)
# Create a proto Status directly
pb_status = PbStatus()
pb_status.code = cast(Any, int(StatusCode.MALFORMED_MESSAGE))
pb_status.message = "Empty message received"
response = HopMessage(
type=HopMessage.STATUS,
status=pb_status,
)
await stream.write(response.SerializeToString())
await trio.sleep(0.5) # Longer wait to ensure message is sent
return
except trio.TooSlowError:
logger.error(
"Timeout reading from hop stream from %s",
remote_id,
)
# Create a proto Status directly
pb_status = PbStatus()
pb_status.code = cast(Any, int(StatusCode.CONNECTION_FAILED))
pb_status.message = "Stream read timeout"
response = HopMessage(
type=HopMessage.STATUS,
status=pb_status,
)
await stream.write(response.SerializeToString())
await trio.sleep(0.5) # Longer wait to ensure the message is sent
return
except Exception as e:
logger.error(
"Error reading from hop stream from %s: %s",
remote_id,
str(e),
)
# Create a proto Status directly
pb_status = PbStatus()
pb_status.code = cast(Any, int(StatusCode.MALFORMED_MESSAGE))
pb_status.message = f"Read error: {str(e)}"
response = HopMessage(
type=HopMessage.STATUS,
status=pb_status,
)
await stream.write(response.SerializeToString())
await trio.sleep(0.5) # Longer wait to ensure the message is sent
return
# Parse the message
try:
hop_msg = HopMessage()
hop_msg.ParseFromString(msg_bytes)
except Exception as e:
logger.error(
"Error parsing hop message from %s: %s",
remote_id,
str(e),
)
# Create a proto Status directly
pb_status = PbStatus()
pb_status.code = cast(Any, int(StatusCode.MALFORMED_MESSAGE))
pb_status.message = f"Parse error: {str(e)}"
response = HopMessage(
type=HopMessage.STATUS,
status=pb_status,
)
await stream.write(response.SerializeToString())
await trio.sleep(0.5) # Longer wait to ensure the message is sent
return
# Process based on message type
if hop_msg.type == HopMessage.RESERVE:
logger.debug("Handling RESERVE message from %s", remote_id)
await self._handle_reserve(stream, hop_msg)
# For RESERVE requests, let the client close the stream
return
elif hop_msg.type == HopMessage.CONNECT:
logger.debug("Handling CONNECT message from %s", remote_id)
await self._handle_connect(stream, hop_msg)
else:
logger.error("Invalid message type %d from %s", hop_msg.type, remote_id)
# Send a nice error response using _send_status method
await self._send_status(
stream,
StatusCode.MALFORMED_MESSAGE,
f"Invalid message type: {hop_msg.type}",
)
except Exception as e:
logger.error(
"Unexpected error handling hop stream from %s: %s", remote_id, str(e)
)
try:
# Send a nice error response using _send_status method
await self._send_status(
stream,
StatusCode.MALFORMED_MESSAGE,
f"Internal error: {str(e)}",
)
except Exception as e2:
logger.error(
"Failed to send error response to %s: %s", remote_id, str(e2)
)
async def _handle_stop_stream(self, stream: INetStream) -> None:
"""
Handle incoming STOP streams.
This handler processes incoming relay connections from the destination side.
"""
try:
# Read the incoming message with timeout
with trio.fail_after(STREAM_READ_TIMEOUT):
msg_bytes = await stream.read()
stop_msg = StopMessage()
stop_msg.ParseFromString(msg_bytes)
if stop_msg.type != StopMessage.CONNECT:
# Use direct attribute access to create status object for error response
await self._send_stop_status(
stream,
StatusCode.MALFORMED_MESSAGE,
"Invalid message type",
)
await self._close_stream(stream)
return
# Get the source stream from active relays
peer_id = ID(stop_msg.peer)
if peer_id not in self._active_relays:
# Use direct attribute access to create status object for error response
await self._send_stop_status(
stream,
StatusCode.CONNECTION_FAILED,
"No pending relay connection",
)
await self._close_stream(stream)
return
src_stream, _ = self._active_relays[peer_id]
self._active_relays[peer_id] = (src_stream, stream)
# Send success status to both sides
await self._send_status(
src_stream,
StatusCode.OK,
"Connection established",
)
await self._send_stop_status(
stream,
StatusCode.OK,
"Connection established",
)
# Start relaying data
async with trio.open_nursery() as nursery:
nursery.start_soon(self._relay_data, src_stream, stream, peer_id)
nursery.start_soon(self._relay_data, stream, src_stream, peer_id)
except trio.TooSlowError:
logger.error("Timeout reading from stop stream")
await self._send_stop_status(
stream,
StatusCode.CONNECTION_FAILED,
"Stream read timeout",
)
await self._close_stream(stream)
except Exception as e:
logger.error("Error handling stop stream: %s", str(e))
try:
await self._send_stop_status(
stream,
StatusCode.MALFORMED_MESSAGE,
str(e),
)
await self._close_stream(stream)
except Exception:
pass
async def _handle_reserve(self, stream: INetStream, msg: Any) -> None:
"""Handle a reservation request."""
peer_id = None
try:
peer_id = ID(msg.peer)
logger.debug("Handling reservation request from peer %s", peer_id)
# Check if we can accept more reservations
if not self.resource_manager.can_accept_reservation(peer_id):
logger.debug("Reservation limit exceeded for peer %s", peer_id)
# Send status message with STATUS type
status = create_status(
code=StatusCode.RESOURCE_LIMIT_EXCEEDED,
message="Reservation limit exceeded",
)
status_msg = HopMessage(
type=HopMessage.STATUS,
status=status.to_pb(),
)
await stream.write(status_msg.SerializeToString())
return
# Accept reservation
logger.debug("Accepting reservation from peer %s", peer_id)
ttl = self.resource_manager.reserve(peer_id)
# Send reservation success response
with trio.fail_after(STREAM_WRITE_TIMEOUT):
status = create_status(
code=StatusCode.OK, message="Reservation accepted"
)
response = HopMessage(
type=HopMessage.STATUS,
status=status.to_pb(),
reservation=Reservation(
expire=int(time.time() + ttl),
voucher=b"", # We don't use vouchers yet
signature=b"", # We don't use signatures yet
),
limit=Limit(
duration=self.limits.duration,
data=self.limits.data,
),
)
# Log the response message details for debugging
logger.debug(
"Sending reservation response: type=%s, status=%s, ttl=%d",
response.type,
getattr(response.status, "code", "unknown"),
ttl,
)
# Send the response with increased timeout
await stream.write(response.SerializeToString())
# Add a small wait to ensure the message is fully sent
await trio.sleep(0.1)
logger.debug("Reservation response sent successfully")
except Exception as e:
logger.error("Error handling reservation request: %s", str(e))
if cast(INetStreamWithExtras, stream).is_open():
try:
# Send error response
await self._send_status(
stream,
StatusCode.INTERNAL_ERROR,
f"Failed to process reservation: {str(e)}",
)
except Exception as send_err:
logger.error("Failed to send error response: %s", str(send_err))
finally:
# Always close the stream when done with reservation
if cast(INetStreamWithExtras, stream).is_open():
try:
with trio.fail_after(STREAM_CLOSE_TIMEOUT):
await stream.close()
except Exception as close_err:
logger.error("Error closing stream: %s", str(close_err))
async def _handle_connect(self, stream: INetStream, msg: Any) -> None:
"""Handle a connect request."""
peer_id = ID(msg.peer)
dst_stream: INetStream | None = None
# Verify reservation if provided
if msg.HasField("reservation"):
if not self.resource_manager.verify_reservation(peer_id, msg.reservation):
await self._send_status(
stream,
StatusCode.PERMISSION_DENIED,
"Invalid reservation",
)
await stream.reset()
return
# Check resource limits
if not self.resource_manager.can_accept_connection(peer_id):
await self._send_status(
stream,
StatusCode.RESOURCE_LIMIT_EXCEEDED,
"Connection limit exceeded",
)
await stream.reset()
return
try:
# Store the source stream with properly typed None
self._active_relays[peer_id] = (stream, None)
# Try to connect to the destination with timeout
with trio.fail_after(STREAM_READ_TIMEOUT):
dst_stream = await self.host.new_stream(peer_id, [STOP_PROTOCOL_ID])
if not dst_stream:
raise ConnectionError("Could not connect to destination")
# Send STOP CONNECT message
stop_msg = StopMessage(
type=StopMessage.CONNECT,
# Cast to extended interface with get_remote_peer_id
peer=cast(INetStreamWithExtras, stream)
.get_remote_peer_id()
.to_bytes(),
)
await dst_stream.write(stop_msg.SerializeToString())
# Wait for response from destination
resp_bytes = await dst_stream.read()
resp = StopMessage()
resp.ParseFromString(resp_bytes)
# Handle status attributes from the response
if resp.HasField("status"):
# Get code and message attributes with defaults
status_code = getattr(resp.status, "code", StatusCode.OK)
# Get message with default
status_msg = getattr(resp.status, "message", "Unknown error")
else:
status_code = StatusCode.OK
status_msg = "No status provided"
if status_code != StatusCode.OK:
raise ConnectionError(
f"Destination rejected connection: {status_msg}"
)
# Update active relays with destination stream
self._active_relays[peer_id] = (stream, dst_stream)
# Update reservation connection count
reservation = self.resource_manager._reservations.get(peer_id)
if reservation:
reservation.active_connections += 1
# Send success status
await self._send_status(
stream,
StatusCode.OK,
"Connection established",
)
# Start relaying data
async with trio.open_nursery() as nursery:
nursery.start_soon(self._relay_data, stream, dst_stream, peer_id)
nursery.start_soon(self._relay_data, dst_stream, stream, peer_id)
except (trio.TooSlowError, ConnectionError) as e:
logger.error("Error establishing relay connection: %s", str(e))
await self._send_status(
stream,
StatusCode.CONNECTION_FAILED,
str(e),
)
if peer_id in self._active_relays:
del self._active_relays[peer_id]
# Clean up reservation connection count on failure
reservation = self.resource_manager._reservations.get(peer_id)
if reservation:
reservation.active_connections -= 1
await stream.reset()
if dst_stream and not cast(INetStreamWithExtras, dst_stream).is_closed():
await dst_stream.reset()
except Exception as e:
logger.error("Unexpected error in connect handler: %s", str(e))
await self._send_status(
stream,
StatusCode.CONNECTION_FAILED,
"Internal error",
)
if peer_id in self._active_relays:
del self._active_relays[peer_id]
await stream.reset()
if dst_stream and not cast(INetStreamWithExtras, dst_stream).is_closed():
await dst_stream.reset()
async def _relay_data(
self,
src_stream: INetStream,
dst_stream: INetStream,
peer_id: ID,
) -> None:
"""
Relay data between two streams.
Parameters
----------
src_stream : INetStream
Source stream to read from
dst_stream : INetStream
Destination stream to write to
peer_id : ID
ID of the peer being relayed
"""
try:
while True:
# Read data with retries
data = await self._read_stream_with_retry(src_stream)
if not data:
logger.info("Source stream closed/reset")
break
# Write data with timeout
try:
with trio.fail_after(STREAM_WRITE_TIMEOUT):
await dst_stream.write(data)
except trio.TooSlowError:
logger.error("Timeout writing to destination stream")
break
except Exception as e:
logger.error("Error writing to destination stream: %s", str(e))
break
# Update resource usage
reservation = self.resource_manager._reservations.get(peer_id)
if reservation:
reservation.data_used += len(data)
if reservation.data_used >= reservation.limits.data:
logger.warning("Data limit exceeded for peer %s", peer_id)
break
except Exception as e:
logger.error("Error relaying data: %s", str(e))
finally:
# Clean up streams and remove from active relays
await src_stream.reset()
await dst_stream.reset()
if peer_id in self._active_relays:
del self._active_relays[peer_id]
async def _send_status(
self,
stream: ReadWriteCloser,
code: int,
message: str,
) -> None:
"""Send a status message."""
try:
logger.debug("Sending status message with code %s: %s", code, message)
with trio.fail_after(STREAM_WRITE_TIMEOUT * 2): # Double the timeout
# Create a proto Status directly
pb_status = PbStatus()
pb_status.code = cast(
Any, int(code)
) # Cast to Any to avoid type errors
pb_status.message = message
status_msg = HopMessage(
type=HopMessage.STATUS,
status=pb_status,
)
msg_bytes = status_msg.SerializeToString()
logger.debug("Status message serialized (%d bytes)", len(msg_bytes))
await stream.write(msg_bytes)
logger.debug("Status message sent, waiting for processing")
# Wait longer to ensure the message is sent
await trio.sleep(1.5)
logger.debug("Status message sending completed")
except trio.TooSlowError:
logger.error(
"Timeout sending status message: code=%s, message=%s", code, message
)
except Exception as e:
logger.error("Error sending status message: %s", str(e))
async def _send_stop_status(
self,
stream: ReadWriteCloser,
code: int,
message: str,
) -> None:
"""Send a status message on a STOP stream."""
try:
logger.debug("Sending stop status message with code %s: %s", code, message)
with trio.fail_after(STREAM_WRITE_TIMEOUT * 2): # Double the timeout
# Create a proto Status directly
pb_status = PbStatus()
pb_status.code = cast(
Any, int(code)
) # Cast to Any to avoid type errors
pb_status.message = message
status_msg = StopMessage(
type=StopMessage.STATUS,
status=pb_status,
)
await stream.write(status_msg.SerializeToString())
await trio.sleep(0.5) # Ensure message is sent
except Exception as e:
logger.error("Error sending stop status message: %s", str(e))

libp2p/relay/circuit_v2/protocol_buffer.py Normal file
View File

@@ -0,0 +1,55 @@
"""
Protocol buffer wrapper classes for Circuit Relay v2.
This module provides wrapper classes for protocol buffer generated objects
to make them easier to work with in type-checked code.
"""
from enum import (
IntEnum,
)
from typing import (
Any,
)
from .pb.circuit_pb2 import Status as PbStatus
# Define Status codes as an Enum for better type safety and organization
class StatusCode(IntEnum):
OK = 0
RESERVATION_REFUSED = 100
RESOURCE_LIMIT_EXCEEDED = 101
PERMISSION_DENIED = 102
CONNECTION_FAILED = 200
DIAL_REFUSED = 201
STOP_FAILED = 300
MALFORMED_MESSAGE = 400
INTERNAL_ERROR = 500
def create_status(code: int = StatusCode.OK, message: str = "") -> Any:
"""
Create a protocol buffer Status object.
Parameters
----------
code : int
The status code
message : str
The status message
Returns
-------
Any
The protocol buffer Status object
"""
# Create status object
pb_obj = PbStatus()
# Convert the integer status code to the protobuf enum value type
pb_obj.code = PbStatus.Code.ValueType(code)
pb_obj.message = message
return pb_obj
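A quick illustration of the wrapper above. Note that StatusCode.INTERNAL_ERROR (500) has no counterpart in the .proto Code enum; assigning it still works because proto3 enums are open to unknown values:

status = create_status(StatusCode.RESOURCE_LIMIT_EXCEEDED, "too many circuits")
assert status.code == StatusCode.RESOURCE_LIMIT_EXCEEDED
assert status.message == "too many circuits"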

libp2p/relay/circuit_v2/resources.py Normal file
View File

@@ -0,0 +1,254 @@
"""
Resource management for Circuit Relay v2.
This module handles managing resources for relay operations,
including reservations and connection limits.
"""
from dataclasses import (
dataclass,
)
import hashlib
import os
import time
from libp2p.peer.id import (
ID,
)
# Import the protobuf definitions
from .pb.circuit_pb2 import Reservation as PbReservation
@dataclass
class RelayLimits:
"""Configuration for relay resource limits."""
duration: int # Maximum duration of a relay connection in seconds
data: int # Maximum data transfer allowed in bytes
max_circuit_conns: int # Maximum number of concurrent circuit connections
max_reservations: int # Maximum number of active reservations
class Reservation:
"""Represents a relay reservation."""
def __init__(self, peer_id: ID, limits: RelayLimits):
"""
Initialize a new reservation.
Parameters
----------
peer_id : ID
The peer ID this reservation is for
limits : RelayLimits
The resource limits for this reservation
"""
self.peer_id = peer_id
self.limits = limits
self.created_at = time.time()
self.expires_at = self.created_at + limits.duration
self.data_used = 0
self.active_connections = 0
self.voucher = self._generate_voucher()
def _generate_voucher(self) -> bytes:
"""
Generate a unique cryptographically secure voucher for this reservation.
Returns
-------
bytes
A secure voucher token
"""
# Create a random token using a combination of:
# - Random bytes for unpredictability
# - Peer ID to bind it to the specific peer
# - Timestamp for uniqueness
# - Hash everything for a fixed size output
random_bytes = os.urandom(16) # 128 bits of randomness
timestamp = str(int(self.created_at * 1000000)).encode()
peer_bytes = self.peer_id.to_bytes()
# Combine all elements and hash them
h = hashlib.sha256()
h.update(random_bytes)
h.update(timestamp)
h.update(peer_bytes)
return h.digest()
def is_expired(self) -> bool:
"""Check if the reservation has expired."""
return time.time() > self.expires_at
def can_accept_connection(self) -> bool:
"""Check if a new connection can be accepted."""
return (
not self.is_expired()
and self.active_connections < self.limits.max_circuit_conns
and self.data_used < self.limits.data
)
def to_proto(self) -> PbReservation:
"""Convert the reservation to its protobuf representation."""
# TODO: For production use, implement proper signature generation
# The signature should be created by signing the voucher with the
# peer's private key. The current implementation with an empty signature
# is intended for development and testing only.
return PbReservation(
expire=int(self.expires_at),
voucher=self.voucher,
signature=b"",
)
class RelayResourceManager:
"""
Manages resources and reservations for relay operations.
This class handles:
- Tracking active reservations
- Enforcing resource limits
- Managing connection quotas
"""
def __init__(self, limits: RelayLimits):
"""
Initialize the resource manager.
Parameters
----------
limits : RelayLimits
The resource limits to enforce
"""
self.limits = limits
self._reservations: dict[ID, Reservation] = {}
def can_accept_reservation(self, peer_id: ID) -> bool:
"""
Check if a new reservation can be accepted for the given peer.
Parameters
----------
peer_id : ID
The peer ID requesting the reservation
Returns
-------
bool
True if the reservation can be accepted
"""
# Clean expired reservations
self._clean_expired()
# Check if peer already has a valid reservation
existing = self._reservations.get(peer_id)
if existing and not existing.is_expired():
return True
# Check if we're at the reservation limit
return len(self._reservations) < self.limits.max_reservations
def create_reservation(self, peer_id: ID) -> Reservation:
"""
Create a new reservation for the given peer.
Parameters
----------
peer_id : ID
The peer ID to create the reservation for
Returns
-------
Reservation
The newly created reservation
"""
reservation = Reservation(peer_id, self.limits)
self._reservations[peer_id] = reservation
return reservation
def verify_reservation(self, peer_id: ID, proto_res: PbReservation) -> bool:
"""
Verify a reservation from a protobuf message.
Parameters
----------
peer_id : ID
The peer ID the reservation is for
proto_res : PbReservation
The protobuf reservation message
Returns
-------
bool
True if the reservation is valid
"""
# TODO: Implement voucher and signature verification
reservation = self._reservations.get(peer_id)
return (
reservation is not None
and not reservation.is_expired()
and reservation.expires_at == proto_res.expire
)
def can_accept_connection(self, peer_id: ID) -> bool:
"""
Check if a new connection can be accepted for the given peer.
Parameters
----------
peer_id : ID
The peer ID requesting the connection
Returns
-------
bool
True if the connection can be accepted
"""
reservation = self._reservations.get(peer_id)
return reservation is not None and reservation.can_accept_connection()
def _clean_expired(self) -> None:
"""Remove expired reservations."""
now = time.time()
expired = [
peer_id
for peer_id, res in self._reservations.items()
if now > res.expires_at
]
for peer_id in expired:
del self._reservations[peer_id]
def reserve(self, peer_id: ID) -> int:
"""
Create or update a reservation for a peer and return the TTL.
Parameters
----------
peer_id : ID
The peer ID to reserve for
Returns
-------
int
The TTL of the reservation in seconds
"""
# Check for existing reservation
existing = self._reservations.get(peer_id)
if existing and not existing.is_expired():
# Return remaining time for existing reservation
remaining = max(0, int(existing.expires_at - time.time()))
return remaining
# Create new reservation
self.create_reservation(peer_id)
return self.limits.duration
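A minimal usage sketch of the pieces above (illustrative only: the RelayLimits constructor fields are inferred from the attributes read in this module, and peer_id stands in for a real ID):

limits = RelayLimits(
    duration=3600,  # reservation TTL in seconds
    data=1 << 20,  # per-reservation data cap in bytes
    max_circuit_conns=8,  # concurrent circuits per reservation
    max_reservations=128,  # total reservations the relay will hold
)
manager = RelayResourceManager(limits)

# The first call creates a reservation and returns the full TTL; calling
# again while it is still live returns the remaining seconds instead.
ttl = manager.reserve(peer_id)
assert manager.can_accept_connection(peer_id)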

View File

@ -0,0 +1,427 @@
"""
Transport implementation for Circuit Relay v2.
This module implements the transport layer for Circuit Relay v2,
allowing peers to establish connections through relay nodes.
"""
from collections.abc import Awaitable, Callable
import logging
import multiaddr
import trio
from libp2p.abc import (
IHost,
IListener,
INetStream,
ITransport,
ReadWriteCloser,
)
from libp2p.network.connection.raw_connection import (
RawConnection,
)
from libp2p.peer.id import (
ID,
)
from libp2p.peer.peerinfo import (
PeerInfo,
)
from libp2p.tools.async_service import (
Service,
)
from .config import (
ClientConfig,
RelayConfig,
)
from .discovery import (
RelayDiscovery,
)
from .pb.circuit_pb2 import (
HopMessage,
StopMessage,
)
from .protocol import (
PROTOCOL_ID,
CircuitV2Protocol,
)
from .protocol_buffer import (
StatusCode,
)
logger = logging.getLogger("libp2p.relay.circuit_v2.transport")
class CircuitV2Transport(ITransport):
"""
CircuitV2Transport implements the transport interface for Circuit Relay v2.
This transport allows peers to establish connections through relay nodes
when direct connections are not possible.
"""
def __init__(
self,
host: IHost,
protocol: CircuitV2Protocol,
config: RelayConfig,
) -> None:
"""
Initialize the Circuit v2 transport.
Parameters
----------
host : IHost
The libp2p host this transport is running on
protocol : CircuitV2Protocol
The Circuit v2 protocol instance
config : RelayConfig
Relay configuration
"""
self.host = host
self.protocol = protocol
self.config = config
self.client_config = ClientConfig()
self.discovery = RelayDiscovery(
host=host,
auto_reserve=config.enable_client,
discovery_interval=config.discovery_interval,
max_relays=config.max_relays,
)
async def dial(
self,
maddr: multiaddr.Multiaddr,
) -> RawConnection:
"""
Dial a peer using the multiaddr.
Parameters
----------
maddr : multiaddr.Multiaddr
The multiaddr to dial
Returns
-------
RawConnection
The established connection
Raises
------
ConnectionError
If the connection cannot be established
"""
# Extract peer ID from multiaddr - P_P2P code is 0x01A5 (421)
peer_id_str = maddr.value_for_protocol("p2p")
if not peer_id_str:
raise ConnectionError("Multiaddr does not contain peer ID")
peer_id = ID.from_base58(peer_id_str)
peer_info = PeerInfo(peer_id, [maddr])
# Use the internal dial_peer_info method
return await self.dial_peer_info(peer_info)
async def dial_peer_info(
self,
peer_info: PeerInfo,
*,
relay_peer_id: ID | None = None,
) -> RawConnection:
"""
Dial a peer through a relay.
Parameters
----------
peer_info : PeerInfo
The peer to dial
relay_peer_id : Optional[ID], optional
Optional specific relay peer to use
Returns
-------
RawConnection
The established connection
Raises
------
ConnectionError
If the connection cannot be established
"""
# If no specific relay is provided, try to find one
if relay_peer_id is None:
relay_peer_id = await self._select_relay(peer_info)
if not relay_peer_id:
raise ConnectionError("No suitable relay found")
# Get a stream to the relay
relay_stream = await self.host.new_stream(relay_peer_id, [PROTOCOL_ID])
if not relay_stream:
raise ConnectionError(f"Could not open stream to relay {relay_peer_id}")
try:
# First try to make a reservation if enabled
if self.config.enable_client:
success = await self._make_reservation(relay_stream, relay_peer_id)
if not success:
logger.warning(
"Failed to make reservation with relay %s", relay_peer_id
)
# Send HOP CONNECT message
hop_msg = HopMessage(
type=HopMessage.CONNECT,
peer=peer_info.peer_id.to_bytes(),
)
await relay_stream.write(hop_msg.SerializeToString())
# Read response
resp_bytes = await relay_stream.read()
resp = HopMessage()
resp.ParseFromString(resp_bytes)
# Access status attributes directly
status_code = getattr(resp.status, "code", StatusCode.OK)
status_msg = getattr(resp.status, "message", "Unknown error")
if status_code != StatusCode.OK:
raise ConnectionError(f"Relay connection failed: {status_msg}")
# Create raw connection from stream
return RawConnection(stream=relay_stream, initiator=True)
except Exception as e:
await relay_stream.close()
raise ConnectionError(f"Failed to establish relay connection: {str(e)}")
async def _select_relay(self, peer_info: PeerInfo) -> ID | None:
"""
Select an appropriate relay for the given peer.
Parameters
----------
peer_info : PeerInfo
The peer to connect to
Returns
-------
Optional[ID]
Selected relay peer ID, or None if no suitable relay found
"""
# Try to find a relay
attempts = 0
while attempts < self.client_config.max_auto_relay_attempts:
# Get a relay from the list of discovered relays
relays = self.discovery.get_relays()
if relays:
# TODO: Implement more sophisticated relay selection
# For now, just return the first available relay
return relays[0]
# Wait and try discovery
await trio.sleep(1)
attempts += 1
return None
async def _make_reservation(
self,
stream: INetStream,
relay_peer_id: ID,
) -> bool:
"""
Make a reservation with a relay.
Parameters
----------
stream : INetStream
Stream to the relay
relay_peer_id : ID
The relay's peer ID
Returns
-------
bool
True if reservation was successful
"""
try:
# Send reservation request
reserve_msg = HopMessage(
type=HopMessage.RESERVE,
peer=self.host.get_id().to_bytes(),
)
await stream.write(reserve_msg.SerializeToString())
# Read response
resp_bytes = await stream.read()
resp = HopMessage()
resp.ParseFromString(resp_bytes)
# Access status attributes directly
status_code = getattr(resp.status, "code", StatusCode.OK)
status_msg = getattr(resp.status, "message", "Unknown error")
if status_code != StatusCode.OK:
logger.warning(
"Reservation failed with relay %s: %s",
relay_peer_id,
status_msg,
)
return False
# Store reservation info
# TODO: Implement reservation storage and refresh mechanism
return True
except Exception as e:
logger.error("Error making reservation: %s", str(e))
return False
def create_listener(
self,
handler_function: Callable[[ReadWriteCloser], Awaitable[None]],
) -> IListener:
"""
Create a listener for incoming relay connections.
Parameters
----------
handler_function : Callable[[ReadWriteCloser], Awaitable[None]]
The handler function for new connections
Returns
-------
IListener
The created listener
"""
return CircuitV2Listener(self.host, self.protocol, self.config)
class CircuitV2Listener(Service, IListener):
"""Listener for incoming relay connections."""
def __init__(
self,
host: IHost,
protocol: CircuitV2Protocol,
config: RelayConfig,
) -> None:
"""
Initialize the Circuit v2 listener.
Parameters
----------
host : IHost
The libp2p host this listener is running on
protocol : CircuitV2Protocol
The Circuit v2 protocol instance
config : RelayConfig
Relay configuration
"""
super().__init__()
self.host = host
self.protocol = protocol
self.config = config
self.multiaddrs: list[
multiaddr.Multiaddr
] = [] # Store multiaddrs as Multiaddr objects
async def handle_incoming_connection(
self,
stream: INetStream,
remote_peer_id: ID,
) -> RawConnection:
"""
Handle an incoming relay connection.
Parameters
----------
stream : INetStream
The incoming stream
remote_peer_id : ID
The remote peer's ID
Returns
-------
RawConnection
The established connection
Raises
------
ConnectionError
If the connection cannot be established
"""
if not self.config.enable_stop:
raise ConnectionError("Stop role is not enabled")
try:
# Read STOP message
msg_bytes = await stream.read()
stop_msg = StopMessage()
stop_msg.ParseFromString(msg_bytes)
if stop_msg.type != StopMessage.CONNECT:
raise ConnectionError("Invalid STOP message type")
# Create raw connection
return RawConnection(stream=stream, initiator=False)
except Exception as e:
await stream.close()
raise ConnectionError(f"Failed to handle incoming connection: {str(e)}")
async def run(self) -> None:
"""Run the listener service."""
# Implementation would go here
async def listen(self, maddr: multiaddr.Multiaddr, nursery: trio.Nursery) -> bool:
"""
Start listening on the given multiaddr.
Parameters
----------
maddr : multiaddr.Multiaddr
The multiaddr to listen on
nursery : trio.Nursery
The nursery to run tasks in
Returns
-------
bool
True if listening successfully started
"""
# Convert string to Multiaddr if needed
addr = (
maddr
if isinstance(maddr, multiaddr.Multiaddr)
else multiaddr.Multiaddr(maddr)
)
self.multiaddrs.append(addr)
return True
def get_addrs(self) -> tuple[multiaddr.Multiaddr, ...]:
"""
Get the listening addresses.
Returns
-------
tuple[multiaddr.Multiaddr, ...]
Tuple of listening multiaddresses
"""
return tuple(self.multiaddrs)
async def close(self) -> None:
"""Close the listener."""
self.multiaddrs.clear()
await self.manager.stop()
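A usage sketch for the transport (illustrative; host, protocol, and config construction are elided, and target_peer_id/relay_id are placeholder IDs):

transport = CircuitV2Transport(host, protocol, config)

# Dial by multiaddr: the peer ID is taken from the /p2p/ component.
conn = await transport.dial(multiaddr.Multiaddr(f"/p2p/{target_peer_id}"))

# Or pin a specific relay instead of letting _select_relay() pick one:
conn = await transport.dial_peer_info(
    PeerInfo(target_peer_id, []),
    relay_peer_id=relay_id,
)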

View File

@ -1,4 +1,7 @@
+from collections.abc import Callable
+
 from libp2p.abc import (
+    IPeerStore,
     IRawConnection,
     ISecureConn,
 )
@ -6,6 +9,7 @@ from libp2p.crypto.exceptions import (
     MissingDeserializerError,
 )
 from libp2p.crypto.keys import (
+    KeyPair,
     PrivateKey,
     PublicKey,
 )
@ -30,11 +34,15 @@ from libp2p.network.connection.exceptions import (
 from libp2p.peer.id import (
     ID,
 )
+from libp2p.peer.peerstore import (
+    PeerStoreError,
+)
 from libp2p.security.base_session import (
     BaseSession,
 )
 from libp2p.security.base_transport import (
     BaseSecureTransport,
+    default_secure_bytes_provider,
 )
 from libp2p.security.exceptions import (
     HandshakeFailure,
@ -102,6 +110,7 @@ async def run_handshake(
     conn: IRawConnection,
     is_initiator: bool,
     remote_peer_id: ID | None,
+    peerstore: IPeerStore | None = None,
 ) -> ISecureConn:
     """Raise `HandshakeFailure` when handshake failed."""
     msg = make_exchange_message(local_private_key.get_public_key())
@ -164,7 +173,14 @@
         conn=conn,
     )

-    # TODO: Store `pubkey` and `peer_id` to `PeerStore`
+    # Store `pubkey` and `peer_id` to `PeerStore`
+    if peerstore is not None:
+        try:
+            peerstore.add_pubkey(received_peer_id, received_pubkey)
+        except PeerStoreError:
+            # If peer ID and pubkey don't match, it would have already been caught above
+            # This might happen if the peer is already in the store
+            pass

     return secure_conn
@ -175,6 +191,18 @@ class InsecureTransport(BaseSecureTransport):
     transport does not add any additional security.
     """

+    def __init__(
+        self,
+        local_key_pair: KeyPair,
+        secure_bytes_provider: Callable[[int], bytes] | None = None,
+        peerstore: IPeerStore | None = None,
+    ) -> None:
+        # If secure_bytes_provider is None, use the default one
+        if secure_bytes_provider is None:
+            secure_bytes_provider = default_secure_bytes_provider
+        super().__init__(local_key_pair, secure_bytes_provider)
+        self.peerstore = peerstore
+
     async def secure_inbound(self, conn: IRawConnection) -> ISecureConn:
         """
         Secure the connection, either locally or by communicating with opposing
@ -183,8 +211,9 @@ class InsecureTransport(BaseSecureTransport):

         :return: secure connection object (that implements secure_conn_interface)
         """
+        # For inbound connections, we don't know the remote peer ID yet
         return await run_handshake(
-            self.local_peer, self.local_private_key, conn, False, None
+            self.local_peer, self.local_private_key, conn, False, None, self.peerstore
         )

     async def secure_outbound(self, conn: IRawConnection, peer_id: ID) -> ISecureConn:
@ -195,7 +224,7 @@ class InsecureTransport(BaseSecureTransport):

         :return: secure connection object (that implements secure_conn_interface)
         """
         return await run_handshake(
-            self.local_peer, self.local_private_key, conn, True, peer_id
+            self.local_peer, self.local_private_key, conn, True, peer_id, self.peerstore
         )
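The net effect of this change, as a minimal sketch (assuming the concrete PeerStore from libp2p.peer.peerstore and the create_new_key_pair helper from libp2p.crypto.secp256k1; raw_conn and remote_peer_id are placeholders):

from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.peer.peerstore import PeerStore

peerstore = PeerStore()
transport = InsecureTransport(create_new_key_pair(), peerstore=peerstore)

# After a successful handshake the remote peer's public key is recorded
# as a side effect and becomes available to other components:
secure_conn = await transport.secure_outbound(raw_conn, remote_peer_id)
pubkey = peerstore.pubkey(remote_peer_id)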

View File

@ -31,9 +31,6 @@ from libp2p.stream_muxer.yamux.yamux import (
     Yamux,
 )
-
-# FIXME: add negotiate timeout to `MuxerMultistream`
-DEFAULT_NEGOTIATE_TIMEOUT = 60


 class MuxerMultistream:
     """

View File

@ -98,16 +98,32 @@ class YamuxStream(IMuxedStream):
         # Flow control: Check if we have enough send window
         total_len = len(data)
         sent = 0
+        logging.debug(f"Stream {self.stream_id}: Starts writing {total_len} bytes")
         while sent < total_len:
+            # Wait for available window with timeout
+            timeout = False
             async with self.window_lock:
-                # Wait for available window
-                while self.send_window == 0 and not self.closed:
-                    # Release lock while waiting
-                    self.window_lock.release()
-                    await trio.sleep(0.01)
-                    await self.window_lock.acquire()
+                if self.send_window == 0:
+                    logging.debug(
+                        f"Stream {self.stream_id}: Window is zero, waiting for update"
+                    )
+                    # Release lock and wait with timeout
+                    self.window_lock.release()
+                    # To avoid re-acquiring the lock immediately,
+                    with trio.move_on_after(5.0) as cancel_scope:
+                        while self.send_window == 0 and not self.closed:
+                            await trio.sleep(0.01)
+                    # If we timed out, cancel the scope
+                    timeout = cancel_scope.cancelled_caught
+                    # Re-acquire lock
+                    await self.window_lock.acquire()
+
+                # If we timed out waiting for window update, raise an error
+                if timeout:
+                    raise MuxedStreamError(
+                        "Timed out waiting for window update after 5 seconds."
+                    )

                 if self.closed:
                     raise MuxedStreamError("Stream is closed")
@ -123,25 +139,45 @@
                 await self.conn.secured_conn.write(header + chunk)
                 sent += to_send

-                # If window is getting low, consider updating
-                if self.send_window < DEFAULT_WINDOW_SIZE // 2:
-                    await self.send_window_update()
-
-    async def send_window_update(self, increment: int | None = None) -> None:
-        """Send a window update to peer."""
-        if increment is None:
-            increment = DEFAULT_WINDOW_SIZE - self.recv_window
+    async def send_window_update(self, increment: int, skip_lock: bool = False) -> None:
+        """
+        Send a window update to peer.
+
+        param:increment: The amount to increment the window size by.
+            If None, uses the difference between DEFAULT_WINDOW_SIZE
+            and current receive window.
+        param:skip_lock (bool): If True, skips acquiring window_lock.
+            This should only be used when calling from a context
+            that already holds the lock.
+        """
         if increment <= 0:
+            # If increment is zero or negative, skip sending update
+            logging.debug(
+                f"Stream {self.stream_id}: Skipping window update "
+                f"(increment={increment})"
+            )
             return

-        async with self.window_lock:
-            self.recv_window += increment
-            header = struct.pack(
-                YAMUX_HEADER_FORMAT, 0, TYPE_WINDOW_UPDATE, 0, self.stream_id, increment
-            )
-            await self.conn.secured_conn.write(header)
+        logging.debug(
+            f"Stream {self.stream_id}: Sending window update with increment={increment}"
+        )
+
+        async def _do_window_update() -> None:
+            header = struct.pack(
+                YAMUX_HEADER_FORMAT,
+                0,
+                TYPE_WINDOW_UPDATE,
+                0,
+                self.stream_id,
+                increment,
+            )
+            await self.conn.secured_conn.write(header)
+
+        if skip_lock:
+            await _do_window_update()
+        else:
+            async with self.window_lock:
+                await _do_window_update()

     async def read(self, n: int | None = -1) -> bytes:
         # Handle None value for n by converting it to -1
         if n is None:
@ -154,55 +190,68 @@
             )
             raise MuxedStreamEOF("Stream is closed for receiving")

+        # If reading until EOF (n == -1), block until stream is closed
         if n == -1:
-            while not self.recv_closed and not self.conn.event_shutting_down.is_set():
+            data = b""
+            while not self.conn.event_shutting_down.is_set():
                 # Check if there's data in the buffer
                 buffer = self.conn.stream_buffers.get(self.stream_id)
-                if buffer and len(buffer) > 0:
-                    # Wait for closure even if data is available
-                    logging.debug(
-                        f"Stream {self.stream_id}:Waiting for FIN before returning data"
-                    )
-                    await self.conn.stream_events[self.stream_id].wait()
-                    self.conn.stream_events[self.stream_id] = trio.Event()
-                else:
-                    # No data, wait for data or closure
-                    logging.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
-                    await self.conn.stream_events[self.stream_id].wait()
-                    self.conn.stream_events[self.stream_id] = trio.Event()

-            # After loop, check if stream is closed or shutting down
-            async with self.conn.streams_lock:
-                if self.conn.event_shutting_down.is_set():
-                    logging.debug(f"Stream {self.stream_id}: Connection shutting down")
-                    raise MuxedStreamEOF("Connection shut down")
-                if self.closed:
-                    if self.reset_received:
-                        logging.debug(f"Stream {self.stream_id}: Stream was reset")
-                        raise MuxedStreamReset("Stream was reset")
-                    else:
-                        logging.debug(
-                            f"Stream {self.stream_id}: Stream closed cleanly (EOF)"
-                        )
-                        raise MuxedStreamEOF("Stream closed cleanly (EOF)")
-                buffer = self.conn.stream_buffers.get(self.stream_id)
+                # If buffer is not available, check if stream is closed
                 if buffer is None:
-                    logging.debug(
-                        f"Stream {self.stream_id}: Buffer gone, assuming closed"
-                    )
+                    logging.debug(f"Stream {self.stream_id}: No buffer available")
                     raise MuxedStreamEOF("Stream buffer closed")

+                # If we have data in buffer, process it
+                if len(buffer) > 0:
+                    chunk = bytes(buffer)
+                    buffer.clear()
+                    data += chunk
+                    # Send window update for the chunk we just read
+                    async with self.window_lock:
+                        self.recv_window += len(chunk)
+                        logging.debug(f"Stream {self.stream_id}: Update {len(chunk)}")
+                        await self.send_window_update(len(chunk), skip_lock=True)
+
+                # If stream is closed (FIN received) and buffer is empty, break
                 if self.recv_closed and len(buffer) == 0:
-                    logging.debug(f"Stream {self.stream_id}: EOF reached")
-                    raise MuxedStreamEOF("Stream is closed for receiving")
+                    logging.debug(f"Stream {self.stream_id}: Closed with empty buffer")
+                    break

-                # Return all buffered data
-                data = bytes(buffer)
-                buffer.clear()
-                logging.debug(f"Stream {self.stream_id}: Returning {len(data)} bytes")
+                # If stream was reset, raise reset error
+                if self.reset_received:
+                    logging.debug(f"Stream {self.stream_id}: Stream was reset")
+                    raise MuxedStreamReset("Stream was reset")
+
+                # Wait for more data or stream closure
+                logging.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
+                await self.conn.stream_events[self.stream_id].wait()
+                self.conn.stream_events[self.stream_id] = trio.Event()
+
+            # After loop exit, first check if we have data to return
+            if data:
+                logging.debug(
+                    f"Stream {self.stream_id}: Returning {len(data)} bytes after loop"
+                )
                 return data

-        # For specific size read (n > 0), return available data immediately
-        return await self.conn.read_stream(self.stream_id, n)
+            # No data accumulated, now check why we exited the loop
+            if self.conn.event_shutting_down.is_set():
+                logging.debug(f"Stream {self.stream_id}: Connection shutting down")
+                raise MuxedStreamEOF("Connection shut down")
+
+            # Return empty data
+            return b""
+        else:
+            data = await self.conn.read_stream(self.stream_id, n)
+            async with self.window_lock:
+                self.recv_window += len(data)
+                logging.debug(
+                    f"Stream {self.stream_id}: Sending window update after read, "
+                    f"increment={len(data)}"
+                )
+                await self.send_window_update(len(data), skip_lock=True)
+            return data

     async def close(self) -> None:
         if not self.send_closed:
@ -493,7 +542,7 @@
                     f"type={typ}, flags={flags}, stream_id={stream_id},"
                     f"length={length}"
                 )
-                if typ == TYPE_DATA and flags & FLAG_SYN:
+                if (typ == TYPE_DATA or typ == TYPE_WINDOW_UPDATE) and flags & FLAG_SYN:
                     async with self.streams_lock:
                         if stream_id not in self.streams:
                             stream = YamuxStream(stream_id, self, False)
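The heart of the write-side fix is the bounded wait; the same trio pattern in isolation (a minimal sketch, not the library's API):

import trio

async def wait_for_window(stream, deadline: float = 5.0) -> None:
    # move_on_after cancels its body once the deadline passes;
    # cancelled_caught then reports whether that happened.
    with trio.move_on_after(deadline) as cancel_scope:
        while stream.send_window == 0 and not stream.closed:
            await trio.sleep(0.01)
    if cancel_scope.cancelled_caught:
        raise TimeoutError("no window update within deadline")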

View File

@ -26,6 +26,7 @@ LISTEN_MADDR = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
 FLOODSUB_PROTOCOL_ID = floodsub.PROTOCOL_ID
 GOSSIPSUB_PROTOCOL_ID = gossipsub.PROTOCOL_ID
+GOSSIPSUB_PROTOCOL_ID_V1 = gossipsub.PROTOCOL_ID_V11


 class GossipsubParams(NamedTuple):
@ -40,6 +41,10 @@ class GossipsubParams(NamedTuple):
     heartbeat_interval: float = 0.5
     direct_connect_initial_delay: float = 0.1
     direct_connect_interval: int = 300
+    do_px: bool = False
+    px_peers_count: int = 16
+    prune_back_off: int = 60
+    unsubscribe_back_off: int = 10


 GOSSIPSUB_PARAMS = GossipsubParams()
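Since the new fields keep peer exchange off by default, a fixture that exercises the Gossipsub v1.1 behaviour opts in explicitly, e.g.:

px_params = GossipsubParams(
    do_px=True,
    px_peers_count=16,
    prune_back_off=60,
    unsubscribe_back_off=10,
)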

View File

@ -87,14 +87,16 @@ async def connect(node1: IHost, node2: IHost) -> None:
     addr = node2.get_addrs()[0]
     info = info_from_p2p_addr(addr)

-    # Add retry logic for more robust connection
+    # Add retry logic for more robust connection with timeout
     max_retries = 3
     retry_delay = 0.2
     last_error = None

     for attempt in range(max_retries):
         try:
-            await node1.connect(info)
+            # Use timeout for each connection attempt
+            with trio.move_on_after(5):  # 5 second timeout
+                await node1.connect(info)

             # Verify connection is established in both directions
             if (
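Because trio.move_on_after returns silently on timeout, the surrounding retry loop (truncated in the hunk above) has to notice the cancellation itself; roughly, as a sketch:

for attempt in range(max_retries):
    try:
        with trio.move_on_after(5) as scope:  # 5 second timeout
            await node1.connect(info)
        if not scope.cancelled_caught:
            break  # connected within the deadline
    except Exception as e:
        last_error = e
    await trio.sleep(retry_delay)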

View File

@ -48,11 +48,16 @@ class TransportUpgrader:
     # TODO: Figure out what to do with this function.

     async def upgrade_security(
-        self, raw_conn: IRawConnection, peer_id: ID, is_initiator: bool
+        self,
+        raw_conn: IRawConnection,
+        is_initiator: bool,
+        peer_id: ID | None = None,
     ) -> ISecureConn:
         """Upgrade conn to a secured connection."""
         try:
             if is_initiator:
+                if peer_id is None:
+                    raise ValueError("peer_id must be provided for outbound connection")
                 return await self.security_multistream.secure_outbound(
                     raw_conn, peer_id
                 )
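Call sites then read as follows (a sketch; upgrader and remote_id are placeholders): inbound upgrades may omit the peer ID, outbound ones must still supply it.

inbound = await upgrader.upgrade_security(raw_conn, is_initiator=False)
outbound = await upgrader.upgrade_security(
    raw_conn, is_initiator=True, peer_id=remote_id
)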

View File

@ -1 +0,0 @@
The `NetStream.state` property is now async and requires `await`. Update any direct state access to use `await stream.state`.

View File

@ -1 +0,0 @@
Added proper state management and resource cleanup to `NetStream`, fixing memory leaks and improving error handling.

View File

@ -0,0 +1 @@
Added support for ``Kademlia DHT`` in py-libp2p.

View File

@ -1 +0,0 @@
Allow passing `listen_addrs` to `new_swarm` to customize swarm listening behavior.

View File

@ -1 +0,0 @@
Modernizes several aspects of the project, notably using ``pyproject.toml`` for project info instead of ``setup.py``, using ``ruff`` to replace several separate linting tools, and ``pyrefly`` in addition to ``mypy`` for typing. Also includes changes across the codebase to conform to new linting and typing rules.

View File

@ -1 +0,0 @@
Removes support for Python 3.9 and updates some code conventions, notably using the ``|`` operator in typing instead of ``Optional`` or ``Union``.

View File

@ -0,0 +1 @@
Limit concurrency in `push_identify_to_peers` to prevent resource congestion under high peer counts.

View File

@ -1,2 +0,0 @@
Feature: Support for sending `ls` command over `multistream-select` to list supported protocols from remote peer.
This allows inspecting which protocol handlers a peer supports at runtime.

View File

@ -1 +0,0 @@
implement AsyncContextManager for IMuxedStream to support async with

View File

@ -0,0 +1,7 @@
Store public key and peer ID in peerstore during handshake
Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.
Added test cases to verify:
1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet

View File

@ -1 +0,0 @@
feat: add method to compute time since last message published by a peer and remove fanout peers based on ttl.

View File

@ -0,0 +1,6 @@
Fixed several flow-control and concurrency issues in the `YamuxStream` class. Previously, stress-testing revealed that transferring data over `DEFAULT_WINDOW_SIZE` would break the stream due to inconsistent window update handling and lock management. The fixes include:
- Removed sending of window updates during writes to maintain correct flow-control.
- Added proper timeout handling when releasing and acquiring locks to prevent concurrency errors.
- Corrected the `read` function to properly handle window updates for both `read_until_EOF` and `read_n_bytes`.
- Added event logging at `send_window_updates` and `waiting_for_window_updates` for better observability.

View File

@ -1 +0,0 @@
implement blacklist management for `pubsub.Pubsub` with methods to get, add, remove, check, and clear blacklisted peer IDs.

View File

@ -0,0 +1 @@
Added support for ``Multicast DNS`` in py-libp2p

View File

@ -1 +0,0 @@
fix: remove expired peers from peerstore based on TTL

View File

@ -1 +0,0 @@
Updated examples to automatically use random port, when `-p` flag is not given

View File

@ -0,0 +1 @@
Refactored gossipsub heartbeat logic to use a single helper method `_handle_topic_heartbeat` that handles both fanout and gossip heartbeats.

View File

@ -0,0 +1 @@
Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity.

View File

@ -0,0 +1,2 @@
Reordered the arguments to `upgrade_security` to place `is_initiator` before `peer_id`, and made `peer_id` optional.
This allows the method to reflect the fact that peer identity is not required for inbound connections.

View File

@ -0,0 +1 @@
Uses the `decapsulate` method of the `Multiaddr` class to clean up the observed address.

View File

@ -0,0 +1 @@
Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic.

View File

@ -0,0 +1 @@
Optimized pubsub message writing by implementing a write_msg() method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub.

View File

@ -0,0 +1 @@
added peer exchange and backoff logic as part of Gossipsub v1.1 upgrade

View File

@ -0,0 +1,4 @@
Add timeout wrappers in:
1. multiselect.py: `negotiate` function
2. multiselect_client.py: `select_one_of` , `query_multistream_command` functions
to prevent indefinite hangs when a remote peer does not respond.

View File

@ -0,0 +1 @@
align stream creation logic with yamux specification

View File

@ -0,0 +1 @@
Fixed an issue in `Pubsub` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior.

Some files were not shown because too many files have changed in this diff.