166 Commits

Author SHA1 Message Date
9ed44f5fa3 Merge pull request #753 from lla-dane/feat/certified-addrbook
WIP: Certified-Address-Book interface for Peer-Store
2025-07-26 23:41:28 +05:30
8786f06862 added newsfragment 2025-07-26 22:38:28 +05:30
8c96c5a941 Add the periodic peer-store cleanup in all the examples 2025-07-26 22:38:28 +05:30
16445714f7 overwrite old_addr with new_addrs in consume_peer_record 2025-07-26 22:38:28 +05:30
64bc388b33 added peer-store cleanup task in ping example 2025-07-26 22:38:28 +05:30
09e151aafc Added test for peer-store cleanup task 2025-07-26 22:38:28 +05:30
2d335d4394 Integrated Signed-peer-record transfer with identify/identify-push 2025-07-26 22:38:28 +05:30
8b8b051885 batch operations for consume_peer_record 2025-07-26 22:38:28 +05:30
07c8d4cd1f added periodic cleanup task 2025-07-26 22:38:28 +05:30
09e6feea8e merge new addresses with existing ones, in consume_peer_record 2025-07-26 22:38:28 +05:30
601a8a3ef0 enforce_peer_record_limit 2025-07-26 22:38:28 +05:30
9d597012cc fixed the linter <> protobuf issues 2025-07-26 22:38:28 +05:30
8625226be8 fix merge conflicts 2025-07-26 22:38:28 +05:30
c2b1738cd9 fix sphinx/docutils bugs 2025-07-26 22:38:28 +05:30
83acc38281 fix tox bugs 2025-07-26 22:38:28 +05:30
1899dac84c fix tox bugs 2025-07-26 22:38:28 +05:30
aab2a0b603 Completed: CertifiedAddrBook interface with related tests 2025-07-26 22:38:28 +05:30
bab08c0900 added test for Envelope.equal 2025-07-26 22:38:28 +05:30
6431fb8788 Implemented: Envelope wrapper class + linter hacks for protobuf checks 2025-07-26 22:38:28 +05:30
c8053417d5 remove the big google-protobuf file 2025-07-26 22:38:28 +05:30
6eba9d8ca0 downgrade the peer-record protobuf files to v@25.3 2025-07-26 22:38:27 +05:30
0e1b738cbb update protobuf-version 2025-07-26 22:38:27 +05:30
2ff5ae9c90 added hacks for linting errors 2025-07-26 22:38:27 +05:30
ecc443dcfe linter respacing 2025-07-26 22:38:27 +05:30
aa6039bcd3 PeerRecord class with ProtoBuff implemented 2025-07-26 22:38:27 +05:30
8352d19113 Merge pull request #752 from Minimega12121/todo/handletimeout
todo: handle timeout
2025-07-26 21:16:12 +05:30
ceb9f7d3f7 Merge branch 'main' into todo/handletimeout 2025-07-26 20:54:53 +05:30
9b667bd472 Merge pull request #711 from sumanjeet0012/feature/bootstrap
feat: Implemented Bootstrap module in py-libp2p
2025-07-26 01:56:16 +05:30
eca548851b added new fragment and tests 2025-07-25 16:19:29 +05:30
e91f458446 Enhance peer discovery logging and address resolution handling in BootstrapDiscovery 2025-07-24 00:11:05 +05:30
0416572457 Merge branch 'main' into feature/bootstrap 2025-07-21 20:55:51 +05:30
39375fb338 Merge branch 'main' into todo/handletimeout 2025-07-21 08:17:08 -07:00
8bf261ca77 Merge pull request #766 from acul71/py-multiaddr
feat: add py-multiaddr from git
2025-07-21 08:06:04 -07:00
3a927c8419 Merge branch 'main' into feature/bootstrap 2025-07-21 06:43:34 -07:00
ec92af20e7 Merge branch 'main' into py-multiaddr 2025-07-21 06:27:11 -07:00
01db5d5fa0 Merge pull request #785 from acul71/feat/issue-784-identify-push-raw-format
feat: Add identify-push raw format support and yamux logging improvements
2025-07-21 06:27:00 -07:00
21ee417793 Pin py-multiaddr dependency to specific git commit db8124e2321f316d3b7d2733c7df11d6ad9c03e6 2025-07-20 23:48:16 +02:00
37e4fee9f8 feat: Add identify-push raw format support and yamux logging improvements
- Add comprehensive integration tests for identify-push protocol
- Support both raw protobuf and varint message formats
- Improve yamux logging integration with LIBP2P_DEBUG
- Fix RawConnError handling to reduce log noise
- Add Ctrl+C handling to identify examples
- Enhance identify-push listener/dialer demo

Fixes: #784
2025-07-20 20:19:18 +02:00
c277cce2ed Merge branch 'main' into feature/bootstrap 2025-07-20 04:39:56 -07:00
048e6deb96 fix: utf-8 in reading in py-cid 2025-07-19 23:24:39 +00:00
2dc2dd4670 Merge branch 'main' into py-multiaddr 2025-07-19 02:22:39 -07:00
e6a355d395 Merge pull request #748 from Jineshbansal/add-read-write-lock
TODO: add read/write lock
2025-07-19 02:04:52 -07:00
7b181f3ce5 Merge branch 'main' into add-read-write-lock 2025-07-18 23:53:12 -07:00
0606788ab6 Merge pull request #779 from acul71/fix/issue-778-Incorrect_handling_of_raw_format_in_identify
Fix/issue 778 incorrect handling of raw format in identify
2025-07-18 22:11:58 -07:00
7d62a2f558 Merge branch 'main' into fix/issue-778-Incorrect_handling_of_raw_format_in_identify 2025-07-19 04:25:48 +02:00
26fd169ccc doc: newsfragment raw identify message 2025-07-19 04:25:06 +02:00
99db5b309f fix raw format in identify and tests 2025-07-19 04:11:27 +02:00
7cfe5b9dc7 style: add new line within newsfragment 2025-07-19 01:19:23 +02:00
092b9c0c57 chore(newsfragment): add entry to the release notes 2025-07-19 01:19:23 +02:00
fcf0546831 style: enforce consistent import block 2025-07-19 01:19:23 +02:00
85bad2d0ae replace: attributes with cache cached_property 2025-07-19 01:19:23 +02:00
11560f5cc9 TODO: throttle on async validators (#755)
* fixed todo: throttle on async validators

* added test: validate message respects concurrency limit

* added newsfragment

* added configurable validator semaphore in the PubSub constructor

* added the concurrency-checker in the original test-validate-msg test case

* separate out a _run_async_validator function

* remove redundant run_async_validator
2025-07-18 06:01:28 -06:00
3507531344 chore: clarify newline requirement in newsfragments README.md (#775)
* chore: clarify newline requirement in README

Small change in newsfragments README.md, that reduces some possible room for pull-request tox workflow errors.

* style: remove double backticks for single backticks

the linter strikes again XD.

* docs: clarify trailing newline requirement in newsfragments for lint checks

---------

Co-authored-by: Manu Sheel Gupta <manusheel.edu@gmail.com>
2025-07-17 20:43:00 -06:00
c9162beb2b add grave that were removed by mistake 2025-07-17 20:55:49 +05:30
f587e50cab Merge branch 'main' into todo/handletimeout 2025-07-17 02:36:44 -07:00
d1a0f4f767 Merge branch 'main' into py-multiaddr 2025-07-17 02:32:52 -07:00
3ca27c6e93 Merge pull request #772 from LVivona/replace/peerID/attribute
removal: private attribute for cached_property
2025-07-17 01:58:36 -07:00
b4482e1a5e Merge branch 'main' into replace/peerID/attribute 2025-07-17 01:47:06 -07:00
ae82895d86 style: add new line within newsfragment 2025-07-16 22:12:05 -04:00
9f40d97a05 chore(newsfragment): add entry to the release notes 2025-07-16 22:08:25 -04:00
6fe28dcdd3 Merge branch 'main' into py-multiaddr 2025-07-17 02:39:05 +02:00
41b1ecb67c Merge branch 'main' into feature/bootstrap 2025-07-16 15:00:35 -07:00
e3c9b4bd54 Merge branch 'main' into add-read-write-lock 2025-07-16 14:59:44 -07:00
e132b154e3 Merge branch 'main' into todo/handletimeout 2025-07-16 14:59:24 -07:00
62ea3bbf9a Merge pull request #762 from acul71/identify-fix-varint-go
feat: add length-prefixed support to identify protocol
2025-07-16 14:53:08 -07:00
430527625b Merge branch 'main' into replace/peerID/attribute 2025-07-16 17:15:10 -04:00
4115d033a8 feat: identify identify/push raw-format fix and tests 2025-07-16 20:22:45 +02:00
93fc063e70 Merge branch 'main' into todo/handletimeout 2025-07-16 10:11:23 -07:00
4bd24621f0 Merge branch 'main' into identify-fix-varint-go 2025-07-16 10:10:24 -07:00
5315816521 Merge branch 'main' into add-read-write-lock 2025-07-16 09:38:10 -07:00
d5797572ea Merge pull request #760 from Jineshbansal/improve-error-message
Improve error message
2025-07-16 09:35:20 -07:00
311b750511 add newsfragment file 2025-07-16 20:54:22 +05:30
42f07ae1ab Merge branch 'main' into add-read-write-lock 2025-07-15 14:54:29 -07:00
773962c070 Merge branch 'main' into todo/handletimeout 2025-07-15 14:53:48 -07:00
8ccf58bb83 Merge branch 'main' into improve-error-message 2025-07-15 14:50:28 -07:00
06f0c7d35c Merge branch 'main' into identify-fix-varint-go 2025-07-15 14:50:02 -07:00
ab94e77310 Merge branch 'main' into feature/bootstrap 2025-07-15 14:40:20 -07:00
23622ea1a0 style: enforce consistent import block 2025-07-15 15:28:03 -04:00
6aeb217349 replace: attributes with cache cached_property 2025-07-15 14:59:34 -04:00
003e7bf278 Merge branch 'main' into py-multiaddr 2025-07-15 10:38:01 -07:00
719246c996 Merge pull request #764 from acul71/fix/issue-757-test-peerinfo-valid-cid
fix: added valid CID and fix typecheck
2025-07-15 10:23:51 -07:00
e013e80689 doc: newsfragments 2025-07-15 15:43:48 +00:00
6f33cde9a9 feat: add py-multiaddr from git 2025-07-13 21:56:07 +00:00
9c2560d000 fix: added valid CID and fix typecheck 2025-07-13 21:28:50 +00:00
9f38d48e26 Fix valid bootstrap address in test case 2025-07-14 01:45:12 +05:30
2c1e50428a Merge branch 'feature/bootstrap' of https://github.com/sumanjeet0012/py-libp2p into feature/bootstrap 2025-07-14 01:38:53 +05:30
9e76940e75 Refactor logging configuration to reduce verbosity and improve peer discovery events 2025-07-14 01:38:15 +05:30
53614200bd doc: fix doc issues 2025-07-13 17:43:31 +02:00
41ed0769f6 Merge branch 'main' into identify-fix-varint-go 2025-07-13 17:25:51 +02:00
1c59653946 breaking: identify protocol now uses length-prefixed messages by default; use the use_varint_format param for old raw messages 2025-07-13 17:24:56 +02:00
4bbb08ce2d feat: add length-prefixed protobuf support to identify protocol 2025-07-13 16:13:52 +02:00
912669a924 doc: newsfragment 2025-07-13 16:04:46 +02:00
8ec67289da feat: add length-prefixed protobuf support to identify protocol 2025-07-13 15:55:37 +02:00
9cd3805542 make readwrite more safe 2025-07-13 18:37:44 +05:30
b81168dae9 improve error message 2025-07-13 17:52:05 +05:30
d03bdd75d6 Merge branch 'main' into feature/bootstrap 2025-07-12 07:48:04 -07:00
8ff7bb1f20 Merge branch 'main' into todo/handletimeout 2025-07-12 07:25:08 -07:00
5fcfc677f3 fixme/correct-type (#746)
* fixme/correct-type

* added newsfragment and test
2025-07-11 15:27:17 -06:00
dd14aad47c Add tests for discovery methods in circuit_relay_v2 (#750)
* Add test for direct_connection_relay_discovery

* Add test for mux_method_relay_discovery

* Fix newsfragments
2025-07-11 14:53:27 -06:00
96434d9977 Remove .git 2025-07-10 23:59:26 +02:00
1507100632 Add interoperability test for py-libp2p and js-libp2p with enhanced logging 2025-07-10 23:59:26 +02:00
21db1c3b72 Merge branch 'main' into feature/bootstrap 2025-07-10 20:43:25 +05:30
3592ad308f Merge branch 'main' into add-read-write-lock 2025-07-10 07:52:06 -07:00
9669a92976 Fix formatting and linting issues 2025-07-10 19:25:58 +05:30
2dfee68f20 Refactor bootstrap discovery to use async methods and update bootstrap peers list 2025-07-10 19:24:09 +05:30
505d3b2a8f Bump version: 0.2.8 → 0.2.9 2025-07-09 15:19:54 -06:00
f4eb0158fe Compile release notes for v0.2.9 2025-07-09 15:18:41 -06:00
b716d64184 fix formatting and some naming in newsfragments (#754) 2025-07-09 15:13:16 -06:00
198208aef3 validate and filter bootstrap addresses during discovery initialization 2025-07-09 20:23:47 +05:30
cda163fc48 change ReadWriteLock class 2025-07-09 18:18:37 +05:30
26ed99dafd change tests path 2025-07-09 18:09:07 +05:30
a26fd95854 Merge branch 'feature/bootstrap' of https://github.com/sumanjeet0012/py-libp2p into feature/bootstrap 2025-07-09 01:46:31 +05:30
2965b4e364 DNS resolution working 2025-07-09 01:45:15 +05:30
242998ae9d add test for read-write-lock 2025-07-08 20:06:30 +05:30
5f497c7f5d add file in newsfragments folder 2025-07-08 19:17:43 +05:30
e65e38a3f1 fix: linting error related to read 2025-07-08 19:11:56 +05:30
8fb664bfdf Fix: linting errors 2025-07-08 18:34:30 +05:30
3dcd99a2d1 todo: handle timeout 2025-07-08 17:48:57 +05:30
75abc8b863 run ruff format 2025-07-08 07:35:45 +05:30
91dca97d83 TODO: add read/write lock 2025-07-07 21:55:32 +05:30
80c686ddce Merge branch 'main' into feature/bootstrap 2025-07-07 08:51:53 -07:00
0679efb299 Merge pull request #648 from lla-dane/feat/match-peerstore
WIP: Matching `py-libp2p <-> go-libp2p` PeerStore Implementation
2025-07-06 21:20:00 -07:00
b21591f8d5 remove redundants 2025-07-06 14:45:42 +05:30
d1c31483bd Implemented addr_stream in the peerstore 2025-07-06 14:45:42 +05:30
51c08de1bc test added: clear protocol data 2025-07-06 14:45:42 +05:30
faeacf686a fix typos 2025-07-06 14:45:42 +05:30
9943697054 Added docstrings 2025-07-06 14:45:42 +05:30
ff966bbfa0 Metadata: added test 2025-07-06 14:45:42 +05:30
1b025e552c Key-Book: added tests 2025-07-06 14:45:42 +05:30
4e53327079 Metrics: added tests 2025-07-06 14:45:42 +05:30
3d369bc142 Proto-Book: added tests 2025-07-06 14:45:42 +05:30
5de458482c refactor after rebase 2025-07-06 14:45:42 +05:30
f3d8cbf968 feat: Matching go-libp2p PeerStore implementation 2025-07-06 14:45:42 +05:30
e6f96d32e2 Merge pull request #640 from kaneki003/main
Identifying & resolving race conditions in YamuxStream
2025-07-05 14:49:35 -07:00
51313a5909 Merge branch 'main' into main 2025-07-05 14:39:31 -07:00
8d2b889605 Merge pull request #708 from lla-dane/todo/bounded-nursery
Added tests for identify push concurrency cap under high peer load
2025-07-05 14:28:31 -07:00
bfe3dee781 updated newsfragment 2025-07-04 17:32:48 +05:30
a7d122a0f9 added extra tests for identify push concurrency cap 2025-07-04 17:28:44 +05:30
8bfd4bde94 created concurrency limit configurable 2025-07-04 16:56:35 +05:30
383d7cb722 added tests 2025-07-04 16:56:20 +05:30
a89ba8ef81 added newsfragment 2025-07-04 16:55:56 +05:30
31b6a6f237 todo/bounded nursery in identify-push 2025-07-04 16:55:55 +05:30
5ac4fc1aba separated tests for better understanding 2025-07-03 22:20:35 +05:30
f96fe0c1b6 Merge branch 'main' into main 2025-07-03 01:43:12 -07:00
dcb199a6b7 Merge branch 'main' into feature/bootstrap 2025-07-03 01:31:43 -07:00
16be6fab85 Merge branch 'main' into feature/bootstrap 2025-07-03 00:21:19 +05:30
5a95212697 Merge branch 'main' into main 2025-07-02 10:22:01 -07:00
cbb1e26a4f refactor fixed some lint issues 2025-06-30 23:19:03 +05:30
69a2cb00ba remove obsolete test script and add comprehensive validation tests for bootstrap addresses 2025-06-30 23:12:48 +05:30
ec20ca81dd remove unnecessary files from .gitignore 2025-06-30 22:46:19 +05:30
0038ef99d4 Merge branch 'main' into main 2025-06-30 07:43:32 -07:00
b5ec1bd7ee Merge branch 'main' into feature/bootstrap 2025-06-30 07:35:41 -07:00
ddbd190993 docs: added bootstrap docs in doctree 2025-06-30 11:38:06 +05:30
36be4c354b fix: ensure newline at end of file in Bootstrap peer discovery module documentation 2025-06-30 11:38:05 +05:30
befb2d31db added newsfragments 2025-06-30 11:38:05 +05:30
12ad2dcdf4 Added bootstrap module 2025-06-30 11:37:40 +05:30
4d8afa6448 Merge branch 'main' into main 2025-06-24 14:34:35 -07:00
e50f9fc8e5 Merge branch 'libp2p:main' into main 2025-06-24 18:55:10 +05:30
724375e1fa updated doc-string and reverted mplex-changes 2025-06-24 18:05:15 +05:30
d7cdae8a0f integrated n==-1 case in read() 2025-06-21 17:51:27 +05:30
df17788ec3 resolving build-fails 2025-06-21 14:10:09 +05:30
209deffc8a resolved recv_window updates, added support for read_EOF 2025-06-21 13:40:12 +05:30
0a7e13f0ed Merge branch 'libp2p:main' into main 2025-06-21 13:39:38 +05:30
01b9e89e83 Merge branch 'main' into main 2025-06-11 19:39:06 +05:30
d733b78dba Merge branch 'libp2p:main' into main 2025-06-10 20:17:55 +05:30
e397ce25a6 Updated Yamux impl., added tests for yamux and mplex 2025-06-10 20:12:19 +05:30
117 changed files with 8024 additions and 594 deletions

View File

@@ -0,0 +1,13 @@
libp2p.discovery.bootstrap package
==================================

Submodules
----------

Module contents
---------------

.. automodule:: libp2p.discovery.bootstrap
   :members:
   :undoc-members:
   :show-inheritance:

View File

@@ -7,6 +7,7 @@ Subpackages
 .. toctree::
    :maxdepth: 4

+   libp2p.discovery.bootstrap
    libp2p.discovery.events
    libp2p.discovery.mdns

View File

@@ -3,6 +3,65 @@ Release Notes
 .. towncrier release notes start

py-libp2p v0.2.9 (2025-07-09)
-----------------------------

Breaking Changes
~~~~~~~~~~~~~~~~

- Reordered the arguments to ``upgrade_security`` to place ``is_initiator`` before ``peer_id``, and made ``peer_id`` optional.
  This allows the method to reflect the fact that peer identity is not required for inbound connections. (`#681 <https://github.com/libp2p/py-libp2p/issues/681>`__)

Bugfixes
~~~~~~~~

- Add timeout wrappers in:

  1. ``multiselect.py``: ``negotiate`` function
  2. ``multiselect_client.py``: ``select_one_of``, ``query_multistream_command`` functions

  to prevent indefinite hangs when a remote peer does not respond. (`#696 <https://github.com/libp2p/py-libp2p/issues/696>`__)
- Align stream creation logic with yamux specification (`#701 <https://github.com/libp2p/py-libp2p/issues/701>`__)
- Fixed an issue in ``Pubsub`` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior. (`#702 <https://github.com/libp2p/py-libp2p/issues/702>`__)

Features
~~~~~~~~

- Added support for ``Kademlia DHT`` in py-libp2p. (`#579 <https://github.com/libp2p/py-libp2p/issues/579>`__)
- Limit concurrency in ``push_identify_to_peers`` to prevent resource congestion under high peer counts. (`#621 <https://github.com/libp2p/py-libp2p/issues/621>`__)
- Store public key and peer ID in peerstore during handshake.
  Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.
  Added test cases to verify:

  1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
  2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet (`#631 <https://github.com/libp2p/py-libp2p/issues/631>`__)
- Fixed several flow-control and concurrency issues in the ``YamuxStream`` class. Previously, stress-testing revealed that transferring data over ``DEFAULT_WINDOW_SIZE`` would break the stream due to inconsistent window update handling and lock management. The fixes include:

  - Removed sending of window updates during writes to maintain correct flow-control.
  - Added proper timeout handling when releasing and acquiring locks to prevent concurrency errors.
  - Corrected the ``read`` function to properly handle window updates for both ``read_until_EOF`` and ``read_n_bytes``.
  - Added event logging at ``send_window_updates`` and ``waiting_for_window_updates`` for better observability. (`#639 <https://github.com/libp2p/py-libp2p/issues/639>`__)
- Added support for ``Multicast DNS`` in py-libp2p (`#649 <https://github.com/libp2p/py-libp2p/issues/649>`__)
- Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic. (`#685 <https://github.com/libp2p/py-libp2p/issues/685>`__)
- Optimized pubsub message writing by implementing a ``write_msg()`` method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub. (`#687 <https://github.com/libp2p/py-libp2p/issues/687>`__)
- Added peer exchange and backoff logic as part of Gossipsub v1.1 upgrade (`#690 <https://github.com/libp2p/py-libp2p/issues/690>`__)
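Several entries above share one framing detail: a protobuf payload preceded by its length encoded as an unsigned varint (the ``write_msg()`` optimization here, and the length-prefixed identify format elsewhere in this log). A small self-contained sketch of that framing; the function names are illustrative, not py-libp2p's actual ``libp2p.utils.varint`` API:

```python
def encode_uvarint(n: int) -> bytes:
    # Unsigned LEB128 varint: 7 bits per byte, MSB set on continuation bytes.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)


def write_msg(payload: bytes) -> bytes:
    # Single pre-assembled buffer: varint length prefix + payload, one write.
    return encode_uvarint(len(payload)) + payload


def read_msg(buf: bytes) -> bytes:
    # Decode the varint prefix, then slice out exactly that many bytes.
    shift = length = pos = 0
    while True:
        b = buf[pos]
        pos += 1
        length |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            break
    return buf[pos:pos + length]


framed = write_msg(b"x" * 300)
assert framed[:2] == bytes([0xAC, 0x02])  # 300 encodes as varint AC 02
assert read_msg(framed) == b"x" * 300
```

Pre-building the prefix and payload into one buffer is what lets FloodSub/GossipSub issue a single write instead of two, and it is also why a raw (unprefixed) protobuf reader cannot parse a length-prefixed stream, the format mismatch the identify changes in this log deal with.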

Internal Changes - for py-libp2p Contributors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity. (`#679 <https://github.com/libp2p/py-libp2p/issues/679>`__)
- Added comprehensive tests for pubsub connection utility functions to verify degree limits are enforced, excess peers are handled correctly, and edge cases (degree=0, negative values, empty lists) are managed gracefully. (`#707 <https://github.com/libp2p/py-libp2p/issues/707>`__)
- Added extra tests for identify push concurrency cap under high peer load (`#708 <https://github.com/libp2p/py-libp2p/issues/708>`__)

Miscellaneous Changes
~~~~~~~~~~~~~~~~~~~~~

- `#678 <https://github.com/libp2p/py-libp2p/issues/678>`__, `#684 <https://github.com/libp2p/py-libp2p/issues/684>`__

py-libp2p v0.2.8 (2025-06-10)
-----------------------------

View File

@@ -0,0 +1,136 @@
import argparse
import logging
import secrets
import multiaddr
import trio
from libp2p import new_host
from libp2p.abc import PeerInfo
from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.discovery.events.peerDiscovery import peerDiscovery
# Configure logging
logger = logging.getLogger("libp2p.discovery.bootstrap")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)
logger.addHandler(handler)
# Configure root logger to only show warnings and above to reduce noise
# This prevents verbose DEBUG messages from multiaddr, DNS, etc.
logging.getLogger().setLevel(logging.WARNING)
# Specifically silence noisy libraries
logging.getLogger("multiaddr").setLevel(logging.WARNING)
logging.getLogger("root").setLevel(logging.WARNING)
def on_peer_discovery(peer_info: PeerInfo) -> None:
"""Handler for peer discovery events."""
logger.info(f"🔍 Discovered peer: {peer_info.peer_id}")
logger.debug(f" Addresses: {[str(addr) for addr in peer_info.addrs]}")
# Example bootstrap peers
BOOTSTRAP_PEERS = [
"/dnsaddr/github.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/cloudflare.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/google.com/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip4/128.199.219.111/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip4/104.236.76.40/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip4/178.62.158.247/tcp/4001/p2p/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
"/ip6/2604:a880:1:20::203:d001/tcp/4001/p2p/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip6/2400:6180:0:d0::151:6001/tcp/4001/p2p/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/p2p/QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm",
]
async def run(port: int, bootstrap_addrs: list[str]) -> None:
"""Run the bootstrap discovery example."""
# Generate key pair
secret = secrets.token_bytes(32)
key_pair = create_new_key_pair(secret)
# Create listen address
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
# Register peer discovery handler
peerDiscovery.register_peer_discovered_handler(on_peer_discovery)
logger.info("🚀 Starting Bootstrap Discovery Example")
logger.info(f"📍 Listening on: {listen_addr}")
logger.info(f"🌐 Bootstrap peers: {len(bootstrap_addrs)}")
print("\n" + "=" * 60)
print("Bootstrap Discovery Example")
print("=" * 60)
print("This example demonstrates connecting to bootstrap peers.")
print("Watch the logs for peer discovery events!")
print("Press Ctrl+C to exit.")
print("=" * 60)
# Create and run host with bootstrap discovery
host = new_host(key_pair=key_pair, bootstrap=bootstrap_addrs)
try:
async with host.run(listen_addrs=[listen_addr]):
# Keep running and log peer discovery events
await trio.sleep_forever()
except KeyboardInterrupt:
logger.info("👋 Shutting down...")
def main() -> None:
"""Main entry point."""
description = """
Bootstrap Discovery Example for py-libp2p
This example demonstrates how to use bootstrap peers for peer discovery.
Bootstrap peers are predefined peers that help new nodes join the network.
Usage:
python bootstrap.py -p 8000
python bootstrap.py -p 8001 --custom-bootstrap \\
"/ip4/127.0.0.1/tcp/8000/p2p/QmYourPeerID"
"""
parser = argparse.ArgumentParser(
description=description, formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument(
"-p", "--port", default=0, type=int, help="Port to listen on (default: random)"
)
parser.add_argument(
"--custom-bootstrap",
nargs="*",
help="Custom bootstrap addresses (space-separated)",
)
parser.add_argument(
"-v", "--verbose", action="store_true", help="Enable verbose output"
)
args = parser.parse_args()
if args.verbose:
logger.setLevel(logging.DEBUG)
# Use custom bootstrap addresses if provided, otherwise use defaults
bootstrap_addrs = (
args.custom_bootstrap if args.custom_bootstrap else BOOTSTRAP_PEERS
)
try:
trio.run(run, args.port, bootstrap_addrs)
except KeyboardInterrupt:
logger.info("Exiting...")
if __name__ == "__main__":
main()

View File

@@ -43,6 +43,9 @@ async def run(port: int, destination: str) -> None:
     listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
     host = new_host()
     async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
+        # Start the peer-store cleanup task
+        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
+
         if not destination:  # its the server

             async def stream_handler(stream: INetStream) -> None:
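The cleanup task started in the hunk above is a periodic sweep over the peer store. A minimal sketch of what such a task does; this is not the real peerstore API, and `cleanup_expired`, the `(addr, expiry)` record layout, and the bounded `rounds` parameter (a real task loops forever) are all assumptions for illustration:

```python
import asyncio
import time


def cleanup_expired(addrs: dict, now: float) -> dict:
    # Keep only address records whose expiry still lies in the future.
    return {peer: rec for peer, rec in addrs.items() if rec[1] > now}


async def start_cleanup_task(store: dict, interval: float, rounds: int = 1) -> dict:
    # Periodically prune the store; bounded here so the sketch terminates.
    for _ in range(rounds):
        store = cleanup_expired(store, time.monotonic())
        await asyncio.sleep(interval)
    return store


now = time.monotonic()
store = {
    "peer-a": ("/ip4/1.2.3.4/tcp/4001", now + 3600),  # still valid
    "peer-b": ("/ip4/5.6.7.8/tcp/4001", now - 1),     # already expired
}
store = asyncio.run(start_cleanup_task(store, interval=0.01))
print(sorted(store))  # ['peer-a']
```

Running the sweep from the host's nursery (as the diffs do with `nursery.start_soon(..., 60)`) ties its lifetime to the host: when the `async with host.run(...)` block exits, the task is cancelled with it.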

View File

@@ -45,7 +45,10 @@ async def run(port: int, destination: str, seed: int | None = None) -> None:
     secret = secrets.token_bytes(32)
     host = new_host(key_pair=create_new_key_pair(secret))

-    async with host.run(listen_addrs=[listen_addr]):
+    async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
+        # Start the peer-store cleanup task
+        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
+
         print(f"I am {host.get_id().to_string()}")

         if not destination:  # its the server

View File

@@ -1,6 +1,7 @@
 import argparse
 import base64
 import logging
+import sys

 import multiaddr
 import trio
@@ -8,10 +9,13 @@ import trio
 from libp2p import (
     new_host,
 )
-from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID
-from libp2p.identity.identify.pb.identify_pb2 import (
-    Identify,
+from libp2p.identity.identify.identify import (
+    ID as IDENTIFY_PROTOCOL_ID,
+    identify_handler_for,
+    parse_identify_response,
 )
+from libp2p.identity.identify.pb.identify_pb2 import Identify
+from libp2p.peer.envelope import debug_dump_envelope, unmarshal_envelope
 from libp2p.peer.peerinfo import (
     info_from_p2p_addr,
 )
@@ -30,10 +34,11 @@ def decode_multiaddrs(raw_addrs):
     return decoded_addrs

-def print_identify_response(identify_response):
+def print_identify_response(identify_response: Identify):
     """Pretty-print Identify response."""
     public_key_b64 = base64.b64encode(identify_response.public_key).decode("utf-8")
     listen_addrs = decode_multiaddrs(identify_response.listen_addrs)
+    signed_peer_record = unmarshal_envelope(identify_response.signedPeerRecord)
     try:
         observed_addr_decoded = decode_multiaddrs([identify_response.observed_addr])
     except Exception:
@@ -49,8 +54,10 @@ def print_identify_response(identify_response):
         f"    Agent Version: {identify_response.agent_version}"
     )

+    debug_dump_envelope(signed_peer_record)

-async def run(port: int, destination: str) -> None:
+async def run(port: int, destination: str, use_varint_format: bool = True) -> None:
     localhost_ip = "0.0.0.0"

     if not destination:
@ -58,39 +65,159 @@ async def run(port: int, destination: str) -> None:
listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}") listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
host_a = new_host() host_a = new_host()
async with host_a.run(listen_addrs=[listen_addr]): # Set up identify handler with specified format
# Set use_varint_format = False, if want to checkout the Signed-PeerRecord
identify_handler = identify_handler_for(
host_a, use_varint_format=use_varint_format
)
host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, identify_handler)
async with (
host_a.run(listen_addrs=[listen_addr]),
trio.open_nursery() as nursery,
):
# Start the peer-store cleanup task
nursery.start_soon(host_a.get_peerstore().start_cleanup_task, 60)
# Get the actual address and replace 0.0.0.0 with 127.0.0.1 for client
# connections
server_addr = str(host_a.get_addrs()[0])
client_addr = server_addr.replace("/ip4/0.0.0.0/", "/ip4/127.0.0.1/")
format_name = "length-prefixed" if use_varint_format else "raw protobuf"
format_flag = "--raw-format" if not use_varint_format else ""
print( print(
"First host listening. Run this from another console:\n\n" f"First host listening (using {format_name} format). "
f"identify-demo " f"Run this from another console:\n\n"
f"-d {host_a.get_addrs()[0]}\n" f"identify-demo {format_flag} -d {client_addr}\n"
) )
print("Waiting for incoming identify request...") print("Waiting for incoming identify request...")
await trio.sleep_forever()
# Add a custom handler to show connection events
async def custom_identify_handler(stream):
peer_id = stream.muxed_conn.peer_id
print(f"\n🔗 Received identify request from peer: {peer_id}")
# Show remote address in multiaddr format
try:
from libp2p.identity.identify.identify import (
_remote_address_to_multiaddr,
)
remote_address = stream.get_remote_address()
if remote_address:
observed_multiaddr = _remote_address_to_multiaddr(
remote_address
)
# Add the peer ID to create a complete multiaddr
complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
print(f" Remote address: {complete_multiaddr}")
else:
print(f" Remote address: {remote_address}")
except Exception:
print(f" Remote address: {stream.get_remote_address()}")
# Call the original handler
await identify_handler(stream)
print(f"✅ Successfully processed identify request from {peer_id}")
# Replace the handler with our custom one
host_a.set_stream_handler(IDENTIFY_PROTOCOL_ID, custom_identify_handler)
try:
await trio.sleep_forever()
except KeyboardInterrupt:
print("\n🛑 Shutting down listener...")
logger.info("Listener interrupted by user")
return
else: else:
# Create second host (dialer) # Create second host (dialer)
listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}") listen_addr = multiaddr.Multiaddr(f"/ip4/{localhost_ip}/tcp/{port}")
host_b = new_host() host_b = new_host()
async with host_b.run(listen_addrs=[listen_addr]): async with (
host_b.run(listen_addrs=[listen_addr]),
trio.open_nursery() as nursery,
):
# Start the peer-store cleanup task
nursery.start_soon(host_b.get_peerstore().start_cleanup_task, 60)
# Connect to the first host # Connect to the first host
print(f"dialer (host_b) listening on {host_b.get_addrs()[0]}") print(f"dialer (host_b) listening on {host_b.get_addrs()[0]}")
maddr = multiaddr.Multiaddr(destination) maddr = multiaddr.Multiaddr(destination)
info = info_from_p2p_addr(maddr) info = info_from_p2p_addr(maddr)
print(f"Second host connecting to peer: {info.peer_id}") print(f"Second host connecting to peer: {info.peer_id}")
await host_b.connect(info) try:
await host_b.connect(info)
except Exception as e:
error_msg = str(e)
if "unable to connect" in error_msg or "SwarmException" in error_msg:
print(f"\n❌ Cannot connect to peer: {info.peer_id}")
print(f" Address: {destination}")
print(f" Error: {error_msg}")
print(
"\n💡 Make sure the peer is running and the address is correct."
)
return
else:
# Re-raise other exceptions
raise
stream = await host_b.new_stream(info.peer_id, (IDENTIFY_PROTOCOL_ID,))
try:
print("Starting identify protocol...")
# Read the response using the utility function
from libp2p.utils.varint import read_length_prefixed_protobuf
response = await read_length_prefixed_protobuf(
stream, use_varint_format
)
full_response = response
await stream.close()
# Parse the response using the robust protocol-level function
# This handles both old and new formats automatically
identify_msg = parse_identify_response(full_response)
print_identify_response(identify_msg)
except Exception as e:
error_msg = str(e)
print(f"Identify protocol error: {error_msg}")
# Check for specific format mismatch errors
if "Error parsing message" in error_msg or "DecodeError" in error_msg:
print("\n" + "=" * 60)
print("FORMAT MISMATCH DETECTED!")
print("=" * 60)
if use_varint_format:
print(
"You are using length-prefixed format (default) but the "
"listener"
)
print("is using raw protobuf format.")
print(
"\nTo fix this, run the dialer with the --raw-format flag:"
)
print(f"identify-demo --raw-format -d {destination}")
else:
print("You are using raw protobuf format but the listener")
print("is using length-prefixed format (default).")
print(
"\nTo fix this, run the dialer without the --raw-format "
"flag:"
)
print(f"identify-demo -d {destination}")
print("=" * 60)
else:
import traceback
traceback.print_exc()
return
@@ -98,9 +225,12 @@ async def run(port: int, destination: str) -> None:
def main() -> None:
description = """
This program demonstrates the libp2p identify protocol.
First run 'identify-demo -p <PORT> [--raw-format]' to start a listener.
Then run 'identify-demo <ANOTHER_PORT> -d <DESTINATION>'
where <DESTINATION> is the multiaddress shown by the listener.
Use --raw-format to send raw protobuf messages (old format) instead of
length-prefixed protobuf messages (new format, default).
""" """
example_maddr = ( example_maddr = (
@@ -115,12 +245,35 @@ def main() -> None:
type=str,
help=f"destination multiaddr string, e.g. {example_maddr}",
)
parser.add_argument(
"--raw-format",
action="store_true",
help=(
"use raw protobuf format (old format) instead of "
"length-prefixed (new format)"
),
)
args = parser.parse_args()
# Determine format: use raw protobuf (old format) if --raw-format is
# specified, otherwise use varint (length-prefixed, new format)
use_varint_format = not args.raw_format
try:
if args.destination:
# Run in dialer mode
trio.run(run, *(args.port, args.destination, use_varint_format))
else:
# Run in listener mode
trio.run(run, *(args.port, args.destination, use_varint_format))
except KeyboardInterrupt:
print("\n👋 Goodbye!")
logger.info("Application interrupted by user")
except Exception as e:
print(f"\n❌ Error: {str(e)}")
logger.error("Error: %s", str(e))
sys.exit(1)
if __name__ == "__main__":

View File
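Both demos hinge on the difference between the two wire formats: the new (default) format prefixes the serialized protobuf with its byte length encoded as an unsigned varint, while the old `--raw-format` mode writes the bare protobuf bytes. A minimal sketch of that framing, independent of protobuf (the `frame`/`unframe` names are illustrative; the reader the examples actually use is `read_length_prefixed_protobuf` from `libp2p.utils.varint`):

```python
def encode_uvarint(n: int) -> bytes:
    """Encode a non-negative integer as an unsigned LEB128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # set continuation bit: more bytes follow
        else:
            out.append(b)
            return bytes(out)


def decode_uvarint(buf: bytes) -> tuple[int, int]:
    """Decode a varint from buf; return (value, bytes consumed)."""
    result = shift = 0
    for i, b in enumerate(buf):
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i + 1
        shift += 7
    raise ValueError("truncated varint")


def frame(payload: bytes) -> bytes:
    """Length-prefixed ('new') format: varint length, then payload."""
    return encode_uvarint(len(payload)) + payload


def unframe(data: bytes) -> bytes:
    """Strip the varint length prefix and return the payload."""
    length, consumed = decode_uvarint(data)
    return data[consumed : consumed + length]
```

Payloads under 128 bytes need only a single prefix byte; a 300-byte payload gains a 2-byte prefix. The raw ("old") format is simply the payload with no prefix at all, which is why the two framings cannot be mixed on one stream.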

@@ -11,23 +11,26 @@ This example shows how to:
import logging
import multiaddr
import trio
from libp2p import (
new_host,
)
from libp2p.abc import (
INetStream,
)
from libp2p.crypto.secp256k1 import (
create_new_key_pair,
)
from libp2p.custom_types import (
TProtocol,
)
from libp2p.identity.identify.pb.identify_pb2 import (
Identify,
)
from libp2p.identity.identify_push import (
ID_PUSH,
push_identify_to_peer,
)
from libp2p.peer.peerinfo import (
@@ -38,8 +41,145 @@ from libp2p.peer.peerinfo import (
logger = logging.getLogger(__name__)
def create_custom_identify_handler(host, host_name: str):
"""Create a custom identify handler that displays received information."""
async def handle_identify(stream: INetStream) -> None:
peer_id = stream.muxed_conn.peer_id
print(f"\n🔍 {host_name} received identify request from peer: {peer_id}")
# Get the standard identify response using the existing function
from libp2p.identity.identify.identify import (
_mk_identify_protobuf,
_remote_address_to_multiaddr,
)
# Get observed address
observed_multiaddr = None
try:
remote_address = stream.get_remote_address()
if remote_address:
observed_multiaddr = _remote_address_to_multiaddr(remote_address)
except Exception:
pass
# Build the identify protobuf
identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
response_data = identify_msg.SerializeToString()
print(f" 📋 {host_name} identify information:")
if identify_msg.HasField("protocol_version"):
print(f" Protocol Version: {identify_msg.protocol_version}")
if identify_msg.HasField("agent_version"):
print(f" Agent Version: {identify_msg.agent_version}")
if identify_msg.HasField("public_key"):
print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
if identify_msg.listen_addrs:
print(" Listen Addresses:")
for addr_bytes in identify_msg.listen_addrs:
addr = multiaddr.Multiaddr(addr_bytes)
print(f" - {addr}")
if identify_msg.protocols:
print(" Supported Protocols:")
for protocol in identify_msg.protocols:
print(f" - {protocol}")
# Send the response
await stream.write(response_data)
await stream.close()
return handle_identify
def create_custom_identify_push_handler(host, host_name: str):
"""Create a custom identify/push handler that displays received information."""
async def handle_identify_push(stream: INetStream) -> None:
peer_id = stream.muxed_conn.peer_id
print(f"\n📤 {host_name} received identify/push from peer: {peer_id}")
try:
# Read the identify message using the utility function
from libp2p.utils.varint import read_length_prefixed_protobuf
data = await read_length_prefixed_protobuf(stream, use_varint_format=True)
# Parse the identify message
identify_msg = Identify()
identify_msg.ParseFromString(data)
print(" 📋 Received identify information:")
if identify_msg.HasField("protocol_version"):
print(f" Protocol Version: {identify_msg.protocol_version}")
if identify_msg.HasField("agent_version"):
print(f" Agent Version: {identify_msg.agent_version}")
if identify_msg.HasField("public_key"):
print(f" Public Key: {identify_msg.public_key.hex()[:16]}...")
if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
observed_addr = multiaddr.Multiaddr(identify_msg.observed_addr)
print(f" Observed Address: {observed_addr}")
if identify_msg.listen_addrs:
print(" Listen Addresses:")
for addr_bytes in identify_msg.listen_addrs:
addr = multiaddr.Multiaddr(addr_bytes)
print(f" - {addr}")
if identify_msg.protocols:
print(" Supported Protocols:")
for protocol in identify_msg.protocols:
print(f" - {protocol}")
# Update the peerstore with the new information
from libp2p.identity.identify_push.identify_push import (
_update_peerstore_from_identify,
)
await _update_peerstore_from_identify(
host.get_peerstore(), peer_id, identify_msg
)
print(f"{host_name} updated peerstore with new information")
except Exception as e:
print(f" ❌ Error processing identify/push: {e}")
finally:
await stream.close()
return handle_identify_push
async def display_peerstore_info(host, host_name: str, peer_id, description: str):
"""Display peerstore information for a specific peer."""
peerstore = host.get_peerstore()
try:
addrs = peerstore.addrs(peer_id)
except Exception:
addrs = []
try:
protocols = peerstore.get_protocols(peer_id)
except Exception:
protocols = []
print(f"\n📚 {host_name} peerstore for {description}:")
print(f" Peer ID: {peer_id}")
if addrs:
print(" Addresses:")
for addr in addrs:
print(f" - {addr}")
else:
print(" Addresses: None")
if protocols:
print(" Protocols:")
for protocol in protocols:
print(f" - {protocol}")
else:
print(" Protocols: None")
async def main() -> None:
print("\n==== Starting Enhanced Identify-Push Example ====\n")
# Create key pairs for the two hosts
key_pair_1 = create_new_key_pair()
@@ -48,45 +188,57 @@ async def main() -> None:
# Create the first host
host_1 = new_host(key_pair=key_pair_1)
# Set up custom identify and identify/push handlers
host_1.set_stream_handler(
TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_1, "Host 1")
)
host_1.set_stream_handler(
ID_PUSH, create_custom_identify_push_handler(host_1, "Host 1")
)
# Create the second host
host_2 = new_host(key_pair=key_pair_2)
# Set up custom identify and identify/push handlers
host_2.set_stream_handler(
TProtocol("/ipfs/id/1.0.0"), create_custom_identify_handler(host_2, "Host 2")
)
host_2.set_stream_handler(
ID_PUSH, create_custom_identify_push_handler(host_2, "Host 2")
)
# Start listening on random ports using the run context manager
listen_addr_1 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
listen_addr_2 = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
async with (
host_1.run([listen_addr_1]),
host_2.run([listen_addr_2]),
trio.open_nursery() as nursery,
):
# Start the peer-store cleanup task
nursery.start_soon(host_1.get_peerstore().start_cleanup_task, 60)
nursery.start_soon(host_2.get_peerstore().start_cleanup_task, 60)
# Get the addresses of both hosts
addr_1 = host_1.get_addrs()[0]
addr_2 = host_2.get_addrs()[0]
print("🏠 Host Configuration:")
print(f" Host 1: {addr_1}")
print(f" Host 1 Peer ID: {host_1.get_id().pretty()}")
print(f" Host 2: {addr_2}")
print(f" Host 2 Peer ID: {host_2.get_id().pretty()}")
print("\n🔗 Connecting Host 2 to Host 1...")
# Connect host_2 to host_1
peer_info = info_from_p2p_addr(addr_1)
await host_2.connect(peer_info)
print("Host 2 successfully connected to Host 1")
# Run the identify protocol from host_2 to host_1
print("\n🔄 Running identify protocol (Host 2 → Host 1)...")
from libp2p.identity.identify.identify import ID as IDENTIFY_PROTOCOL_ID
stream = await host_2.new_stream(host_1.get_id(), (IDENTIFY_PROTOCOL_ID,))
@@ -94,64 +246,58 @@ async def main() -> None:
await stream.close()
# Run the identify protocol from host_1 to host_2
print("\n🔄 Running identify protocol (Host 1 → Host 2)...")
stream = await host_1.new_stream(host_2.get_id(), (IDENTIFY_PROTOCOL_ID,))
response = await stream.read()
await stream.close()
# Update Host 1's peerstore with Host 2's addresses
identify_msg = Identify()
identify_msg.ParseFromString(response)
peerstore_1 = host_1.get_peerstore()
peer_id_2 = host_2.get_id()
for addr_bytes in identify_msg.listen_addrs:
maddr = multiaddr.Multiaddr(addr_bytes)
peerstore_1.add_addr(peer_id_2, maddr, ttl=3600)
# Display peerstore information before push
await display_peerstore_info(
host_1, "Host 1", peer_id_2, "Host 2 (before push)"
)
# Push identify information from host_1 to host_2
print("\n📤 Host 1 pushing identify information to Host 2...")
try:
success = await push_identify_to_peer(host_1, host_2.get_id())
if success:
print("Identify push completed successfully!")
else:
print("⚠️ Identify push didn't complete successfully")
except Exception as e:
print(f"Error during identify push: {str(e)}")
# Give a moment for the identify/push processing to complete
await trio.sleep(0.5)
# Display peerstore information after push
await display_peerstore_info(host_1, "Host 1", peer_id_2, "Host 2 (after push)")
await display_peerstore_info(
host_2, "Host 2", host_1.get_id(), "Host 1 (after push)"
)
# Give more time for background tasks to finish and connections to stabilize
print("\n⏳ Waiting for background tasks to complete...")
await trio.sleep(1.0)
# Gracefully close connections to prevent connection errors
print("🔌 Closing connections...")
await host_2.disconnect(host_1.get_id())
await trio.sleep(0.2)
print("\n🎉 Example completed successfully!")
if __name__ == "__main__":

View File
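The dialer in the previous file leans on `parse_identify_response` to accept either framing "automatically". One plausible way to sketch such a dual-format reader (a heuristic only, not py-libp2p's actual implementation; `split_frame` is a hypothetical name) is to attempt a varint length prefix and fall back to treating the buffer as a raw message when the prefix does not account for exactly the remaining bytes:

```python
def split_frame(data: bytes) -> bytes:
    """Return the payload, accepting either varint-length-prefixed
    ('new') or raw ('old') framing.

    Heuristic: if a valid varint prefix exactly matches the number of
    bytes that follow it, strip the prefix; otherwise assume raw format.
    """
    value = shift = 0
    for i, b in enumerate(data):
        value |= (b & 0x7F) << shift
        if not b & 0x80:  # last varint byte
            if value == len(data) - i - 1:  # prefix matches payload size
                return data[i + 1 :]
            return data  # no sensible prefix: treat as raw format
        shift += 7
    return data  # buffer ended mid-varint: treat as raw format
```

Note the inherent ambiguity: a raw message whose first byte happens to equal its remaining length would be misclassified, which is why guessing like this is only a fallback and the examples still expose an explicit `--raw-format` switch.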

@@ -41,6 +41,9 @@ from libp2p.identity.identify import (
ID as ID_IDENTIFY,
identify_handler_for,
)
from libp2p.identity.identify.identify import (
_remote_address_to_multiaddr,
)
from libp2p.identity.identify.pb.identify_pb2 import (
Identify,
)
@@ -57,18 +60,46 @@ from libp2p.peer.peerinfo import (
logger = logging.getLogger("libp2p.identity.identify-push-example")
def custom_identify_push_handler_for(host, use_varint_format: bool = True):
"""
Create a custom handler for the identify/push protocol that logs and prints
the identity information received from the dialer.
Args:
host: The libp2p host
use_varint_format: If True, expect length-prefixed format; if False, expect
raw protobuf
""" """
async def handle_identify_push(stream: INetStream) -> None: async def handle_identify_push(stream: INetStream) -> None:
peer_id = stream.muxed_conn.peer_id peer_id = stream.muxed_conn.peer_id
# Get remote address information
try: try:
# Read the identify message from the stream remote_address = stream.get_remote_address()
data = await stream.read() if remote_address:
observed_multiaddr = _remote_address_to_multiaddr(remote_address)
logger.info(
"Connection from remote peer %s, address: %s, multiaddr: %s",
peer_id,
remote_address,
observed_multiaddr,
)
print(f"\n🔗 Received identify/push request from peer: {peer_id}")
# Add the peer ID to create a complete multiaddr
complete_multiaddr = f"{observed_multiaddr}/p2p/{peer_id}"
print(f" Remote address: {complete_multiaddr}")
except Exception as e:
logger.error("Error getting remote address: %s", e)
print(f"\n🔗 Received identify/push request from peer: {peer_id}")
try:
# Use the utility function to read the protobuf message
from libp2p.utils.varint import read_length_prefixed_protobuf
data = await read_length_prefixed_protobuf(stream, use_varint_format)
identify_msg = Identify()
identify_msg.ParseFromString(data)
@@ -117,11 +148,41 @@ def custom_identify_push_handler_for(host):
await _update_peerstore_from_identify(peerstore, peer_id, identify_msg)
logger.info("Successfully processed identify/push from peer %s", peer_id)
print(f"Successfully processed identify/push from peer {peer_id}")
except Exception as e:
error_msg = str(e)
logger.error(
"Error processing identify/push from %s: %s", peer_id, error_msg
)
print(f"\nError processing identify/push from {peer_id}: {error_msg}")
# Check for specific format mismatch errors
if (
"Error parsing message" in error_msg
or "DecodeError" in error_msg
or "ParseFromString" in error_msg
):
print("\n" + "=" * 60)
print("FORMAT MISMATCH DETECTED!")
print("=" * 60)
if use_varint_format:
print(
"You are using length-prefixed format (default) but the "
"dialer is using raw protobuf format."
)
print("\nTo fix this, run the dialer with the --raw-format flag:")
print(
"identify-push-listener-dialer-demo --raw-format -d <ADDRESS>"
)
else:
print("You are using raw protobuf format but the dialer")
print("is using length-prefixed format (default).")
print(
"\nTo fix this, run the dialer without the --raw-format flag:"
)
print("identify-push-listener-dialer-demo -d <ADDRESS>")
print("=" * 60)
finally:
# Close the stream after processing
await stream.close()
@@ -129,9 +190,15 @@ def custom_identify_push_handler_for(host):
return handle_identify_push
async def run_listener(
port: int, use_varint_format: bool = True, raw_format_flag: bool = False
) -> None:
"""Run a host in listener mode."""
format_name = "length-prefixed" if use_varint_format else "raw protobuf"
print(
f"\n==== Starting Identify-Push Listener on port {port} "
f"(using {format_name} format) ====\n"
)
# Create key pair for the listener
key_pair = create_new_key_pair()
@ -139,35 +206,58 @@ async def run_listener(port: int) -> None:
# Create the listener host
host = new_host(key_pair=key_pair)
# Set up the identify and identify/push handlers with specified format
host.set_stream_handler(
ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
)
host.set_stream_handler(
ID_IDENTIFY_PUSH,
custom_identify_push_handler_for(host, use_varint_format=use_varint_format),
)
# Start listening
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
try:
async with host.run([listen_addr]):
addr = host.get_addrs()[0]
logger.info("Listener host ready!")
print("Listener host ready!")
logger.info(f"Listening on: {addr}")
print(f"Listening on: {addr}")
logger.info(f"Peer ID: {host.get_id().pretty()}")
print(f"Peer ID: {host.get_id().pretty()}")
print("\nRun dialer with command:")
if raw_format_flag:
print(f"identify-push-listener-dialer-demo -d {addr} --raw-format")
else:
else:
print(f"identify-push-listener-dialer-demo -d {addr}")
print("\nWaiting for incoming identify/push requests... (Ctrl+C to exit)")
# Keep running until interrupted
try:
await trio.sleep_forever()
except KeyboardInterrupt:
print("\n🛑 Shutting down listener...")
logger.info("Listener interrupted by user")
return
except Exception as e:
logger.error(f"Listener error: {e}")
raise
async def run_dialer(
port: int, destination: str, use_varint_format: bool = True
) -> None:
"""Run a host in dialer mode that connects to a listener.""" """Run a host in dialer mode that connects to a listener."""
print(f"\n==== Starting Identify-Push Dialer on port {port} ====\n") format_name = "length-prefixed" if use_varint_format else "raw protobuf"
print(
f"\n==== Starting Identify-Push Dialer on port {port} "
f"(using {format_name} format) ====\n"
)
# Create key pair for the dialer
key_pair = create_new_key_pair()
@@ -175,9 +265,14 @@ async def run_dialer(port: int, destination: str) -> None:
# Create the dialer host
host = new_host(key_pair=key_pair)
# Set up the identify and identify/push handlers with specified format
host.set_stream_handler(
ID_IDENTIFY, identify_handler_for(host, use_varint_format=use_varint_format)
)
host.set_stream_handler(
ID_IDENTIFY_PUSH,
identify_push_handler_for(host, use_varint_format=use_varint_format),
)
# Start listening on a different port
listen_addr = multiaddr.Multiaddr(f"/ip4/0.0.0.0/tcp/{port}")
@@ -198,7 +293,9 @@ async def run_dialer(port: int, destination: str) -> None:
try:
await host.connect(peer_info)
logger.info("Successfully connected to listener!")
print("Successfully connected to listener!")
print(f" Connected to: {peer_info.peer_id}")
print(f" Full address: {destination}")
# Push identify information to the listener
logger.info("Pushing identify information to listener...")
@@ -206,11 +303,13 @@ async def run_dialer(port: int, destination: str) -> None:
try:
# Call push_identify_to_peer which returns a boolean
success = await push_identify_to_peer(
host, peer_info.peer_id, use_varint_format=use_varint_format
)
if success:
logger.info("Identify push completed successfully!")
print("Identify push completed successfully!")
logger.info("Example completed successfully!")
print("\nExample completed successfully!")
@@ -221,17 +320,57 @@ async def run_dialer(port: int, destination: str) -> None:
logger.warning("Example completed with warnings.")
print("Example completed with warnings.")
except Exception as e:
error_msg = str(e)
logger.error(f"Error during identify push: {error_msg}")
print(f"\nError during identify push: {error_msg}")
# Check for specific format mismatch errors
if (
"Error parsing message" in error_msg
or "DecodeError" in error_msg
or "ParseFromString" in error_msg
):
print("\n" + "=" * 60)
print("FORMAT MISMATCH DETECTED!")
print("=" * 60)
if use_varint_format:
print(
"You are using length-prefixed format (default) but the "
"listener is using raw protobuf format."
)
print(
"\nTo fix this, run the dialer with the --raw-format flag:"
)
print(
f"identify-push-listener-dialer-demo --raw-format -d "
f"{destination}"
)
else:
print("You are using raw protobuf format but the listener")
print("is using length-prefixed format (default).")
print(
"\nTo fix this, run the dialer without the --raw-format "
"flag:"
)
print(f"identify-push-listener-dialer-demo -d {destination}")
print("=" * 60)
logger.error("Example completed with errors.") logger.error("Example completed with errors.")
print("Example completed with errors.") print("Example completed with errors.")
# Continue execution despite the push error # Continue execution despite the push error
except Exception as e: except Exception as e:
logger.error(f"Error during dialer operation: {str(e)}") error_msg = str(e)
print(f"\nError during dialer operation: {str(e)}") if "unable to connect" in error_msg or "SwarmException" in error_msg:
raise print(f"\n❌ Cannot connect to peer: {peer_info.peer_id}")
print(f" Address: {destination}")
print(f" Error: {error_msg}")
print("\n💡 Make sure the peer is running and the address is correct.")
return
else:
logger.error(f"Error during dialer operation: {error_msg}")
print(f"\nError during dialer operation: {error_msg}")
raise
def main() -> None:
@@ -240,34 +379,55 @@ def main() -> None:
This program demonstrates the libp2p identify/push protocol.
Without arguments, it runs as a listener on random port.
With -d parameter, it runs as a dialer on random port.
Port 0 (default) means the OS will automatically assign an available port.
This prevents port conflicts when running multiple instances.
Use --raw-format to send raw protobuf messages (old format) instead of
length-prefixed protobuf messages (new format, default).
""" """
example = (
"/ip4/127.0.0.1/tcp/8000/p2p/QmQn4SwGkDZKkUEpBRBvTmheQycxAHJUNmVEnjA2v1qe8Q"
)
parser = argparse.ArgumentParser(description=description) parser = argparse.ArgumentParser(description=description)
parser.add_argument("-p", "--port", default=0, type=int, help="source port number") parser.add_argument(
"-p",
"--port",
default=0,
type=int,
help="source port number (0 = random available port)",
)
parser.add_argument(
"-d",
"--destination",
type=str,
help="destination multiaddr string",
)
parser.add_argument(
"--raw-format",
action="store_true",
help=(
"use raw protobuf format (old format) instead of "
"length-prefixed (new format)"
),
)
args = parser.parse_args()
# Determine format: raw format if --raw-format is specified, otherwise
# length-prefixed
use_varint_format = not args.raw_format
try:
if args.destination:
# Run in dialer mode with random available port if not specified
trio.run(run_dialer, args.port, args.destination, use_varint_format)
else:
# Run in listener mode with random available port if not specified
trio.run(run_listener, args.port, use_varint_format, args.raw_format)
except KeyboardInterrupt:
print("\n👋 Goodbye!")
logger.info("Application interrupted by user")
except Exception as e:
print(f"\nError: {str(e)}")
logger.error("Error: %s", str(e))
sys.exit(1)

View File

@@ -151,7 +151,10 @@ async def run_node(
host = new_host(key_pair=key_pair)
listen_addr = Multiaddr(f"/ip4/127.0.0.1/tcp/{port}")
async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
# Start the peer-store cleanup task
nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
peer_id = host.get_id().pretty()
addr_str = f"/ip4/127.0.0.1/tcp/{port}/p2p/{peer_id}"
await connect_to_bootstrap_nodes(host, bootstrap_nodes)

View File

@@ -46,7 +46,9 @@ async def run(port: int) -> None:
logger.info("Starting peer Discovery")
host = new_host(key_pair=key_pair, enable_mDNS=True)
async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
# Start the peer-store cleanup task
nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
await trio.sleep_forever() await trio.sleep_forever()


@@ -59,6 +59,9 @@ async def run(port: int, destination: str) -> None:
     host = new_host(listen_addrs=[listen_addr])
     async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
+        # Start the peer-store cleanup task
+        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
         if not destination:
             host.set_stream_handler(PING_PROTOCOL_ID, handle_ping)
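Each example starts the cleanup with `nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)`, i.e. a sweep every 60 seconds. As a rough illustration of what one sweep of such a TTL-based cleanup does, here is a standalone sketch with hypothetical names (this is not the py-libp2p `PeerStore` API):

```python
import time


class AddrBookSketch:
    """Hypothetical TTL address book; illustrative only, not the real PeerStore."""

    def __init__(self) -> None:
        # peer_id -> list of (addr, expiry timestamp)
        self._addrs: dict[str, list[tuple[str, float]]] = {}

    def add_addrs(self, peer_id: str, addrs: list[str], ttl: float) -> None:
        # Each address expires ttl seconds from now
        expiry = time.monotonic() + ttl
        self._addrs.setdefault(peer_id, []).extend((a, expiry) for a in addrs)

    def cleanup(self) -> None:
        """One sweep of the periodic cleanup task: drop expired addresses."""
        now = time.monotonic()
        for peer_id in list(self._addrs):
            live = [(a, exp) for a, exp in self._addrs[peer_id] if exp > now]
            if live:
                self._addrs[peer_id] = live
            else:
                # No live addresses left for this peer; forget it entirely
                del self._addrs[peer_id]
```

Running this periodically in a nursery task is what keeps stale bootstrap and identify addresses from accumulating.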


@@ -144,6 +144,9 @@ async def run(topic: str, destination: str | None, port: int | None) -> None:
     pubsub = Pubsub(host, gossipsub)
     termination_event = trio.Event()  # Event to signal termination
     async with host.run(listen_addrs=[listen_addr]), trio.open_nursery() as nursery:
+        # Start the peer-store cleanup task
+        nursery.start_soon(host.get_peerstore().start_cleanup_task, 60)
         logger.info(f"Node started with peer ID: {host.get_id()}")
         logger.info(f"Listening on: {listen_addr}")
         logger.info("Initializing PubSub and GossipSub...")


@@ -251,6 +251,7 @@ def new_host(
     muxer_preference: Literal["YAMUX", "MPLEX"] | None = None,
     listen_addrs: Sequence[multiaddr.Multiaddr] | None = None,
     enable_mDNS: bool = False,
+    bootstrap: list[str] | None = None,
     negotiate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
 ) -> IHost:
     """
@@ -264,6 +265,7 @@ def new_host(
     :param muxer_preference: optional explicit muxer preference
     :param listen_addrs: optional list of multiaddrs to listen on
     :param enable_mDNS: whether to enable mDNS discovery
+    :param bootstrap: optional list of bootstrap peer addresses as strings
     :return: return a host instance
     """
     swarm = new_swarm(
@@ -276,7 +278,7 @@ def new_host(
     )
     if disc_opt is not None:
-        return RoutedHost(swarm, disc_opt, enable_mDNS)
+        return RoutedHost(swarm, disc_opt, enable_mDNS, bootstrap)
-    return BasicHost(network=swarm, enable_mDNS=enable_mDNS, negotitate_timeout=negotiate_timeout)
+    return BasicHost(network=swarm, enable_mDNS=enable_mDNS, bootstrap=bootstrap, negotitate_timeout=negotiate_timeout)


 __version__ = __version("libp2p")

File diff suppressed because it is too large


@@ -0,0 +1,5 @@
"""Bootstrap peer discovery module for py-libp2p."""

from .bootstrap import BootstrapDiscovery

__all__ = ["BootstrapDiscovery"]


@@ -0,0 +1,94 @@
import logging
from multiaddr import Multiaddr
from multiaddr.resolvers import DNSResolver
from libp2p.abc import ID, INetworkService, PeerInfo
from libp2p.discovery.bootstrap.utils import validate_bootstrap_addresses
from libp2p.discovery.events.peerDiscovery import peerDiscovery
from libp2p.peer.peerinfo import info_from_p2p_addr
logger = logging.getLogger("libp2p.discovery.bootstrap")
resolver = DNSResolver()
class BootstrapDiscovery:
"""
Bootstrap-based peer discovery for py-libp2p.
Connects to predefined bootstrap peers and adds them to peerstore.
"""
def __init__(self, swarm: INetworkService, bootstrap_addrs: list[str]):
self.swarm = swarm
self.peerstore = swarm.peerstore
self.bootstrap_addrs = bootstrap_addrs or []
self.discovered_peers: set[str] = set()
async def start(self) -> None:
"""Process bootstrap addresses and emit peer discovery events."""
logger.debug(
f"Starting bootstrap discovery with "
f"{len(self.bootstrap_addrs)} bootstrap addresses"
)
# Validate and filter bootstrap addresses
self.bootstrap_addrs = validate_bootstrap_addresses(self.bootstrap_addrs)
for addr_str in self.bootstrap_addrs:
try:
await self._process_bootstrap_addr(addr_str)
except Exception as e:
logger.debug(f"Failed to process bootstrap address {addr_str}: {e}")
def stop(self) -> None:
"""Clean up bootstrap discovery resources."""
logger.debug("Stopping bootstrap discovery")
self.discovered_peers.clear()
async def _process_bootstrap_addr(self, addr_str: str) -> None:
"""Convert string address to PeerInfo and add to peerstore."""
try:
multiaddr = Multiaddr(addr_str)
except Exception as e:
logger.debug(f"Invalid multiaddr format '{addr_str}': {e}")
return
if self.is_dns_addr(multiaddr):
resolved_addrs = await resolver.resolve(multiaddr)
peer_id_str = multiaddr.get_peer_id()
if peer_id_str is None:
logger.warning(f"Missing peer ID in DNS address: {addr_str}")
return
peer_id = ID.from_base58(peer_id_str)
addrs = [addr for addr in resolved_addrs]
if not addrs:
logger.warning(f"No addresses resolved for DNS address: {addr_str}")
return
peer_info = PeerInfo(peer_id, addrs)
self.add_addr(peer_info)
else:
self.add_addr(info_from_p2p_addr(multiaddr))
def is_dns_addr(self, addr: Multiaddr) -> bool:
"""Check if the address is a DNS address."""
return any(protocol.name == "dnsaddr" for protocol in addr.protocols())
def add_addr(self, peer_info: PeerInfo) -> None:
"""Add a peer to the peerstore and emit discovery event."""
# Skip if it's our own peer
if peer_info.peer_id == self.swarm.get_peer_id():
logger.debug(f"Skipping own peer ID: {peer_info.peer_id}")
return
# Always add addresses to peerstore (allows multiple addresses for same peer)
self.peerstore.add_addrs(peer_info.peer_id, peer_info.addrs, 10)
# Only emit discovery event if this is the first time we see this peer
peer_id_str = str(peer_info.peer_id)
if peer_id_str not in self.discovered_peers:
# Track discovered peer
self.discovered_peers.add(peer_id_str)
# Emit peer discovery event
peerDiscovery.emit_peer_discovered(peer_info)
logger.debug(f"Peer discovered: {peer_info.peer_id}")
else:
logger.debug(f"Additional addresses added for peer: {peer_info.peer_id}")
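The `add_addr` method above always records addresses but emits a discovery event only the first time a peer is seen. That gating pattern in isolation looks like this (an illustrative sketch with hypothetical names, not the real event bus):

```python
class FirstSeenTracker:
    """Sketch of BootstrapDiscovery's first-seen gating; illustrative only."""

    def __init__(self) -> None:
        self.discovered: set[str] = set()
        self.addrs: dict[str, list[str]] = {}
        self.events: list[str] = []

    def add_addr(self, peer_id: str, addr: str) -> None:
        # Always record the address; a peer may legitimately have several
        self.addrs.setdefault(peer_id, []).append(addr)
        # Emit a discovery event only on first sight of this peer
        if peer_id not in self.discovered:
            self.discovered.add(peer_id)
            self.events.append(peer_id)  # stands in for emit_peer_discovered
```

Without this gate, a peer listed under several bootstrap addresses would fire duplicate discovery events.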


@@ -0,0 +1,51 @@
"""Utility functions for bootstrap discovery."""
import logging
from multiaddr import Multiaddr
from libp2p.peer.peerinfo import InvalidAddrError, PeerInfo, info_from_p2p_addr
logger = logging.getLogger("libp2p.discovery.bootstrap.utils")
def validate_bootstrap_addresses(addrs: list[str]) -> list[str]:
"""
Validate and filter bootstrap addresses.
:param addrs: List of bootstrap address strings
:return: List of valid bootstrap addresses
"""
valid_addrs = []
for addr_str in addrs:
try:
# Try to parse as multiaddr
multiaddr = Multiaddr(addr_str)
# Try to extract peer info (this validates the p2p component)
info_from_p2p_addr(multiaddr)
valid_addrs.append(addr_str)
logger.debug(f"Valid bootstrap address: {addr_str}")
except (InvalidAddrError, ValueError, Exception) as e:
logger.warning(f"Invalid bootstrap address '{addr_str}': {e}")
continue
return valid_addrs
def parse_bootstrap_peer_info(addr_str: str) -> PeerInfo | None:
"""
Parse bootstrap address string into PeerInfo.
:param addr_str: Bootstrap address string
:return: PeerInfo object or None if parsing fails
"""
try:
multiaddr = Multiaddr(addr_str)
return info_from_p2p_addr(multiaddr)
except Exception as e:
logger.error(f"Failed to parse bootstrap address '{addr_str}': {e}")
return None


@@ -29,6 +29,7 @@ from libp2p.custom_types import (
     StreamHandlerFn,
     TProtocol,
 )
+from libp2p.discovery.bootstrap.bootstrap import BootstrapDiscovery
 from libp2p.discovery.mdns.mdns import MDNSDiscovery
 from libp2p.host.defaults import (
     get_default_protocols,
@@ -92,6 +93,7 @@ class BasicHost(IHost):
         self,
         network: INetworkService,
         enable_mDNS: bool = False,
+        bootstrap: list[str] | None = None,
         default_protocols: Optional["OrderedDict[TProtocol, StreamHandlerFn]"] = None,
         negotitate_timeout: int = DEFAULT_NEGOTIATE_TIMEOUT,
     ) -> None:
@@ -105,6 +107,8 @@ class BasicHost(IHost):
         self.multiselect_client = MultiselectClient()
         if enable_mDNS:
             self.mDNS = MDNSDiscovery(network)
+        if bootstrap:
+            self.bootstrap = BootstrapDiscovery(network, bootstrap)

     def get_id(self) -> ID:
         """
@@ -172,11 +176,16 @@ class BasicHost(IHost):
             if hasattr(self, "mDNS") and self.mDNS is not None:
                 logger.debug("Starting mDNS Discovery")
                 self.mDNS.start()
+            if hasattr(self, "bootstrap") and self.bootstrap is not None:
+                logger.debug("Starting Bootstrap Discovery")
+                await self.bootstrap.start()
             try:
                 yield
             finally:
                 if hasattr(self, "mDNS") and self.mDNS is not None:
                     self.mDNS.stop()
+                if hasattr(self, "bootstrap") and self.bootstrap is not None:
+                    self.bootstrap.stop()

         return _run()


@@ -26,5 +26,8 @@ if TYPE_CHECKING:
 def get_default_protocols(host: IHost) -> "OrderedDict[TProtocol, StreamHandlerFn]":
     return OrderedDict(
-        ((IdentifyID, identify_handler_for(host)), (PingID, handle_ping))
+        (
+            (IdentifyID, identify_handler_for(host, use_varint_format=True)),
+            (PingID, handle_ping),
+        )
     )


@@ -19,9 +19,13 @@ class RoutedHost(BasicHost):
     _router: IPeerRouting

     def __init__(
-        self, network: INetworkService, router: IPeerRouting, enable_mDNS: bool = False
+        self,
+        network: INetworkService,
+        router: IPeerRouting,
+        enable_mDNS: bool = False,
+        bootstrap: list[str] | None = None,
     ):
-        super().__init__(network, enable_mDNS)
+        super().__init__(network, enable_mDNS, bootstrap)
         self._router = router

     async def connect(self, peer_info: PeerInfo) -> None:


@@ -15,8 +15,12 @@ from libp2p.custom_types import (
 from libp2p.network.stream.exceptions import (
     StreamClosed,
 )
+from libp2p.peer.envelope import seal_record
+from libp2p.peer.peer_record import PeerRecord
 from libp2p.utils import (
+    decode_varint_with_size,
     get_agent_version,
+    varint,
 )

 from .pb.identify_pb2 import (
@@ -59,7 +63,12 @@ def _mk_identify_protobuf(
 ) -> Identify:
     public_key = host.get_public_key()
     laddrs = host.get_addrs()
-    protocols = host.get_mux().get_protocols()
+    protocols = tuple(str(p) for p in host.get_mux().get_protocols() if p is not None)
+
+    # Create a signed peer-record for the remote peer
+    record = PeerRecord(host.get_id(), host.get_addrs())
+    envelope = seal_record(record, host.get_private_key())
+    protobuf = envelope.marshal_envelope()

     observed_addr = observed_multiaddr.to_bytes() if observed_multiaddr else b""
     return Identify(
@@ -69,10 +78,51 @@ def _mk_identify_protobuf(
         listen_addrs=map(_multiaddr_to_bytes, laddrs),
         observed_addr=observed_addr,
         protocols=protocols,
+        signedPeerRecord=protobuf,
     )


+def parse_identify_response(response: bytes) -> Identify:
+    """
+    Parse identify response that could be either:
+
+    - Old format: raw protobuf
+    - New format: length-prefixed protobuf
+
+    This function provides backward and forward compatibility.
+    """
+    # Try new format first: length-prefixed protobuf
+    if len(response) >= 1:
+        length, varint_size = decode_varint_with_size(response)
+        if varint_size > 0 and length > 0 and varint_size + length <= len(response):
+            protobuf_data = response[varint_size : varint_size + length]
+            try:
+                identify_response = Identify()
+                identify_response.ParseFromString(protobuf_data)
+                # Sanity check: must have agent_version (protocol_version is optional)
+                if identify_response.agent_version:
+                    logger.debug(
+                        "Parsed length-prefixed identify response (new format)"
+                    )
+                    return identify_response
+            except Exception:
+                pass  # Fall through to old format
+
+    # Fall back to old format: raw protobuf
+    try:
+        identify_response = Identify()
+        identify_response.ParseFromString(response)
+        logger.debug("Parsed raw protobuf identify response (old format)")
+        return identify_response
+    except Exception as e:
+        logger.error(f"Failed to parse identify response: {e}")
+        logger.error(f"Response length: {len(response)}")
+        logger.error(f"Response hex: {response.hex()}")
+        raise
+
+
-def identify_handler_for(host: IHost) -> StreamHandlerFn:
+def identify_handler_for(
+    host: IHost, use_varint_format: bool = True
+) -> StreamHandlerFn:
     async def handle_identify(stream: INetStream) -> None:
         # get observed address from ``stream``
         peer_id = (
@@ -100,7 +150,21 @@ def identify_handler_for(host: IHost) -> StreamHandlerFn:
         response = protobuf.SerializeToString()

         try:
-            await stream.write(response)
+            if use_varint_format:
+                # Send length-prefixed protobuf message (new format)
+                await stream.write(varint.encode_uvarint(len(response)))
+                await stream.write(response)
+                logger.debug(
+                    "Sent new format (length-prefixed) identify response to %s",
+                    peer_id,
+                )
+            else:
+                # Send raw protobuf message (old format for backward compatibility)
+                await stream.write(response)
+                logger.debug(
+                    "Sent old format (raw protobuf) identify response to %s",
+                    peer_id,
+                )
         except StreamClosed:
             logger.debug("Fail to respond to %s request: stream closed", ID)
         else:
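Both the handler above and `parse_identify_response` depend on unsigned-varint framing. A minimal sketch of the two helpers (the real ones live in `libp2p.utils`; these are illustrative reimplementations of the standard LEB128-style encoding):

```python
def encode_uvarint(value: int) -> bytes:
    """Encode a non-negative int as an unsigned varint (7 bits per byte)."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)


def decode_varint_with_size(data: bytes) -> tuple[int, int]:
    """Return (value, bytes consumed), or (0, 0) if the varint is incomplete."""
    value = 0
    shift = 0
    for i, byte in enumerate(data):
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, i + 1
        shift += 7
    return 0, 0
```

The `varint_size + length <= len(response)` check in `parse_identify_response` is what lets it reject buffers whose leading bytes merely look like a plausible prefix.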


@@ -9,4 +9,5 @@ message Identify {
   repeated bytes listen_addrs = 2;
   optional bytes observed_addr = 4;
   repeated string protocols = 3;
+  optional bytes signedPeerRecord = 8;
 }


@@ -1,11 +1,12 @@
 # -*- coding: utf-8 -*-
 # Generated by the protocol buffer compiler. DO NOT EDIT!
 # source: libp2p/identity/identify/pb/identify.proto
+# Protobuf Python Version: 4.25.3
 """Generated protocol buffer code."""
-from google.protobuf.internal import builder as _builder
 from google.protobuf import descriptor as _descriptor
 from google.protobuf import descriptor_pool as _descriptor_pool
 from google.protobuf import symbol_database as _symbol_database
+from google.protobuf.internal import builder as _builder
 # @@protoc_insertion_point(imports)

 _sym_db = _symbol_database.Default()
@@ -13,13 +14,13 @@ _sym_db = _symbol_database.Default()
-DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\x8f\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t')
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*libp2p/identity/identify/pb/identify.proto\x12\x0bidentify.pb\"\xa9\x01\n\x08Identify\x12\x18\n\x10protocol_version\x18\x05 \x01(\t\x12\x15\n\ragent_version\x18\x06 \x01(\t\x12\x12\n\npublic_key\x18\x01 \x01(\x0c\x12\x14\n\x0clisten_addrs\x18\x02 \x03(\x0c\x12\x15\n\robserved_addr\x18\x04 \x01(\x0c\x12\x11\n\tprotocols\x18\x03 \x03(\t\x12\x18\n\x10signedPeerRecord\x18\x08 \x01(\x0c')
-_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
-_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.identity.identify.pb.identify_pb2', globals())
+_globals = globals()
+_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
+_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.identity.identify.pb.identify_pb2', _globals)
 if _descriptor._USE_C_DESCRIPTORS == False:
   DESCRIPTOR._options = None
-  _IDENTIFY._serialized_start=60
-  _IDENTIFY._serialized_end=203
+  _globals['_IDENTIFY']._serialized_start=60
+  _globals['_IDENTIFY']._serialized_end=229
 # @@protoc_insertion_point(module_scope)


@@ -1,46 +1,24 @@
-"""
-@generated by mypy-protobuf. Do not edit manually!
-isort:skip_file
-"""
-import builtins
-import collections.abc
-import google.protobuf.descriptor
-import google.protobuf.internal.containers
-import google.protobuf.message
-import typing
-
-DESCRIPTOR: google.protobuf.descriptor.FileDescriptor
-
-@typing.final
-class Identify(google.protobuf.message.Message):
-    DESCRIPTOR: google.protobuf.descriptor.Descriptor
-
-    PROTOCOL_VERSION_FIELD_NUMBER: builtins.int
-    AGENT_VERSION_FIELD_NUMBER: builtins.int
-    PUBLIC_KEY_FIELD_NUMBER: builtins.int
-    LISTEN_ADDRS_FIELD_NUMBER: builtins.int
-    OBSERVED_ADDR_FIELD_NUMBER: builtins.int
-    PROTOCOLS_FIELD_NUMBER: builtins.int
-    protocol_version: builtins.str
-    agent_version: builtins.str
-    public_key: builtins.bytes
-    observed_addr: builtins.bytes
-    @property
-    def listen_addrs(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.bytes]: ...
-    @property
-    def protocols(self) -> google.protobuf.internal.containers.RepeatedScalarFieldContainer[builtins.str]: ...
-    def __init__(
-        self,
-        *,
-        protocol_version: builtins.str | None = ...,
-        agent_version: builtins.str | None = ...,
-        public_key: builtins.bytes | None = ...,
-        listen_addrs: collections.abc.Iterable[builtins.bytes] | None = ...,
-        observed_addr: builtins.bytes | None = ...,
-        protocols: collections.abc.Iterable[builtins.str] | None = ...,
-    ) -> None: ...
-    def HasField(self, field_name: typing.Literal["agent_version", b"agent_version", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "public_key", b"public_key"]) -> builtins.bool: ...
-    def ClearField(self, field_name: typing.Literal["agent_version", b"agent_version", "listen_addrs", b"listen_addrs", "observed_addr", b"observed_addr", "protocol_version", b"protocol_version", "protocols", b"protocols", "public_key", b"public_key"]) -> None: ...
-
-global___Identify = Identify
+from google.protobuf.internal import containers as _containers
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import message as _message
+from typing import ClassVar as _ClassVar, Iterable as _Iterable, Optional as _Optional
+
+DESCRIPTOR: _descriptor.FileDescriptor
+
+class Identify(_message.Message):
+    __slots__ = ("protocol_version", "agent_version", "public_key", "listen_addrs", "observed_addr", "protocols", "signedPeerRecord")
+    PROTOCOL_VERSION_FIELD_NUMBER: _ClassVar[int]
+    AGENT_VERSION_FIELD_NUMBER: _ClassVar[int]
+    PUBLIC_KEY_FIELD_NUMBER: _ClassVar[int]
+    LISTEN_ADDRS_FIELD_NUMBER: _ClassVar[int]
+    OBSERVED_ADDR_FIELD_NUMBER: _ClassVar[int]
+    PROTOCOLS_FIELD_NUMBER: _ClassVar[int]
+    SIGNEDPEERRECORD_FIELD_NUMBER: _ClassVar[int]
+    protocol_version: str
+    agent_version: str
+    public_key: bytes
+    listen_addrs: _containers.RepeatedScalarFieldContainer[bytes]
+    observed_addr: bytes
+    protocols: _containers.RepeatedScalarFieldContainer[str]
+    signedPeerRecord: bytes
+    def __init__(self, protocol_version: _Optional[str] = ..., agent_version: _Optional[str] = ..., public_key: _Optional[bytes] = ..., listen_addrs: _Optional[_Iterable[bytes]] = ..., observed_addr: _Optional[bytes] = ..., protocols: _Optional[_Iterable[str]] = ..., signedPeerRecord: _Optional[bytes] = ...) -> None: ...


@@ -20,11 +20,16 @@ from libp2p.custom_types import (
 from libp2p.network.stream.exceptions import (
     StreamClosed,
 )
+from libp2p.peer.envelope import consume_envelope
 from libp2p.peer.id import (
     ID,
 )
 from libp2p.utils import (
     get_agent_version,
+    varint,
+)
+from libp2p.utils.varint import (
+    read_length_prefixed_protobuf,
 )

 from ..identify.identify import (
@@ -43,20 +48,28 @@ AGENT_VERSION = get_agent_version()
 CONCURRENCY_LIMIT = 10


-def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
+def identify_push_handler_for(
+    host: IHost, use_varint_format: bool = True
+) -> StreamHandlerFn:
     """
     Create a handler for the identify/push protocol.

     This handler receives pushed identify messages from remote peers and updates
     the local peerstore with the new information.
+
+    Args:
+        host: The libp2p host.
+        use_varint_format: True=length-prefixed, False=raw protobuf.
+
     """

     async def handle_identify_push(stream: INetStream) -> None:
         peer_id = stream.muxed_conn.peer_id

         try:
-            # Read the identify message from the stream
-            data = await stream.read()
+            # Use the utility function to read the protobuf message
+            data = await read_length_prefixed_protobuf(stream, use_varint_format)

             identify_msg = Identify()
             identify_msg.ParseFromString(data)
@@ -66,6 +79,11 @@ def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
             )
             logger.debug("Successfully processed identify/push from peer %s", peer_id)

+            # Send acknowledgment to indicate successful processing
+            # This ensures the sender knows the message was received before closing
+            await stream.write(b"OK")
+
         except StreamClosed:
             logger.debug(
                 "Stream closed while processing identify/push from %s", peer_id
@@ -74,7 +92,10 @@ def identify_push_handler_for(host: IHost) -> StreamHandlerFn:
             logger.error("Error processing identify/push from %s: %s", peer_id, e)
         finally:
             # Close the stream after processing
-            await stream.close()
+            try:
+                await stream.close()
+            except Exception:
+                pass  # Ignore errors when closing

     return handle_identify_push
@@ -120,6 +141,19 @@ async def _update_peerstore_from_identify(
         except Exception as e:
             logger.error("Error updating protocols for peer %s: %s", peer_id, e)

+    if identify_msg.HasField("signedPeerRecord"):
+        try:
+            # Convert the signed peer-record (Envelope) from protobuf bytes
+            envelope, _ = consume_envelope(
+                identify_msg.signedPeerRecord, "libp2p-peer-record"
+            )
+            # Use a default TTL of 2 hours (7200 seconds)
+            if not peerstore.consume_peer_record(envelope, 7200):
+                logger.error("Updating Certified-Addr-Book was unsuccessful")
+        except Exception as e:
+            logger.error(
+                "Error updating the certified addr book for peer %s: %s", peer_id, e
+            )
+
     # Update observed address if present
     if identify_msg.HasField("observed_addr") and identify_msg.observed_addr:
         try:
@@ -137,6 +171,7 @@ async def push_identify_to_peer(
     peer_id: ID,
     observed_multiaddr: Multiaddr | None = None,
     limit: trio.Semaphore = trio.Semaphore(CONCURRENCY_LIMIT),
+    use_varint_format: bool = True,
 ) -> bool:
     """
     Push an identify message to a specific peer.
@@ -144,10 +179,15 @@ async def push_identify_to_peer(
     This function opens a stream to the peer using the identify/push protocol,
     sends the identify message, and closes the stream.

-    Returns
-    -------
-    bool
-        True if the push was successful, False otherwise.
+    Args:
+        host: The libp2p host.
+        peer_id: The peer ID to push to.
+        observed_multiaddr: The observed multiaddress (optional).
+        limit: Semaphore for concurrency control.
+        use_varint_format: True=length-prefixed, False=raw protobuf.
+
+    Returns:
+        bool: True if the push was successful, False otherwise.
     """

     async with limit:
@@ -159,10 +199,28 @@ async def push_identify_to_peer(
         identify_msg = _mk_identify_protobuf(host, observed_multiaddr)
         response = identify_msg.SerializeToString()

-        # Send the identify message
-        await stream.write(response)
+        if use_varint_format:
+            # Send length-prefixed identify message
+            await stream.write(varint.encode_uvarint(len(response)))
+            await stream.write(response)
+        else:
+            # Send raw protobuf message
+            await stream.write(response)

-        # Close the stream
+        # Wait for acknowledgment from the receiver with timeout
+        # This ensures the message was processed before closing
+        try:
+            with trio.move_on_after(1.0):  # 1 second timeout
+                ack = await stream.read(2)  # Read "OK" acknowledgment
+                if ack != b"OK":
+                    logger.warning(
+                        "Unexpected acknowledgment from peer %s: %s", peer_id, ack
+                    )
+        except Exception as e:
+            logger.debug("No acknowledgment received from peer %s: %s", peer_id, e)
+            # Continue anyway, as the message might have been processed
+
+        # Close the stream after acknowledgment (or timeout)
         await stream.close()

         logger.debug("Successfully pushed identify to peer %s", peer_id)
@@ -176,18 +234,36 @@ async def push_identify_to_peers(
     host: IHost,
     peer_ids: set[ID] | None = None,
     observed_multiaddr: Multiaddr | None = None,
+    use_varint_format: bool = True,
 ) -> None:
     """
     Push an identify message to multiple peers in parallel.

     If peer_ids is None, push to all connected peers.
+
+    Args:
+        host: The libp2p host.
+        peer_ids: Set of peer IDs to push to (if None, push to all connected peers).
+        observed_multiaddr: The observed multiaddress (optional).
+        use_varint_format: True=length-prefixed, False=raw protobuf.
+
     """
     if peer_ids is None:
         # Get all connected peers
         peer_ids = set(host.get_connected_peers())

+    # Create a single shared semaphore for concurrency control
+    limit = trio.Semaphore(CONCURRENCY_LIMIT)
+
     # Push to each peer in parallel using a trio.Nursery
-    # limiting concurrent connections to 10
+    # limiting concurrent connections to CONCURRENCY_LIMIT
     async with trio.open_nursery() as nursery:
         for peer_id in peer_ids:
-            nursery.start_soon(push_identify_to_peer, host, peer_id, observed_multiaddr)
+            nursery.start_soon(
+                push_identify_to_peer,
+                host,
+                peer_id,
+                observed_multiaddr,
+                limit,
+                use_varint_format,
+            )
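`read_length_prefixed_protobuf` reads the varint length one byte at a time and then the payload. A synchronous sketch over an in-memory stream (the real helper operates on a trio `INetStream`; the name and shape here are assumptions for illustration):

```python
from io import BytesIO


def read_length_prefixed(stream: BytesIO) -> bytes:
    """Read an unsigned-varint length prefix, then that many payload bytes."""
    value = 0
    shift = 0
    while True:
        b = stream.read(1)
        if not b:
            raise EOFError("stream ended inside varint length prefix")
        value |= (b[0] & 0x7F) << shift
        if not b[0] & 0x80:  # high bit clear: last varint byte
            break
        shift += 7
    payload = stream.read(value)
    if len(payload) != value:
        raise EOFError("stream ended before full payload was read")
    return payload
```

Reading byte-by-byte matters because the prefix has no fixed width; only the high bit of each byte says whether another length byte follows.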

libp2p/peer/envelope.py (new file, 271 lines)

@@ -0,0 +1,271 @@
from typing import Any, cast
from libp2p.crypto.ed25519 import Ed25519PublicKey
from libp2p.crypto.keys import PrivateKey, PublicKey
from libp2p.crypto.rsa import RSAPublicKey
from libp2p.crypto.secp256k1 import Secp256k1PublicKey
import libp2p.peer.pb.crypto_pb2 as cryto_pb
import libp2p.peer.pb.envelope_pb2 as pb
import libp2p.peer.pb.peer_record_pb2 as record_pb
from libp2p.peer.peer_record import (
PeerRecord,
peer_record_from_protobuf,
unmarshal_record,
)
from libp2p.utils.varint import encode_uvarint
ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_CODEC = b"\x03\x01"
class Envelope:
"""
A signed wrapper around a serialized libp2p record.
Envelopes are cryptographically signed by the author's private key
and are scoped to a specific 'domain' to prevent cross-protocol replay.
Attributes:
public_key: The public key that can verify the envelope's signature.
payload_type: A multicodec code identifying the type of payload inside.
raw_payload: The raw serialized record data.
signature: Signature over the domain-scoped payload content.
"""
public_key: PublicKey
payload_type: bytes
raw_payload: bytes
signature: bytes
_cached_record: PeerRecord | None = None
_unmarshal_error: Exception | None = None
def __init__(
self,
public_key: PublicKey,
payload_type: bytes,
raw_payload: bytes,
signature: bytes,
):
self.public_key = public_key
self.payload_type = payload_type
self.raw_payload = raw_payload
self.signature = signature
def marshal_envelope(self) -> bytes:
"""
Serialize this Envelope into its protobuf wire format.
Converts all envelope fields into a `pb.Envelope` protobuf message
and returns the serialized bytes.
:return: Serialized envelope as bytes.
"""
pb_env = pb.Envelope(
public_key=pub_key_to_protobuf(self.public_key),
payload_type=self.payload_type,
payload=self.raw_payload,
signature=self.signature,
)
return pb_env.SerializeToString()
def validate(self, domain: str) -> None:
"""
Verify the envelope's signature within the given domain scope.
This ensures that the envelope has not been tampered with
and was signed under the correct usage context.
:param domain: Domain string that contextualizes the signature.
:raises ValueError: If the signature is invalid.
"""
unsigned = make_unsigned(domain, self.payload_type, self.raw_payload)
if not self.public_key.verify(unsigned, self.signature):
raise ValueError("Invalid envelope signature")
def record(self) -> PeerRecord:
"""
Lazily decode and return the embedded PeerRecord.
This method unmarshals the payload bytes into a `PeerRecord` instance,
using the registered codec to identify the type. The decoded result
is cached for future use.
:return: Decoded PeerRecord object.
:raises Exception: If decoding fails or payload type is unsupported.
"""
if self._cached_record is not None:
return self._cached_record
try:
if self.payload_type != PEER_RECORD_CODEC:
raise ValueError("Unsupported payload type in envelope")
msg = record_pb.PeerRecord()
msg.ParseFromString(self.raw_payload)
self._cached_record = peer_record_from_protobuf(msg)
return self._cached_record
except Exception as e:
self._unmarshal_error = e
raise
def equal(self, other: Any) -> bool:
"""
Compare this Envelope with another for structural equality.
Two envelopes are considered equal if:
- They have the same public key
- The payload type and payload bytes match
- Their signatures are identical
:param other: Another object to compare.
:return: True if equal, False otherwise.
"""
if isinstance(other, Envelope):
return (
self.public_key == other.public_key
and self.payload_type == other.payload_type
and self.signature == other.signature
and self.raw_payload == other.raw_payload
)
return False
def pub_key_to_protobuf(pub_key: PublicKey) -> cryto_pb.PublicKey:
"""
Convert a Python PublicKey object to its protobuf equivalent.
:param pub_key: A libp2p-compatible PublicKey instance.
:return: Serialized protobuf PublicKey message.
"""
internal_key_type = pub_key.get_type()
key_type = cast(cryto_pb.KeyType, internal_key_type.value)
data = pub_key.to_bytes()
protobuf_key = cryto_pb.PublicKey(Type=key_type, Data=data)
return protobuf_key
def pub_key_from_protobuf(pb_key: cryto_pb.PublicKey) -> PublicKey:
"""
Parse a protobuf PublicKey message into a native libp2p PublicKey.
Supports Ed25519, RSA, and Secp256k1 key types.
:param pb_key: Protobuf representation of a public key.
:return: Parsed PublicKey object.
:raises ValueError: If the key type is unrecognized.
"""
if pb_key.Type == cryto_pb.KeyType.Ed25519:
return Ed25519PublicKey.from_bytes(pb_key.Data)
elif pb_key.Type == cryto_pb.KeyType.RSA:
return RSAPublicKey.from_bytes(pb_key.Data)
elif pb_key.Type == cryto_pb.KeyType.Secp256k1:
return Secp256k1PublicKey.from_bytes(pb_key.Data)
# libp2p.crypto.ecdsa not implemented
else:
raise ValueError(f"Unknown key type: {pb_key.Type}")
def seal_record(record: PeerRecord, private_key: PrivateKey) -> Envelope:
"""
Create and sign a new Envelope from a PeerRecord.
The record is serialized and signed in the scope of its domain and codec.
The result is a self-contained, verifiable Envelope.
:param record: A PeerRecord to encapsulate.
:param private_key: The signer's private key.
:return: A signed Envelope instance.
"""
payload = record.marshal_record()
unsigned = make_unsigned(record.domain(), record.codec(), payload)
signature = private_key.sign(unsigned)
return Envelope(
public_key=private_key.get_public_key(),
payload_type=record.codec(),
raw_payload=payload,
signature=signature,
)
def consume_envelope(data: bytes, domain: str) -> tuple[Envelope, PeerRecord]:
"""
Parse, validate, and decode an Envelope from bytes.
Validates the envelope's signature using the given domain and decodes
the inner payload into a PeerRecord.
:param data: Serialized envelope bytes.
:param domain: Domain string to verify signature against.
:return: Tuple of (Envelope, PeerRecord).
:raises ValueError: If signature validation or decoding fails.
"""
env = unmarshal_envelope(data)
env.validate(domain)
record = env.record()
return env, record
def unmarshal_envelope(data: bytes) -> Envelope:
"""
Deserialize an Envelope from its wire format.
This parses the protobuf fields without verifying the signature.
:param data: Serialized envelope bytes.
:return: Parsed Envelope object.
:raises DecodeError: If protobuf parsing fails.
"""
pb_env = pb.Envelope()
pb_env.ParseFromString(data)
pk = pub_key_from_protobuf(pb_env.public_key)
return Envelope(
public_key=pk,
payload_type=pb_env.payload_type,
raw_payload=pb_env.payload,
signature=pb_env.signature,
)
def make_unsigned(domain: str, payload_type: bytes, payload: bytes) -> bytes:
"""
Build a byte buffer to be signed for an Envelope.
The unsigned byte structure is:
varint(len(domain)) || domain ||
varint(len(payload_type)) || payload_type ||
varint(len(payload)) || payload
This is the exact input used during signing and verification.
:param domain: Domain string for signature scoping.
:param payload_type: Identifier for the type of payload.
:param payload: Raw serialized payload bytes.
:return: Byte buffer to be signed or verified.
"""
fields = [domain.encode(), payload_type, payload]
buf = bytearray()
for field in fields:
buf.extend(encode_uvarint(len(field)))
buf.extend(field)
return bytes(buf)
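The buffer layout above can be checked end to end with a self-contained sketch. `encode_uvarint` is assumed here to be a standard unsigned LEB128 varint (7 bits per byte, continuation bit on every byte but the last), which is the multiformats convention libp2p uses:

```python
def encode_uvarint(value: int) -> bytes:
    # Unsigned LEB128: 7 payload bits per byte, MSB set on all but the last.
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def make_unsigned(domain: str, payload_type: bytes, payload: bytes) -> bytes:
    buf = bytearray()
    for field in (domain.encode(), payload_type, payload):
        buf.extend(encode_uvarint(len(field)))
        buf.extend(field)
    return bytes(buf)

# "libp2p-peer-record" is 18 bytes, so the buffer opens with 0x12.
buf = make_unsigned("libp2p-peer-record", b"\x03\x01", b"payload")
```

For fields shorter than 128 bytes the varint is a single byte, so the buffer is simply `len || field` three times in a row.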
def debug_dump_envelope(env: Envelope) -> None:
print("\n=== Envelope ===")
print(f"Payload Type: {env.payload_type!r}")
print(f"Signature: {env.signature.hex()} ({len(env.signature)} bytes)")
print(f"Raw Payload: {env.raw_payload.hex()} ({len(env.raw_payload)} bytes)")
try:
peer_record = unmarshal_record(env.raw_payload)
print("\n=== Parsed PeerRecord ===")
print(peer_record)
except Exception as e:
print("Failed to parse PeerRecord:", e)

View File

@@ -1,3 +1,4 @@
+import functools
 import hashlib
 import base58
@@ -36,25 +37,23 @@ if ENABLE_INLINING:
 class ID:
     _bytes: bytes
-    _xor_id: int | None = None
-    _b58_str: str | None = None

     def __init__(self, peer_id_bytes: bytes) -> None:
         self._bytes = peer_id_bytes

-    @property
+    @functools.cached_property
     def xor_id(self) -> int:
-        if not self._xor_id:
-            self._xor_id = int(sha256_digest(self._bytes).hex(), 16)
-        return self._xor_id
+        return int(sha256_digest(self._bytes).hex(), 16)

+    @functools.cached_property
+    def base58(self) -> str:
+        return base58.b58encode(self._bytes).decode()

     def to_bytes(self) -> bytes:
         return self._bytes

     def to_base58(self) -> str:
-        if not self._b58_str:
-            self._b58_str = base58.b58encode(self._bytes).decode()
-        return self._b58_str
+        return self.base58

     def __repr__(self) -> str:
         return f"<libp2p.peer.id.ID ({self!s})>"
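The switch to `functools.cached_property` also quietly fixes a latent bug: the old `if not self._xor_id` guard would recompute whenever the cached value happened to be falsy, whereas `cached_property` computes exactly once per instance. A toy stand-in (hypothetical names) shows the caching behavior:

```python
import functools
import hashlib

class Demo:
    """Toy stand-in for peer.id.ID, just to count property evaluations."""

    def __init__(self, data: bytes) -> None:
        self._bytes = data
        self.computes = 0

    @functools.cached_property
    def digest_hex(self) -> str:
        self.computes += 1  # counts how often the body actually runs
        return hashlib.sha256(self._bytes).hexdigest()

d = Demo(b"\x00peer")
assert d.digest_hex == d.digest_hex
assert d.computes == 1  # second access is served from the instance cache
```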

View File

@ -0,0 +1,22 @@
syntax = "proto3";
package libp2p.peer.pb.crypto;
option go_package = "github.com/libp2p/go-libp2p/core/crypto/pb";
enum KeyType {
RSA = 0;
Ed25519 = 1;
Secp256k1 = 2;
ECDSA = 3;
}
message PublicKey {
KeyType Type = 1;
bytes Data = 2;
}
message PrivateKey {
KeyType Type = 1;
bytes Data = 2;
}

View File

@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/crypto.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1blibp2p/peer/pb/crypto.proto\x12\x15libp2p.peer.pb.crypto\"G\n\tPublicKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c\"H\n\nPrivateKey\x12,\n\x04Type\x18\x01 \x01(\x0e\x32\x1e.libp2p.peer.pb.crypto.KeyType\x12\x0c\n\x04\x44\x61ta\x18\x02 \x01(\x0c*9\n\x07KeyType\x12\x07\n\x03RSA\x10\x00\x12\x0b\n\x07\x45\x64\x32\x35\x35\x31\x39\x10\x01\x12\r\n\tSecp256k1\x10\x02\x12\t\n\x05\x45\x43\x44SA\x10\x03\x42,Z*github.com/libp2p/go-libp2p/core/crypto/pbb\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.crypto_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
_globals['DESCRIPTOR']._options = None
_globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/crypto/pb'
_globals['_KEYTYPE']._serialized_start=201
_globals['_KEYTYPE']._serialized_end=258
_globals['_PUBLICKEY']._serialized_start=54
_globals['_PUBLICKEY']._serialized_end=125
_globals['_PRIVATEKEY']._serialized_start=127
_globals['_PRIVATEKEY']._serialized_end=199
# @@protoc_insertion_point(module_scope)

View File

@ -0,0 +1,33 @@
from google.protobuf.internal import enum_type_wrapper as _enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Optional as _Optional, Union as _Union
DESCRIPTOR: _descriptor.FileDescriptor
class KeyType(int, metaclass=_enum_type_wrapper.EnumTypeWrapper):
__slots__ = ()
RSA: _ClassVar[KeyType]
Ed25519: _ClassVar[KeyType]
Secp256k1: _ClassVar[KeyType]
ECDSA: _ClassVar[KeyType]
RSA: KeyType
Ed25519: KeyType
Secp256k1: KeyType
ECDSA: KeyType
class PublicKey(_message.Message):
__slots__ = ("Type", "Data")
TYPE_FIELD_NUMBER: _ClassVar[int]
DATA_FIELD_NUMBER: _ClassVar[int]
Type: KeyType
Data: bytes
def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...
class PrivateKey(_message.Message):
__slots__ = ("Type", "Data")
TYPE_FIELD_NUMBER: _ClassVar[int]
DATA_FIELD_NUMBER: _ClassVar[int]
Type: KeyType
Data: bytes
def __init__(self, Type: _Optional[_Union[KeyType, str]] = ..., Data: _Optional[bytes] = ...) -> None: ...

View File

@ -0,0 +1,14 @@
syntax = "proto3";
package libp2p.peer.pb.record;
import "libp2p/peer/pb/crypto.proto";
option go_package = "github.com/libp2p/go-libp2p/core/record/pb";
message Envelope {
libp2p.peer.pb.crypto.PublicKey public_key = 1;
bytes payload_type = 2;
bytes payload = 3;
bytes signature = 5;
}

View File

@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/envelope.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from libp2p.peer.pb import crypto_pb2 as libp2p_dot_peer_dot_pb_dot_crypto__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1dlibp2p/peer/pb/envelope.proto\x12\x15libp2p.peer.pb.record\x1a\x1blibp2p/peer/pb/crypto.proto\"z\n\x08\x45nvelope\x12\x34\n\npublic_key\x18\x01 \x01(\x0b\x32 .libp2p.peer.pb.crypto.PublicKey\x12\x14\n\x0cpayload_type\x18\x02 \x01(\x0c\x12\x0f\n\x07payload\x18\x03 \x01(\x0c\x12\x11\n\tsignature\x18\x05 \x01(\x0c\x42,Z*github.com/libp2p/go-libp2p/core/record/pbb\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.envelope_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
_globals['DESCRIPTOR']._options = None
_globals['DESCRIPTOR']._serialized_options = b'Z*github.com/libp2p/go-libp2p/core/record/pb'
_globals['_ENVELOPE']._serialized_start=85
_globals['_ENVELOPE']._serialized_end=207
# @@protoc_insertion_point(module_scope)

View File

@ -0,0 +1,18 @@
from libp2p.peer.pb import crypto_pb2 as _crypto_pb2
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Mapping as _Mapping, Optional as _Optional, Union as _Union
DESCRIPTOR: _descriptor.FileDescriptor
class Envelope(_message.Message):
__slots__ = ("public_key", "payload_type", "payload", "signature")
PUBLIC_KEY_FIELD_NUMBER: _ClassVar[int]
PAYLOAD_TYPE_FIELD_NUMBER: _ClassVar[int]
PAYLOAD_FIELD_NUMBER: _ClassVar[int]
SIGNATURE_FIELD_NUMBER: _ClassVar[int]
public_key: _crypto_pb2.PublicKey
payload_type: bytes
payload: bytes
signature: bytes
def __init__(self, public_key: _Optional[_Union[_crypto_pb2.PublicKey, _Mapping]] = ..., payload_type: _Optional[bytes] = ..., payload: _Optional[bytes] = ..., signature: _Optional[bytes] = ...) -> None: ... # type: ignore[type-arg]

View File

@ -0,0 +1,31 @@
syntax = "proto3";
package peer.pb;
option go_package = "github.com/libp2p/go-libp2p/core/peer/pb";
// PeerRecord messages contain information that is useful to share with other peers.
// Currently, a PeerRecord contains the public listen addresses for a peer, but this
// is expected to expand to include other information in the future.
//
// PeerRecords are designed to be serialized to bytes and placed inside of
// SignedEnvelopes before sharing with other peers.
// See https://github.com/libp2p/go-libp2p/blob/master/core/record/pb/envelope.proto for
// the SignedEnvelope definition.
message PeerRecord {
// AddressInfo is a wrapper around a binary multiaddr. It is defined as a
// separate message to allow us to add per-address metadata in the future.
message AddressInfo {
bytes multiaddr = 1;
}
// peer_id contains a libp2p peer id in its binary representation.
bytes peer_id = 1;
// seq contains a monotonically-increasing sequence counter to order PeerRecords in time.
uint64 seq = 2;
// addresses is a list of public listen addresses for the peer.
repeated AddressInfo addresses = 3;
}

View File

@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: libp2p/peer/pb/peer_record.proto
# Protobuf Python Version: 4.25.3
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n libp2p/peer/pb/peer_record.proto\x12\x07peer.pb\"\x80\x01\n\nPeerRecord\x12\x0f\n\x07peer_id\x18\x01 \x01(\x0c\x12\x0b\n\x03seq\x18\x02 \x01(\x04\x12\x32\n\taddresses\x18\x03 \x03(\x0b\x32\x1f.peer.pb.PeerRecord.AddressInfo\x1a \n\x0b\x41\x64\x64ressInfo\x12\x11\n\tmultiaddr\x18\x01 \x01(\x0c\x42*Z(github.com/libp2p/go-libp2p/core/peer/pbb\x06proto3')
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'libp2p.peer.pb.peer_record_pb2', _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
_globals['DESCRIPTOR']._options = None
_globals['DESCRIPTOR']._serialized_options = b'Z(github.com/libp2p/go-libp2p/core/peer/pb'
_globals['_PEERRECORD']._serialized_start=46
_globals['_PEERRECORD']._serialized_end=174
_globals['_PEERRECORD_ADDRESSINFO']._serialized_start=142
_globals['_PEERRECORD_ADDRESSINFO']._serialized_end=174
# @@protoc_insertion_point(module_scope)

View File

@ -0,0 +1,21 @@
from google.protobuf.internal import containers as _containers
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from typing import ClassVar as _ClassVar, Iterable as _Iterable, Mapping as _Mapping, Optional as _Optional, Union as _Union
DESCRIPTOR: _descriptor.FileDescriptor
class PeerRecord(_message.Message):
__slots__ = ("peer_id", "seq", "addresses")
class AddressInfo(_message.Message):
__slots__ = ("multiaddr",)
MULTIADDR_FIELD_NUMBER: _ClassVar[int]
multiaddr: bytes
def __init__(self, multiaddr: _Optional[bytes] = ...) -> None: ...
PEER_ID_FIELD_NUMBER: _ClassVar[int]
SEQ_FIELD_NUMBER: _ClassVar[int]
ADDRESSES_FIELD_NUMBER: _ClassVar[int]
peer_id: bytes
seq: int
addresses: _containers.RepeatedCompositeFieldContainer[PeerRecord.AddressInfo]
def __init__(self, peer_id: _Optional[bytes] = ..., seq: _Optional[int] = ..., addresses: _Optional[_Iterable[_Union[PeerRecord.AddressInfo, _Mapping]]] = ...) -> None: ... # type: ignore[type-arg]

251
libp2p/peer/peer_record.py Normal file
View File

@ -0,0 +1,251 @@
from collections.abc import Sequence
import threading
import time
from typing import Any
from multiaddr import Multiaddr
from libp2p.abc import IPeerRecord
from libp2p.peer.id import ID
import libp2p.peer.pb.peer_record_pb2 as pb
from libp2p.peer.peerinfo import PeerInfo
PEER_RECORD_ENVELOPE_DOMAIN = "libp2p-peer-record"
PEER_RECORD_ENVELOPE_PAYLOAD_TYPE = b"\x03\x01"
_last_timestamp_lock = threading.Lock()
_last_timestamp: int = 0
class PeerRecord(IPeerRecord):
"""
A record that contains metadata about a peer in the libp2p network.
This includes:
- `peer_id`: The peer's globally unique identifier.
- `addrs`: A list of the peer's publicly reachable multiaddrs.
- `seq`: A strictly monotonically increasing timestamp used
to order records over time.
PeerRecords are designed to be signed and transmitted in libp2p routing Envelopes.
"""
peer_id: ID
addrs: list[Multiaddr]
seq: int
def __init__(
self,
peer_id: ID | None = None,
addrs: list[Multiaddr] | None = None,
seq: int | None = None,
) -> None:
"""
Initialize a new PeerRecord.
If `seq` is not provided, a timestamp-based strictly increasing sequence
number will be generated.
:param peer_id: ID of the peer this record refers to.
:param addrs: Public multiaddrs of the peer.
:param seq: Monotonic sequence number.
"""
if peer_id is not None:
self.peer_id = peer_id
self.addrs = addrs or []
if seq is not None:
self.seq = seq
else:
self.seq = timestamp_seq()
def __repr__(self) -> str:
return (
f"PeerRecord(\n"
f" peer_id={self.peer_id},\n"
f" multiaddrs={[str(m) for m in self.addrs]},\n"
f" seq={self.seq}\n"
f")"
)
def domain(self) -> str:
"""
Return the domain string associated with this PeerRecord.
Used during record signing and envelope validation to identify the record type.
"""
return PEER_RECORD_ENVELOPE_DOMAIN
def codec(self) -> bytes:
"""
Return the codec identifier for PeerRecords.
This binary prefix helps distinguish PeerRecords in serialized envelopes.
"""
return PEER_RECORD_ENVELOPE_PAYLOAD_TYPE
def to_protobuf(self) -> pb.PeerRecord:
"""
Convert the current PeerRecord into a ProtoBuf PeerRecord message.
:raises ValueError: if peer_id serialization fails.
:return: A ProtoBuf-encoded PeerRecord message object.
"""
try:
id_bytes = self.peer_id.to_bytes()
except Exception as e:
raise ValueError(f"failed to marshal peer_id: {e}")
msg = pb.PeerRecord()
msg.peer_id = id_bytes
msg.seq = self.seq
msg.addresses.extend(addrs_to_protobuf(self.addrs))
return msg
def marshal_record(self) -> bytes:
"""
Serialize a PeerRecord into raw bytes suitable for embedding in an Envelope.
This is typically called during the process of signing or sealing the record.
:raises ValueError: if serialization to protobuf fails.
:return: Serialized PeerRecord bytes.
"""
try:
msg = self.to_protobuf()
return msg.SerializeToString()
except Exception as e:
raise ValueError(f"failed to marshal PeerRecord: {e}")
def equal(self, other: Any) -> bool:
"""
Check if this PeerRecord is identical to another.
Two PeerRecords are considered equal if:
- Their peer IDs match.
- Their sequence numbers are identical.
- Their address lists are identical and in the same order.
:param other: Another PeerRecord instance.
:return: True if all fields match, False otherwise.
"""
if not isinstance(other, PeerRecord):
return False
return (
self.peer_id == other.peer_id
and self.seq == other.seq
and len(self.addrs) == len(other.addrs)
and all(a1 == a2 for a1, a2 in zip(self.addrs, other.addrs))
)
def unmarshal_record(data: bytes) -> PeerRecord:
"""
Deserialize a PeerRecord from its serialized byte representation.
Typically used when receiving a PeerRecord inside a signed routing Envelope.
:param data: Serialized protobuf-encoded bytes.
:raises ValueError: if parsing or conversion fails.
:return: A valid PeerRecord instance.
"""
if data is None:
raise ValueError("cannot unmarshal PeerRecord from None")
msg = pb.PeerRecord()
try:
msg.ParseFromString(data)
except Exception as e:
raise ValueError(f"Failed to parse PeerRecord protobuf: {e}")
try:
record = peer_record_from_protobuf(msg)
except Exception as e:
raise ValueError(f"Failed to convert protobuf to PeerRecord: {e}")
return record
def timestamp_seq() -> int:
"""
Generate a strictly increasing timestamp-based sequence number.
Ensures that even if multiple PeerRecords are generated in the same nanosecond,
their `seq` values will still be strictly increasing by using a lock to track the
last value.
:return: A strictly increasing integer timestamp.
"""
global _last_timestamp
now = int(time.time_ns())
with _last_timestamp_lock:
if now <= _last_timestamp:
now = _last_timestamp + 1
_last_timestamp = now
return now
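The lock-and-bump pattern above can be exercised on its own; this standalone sketch mirrors the module-level `_last_timestamp` state and checks that sequence numbers stay strictly increasing even when many calls land on the same nanosecond tick:

```python
import threading
import time

_lock = threading.Lock()
_last = 0

def timestamp_seq() -> int:
    # Nanosecond clock, bumped by one whenever a call lands on the same
    # (or an earlier) tick as the previous call.
    global _last
    now = time.time_ns()
    with _lock:
        if now <= _last:
            now = _last + 1
        _last = now
        return now

seqs = [timestamp_seq() for _ in range(10_000)]
assert all(a < b for a, b in zip(seqs, seqs[1:]))  # strictly increasing
```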
def peer_record_from_peer_info(info: PeerInfo) -> PeerRecord:
"""
Create a PeerRecord from a PeerInfo object.
This automatically assigns a timestamp-based sequence number to the record.
:param info: A PeerInfo instance (contains peer_id and addrs).
:return: A PeerRecord instance.
"""
record = PeerRecord()
record.peer_id = info.peer_id
record.addrs = info.addrs
return record
def peer_record_from_protobuf(msg: pb.PeerRecord) -> PeerRecord:
"""
Convert a protobuf PeerRecord message into a PeerRecord object.
:param msg: Protobuf PeerRecord message.
:raises ValueError: if the peer_id cannot be parsed.
:return: A deserialized PeerRecord instance.
"""
try:
peer_id = ID(msg.peer_id)
except Exception as e:
raise ValueError(f"Failed to unmarshal peer_id: {e}")
addrs = addrs_from_protobuf(msg.addresses)
seq = msg.seq
return PeerRecord(peer_id, addrs, seq)
def addrs_from_protobuf(addrs: Sequence[pb.PeerRecord.AddressInfo]) -> list[Multiaddr]:
"""
Convert a list of protobuf address records to Multiaddr objects.
:param addrs: A list of protobuf PeerRecord.AddressInfo messages.
:return: A list of decoded Multiaddr instances (invalid ones are skipped).
"""
out = []
for addr_info in addrs:
try:
addr = Multiaddr(addr_info.multiaddr)
out.append(addr)
except Exception:
continue
return out
def addrs_to_protobuf(addrs: list[Multiaddr]) -> list[pb.PeerRecord.AddressInfo]:
"""
Convert a list of Multiaddr objects into their protobuf representation.
:param addrs: A list of Multiaddr instances.
:return: A list of PeerRecord.AddressInfo protobuf messages.
"""
out = []
for addr in addrs:
addr_info = pb.PeerRecord.AddressInfo()
addr_info.multiaddr = addr.to_bytes()
out.append(addr_info)
return out

View File

@@ -18,6 +18,13 @@ from libp2p.crypto.keys import (
     PublicKey,
 )

+"""
+Latency EWMA smoothing governs the decay of the EWMA (the speed at which
+it changes). This must be a normalized (0-1) value.
+1 is 100% change, 0 is no change.
+"""
+LATENCY_EWMA_SMOOTHING = 0.1
+
 class PeerData(IPeerData):
     pubkey: PublicKey | None
@@ -27,6 +34,7 @@ class PeerData(IPeerData):
     addrs: list[Multiaddr]
     last_identified: int
     ttl: int  # Keep ttl=0 by default for always valid
+    latmap: float

     def __init__(self) -> None:
         self.pubkey = None
@@ -36,6 +44,9 @@ class PeerData(IPeerData):
         self.addrs = []
         self.last_identified = int(time.time())
         self.ttl = 0
+        self.latmap = 0
+
+    # --------PROTO-BOOK--------

     def get_protocols(self) -> list[str]:
         """
@@ -55,6 +66,37 @@
         """
         self.protocols = list(protocols)

+    def remove_protocols(self, protocols: Sequence[str]) -> None:
+        """
+        :param protocols: protocols to remove
+        """
+        for protocol in protocols:
+            if protocol in self.protocols:
+                self.protocols.remove(protocol)
+
+    def supports_protocols(self, protocols: Sequence[str]) -> list[str]:
+        """
+        :param protocols: protocols to check from
+        :return: all supported protocols in the given list
+        """
+        return [proto for proto in protocols if proto in self.protocols]
+
+    def first_supported_protocol(self, protocols: Sequence[str]) -> str:
+        """
+        :param protocols: protocols to check from
+        :return: first supported protocol in the given list
+        """
+        for protocol in protocols:
+            if protocol in self.protocols:
+                return protocol
+        return "None supported"
+
+    def clear_protocol_data(self) -> None:
+        """Clear all protocols"""
+        self.protocols = []
+
+    # -------ADDR-BOOK---------
+
     def add_addrs(self, addrs: Sequence[Multiaddr]) -> None:
         """
         :param addrs: multiaddresses to add
@@ -73,6 +115,7 @@
         """Clear all addresses."""
         self.addrs = []

+    # -------METADATA-----------
     def put_metadata(self, key: str, val: Any) -> None:
         """
         :param key: key in KV pair
@@ -90,6 +133,11 @@
             return self.metadata[key]
         raise PeerDataError("key not found")

+    def clear_metadata(self) -> None:
+        """Clears metadata."""
+        self.metadata = {}
+
+    # -------KEY-BOOK---------------
     def add_pubkey(self, pubkey: PublicKey) -> None:
         """
         :param pubkey:
@@ -120,9 +168,41 @@
             raise PeerDataError("private key not found")
         return self.privkey

+    def clear_keydata(self) -> None:
+        """Clears keydata"""
+        self.pubkey = None
+        self.privkey = None
+
+    # ----------METRICS--------------
+
+    def record_latency(self, new_latency: float) -> None:
+        """
+        Records a new latency measurement for the given peer
+        using an Exponentially Weighted Moving Average (EWMA)
+        :param new_latency: the new latency value
+        """
+        s = LATENCY_EWMA_SMOOTHING
+        if s > 1 or s < 0:
+            s = 0.1
+        if self.latmap == 0:
+            self.latmap = new_latency
+        else:
+            prev = self.latmap
+            updated = ((1.0 - s) * prev) + (s * new_latency)
+            self.latmap = updated
+
+    def latency_EWMA(self) -> float:
+        """Returns the latency EWMA value"""
+        return self.latmap
+
+    def clear_metrics(self) -> None:
+        """Clear the latency metrics"""
+        self.latmap = 0
+
     def update_last_identified(self) -> None:
         self.last_identified = int(time.time())

+    # ----------TTL------------------
     def get_last_identified(self) -> int:
         """
         :return: last identified timestamp
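The `record_latency` update rule reduces to a small pure function, sketched here standalone (assuming, as the code above does, that `latmap == 0` means "no sample yet", so the first measurement seeds the average):

```python
LATENCY_EWMA_SMOOTHING = 0.1  # same smoothing constant as the diff above

def ewma_update(prev: float, sample: float, s: float = LATENCY_EWMA_SMOOTHING) -> float:
    # First sample seeds the average; afterwards each sample moves the
    # average by a fraction s of the distance to the new value.
    if prev == 0:
        return sample
    return (1.0 - s) * prev + s * sample

lat = 0.0
for sample in (100.0, 100.0, 200.0):
    lat = ewma_update(lat, sample)

assert abs(lat - 110.0) < 1e-9  # 0.9 * 100 + 0.1 * 200
```

With s = 0.1 a latency spike nudges the average by only 10% of the jump, which is the smoothing the module comment describes.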

View File

@@ -3,9 +3,11 @@ from collections.abc import (
 )
 from typing import (
     Any,
+    cast,
 )

 import multiaddr
+from multiaddr.protocols import Protocol

 from .id import (
     ID,
@@ -42,7 +44,8 @@ def info_from_p2p_addr(addr: multiaddr.Multiaddr) -> PeerInfo:
     p2p_protocols = p2p_part.protocols()
     if not p2p_protocols:
         raise InvalidAddrError("The last part of the address has no protocols")
-    last_protocol = p2p_protocols[0]
+    last_protocol = cast(Protocol, p2p_part.protocols()[0])
     if last_protocol is None:
         raise InvalidAddrError("The last protocol is None")

View File

@@ -2,6 +2,7 @@ from collections import (
     defaultdict,
 )
 from collections.abc import (
+    AsyncIterable,
     Sequence,
 )
 from typing import (
@@ -11,6 +12,8 @@ from typing import (
 from multiaddr import (
     Multiaddr,
 )
+import trio
+from trio import MemoryReceiveChannel, MemorySendChannel

 from libp2p.abc import (
     IPeerStore,
@@ -20,6 +23,7 @@ from libp2p.crypto.keys import (
     PrivateKey,
     PublicKey,
 )
+from libp2p.peer.envelope import Envelope

 from .id import (
     ID,
@@ -35,11 +39,25 @@ from .peerinfo import (
 PERMANENT_ADDR_TTL = 0

+# TODO: Set up an async task for periodic peer-store cleanup
+# for expired addresses and records.
+
+class PeerRecordState:
+    envelope: Envelope
+    seq: int
+
+    def __init__(self, envelope: Envelope, seq: int):
+        self.envelope = envelope
+        self.seq = seq
+
 class PeerStore(IPeerStore):
     peer_data_map: dict[ID, PeerData]

-    def __init__(self) -> None:
+    def __init__(self, max_records: int = 10000) -> None:
         self.peer_data_map = defaultdict(PeerData)
+        self.addr_update_channels: dict[ID, MemorySendChannel[Multiaddr]] = {}
+        self.peer_record_map: dict[ID, PeerRecordState] = {}
+        self.max_records = max_records

     def peer_info(self, peer_id: ID) -> PeerInfo:
         """
@@ -53,6 +71,69 @@
             return PeerInfo(peer_id, peer_data.get_addrs())
         raise PeerStoreError("peer ID not found")
def peer_ids(self) -> list[ID]:
"""
:return: all of the peer IDs stored in peer store
"""
return list(self.peer_data_map.keys())
def clear_peerdata(self, peer_id: ID) -> None:
"""Clears all data associated with the given peer_id."""
if peer_id in self.peer_data_map:
del self.peer_data_map[peer_id]
else:
raise PeerStoreError("peer ID not found")
# Clear the peer records
if peer_id in self.peer_record_map:
self.peer_record_map.pop(peer_id, None)
def valid_peer_ids(self) -> list[ID]:
"""
:return: all of the valid peer IDs stored in peer store
"""
valid_peer_ids: list[ID] = []
for peer_id, peer_data in self.peer_data_map.items():
if not peer_data.is_expired():
valid_peer_ids.append(peer_id)
else:
peer_data.clear_addrs()
return valid_peer_ids
def _enforce_record_limit(self) -> None:
"""Enforce maximum number of stored records."""
if len(self.peer_record_map) > self.max_records:
# Remove oldest records based on sequence number
sorted_records = sorted(
self.peer_record_map.items(), key=lambda x: x[1].seq
)
records_to_remove = len(self.peer_record_map) - self.max_records
for peer_id, _ in sorted_records[:records_to_remove]:
self.maybe_delete_peer_record(peer_id)
# pop() rather than del: maybe_delete_peer_record may already have
# removed the entry, and del would raise KeyError in that case
self.peer_record_map.pop(peer_id, None)
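The cap that `_enforce_record_limit` applies boils down to: sort stored records by `seq` and drop the lowest-seq entries until at most `max_records` remain. A standalone model with hypothetical names:

```python
# peer -> seq of its stored signed record
records = {"a": 5, "b": 1, "c": 9, "d": 3}
max_records = 2

overflow = len(records) - max_records
if overflow > 0:
    # Oldest first: the smallest sequence numbers are the stalest records.
    oldest = sorted(records.items(), key=lambda kv: kv[1])[:overflow]
    for peer, _ in oldest:
        records.pop(peer, None)

assert records == {"a": 5, "c": 9}  # the two newest records survive
```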
async def start_cleanup_task(self, cleanup_interval: int = 3600) -> None:
"""Start periodic cleanup of expired peer records and addresses."""
while True:
await trio.sleep(cleanup_interval)
self._cleanup_expired_records()
def _cleanup_expired_records(self) -> None:
"""Remove expired peer records and addresses"""
expired_peers = []
for peer_id, peer_data in self.peer_data_map.items():
if peer_data.is_expired():
expired_peers.append(peer_id)
for peer_id in expired_peers:
self.maybe_delete_peer_record(peer_id)
del self.peer_data_map[peer_id]
self._enforce_record_limit()
# --------PROTO-BOOK--------
     def get_protocols(self, peer_id: ID) -> list[str]:
         """
         :param peer_id: peer ID to get protocols for
@@ -79,23 +160,31 @@
         peer_data = self.peer_data_map[peer_id]
         peer_data.set_protocols(list(protocols))

-    def peer_ids(self) -> list[ID]:
-        """
-        :return: all of the peer IDs stored in peer store
-        """
-        return list(self.peer_data_map.keys())
-
-    def valid_peer_ids(self) -> list[ID]:
-        """
-        :return: all of the valid peer IDs stored in peer store
-        """
-        valid_peer_ids: list[ID] = []
-        for peer_id, peer_data in self.peer_data_map.items():
-            if not peer_data.is_expired():
-                valid_peer_ids.append(peer_id)
-            else:
-                peer_data.clear_addrs()
-        return valid_peer_ids
+    def remove_protocols(self, peer_id: ID, protocols: Sequence[str]) -> None:
+        """
+        :param peer_id: peer ID to get info for
+        :param protocols: unsupported protocols to remove
+        """
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.remove_protocols(protocols)
+
+    def supports_protocols(self, peer_id: ID, protocols: Sequence[str]) -> list[str]:
+        """
+        :return: all supported protocols in the given list
+        """
+        peer_data = self.peer_data_map[peer_id]
+        return peer_data.supports_protocols(protocols)
+
+    def first_supported_protocol(self, peer_id: ID, protocols: Sequence[str]) -> str:
+        peer_data = self.peer_data_map[peer_id]
+        return peer_data.first_supported_protocol(protocols)
+
+    def clear_protocol_data(self, peer_id: ID) -> None:
+        """Clears protocol data"""
+        peer_data = self.peer_data_map[peer_id]
+        peer_data.clear_protocol_data()
+
+    # ------METADATA---------

     def get(self, peer_id: ID, key: str) -> Any:
         """
@@ -121,6 +210,92 @@
         peer_data = self.peer_data_map[peer_id]
         peer_data.put_metadata(key, val)
def clear_metadata(self, peer_id: ID) -> None:
"""Clears metadata"""
peer_data = self.peer_data_map[peer_id]
peer_data.clear_metadata()
# -----CERT-ADDR-BOOK-----
# TODO: Make proper use of this function
def maybe_delete_peer_record(self, peer_id: ID) -> None:
"""
Delete the signed peer record for a peer if it has no known
(non-expired) addresses.
This is a garbage-collection mechanism: if all addresses for a peer have
expired or been cleared, there's no point holding onto its signed `Envelope`.
:param peer_id: The peer whose record we may delete.
"""
if peer_id in self.peer_record_map:
if not self.addrs(peer_id):
self.peer_record_map.pop(peer_id, None)
def consume_peer_record(self, envelope: Envelope, ttl: int) -> bool:
"""
Accept and store a signed PeerRecord, unless it's older than
the one already stored.
This function:
- Extracts the peer ID and sequence number from the envelope
- Rejects the record if it's older (lower seq)
- Updates the stored peer record and replaces associated addresses if accepted
:param envelope: Signed envelope containing a PeerRecord.
:param ttl: Time-to-live for the included multiaddrs (in seconds).
:return: True if the record was accepted and stored; False if it was rejected.
"""
record = envelope.record()
peer_id = record.peer_id
existing = self.peer_record_map.get(peer_id)
if existing and existing.seq > record.seq:
return False # reject older record
new_addrs = set(record.addrs)
self.peer_record_map[peer_id] = PeerRecordState(envelope, record.seq)
self.peer_data_map[peer_id].clear_addrs()
self.add_addrs(peer_id, list(new_addrs), ttl)
return True
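The accept/reject rule in `consume_peer_record` is a monotonic sequence-number check per peer. A minimal stand-alone sketch of just that rule (class and field names here are illustrative, not the py-libp2p API):

```python
# Sketch of seq-based record acceptance: a record is stored unless the
# peer already has a record with a strictly higher sequence number.
class RecordBook:
    def __init__(self) -> None:
        self._records: dict[str, tuple[int, list[str]]] = {}

    def consume(self, peer_id: str, seq: int, addrs: list[str]) -> bool:
        existing = self._records.get(peer_id)
        if existing and existing[0] > seq:
            return False  # reject older record
        # equal or newer seq replaces the stored record and its addresses
        self._records[peer_id] = (seq, addrs)
        return True

book = RecordBook()
assert book.consume("QmPeer", 1, ["/ip4/10.0.0.1/tcp/4001"])
assert book.consume("QmPeer", 3, ["/ip4/10.0.0.2/tcp/4001"])
assert not book.consume("QmPeer", 2, ["/ip4/10.0.0.3/tcp/4001"])  # stale
```

Note that with the strict `>` comparison above, a record carrying the same seq as the stored one is re-accepted and overwrites it, matching the check in `consume_peer_record`.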
def consume_peer_records(self, envelopes: list[Envelope], ttl: int) -> list[bool]:
"""Consume multiple peer records in a single operation."""
results = []
for envelope in envelopes:
results.append(self.consume_peer_record(envelope, ttl))
return results
def get_peer_record(self, peer_id: ID) -> Envelope | None:
"""
Retrieve the most recent signed PeerRecord `Envelope` for a peer, if it exists
and is still relevant.
First, it runs cleanup via `maybe_delete_peer_record` to purge stale data.
Then it checks whether the peer has valid, unexpired addresses before
returning the associated envelope.
:param peer_id: The peer to look up.
:return: The signed Envelope if the peer is known and has valid
addresses; None otherwise.
"""
self.maybe_delete_peer_record(peer_id)
# Check if the peer has any valid addresses
if (
peer_id in self.peer_data_map
and not self.peer_data_map[peer_id].is_expired()
):
state = self.peer_record_map.get(peer_id)
if state is not None:
return state.envelope
return None
# -------ADDR-BOOK--------
    def add_addr(self, peer_id: ID, addr: Multiaddr, ttl: int = 0) -> None:
        """
        :param peer_id: peer ID to add address for
@@ -140,6 +315,15 @@ class PeerStore(IPeerStore):
            peer_data.set_ttl(ttl)
            peer_data.update_last_identified()
if peer_id in self.addr_update_channels:
for addr in addrs:
try:
self.addr_update_channels[peer_id].send_nowait(addr)
except trio.WouldBlock:
pass # Or consider logging / dropping / replacing stream
self.maybe_delete_peer_record(peer_id)
    def addrs(self, peer_id: ID) -> list[Multiaddr]:
        """
        :param peer_id: peer ID to get addrs for
@@ -163,9 +347,11 @@ class PeerStore(IPeerStore):
        if peer_id in self.peer_data_map:
            self.peer_data_map[peer_id].clear_addrs()
self.maybe_delete_peer_record(peer_id)
    def peers_with_addrs(self) -> list[ID]:
        """
        :return: all of the peer IDs which have addrs stored in peer store
        """
        # Add all peers with at least 1 addr to output
        output: list[ID] = []
@@ -179,6 +365,27 @@ class PeerStore(IPeerStore):
                peer_data.clear_addrs()
        return output
async def addr_stream(self, peer_id: ID) -> AsyncIterable[Multiaddr]:
"""
Returns an async stream of newly added addresses for the given peer.
This function allows consumers to subscribe to address updates for a peer
and receive each new address as it is added via `add_addr` or `add_addrs`.
:param peer_id: The ID of the peer to monitor address updates for.
:return: An async iterator yielding Multiaddr instances as they are added.
"""
send: MemorySendChannel[Multiaddr]
receive: MemoryReceiveChannel[Multiaddr]
send, receive = trio.open_memory_channel(0)
self.addr_update_channels[peer_id] = send
async for addr in receive:
yield addr
# -------KEY-BOOK---------
    def add_pubkey(self, peer_id: ID, pubkey: PublicKey) -> None:
        """
        :param peer_id: peer ID to add public key for
@@ -239,6 +446,45 @@ class PeerStore(IPeerStore):
        self.add_pubkey(peer_id, key_pair.public_key)
        self.add_privkey(peer_id, key_pair.private_key)
def peer_with_keys(self) -> list[ID]:
"""Returns the peer_ids for which keys are stored"""
return [
peer_id
for peer_id, pdata in self.peer_data_map.items()
if pdata.pubkey is not None
]
def clear_keydata(self, peer_id: ID) -> None:
"""Clears the keys of the peer"""
peer_data = self.peer_data_map[peer_id]
peer_data.clear_keydata()
# --------METRICS--------
def record_latency(self, peer_id: ID, RTT: float) -> None:
"""
Records a new latency measurement for the given peer
using Exponentially Weighted Moving Average (EWMA)
:param peer_id: peer ID to record latency for
:param RTT: the new latency value (round trip time)
"""
peer_data = self.peer_data_map[peer_id]
peer_data.record_latency(RTT)
def latency_EWMA(self, peer_id: ID) -> float:
"""
:param peer_id: peer ID to get latency for
:return: The latency EWMA value for that peer
"""
peer_data = self.peer_data_map[peer_id]
return peer_data.latency_EWMA()
def clear_metrics(self, peer_id: ID) -> None:
"""Clear the latency metrics"""
peer_data = self.peer_data_map[peer_id]
peer_data.clear_metrics()
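`record_latency` folds each new RTT sample into an exponentially weighted moving average, so one slow round trip nudges rather than replaces the estimate. The update rule is `ewma = alpha * sample + (1 - alpha) * ewma`; the sketch below uses an assumed smoothing factor (the actual value lives in `PeerData`):

```python
def ewma_update(prev: "float | None", sample: float, alpha: float = 0.5) -> float:
    """Blend a new RTT sample into the running average.

    alpha near 1 tracks new samples quickly; near 0 it smooths heavily.
    The first sample seeds the average directly.
    """
    if prev is None:
        return sample
    return alpha * sample + (1 - alpha) * prev

est = None
for rtt in (100.0, 100.0, 50.0):
    est = ewma_update(est, rtt)
assert est == 75.0  # 0.5 * 50 + 0.5 * 100: the spike only moves us halfway
```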
class PeerStoreError(KeyError):
    """Raised when peer ID is not found in peer store."""

View File

@@ -101,6 +101,18 @@ class Multiselect(IMultiselectMuxer):
        except trio.TooSlowError:
            raise MultiselectError("handshake read timeout")
def get_protocols(self) -> tuple[TProtocol | None, ...]:
"""
Retrieve the protocols for which handlers have been registered.
Returns
-------
tuple[TProtocol | None, ...]
A tuple of registered protocol names.
"""
return tuple(self.handlers.keys())
    async def handshake(self, communicator: IMultiselectCommunicator) -> None:
        """
        Perform handshake to agree on multiselect protocol.

View File

@@ -102,6 +102,9 @@ class TopicValidator(NamedTuple):
    is_async: bool
MAX_CONCURRENT_VALIDATORS = 10
class Pubsub(Service, IPubsub):
    host: IHost
@@ -109,6 +112,7 @@ class Pubsub(Service, IPubsub):
    peer_receive_channel: trio.MemoryReceiveChannel[ID]
    dead_peer_receive_channel: trio.MemoryReceiveChannel[ID]
_validator_semaphore: trio.Semaphore
    seen_messages: LastSeenCache
@@ -143,6 +147,7 @@ class Pubsub(Service, IPubsub):
        msg_id_constructor: Callable[
            [rpc_pb2.Message], bytes
        ] = get_peer_and_seqno_msg_id,
max_concurrent_validator_count: int = MAX_CONCURRENT_VALIDATORS,
    ) -> None:
        """
        Construct a new Pubsub object, which is responsible for handling all
@@ -168,6 +173,7 @@ class Pubsub(Service, IPubsub):
        # Therefore, we can only close from the receive side.
        self.peer_receive_channel = peer_receive
        self.dead_peer_receive_channel = dead_peer_receive
self._validator_semaphore = trio.Semaphore(max_concurrent_validator_count)
        # Register a notifee
        self.host.get_network().register_notifee(
            PubsubNotifee(peer_send, dead_peer_send)
@@ -657,7 +663,11 @@ class Pubsub(Service, IPubsub):
        logger.debug("successfully published message %s", msg)
    async def validate_msg(
        self,
        msg_forwarder: ID,
        msg: rpc_pb2.Message,
    ) -> None:
        """
        Validate the received message.
@@ -680,23 +690,34 @@ class Pubsub(Service, IPubsub):
            if not validator(msg_forwarder, msg):
                raise ValidationError(f"Validation failed for msg={msg}")
        if len(async_topic_validators) > 0:
            # Appends to lists are thread safe in CPython
            results: list[bool] = []
            async with trio.open_nursery() as nursery:
                for async_validator in async_topic_validators:
                    nursery.start_soon(
                        self._run_async_validator,
                        async_validator,
                        msg_forwarder,
                        msg,
                        results,
                    )
            if not all(results):
                raise ValidationError(f"Validation failed for msg={msg}")
async def _run_async_validator(
self,
func: AsyncValidatorFn,
msg_forwarder: ID,
msg: rpc_pb2.Message,
results: list[bool],
) -> None:
async with self._validator_semaphore:
result = await func(msg_forwarder, msg)
results.append(result)
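The semaphore gives a simple throttle: the nursery still spawns one task per validator, but only `max_concurrent_validator_count` of them execute their validator at any moment. A self-contained model of that pattern, using stdlib `asyncio` in place of trio (the cap and validator here are illustrative):

```python
import asyncio

async def run_validators(validators, limit: int) -> "tuple[list[bool], int]":
    """Run all validators, at most `limit` concurrently; track the peak."""
    sem = asyncio.Semaphore(limit)
    results: list[bool] = []
    running = peak = 0

    async def run_one(validator) -> None:
        nonlocal running, peak
        async with sem:  # wait for a free slot before validating
            running += 1
            peak = max(peak, running)
            results.append(await validator())
            running -= 1

    await asyncio.gather(*(run_one(v) for v in validators))
    return results, peak

async def always_valid() -> bool:
    await asyncio.sleep(0)  # yield, as a real network-bound validator would
    return True

results, peak = asyncio.run(run_validators([always_valid] * 5, limit=2))
assert results == [True] * 5
assert peak <= 2  # never more than the cap in flight at once
```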
    async def push_msg(self, msg_forwarder: ID, msg: rpc_pb2.Message) -> None:
        """
        Push a pubsub message to others.

View File

@@ -234,7 +234,8 @@ class RelayDiscovery(Service):
        if not callable(proto_getter):
            return None
if peer_id not in peerstore.peer_ids():
return None
        try:
            # Try to get protocols
            proto_result = proto_getter(peer_id)
@@ -283,8 +284,6 @@ class RelayDiscovery(Service):
            return None
        mux = self.host.get_mux()
        peer_protocols = set()
        # Get protocols from mux with proper type safety
@@ -293,7 +292,9 @@ class RelayDiscovery(Service):
                # Get protocols with proper typing
                mux_protocols = mux.get_protocols()
                if isinstance(mux_protocols, (list, tuple)):
                    available_protocols = [
                        p for p in mux.get_protocols() if p is not None
                    ]
                    for protocol in available_protocols:
                        try:
@@ -313,7 +314,7 @@ class RelayDiscovery(Service):
        self._protocol_cache[peer_id] = peer_protocols
        protocol_str = str(PROTOCOL_ID)
        for protocol in map(TProtocol, peer_protocols):
            if protocol == protocol_str:
                return True
        return False

View File

@@ -1,3 +1,5 @@
from collections.abc import AsyncGenerator
from contextlib import asynccontextmanager
from types import (
    TracebackType,
)
@@ -32,6 +34,72 @@ if TYPE_CHECKING:
    )
class ReadWriteLock:
"""
A read-write lock that allows multiple concurrent readers
or one exclusive writer, implemented using Trio primitives.
"""
def __init__(self) -> None:
self._readers = 0
self._readers_lock = trio.Lock() # Protects access to _readers count
self._writer_lock = trio.Semaphore(1) # Allows only one writer at a time
async def acquire_read(self) -> None:
"""Acquire a read lock. Multiple readers can hold it simultaneously."""
try:
async with self._readers_lock:
if self._readers == 0:
await self._writer_lock.acquire()
self._readers += 1
except trio.Cancelled:
raise
async def release_read(self) -> None:
"""Release a read lock."""
async with self._readers_lock:
if self._readers == 1:
self._writer_lock.release()
self._readers -= 1
async def acquire_write(self) -> None:
"""Acquire an exclusive write lock."""
try:
await self._writer_lock.acquire()
except trio.Cancelled:
raise
def release_write(self) -> None:
"""Release the exclusive write lock."""
self._writer_lock.release()
@asynccontextmanager
async def read_lock(self) -> AsyncGenerator[None, None]:
"""Context manager for acquiring and releasing a read lock safely."""
acquire = False
try:
await self.acquire_read()
acquire = True
yield
finally:
if acquire:
with trio.CancelScope() as scope:
scope.shield = True
await self.release_read()
@asynccontextmanager
async def write_lock(self) -> AsyncGenerator[None, None]:
"""Context manager for acquiring and releasing a write lock safely."""
acquire = False
try:
await self.acquire_write()
acquire = True
yield
finally:
if acquire:
self.release_write()
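The invariant that makes `ReadWriteLock` work is that only the first reader acquires the writer semaphore and only the last reader releases it, so any number of readers share the lock while a writer is excluded. The same bookkeeping, sketched synchronously with stdlib `threading` primitives for illustration (not the trio implementation above):

```python
import threading

class SimpleRWLock:
    """First reader locks out writers; last reader lets them back in."""

    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = threading.Lock()       # guards the reader count
        self._writer_lock = threading.Semaphore(1)  # one writer at a time

    def acquire_read(self) -> None:
        with self._readers_lock:
            if self._readers == 0:
                self._writer_lock.acquire()  # first reader blocks writers
            self._readers += 1

    def release_read(self) -> None:
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer_lock.release()  # last reader re-admits writers

    def writer_may_enter(self) -> bool:
        """Non-blocking probe: could a writer acquire the lock right now?"""
        if self._writer_lock.acquire(blocking=False):
            self._writer_lock.release()
            return True
        return False

lock = SimpleRWLock()
lock.acquire_read()
lock.acquire_read()                  # readers share freely
assert not lock.writer_may_enter()   # writer excluded while any reader holds
lock.release_read()
lock.release_read()
assert lock.writer_may_enter()       # last reader released the writer slot
```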
class MplexStream(IMuxedStream):
    """
    reference: https://github.com/libp2p/go-mplex/blob/master/stream.go
@@ -46,7 +114,7 @@ class MplexStream(IMuxedStream):
    read_deadline: int | None
    write_deadline: int | None
    rw_lock: ReadWriteLock
    close_lock: trio.Lock
    # NOTE: `dataIn` is size of 8 in Go implementation.
@@ -80,6 +148,7 @@ class MplexStream(IMuxedStream):
        self.event_remote_closed = trio.Event()
        self.event_reset = trio.Event()
        self.close_lock = trio.Lock()
        self.rw_lock = ReadWriteLock()
        self.incoming_data_channel = incoming_data_channel
        self._buf = bytearray()
@@ -113,48 +182,49 @@ class MplexStream(IMuxedStream):
        :param n: number of bytes to read
        :return: bytes actually read
        """
        async with self.rw_lock.read_lock():
            if n is not None and n < 0:
                raise ValueError(
                    "the number of bytes to read `n` must be non-negative or "
                    f"`None` to indicate read until EOF, got n={n}"
                )
            if self.event_reset.is_set():
                raise MplexStreamReset
            if n is None:
                return await self._read_until_eof()
            if len(self._buf) == 0:
                data: bytes
                # Peek whether there is data available. If yes, we just read until
                # there is no data, then return.
                try:
                    data = self.incoming_data_channel.receive_nowait()
                    self._buf.extend(data)
                except trio.EndOfChannel:
                    raise MplexStreamEOF
                except trio.WouldBlock:
                    # We know `receive` will be blocked here. Wait for data here with
                    # `receive` and catch all kinds of errors here.
                    try:
                        data = await self.incoming_data_channel.receive()
                        self._buf.extend(data)
                    except trio.EndOfChannel:
                        if self.event_reset.is_set():
                            raise MplexStreamReset
                        if self.event_remote_closed.is_set():
                            raise MplexStreamEOF
                    except trio.ClosedResourceError as error:
                        # Probably `incoming_data_channel` is closed in `reset` when
                        # we are waiting for `receive`.
                        if self.event_reset.is_set():
                            raise MplexStreamReset
                        raise Exception(
                            "`incoming_data_channel` is closed but stream is not reset. "
                            "This should never happen."
                        ) from error
            self._buf.extend(self._read_return_when_blocked())
            payload = self._buf[:n]
            self._buf = self._buf[len(payload) :]
            return bytes(payload)
    async def write(self, data: bytes) -> None:
        """
@@ -162,22 +232,21 @@ class MplexStream(IMuxedStream):
        :return: number of bytes written
        """
        async with self.rw_lock.write_lock():
            if self.event_local_closed.is_set():
                raise MplexStreamClosed(f"cannot write to closed stream: data={data!r}")
            flag = (
                HeaderTags.MessageInitiator
                if self.is_initiator
                else HeaderTags.MessageReceiver
            )
            await self.muxed_conn.send_message(flag, data, self.stream_id)
    async def close(self) -> None:
        """
        Closing a stream closes it for writing and closes the remote end for
        reading but allows writing in the other direction.
        """
        async with self.close_lock:
            if self.event_local_closed.is_set():
                return
@@ -185,8 +254,17 @@ class MplexStream(IMuxedStream):
        flag = (
            HeaderTags.CloseInitiator if self.is_initiator else HeaderTags.CloseReceiver
        )
        try:
            with trio.fail_after(5):  # timeout in seconds
                await self.muxed_conn.send_message(flag, None, self.stream_id)
        except trio.TooSlowError:
            raise TimeoutError("Timeout while trying to close the stream")
        except MuxedConnUnavailable:
            if not self.muxed_conn.event_shutting_down.is_set():
                raise RuntimeError(
                    "Failed to send close message and Mplex isn't shutting down"
                )
        _is_remote_closed: bool
        async with self.close_lock:

View File

@@ -45,6 +45,9 @@ from libp2p.stream_muxer.exceptions import (
    MuxedStreamReset,
)
# Configure logger for this module
logger = logging.getLogger("libp2p.stream_muxer.yamux")
PROTOCOL_ID = "/yamux/1.0.0"
TYPE_DATA = 0x0
TYPE_WINDOW_UPDATE = 0x1
@@ -98,16 +101,32 @@ class YamuxStream(IMuxedStream):
        # Flow control: Check if we have enough send window
        total_len = len(data)
        sent = 0
        logger.debug(f"Stream {self.stream_id}: Starts writing {total_len} bytes")
        while sent < total_len:
            # Wait for available window with timeout
            timeout = False
            async with self.window_lock:
                if self.send_window == 0:
                    logger.debug(
                        f"Stream {self.stream_id}: Window is zero, waiting for update"
                    )
                    # Release the lock while polling so window updates can land
                    self.window_lock.release()
                    with trio.move_on_after(5.0) as cancel_scope:
                        while self.send_window == 0 and not self.closed:
                            await trio.sleep(0.01)
                    # Record whether the wait timed out
                    timeout = cancel_scope.cancelled_caught
                    # Re-acquire lock
                    await self.window_lock.acquire()
                # If we timed out waiting for window update, raise an error
                if timeout:
                    raise MuxedStreamError(
                        "Timed out waiting for window update after 5 seconds."
                    )
                if self.closed:
                    raise MuxedStreamError("Stream is closed")
@@ -123,25 +142,45 @@ class YamuxStream(IMuxedStream):
                await self.conn.secured_conn.write(header + chunk)
                sent += to_send
    async def send_window_update(self, increment: int, skip_lock: bool = False) -> None:
        """
        Send a window update to the peer.
        :param increment: The amount to increment the window size by.
        :param skip_lock: If True, skips acquiring window_lock.
            This should only be used when calling from a context
            that already holds the lock.
        """
        if increment <= 0:
            # If increment is zero or negative, skip sending update
            logger.debug(
                f"Stream {self.stream_id}: Skipping window update "
                f"(increment={increment})"
            )
            return
        logger.debug(
            f"Stream {self.stream_id}: Sending window update with increment={increment}"
        )
        async def _do_window_update() -> None:
            header = struct.pack(
                YAMUX_HEADER_FORMAT,
                0,
                TYPE_WINDOW_UPDATE,
                0,
                self.stream_id,
                increment,
            )
            await self.conn.secured_conn.write(header)
        if skip_lock:
            await _do_window_update()
        else:
            async with self.window_lock:
                await _do_window_update()
    async def read(self, n: int | None = -1) -> bytes:
        # Handle None value for n by converting it to -1
        if n is None:
@@ -149,64 +188,77 @@ class YamuxStream(IMuxedStream):
        # If the stream is closed for receiving and the buffer is empty, raise EOF
        if self.recv_closed and not self.conn.stream_buffers.get(self.stream_id):
            logger.debug(
                f"Stream {self.stream_id}: Stream closed for receiving and buffer empty"
            )
            raise MuxedStreamEOF("Stream is closed for receiving")
        if n == -1:
            data = b""
            while not self.conn.event_shutting_down.is_set():
                # Check if there's data in the buffer
                buffer = self.conn.stream_buffers.get(self.stream_id)
                # If buffer is not available, check if stream is closed
                if buffer is None:
                    logger.debug(f"Stream {self.stream_id}: No buffer available")
                    raise MuxedStreamEOF("Stream buffer closed")
                # If we have data in buffer, process it
                if len(buffer) > 0:
                    chunk = bytes(buffer)
                    buffer.clear()
                    data += chunk
                    # Send window update for the chunk we just read
                    async with self.window_lock:
                        self.recv_window += len(chunk)
                        logger.debug(f"Stream {self.stream_id}: Update {len(chunk)}")
                        await self.send_window_update(len(chunk), skip_lock=True)
                # If stream is closed (FIN received) and buffer is empty, break
                if self.recv_closed and len(buffer) == 0:
                    logger.debug(f"Stream {self.stream_id}: Closed with empty buffer")
                    break
                # If stream was reset, raise reset error
                if self.reset_received:
                    logger.debug(f"Stream {self.stream_id}: Stream was reset")
                    raise MuxedStreamReset("Stream was reset")
                # Wait for more data or stream closure
                logger.debug(f"Stream {self.stream_id}: Waiting for data or FIN")
                await self.conn.stream_events[self.stream_id].wait()
                self.conn.stream_events[self.stream_id] = trio.Event()
            # After loop exit, first check if we have data to return
            if data:
                logger.debug(
                    f"Stream {self.stream_id}: Returning {len(data)} bytes after loop"
                )
                return data
            # No data accumulated, now check why we exited the loop
            if self.conn.event_shutting_down.is_set():
                logger.debug(f"Stream {self.stream_id}: Connection shutting down")
                raise MuxedStreamEOF("Connection shut down")
            # Return empty data
            return b""
        else:
            data = await self.conn.read_stream(self.stream_id, n)
            async with self.window_lock:
                self.recv_window += len(data)
                logger.debug(
                    f"Stream {self.stream_id}: Sending window update after read, "
                    f"increment={len(data)}"
                )
                await self.send_window_update(len(data), skip_lock=True)
            return data
    async def close(self) -> None:
        if not self.send_closed:
            logger.debug(f"Half-closing stream {self.stream_id} (local end)")
            header = struct.pack(
                YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_FIN, self.stream_id, 0
            )
@@ -222,7 +274,7 @@ class YamuxStream(IMuxedStream):
    async def reset(self) -> None:
        if not self.closed:
            logger.debug(f"Resetting stream {self.stream_id}")
            header = struct.pack(
                YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_RST, self.stream_id, 0
            )
@@ -300,7 +352,7 @@ class Yamux(IMuxedConn):
        self._nursery: Nursery | None = None
    async def start(self) -> None:
        logger.debug(f"Starting Yamux for {self.peer_id}")
        if self.event_started.is_set():
            return
        async with trio.open_nursery() as nursery:
@@ -313,7 +365,7 @@ class Yamux(IMuxedConn):
        return self.is_initiator_value
    async def close(self, error_code: int = GO_AWAY_NORMAL) -> None:
        logger.debug(f"Closing Yamux connection with code {error_code}")
        async with self.streams_lock:
            if not self.event_shutting_down.is_set():
                try:
@@ -322,7 +374,7 @@ class Yamux(IMuxedConn):
                    )
                    await self.secured_conn.write(header)
                except Exception as e:
                    logger.debug(f"Failed to send GO_AWAY: {e}")
                self.event_shutting_down.set()
                for stream in self.streams.values():
                    stream.closed = True
@@ -333,12 +385,12 @@ class Yamux(IMuxedConn):
            self.stream_events.clear()
        try:
            await self.secured_conn.close()
            logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
        except Exception as e:
            logger.debug(f"Error closing secured_conn for peer {self.peer_id}: {e}")
        self.event_closed.set()
        if self.on_close:
            logger.debug(f"Calling on_close in Yamux.close for peer {self.peer_id}")
            if inspect.iscoroutinefunction(self.on_close):
                if self.on_close is not None:
                    await self.on_close()
@@ -367,7 +419,7 @@ class Yamux(IMuxedConn):
                header = struct.pack(
                    YAMUX_HEADER_FORMAT, 0, TYPE_DATA, FLAG_SYN, stream_id, 0
                )
                logger.debug(f"Sending SYN header for stream {stream_id}")
                await self.secured_conn.write(header)
                return stream
        except Exception as e:
@@ -375,32 +427,32 @@ class Yamux(IMuxedConn):
            raise e
    async def accept_stream(self) -> IMuxedStream:
        logger.debug("Waiting for new stream")
        try:
            stream = await self.new_stream_receive_channel.receive()
            logger.debug(f"Received stream {stream.stream_id}")
            return stream
        except trio.EndOfChannel:
            raise MuxedStreamError("No new streams available")
    async def read_stream(self, stream_id: int, n: int = -1) -> bytes:
        logger.debug(f"Reading from stream {self.peer_id}:{stream_id}, n={n}")
        if n is None:
            n = -1
        while True:
            async with self.streams_lock:
                if stream_id not in self.streams:
                    logger.debug(f"Stream {self.peer_id}:{stream_id} unknown")
                    raise MuxedStreamEOF("Stream closed")
                if self.event_shutting_down.is_set():
                    logger.debug(
                        f"Stream {self.peer_id}:{stream_id}: connection shutting down"
                    )
                    raise MuxedStreamEOF("Connection shut down")
                stream = self.streams[stream_id]
                buffer = self.stream_buffers.get(stream_id)
                logger.debug(
                    f"Stream {self.peer_id}:{stream_id}: "
                    f"closed={stream.closed}, "
                    f"recv_closed={stream.recv_closed}, "
@@ -408,7 +460,7 @@ class Yamux(IMuxedConn):
                    f"buffer_len={len(buffer) if buffer else 0}"
) )
if buffer is None: if buffer is None:
logging.debug( logger.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"Buffer gone, assuming closed" f"Buffer gone, assuming closed"
) )
@ -421,7 +473,7 @@ class Yamux(IMuxedConn):
else: else:
data = bytes(buffer[:n]) data = bytes(buffer[:n])
del buffer[:n] del buffer[:n]
logging.debug( logger.debug(
f"Returning {len(data)} bytes" f"Returning {len(data)} bytes"
f"from stream {self.peer_id}:{stream_id}, " f"from stream {self.peer_id}:{stream_id}, "
f"buffer_len={len(buffer)}" f"buffer_len={len(buffer)}"
@ -429,7 +481,7 @@ class Yamux(IMuxedConn):
return data return data
# If reset received and buffer is empty, raise reset # If reset received and buffer is empty, raise reset
if stream.reset_received: if stream.reset_received:
logging.debug( logger.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"reset_received=True, raising MuxedStreamReset" f"reset_received=True, raising MuxedStreamReset"
) )
@ -442,7 +494,7 @@ class Yamux(IMuxedConn):
else: else:
data = bytes(buffer[:n]) data = bytes(buffer[:n])
del buffer[:n] del buffer[:n]
logging.debug( logger.debug(
f"Returning {len(data)} bytes" f"Returning {len(data)} bytes"
f"from stream {self.peer_id}:{stream_id}, " f"from stream {self.peer_id}:{stream_id}, "
f"buffer_len={len(buffer)}" f"buffer_len={len(buffer)}"
@ -450,21 +502,21 @@ class Yamux(IMuxedConn):
return data return data
# Check if stream is closed # Check if stream is closed
if stream.closed: if stream.closed:
logging.debug( logger.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"closed=True, raising MuxedStreamReset" f"closed=True, raising MuxedStreamReset"
) )
raise MuxedStreamReset("Stream is reset or closed") raise MuxedStreamReset("Stream is reset or closed")
# Check if recv_closed and buffer empty # Check if recv_closed and buffer empty
if stream.recv_closed: if stream.recv_closed:
logging.debug( logger.debug(
f"Stream {self.peer_id}:{stream_id}:" f"Stream {self.peer_id}:{stream_id}:"
f"recv_closed=True, buffer empty, raising EOF" f"recv_closed=True, buffer empty, raising EOF"
) )
raise MuxedStreamEOF("Stream is closed for receiving") raise MuxedStreamEOF("Stream is closed for receiving")
# Wait for data if stream is still open # Wait for data if stream is still open
logging.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}") logger.debug(f"Waiting for data on stream {self.peer_id}:{stream_id}")
try: try:
await self.stream_events[stream_id].wait() await self.stream_events[stream_id].wait()
self.stream_events[stream_id] = trio.Event() self.stream_events[stream_id] = trio.Event()
@ -479,7 +531,7 @@ class Yamux(IMuxedConn):
try: try:
header = await self.secured_conn.read(HEADER_SIZE) header = await self.secured_conn.read(HEADER_SIZE)
if not header or len(header) < HEADER_SIZE: if not header or len(header) < HEADER_SIZE:
logging.debug( logger.debug(
f"Connection closed orincomplete header for peer {self.peer_id}" f"Connection closed orincomplete header for peer {self.peer_id}"
) )
self.event_shutting_down.set() self.event_shutting_down.set()
@ -488,7 +540,7 @@ class Yamux(IMuxedConn):
version, typ, flags, stream_id, length = struct.unpack( version, typ, flags, stream_id, length = struct.unpack(
YAMUX_HEADER_FORMAT, header YAMUX_HEADER_FORMAT, header
) )
logging.debug( logger.debug(
f"Received header for peer {self.peer_id}:" f"Received header for peer {self.peer_id}:"
f"type={typ}, flags={flags}, stream_id={stream_id}," f"type={typ}, flags={flags}, stream_id={stream_id},"
f"length={length}" f"length={length}"
@ -509,7 +561,7 @@ class Yamux(IMuxedConn):
0, 0,
) )
await self.secured_conn.write(ack_header) await self.secured_conn.write(ack_header)
logging.debug( logger.debug(
f"Sending stream {stream_id}" f"Sending stream {stream_id}"
f"to channel for peer {self.peer_id}" f"to channel for peer {self.peer_id}"
) )
@ -527,7 +579,7 @@ class Yamux(IMuxedConn):
elif typ == TYPE_DATA and flags & FLAG_RST: elif typ == TYPE_DATA and flags & FLAG_RST:
async with self.streams_lock: async with self.streams_lock:
if stream_id in self.streams: if stream_id in self.streams:
logging.debug( logger.debug(
f"Resetting stream {stream_id} for peer {self.peer_id}" f"Resetting stream {stream_id} for peer {self.peer_id}"
) )
self.streams[stream_id].closed = True self.streams[stream_id].closed = True
@ -536,27 +588,27 @@ class Yamux(IMuxedConn):
elif typ == TYPE_DATA and flags & FLAG_ACK: elif typ == TYPE_DATA and flags & FLAG_ACK:
async with self.streams_lock: async with self.streams_lock:
if stream_id in self.streams: if stream_id in self.streams:
logging.debug( logger.debug(
f"Received ACK for stream" f"Received ACK for stream"
f"{stream_id} for peer {self.peer_id}" f"{stream_id} for peer {self.peer_id}"
) )
elif typ == TYPE_GO_AWAY: elif typ == TYPE_GO_AWAY:
error_code = length error_code = length
if error_code == GO_AWAY_NORMAL: if error_code == GO_AWAY_NORMAL:
logging.debug( logger.debug(
f"Received GO_AWAY for peer" f"Received GO_AWAY for peer"
f"{self.peer_id}: Normal termination" f"{self.peer_id}: Normal termination"
) )
elif error_code == GO_AWAY_PROTOCOL_ERROR: elif error_code == GO_AWAY_PROTOCOL_ERROR:
logging.error( logger.error(
f"Received GO_AWAY for peer{self.peer_id}: Protocol error" f"Received GO_AWAY for peer{self.peer_id}: Protocol error"
) )
elif error_code == GO_AWAY_INTERNAL_ERROR: elif error_code == GO_AWAY_INTERNAL_ERROR:
logging.error( logger.error(
f"Received GO_AWAY for peer {self.peer_id}: Internal error" f"Received GO_AWAY for peer {self.peer_id}: Internal error"
) )
else: else:
logging.error( logger.error(
f"Received GO_AWAY for peer {self.peer_id}" f"Received GO_AWAY for peer {self.peer_id}"
f"with unknown error code: {error_code}" f"with unknown error code: {error_code}"
) )
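Throughout these handlers the 12-byte Yamux header is packed and unpacked with `struct`. A minimal standalone sketch of that framing, assuming the usual `!BBHII` layout (version, type, flags, stream ID, length — what `YAMUX_HEADER_FORMAT` is taken to be here):

```python
import struct

# Yamux frame header: version(1) | type(1) | flags(2) | stream_id(4) | length(4),
# all big-endian, 12 bytes total.
HEADER_FORMAT = "!BBHII"
TYPE_DATA, FLAG_SYN = 0x0, 0x1


def pack_header(typ: int, flags: int, stream_id: int, length: int) -> bytes:
    """Pack a Yamux header (version byte is always 0)."""
    return struct.pack(HEADER_FORMAT, 0, typ, flags, stream_id, length)


def unpack_header(header: bytes) -> tuple[int, int, int, int, int]:
    """Unpack a 12-byte header into (version, type, flags, stream_id, length)."""
    return struct.unpack(HEADER_FORMAT, header)


# A SYN frame opening stream 1, as in open_stream above.
syn = pack_header(TYPE_DATA, FLAG_SYN, stream_id=1, length=0)
assert len(syn) == 12
assert unpack_header(syn) == (0, TYPE_DATA, FLAG_SYN, 1, 0)
```

Note that for GO_AWAY and ping frames the `length` field carries the error code or ping value rather than a payload size, which is why `handle_incoming` reads `error_code = length`.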
@ -565,7 +617,7 @@ class Yamux(IMuxedConn):
break break
elif typ == TYPE_PING: elif typ == TYPE_PING:
if flags & FLAG_SYN: if flags & FLAG_SYN:
logging.debug( logger.debug(
f"Received ping request with value" f"Received ping request with value"
f"{length} for peer {self.peer_id}" f"{length} for peer {self.peer_id}"
) )
@ -574,7 +626,7 @@ class Yamux(IMuxedConn):
) )
await self.secured_conn.write(ping_header) await self.secured_conn.write(ping_header)
elif flags & FLAG_ACK: elif flags & FLAG_ACK:
logging.debug( logger.debug(
f"Received ping response with value" f"Received ping response with value"
f"{length} for peer {self.peer_id}" f"{length} for peer {self.peer_id}"
) )
@ -588,7 +640,7 @@ class Yamux(IMuxedConn):
self.stream_buffers[stream_id].extend(data) self.stream_buffers[stream_id].extend(data)
self.stream_events[stream_id].set() self.stream_events[stream_id].set()
if flags & FLAG_FIN: if flags & FLAG_FIN:
logging.debug( logger.debug(
f"Received FIN for stream {self.peer_id}:" f"Received FIN for stream {self.peer_id}:"
f"{stream_id}, marking recv_closed" f"{stream_id}, marking recv_closed"
) )
@ -596,7 +648,7 @@ class Yamux(IMuxedConn):
if self.streams[stream_id].send_closed: if self.streams[stream_id].send_closed:
self.streams[stream_id].closed = True self.streams[stream_id].closed = True
except Exception as e: except Exception as e:
logging.error(f"Error reading data for stream {stream_id}: {e}") logger.error(f"Error reading data for stream {stream_id}: {e}")
# Mark stream as closed on read error # Mark stream as closed on read error
async with self.streams_lock: async with self.streams_lock:
if stream_id in self.streams: if stream_id in self.streams:
@ -610,7 +662,7 @@ class Yamux(IMuxedConn):
if stream_id in self.streams: if stream_id in self.streams:
stream = self.streams[stream_id] stream = self.streams[stream_id]
async with stream.window_lock: async with stream.window_lock:
logging.debug( logger.debug(
f"Received window update for stream" f"Received window update for stream"
f"{self.peer_id}:{stream_id}," f"{self.peer_id}:{stream_id},"
f" increment: {increment}" f" increment: {increment}"
@ -625,7 +677,7 @@ class Yamux(IMuxedConn):
and details.get("requested_count") == 2 and details.get("requested_count") == 2
and details.get("received_count") == 0 and details.get("received_count") == 0
): ):
logging.info( logger.info(
f"Stream closed cleanly for peer {self.peer_id}" f"Stream closed cleanly for peer {self.peer_id}"
+ f" (IncompleteReadError: {details})" + f" (IncompleteReadError: {details})"
) )
@ -633,15 +685,32 @@ class Yamux(IMuxedConn):
await self._cleanup_on_error() await self._cleanup_on_error()
break break
else: else:
logging.error( logger.error(
f"Error in handle_incoming for peer {self.peer_id}: " f"Error in handle_incoming for peer {self.peer_id}: "
+ f"{type(e).__name__}: {str(e)}" + f"{type(e).__name__}: {str(e)}"
) )
else: else:
logging.error( # Handle RawConnError with more nuance
f"Error in handle_incoming for peer {self.peer_id}: " if isinstance(e, RawConnError):
+ f"{type(e).__name__}: {str(e)}" error_msg = str(e)
) # If RawConnError is empty, it's likely normal cleanup
if not error_msg.strip():
logger.info(
f"RawConnError (empty) during cleanup for peer "
f"{self.peer_id} (normal connection shutdown)"
)
else:
# Log non-empty RawConnError as warning
logger.warning(
f"RawConnError during connection handling for peer "
f"{self.peer_id}: {error_msg}"
)
else:
# Log all other errors normally
logger.error(
f"Error in handle_incoming for peer {self.peer_id}: "
+ f"{type(e).__name__}: {str(e)}"
)
# Don't crash the whole connection for temporary errors # Don't crash the whole connection for temporary errors
if self.event_shutting_down.is_set() or isinstance( if self.event_shutting_down.is_set() or isinstance(
e, (RawConnError, OSError) e, (RawConnError, OSError)
@ -671,9 +740,9 @@ class Yamux(IMuxedConn):
# Close the secured connection # Close the secured connection
try: try:
await self.secured_conn.close() await self.secured_conn.close()
logging.debug(f"Successfully closed secured_conn for peer {self.peer_id}") logger.debug(f"Successfully closed secured_conn for peer {self.peer_id}")
except Exception as close_error: except Exception as close_error:
logging.error( logger.error(
f"Error closing secured_conn for peer {self.peer_id}: {close_error}" f"Error closing secured_conn for peer {self.peer_id}: {close_error}"
) )
@ -682,14 +751,14 @@ class Yamux(IMuxedConn):
# Call on_close callback if provided # Call on_close callback if provided
if self.on_close: if self.on_close:
logging.debug(f"Calling on_close for peer {self.peer_id}") logger.debug(f"Calling on_close for peer {self.peer_id}")
try: try:
if inspect.iscoroutinefunction(self.on_close): if inspect.iscoroutinefunction(self.on_close):
await self.on_close() await self.on_close()
else: else:
self.on_close() self.on_close()
except Exception as callback_error: except Exception as callback_error:
logging.error(f"Error in on_close callback: {callback_error}") logger.error(f"Error in on_close callback: {callback_error}")
# Cancel nursery tasks # Cancel nursery tasks
if self._nursery: if self._nursery:

View File

@ -7,6 +7,9 @@ from libp2p.utils.varint import (
encode_varint_prefixed, encode_varint_prefixed,
read_delim, read_delim,
read_varint_prefixed_bytes, read_varint_prefixed_bytes,
decode_varint_from_bytes,
decode_varint_with_size,
read_length_prefixed_protobuf,
) )
from libp2p.utils.version import ( from libp2p.utils.version import (
get_agent_version, get_agent_version,
@ -20,4 +23,7 @@ __all__ = [
"get_agent_version", "get_agent_version",
"read_delim", "read_delim",
"read_varint_prefixed_bytes", "read_varint_prefixed_bytes",
"decode_varint_from_bytes",
"decode_varint_with_size",
"read_length_prefixed_protobuf",
] ]

View File

@ -1,7 +1,9 @@
import itertools import itertools
import logging import logging
import math import math
from typing import BinaryIO
from libp2p.abc import INetStream
from libp2p.exceptions import ( from libp2p.exceptions import (
ParseError, ParseError,
) )
@ -25,18 +27,41 @@ HIGH_MASK = 2**7
SHIFT_64_BIT_MAX = int(math.ceil(64 / 7)) * 7 SHIFT_64_BIT_MAX = int(math.ceil(64 / 7)) * 7
def encode_uvarint(number: int) -> bytes: def encode_uvarint(value: int) -> bytes:
"""Pack `number` into varint bytes.""" """Encode an unsigned integer as a varint."""
buf = b"" if value < 0:
while True: raise ValueError("Cannot encode negative value as uvarint")
towrite = number & 0x7F
number >>= 7 result = bytearray()
if number: while value >= 0x80:
buf += bytes((towrite | 0x80,)) result.append((value & 0x7F) | 0x80)
else: value >>= 7
buf += bytes((towrite,)) result.append(value & 0x7F)
return bytes(result)
def decode_uvarint(data: bytes) -> int:
"""Decode a varint from bytes."""
if not data:
raise ParseError("Unexpected end of data")
result = 0
shift = 0
for byte in data:
result |= (byte & 0x7F) << shift
if (byte & 0x80) == 0:
break break
return buf shift += 7
if shift >= 64:
raise ValueError("Varint too long")
return result
def decode_varint_from_bytes(data: bytes) -> int:
"""Decode a varint from bytes (alias for decode_uvarint for backward comp)."""
return decode_uvarint(data)
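The encode/decode pair above implements standard base-128 (LEB128-style) unsigned varints. A self-contained round-trip sketch that inlines the same logic, so it runs without importing `libp2p`:

```python
def encode_uvarint(value: int) -> bytes:
    """Encode an unsigned integer as a base-128 varint (7 bits per byte)."""
    if value < 0:
        raise ValueError("Cannot encode negative value as uvarint")
    out = bytearray()
    while value >= 0x80:
        out.append((value & 0x7F) | 0x80)  # low 7 bits, continuation bit set
        value >>= 7
    out.append(value & 0x7F)  # final byte, continuation bit clear
    return bytes(out)


def decode_uvarint(data: bytes) -> int:
    """Decode a varint from the start of `data`."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if byte & 0x80 == 0:
            return result
        shift += 7
    raise ValueError("Truncated varint")


# Single-byte values encode as themselves; larger values use continuation bytes.
assert encode_uvarint(1) == b"\x01"
assert encode_uvarint(300) == b"\xac\x02"
assert decode_uvarint(encode_uvarint(300)) == 300
```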
async def decode_uvarint_from_stream(reader: Reader) -> int: async def decode_uvarint_from_stream(reader: Reader) -> int:
@ -44,7 +69,9 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
res = 0 res = 0
for shift in itertools.count(0, 7): for shift in itertools.count(0, 7):
if shift > SHIFT_64_BIT_MAX: if shift > SHIFT_64_BIT_MAX:
raise ParseError("TODO: better exception msg: Integer is too large...") raise ParseError(
"Varint decoding error: integer exceeds maximum size of 64 bits."
)
byte = await read_exactly(reader, 1) byte = await read_exactly(reader, 1)
value = byte[0] value = byte[0]
@ -56,9 +83,35 @@ async def decode_uvarint_from_stream(reader: Reader) -> int:
return res return res
def encode_varint_prefixed(msg_bytes: bytes) -> bytes: def decode_varint_with_size(data: bytes) -> tuple[int, int]:
varint_len = encode_uvarint(len(msg_bytes)) """
return varint_len + msg_bytes Decode a varint from bytes and return both the value and the number of bytes
consumed.
Returns:
Tuple[int, int]: (value, bytes_consumed)
"""
result = 0
shift = 0
bytes_consumed = 0
for byte in data:
result |= (byte & 0x7F) << shift
bytes_consumed += 1
if (byte & 0x80) == 0:
break
shift += 7
if shift >= 64:
raise ValueError("Varint too long")
return result, bytes_consumed
def encode_varint_prefixed(data: bytes) -> bytes:
"""Encode data with a varint length prefix."""
length_bytes = encode_uvarint(len(data))
return length_bytes + data
async def read_varint_prefixed_bytes(reader: Reader) -> bytes: async def read_varint_prefixed_bytes(reader: Reader) -> bytes:
@ -85,3 +138,95 @@ async def read_delim(reader: Reader) -> bytes:
f'`msg_bytes` is not delimited by b"\\n": `msg_bytes`={msg_bytes!r}' f'`msg_bytes` is not delimited by b"\\n": `msg_bytes`={msg_bytes!r}'
) )
return msg_bytes[:-1] return msg_bytes[:-1]
def read_varint_prefixed_bytes_sync(
stream: BinaryIO, max_length: int = 1024 * 1024
) -> bytes:
"""
Read varint-prefixed bytes from a stream.
Args:
stream: A stream-like object with a read() method
max_length: Maximum allowed data length to prevent memory exhaustion
Returns:
bytes: The data without the length prefix
Raises:
ValueError: If the length prefix is invalid or too large
EOFError: If the stream ends unexpectedly
"""
# Read the varint length prefix
length_bytes = b""
while True:
byte_data = stream.read(1)
if not byte_data:
raise EOFError("Stream ended while reading varint length prefix")
length_bytes += byte_data
if byte_data[0] & 0x80 == 0:
break
# Decode the length
length = decode_uvarint(length_bytes)
if length > max_length:
raise ValueError(f"Data length {length} exceeds maximum allowed {max_length}")
# Read the data
data = stream.read(length)
if len(data) != length:
raise EOFError(f"Expected {length} bytes, got {len(data)}")
return data
async def read_length_prefixed_protobuf(
stream: INetStream, use_varint_format: bool = True, max_length: int = 1024 * 1024
) -> bytes:
"""Read a protobuf message from a stream, handling both formats."""
if use_varint_format:
# Read length-prefixed protobuf message from the stream
# First read the varint length prefix
length_bytes = b""
while True:
b = await stream.read(1)
if not b:
raise Exception("No length prefix received")
length_bytes += b
if b[0] & 0x80 == 0:
break
msg_length = decode_varint_from_bytes(length_bytes)
if msg_length > max_length:
raise Exception(
f"Message length {msg_length} exceeds maximum allowed {max_length}"
)
# Read the protobuf message
data = await stream.read(msg_length)
if len(data) != msg_length:
raise Exception(
f"Incomplete message: expected {msg_length}, got {len(data)}"
)
return data
else:
# Read raw protobuf message from the stream
# For raw format, read all available data in one go
data = await stream.read()
# If we got no data, raise an exception
if not data:
raise Exception("No data received in raw format")
if len(data) > max_length:
raise Exception(
f"Message length {len(data)} exceeds maximum allowed {max_length}"
)
return data
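The length-prefixed readers above pair with `encode_varint_prefixed` for simple message framing. A usage sketch over an in-memory stream, inlining minimal stand-ins for the helpers (the names here are illustrative, not the module's exact API) so it runs standalone:

```python
import io


def encode_uvarint(value: int) -> bytes:
    """Minimal base-128 varint encoder (stand-in for the module helper)."""
    out = bytearray()
    while value >= 0x80:
        out.append((value & 0x7F) | 0x80)
        value >>= 7
    out.append(value & 0x7F)
    return bytes(out)


def read_varint_prefixed(stream: io.BufferedIOBase) -> bytes:
    """Read one varint length prefix, then that many payload bytes."""
    length = shift = 0
    while True:
        b = stream.read(1)
        if not b:
            raise EOFError("Stream ended while reading varint length prefix")
        length |= (b[0] & 0x7F) << shift
        if b[0] & 0x80 == 0:
            break
        shift += 7
    payload = stream.read(length)
    if len(payload) != length:
        raise EOFError(f"Expected {length} bytes, got {len(payload)}")
    return payload


msg = b"hello" * 100  # 500 bytes, so the length prefix takes two varint bytes
framed = encode_uvarint(len(msg)) + msg
assert read_varint_prefixed(io.BytesIO(framed)) == msg
```

The same framing is what `read_length_prefixed_protobuf` applies in its varint branch; the raw branch simply reads the whole stream instead.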

View File

@ -1 +0,0 @@
Added support for ``Kademlia DHT`` in py-libp2p.

View File

@ -1 +0,0 @@
Limit concurrency in `push_identify_to_peers` to prevent resource congestion under high peer counts.

View File

@ -1,7 +0,0 @@
Store public key and peer ID in peerstore during handshake
Modified the InsecureTransport class to accept an optional peerstore parameter and updated the handshake process to store the received public key and peer ID in the peerstore when available.
Added test cases to verify:
1. The peerstore remains unchanged when handshake fails due to peer ID mismatch
2. The handshake correctly adds a public key to a peer ID that already exists in the peerstore but doesn't have a public key yet

View File

@ -1 +0,0 @@
Added support for ``Multicast DNS`` in py-libp2p

View File

@ -1 +0,0 @@
Refactored gossipsub heartbeat logic to use a single helper method `_handle_topic_heartbeat` that handles both fanout and gossip heartbeats.

View File

@ -1 +0,0 @@
Added sparse connect utility function to pubsub test utilities for creating test networks with configurable connectivity.

View File

@ -1,2 +0,0 @@
Reordered the arguments to `upgrade_security` to place `is_initiator` before `peer_id`, and made `peer_id` optional.
This allows the method to reflect the fact that peer identity is not required for inbound connections.

View File

@ -1 +0,0 @@
Uses the `decapsulate` method of the `Multiaddr` class to clean up the observed address.

View File

@ -1 +0,0 @@
Optimized pubsub publishing to send multiple topics in a single message instead of separate messages per topic.

View File

@ -1 +0,0 @@
Optimized pubsub message writing by implementing a write_msg() method that uses pre-allocated buffers and single write operations, improving performance by eliminating separate varint prefix encoding and write operations in FloodSub and GossipSub.

View File

@ -1 +0,0 @@
Added peer exchange and backoff logic as part of the Gossipsub v1.1 upgrade.

View File

@ -1,4 +0,0 @@
Add timeout wrappers in:
1. multiselect.py: `negotiate` function
2. multiselect_client.py: `select_one_of` , `query_multistream_command` functions
to prevent indefinite hangs when a remote peer does not respond.

View File

@ -1 +0,0 @@
Align stream creation logic with the yamux specification.

View File

@ -1 +0,0 @@
Fixed an issue in `Pubsub` where async validators were not handled reliably under concurrency. Now uses a safe aggregator list for consistent behavior.

View File

@ -1 +0,0 @@
Added comprehensive tests for pubsub connection utility functions to verify degree limits are enforced, excess peers are handled correctly, and edge cases (degree=0, negative values, empty lists) are managed gracefully.

View File

@ -0,0 +1 @@
Added `Bootstrap` peer discovery module that allows nodes to connect to predefined bootstrap peers for network discovery.

View File

@ -0,0 +1,3 @@
Improved type safety in `get_mux()` and `get_protocols()` by returning properly typed values instead
of `Any`. Also updated `identify.py` and `discovery.py` to handle `None` values safely and
compare protocols correctly.

View File

@ -0,0 +1 @@
Add lock for read/write to avoid interleaving receiving messages in mplex_stream.py

View File

@ -0,0 +1 @@
Add comprehensive tests for relay_discovery method in circuit_relay_v2

View File

@ -0,0 +1 @@
Add logic to clear_peerdata method in peerstore

View File

@ -0,0 +1 @@
[mplex] Add timeout and error handling during stream close

View File

@ -0,0 +1,2 @@
Added the `Certified Addr-Book` interface supported by `Envelope` and `PeerRecord` class.
Integrated the signed-peer-record transfer in the identify/push protocols.

View File

@ -0,0 +1,2 @@
Added throttling for async topic validators in validate_msg, enforcing a
concurrency limit to prevent resource exhaustion under heavy load.

View File

@ -0,0 +1 @@
Fixed malformed PeerId in test_peerinfo.

View File

@ -0,0 +1 @@
Fixed a typecheck error using cast in peerinfo.py.

View File

@ -0,0 +1 @@
Improved the error message in the `decode_uvarint_from_stream` function in `libp2p/utils/varint.py`.

View File

@ -0,0 +1 @@
The identify protocol now uses length-prefixed messages by default. Use the `use_varint_format` parameter to get the old raw-message format.

View File

@ -0,0 +1 @@
add length-prefixed support to identify protocol

View File

@ -0,0 +1 @@
Fix raw format reading in identify/push protocol and add comprehensive test coverage for both varint and raw formats

View File

@ -0,0 +1 @@
Pin py-multiaddr dependency to specific git commit db8124e2321f316d3b7d2733c7df11d6ad9c03e6

View File

@ -0,0 +1 @@
Replaced the `libp2p.peer.ID` cache attributes with the `functools.cached_property` decorator.

View File

@ -0,0 +1 @@
Clarified the requirement for a trailing newline in newsfragments to pass lint checks.

View File

@ -0,0 +1 @@
Fixed incorrect handling of raw protobuf format in identify protocol. The identify example now properly handles both raw and length-prefixed (varint) message formats, provides better error messages, and displays connection status with peer IDs. Replaced mock-based tests with comprehensive real network integration tests for both formats.

View File

@ -0,0 +1 @@
Fixed incorrect handling of raw protobuf format in identify push protocol. The identify push example now properly handles both raw and length-prefixed (varint) message formats, provides better error messages, and displays connection status with peer IDs. Replaced mock-based tests with comprehensive real network integration tests for both formats.

View File

@ -0,0 +1 @@
Yamux RawConnError Logging Refactor - Improved error handling and debug logging

View File

@ -18,12 +18,19 @@ Each file should be named like `<ISSUE>.<TYPE>.rst`, where
- `performance` - `performance`
- `removal` - `removal`
So for example: `123.feature.rst`, `456.bugfix.rst` So for example: `1024.feature.rst`
**Important**: Ensure the file ends with a newline character (`\n`) to pass GitHub tox linting checks.
```
Added support for Ed25519 key generation in libp2p peer identity creation.
```
If the PR fixes an issue, use that number here. If there is no issue, If the PR fixes an issue, use that number here. If there is no issue,
then open up the PR first and use the PR number for the newsfragment. then open up the PR first and use the PR number for the newsfragment.
Note that the `towncrier` tool will automatically **Note** that the `towncrier` tool will automatically
reflow your text, so don't try to do any fancy formatting. Run reflow your text, so don't try to do any fancy formatting. Run
`towncrier build --draft` to get a preview of what the release notes entry `towncrier build --draft` to get a preview of what the release notes entry
will look like in the final release notes. will look like in the final release notes.

View File

@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project] [project]
name = "libp2p" name = "libp2p"
version = "0.2.8" version = "0.2.9"
description = "libp2p: The Python implementation of the libp2p networking stack" description = "libp2p: The Python implementation of the libp2p networking stack"
readme = "README.md" readme = "README.md"
requires-python = ">=3.10, <4.0" requires-python = ">=3.10, <4.0"
@ -19,10 +19,11 @@ dependencies = [
"exceptiongroup>=1.2.0; python_version < '3.11'", "exceptiongroup>=1.2.0; python_version < '3.11'",
"grpcio>=1.41.0", "grpcio>=1.41.0",
"lru-dict>=1.1.6", "lru-dict>=1.1.6",
"multiaddr>=0.0.9", # "multiaddr>=0.0.9",
"multiaddr @ git+https://github.com/multiformats/py-multiaddr.git@db8124e2321f316d3b7d2733c7df11d6ad9c03e6",
"mypy-protobuf>=3.0.0", "mypy-protobuf>=3.0.0",
"noiseprotocol>=0.3.0", "noiseprotocol>=0.3.0",
"protobuf>=4.21.0,<5.0.0", "protobuf>=4.25.0,<5.0.0",
"pycryptodome>=3.9.2", "pycryptodome>=3.9.2",
"pymultihash>=0.8.2", "pymultihash>=0.8.2",
"pynacl>=1.3.0", "pynacl>=1.3.0",
@ -188,7 +189,7 @@ name = "Removals"
showcontent = true showcontent = true
[tool.bumpversion] [tool.bumpversion]
current_version = "0.2.8" current_version = "0.2.9"
parse = """ parse = """
(?P<major>\\d+) (?P<major>\\d+)
\\.(?P<minor>\\d+) \\.(?P<minor>\\d+)

View File

@ -11,10 +11,10 @@ from libp2p.identity.identify.identify import (
PROTOCOL_VERSION, PROTOCOL_VERSION,
_mk_identify_protobuf, _mk_identify_protobuf,
_multiaddr_to_bytes, _multiaddr_to_bytes,
parse_identify_response,
) )
from libp2p.identity.identify.pb.identify_pb2 import ( from libp2p.peer.envelope import Envelope, consume_envelope, unmarshal_envelope
Identify, from libp2p.peer.peer_record import unmarshal_record
)
from tests.utils.factories import ( from tests.utils.factories import (
host_pair_factory, host_pair_factory,
) )
@ -29,14 +29,31 @@ async def test_identify_protocol(security_protocol):
host_b, host_b,
): ):
# Here, host_b is the requester and host_a is the responder. # Here, host_b is the requester and host_a is the responder.
# observed_addr represents host_b's address as observed by host_a # observed_addr represents host_b's address as observed by host_a
# (i.e., the address from which host_b's request was received). # (i.e., the address from which host_b's request was received).
stream = await host_b.new_stream(host_a.get_id(), (ID,)) stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read()
# Read the response (could be either format)
# Read a larger chunk to get all the data before stream closes
response = await stream.read(8192) # Read enough data in one go
await stream.close() await stream.close()
identify_response = Identify() # Parse the response (handles both old and new formats)
identify_response.ParseFromString(response) identify_response = parse_identify_response(response)
# Validate the received envelope and then store it in the certified-addr-book
envelope, record = consume_envelope(
identify_response.signedPeerRecord, "libp2p-peer-record"
)
assert host_b.peerstore.consume_peer_record(envelope, ttl=7200)
# Check if the peer_id in the record is the same as host_a's
assert record.peer_id == host_a.get_id()
# Check if the peer-record is correctly consumed
assert host_a.get_addrs() == host_b.peerstore.addrs(host_a.get_id())
assert isinstance(host_b.peerstore.get_peer_record(host_a.get_id()), Envelope)
logger.debug("host_a: %s", host_a.get_addrs()) logger.debug("host_a: %s", host_a.get_addrs())
logger.debug("host_b: %s", host_b.get_addrs()) logger.debug("host_b: %s", host_b.get_addrs())
@ -62,11 +79,21 @@ async def test_identify_protocol(security_protocol):
logger.debug("observed_addr: %s", Multiaddr(identify_response.observed_addr)) logger.debug("observed_addr: %s", Multiaddr(identify_response.observed_addr))
logger.debug("host_b.get_addrs()[0]: %s", host_b.get_addrs()[0]) logger.debug("host_b.get_addrs()[0]: %s", host_b.get_addrs()[0])
logger.debug("cleaned_addr= %s", cleaned_addr)
assert identify_response.observed_addr == _multiaddr_to_bytes(cleaned_addr) # The observed address should match the cleaned address
assert Multiaddr(identify_response.observed_addr) == cleaned_addr
# Check protocols # Check protocols
assert set(identify_response.protocols) == set(host_a.get_mux().get_protocols()) assert set(identify_response.protocols) == set(host_a.get_mux().get_protocols())
# sanity check # sanity check if the peer_id of the identify msg are same
assert identify_response == _mk_identify_protobuf(host_a, cleaned_addr) assert (
unmarshal_record(
unmarshal_envelope(identify_response.signedPeerRecord).raw_payload
).peer_id
== unmarshal_record(
unmarshal_envelope(
_mk_identify_protobuf(host_a, cleaned_addr).signedPeerRecord
).raw_payload
).peer_id
)

View File

@ -0,0 +1,241 @@
import logging
import pytest
from libp2p.custom_types import TProtocol
from libp2p.identity.identify.identify import (
AGENT_VERSION,
ID,
PROTOCOL_VERSION,
_multiaddr_to_bytes,
identify_handler_for,
parse_identify_response,
)
from tests.utils.factories import host_pair_factory
logger = logging.getLogger("libp2p.identity.identify-integration-test")
@pytest.mark.trio
async def test_identify_protocol_varint_format_integration(security_protocol):
"""Test identify protocol with varint format in real network scenario."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=True)
)
# Make identify request
stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read(8192)
await stream.close()
# Parse response
result = parse_identify_response(response)
# Verify response content
assert result.agent_version == AGENT_VERSION
assert result.protocol_version == PROTOCOL_VERSION
assert result.public_key == host_a.get_public_key().serialize()
assert result.listen_addrs == [
_multiaddr_to_bytes(addr) for addr in host_a.get_addrs()
]
@pytest.mark.trio
async def test_identify_protocol_raw_format_integration(security_protocol):
"""Test identify protocol with raw format in real network scenario."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=False)
)
# Make identify request
stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read(8192)
await stream.close()
# Parse response
result = parse_identify_response(response)
# Verify response content
assert result.agent_version == AGENT_VERSION
assert result.protocol_version == PROTOCOL_VERSION
assert result.public_key == host_a.get_public_key().serialize()
assert result.listen_addrs == [
_multiaddr_to_bytes(addr) for addr in host_a.get_addrs()
]
@pytest.mark.trio
async def test_identify_default_format_behavior(security_protocol):
"""Test identify protocol uses correct default format."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Use default identify handler (should use varint format)
host_a.set_stream_handler(ID, identify_handler_for(host_a))
# Make identify request
stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read(8192)
await stream.close()
# Parse response
result = parse_identify_response(response)
# Verify response content
assert result.agent_version == AGENT_VERSION
assert result.protocol_version == PROTOCOL_VERSION
assert result.public_key == host_a.get_public_key().serialize()
@pytest.mark.trio
async def test_identify_cross_format_compatibility_varint_to_raw(security_protocol):
"""Test varint dialer with raw listener compatibility."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Host A uses raw format
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=False)
)
# Host B makes request (will automatically detect format)
stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read(8192)
await stream.close()
# Parse response (should work with automatic format detection)
result = parse_identify_response(response)
# Verify response content
assert result.agent_version == AGENT_VERSION
assert result.protocol_version == PROTOCOL_VERSION
assert result.public_key == host_a.get_public_key().serialize()
@pytest.mark.trio
async def test_identify_cross_format_compatibility_raw_to_varint(security_protocol):
"""Test raw dialer with varint listener compatibility."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Host A uses varint format
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=True)
)
# Host B makes request (will automatically detect format)
stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read(8192)
await stream.close()
# Parse response (should work with automatic format detection)
result = parse_identify_response(response)
# Verify response content
assert result.agent_version == AGENT_VERSION
assert result.protocol_version == PROTOCOL_VERSION
assert result.public_key == host_a.get_public_key().serialize()
@pytest.mark.trio
async def test_identify_format_detection_robustness(security_protocol):
"""Test identify protocol format detection is robust with various message sizes."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Test both formats with different message sizes
for use_varint in [True, False]:
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=use_varint)
)
# Make identify request
stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read(8192)
await stream.close()
# Parse response
result = parse_identify_response(response)
# Verify response content
assert result.agent_version == AGENT_VERSION
assert result.protocol_version == PROTOCOL_VERSION
assert result.public_key == host_a.get_public_key().serialize()
@pytest.mark.trio
async def test_identify_large_message_handling(security_protocol):
"""Test identify protocol handles large messages with many protocols."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add many protocols to make the message larger
async def dummy_handler(stream):
pass
for i in range(10):
host_a.set_stream_handler(TProtocol(f"/test/protocol/{i}"), dummy_handler)
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=True)
)
# Make identify request
stream = await host_b.new_stream(host_a.get_id(), (ID,))
response = await stream.read(8192)
await stream.close()
# Parse response
result = parse_identify_response(response)
# Verify response content
assert result.agent_version == AGENT_VERSION
assert result.protocol_version == PROTOCOL_VERSION
assert result.public_key == host_a.get_public_key().serialize()
@pytest.mark.trio
async def test_identify_message_equivalence_real_network(security_protocol):
"""Test that both formats produce equivalent messages in real network."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Test varint format
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=True)
)
stream_varint = await host_b.new_stream(host_a.get_id(), (ID,))
response_varint = await stream_varint.read(8192)
await stream_varint.close()
# Test raw format
host_a.set_stream_handler(
ID, identify_handler_for(host_a, use_varint_format=False)
)
stream_raw = await host_b.new_stream(host_a.get_id(), (ID,))
response_raw = await stream_raw.read(8192)
await stream_raw.close()
# Parse both responses
result_varint = parse_identify_response(response_varint)
result_raw = parse_identify_response(response_raw)
# Both should produce identical parsed results
assert result_varint.agent_version == result_raw.agent_version
assert result_varint.protocol_version == result_raw.protocol_version
assert result_varint.public_key == result_raw.public_key
assert result_varint.listen_addrs == result_raw.listen_addrs
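
The tests above distinguish the varint format (protobuf bytes prefixed with their LEB128-encoded length) from the raw format (bare protobuf bytes). A minimal framing sketch, assuming only that the prefix is an unsigned LEB128 varint; helper names are illustrative:

```python
# Sketch of varint length-prefix framing: write the message length as an
# unsigned LEB128 varint, then the message bytes; reading reverses this.
import io

def write_varint_frame(buf: io.BytesIO, msg: bytes) -> None:
    n = len(msg)
    while True:
        b = n & 0x7F
        n >>= 7
        buf.write(bytes([b | 0x80]) if n else bytes([b]))
        if not n:
            break
    buf.write(msg)

def read_varint_frame(buf: io.BytesIO) -> bytes:
    shift = length = 0
    while True:
        b = buf.read(1)[0]
        length |= (b & 0x7F) << shift
        if not (b & 0x80):
            break
        shift += 7
    return buf.read(length)
```

A reader can heuristically detect the format because a raw protobuf identify message starts with a field tag byte, while a varint frame starts with the message length; that is presumably what the automatic detection in `parse_identify_response` relies on.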


@ -35,6 +35,8 @@ from tests.utils.factories import (
)
from tests.utils.utils import (
create_mock_connections,
run_host_forever,
wait_until_listening,
)
logger = logging.getLogger("libp2p.identity.identify-push-test")
@ -457,7 +459,11 @@ async def test_push_identify_to_peers_respects_concurrency_limit():
lock = trio.Lock()
async def mock_push_identify_to_peer(
host,
peer_id,
observed_multiaddr=None,
limit=trio.Semaphore(CONCURRENCY_LIMIT),
use_varint_format=True,
) -> bool:
"""
Mock function to test concurrency by simulating an identify message.
@ -503,3 +509,192 @@ async def test_push_identify_to_peers_respects_concurrency_limit():
assert state["max_observed"] <= CONCURRENCY_LIMIT, (
f"Max concurrency observed: {state['max_observed']}"
)
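
The concurrency-limit test above gates the mock push behind a shared semaphore. A minimal self-contained analogue of that pattern (the tests use trio; this sketch uses stdlib asyncio so it stands alone, and the names are illustrative):

```python
# Sketch: a semaphore caps how many pushes run concurrently, and a shared
# counter records the maximum concurrency actually observed.
import asyncio

CONCURRENCY_LIMIT = 5

async def run_pushes(num_peers: int) -> int:
    sem = asyncio.Semaphore(CONCURRENCY_LIMIT)
    state = {"current": 0, "max_observed": 0}

    async def mock_push(peer: int) -> None:
        async with sem:
            state["current"] += 1
            state["max_observed"] = max(state["max_observed"], state["current"])
            await asyncio.sleep(0)  # simulate yielding for network I/O
            state["current"] -= 1

    await asyncio.gather(*(mock_push(p) for p in range(num_peers)))
    return state["max_observed"]
```

However many peers are pushed to, `max_observed` can never exceed the semaphore's initial value, which is exactly the property the test asserts.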
@pytest.mark.trio
async def test_all_peers_receive_identify_push_with_semaphore(security_protocol):
dummy_peers = []
async with host_pair_factory(security_protocol=security_protocol) as (host_a, _):
# Create dummy peers
for _ in range(50):
key_pair = create_new_key_pair()
dummy_host = new_host(key_pair=key_pair)
dummy_host.set_stream_handler(
ID_PUSH, identify_push_handler_for(dummy_host)
)
listen_addr = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
dummy_peers.append((dummy_host, listen_addr))
async with trio.open_nursery() as nursery:
# Start all dummy hosts
for host, listen_addr in dummy_peers:
nursery.start_soon(run_host_forever, host, listen_addr)
# Wait for all hosts to finish setting up listeners
for host, _ in dummy_peers:
await wait_until_listening(host)
# Now connect host_a → dummy peers
for host, _ in dummy_peers:
await host_a.connect(info_from_p2p_addr(host.get_addrs()[0]))
await push_identify_to_peers(
host_a,
)
await trio.sleep(0.5)
peer_id_a = host_a.get_id()
for host, _ in dummy_peers:
dummy_peerstore = host.get_peerstore()
assert peer_id_a in dummy_peerstore.peer_ids()
nursery.cancel_scope.cancel()
@pytest.mark.trio
async def test_all_peers_receive_identify_push_with_semaphore_under_high_peer_load(
security_protocol,
):
dummy_peers = []
async with host_pair_factory(security_protocol=security_protocol) as (host_a, _):
# Create dummy peers
# Breaks with more than 500 peers:
# Trio has an async task limit of 1000
for _ in range(499):
key_pair = create_new_key_pair()
dummy_host = new_host(key_pair=key_pair)
dummy_host.set_stream_handler(
ID_PUSH, identify_push_handler_for(dummy_host)
)
listen_addr = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
dummy_peers.append((dummy_host, listen_addr))
async with trio.open_nursery() as nursery:
# Start all dummy hosts
for host, listen_addr in dummy_peers:
nursery.start_soon(run_host_forever, host, listen_addr)
# Wait for all hosts to finish setting up listeners
for host, _ in dummy_peers:
await wait_until_listening(host)
# Now connect host_a → dummy peers
for host, _ in dummy_peers:
await host_a.connect(info_from_p2p_addr(host.get_addrs()[0]))
await push_identify_to_peers(
host_a,
)
await trio.sleep(0.5)
peer_id_a = host_a.get_id()
for host, _ in dummy_peers:
dummy_peerstore = host.get_peerstore()
assert peer_id_a in dummy_peerstore.peer_ids()
nursery.cancel_scope.cancel()
@pytest.mark.trio
async def test_identify_push_default_varint_format(security_protocol):
"""
Test that the identify/push protocol uses varint format by default.
This test verifies that:
1. The default behavior uses length-prefixed messages (varint format)
2. Messages are correctly encoded with varint length prefix
3. Messages are correctly decoded with varint length prefix
4. The peerstore is updated correctly with the received information
"""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Set up the identify/push handlers with default settings
# (use_varint_format=True)
host_b.set_stream_handler(ID_PUSH, identify_push_handler_for(host_b))
# Push identify information from host_a to host_b using default settings
success = await push_identify_to_peer(host_a, host_b.get_id())
assert success, "Identify push should succeed with default varint format"
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Get the peerstore from host_b
peerstore = host_b.get_peerstore()
peer_id = host_a.get_id()
# Verify that the peerstore was updated correctly
assert peer_id in peerstore.peer_ids()
# Check that addresses have been updated
host_a_addrs = set(host_a.get_addrs())
peerstore_addrs = set(peerstore.addrs(peer_id))
assert all(addr in peerstore_addrs for addr in host_a_addrs)
# Check that protocols have been updated
host_a_protocols = set(host_a.get_mux().get_protocols())
peerstore_protocols = set(peerstore.get_protocols(peer_id))
assert all(protocol in peerstore_protocols for protocol in host_a_protocols)
# Check that the public key has been updated
host_a_public_key = host_a.get_public_key().serialize()
peerstore_public_key = peerstore.pubkey(peer_id).serialize()
assert host_a_public_key == peerstore_public_key
@pytest.mark.trio
async def test_identify_push_legacy_raw_format(security_protocol):
"""
Test that the identify/push protocol can use legacy raw format when specified.
This test verifies that:
1. When use_varint_format=False, messages are sent without length prefix
2. Raw protobuf messages are correctly encoded and decoded
3. The peerstore is updated correctly with the received information
4. The legacy format is backward compatible
"""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Set up the identify/push handlers with legacy format (use_varint_format=False)
host_b.set_stream_handler(
ID_PUSH, identify_push_handler_for(host_b, use_varint_format=False)
)
# Push identify information from host_a to host_b using legacy format
success = await push_identify_to_peer(
host_a, host_b.get_id(), use_varint_format=False
)
assert success, "Identify push should succeed with legacy raw format"
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Get the peerstore from host_b
peerstore = host_b.get_peerstore()
peer_id = host_a.get_id()
# Verify that the peerstore was updated correctly
assert peer_id in peerstore.peer_ids()
# Check that addresses have been updated
host_a_addrs = set(host_a.get_addrs())
peerstore_addrs = set(peerstore.addrs(peer_id))
assert all(addr in peerstore_addrs for addr in host_a_addrs)
# Check that protocols have been updated
host_a_protocols = set(host_a.get_mux().get_protocols())
peerstore_protocols = set(peerstore.get_protocols(peer_id))
assert all(protocol in peerstore_protocols for protocol in host_a_protocols)
# Check that the public key has been updated
host_a_public_key = host_a.get_public_key().serialize()
peerstore_public_key = peerstore.pubkey(peer_id).serialize()
assert host_a_public_key == peerstore_public_key


@ -0,0 +1,552 @@
import logging
import pytest
import trio
from libp2p.custom_types import TProtocol
from libp2p.identity.identify_push.identify_push import (
ID_PUSH,
identify_push_handler_for,
push_identify_to_peer,
push_identify_to_peers,
)
from tests.utils.factories import host_pair_factory
logger = logging.getLogger("libp2p.identity.identify-push-integration-test")
@pytest.mark.trio
async def test_identify_push_protocol_varint_format_integration(security_protocol):
"""Test identify/push protocol with varint format in real network scenario."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to host_b so it has something to push
async def dummy_handler(stream):
pass
host_b.set_stream_handler(TProtocol("/test/protocol/1"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/2"), dummy_handler)
# Set up identify/push handler on host_a
host_a.set_stream_handler(
ID_PUSH, identify_push_handler_for(host_a, use_varint_format=True)
)
# Push identify information from host_b to host_a
await push_identify_to_peer(host_b, host_a.get_id(), use_varint_format=True)
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
# Check that addresses were added
addrs = peerstore_a.addrs(peer_id_b)
assert len(addrs) > 0
# Check that protocols were added
protocols = peerstore_a.get_protocols(peer_id_b)
assert protocols is not None
# The protocols should include the dummy protocols we added
assert len(protocols) >= 2 # Should include the dummy protocols
@pytest.mark.trio
async def test_identify_push_protocol_raw_format_integration(security_protocol):
"""Test identify/push protocol with raw format in real network scenario."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to both hosts
async def dummy_handler(stream):
pass
host_a.set_stream_handler(TProtocol("/test/protocol/a"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/b"), dummy_handler)
# Set up identify/push handler on host_a
host_a.set_stream_handler(
ID_PUSH, identify_push_handler_for(host_a, use_varint_format=False)
)
# Push identify information from host_b to host_a
await push_identify_to_peer(host_b, host_a.get_id(), use_varint_format=False)
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
# Check that addresses were added
addrs = peerstore_a.addrs(peer_id_b)
assert len(addrs) > 0
# Check that protocols were added
protocols = peerstore_a.get_protocols(peer_id_b)
assert protocols is not None
assert len(protocols) >= 1 # Should include the dummy protocol
@pytest.mark.trio
async def test_identify_push_default_format_behavior(security_protocol):
"""Test identify/push protocol uses correct default format."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to both hosts
async def dummy_handler(stream):
pass
host_a.set_stream_handler(TProtocol("/test/protocol/a"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/b"), dummy_handler)
# Use default identify/push handler (should use varint format)
host_a.set_stream_handler(ID_PUSH, identify_push_handler_for(host_a))
# Push identify information from host_b to host_a
await push_identify_to_peer(host_b, host_a.get_id())
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
# Check that protocols were added
protocols = peerstore_a.get_protocols(peer_id_b)
assert protocols is not None
assert len(protocols) >= 1 # Should include the dummy protocol
@pytest.mark.trio
async def test_identify_push_cross_format_compatibility_varint_to_raw(
security_protocol,
):
"""Test varint pusher with raw listener compatibility."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Use an event to signal when handler is ready
handler_ready = trio.Event()
# Create a wrapper handler that signals when ready
original_handler = identify_push_handler_for(host_a, use_varint_format=False)
async def wrapped_handler(stream):
handler_ready.set() # Signal that handler is ready
await original_handler(stream)
# Host A uses raw format with wrapped handler
host_a.set_stream_handler(ID_PUSH, wrapped_handler)
# Host B pushes with varint format (should fail gracefully)
success = await push_identify_to_peer(
host_b, host_a.get_id(), use_varint_format=True
)
# This should fail due to format mismatch
# Note: The format detection might be more robust than expected
# so we just check that the operation completes
assert isinstance(success, bool)
@pytest.mark.trio
async def test_identify_push_cross_format_compatibility_raw_to_varint(
security_protocol,
):
"""Test raw pusher with varint listener compatibility."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Use an event to signal when handler is ready
handler_ready = trio.Event()
# Create a wrapper handler that signals when ready
original_handler = identify_push_handler_for(host_a, use_varint_format=True)
async def wrapped_handler(stream):
handler_ready.set() # Signal that handler is ready
await original_handler(stream)
# Host A uses varint format with wrapped handler
host_a.set_stream_handler(ID_PUSH, wrapped_handler)
# Host B pushes with raw format (should fail gracefully)
success = await push_identify_to_peer(
host_b, host_a.get_id(), use_varint_format=False
)
# This should fail due to format mismatch
# Note: The format detection might be more robust than expected
# so we just check that the operation completes
assert isinstance(success, bool)
@pytest.mark.trio
async def test_identify_push_multiple_peers_integration(security_protocol):
"""Test identify/push protocol with multiple peers."""
# Create two hosts using the factory
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Create a third host following the pattern from test_identify_push.py
import multiaddr
from libp2p import new_host
from libp2p.crypto.secp256k1 import create_new_key_pair
from libp2p.peer.peerinfo import info_from_p2p_addr
# Create a new key pair for host_c
key_pair_c = create_new_key_pair()
host_c = new_host(key_pair=key_pair_c)
# Set up identify/push handlers on all hosts
host_a.set_stream_handler(ID_PUSH, identify_push_handler_for(host_a))
host_b.set_stream_handler(ID_PUSH, identify_push_handler_for(host_b))
host_c.set_stream_handler(ID_PUSH, identify_push_handler_for(host_c))
# Start listening on a random port using the run context manager
listen_addr = multiaddr.Multiaddr("/ip4/127.0.0.1/tcp/0")
async with host_c.run([listen_addr]):
# Connect host_c to host_a and host_b using the correct pattern
await host_c.connect(info_from_p2p_addr(host_a.get_addrs()[0]))
await host_c.connect(info_from_p2p_addr(host_b.get_addrs()[0]))
# Push identify information from host_a to all connected peers
await push_identify_to_peers(host_a)
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Check that host_b's peerstore has been updated with host_a's information
peerstore_b = host_b.get_peerstore()
peer_id_a = host_a.get_id()
# Check that the peer is in the peerstore
assert peer_id_a in peerstore_b.peer_ids()
# Check that host_c's peerstore has been updated with host_a's information
peerstore_c = host_c.get_peerstore()
# Check that the peer is in the peerstore
assert peer_id_a in peerstore_c.peer_ids()
# Test for push_identify to only connected peers and not all peers
# Disconnect a from c.
await host_c.disconnect(host_a.get_id())
await push_identify_to_peers(host_c)
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Check that host_a's peerstore has not been updated with host_c's info
assert host_c.get_id() not in host_a.get_peerstore().peer_ids()
# Check that host_b's peerstore has been updated with host_c's info
assert host_c.get_id() in host_b.get_peerstore().peer_ids()
@pytest.mark.trio
async def test_identify_push_large_message_handling(security_protocol):
"""Test identify/push protocol handles large messages with many protocols."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add many protocols to make the message larger
async def dummy_handler(stream):
pass
for i in range(10):
host_b.set_stream_handler(TProtocol(f"/test/protocol/{i}"), dummy_handler)
# Also add some protocols to host_a to ensure it has protocols to push
for i in range(5):
host_a.set_stream_handler(TProtocol(f"/test/protocol/a{i}"), dummy_handler)
# Set up identify/push handler on host_a
host_a.set_stream_handler(
ID_PUSH, identify_push_handler_for(host_a, use_varint_format=True)
)
# Push identify information from host_b to host_a
success = await push_identify_to_peer(
host_b, host_a.get_id(), use_varint_format=True
)
assert success
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated with all protocols
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
protocols = peerstore_a.get_protocols(peer_id_b)
assert protocols is not None
assert len(protocols) >= 10 # Should include the dummy protocols
@pytest.mark.trio
async def test_identify_push_peerstore_update_completeness(security_protocol):
"""Test that identify/push updates all relevant peerstore information."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to both hosts
async def dummy_handler(stream):
pass
host_a.set_stream_handler(TProtocol("/test/protocol/a"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/b"), dummy_handler)
# Set up identify/push handler on host_a
host_a.set_stream_handler(ID_PUSH, identify_push_handler_for(host_a))
# Push identify information from host_b to host_a
await push_identify_to_peer(host_b, host_a.get_id())
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
# Check that protocols were added
protocols = peerstore_a.get_protocols(peer_id_b)
assert protocols is not None
assert len(protocols) > 0
# Check that addresses were added
addrs = peerstore_a.addrs(peer_id_b)
assert len(addrs) > 0
# Check that public key was added
pubkey = peerstore_a.pubkey(peer_id_b)
assert pubkey is not None
assert pubkey.serialize() == host_b.get_public_key().serialize()
@pytest.mark.trio
async def test_identify_push_concurrent_requests(security_protocol):
"""Test identify/push protocol handles concurrent requests properly."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to both hosts
async def dummy_handler(stream):
pass
host_a.set_stream_handler(TProtocol("/test/protocol/a"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/b"), dummy_handler)
# Set up identify/push handler on host_a
host_a.set_stream_handler(ID_PUSH, identify_push_handler_for(host_a))
# Make multiple concurrent push requests
results = []
async def push_identify():
result = await push_identify_to_peer(host_b, host_a.get_id())
results.append(result)
# Run multiple concurrent pushes using nursery
async with trio.open_nursery() as nursery:
for _ in range(3):
nursery.start_soon(push_identify)
# All should succeed
assert len(results) == 3
assert all(results)
# Wait a bit for the pushes to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
protocols = peerstore_a.get_protocols(peer_id_b)
assert protocols is not None
assert len(protocols) > 0
@pytest.mark.trio
async def test_identify_push_stream_handling(security_protocol):
"""Test identify/push protocol properly handles stream lifecycle."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to both hosts
async def dummy_handler(stream):
pass
host_a.set_stream_handler(TProtocol("/test/protocol/a"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/b"), dummy_handler)
# Set up identify/push handler on host_a
host_a.set_stream_handler(ID_PUSH, identify_push_handler_for(host_a))
# Push identify information from host_b to host_a
success = await push_identify_to_peer(host_b, host_a.get_id())
assert success
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
protocols = peerstore_a.get_protocols(peer_id_b)
assert protocols is not None
assert len(protocols) > 0
@pytest.mark.trio
async def test_identify_push_error_handling(security_protocol):
"""Test identify/push protocol handles errors gracefully."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Create a handler that raises an exception but catches it to prevent test
# failure
async def error_handler(stream):
try:
await stream.close()
raise Exception("Test error")
except Exception:
# Catch the exception to prevent it from propagating up
pass
host_a.set_stream_handler(ID_PUSH, error_handler)
# Push should complete (message sent) but handler should fail gracefully
success = await push_identify_to_peer(host_b, host_a.get_id())
assert success # The push operation itself succeeds (message sent)
# Wait a bit for the handler to process
await trio.sleep(0.1)
# Verify that the error was handled gracefully (no test failure)
# The handler caught the exception and didn't propagate it
@pytest.mark.trio
async def test_identify_push_message_equivalence_real_network(security_protocol):
"""Test that both formats produce equivalent peerstore updates in real network."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to both hosts
async def dummy_handler(stream):
pass
host_a.set_stream_handler(TProtocol("/test/protocol/a"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/b"), dummy_handler)
# Test varint format
host_a.set_stream_handler(
ID_PUSH, identify_push_handler_for(host_a, use_varint_format=True)
)
await push_identify_to_peer(host_b, host_a.get_id(), use_varint_format=True)
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Get peerstore state after varint push
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
protocols_varint = peerstore_a.get_protocols(peer_id_b)
addrs_varint = peerstore_a.addrs(peer_id_b)
# Clear peerstore for next test
peerstore_a.clear_addrs(peer_id_b)
peerstore_a.clear_protocol_data(peer_id_b)
# Test raw format
host_a.set_stream_handler(
ID_PUSH, identify_push_handler_for(host_a, use_varint_format=False)
)
await push_identify_to_peer(host_b, host_a.get_id(), use_varint_format=False)
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Get peerstore state after raw push
protocols_raw = peerstore_a.get_protocols(peer_id_b)
addrs_raw = peerstore_a.addrs(peer_id_b)
# Both should produce equivalent peerstore updates
# Check that both formats successfully updated protocols
assert protocols_varint is not None
assert protocols_raw is not None
assert len(protocols_varint) > 0
assert len(protocols_raw) > 0
# Check that both formats successfully updated addresses
assert addrs_varint is not None
assert addrs_raw is not None
assert len(addrs_varint) > 0
assert len(addrs_raw) > 0
# Both should contain the same essential information
# (exact address lists might differ due to format-specific handling)
assert set(protocols_varint) == set(protocols_raw)
@pytest.mark.trio
async def test_identify_push_with_observed_address(security_protocol):
"""Test identify/push protocol includes observed address information."""
async with host_pair_factory(security_protocol=security_protocol) as (
host_a,
host_b,
):
# Add some protocols to both hosts
async def dummy_handler(stream):
pass
host_a.set_stream_handler(TProtocol("/test/protocol/a"), dummy_handler)
host_b.set_stream_handler(TProtocol("/test/protocol/b"), dummy_handler)
# Set up identify/push handler on host_a
host_a.set_stream_handler(ID_PUSH, identify_push_handler_for(host_a))
# Get host_b's address as observed by host_a
from multiaddr import Multiaddr
host_b_addr = host_b.get_addrs()[0]
observed_multiaddr = Multiaddr(str(host_b_addr))
# Push identify information with observed address
await push_identify_to_peer(
host_b, host_a.get_id(), observed_multiaddr=observed_multiaddr
)
# Wait a bit for the push to complete
await trio.sleep(0.1)
# Verify that host_a's peerstore was updated
peerstore_a = host_a.get_peerstore()
peer_id_b = host_b.get_id()
# Check that addresses were added
addrs = peerstore_a.addrs(peer_id_b)
assert len(addrs) > 0
# The observed address should be among the stored addresses
addr_strings = [str(addr) for addr in addrs]
assert str(observed_multiaddr) in addr_strings


@ -3,7 +3,10 @@ from multiaddr import (
Multiaddr,
)
from libp2p.crypto.rsa import create_new_key_pair
from libp2p.peer.envelope import Envelope, seal_record
from libp2p.peer.id import ID
from libp2p.peer.peer_record import PeerRecord
from libp2p.peer.peerstore import (
PeerStore,
PeerStoreError,
@ -84,3 +87,53 @@ def test_peers_with_addrs():
store.clear_addrs(ID(b"peer2"))
assert set(store.peers_with_addrs()) == {ID(b"peer3")}
def test_certified_addr_book():
store = PeerStore()
key_pair = create_new_key_pair()
peer_id = ID.from_pubkey(key_pair.public_key)
addrs = [
Multiaddr("/ip4/127.0.0.1/tcp/9000"),
Multiaddr("/ip4/127.0.0.1/tcp/9001"),
]
ttl = 60
# Construct a signed PeerRecord
record = PeerRecord(peer_id, addrs, 21)
envelope = seal_record(record, key_pair.private_key)
result = store.consume_peer_record(envelope, ttl)
assert result is True
# Retrieve the record
retrieved = store.get_peer_record(peer_id)
assert retrieved is not None
assert isinstance(retrieved, Envelope)
addr_list = store.addrs(peer_id)
assert set(addr_list) == set(addrs)
# Now try to push an older record (should be rejected)
old_record = PeerRecord(peer_id, [Multiaddr("/ip4/10.0.0.1/tcp/4001")], 20)
old_envelope = seal_record(old_record, key_pair.private_key)
result = store.consume_peer_record(old_envelope, ttl)
assert result is False
# Push a new record (should override)
new_addrs = [Multiaddr("/ip4/192.168.0.1/tcp/5001")]
new_record = PeerRecord(peer_id, new_addrs, 23)
new_envelope = seal_record(new_record, key_pair.private_key)
result = store.consume_peer_record(new_envelope, ttl)
assert result is True
# Confirm the record is updated
latest = store.get_peer_record(peer_id)
assert isinstance(latest, Envelope)
assert latest.record().seq == 23
# Addresses are overwritten: only new_addrs should remain
expected_addrs = set(new_addrs)
actual_addrs = set(store.addrs(peer_id))
assert actual_addrs == expected_addrs
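The seq-based rule this test exercises can be sketched independently of libp2p. This is a hypothetical simplified model, not the library's implementation: a record is accepted only when its seq is strictly greater than the stored one, and a newer record overwrites the old addresses entirely.

```python
# Hypothetical simplified model of the acceptance rule exercised above
# (not the PeerStore implementation).
class SimpleRecordBook:
    def __init__(self):
        self._latest = {}  # peer_id -> (seq, addrs)

    def consume(self, peer_id, seq, addrs):
        stored = self._latest.get(peer_id)
        if stored is not None and seq <= stored[0]:
            return False  # stale or replayed record: reject
        self._latest[peer_id] = (seq, list(addrs))  # overwrite entirely
        return True

book = SimpleRecordBook()
assert book.consume("p1", 21, ["/ip4/127.0.0.1/tcp/9000"]) is True
assert book.consume("p1", 20, ["/ip4/10.0.0.1/tcp/4001"]) is False  # older seq
assert book.consume("p1", 23, ["/ip4/192.168.0.1/tcp/5001"]) is True
assert book._latest["p1"] == (23, ["/ip4/192.168.0.1/tcp/5001"])
```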

View File

@@ -0,0 +1,129 @@
from multiaddr import Multiaddr
from libp2p.crypto.rsa import (
create_new_key_pair,
)
from libp2p.peer.envelope import (
Envelope,
consume_envelope,
make_unsigned,
seal_record,
unmarshal_envelope,
)
from libp2p.peer.id import ID
import libp2p.peer.pb.crypto_pb2 as crypto_pb
import libp2p.peer.pb.envelope_pb2 as env_pb
from libp2p.peer.peer_record import PeerRecord
DOMAIN = "libp2p-peer-record"
def test_basic_protobuf_serialization_deserialization():
pubkey = crypto_pb.PublicKey()
pubkey.Type = crypto_pb.KeyType.Ed25519
pubkey.Data = b"\x01\x02\x03"
env = env_pb.Envelope()
env.public_key.CopyFrom(pubkey)
env.payload_type = b"\x03\x01"
env.payload = b"test-payload"
env.signature = b"signature-bytes"
serialized = env.SerializeToString()
new_env = env_pb.Envelope()
new_env.ParseFromString(serialized)
assert new_env.public_key.Type == crypto_pb.KeyType.Ed25519
assert new_env.public_key.Data == b"\x01\x02\x03"
assert new_env.payload_type == b"\x03\x01"
assert new_env.payload == b"test-payload"
assert new_env.signature == b"signature-bytes"
def test_envelope_marshal_unmarshal_roundtrip():
keypair = create_new_key_pair()
pubkey = keypair.public_key
private_key = keypair.private_key
payload_type = b"\x03\x01"
payload = b"test-record"
sig = private_key.sign(make_unsigned(DOMAIN, payload_type, payload))
env = Envelope(pubkey, payload_type, payload, sig)
serialized = env.marshal_envelope()
new_env = unmarshal_envelope(serialized)
assert new_env.public_key == pubkey
assert new_env.payload_type == payload_type
assert new_env.raw_payload == payload
assert new_env.signature == sig
def test_seal_and_consume_envelope_roundtrip():
keypair = create_new_key_pair()
priv_key = keypair.private_key
pub_key = keypair.public_key
peer_id = ID.from_pubkey(pub_key)
addrs = [Multiaddr("/ip4/127.0.0.1/tcp/4001"), Multiaddr("/ip4/127.0.0.1/tcp/4002")]
seq = 12345
record = PeerRecord(peer_id=peer_id, addrs=addrs, seq=seq)
# Seal
envelope = seal_record(record, priv_key)
serialized = envelope.marshal_envelope()
# Consume
env, rec = consume_envelope(serialized, record.domain())
# Assertions
assert env.public_key == pub_key
assert rec.peer_id == peer_id
assert rec.seq == seq
assert rec.addrs == addrs
def test_envelope_equal():
# Create a new keypair
keypair = create_new_key_pair()
private_key = keypair.private_key
# Create a mock PeerRecord
record = PeerRecord(
peer_id=ID.from_base58("QmNM23MiU1Kd7yfiKVdUnaDo8RYca8By4zDmr7uSaVV8Px"),
seq=1,
addrs=[Multiaddr("/ip4/127.0.0.1/tcp/4001")],
)
# Seal it into an Envelope
env1 = seal_record(record, private_key)
# Create a second identical envelope
env2 = Envelope(
public_key=env1.public_key,
payload_type=env1.payload_type,
raw_payload=env1.raw_payload,
signature=env1.signature,
)
# They should be equal
assert env1.equal(env2)
# Now change something — payload type
env2.payload_type = b"\x99\x99"
assert not env1.equal(env2)
# Restore payload_type but change signature
env2.payload_type = env1.payload_type
env2.signature = b"wrong-signature"
assert not env1.equal(env2)
# Restore signature but change payload
env2.signature = env1.signature
env2.raw_payload = b"tampered"
assert not env1.equal(env2)
# Finally, test with a non-envelope object
assert not env1.equal("not-an-envelope")
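The domain-separation idea behind make_unsigned can be illustrated with a standalone sketch. This is a hypothetical encoding (the real make_unsigned may use a different length-prefix scheme): the signed bytes bind the domain string, payload type, and payload together, so a signature produced for one domain cannot be replayed under another.

```python
# Hypothetical sketch of domain separation; the actual wire encoding
# used by make_unsigned may differ.
def make_unsigned_sketch(domain: str, payload_type: bytes, payload: bytes) -> bytes:
    out = b""
    for part in (domain.encode(), payload_type, payload):
        # Length prefixes keep field boundaries unambiguous
        out += len(part).to_bytes(8, "big") + part
    return out

signed_a = make_unsigned_sketch("libp2p-peer-record", b"\x03\x01", b"record")
signed_b = make_unsigned_sketch("other-domain", b"\x03\x01", b"record")
assert signed_a != signed_b  # different domains yield different signing inputs
```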

View File

@@ -0,0 +1,112 @@
import time
from multiaddr import Multiaddr
from libp2p.peer.id import ID
import libp2p.peer.pb.peer_record_pb2 as pb
from libp2p.peer.peer_record import (
PeerRecord,
addrs_from_protobuf,
peer_record_from_protobuf,
unmarshal_record,
)
# Testing methods from PeerRecord base class and PeerRecord protobuf:
def test_basic_protobuf_serialization_deserialization():
record = pb.PeerRecord()
record.seq = 1
serialized = record.SerializeToString()
new_record = pb.PeerRecord()
new_record.ParseFromString(serialized)
assert new_record.seq == 1
def test_timestamp_seq_monotonicity():
rec1 = PeerRecord()
time.sleep(1)
rec2 = PeerRecord()
assert isinstance(rec1.seq, int)
assert isinstance(rec2.seq, int)
assert rec2.seq > rec1.seq, f"Expected seq2 ({rec2.seq}) > seq1 ({rec1.seq})"
def test_addrs_from_protobuf_multiple_addresses():
ma1 = Multiaddr("/ip4/127.0.0.1/tcp/4001")
ma2 = Multiaddr("/ip4/127.0.0.1/tcp/4002")
addr_info1 = pb.PeerRecord.AddressInfo()
addr_info1.multiaddr = ma1.to_bytes()
addr_info2 = pb.PeerRecord.AddressInfo()
addr_info2.multiaddr = ma2.to_bytes()
result = addrs_from_protobuf([addr_info1, addr_info2])
assert result == [ma1, ma2]
def test_peer_record_from_protobuf():
peer_id = ID.from_base58("QmNM23MiU1Kd7yfiKVdUnaDo8RYca8By4zDmr7uSaVV8Px")
record = pb.PeerRecord()
record.peer_id = peer_id.to_bytes()
record.seq = 42
for addr_str in ["/ip4/127.0.0.1/tcp/4001", "/ip4/127.0.0.1/tcp/4002"]:
ma = Multiaddr(addr_str)
addr_info = pb.PeerRecord.AddressInfo()
addr_info.multiaddr = ma.to_bytes()
record.addresses.append(addr_info)
result = peer_record_from_protobuf(record)
assert result.peer_id == peer_id
assert result.seq == 42
assert len(result.addrs) == 2
assert str(result.addrs[0]) == "/ip4/127.0.0.1/tcp/4001"
assert str(result.addrs[1]) == "/ip4/127.0.0.1/tcp/4002"
def test_to_protobuf_generates_correct_message():
peer_id = ID.from_base58("QmNM23MiU1Kd7yfiKVdUnaDo8RYca8By4zDmr7uSaVV8Px")
addrs = [Multiaddr("/ip4/127.0.0.1/tcp/4001")]
seq = 12345
record = PeerRecord(peer_id, addrs, seq)
proto = record.to_protobuf()
assert isinstance(proto, pb.PeerRecord)
assert proto.peer_id == peer_id.to_bytes()
assert proto.seq == seq
assert len(proto.addresses) == 1
assert proto.addresses[0].multiaddr == addrs[0].to_bytes()
def test_unmarshal_record_roundtrip():
record = PeerRecord(
peer_id=ID.from_base58("QmNM23MiU1Kd7yfiKVdUnaDo8RYca8By4zDmr7uSaVV8Px"),
addrs=[Multiaddr("/ip4/127.0.0.1/tcp/4001")],
seq=999,
)
serialized = record.to_protobuf().SerializeToString()
deserialized = unmarshal_record(serialized)
assert deserialized.peer_id == record.peer_id
assert deserialized.seq == record.seq
assert len(deserialized.addrs) == 1
assert deserialized.addrs[0] == record.addrs[0]
def test_marshal_record_and_equal():
peer_id = ID.from_base58("QmNM23MiU1Kd7yfiKVdUnaDo8RYca8By4zDmr7uSaVV8Px")
addrs = [Multiaddr("/ip4/127.0.0.1/tcp/4001")]
original = PeerRecord(peer_id, addrs)
serialized = original.marshal_record()
deserialized = unmarshal_record(serialized)
assert original.equal(deserialized)

View File

@@ -6,10 +6,12 @@ from multiaddr import Multiaddr
from libp2p.crypto.secp256k1 import (
create_new_key_pair,
)
from libp2p.peer.id import ID
from libp2p.peer.peerdata import (
PeerData,
PeerDataError,
)
from libp2p.peer.peerstore import PeerStore
MOCK_ADDR = Multiaddr("/ip4/127.0.0.1/tcp/4001")
MOCK_KEYPAIR = create_new_key_pair()
@@ -39,6 +41,59 @@ def test_set_protocols():
assert peer_data.get_protocols() == protocols
# Test case when removing protocols:
def test_remove_protocols():
peer_data = PeerData()
protocols: Sequence[str] = ["protocol1", "protocol2"]
peer_data.set_protocols(protocols)
peer_data.remove_protocols(["protocol1"])
assert peer_data.get_protocols() == ["protocol2"]
# Test case when clearing the protocol list:
def test_clear_protocol_data():
peer_data = PeerData()
protocols: Sequence[str] = ["protocol1", "protocol2"]
peer_data.set_protocols(protocols)
peer_data.clear_protocol_data()
assert peer_data.get_protocols() == []
# Test case when supports protocols:
def test_supports_protocols():
peer_data = PeerData()
peer_data.set_protocols(["protocol1", "protocol2", "protocol3"])
input_protocols = ["protocol1", "protocol4", "protocol2"]
supported = peer_data.supports_protocols(input_protocols)
assert supported == ["protocol1", "protocol2"]
# Test case for first supported protocol is found
def test_first_supported_protocol_found():
peer_data = PeerData()
peer_data.set_protocols(["protocolA", "protocolB"])
input_protocols = ["protocolC", "protocolB", "protocolA"]
first = peer_data.first_supported_protocol(input_protocols)
assert first == "protocolB"
# Test case for first supported protocol not found
def test_first_supported_protocol_none():
peer_data = PeerData()
peer_data.set_protocols(["protocolX", "protocolY"])
input_protocols = ["protocolA", "protocolB"]
first = peer_data.first_supported_protocol(input_protocols)
assert first == "None supported"
# Test case when adding addresses
def test_add_addrs():
peer_data = PeerData()
@@ -81,6 +136,15 @@ def test_get_metadata_key_not_found():
peer_data.get_metadata("nonexistent_key")
# Test case for clearing metadata
def test_clear_metadata():
peer_data = PeerData()
peer_data.metadata = {"key1": "value1", "key2": "value2"}
peer_data.clear_metadata()
assert peer_data.metadata == {}
# Test case for adding public key
def test_add_pubkey():
peer_data = PeerData()
@@ -107,3 +171,71 @@ def test_get_privkey_not_found():
peer_data = PeerData()
with pytest.raises(PeerDataError):
peer_data.get_privkey()
# Test case for returning all the peers with stored keys
def test_peer_with_keys():
peer_store = PeerStore()
peer_id_1 = ID(b"peer1")
peer_id_2 = ID(b"peer2")
peer_data_1 = PeerData()
peer_data_2 = PeerData()
peer_data_1.pubkey = MOCK_PUBKEY
peer_data_2.pubkey = None
peer_store.peer_data_map = {
peer_id_1: peer_data_1,
peer_id_2: peer_data_2,
}
assert peer_store.peer_with_keys() == [peer_id_1]
# Test case for clearing the key book
def test_clear_keydata():
peer_store = PeerStore()
peer_id = ID(b"peer123")
peer_data = PeerData()
peer_data.pubkey = MOCK_PUBKEY
peer_data.privkey = MOCK_PRIVKEY
peer_store.peer_data_map = {peer_id: peer_data}
peer_store.clear_keydata(peer_id)
assert peer_data.pubkey is None
assert peer_data.privkey is None
# Test case for recording latency for the first time
def test_record_latency_initial():
peer_data = PeerData()
assert peer_data.latency_EWMA() == 0
peer_data.record_latency(100.0)
assert peer_data.latency_EWMA() == 100.0
# Test case for updating latency
def test_record_latency_updates_ewma():
peer_data = PeerData()
peer_data.record_latency(100.0) # first measurement
first = peer_data.latency_EWMA()
peer_data.record_latency(50.0) # second measurement
second = peer_data.latency_EWMA()
assert second < first # EWMA should have smoothed downward
assert second > 50.0 # Not as low as the new latency
assert second != first
def test_clear_metrics():
peer_data = PeerData()
peer_data.record_latency(200.0)
assert peer_data.latency_EWMA() == 200.0
peer_data.clear_metrics()
assert peer_data.latency_EWMA() == 0
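The latency assertions above follow from the standard EWMA update rule. The smoothing factor below is an assumption for illustration, not the library's actual alpha:

```python
# Illustrative EWMA update consistent with the assertions above;
# ALPHA is an assumed smoothing factor, not the library's value.
ALPHA = 0.1

def ewma_update(current: float, sample: float) -> float:
    if current == 0:
        return sample  # the first measurement seeds the average
    return (1 - ALPHA) * current + ALPHA * sample

ewma = ewma_update(0, 100.0)   # first sample seeds to 100.0
ewma = ewma_update(ewma, 50.0)
assert 50.0 < ewma < 100.0  # smoothed downward, but not all the way to 50
```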

View File

@@ -13,7 +13,9 @@ from libp2p.peer.peerinfo import (
)
ALPHABETS = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
VALID_MULTI_ADDR_STR = (
"/ip4/127.0.0.1/tcp/8000/p2p/QmWQqHcMi6Cay5M6KWSNVYSDnxzfqWb1aGFQFSRzBNe49t"
)
def test_init_():
@@ -50,9 +52,6 @@ def test_info_from_p2p_addr_invalid(addr):
def test_info_from_p2p_addr_valid():
m_addr = multiaddr.Multiaddr(VALID_MULTI_ADDR_STR)
info = info_from_p2p_addr(m_addr)
assert info.peer_id.pretty() == "QmWQqHcMi6Cay5M6KWSNVYSDnxzfqWb1aGFQFSRzBNe49t"
assert len(info.addrs) == 1
assert str(info.addrs[0]) == "/ip4/127.0.0.1/tcp/8000"

View File

@@ -2,6 +2,7 @@ import time
import pytest
from multiaddr import Multiaddr
import trio
from libp2p.peer.id import ID
from libp2p.peer.peerstore import (
@@ -89,3 +90,60 @@ def test_peers():
store.add_addr(ID(b"peer3"), Multiaddr("/ip4/127.0.0.1/tcp/4001"), 10)
assert set(store.peer_ids()) == {ID(b"peer1"), ID(b"peer2"), ID(b"peer3")}
@pytest.mark.trio
async def test_addr_stream_yields_new_addrs():
store = PeerStore()
peer_id = ID(b"peer1")
addr1 = Multiaddr("/ip4/127.0.0.1/tcp/4001")
addr2 = Multiaddr("/ip4/127.0.0.1/tcp/4002")
collected = []
async def consume_addrs():
async for addr in store.addr_stream(peer_id):
collected.append(addr)
if len(collected) == 2:
break
async with trio.open_nursery() as nursery:
nursery.start_soon(consume_addrs)
await trio.sleep(2) # Give time for the stream to start
store.add_addr(peer_id, addr1, ttl=10)
await trio.sleep(0.2)
store.add_addr(peer_id, addr2, ttl=10)
await trio.sleep(0.2)
# After collecting expected addresses, cancel the stream
nursery.cancel_scope.cancel()
assert collected == [addr1, addr2]
@pytest.mark.trio
async def test_cleanup_task_remove_expired_data():
store = PeerStore()
peer_id = ID(b"peer123")
addr = Multiaddr("/ip4/127.0.0.1/tcp/4040")
# Insert addr with a short TTL (1s)
store.add_addr(peer_id, addr, 1)
assert store.addrs(peer_id) == [addr]
assert peer_id in store.peer_data_map
# Start cleanup task in a nursery
async with trio.open_nursery() as nursery:
# Run the cleanup task with a short interval so it runs soon
nursery.start_soon(store.start_cleanup_task, 1)
# Sleep long enough for TTL to expire and cleanup to run
await trio.sleep(3)
# Cancel the nursery to stop background tasks
nursery.cancel_scope.cancel()
# Confirm the peer data is gone from the peer_data_map
assert peer_id not in store.peer_data_map
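The expiry the cleanup task enforces can be modeled without the peerstore at all. This is a hypothetical helper, not the library's implementation: each address carries an absolute expiry timestamp, and a cleanup pass drops the expired entries.

```python
import time

# Hypothetical model of TTL-based expiry (not the PeerStore code):
# keep only the entries whose expiry timestamp is still in the future.
def live_addrs(entries, now=None):
    """entries: iterable of (addr, expiry_timestamp) pairs."""
    now = time.time() if now is None else now
    return [addr for addr, expiry in entries if expiry > now]

entries = [
    ("/ip4/127.0.0.1/tcp/4040", 100.0),  # expires at t=100
    ("/ip4/127.0.0.1/tcp/4041", 300.0),  # expires at t=300
]
assert live_addrs(entries, now=200.0) == ["/ip4/127.0.0.1/tcp/4041"]
```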

View File

@@ -3,6 +3,7 @@ import pytest
from libp2p.custom_types import (
TProtocol,
)
from libp2p.protocol_muxer.multiselect import Multiselect
from libp2p.tools.utils import (
create_echo_stream_handler,
)
@@ -138,3 +139,23 @@ async def test_multistream_command(security_protocol):
# Dialer asks for an unsupported command
with pytest.raises(ValueError, match="Command not supported"):
await dialer.send_command(listener.get_id(), "random")
@pytest.mark.trio
async def test_get_protocols_returns_all_registered_protocols():
ms = Multiselect()
async def dummy_handler(stream):
pass
p1 = TProtocol("/echo/1.0.0")
p2 = TProtocol("/foo/1.0.0")
p3 = TProtocol("/bar/1.0.0")
ms.add_handler(p1, dummy_handler)
ms.add_handler(p2, dummy_handler)
ms.add_handler(p3, dummy_handler)
protocols = ms.get_protocols()
assert set(protocols) == {p1, p2, p3}

View File

@@ -5,10 +5,12 @@ import inspect
from typing import (
NamedTuple,
)
from unittest.mock import patch
import pytest
import trio
from libp2p.custom_types import AsyncValidatorFn
from libp2p.exceptions import (
ValidationError,
)
@@ -243,7 +245,37 @@ async def test_get_msg_validators():
((False, True), (True, False), (True, True)),
)
@pytest.mark.trio
async def test_validate_msg_with_throttle_condition(
is_topic_1_val_passed, is_topic_2_val_passed
):
CONCURRENCY_LIMIT = 10
state = {
"concurrency_counter": 0,
"max_observed": 0,
}
lock = trio.Lock()
async def mock_run_async_validator(
self,
func: AsyncValidatorFn,
msg_forwarder: ID,
msg: rpc_pb2.Message,
results: list[bool],
) -> None:
async with self._validator_semaphore:
async with lock:
state["concurrency_counter"] += 1
if state["concurrency_counter"] > state["max_observed"]:
state["max_observed"] = state["concurrency_counter"]
try:
result = await func(msg_forwarder, msg)
results.append(result)
finally:
async with lock:
state["concurrency_counter"] -= 1
async with PubsubFactory.create_batch_with_floodsub(1) as pubsubs_fsub:
def passed_sync_validator(peer_id: ID, msg: rpc_pb2.Message) -> bool:
@@ -280,11 +312,19 @@ async def test_validate_msg(is_topic_1_val_passed, is_topic_2_val_passed):
seqno=b"\x00" * 8,
)
with patch(
"libp2p.pubsub.pubsub.Pubsub._run_async_validator",
new=mock_run_async_validator,
):
if is_topic_1_val_passed and is_topic_2_val_passed:
await pubsubs_fsub[0].validate_msg(pubsubs_fsub[0].my_id, msg)
else:
with pytest.raises(ValidationError):
await pubsubs_fsub[0].validate_msg(pubsubs_fsub[0].my_id, msg)
assert state["max_observed"] <= CONCURRENCY_LIMIT, (
f"Max concurrency observed: {state['max_observed']}"
)
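The semaphore-bounded pattern the patched validator verifies can be sketched with plain threads standing in for trio tasks. Names here are hypothetical illustrations, not the pubsub internals:

```python
import threading

# Hypothetical sketch: cap concurrent validator runs with a semaphore,
# tracking the highest concurrency ever observed (threads stand in for
# the trio tasks the real code uses).
LIMIT = 10
sem = threading.Semaphore(LIMIT)
lock = threading.Lock()
counter = 0
max_observed = 0

def run_validator():
    global counter, max_observed
    with sem:  # at most LIMIT validators run at once
        with lock:
            counter += 1
            max_observed = max(max_observed, counter)
        # ... validation work would happen here ...
        with lock:
            counter -= 1

threads = [threading.Thread(target=run_validator) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert max_observed <= LIMIT  # the semaphore capped concurrency
```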
@pytest.mark.trio

View File

@@ -105,11 +105,11 @@ async def test_relay_discovery_initialization():
@pytest.mark.trio
async def test_relay_discovery_find_relay_peerstore_method():
"""Test finding a relay node via discovery using the peerstore method."""
async with HostFactory.create_batch_and_listen(2) as hosts:
relay_host, client_host = hosts
logger.info("Created hosts for test_relay_discovery_find_relay_peerstore_method")
logger.info("Relay host ID: %s", relay_host.get_id())
logger.info("Client host ID: %s", client_host.get_id())
@@ -144,19 +144,19 @@ async def test_relay_discovery_find_relay():
# Start discovery service
async with background_trio_service(client_discovery):
await client_discovery.event_started.wait()
logger.info("Client discovery service started (peerstore method)")
# Wait for discovery to find the relay using the peerstore method
logger.info("Waiting for relay discovery using peerstore...")
# Manually trigger discovery which uses peerstore as default
await client_discovery.discover_relays()
# Check if relay was found
with trio.fail_after(DISCOVERY_TIMEOUT):
for _ in range(20):  # Try multiple times
if relay_host.get_id() in client_discovery._discovered_relays:
logger.info("Relay discovered successfully (peerstore method)")
break
# Wait and try again
@@ -164,14 +164,194 @@ async def test_relay_discovery_find_relay():
# Manually trigger discovery again
await client_discovery.discover_relays()
else:
pytest.fail(
"Failed to discover relay node within timeout (peerstore method)"
)
# Verify that relay was found and is valid
assert relay_host.get_id() in client_discovery._discovered_relays, (
"Relay should be discovered (peerstore method)"
)
relay_info = client_discovery._discovered_relays[relay_host.get_id()]
assert relay_info.peer_id == relay_host.get_id(), (
"Peer ID should match (peerstore method)"
)
@pytest.mark.trio
async def test_relay_discovery_find_relay_direct_connection_method():
"""Test finding a relay node via discovery using the direct connection method."""
async with HostFactory.create_batch_and_listen(2) as hosts:
relay_host, client_host = hosts
logger.info(
"Created hosts for test_relay_discovery_find_relay_direct_connection_method"
)
logger.info("Relay host ID: %s", relay_host.get_id())
logger.info("Client host ID: %s", client_host.get_id())
# Explicitly register the protocol handlers on relay_host
relay_host.set_stream_handler(PROTOCOL_ID, simple_stream_handler)
relay_host.set_stream_handler(STOP_PROTOCOL_ID, simple_stream_handler)
# Manually add protocol to peerstore for testing, then remove to force fallback
client_host.get_peerstore().add_protocols(
relay_host.get_id(), [str(PROTOCOL_ID)]
)
# Set up discovery on the client host
client_discovery = RelayDiscovery(
client_host, discovery_interval=5
) # Use shorter interval for testing
try:
# Connect peers so they can discover each other
with trio.fail_after(CONNECT_TIMEOUT):
logger.info("Connecting client host to relay host")
await connect(client_host, relay_host)
assert relay_host.get_network().connections[client_host.get_id()], (
"Peers not connected"
)
logger.info("Connection established between peers")
except Exception as e:
logger.error("Failed to connect peers: %s", str(e))
raise
# Remove the relay from the peerstore to test fallback to direct connection
client_host.get_peerstore().clear_peerdata(relay_host.get_id())
# Make sure that peer_id is not present in peerstore
assert relay_host.get_id() not in client_host.get_peerstore().peer_ids()
# Start discovery service
async with background_trio_service(client_discovery):
await client_discovery.event_started.wait()
logger.info("Client discovery service started (direct connection method)")
# Wait for discovery to find the relay using the direct connection method
logger.info(
"Waiting for relay discovery using direct connection fallback..."
)
# Manually trigger discovery which should fallback to direct connection
await client_discovery.discover_relays()
# Check if relay was found
with trio.fail_after(DISCOVERY_TIMEOUT):
for _ in range(20): # Try multiple times
if relay_host.get_id() in client_discovery._discovered_relays:
logger.info("Relay discovered successfully (direct method)")
break
# Wait and try again
await trio.sleep(1)
# Manually trigger discovery again
await client_discovery.discover_relays()
else:
pytest.fail(
"Failed to discover relay node within timeout (direct method)"
)
# Verify that relay was found and is valid
assert relay_host.get_id() in client_discovery._discovered_relays, (
"Relay should be discovered (direct method)"
)
relay_info = client_discovery._discovered_relays[relay_host.get_id()]
assert relay_info.peer_id == relay_host.get_id(), (
"Peer ID should match (direct method)"
)
@pytest.mark.trio
async def test_relay_discovery_find_relay_mux_method():
"""
Test finding a relay node via discovery using the mux method
(fallback after direct connection fails).
"""
async with HostFactory.create_batch_and_listen(2) as hosts:
relay_host, client_host = hosts
logger.info("Created hosts for test_relay_discovery_find_relay_mux_method")
logger.info("Relay host ID: %s", relay_host.get_id())
logger.info("Client host ID: %s", client_host.get_id())
# Explicitly register the protocol handlers on relay_host
relay_host.set_stream_handler(PROTOCOL_ID, simple_stream_handler)
relay_host.set_stream_handler(STOP_PROTOCOL_ID, simple_stream_handler)
client_host.set_stream_handler(PROTOCOL_ID, simple_stream_handler)
client_host.set_stream_handler(STOP_PROTOCOL_ID, simple_stream_handler)
# Set up discovery on the client host
client_discovery = RelayDiscovery(
client_host, discovery_interval=5
) # Use shorter interval for testing
try:
# Connect peers so they can discover each other
with trio.fail_after(CONNECT_TIMEOUT):
logger.info("Connecting client host to relay host")
await connect(client_host, relay_host)
assert relay_host.get_network().connections[client_host.get_id()], (
"Peers not connected"
)
logger.info("Connection established between peers")
except Exception as e:
logger.error("Failed to connect peers: %s", str(e))
raise
# Remove the relay from the peerstore to test fallback
client_host.get_peerstore().clear_peerdata(relay_host.get_id())
# Make sure that peer_id is not present in peerstore
assert relay_host.get_id() not in client_host.get_peerstore().peer_ids()
# Mock the _check_via_direct_connection method to return None
# This forces the discovery to fall back to the mux method
async def mock_direct_check_fails(peer_id):
"""Mock that always returns None to force mux fallback."""
return None
client_discovery._check_via_direct_connection = mock_direct_check_fails
# Start discovery service
async with background_trio_service(client_discovery):
await client_discovery.event_started.wait()
logger.info("Client discovery service started (mux method)")
# Wait for discovery to find the relay using the mux method
logger.info("Waiting for relay discovery using mux fallback...")
# Manually trigger discovery which should fallback to mux method
await client_discovery.discover_relays()
# Check if relay was found
with trio.fail_after(DISCOVERY_TIMEOUT):
for _ in range(20): # Try multiple times
if relay_host.get_id() in client_discovery._discovered_relays:
logger.info("Relay discovered successfully (mux method)")
break
# Wait and try again
await trio.sleep(1)
# Manually trigger discovery again
await client_discovery.discover_relays()
else:
pytest.fail(
"Failed to discover relay node within timeout (mux method)"
)
# Verify that relay was found and is valid
assert relay_host.get_id() in client_discovery._discovered_relays, (
"Relay should be discovered (mux method)"
)
relay_info = client_discovery._discovered_relays[relay_host.get_id()]
assert relay_info.peer_id == relay_host.get_id(), (
"Peer ID should match (mux method)"
)
# Verify that the protocol was cached via mux method
assert relay_host.get_id() in client_discovery._protocol_cache, (
"Protocol should be cached (mux method)"
)
assert (
str(PROTOCOL_ID)
in client_discovery._protocol_cache[relay_host.get_id()]
), "Relay protocol should be in cache (mux method)"
@pytest.mark.trio

View File

@@ -8,6 +8,7 @@ from libp2p.stream_muxer.mplex.exceptions import (
MplexStreamClosed,
MplexStreamEOF,
MplexStreamReset,
MuxedConnUnavailable,
)
from libp2p.stream_muxer.mplex.mplex import (
MPLEX_MESSAGE_CHANNEL_SIZE,
@@ -213,3 +214,39 @@ async def test_mplex_stream_reset(mplex_stream_pair):
# `reset` should do nothing as well.
await stream_0.reset()
await stream_1.reset()
@pytest.mark.trio
async def test_mplex_stream_close_timeout(monkeypatch, mplex_stream_pair):
stream_0, stream_1 = mplex_stream_pair
# Patch send_message to hang forever (simulate a stalled connection)
async def fake_send_message(*args, **kwargs):
await trio.sleep_forever()
monkeypatch.setattr(stream_0.muxed_conn, "send_message", fake_send_message)
with pytest.raises(TimeoutError):
await stream_0.close()
@pytest.mark.trio
async def test_mplex_stream_close_mux_unavailable(monkeypatch, mplex_stream_pair):
stream_0, _ = mplex_stream_pair
# Patch send_message to raise MuxedConnUnavailable
def raise_unavailable(*args, **kwargs):
raise MuxedConnUnavailable("Simulated conn drop")
monkeypatch.setattr(stream_0.muxed_conn, "send_message", raise_unavailable)
# Case 1: Mplex is shutting down — should not raise
stream_0.muxed_conn.event_shutting_down.set()
await stream_0.close() # Should NOT raise
# Case 2: Mplex is NOT shutting down — should raise RuntimeError
stream_0.event_local_closed = trio.Event() # Reset since it was set in first call
stream_0.muxed_conn.event_shutting_down = trio.Event() # Unset the shutdown flag
with pytest.raises(RuntimeError, match="Failed to send close message"):
await stream_0.close()

View File

@@ -0,0 +1,590 @@
from typing import Any, cast
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
import trio
from trio.testing import wait_all_tasks_blocked
from libp2p.stream_muxer.exceptions import (
MuxedConnUnavailable,
)
from libp2p.stream_muxer.mplex.constants import HeaderTags
from libp2p.stream_muxer.mplex.datastructures import StreamID
from libp2p.stream_muxer.mplex.exceptions import (
MplexStreamClosed,
MplexStreamEOF,
MplexStreamReset,
)
from libp2p.stream_muxer.mplex.mplex_stream import MplexStream
class MockMuxedConn:
"""A mock Mplex connection for testing purposes."""
def __init__(self):
self.sent_messages = []
self.streams: dict[StreamID, MplexStream] = {}
self.streams_lock = trio.Lock()
self.is_unavailable = False
async def send_message(
self, flag: HeaderTags, data: bytes | None, stream_id: StreamID
) -> None:
"""Mocks sending a message over the connection."""
if self.is_unavailable:
raise MuxedConnUnavailable("Connection is unavailable")
self.sent_messages.append((flag, data, stream_id))
# Yield to allow other tasks to run
await trio.lowlevel.checkpoint()
def get_remote_address(self) -> tuple[str, int]:
"""Mocks getting the remote address."""
return "127.0.0.1", 4001
@pytest.fixture
async def mplex_stream():
"""Provides a fully initialized MplexStream and its communication channels."""
# Use a buffered channel to prevent deadlocks in simple tests
send_chan, recv_chan = trio.open_memory_channel(10)
stream_id = StreamID(1, is_initiator=True)
muxed_conn = MockMuxedConn()
stream = MplexStream("test-stream", stream_id, cast(Any, muxed_conn), recv_chan)
muxed_conn.streams[stream_id] = stream
yield stream, send_chan, muxed_conn
# Cleanup: Close channels and reset stream state
await send_chan.aclose()
await recv_chan.aclose()
# Reset stream state to prevent cross-test contamination
stream.event_local_closed = trio.Event()
stream.event_remote_closed = trio.Event()
stream.event_reset = trio.Event()
# ===============================================
# 1. Tests for Stream-Level Lock Integration
# ===============================================
@pytest.mark.trio
async def test_stream_write_is_protected_by_rwlock(mplex_stream):
"""Verify that stream.write() acquires and releases the write lock."""
stream, _, muxed_conn = mplex_stream
# Mock lock methods
original_acquire = stream.rw_lock.acquire_write
original_release = stream.rw_lock.release_write
stream.rw_lock.acquire_write = AsyncMock(wraps=original_acquire)
stream.rw_lock.release_write = MagicMock(wraps=original_release)
await stream.write(b"test data")
stream.rw_lock.acquire_write.assert_awaited_once()
stream.rw_lock.release_write.assert_called_once()
# Verify the message was actually sent
assert len(muxed_conn.sent_messages) == 1
flag, data, stream_id = muxed_conn.sent_messages[0]
assert flag == HeaderTags.MessageInitiator
assert data == b"test data"
assert stream_id == stream.stream_id
@pytest.mark.trio
async def test_stream_read_is_protected_by_rwlock(mplex_stream):
"""Verify that stream.read() acquires and releases the read lock."""
stream, send_chan, _ = mplex_stream
# Mock lock methods
original_acquire = stream.rw_lock.acquire_read
original_release = stream.rw_lock.release_read
stream.rw_lock.acquire_read = AsyncMock(wraps=original_acquire)
stream.rw_lock.release_read = AsyncMock(wraps=original_release)
await send_chan.send(b"hello")
result = await stream.read(5)
stream.rw_lock.acquire_read.assert_awaited_once()
stream.rw_lock.release_read.assert_awaited_once()
assert result == b"hello"
@pytest.mark.trio
async def test_multiple_readers_can_coexist(mplex_stream):
"""Verify multiple readers can operate concurrently."""
stream, send_chan, _ = mplex_stream
# Send enough data for both reads
await send_chan.send(b"data1")
await send_chan.send(b"data2")
# Track lock acquisition order
acquisition_order = []
release_order = []
# Patch lock methods to track concurrency
original_acquire = stream.rw_lock.acquire_read
original_release = stream.rw_lock.release_read
async def tracked_acquire():
nonlocal acquisition_order
acquisition_order.append("start")
await original_acquire()
acquisition_order.append("acquired")
async def tracked_release():
nonlocal release_order
release_order.append("start")
await original_release()
release_order.append("released")
with (
patch.object(
stream.rw_lock, "acquire_read", side_effect=tracked_acquire, autospec=True
),
patch.object(
stream.rw_lock, "release_read", side_effect=tracked_release, autospec=True
),
):
# Execute concurrent reads
async with trio.open_nursery() as nursery:
nursery.start_soon(stream.read, 5)
nursery.start_soon(stream.read, 5)
# Verify both reads happened
assert acquisition_order.count("start") == 2
assert acquisition_order.count("acquired") == 2
assert release_order.count("start") == 2
assert release_order.count("released") == 2
@pytest.mark.trio
async def test_writer_blocks_readers(mplex_stream):
"""Verify that a writer blocks all readers and new readers queue behind."""
stream, send_chan, _ = mplex_stream
writer_acquired = trio.Event()
readers_ready = trio.Event()
writer_finished = trio.Event()
all_readers_started = trio.Event()
all_readers_done = trio.Event()
counters = {"reader_start_count": 0, "reader_done_count": 0}
reader_target = 3
reader_start_lock = trio.Lock()
# Patch write lock to control test flow
original_acquire_write = stream.rw_lock.acquire_write
original_release_write = stream.rw_lock.release_write
async def tracked_acquire_write():
await original_acquire_write()
writer_acquired.set()
# Wait for readers to queue up
await readers_ready.wait()
# Must be synchronous since real release_write is sync
def tracked_release_write():
original_release_write()
writer_finished.set()
with (
patch.object(
stream.rw_lock, "acquire_write", side_effect=tracked_acquire_write
),
patch.object(
stream.rw_lock, "release_write", side_effect=tracked_release_write
),
):
async with trio.open_nursery() as nursery:
# Start writer
nursery.start_soon(stream.write, b"test")
await writer_acquired.wait()
# Start readers
async def reader_task():
async with reader_start_lock:
counters["reader_start_count"] += 1
if counters["reader_start_count"] == reader_target:
all_readers_started.set()
try:
# This will block until data is available
await stream.read(5)
except (MplexStreamReset, MplexStreamEOF):
pass
finally:
async with reader_start_lock:
counters["reader_done_count"] += 1
if counters["reader_done_count"] == reader_target:
all_readers_done.set()
for _ in range(reader_target):
nursery.start_soon(reader_task)
# Wait until all readers are started
await all_readers_started.wait()
# Let the writer finish and release the lock
readers_ready.set()
await writer_finished.wait()
# Send data to unblock the readers
for i in range(reader_target):
await send_chan.send(b"data" + str(i).encode())
# Wait for all readers to finish
await all_readers_done.wait()
@pytest.mark.trio
async def test_writer_waits_for_readers(mplex_stream):
"""Verify a writer waits for existing readers to complete."""
stream, send_chan, _ = mplex_stream
readers_started = trio.Event()
writer_entered = trio.Event()
writer_acquiring = trio.Event()
readers_finished = trio.Event()
# Send data for readers
await send_chan.send(b"data1")
await send_chan.send(b"data2")
# Patch read lock to control test flow
original_acquire_read = stream.rw_lock.acquire_read
async def tracked_acquire_read():
await original_acquire_read()
readers_started.set()
# Wait until readers are allowed to finish
await readers_finished.wait()
# Patch write lock to detect when writer is blocked
original_acquire_write = stream.rw_lock.acquire_write
async def tracked_acquire_write():
writer_acquiring.set()
await original_acquire_write()
writer_entered.set()
with (
patch.object(stream.rw_lock, "acquire_read", side_effect=tracked_acquire_read),
patch.object(
stream.rw_lock, "acquire_write", side_effect=tracked_acquire_write
),
):
async with trio.open_nursery() as nursery:
# Start readers
nursery.start_soon(stream.read, 5)
nursery.start_soon(stream.read, 5)
# Wait for at least one reader to acquire the lock
await readers_started.wait()
# Start writer (should block)
nursery.start_soon(stream.write, b"test")
# Wait for writer to start acquiring lock
await writer_acquiring.wait()
# Verify writer hasn't entered critical section
assert not writer_entered.is_set()
# Allow readers to finish
readers_finished.set()
# Verify writer can proceed
await writer_entered.wait()
@pytest.mark.trio
async def test_lock_behavior_during_cancellation(mplex_stream):
"""Verify that a lock is released when a task holding it is cancelled."""
stream, _, _ = mplex_stream
reader_acquired_lock = trio.Event()
async def cancellable_reader(task_status):
async with stream.rw_lock.read_lock():
reader_acquired_lock.set()
task_status.started()
# Wait indefinitely until cancelled.
await trio.sleep_forever()
async with trio.open_nursery() as nursery:
# Start the reader and wait for it to acquire the lock.
await nursery.start(cancellable_reader)
await reader_acquired_lock.wait()
# Now that the reader has the lock, cancel the nursery.
# This will cancel the reader task, and its lock should be released.
nursery.cancel_scope.cancel()
# After the nursery is cancelled, the reader should have released the lock.
# To verify, we try to acquire a write lock. If the read lock was not
# released, this will time out.
with trio.move_on_after(1) as cancel_scope:
async with stream.rw_lock.write_lock():
pass
if cancel_scope.cancelled_caught:
pytest.fail(
"Write lock could not be acquired after a cancelled reader, "
"indicating the read lock was not released."
)
@pytest.mark.trio
async def test_concurrent_read_write_sequence(mplex_stream):
"""Verify complex sequence of interleaved reads and writes."""
stream, send_chan, _ = mplex_stream
results = []
# Use a mock to intercept writes and feed them back to the read channel
original_write = stream.write
reader1_finished = trio.Event()
writer1_finished = trio.Event()
reader2_finished = trio.Event()
async def mocked_write(data: bytes) -> None:
await original_write(data)
# Simulate the other side receiving the data and sending a response
# by putting data into the read channel.
await send_chan.send(data)
with patch.object(stream, "write", wraps=mocked_write) as patched_write:
async with trio.open_nursery() as nursery:
# Test scenario:
# 1. Reader 1 starts, waits for data.
# 2. Writer 1 writes, which gets fed back to the stream.
# 3. Reader 2 starts, reads what Writer 1 wrote.
# 4. Writer 2 writes.
async def reader1():
nonlocal results
results.append("R1 start")
data = await stream.read(5)
results.append(data)
results.append("R1 done")
reader1_finished.set()
async def writer1():
nonlocal results
await reader1_finished.wait()
results.append("W1 start")
await stream.write(b"write1")
results.append("W1 done")
writer1_finished.set()
async def reader2():
nonlocal results
await writer1_finished.wait()
# This will read the data from writer1
results.append("R2 start")
data = await stream.read(6)
results.append(data)
results.append("R2 done")
reader2_finished.set()
async def writer2():
nonlocal results
await reader2_finished.wait()
results.append("W2 start")
await stream.write(b"write2")
results.append("W2 done")
# Execute sequence
nursery.start_soon(reader1)
nursery.start_soon(writer1)
nursery.start_soon(reader2)
nursery.start_soon(writer2)
await send_chan.send(b"data1")
# Verify sequence and that write was called
assert patched_write.call_count == 2
assert results == [
"R1 start",
b"data1",
"R1 done",
"W1 start",
"W1 done",
"R2 start",
b"write1",
"R2 done",
"W2 start",
"W2 done",
]
# ===============================================
# 2. Tests for Reset, EOF, and Close Interactions
# ===============================================
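# The tests in this section rely on a specific ordering of the read-side
# checks: a reset wins over everything, buffered data is drained before
# EOF is raised, and EOF is only raised once the remote side has closed.
# A simplified, hypothetical model of one read step (not the real
# `MplexStream.read`):

```python
from collections import deque


class StreamReset(Exception):
    """Raised when the stream has been reset."""


class StreamEOF(Exception):
    """Raised when the buffer is empty and the remote side is closed."""


def read_step(buffer: deque, reset: bool, remote_closed: bool) -> bytes:
    """One read attempt: reset first, then buffered data, then EOF."""
    if reset:
        raise StreamReset
    if buffer:
        return buffer.popleft()
    if remote_closed:
        raise StreamEOF
    raise BlockingIOError  # a real read would block waiting for data
```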
@pytest.mark.trio
async def test_read_after_remote_close_triggers_eof(mplex_stream):
"""Verify reading from a remotely closed stream returns EOF correctly."""
stream, send_chan, _ = mplex_stream
# Send some data that can be read first
await send_chan.send(b"data")
# Close the channel to signify no more data will ever arrive
await send_chan.aclose()
# Mark the stream as remotely closed
stream.event_remote_closed.set()
# The first read should succeed, consuming the buffered data
data = await stream.read(4)
assert data == b"data"
# Now that the buffer is empty and the channel is closed, this should raise EOF
with pytest.raises(MplexStreamEOF):
await stream.read(1)
@pytest.mark.trio
async def test_read_on_closed_stream_raises_eof(mplex_stream):
"""Test that reading from a closed stream with no data raises EOF."""
stream, send_chan, _ = mplex_stream
stream.event_remote_closed.set()
await send_chan.aclose() # Ensure the channel is closed
# Reading from a stream that is closed and has no data should raise EOF
with pytest.raises(MplexStreamEOF):
await stream.read(100)
@pytest.mark.trio
async def test_write_to_locally_closed_stream_raises(mplex_stream):
"""Verify writing to a locally closed stream raises MplexStreamClosed."""
stream, _, _ = mplex_stream
stream.event_local_closed.set()
with pytest.raises(MplexStreamClosed):
await stream.write(b"this should fail")
@pytest.mark.trio
async def test_read_from_reset_stream_raises(mplex_stream):
"""Verify reading from a reset stream raises MplexStreamReset."""
stream, _, _ = mplex_stream
stream.event_reset.set()
with pytest.raises(MplexStreamReset):
await stream.read(10)
@pytest.mark.trio
async def test_write_to_reset_stream_raises(mplex_stream):
"""Verify writing to a reset stream raises MplexStreamClosed."""
stream, _, _ = mplex_stream
# A stream reset implies it's also locally closed.
await stream.reset()
# The `write` method checks `event_local_closed`, which `reset` sets.
with pytest.raises(MplexStreamClosed):
await stream.write(b"this should also fail")
@pytest.mark.trio
async def test_stream_reset_cleans_up_resources(mplex_stream):
"""Verify reset() cleans up stream state and resources."""
stream, _, muxed_conn = mplex_stream
stream_id = stream.stream_id
assert stream_id in muxed_conn.streams
await stream.reset()
assert stream.event_reset.is_set()
assert stream.event_local_closed.is_set()
assert stream.event_remote_closed.is_set()
assert (HeaderTags.ResetInitiator, None, stream_id) in muxed_conn.sent_messages
assert stream_id not in muxed_conn.streams
# Verify the underlying data channel is closed
with pytest.raises(trio.ClosedResourceError):
await stream.incoming_data_channel.receive()
# ===============================================
# 3. Rigorous Concurrency Tests with Events
# ===============================================
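# The tests in this section make interleavings deterministic by gating
# each task on explicit events instead of relying on scheduling order.
# The same pattern, sketched with stdlib threading primitives for
# illustration (the tests themselves use `trio.Event`):

```python
import threading


def run_reader_blocks_writer() -> list:
    """Reader signals it holds the lock; writer only proceeds after that
    signal; reader only exits after the writer is done. The resulting
    order is fixed regardless of scheduling.
    """
    order = []
    reader_ready = threading.Event()
    writer_done = threading.Event()

    def reader() -> None:
        order.append("reader holds lock")
        reader_ready.set()
        writer_done.wait()
        order.append("reader exits")

    def writer() -> None:
        reader_ready.wait()
        order.append("writer runs")
        writer_done.set()

    threads = [threading.Thread(target=reader), threading.Thread(target=writer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order
```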
@pytest.mark.trio
async def test_writer_is_blocked_by_reader_using_events(mplex_stream):
"""Verify a writer must wait for a reader using trio.Event for synchronization."""
stream, _, _ = mplex_stream
reader_has_lock = trio.Event()
writer_finished = trio.Event()
async def reader():
async with stream.rw_lock.read_lock():
reader_has_lock.set()
# Hold the lock until the writer has finished its attempt
await writer_finished.wait()
async def writer():
await reader_has_lock.wait()
# This call will now block until the reader releases the lock
await stream.write(b"data")
writer_finished.set()
async with trio.open_nursery() as nursery:
nursery.start_soon(reader)
nursery.start_soon(writer)
# Verify writer is blocked
await wait_all_tasks_blocked()
assert not writer_finished.is_set()
# Signal the reader to finish
writer_finished.set()
@pytest.mark.trio
async def test_multiple_readers_can_read_concurrently_using_events(mplex_stream):
"""Verify that multiple readers can acquire a read lock simultaneously."""
stream, _, _ = mplex_stream
counters = {"readers_in_critical_section": 0}
lock = trio.Lock() # To safely mutate the counter
reader1_acquired = trio.Event()
reader2_acquired = trio.Event()
all_readers_finished = trio.Event()
async def concurrent_reader(event_to_set: trio.Event):
async with stream.rw_lock.read_lock():
async with lock:
counters["readers_in_critical_section"] += 1
event_to_set.set()
# Wait until all readers have finished before exiting the lock context
await all_readers_finished.wait()
async with lock:
counters["readers_in_critical_section"] -= 1
async with trio.open_nursery() as nursery:
nursery.start_soon(concurrent_reader, reader1_acquired)
nursery.start_soon(concurrent_reader, reader2_acquired)
# Wait for both readers to acquire their locks
await reader1_acquired.wait()
await reader2_acquired.wait()
# Check that both were in the critical section at the same time
async with lock:
assert counters["readers_in_critical_section"] == 2
# Signal for all readers to finish
all_readers_finished.set()
# Verify they exit cleanly
await wait_all_tasks_blocked()
async with lock:
assert counters["readers_in_critical_section"] == 0
