Conversation
Pull request overview
Adds DNS-based peer discovery (BOLT-0010) so the node can bootstrap outbound connections automatically from well-known DNS seeds instead of relying solely on manual peer configuration.
Changes:
- Introduces a new dns_bootstrap module to query SRV records, decode bech32 pubkeys from hostnames, and resolve A/AAAA records into connectable peers.
- Wires optional dns_bootstrap configuration through config.json → NodeConfig → LdkUserInfo and starts a periodic bootstrap task in start_ldk (a hedged sketch of the resulting config shape follows this list).
- Adds the hickory-resolver dependency to perform async DNS lookups.
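For orientation only, here is a rough sketch of what the optional `dns_bootstrap` block might deserialize into. The field names (`seeds`, `num_peers`, `interval_secs`) are inferred from the code excerpts later in this review, not copied from the PR, and the sketch assumes serde is used for config.json parsing.

```rust
use serde::Deserialize;

/// Hypothetical shape of the optional `dns_bootstrap` section of config.json.
/// Field names are assumptions inferred from the excerpts in this review.
#[derive(Debug, Clone, Deserialize)]
pub struct DnsBootstrapConfig {
    /// DNS seeds to query for `_nodes._tcp` SRV records.
    pub seeds: Vec<DnsSeed>,
    /// Target number of peers to keep connected via bootstrap.
    pub num_peers: usize,
    /// How often to re-run the bootstrap loop, in seconds.
    pub interval_secs: u64,
}

#[derive(Debug, Clone, Deserialize)]
pub struct DnsSeed {
    /// Primary seed domain (e.g. "nodes.lightning.wiki").
    pub primary: String,
    /// Optional fallback domain for SOA/TCP fallback.
    pub fallback_soa: Option<String>,
}
```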
Reviewed changes
Copilot reviewed 9 out of 10 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| src/main.rs | Starts periodic DNS bootstrap task to discover/connect peers. |
| src/dns_bootstrap/bootstrapper.rs | Core bootstrap logic: SRV query → pubkey decode → host resolve → peer list. |
| src/dns_bootstrap/query.rs | DNS resolver wrapper for SRV + IP lookups with timeouts (see the SRV lookup sketch below this table). |
| src/dns_bootstrap/pubkey.rs | Decodes node pubkeys from bech32 DNS labels; includes unit tests. |
| src/dns_bootstrap/config.rs | Defines DnsBootstrapConfig + default seed list. |
| src/dns_bootstrap/mod.rs | Module exports and error type definitions. |
| src/config.rs | Adds dns_bootstrap to NodeConfig and config help output. |
| src/cli.rs | Extends LdkUserInfo to carry dns_bootstrap config. |
| Cargo.toml | Adds hickory-resolver dependency. |
| Cargo.lock | Lockfile updates for new DNS dependency graph. |
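As a concrete illustration of the query.rs row above, here is a minimal sketch of a single SRV lookup with a timeout. It assumes hickory-resolver 0.24's `TokioAsyncResolver` constructor and a hypothetical 5-second timeout; it is not the PR's actual wrapper.

```rust
use std::time::Duration;

use hickory_resolver::config::{ResolverConfig, ResolverOpts};
use hickory_resolver::TokioAsyncResolver;

/// Hedged sketch: query the `_nodes._tcp` SRV records for a BOLT-0010 seed,
/// bounded by a timeout, and return (target hostname, port) pairs.
async fn srv_lookup_sketch(
    seed: &str,
) -> Result<Vec<(String, u16)>, Box<dyn std::error::Error>> {
    let resolver =
        TokioAsyncResolver::tokio(ResolverConfig::default(), ResolverOpts::default());
    let name = format!("_nodes._tcp.{}.", seed.trim_end_matches('.'));
    let lookup =
        tokio::time::timeout(Duration::from_secs(5), resolver.srv_lookup(name.as_str()))
            .await??;
    Ok(lookup
        .iter()
        .map(|srv| (srv.target().to_utf8(), srv.port()))
        .collect())
}
```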
```rust
    /// This is the primary BOLT-0010 query that returns virtual hostnames
    /// encoding node public keys and their listening ports.
    pub async fn query_srv(&self, seed: &str) -> Result<Vec<SrvRecord>, DnsBootstrapError> {
        let query_name = format!("_nodes._tcp.{}.", seed);
```
`query_name` unconditionally appends a trailing dot, so a config value like `nodes.lightning.wiki.` becomes `_nodes._tcp.nodes.lightning.wiki..` and may fail to resolve. Normalize `seed` (e.g., trim the trailing `.`) before formatting, and/or validate seed strings during config parsing.
Suggested change:
```diff
-        let query_name = format!("_nodes._tcp.{}.", seed);
+        let normalized_seed = seed.trim_end_matches('.');
+        let query_name = format!("_nodes._tcp.{}.", normalized_seed);
```
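The "validate seed strings during config parsing" half of the comment could look roughly like the sketch below; the function name and the plain `String` error type are hypothetical, not part of the PR.

```rust
/// Hypothetical config-parse validation: normalize the seed and reject values
/// that are empty or contain labels with characters outside [a-z0-9-].
fn validate_seed(seed: &str) -> Result<String, String> {
    let normalized = seed.trim_end_matches('.').to_ascii_lowercase();
    let valid = !normalized.is_empty()
        && normalized.split('.').all(|label| {
            !label.is_empty()
                && label.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
        });
    if valid {
        Ok(normalized)
    } else {
        Err(format!("invalid DNS seed: {seed}"))
    }
}
```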
```rust
            let addr = SocketAddr::new(ip, srv.port);
            peers.push(BootstrappedPeer { pubkey, node_id, addr });
        }
```
`sample_node_addrs` may include duplicates (the same node returned multiple times across SRV records or across seeds), leading to redundant connection attempts. Consider tracking a seen set (e.g., `HashSet<NodeId>` or `HashSet<(NodeId, SocketAddr)>`) and skipping duplicates before pushing into `peers`.
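A minimal sketch of that dedup step, assuming peers are keyed by `(NodeId, SocketAddr)`; the helper name is hypothetical, and the PR could equally do this inline while building `peers`.

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

use lightning::routing::gossip::NodeId;

/// Hypothetical helper: keep only the first occurrence of each (node_id, addr)
/// pair so repeated SRV answers across seeds don't trigger duplicate dials.
fn dedup_peers(peers: Vec<(NodeId, SocketAddr)>) -> Vec<(NodeId, SocketAddr)> {
    let mut seen: HashSet<(NodeId, SocketAddr)> = HashSet::new();
    peers.into_iter().filter(|p| seen.insert(p.clone())).collect()
}
```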
```rust
    pub async fn sample_node_addrs(
        &self, num_addrs: usize, ignore: &HashSet<NodeId>,
    ) -> Result<Vec<BootstrappedPeer>, DnsBootstrapError> {
```
New DNS bootstrap behavior isn't covered by tests (selection/ignore/dedup/error handling). Consider making the DNS query layer injectable (e.g., via a trait) so `DnsBootstrapper::sample_node_addrs` can be unit-tested deterministically without real DNS.
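A rough sketch of the suggested seam, assuming Rust 1.75+ (native `async fn` in traits) and simplified record/error types rather than the PR's `SrvRecord`/`DnsBootstrapError`; all names here are hypothetical.

```rust
use std::net::IpAddr;

/// Simplified stand-in for the PR's SRV record type.
pub struct SrvRecordStub {
    pub target: String,
    pub port: u16,
}

/// Hypothetical seam: the bootstrapper depends on this trait instead of a
/// concrete resolver, so tests can inject canned DNS answers.
pub trait DnsQuery {
    async fn query_srv(&self, seed: &str) -> Result<Vec<SrvRecordStub>, String>;
    async fn resolve_host(&self, host: &str) -> Result<Vec<IpAddr>, String>;
}

/// Deterministic test double returning fixed SRV targets and addresses.
pub struct StaticDns {
    pub srv: Vec<(String, u16)>,
    pub ips: Vec<IpAddr>,
}

impl DnsQuery for StaticDns {
    async fn query_srv(&self, _seed: &str) -> Result<Vec<SrvRecordStub>, String> {
        Ok(self
            .srv
            .iter()
            .map(|(target, port)| SrvRecordStub { target: target.clone(), port: *port })
            .collect())
    }

    async fn resolve_host(&self, _host: &str) -> Result<Vec<IpAddr>, String> {
        Ok(self.ips.clone())
    }
}
```

The bootstrapper could then be generic, e.g. `DnsBootstrapper<D: DnsQuery>`, using the hickory-backed implementation in production and `StaticDns` in unit tests of selection, ignore, dedup, and error paths.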
```rust
println!(
    "DNS bootstrap enabled: {} seeds, target {} peers, interval {}s",
    bootstrapper.config().seeds.len(),
    num_peers,
    interval_secs,
);

tokio::spawn(async move {
    // Initial delay to let networking and gossip start.
    tokio::time::sleep(Duration::from_secs(5)).await;

    let mut interval =
        tokio::time::interval(Duration::from_secs(interval_secs));
    loop {
        interval.tick().await;

        if stop_bootstrap.load(Ordering::Acquire) {
            return;
        }

        // Build ignore set from currently connected peers.
        let ignore: std::collections::HashSet<
            lightning::routing::gossip::NodeId,
        > = bootstrap_pm
            .list_peers()
            .iter()
            .map(|p| {
                lightning::routing::gossip::NodeId::from_pubkey(
                    &p.counterparty_node_id,
                )
            })
            .collect();

        match bootstrapper.sample_node_addrs(num_peers, &ignore).await {
            Ok(peers) => {
                println!(
                    "[dns_bootstrap] Discovered {} peers",
                    peers.len()
                );
                for peer in peers {
                    let _ = cli::do_connect_peer(
                        peer.pubkey,
                        peer.addr,
                        Arc::clone(&bootstrap_pm),
                    )
                    .await;
                }
            },
            Err(e) => {
                eprintln!("[dns_bootstrap] Bootstrap failed: {}", e);
            },
        }
    }
});
```
`interval_secs` comes from config and can be set to 0, but `tokio::time::interval(Duration::from_secs(0))` panics. Add validation (e.g., reject 0 with a clear error) before spawning the task, or clamp to a minimum of 1s so misconfiguration can't crash the bootstrap task.
Suggested change:
```rust
if interval_secs == 0 {
    eprintln!(
        "[dns_bootstrap] Invalid configuration: interval_secs must be greater than 0; DNS bootstrap disabled."
    );
} else {
    println!(
        "DNS bootstrap enabled: {} seeds, target {} peers, interval {}s",
        bootstrapper.config().seeds.len(),
        num_peers,
        interval_secs,
    );
    tokio::spawn(async move {
        // Initial delay to let networking and gossip start.
        tokio::time::sleep(Duration::from_secs(5)).await;
        let mut interval =
            tokio::time::interval(Duration::from_secs(interval_secs));
        loop {
            interval.tick().await;
            if stop_bootstrap.load(Ordering::Acquire) {
                return;
            }
            // Build ignore set from currently connected peers.
            let ignore: std::collections::HashSet<
                lightning::routing::gossip::NodeId,
            > = bootstrap_pm
                .list_peers()
                .iter()
                .map(|p| {
                    lightning::routing::gossip::NodeId::from_pubkey(
                        &p.counterparty_node_id,
                    )
                })
                .collect();
            match bootstrapper.sample_node_addrs(num_peers, &ignore).await {
                Ok(peers) => {
                    println!(
                        "[dns_bootstrap] Discovered {} peers",
                        peers.len()
                    );
                    for peer in peers {
                        let _ = cli::do_connect_peer(
                            peer.pubkey,
                            peer.addr,
                            Arc::clone(&bootstrap_pm),
                        )
                        .await;
                    }
                },
                Err(e) => {
                    eprintln!("[dns_bootstrap] Bootstrap failed: {}", e);
                },
            }
        }
    });
}
```
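If clamping is preferred over rejecting the config value, a minimal alternative (a sketch, not part of the suggestion above) is to floor the interval right before the timer is built:

```rust
// Clamp to at least 1 second so tokio::time::interval never gets a zero period.
let interval_secs = interval_secs.max(1);
let mut interval = tokio::time::interval(Duration::from_secs(interval_secs));
```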
```rust
    /// Primary seed domain (e.g. "nodes.lightning.wiki").
    pub primary: String,
    /// Optional fallback SOA shim for TCP fallback (e.g. "soa.nodes.lightning.wiki").
    pub fallback_soa: Option<String>,
}
```
`fallback_soa` is defined and documented as a TCP fallback mechanism but is never used anywhere in the bootstrapper/query logic. Either implement the SOA/TCP fallback behavior or remove this field (and the config help example) to avoid exposing non-functional configuration.
Signed-off-by: dzdidi <dzdidi@protonmail.com>
5de4407 to 5b96479
DNS bootstrap instead of manual
Blocked by / Based on #6