Description
Created by @CMCDragonkai
Specification
There are two types of data flow in the MDNS system: Polling (Pull) and Announcements/Responses (Push). When a node joins the MDNS group, its records are pushed to all other nodes. However, for the joined node to discover other nodes, it needs to conduct polling queries that other nodes respond to.
Sending Queries

The mDNS spec states that queries can carry additional records, but we won't do this as it isn't necessary.
Queries won't have any other records attached, much like a standard DNS query packet (although an mDNS query packet can contain multiple questions).
In the case that a responder is bound to two interfaces that are connected to the same network (such as a laptop with Wi-Fi + Ethernet connected), queries asking for the IP address of one of the responder's hostnames will receive multiple responses with different IP addresses.
This behavior is documented in RFC 6762, Section 14.
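As a rough sketch of what sending such a query looks like on the wire, the following uses Node's dgram module to send a single-question packet to the IPv4 multicast group. The packet shape and generatePacket are assumed from the parser/generator utilities specified further below, and the service type is purely illustrative.

```ts
import * as dgram from 'dgram';

// Assumed to exist per the generator utilities specified below.
declare function generatePacket(packet: unknown): Uint8Array;

const MDNS_ADDR = '224.0.0.251';
const MDNS_PORT = 5353;

const socket = dgram.createSocket({ type: 'udp4', reuseAddr: true });
socket.bind(MDNS_PORT, () => {
  socket.addMembership(MDNS_ADDR);
  // A single PTR question asking for instances of an illustrative service type
  const packet = generatePacket({
    id: 0,
    flags: { type: 'query' },
    questions: [{ name: '_polykey._tcp.local', type: 'PTR', class: 'IN' }],
  });
  socket.send(packet, MDNS_PORT, MDNS_ADDR);
});
```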
Control Flow
Unlike other mDNS libraries, we're going to use an AsyncIterator so that the consumer has more control over the querying. An example of this would be:
```ts
async function* query(
  {...}: Service,
  minimumDelay: number = 1,
  maximumDelay: number = 3600,
) {
  let delay = minimumDelay;
  while (true) {
    await this.sendPacket(...);
    // Exponential backoff capped at maximumDelay
    delay = Math.min(delay * 2, maximumDelay);
    yield delay;
  }
}
```

It has since been decided that the query system's runtime will be contained within MDNS rather than being consumer-driven. This means that scheduled background queries will have to be managed by a TaskManager (similar to Polykey).
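Since the TaskManager API is not pinned down in this spec, the following is only a sketch of the background re-query behaviour such a scheduled task would implement, using a plain timer and a hypothetical sendQuery in place of the real internals:

```ts
// Hypothetical stand-in for MDNS's internal query send.
declare function sendQuery(serviceType: string): Promise<void>;

// Sketch of a scheduled background query loop with exponential backoff.
// In the real implementation this would be rescheduled by the TaskManager
// rather than looping with setTimeout.
async function backgroundQuery(
  serviceType: string,
  minDelay: number = 1,
  maxDelay: number = 3600,
  signal?: AbortSignal,
): Promise<void> {
  let delay = minDelay;
  while (!signal?.aborted) {
    await sendQuery(serviceType);
    await new Promise((resolve) => setTimeout(resolve, delay * 1000));
    delay = Math.min(delay * 2, maxDelay);
  }
}
```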
Data Flow
Receiving Announcements/Responses (Pull)
Data Flow
Because queries are basically fire-and-forget, the main work comes in the form of receiving query responses from the multicast group. Hence, our querier needs to be able to collect records with a fan-in approach using a reactive muxer:
This can also be interpreted as a series of state transitions to completely build a service.

We also need to consider that if the threshold for a muxer to complete is not reached, additional queries are sent off in order to reach the finished state (see the sketch below).
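A rough sketch of that aggregation, assuming illustrative record and service shapes (these are not the final types):

```ts
// Fold incoming resource records into a partial service (fan-in muxing).
type PartialService = {
  name?: string;              // from PTR
  port?: number;              // from SRV
  txt?: Map<string, string>;  // from TXT
  addresses: string[];        // from A/AAAA
};

function addRecord(
  service: PartialService,
  record: { type: string; data: any },
): void {
  switch (record.type) {
    case 'PTR': service.name = record.data; break;
    case 'SRV': service.port = record.data.port; break;
    case 'TXT': service.txt = record.data; break;
    case 'A':
    case 'AAAA': service.addresses.push(record.data); break;
  }
}

// A service is "finished" once every required record type has arrived;
// anything still missing is re-queried until the finished state is reached.
function missingRecordTypes(service: PartialService): string[] {
  const missing: string[] = [];
  if (service.name == null) missing.push('PTR');
  if (service.port == null) missing.push('SRV');
  if (service.txt == null) missing.push('TXT');
  if (service.addresses.length === 0) missing.push('A/AAAA');
  return missing;
}
```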

The decision tree for such would be as follows:

Control Flow
Instances of MDNS will extend EventTarget in order to emit events for service discovery/removal/etc.
```ts
class MDNS extends EventTarget {
}
```

The cache will be managed using a timer that is set to the soonest record TTL, rather than a timer for each record. The cache will also need to be an LRU in order to make sure that malicious responders cannot overwhelm it.
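A sketch of that single-timer, LRU-bounded cache (record shape and maximum size are illustrative assumptions):

```ts
// Sketch: one timer tracks the soonest-expiring record instead of one timer
// per record, and insertion order in the Map gives us cheap LRU eviction.
type CachedRecord = { key: string; expiresAt: number };

class RecordCache {
  protected records = new Map<string, CachedRecord>();
  protected timer?: NodeJS.Timeout;
  constructor(protected maxSize: number = 1024) {}

  set(key: string, ttl: number): void {
    // LRU eviction: drop the oldest entry when full
    if (this.records.size >= this.maxSize) {
      const oldest = this.records.keys().next().value;
      if (oldest != null) this.records.delete(oldest);
    }
    this.records.delete(key); // re-insert to mark as most recently used
    this.records.set(key, { key, expiresAt: Date.now() + ttl * 1000 });
    this.resetTimer();
  }

  protected resetTimer(): void {
    clearTimeout(this.timer);
    let soonest: CachedRecord | undefined;
    for (const record of this.records.values()) {
      if (soonest == null || record.expiresAt < soonest.expiresAt) soonest = record;
    }
    if (soonest == null) return;
    // Single timer set to the soonest record TTL
    this.timer = setTimeout(() => {
      this.records.delete(soonest!.key);
      this.resetTimer();
    }, Math.max(soonest.expiresAt - Date.now(), 0));
  }
}
```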
Sending Announcements
Control Flow
This will need to be experimented with a little. Currently the decisions are:
- registerService cannot be called before start is called.
- create should take in services in place of this (i.e., services can be provided at creation instead).
- stop should deregister all services
- destroy should remove everything from the instance
```ts
class MDNS extends EventTarget {
  create()
  start()
  stop()
  register()
  deregister()
}
```
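Hypothetical consumer usage of that lifecycle (all names, options, and signatures here are assumptions, since the API is still being experimented with):

```ts
// All method names/arguments are assumptions for illustration only.
const mdns = await MDNS.create();       // create could take initial services instead
await mdns.start();
await mdns.register({
  name: 'polykey-PUBLICKEYFINGERPRINT', // illustrative instance name
  type: '_polykey._tcp',
  port: 1314,                           // illustrative port
});
// Discovery events are emitted on the instance since it extends EventTarget
await mdns.stop();     // deregisters all services
await mdns.destroy();  // removes everything from the instance
```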
Types
Messages can be Queries or Announcements or Responses.
This can be expressed as:
```ts
type MessageType = "query" | "announcement" | "response";
type Message =
  | ["query", QuestionRecord]
  | ["announcement" | "response", ResourceRecord];
const message: Message = ["query", {...}];
```

Parser / Generator
Parsing and generation together are not isomorphic, as different parsed Uint8Array packets can result in the same packet structure.
Every worker parser function will return the value wrapped in an object of this type:
```ts
type Parsed<T> = {
  data: T;
  remainder: Uint8Array;
};
```

The point of this is that whatever hasn't been parsed gets returned in .remainder, so we don't have to keep track of the offset manually. This means that each worker function also needs to take in a second Uint8Array representing the original data structure.
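As a minimal example of this convention (assuming an ErrorDNSParse error class as described in the list below), a worker function for the 16-bit packet id could look like:

```ts
class ErrorDNSParse extends Error {}

// Parse the 16-bit packet id and hand everything after it back in `.remainder`.
// The `original` parameter is unused here, but is threaded through so that
// name-parsing functions can dereference compression pointers against the
// full packet.
function parseId(input: Uint8Array, original: Uint8Array): Parsed<number> {
  if (input.byteLength < 2) {
    throw new ErrorDNSParse('id parse failed: not enough bytes');
  }
  const view = new DataView(input.buffer, input.byteOffset, input.byteLength);
  return {
    data: view.getUint16(0, false), // DNS fields are big-endian
    remainder: input.subarray(2),
  };
}
```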
- DNS Packet Parser Generator Utilities
  - Parser
    - `parsePacket(Uint8Array): Packet`
    - Headers
      - `parseHeader(Uint8Array): {id: ..., flags: PacketFlags, counts: {...}}`
      - Id - `parseId(Uint8Array): number`
      - Flags - `parseFlags(Uint8Array): PacketFlags`
      - Counts - `parseCount(Uint8Array): number`
    - Question Records
      - `parseQuestionRecords(Uint8Array): {...}`
      - `parseQuestionRecord(Uint8Array): {...}`
    - Resource Records
      - `parseResourceRecords(Uint8Array): {...}`
      - `parseResourceRecord(Uint8Array): {...}`
      - `parseResourceRecordName(Uint8Array): string`
      - `parseResourceRecordType(Uint8Array): A/CNAME`
      - `parseResourceRecordClass(Uint8Array): IN`
      - `parseResourceRecordLength(Uint8Array): number`
      - `parseResourceRecordData(Uint8Array): {...}`
      - `parseARecordData(Uint8Array): {...}`
      - `parseAAAARecordData(Uint8Array): {...}`
      - `parseCNAMERecordData(Uint8Array): {...}`
      - `parseSRVRecordData(Uint8Array): {...}`
      - `parseTXTRecordData(Uint8Array): Map<string, string>`
      - `parseOPTRecordData(Uint8Array): {...}`
      - `parseNSECRecordData(Uint8Array): {...}`
    - String Pointer Cycle Detection
      - Every time a string is parsed, we take a reference to the beginning and end of the string so that pointers cannot point to the start of a string in a way that would loop infinitely. A separate index table tracks the path of dereferences to make sure a deadlock doesn't happen (see the sketch after this list).
    - Errors at each parsing function instead of letting the DataView fail
      - `ErrorDNSParse` - generic error whose message contains information for the different exceptions, e.g. `id parse failed at ...`
    - Record Keys
      - `parseResourceRecordKey` and `parseQuestionRecordKey` and `parseRecordKey` - `parseLabels`
  - Generator
    - `generatePacket(Packet): Uint8Array`
    - Header
      - `generateHeader(id, flags, counts, ...)`
      - Id
      - Flags - `generateFlags({ ... }): Uint8Array`
      - Counts - `generateCount(number): Uint8Array`
    - Question Records
      - `generateQuestionRecords(): Uint8Array` - `flatMap(generateQuestion)`
      - `generateQuestionRecord(): Uint8Array`
    - Resource Records (KV)
      - `generateResourceRecords()`
      - `generateRecord(): Uint8Array`
        - `generateRecordName` - "abc.com" - `...RecordKey`
        - `generateRecordType` - A/CNAME
        - `generateRecordClass` - IN
        - `generateRecordLength`
        - `generateRecordData`
          - `generateARecordData(string): Uint8Array`
          - `generateAAAARecordData(string): Uint8Array`
          - `generateCNAMERecordData(string): Uint8Array`
          - `generateSRVRecordData(SRVRecordValue): Uint8Array`
          - `generateTXTRecordData(Map<string, string>): Uint8Array`
          - `generateOPTRecordData(Uint8Array): Uint8Array`
          - `generateNSECRecordData(): Uint8Array`
  - Integrated into MDNS
- MDNS
  - Querying
    - `MDNS.query()` - query services of a type
    - `MDNS.registerService()` / `MDNS.unregisterService()`
  - Responding
    - Listening to queries
    - Responding to all queries with all records
    - Respond to unicast
    - Truncated bit
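Below is a sketch of how parseLabels could implement the pointer cycle detection mentioned in the list above: every compression pointer that is followed gets recorded, and revisiting an offset aborts the parse instead of looping forever. The exact signature and bookkeeping are assumptions.

```ts
class ErrorDNSParse extends Error {}

// Sketch of parseLabels with compression-pointer cycle detection.
// `original` is the whole packet; `offset` is where the name starts.
function parseLabels(
  original: Uint8Array,
  offset: number,
): { name: string; readBytes: number } {
  const labels: string[] = [];
  const visited = new Set<number>(); // offsets of pointers already followed
  let pos = offset;
  let readBytes = 0;
  let jumped = false;
  while (true) {
    if (pos >= original.byteLength) {
      throw new ErrorDNSParse(`label parse failed at ${pos}: out of bounds`);
    }
    const length = original[pos];
    if ((length & 0xc0) === 0xc0) {
      // Compression pointer: 14-bit offset into the original packet
      const pointer = ((length & 0x3f) << 8) | original[pos + 1];
      if (visited.has(pointer)) {
        throw new ErrorDNSParse(`label parse failed at ${pos}: pointer cycle`);
      }
      visited.add(pointer);
      if (!jumped) readBytes += 2;
      jumped = true;
      pos = pointer;
    } else if (length === 0) {
      // Root label terminates the name
      if (!jumped) readBytes += 1;
      return { name: labels.join('.'), readBytes };
    } else {
      const label = original.subarray(pos + 1, pos + 1 + length);
      labels.push(String.fromCharCode(...label));
      if (!jumped) readBytes += 1 + length;
      pos += 1 + length;
    }
  }
}
```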
Testing
We can use two MDNS instances on separate ports interacting with each other to test both querying and responding.
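A rough jest-style sketch of such a test, reusing the hypothetical lifecycle API from above (ports, option names, and event names are assumptions):

```ts
test('two instances can query and respond to each other', async () => {
  const responder = await MDNS.create();
  const querier = await MDNS.create();
  await responder.start({ port: 5353 });
  await querier.start({ port: 5354 });
  await responder.register({ name: 'test', type: '_polykey._tcp', port: 1314 });
  // Wait for the querier to emit a discovered service
  const discovered = new Promise((resolve) => {
    querier.addEventListener('service', resolve, { once: true });
  });
  querier.query({ type: '_polykey._tcp' });
  await expect(discovered).resolves.toBeDefined();
  await querier.stop();
  await responder.stop();
});
```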
Additional Context
The following discussion from 'Refactoring Network Module' MR should be addressed:
- @CMCDragonkai: (+3 comments)
Next things:
- Multicast discovery (bring back what we had in the old code)
- Update our peer table
- Implement kademlia on peer table
- Refactor GRPC commands for this
- Produce test harness for multiple nodes on a local network
Multicast Discovery
DNS-SD and mDNS are related protocols for discovery on a local network. This local network is usually designated by a "home router" or a switch, but is basically some arbitrary subnet with gateways that are part of a "single network".
The way this works is that there's a special address (one for IPv4 and one for IPv6) that mDNS software both listens on and sends UDP packets to. This is known as a "multicast" address.
The UDP packet contains information structured as a "DNS message", basically the same kind of message you get when you actually query a DNS server to resolve a hostname.
What we have to do is register a hostname for our application. Let's say "polykey", this registration is done on IANA (just like how we registered the OID for our company). This list is here: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml and our application for an assignment is here: https://www.iana.org/form/ports-services
Now once we build a multicast mDNS client/server, we can then advertise on the multicast address by sending a DNS UDP packet that claims "polykey.local" is registered to IP at PORT. Simultaneously we can listen on the multicast address for DNS packets, and filter for all claims to "polykey.local".
Since multiple PK agents can occur on the same subnet, we may either have to differentiate on a subdomain of our hostname, or add additional records to the hostname. It is possible for a given hostname to contain multiple DNS records (and these may indicate IP and PORT assignments). I'm not sure if this is the right way to do this. Need to do a bit of research.
At the end of the day, the PK node table gets populated from this initial multicast discovery. We have to have an mDNS client (to send multicast) and an mDNS server (to listen for multicast packets).
Note that there is software like Avahi that acts as an mDNS server/client; it is used by other software that doesn't have native mDNS capabilities. This is not important to us... we will be advertising directly.
According to https://askubuntu.com/a/1105895 it may be possible that the "service type" is different from the hostname. Therefore it's possible to randomly allocate a hostname and only listen for a given "service type". In which case `_polykey._tcp` would be the service type, and you could get hostnames like `polykey-PUBLICKEYFINGERPRINT.local`, which should be considered unique. Therefore from a service discovery point of view, the only things static are the IP used for multicast and the "service type", which is globally set up at IANA. The hostnames, IPs, and even ports are all dynamically set up.
The Node library https://github.com/agnat/node_mdns binds into system-level services and provides the ability to listen for and query mDNS packets.
A lower level library in pure-js is here: https://github.com/mafintosh/multicast-dns that is used with https://github.com/mafintosh/multicast-dns-service-types according to this issue: Service advertisement mafintosh/multicast-dns#6 (comment)
- https://blog.apnic.net/2022/05/03/how-nat-traversal-works-concerning-cgnats/
- https://support.citrix.com/article/CTX205483/how-to-accommodate-hairpinning-behaviour-in-netscaler
- Domain Names RFC: https://datatracker.ietf.org/doc/html/rfc1035
- Extension Mechanisms for DNS RFC: https://datatracker.ietf.org/doc/html/rfc6891
Tasks
- Parser - 5.5 days
  - Packet Header - 0.5 days
  - Packet Flags - 0.5 days
  - Questions - 0.5 days
  - Resource Records - 4 days
- Generator - 5.5 days
  - Packet Header - 0.5 days
  - Packet Flags - 0.5 days
  - Questions - 0.5 days
  - Resource Records - 4 days
- MDNSCache - 2.5 days
  - Multi-Keyed Maps for ResourceRecord Querying - 0.5 days
  - TTL Expiry Record Invalidation - 0.5 days
  - Reverse Host to Record Mapping - 0.5 days
  - LRU to Prevent DoS - 0.5 days
  - Support use as local resource record in-memory database - 0.5 days
- TaskManager - ? days
  - Migrate to in-memory - ? days
- MDNS - 11.5 days
  - UDP Networking Logic - 2 days
    - Socket Binding to Multiple Interfaces - 1 day
    - Error Handling - 1 day
  - Querier - 4 days
    - Service Querying - 2.5 days
      - Record Aggregation for Muxing Together Services - 0.5 days
      - Querying for a Service's Missing Records - 0.5 days
      - Emitting Expired Services - 0.5 days
    - Unicast - 1.5 days
      - Checking for Unicast Availability - 1 day
      - Sending Queries with Unicast Enabled - 0.5 days
  - Responder - 5.5 days
    - Service Registration - 0.5 days
    - Filter Messages Received from Multicast Interface Loopback - 0.5 days
    - Unicast
      - Responding to Unicast Queries - 0.5 days
    - UDP Networking Logic - 2 days