Friday, July 6, 2007

What's wrong with DNS?

DNS was designed in the days when long-term storage was scarce and so
was bandwidth. Technology was difficult and arcane, with the number of
human implementors measured in the hundreds and the number of users
in the thousands. I remember that because I was there. There was no
other way to pass this data. Something had to be first, and that was
DNS, which reflected the times and the organizations that invented it
in the first place.

Those days are long gone in the era of the $20 grocery-store 2 GB
flash RAM, millions of implementors, and billions of users.

Today's Internet is a consumer-driven, multi-modal world that has no
patience for that. To cripple bootstrap by leashing it to creaky DNS
is a disservice to the future.

Bootstrap should have requirements for the obvious things, such as
satisfying the security AD, but the actual mechanism should be left
open to individual implementations that reflect a rapidly changing
modern world.

Hardcoding is a quaint way of solving the problem, and it is far too
easy for protocol engineers to design in a vacuum. When a single
handheld device can receive inputs by SMS, email, walled and unwalled
IP, and HTTP, the number of sources for a bootstrap is open-ended and
impossible to mandate without creating limitations on functionality
and usefulness in a hybridized multi-modal world.

A MIME message, encrypted with a private key known to that device,
could contain any number of bootstraps, be they DNS names, hardcoded
IPs (IPv4 or IPv6? Which?), local names, peer names, SMS addresses,
SIP URIs, what have you. A list based on operating criteria that can
evolve over time.
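
As a rough sketch of what such a list could look like, using Python's
standard email library. The scheme prefixes and addresses here are
invented for illustration, and the encryption step is omitted:

    import json
    from email.mime.text import MIMEText

    # Invented bootstrap entries; the prefixes (dns:, ip:, sms:, sip:)
    # are illustrative, not from any standard.
    entries = [
        "dns:overlay.example.org",
        "ip:203.0.113.7:5060",        # hardcoded IPv4
        "ip6:[2001:db8::7]:5060",     # hardcoded IPv6
        "sms:+15555550100",
        "sip:bootstrap@example.org",
    ]

    msg = MIMEText(json.dumps(entries), "plain")
    msg["Subject"] = "bootstrap-list"
    print(msg.as_string())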

This message, more 'carrier pigeon' than bootstrap, can be delivered
via a variety of means, including C/S SIP, or stored from a
previous DHT. A primitive form of unencrypted messaging is called
'paper'.

In this environment a 'friend' network, of names associated with keys,
could be seeded entirely by SMS messages and never ever need DNS, and
it works when zeroconf fails, which is often. It could later use
broadcasts or multicasts to find other 'friends', using one of a
variety of techniques, zeroconf being one.
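
To make that concrete, here is one way a node might check a friend
record arriving over SMS using nothing but a pre-shared key, so no DNS
(or any network lookup at all) is involved. The payload format and
field names are invented for the example:

    import hashlib
    import hmac
    import json

    # Hypothetical: key exchanged out of band (e.g., in person).
    PSK = b"secret-exchanged-out-of-band"

    def verify_friend_sms(payload):
        """Payload looks like '<json-record>|<hex-hmac>' (invented format)."""
        record, _, tag = payload.rpartition("|")
        expected = hmac.new(PSK, record.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, tag):
            return json.loads(record)  # e.g. {"name": "alice", "key": "..."}
        return None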

Create the requirements for the bootstrap that guarantee security,
functionality, and interoperability. Create the list of appropriate
beacons, allowing room for more in the future.
//////////////////////////////////////////////////////////////////////////////////
That's certainly my thinking in bringing this up. Just do what's easy and knock off the biggest win -- a low-cost SIP infrastructure.

As I see it, this approach has the following advantages:

1) Zero changes to proxy servers and registrars.
- Non-firewalled peers become proxies and registrars just like before. They should literally be able to use the same code as the open source proxies and registrars out there, because nothing changes in the lookup.

2) Elimination of the most complicated part of the current architecture -- the DHT. It's also the thorniest piece to standardize.

3) Lower latency for completing calls. No more waiting around for the DHT to respond. Call completion happens just like it does with any other SIP call (if the DNS is up-to-date -- more below).


Then there are the disadvantages:

1) Delay of DNS propagation when registrars go down or clients move. I think this is the toughest to get a good read on because I don't know how it would play out in the field.

2) You'd need to run a dynamic DNS *client* on every node (at least that seems like the best approach to me -- see the sketch after this list).

3) You'd be relying on some servers out there to do the dynamic DNS. Possibly existing dynamic DNS providers?

4) Doesn't work in completely disconnected scenarios.
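
For what it's worth, the dynamic DNS update imagined in item 2 is only a few lines with dnspython's RFC 2136 support; the zone, TSIG key, and server address below are placeholders:

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # Placeholder zone, key name/secret, and server address.
    keyring = dns.tsigkeyring.from_text({"peerkey.": "c2VjcmV0LXNlY3JldA=="})
    update = dns.update.Update("overlay.example.org", keyring=keyring)
    # Point this peer's name at its current address, with a short TTL.
    update.replace("peer42", 60, "A", "203.0.113.7")
    response = dns.query.tcp(update, "192.0.2.1")
    print(response.rcode())  # 0 (NOERROR) on success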

Keep in mind I'm not necessarily advocating this shift, but it certainly resonated with me once I realized the full implications of Paul's question to me the other day.

///////////////////////////////////////////////////////////////////////////////////
I don't want to beat a dead horse, but:
1. Most of the dynamic DNS implementations I'm familiar with have update
times on the order of a couple of minutes. It's not "real time", but it's
pretty short. This is reflected in the TTL of the answer.

2. Any form of DNS requires a set of caching resolvers and authoritative
servers. If you call that "centralized", then you are correct. I always
think of DNS as being very decentralized except for the roots, and most
implementations I know keep working if the connection to the root is
lost, as long as the answer is in the cache.

3. If you combine a short TTL with a disconnected server, then caching
doesn't work (with a 60-second TTL, a cached answer expires within a
minute of the connection dropping); resolvers need access to the
authoritative servers (plural) that provide the data. It is servers
plural, not singular. I think this is the reason dynamic DNS is not the
best choice, although we may have been hasty in coming to that
conclusion.

4. I think identifiers in these systems are in the form of user@domain,
where domain is the "Overlay Id". Humans may never see that, but it's
there. Resolving "domain" to at least a starter set of peers has to happen
somehow. We have only a few tricks in our bag for that:
a. Configuration
b. Multicast
c. DNS
Are there other mechanisms? There are multiple configuration mechanisms,
but they all depend on static information, not dynamic, learned data.
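
For (c), the usual trick would be an SRV lookup on the overlay domain;
a sketch with dnspython, where the service label and domain are made up:

    import dns.resolver

    # Hypothetical service label and overlay domain.
    answers = dns.resolver.resolve("_overlay._udp.example.org", "SRV")
    print("TTL:", answers.rrset.ttl)
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(rr.target, rr.port)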

Agreed: we explicitly declared dynamic DNS and multicast out of scope.
That leaves basic DNS and Configuration.

When networks become disconnected, local caching resolvers still work: if
you are within a domain and you want a DNS answer from that domain, it
works.


Many existing systems work by configuration. I'd rather not do that.
/////////////////////////////////////////////////////////////////////////////////
However, if a kinder, gentler, thousand-points-of-light,
better/faster/cheaper DNS is to be implemented, please look into the
steroid-level performance of CoDNS.
