From: John Stanley (stanley@PEAK.ORG)
Date: Fri Jul 10 1998 - 15:43:01 CDT
On Fri, 10 Jul 1998, Brad Templeton wrote:
> Collapse servers can work 2 ways. One is simply to have "major" nodes
> on the net do collapse. Articles start out big, with a fat certificate
> set on them. They go downstream, but if done right, soon they run into
> a "major" server whose key is one in the list of those known to all sites.
> That server collapses the certs and resigns the article, and forwards it on.
And how does a site know that its key is known to all sites? Currently it
isn't possible to come up with a list of all sites, much less determine
what they "know". Or are you saying that someone* will create* a list*
that is supposed* to be known to all* sites? (I've marked the problem
words with * -- I count 5 in that sentence.)
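To make the mechanism being debated concrete, here is a minimal sketch of Templeton's "collapse" step. Everything in it is assumed, not taken from any spec: the field names, the use of an HMAC as a stand-in for a real public-key signature, and the idea that the major node simply drops the cert set after verifying it.

```python
import hashlib
import hmac

def sign(key: bytes, body: bytes) -> bytes:
    # Stand-in for a real public-key signature (hypothetical).
    return hmac.new(key, body, hashlib.sha256).digest()

def collapse(article: dict, major_key: bytes) -> dict:
    """A 'major' node replaces the fat certificate set with a single
    signature under its own well-known key, then forwards the article."""
    # (A real server would first verify every cert in the chain.)
    collapsed = dict(article)
    collapsed["certs"] = []                          # drop the fat cert set
    collapsed["sig"] = sign(major_key, article["body"])
    return collapsed

article = {"body": b"example post", "certs": [b"cert1", b"cert2", b"cert3"]}
small = collapse(article, major_key=b"major-node-key")
assert small["certs"] == [] and len(small["sig"]) == 32
```

The whole scheme hinges on downstream sites already holding `major-node-key`, which is exactly the "known to all sites" assumption questioned above.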
> This uses the USENET flood method, and has the risk that if the path of
> an article happens to make it to you without passing through a major node,
> you get an article that is somewhat bulkier.
In other words, the savings in size that we are supposed to expect may not
happen. And we are no longer talking about just cancels, we are talking
about every article being signed; every article containing the "big, fat
certificates" which may or may not be collapsed by the time the article
gets to your site.
> The other way is to E-mail the article to a collapse server address that
> has many, many MX records. It collapses and injects for you. This relies
> on E-mail but means that the cost of the signature system is very small.
I would include the cost of running a "collapse server" in the cost of
the signature system, too.
This is a fundamental change to the way USENET works. It isn't simply the
creation of new headers to signify new meanings, it changes USENET from a
peering system to a parent/child system. It creates a hierarchy of
"bosses" who filter and process articles that then get passed to the
You keep talking about the dangers of systems changing messages as they
pass through. What about the dangers of one of these central servers
changing the message before it signs it? This is a hole, yes? Are we
creating a system with a known hole?
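The hole can be shown in a few lines. This sketch reuses an HMAC as a hypothetical stand-in for a real signature; the point is only that downstream sites verify the collapse server's signature, so a modification made *before* the server signs is undetectable.

```python
import hashlib
import hmac

def sign(key: bytes, body: bytes) -> bytes:
    return hmac.new(key, body, hashlib.sha256).digest()

def verify(key: bytes, body: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, body), sig)

server_key = b"collapse-server-key"
original = b"the author's article"
tampered = b"the author's article, quietly edited"

sig = sign(server_key, tampered)           # server alters, THEN signs
assert verify(server_key, tampered, sig)   # downstream check passes anyway
assert not verify(server_key, original, sig)  # the author's text is what fails
```

Once the original author's certificates have been collapsed away, there is nothing left downstream to check the server itself against.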
> You don't register your key with a collapse server. You don't register your
> key with anybody. You get your key *certified* by some party that is
> known and trusted to do fair certifications.
Let me put the * in your last sentence to mark the problem points.
"You get your key certified* by some party* that is known* and trusted* to
do fair certifications*."
I count five more problems.
> Sites also only keep the keys of a small set of trusted roots and collapsers
> around. Perhaps a few hundred. How do they know what to keep? The keys
> themselves have a "keep me" attribute in them when broadcast. The "keep me",
> like all attributes, is only given to people whose keys you need to keep.
Given by whom? How does that person know what keys you need to keep?
> The high level keep me keys are regularly broadcast in a newsgroup and always
> available for ftp/http pickup.
Was this volume included in your count when you compared cancel locks to
your certificates? I don't think so. I remember "80 bytes per cancel". I
don't remember "constant reposting of hundreds of keys of unknown length."
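A back-of-the-envelope comparison makes the point. The 20-byte cancel lock and 80-bytes-per-cancel figures come from this thread; the key size and repost frequency are pure assumptions, since "unknown length" is exactly the complaint.

```python
# Figures from the thread:
CANCEL_LOCK = 20        # bytes per article
CANCEL_SIG = 80         # bytes per cancel (Templeton's number)

# Assumptions -- the proposal gives neither:
KEY_SIZE = 1024         # assumed bytes per broadcast key
NUM_KEYS = 300          # "perhaps a few hundred"
REPOSTS_PER_MONTH = 4   # "regularly broadcast" -- assumed weekly

keyring_traffic = KEY_SIZE * NUM_KEYS * REPOSTS_PER_MONTH
print(f"monthly key-broadcast overhead: {keyring_traffic} bytes")
# Every site pays this whether or not a single cancel is ever issued.
```

Under these (invented) numbers that is over a megabyte a month of standing overhead before the first article is signed, which is the cost a small-pipe site has to weigh against 20 bytes per article.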
Are we going to abandon small volume sites now? Those sites with small
pipes, who want to get only a few groups and can't afford more? Has the
USENET illuminati become so blinded by OC48 and gigabit switches that
small sites are no longer welcome to play? Yes, ALL of USENET is quite
large. There are parts, however, that are still quite small.
And, as I think Dave has mentioned, what about local servers that do
nothing but serve local groups? What "collapse server" do these people
mail their articles to?
> via FTP or HTTP. In addition, in their first week of life, they get
> included with the articles, and not assumed. You have to be off the net
> for a long time before this would fail.
And you object to a 20 byte cancel lock?