Re: Where and how to evaluate the Entry Filter in sparse LDUP?
>I've been trying to think about how and where the Entry Filter can be
>correctly evaluated when the Consumer is a sparse replica. I haven't
>noticed any very satisfactory answers. Have I missed something? Do
>any of the current drafts address this issue?
The current I-D does not contain any section specifically discussing
Partial Replication. Within the defined primitive processing, there is the
use of an additional timestamp/vector/something. I was not at TRL when the
work was done, and I am told it was intended to solve Partial Replication.
However, it is not a particularly elegant way to do it (sorry, I can't tell
you whether it works or not), and I believe Steve is removing it from the
I-D.
I think this raises a specific issue: there is a lack of documented
information about the design choices, and the reasons for choosing
particular mechanisms, which needs to be kept separate from the developing
internet draft. In my opinion, the internet draft should record the
agreed choices of the group. This design information should explain and set
out the choices the group needs to make and/or the implications of those
choices.
Unless anyone wants to fight me for the job, I'm happy to put together a
quick design document on Partial Replication for discussion/circulation
close to the next meeting. I'll gather the work already done at TRL,
including what works and what doesn't, and I think there's useful material
in the following discussion too. What is needed on the Partial Replication
front at this point in time is some confidence that it doesn't break the
current approach, rather than an exact solution to include in the I-D.
>Let's say an entry "passes" the Entry Filter if it is to be included
>in the sparse replica, and "fails" otherwise.
>First, let me digress on why this is important. Presumably a sparse
>replica holds little or no information about entries that fail the
>Entry Filter. It may hold something like an entry deletion record; it
>won't hold a record of all the attribute values --- otherwise what's
>the point in sparseness? So when an entry changes from failing to
>passing, the Supplier has to send primitives to fully populate the
>entry with all its attribute values (with the right timestamps too ---
>an interesting issue in its own right). So it's important that the
>Supplier know when entries make this change; correctness (but not
>optimal performance) can be maintained if the Supplier errs on the
>side of detecting this change too often. End of digression.
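The "fully populate" step described in the digression can be sketched as follows. The point is that each attribute value must be sent with the timestamp it originally carried, not a fresh one, so the Consumer's conflict resolution behaves as if it had seen the updates all along. All names here (the record shapes, `populate_primitives`, the CSN field) are illustrative, not URP-defined:

```python
from dataclasses import dataclass

@dataclass
class AttributeValue:
    attr: str
    value: str
    timestamp: str  # CSN/timestamp recorded when the value was written

def populate_primitives(entry_dn, values):
    """Emit one add-value primitive per attribute value, reusing the
    timestamp each value originally carried."""
    return [
        {"op": "add-value", "dn": entry_dn, "attr": v.attr,
         "value": v.value, "csn": v.timestamp}
        for v in values
    ]

values = [
    AttributeValue("cn", "Ann Example", "2023-01-02T10:00:00Z"),
    AttributeValue("mail", "ann@example.org", "2023-03-04T12:30:00Z"),
]
prims = populate_primitives("cn=Ann Example,o=Example", values)
```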
Before we get into it, I'll step back one further level, which may be useful
later. If we consider an entry as the basic unit of filtering (which it's
not), then we can say that any DSA which filters this entry must be
connected via one or more replication agreements to the mesh of other DSAs
filtering this entry. That is, the graph of DSAs filtering an entry is
fully meshed. Now, the entry isn't the basic unit, but whatever the basic
unit is, I think the above holds true.
>It seems clear that the Consumer can't, in general, detect when an
>entry changes from failing to passing: the Entry Filter might depend
>on data not stored at the Consumer while the entry is failing, and the
>URP protocol normally sends changes rather than complete entries.
>So let's suppose the Supplier evaluates whether entries pass or fail
>the Entry Filter (the Consumer may also double-check that it isn't
>being given entries that fail the Filter).
Yep. Supplier evaluation makes sense.
I think this is a bad option, but just in case it's useful: what if we
assume all DSAs know all other DSAs' Entry Filters and that these do not
change? Doesn't this mean that when an operation is performed, the
performing DSA will be able to detect whether it will cause a problem with
any other DSA's Entry Filter, and be able to catch it at the source? My
feeling is not, but I don't have a good explanation for why not yet. If
we could say this, I would use the mechanisms defined for introducing a new
DSA for changing an Entry Filter.
I think the state-based approach described below is better than the
log-based one.
>First, let's consider log-based implementations (later we'll look at
>purely state-based implementations). When considering sending a
>primitive that sets some, but not all, of the data tested by the Entry
>Filter, where can the Supplier get the other data examined by the
>Filter? We could imagine rolling back the entry state to the point
>just after the primitive in question, and thus doing a completely
>accurate test. However, I expect most implementors in the WG will not
>want to support such rollbacks.
Actually, a primitive in the replication log does not hang around in the
log after it is passed onto all connected DSAs unless
(1) it changes into a deletion record
(2) it changes into a saved primitive, or
(3) it becomes the valid timestamp on an entry, attribute etc.
This means that a record of previous states is not held. It is an
implementation choice whether the actual primitives causing the current
state are saved. The cleanup mechanisms mean that even these three records
do not need to hang around after a certain length of time.
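The retention rule above can be sketched as a single predicate. The record shapes and field names here are assumptions for illustration; the three long-lived kinds are the ones listed, and even those expire after the cleanup window:

```python
def retain(record, all_consumers_acked, now, cleanup_horizon):
    """Return True if a replication log record must still be kept."""
    if not all_consumers_acked:
        return True  # still owed to at least one connected DSA
    if record["kind"] not in ("deletion-record", "saved-primitive",
                              "valid-timestamp"):
        return False  # plain primitives are dropped once propagated
    # even the three long-lived kinds expire after the cleanup window
    return now - record["time"] <= cleanup_horizon
```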
For the above reasons, a rollback would be messy and implementation
dependent. If a DSA chooses not to keep all replication primitives, then
it cannot roll back accurately. [The above reasons also make the
algorithms easier to describe in a state-based manner - except, as noted
by Mike, for the saved primitives.]
>Another imaginable source is simply
>the current state of the entry. In general, if multiple primitives
>affecting a given entry need to be sent, this test will only be
>correct for the last one.
I think we can assume we only have the current state of the entry.
>When does the Supplier send the full set of entry-populating
>primitives? Clearly it's necessary to send them in place of a
>primitive that, if applied to the state that the entry would have on
>the Consumer if the Consumer were holding that entry (despite its
>failing the Filter), would change the entry from failing to passing.
>However, the Supplier doesn't know what state the entry would have on
>the Consumer. A very simple answer is to send them in place of every
>passing state; as mentioned above, this should be true of at least the
>last primitive in each complete batch to be sent, so we'll have no
>false omissions for long, assuming each batch actually is sent fully
>and quickly (that's an unlikely assumption, but I'll set that aside
>for now). However, that may consume a lot more bandwidth than
>necessary --- the size of the full entry times the number of
>primitives that leave the entry in a passing state.
>It's tempting to make some effort at reducing unnecessary
>transmissions of the full entry. One idea is to have the Supplier
>keep a bit, for every (entry, sparse Consumer) pair, of whether that
>entry passed the Entry Filter for that Consumer at the end of the last
>complete transmission of primitives. This is a worrisomely large
>number of bits, but not inconceivable. The Supplier could decide what
>to send by comparing this bit to its estimate of whether the primitive
>at hand leaves the entry in a passing state: (1) if the bit is FALSE
>and the estimate is TRUE, the full entry is sent; (2) if the bit is
>TRUE and the estimate is TRUE, just the change primitive is sent; (3)
>if the bit is FALSE and the estimate is FALSE, nothing is sent; and
>(4) if the bit is TRUE and the estimate is FALSE, an entry deletion is
>sent. This is correct only if the Consumer has exactly one Supplier.
>We could fix it up in the case of multiple Suppliers by having the
>Consumer return a special request-for-full-entry if it received a
>simple change primitive when it should have received a full
>entry-populating set. That may be a bit too baroque.
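The four cases above can be written out as a decision function. Here "bit" is the stored passed-last-time flag for the (entry, sparse Consumer) pair and "estimate" is the Supplier's guess at whether the primitive leaves the entry in a passing state; the action names are illustrative:

```python
def decide(bit, estimate):
    """Map (passed-last-time bit, passes-now estimate) to an action."""
    if not bit and estimate:
        return "send-full-entry"      # (1) newly passing: populate fully
    if bit and estimate:
        return "send-change"          # (2) still passing: change only
    if not bit and not estimate:
        return "send-nothing"         # (3) still failing: suppress
    return "send-entry-deletion"      # (4) newly failing: delete
```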
I agree with the above logic. Perhaps a slight variation in the timing of
the check will help? At the Supplier, we have:
(1) the current state
(2) the filter of the consumer DSA (perhaps - what if it changes?)
Can we detect the move in/out of filter area for each consumer at the time
of processing the primitive? If yes, can we then determine the part of the
entry the consumer will not have according to the filter? Then, we can
insert appropriate additional primitives to insert the entry into the
consumer DSA with the state at this DSA AFTER the processing of the
primitive occurs at the Supplier DSA (maybe this is where a new primitive
for partial replication is required).
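The timing variation suggested above could be sketched like this: evaluate the consumer's Entry Filter against the entry state before and after applying the primitive, and emit the extra populate/delete primitives at that point. `apply`, the filter callable, and the primitive names are placeholders, not URP-defined operations:

```python
def process(entry, primitive, entry_filter, apply):
    """Apply a primitive, then decide what to send to one sparse consumer
    based on the entry's filter status before and after."""
    before = entry_filter(entry)
    new_state = apply(entry, primitive)
    after = entry_filter(new_state)
    if not before and after:
        return new_state, ["populate-full-entry"]   # moved into filter area
    if before and not after:
        return new_state, ["entry-deletion"]        # moved out
    if after:
        return new_state, ["change-primitive"]      # stayed inside
    return new_state, []                            # stayed outside

state, out = process({"dept": "sales"},
                     {"dept": "eng"},
                     lambda e: e["dept"] == "eng",
                     lambda e, p: {**e, **p})
```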
>Now consider the possibilities for a purely state-based Supplier.
>This is a bit easier, in that it only looks at the current state, not
>a series of changes that produce that state. It's obvious that the
>Supplier can correctly determine whether the current state passes the
>Entry Filter. The remaining question is whether there is an effective
>way to reduce unnecessary transmissions of the full entry. We could
>use a history-bit scheme like the one above. Another approach would
>be to simply look at the timestamps in the entry, and see if any of
>them are "above" the Consumer's Update Vector --- if so, the Supplier
>presumes the entry's pass/fail status has changed, and otherwise the
>Supplier presumes not. In other words, the Supplier will always send,
>if anything, either an entry-deletion or a full entry-populating
>primitive set. Not a very satisfying reduction in excess.
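The timestamp-versus-Update-Vector test above can be sketched as follows. Timestamps are simplified here to one counter per writing DSA; real URP CSNs and update vectors carry more structure, so treat this only as the shape of the comparison:

```python
def must_resend(entry_timestamps, update_vector):
    """Presume the entry's pass/fail status may have changed if any
    timestamp in the entry is above the Consumer's Update Vector.
    Both arguments: dict mapping writer DSA -> latest counter seen."""
    return any(ts > update_vector.get(writer, 0)
               for writer, ts in entry_timestamps.items())
```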
>What do you think?
You're taking it the way I would go too. The issues are when we determine
that the entry has moved in/out of the filter (at the supplier - preferred
- or at the operation?) and how we optimise the exchange of state
information between the two DSAs involved. Additionally, there'll need to
be some thought on the implications of changing the entry filter at a DSA.
St Jakobsstrasse 25