RE: Proposed requirements not met by current architecture and URP proposals - part 2 (end)
Ah... more snipping and <rm2>...</rm2> this time...
BTW, these are my views and my coauthors are free to
disagree with me :-)
From: Albert Langer [mailto:Albert.Langer@xxxxxxxxxxxxxxxxxxxxx]
Subject: RE: Proposed requirements not met by current architecture and
URP proposals - part 2 (end)
Looking forward to your views on this and other issues
(whether in this "last call" on requirements context or
in relation to URP etc). I'm assuming the editors will be going
through all issues raised by anyone whether you (or
John) have expressed an opinion about them now or not.
<rm2>I think that's a fair assumption :-)</rm2>
>11) Design constraint on ordering
>MM7. Multi-master conflict resolution MUST NOT depend on
>arrival of changes at a replica to assure eventual convergence.
>SM2. The master replica in a Single Master system SHOULD propagate
>changes to read-only replicas in the order in which the
>master applied them.
>(Appendix B.2 explains that requirement SM2 is intentionally not a
>MUST and does not apply to multi-master at all)
>URP does not meet SM2 for Single Master but does meet MM7,
at the expense
>of conflicting requirements G3 and MM1. Appendix B.2 does
>although the justification points to MUST and the
>not justify not adopting the same for multi-master and
>simply dropping MM7.
>Decisions about ordering are a design issue, not a requirements issue.
>Design constraints should not be included in requirements without
>a clear reference to corresponding actual requirements.
<rm>Uh, you answered this yourself, didn't you? Decisions
about ordering are not a design issue in the single master
case, but are a design issue in the multi-master case.
Therefore, order can be required for single master but
not for the multi-master case.</rm>
The way I see it, any sensible design for Single Master
would maintain strict ordering - both to meet the need
for some sort of conformance level and a migration and
coexistence strategy for existing single master implementations
implied by the WG charter (which I believe should also be
spelled out as a requirement), and also to simplify
implementation and meet requirements G2 and MM1.
However, the functional requirements are covered by G2, MM1
and the implicit or missing requirement to support
Single Master meaningfully.
The conclusion that maintaining strict ordering is
the best design for meeting such requirements can
be drawn in the architecture and does not need to be
spelled out as a requirement. I try to be very
consistent about this distinction between design
choices and requirements. Perhaps I am overly
sensitive to it in reaction to John's persistent
campaign to present my objections to URP not
meeting the requirements as just a cover for my
having been so presumptuous as to show that an
alternative design was possible by submitting MDCR
when I initially drew attention to the basic flaws in
URP, and insisted that the lack of requirements
that would have made it obvious such a design was
unacceptable necessitated review of the
allegedly "ready" requirements draft.
My items 6 and 8 raise the relevant requirements
that URP does not comply with. The design choices
that resulted in that are a design problem, not
a requirements problem.
At any rate I certainly agree with the design
choice reflected in SM2, so the comment is
essentially editorial and I will not pursue it.
Concerning MM7, you seem to be suggesting that
it merely does not impose a requirement for strict
ordering on the design for multi-master. That could
be achieved by simply omitting it and perhaps
moving the explanation words elsewhere into a more
explicit statement within SM2 that it does not
apply to multi-master.
<rm2>Maybe it could be omitted, but SM2 is single master
and doesn't apply to multi-master, and I (for one) would
rather say something than have a void on ordering. The
dependence on ordering is very close to making a choice
between two general classes of implementation: state-based
versus change log, and that's something that the WG will not
be successful in mandating and has explicitly kept open in the
requirements.</rm2>
In fact MM7 explicitly requires that the design
for multi-master require consumers to accept out
of order changes during a replication session,
instead of requiring suppliers to keep them in
order. That is very clearly a design choice as
to *how* to satisfy requirements for (robust)
convergence.
<rm2>I disagree. If you go back through the archives, you
will see a lot of discussion of state-based versus change log
implementations. Requirement MM7 says that either approach
may be used, because the state-based system may never be able
to send changes in the correct order.</rm2>
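The order-independence described in the remark above can be sketched in a few lines (a minimal illustration, not LDUP or URP code; all names are hypothetical): a consumer that resolves each attribute by a totally ordered change stamp converges to the same state no matter what order the changes arrive in.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Change:
    """One attribute update, stamped at the originating master."""
    attr: str
    value: str
    stamp: tuple  # (time, master_id): a totally ordered tiebreak

class Entry:
    """A replica entry that may receive changes in any order."""
    def __init__(self):
        self.values = {}  # attr -> current value
        self.stamps = {}  # attr -> stamp of that value

    def apply(self, ch: Change):
        # Per-attribute last-writer-wins: only a newer stamp
        # overwrites, so any arrival order of the same change set
        # converges to the same final state -- the property MM7
        # is meant to guarantee.
        cur = self.stamps.get(ch.attr)
        if cur is None or ch.stamp > cur:
            self.values[ch.attr] = ch.value
            self.stamps[ch.attr] = ch.stamp

# Two replicas see the same two changes in opposite orders...
old = Change("description", "draft", (1, "master-A"))
new = Change("description", "final", (2, "master-B"))
r1, r2 = Entry(), Entry()
r1.apply(old); r1.apply(new)
r2.apply(new); r2.apply(old)
# ...and still converge on the same value.
```

The point of the sketch is only that a state-based resolver of this shape does not depend on arrival order, which is why MM7 leaves both implementation classes open.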
I believe it is a bad design choice which
unnecessarily complicates implementation and
directly frustrates achieving the performance
requirements in G2 and MM1, without adding
anything to the robustness of convergence.
<rm2>That may be true, but that should be determined by the WG.</rm2>
An immediate consequence is that replication
sessions have to be restarted from the
beginning when the link fails whereas they
can continue from where they left off when
strict ordering is used (as correctly noted
in the architecture). As multi-master is
especially useful in situations with weak
network connectivity that could be quite
a significant impact on performance.
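The restart-versus-resume point above can be sketched as a toy change log with a per-session cursor (names hypothetical; this illustrates the argument, not the LDUP protocol):

```python
class Supplier:
    """A supplier keeping its changes in the strict order they were
    applied locally. With that ordering, a consumer only needs to
    remember the sequence number of the last change it applied, and
    a session interrupted by a link failure resumes from that point
    instead of restarting from the beginning."""
    def __init__(self):
        self.log = []  # changes, in order of application

    def record(self, change):
        self.log.append(change)

    def session(self, cursor):
        """Yield (seq, change) pairs for everything after `cursor`."""
        for i in range(cursor, len(self.log)):
            yield i + 1, self.log[i]

supplier = Supplier()
for n in range(5):
    supplier.record(f"change-{n}")

cursor = 0
for cursor, change in supplier.session(cursor):
    if cursor == 3:            # link fails mid-session...
        break
# ...reconnect and resume from the high-water mark, not from zero
resumed = [c for _, c in supplier.session(cursor)]
```

With out-of-order delivery there is no single high-water mark, so an interrupted session has no comparably cheap resumption point.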
It is also much easier for each supplier to
maintain a single ordered index of what it is
sending to various consumers than for each
consumer to have to deal with updates out
of order. Any simplification of the supplier
(which I doubt exists anyway) is completely
negated by the fact that every supplier is also a
consumer in multi-master and that such
indexing has to be done for slave read-only
single master servers at the periphery anyway.
<rm2>Again, you are now into implementation and others
may not agree with you, so the requirements have to leave
other possibilities open</rm2>
I find the "negative consequences" listed in B.2
for out of order replication convincing and
the "beneficial" situations unconvincing. The
examples given in the "beneficial" section
concerning schema and other critical policy
information being replicated out of order are
the most dangerous things to replicate out of
order as immediately pointed out in the
"negative consequences". The only remaining
example is for pushing a priority update
ahead of a bulk reload which refers to Single
Master where you then prohibit it.
<rm2>I do not claim to be wiser than every directory administrator
in every scenario, so B.2 is worded with that in mind (at least in
my mind).</rm2>
>12) Design constraint on update tracking
>P2. The supplier MUST track updates sent to the consumer
>resend already acknowledged ones, even in the event of
recovery from a
>failed replica cycle.
>This is another unnecessary design constraint. It is not met by
>URP and there does not appear to be anything unsound about
the URP approach
>having the consumer track updates received rather than the
>supplier tracking updates sent.
<rm>No, this breaks G3 and MM1 above</rm>
I think we might be at cross purposes. I was not supporting Steve's
comment on P2:
I thought that there was mail already agreeing to change P2 to be
implementation neutral in response to other comments.
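The consumer-side alternative to P2 discussed above can be sketched as follows (a toy model, names hypothetical): if the consumer remembers the change-sequence numbers it has already applied, a supplier recovering from a failed replica cycle may safely resend updates that were in fact acknowledged, and neither side's tracking strategy needs to be mandated.

```python
class Consumer:
    """Consumer-side update tracking: duplicates resent by a
    recovering supplier are detected and ignored, making resends
    harmless rather than forbidden."""
    def __init__(self):
        self.applied = set()   # CSNs already applied
        self.entries = []

    def receive(self, csn, change):
        if csn in self.applied:
            return False       # duplicate after a failed cycle: ignore
        self.applied.add(csn)
        self.entries.append(change)
        return True

c = Consumer()
c.receive(1, "add cn=a")
c.receive(2, "mod cn=a")
c.receive(2, "mod cn=a")       # resend is silently discarded
```

Since either tracking scheme gives the same external behavior, the requirement can be stated in terms of not reapplying acknowledged updates rather than in terms of who keeps the bookkeeping.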
>13) Transactional Consistency
>G2. LDAP Replication SHOULD NOT preclude support for model 1
>(Transactional Consistency) in the future.
>P7. The protocol SHOULD NOT preclude support of
>Transactional Consistency (model 1).
>Model 1: Transactional Consistency -- Environments that support all
>four of the ACID properties (Atomicity, Concurrency,
>No multi-master replication system can avoid precluding
>for ACID transactions. That is inherent in the definition. Nor
>does the C stand for Concurrency. Nor does the I usually
>ACID transactions are also impractical in a global
>directory service even using single master - eg it is
>to maintain referential integrity between different servers.
>That is why they are excluded from the X.500 data model.
>The whole discussion of models is simply confusing and
>It should be dropped as adding nothing to either requirements
>or understanding as demonstrated by the inaccurate
<rm>I thought you were originally arguing for this model
being supported. Are you now changing your mind and saying
that this should be dropped?</rm>
I have *always* said that model 2 is the only model relevant
to LDUP work and that multi-master cannot possibly support ACID
transactions. To understand why one could reject the wording
of that requirement in view of the definition given for
Transactional Consistency, simply click on the link that I provided.
Speaking frankly, both the requirement above and the fact that
you thought I ever supported it are entirely due to you,
like many other members of the WG, not actually understanding
<rm2>Well, I think that you may have some difficulty in convincing
the WG to drop model 1, but now I understand. I also point out
that Model 1 is not required, but that there was sufficient
consensus behind not closing the door on it permanently.</rm2>
>14) Limited Effort Consistency
>G1. LDAP Replication MUST support models 2 (Eventual Consistency) and
>3 (Limited Effort Eventual Consistency) above.
>URP makes no attempt to support model 3 (any failure in its
>attempt to support model 2 does not constitute supporting model 3,
>which is aimed at entirely different design goals completely
>unrelated to LDAP directory replication).
<rm>If folks want to use LDAP in a Model 3 situation
then it should be replicable by LDUP, right?</rm>
Only if you believe a WG that has not been chartered to do that work,
has not done any such work, and knows nothing about the issues
involved should nevertheless be told it MUST do it before having
completed the work it is supposed to know about.
I can only repeat my remarks at the link provided in the message
you are replying to:
Model 3 ("limited effort consistency") is needed for multi-master
replication among such a large number of independent nodes that it becomes
impossible to maintain globally consistent state information as to which
nodes are actually participating at any given time. There may also be other
situations in which it is needed (eg nodes that are not "servers" attempting
to provide a reliable service to "clients" but just trying to get a better
picture of the world around them, without storage capacity for global
information). Examples that come to mind with "servers" include the
replication of newsgroups and (distributed) email lists. Despite the best
efforts of "backbones" it is impractical to ensure that all nodes do receive
all updates so the feeds diverge due to batches not being collected within
the "limited best effort" criteria for purges. Model 2 is simply not capable
of scaling to the size of systems such as Usenet with currently available
technology on a multi-master basis. It is inherently confined to situations
where a relatively small number of "servers" (hundreds, perhaps thousands,
but not millions) are maintaining a common "backbone" for a much larger
number of "slave" servers and end user clients - such as for example LDAP
<rm2>Ah, this is your opinion. However, again I believe that I have
seen enough WG consensus to keep Model 3 on the books. I think we
may have to agree to disagree here.</rm2>