RE: ldup urp draft comments
Thanks for that - may I enlighten you?
> -----Original Message-----
> From: Steven Legg
> Sent: Friday, February 19, 1999 5:07 PM
> To: Alan.Lloyd@xxxxxxxxxxxxxxxxxxxx
> Cc: ietf-ldup@xxxxxxx
> Subject: Re: ldup urp draft comments
> Alan Lloyd wrote:
> > comments in line.
> > > -----Original Message-----
> > > From: Steven Legg
> > > Sent: Thursday, February 11, 1999 6:13 PM
> > > To: merrells@xxxxxxxxxxxx
> > > Cc: ietf-ldup@xxxxxxx
> > > Subject: Re: ldup urp draft comments
> > >
> > >
> > snip
> > > DISP has a concept of information planes. Each replication
> > > agreement a consumer DSA has with a supplier DSA corresponds to
> > > one plane of information within the consumer DSA. Think of each
> > > plane as a separate virtual DSA within one physical DSA. The
> > > updates for a particular agreement will be sent in order, but if
> > > a consumer DSA has two or more agreements covering the same set
> > > of entries (e.g. secondary shadowing through intermediary DSAs)
> > > then it may potentially see the updates out of order when viewed
> > > without regard to the agreements.
> > >
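The plane-per-agreement idea quoted above can be sketched in a few lines. This is a hypothetical illustration only (the class and names are invented, not from X.525): each agreement feeds its own entry store, so updates within one agreement stay ordered even when two agreements cover the same entry.

```python
# Hypothetical sketch of X.525-style information planes: one plane
# (entry store) per replication agreement, updated independently.
class ConsumerDSA:
    def __init__(self):
        self.planes = {}  # agreement id -> {entry name: value}

    def apply_update(self, agreement_id, entry_name, value):
        # updates for one agreement are applied in the order received,
        # without touching the planes of other agreements
        plane = self.planes.setdefault(agreement_id, {})
        plane[entry_name] = value

dsa = ConsumerDSA()
# two agreements covering the same entry deliver updates independently
dsa.apply_update("agreement-1", "cn=Alan", "v1")
dsa.apply_update("agreement-2", "cn=Alan", "v2")
```

The point of the sketch is that "cn=Alan" can legitimately hold different values in the two planes while their update streams are at different points; neither value overwrites the other.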
> > Is this theory?
> No. It's specified in X.525. Information planes are described in
> clause 9.2.6.
The standard represents an abstract information model and its
protocols. The implementation of it (according to the standard, the
ISPs, etc.) is only real on its interfaces - the protocols.
The standard represents about 20% of a commercial product.
Most of those who implemented the standard information model as
the "real engineering" approach have learnt the hard way. I could cite
a list of directory implementations that have FAILED simply because
they implemented the standard to the letter as an engineering spec.
We do use an RDB as the back end - the X.500 standard does not
define this.
We do use the RDB/SQL features for integrity and commitment
services - the X.500 standard does not define this.
We do use RDB tables to index everything - the X.500 standard
does not define this.
We apply load balancing and redundant DSA mechanisms - the
X.500 standard does not define this.
We can build backbones with our X.500/LDAP router processes -
the X.500 standard does not define this.
We are developing/demonstrating relational directory-embedded
search engines - the X.500 standard does not define this.
We have alias integrity features - the X.500 standard does not
define this.
We can scale to tens of millions of entries - the X.500
standard does not define this.
We can deliver thousands of searches per second - the X.500
standard does not define this.
We can pull the power cord on our system when live and be back
on line in a minute (once power is back) - the X.500 standard does not
define this.
We can do bulk dumps, bulk loads, LDIF sorting, etc. with
utilities - the X.500 standard does not define this.
We have live operational systems that work - the X.500 standard
does not define this.
> > In the real world - one would direct a replica update from a
> > master portion of the DIB to those replica nodes that require it,
> > i.e. it is the system design that determines this. I think one
> > would be guilty of poor system design if one directed two or more
> > DISP updates with different values at the same replica - knowing
> > collisions and inconsistencies would be generated. i.e. part of
> > designing real systems is to ensure that the operational
> > replication processes are robust - for the customer - i.e. those
> > who run their businesses on this stuff.
> If a consumer DSA has two or more DISP replication agreements which
> cover overlapping areas of replication then every update which
> occurs at the master DSA will be reported to the consumer DSA
> multiple times - once for each agreement. With respect to any one of
> those agreements the updates occurring at the master will all be
> reported and will be reported in order. Across the agreements, the
> consumer DSA will see uncoordinated streams of updates. The X.525
> answer is to direct the streams of updates into different
> information planes. To ignore the planes of information in such a
> situation is to invite updates to be lost or applied to the
> wrong entry,
> resulting in the consumer DSA being inconsistent with the master DSA.
In the real world, the DIT design, its ownership, its
protection and its distribution have to be considered. In addition,
the consumer DSAs will see a set of updates on their interfaces,
which can be bulk load, LDIF load, DISP or (below this) database
replication tools or hardware-level disk mirroring. And these will be
dealt with (as theoretical planes) according to system design and
engineering principles. Yes, one can say that these are implemented
as replication planes - and one can write this down as such - but
when engineering this into a commercial system, reality has to be
dealt with. If one chooses to implement abstract theory in practice,
all sorts of issues and particularly limitations arise.
Ask Boeing :-) as I am sure at one time the theory of
aeroplanes was just to have wings, engines, etc. But the operational
issues then dominated the design process.
> The LDUP and DMRP answer is to provide procedures which are tolerant
> of duplication and reordering of updates, and use a single
> information plane. A consumer DSA following these procedures will,
> after allowing for propagation delays, reflect exactly the state of
> the master DSA.
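The convergence property being claimed above can be sketched with a toy last-writer-wins rule keyed on CSNs. This is not the URP draft's actual procedure, only an illustration of the general technique: whatever order updates arrive in, and however many times each is delivered, the highest CSN wins, so replicas end in the same state.

```python
# Toy last-writer-wins reconciliation (illustrative, not the URP spec):
# an update is (entry name, CSN, value); a higher CSN always wins.
def apply(state, update):
    name, csn, value = update
    current = state.get(name)
    if current is None or csn > current[0]:  # keep only the newest CSN
        state[name] = (csn, value)
    return state

updates = [("cn=Alan", (10, "dsa-A"), "old"),
           ("cn=Alan", (20, "dsa-B"), "new"),
           ("cn=Alan", (10, "dsa-A"), "old")]  # a duplicate delivery

s1, s2 = {}, {}
for u in updates:            # delivered in order, with a duplicate
    apply(s1, u)
for u in reversed(updates):  # delivered reordered
    apply(s2, u)
# both replicas converge on the value with the highest CSN
```

The duplicate and the reversal change nothing: both `s1` and `s2` end up holding the `(20, "dsa-B")` update.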
OK - show me a 10 DSA system doing 1000s of updates per second
with this stuff and what the investment is...
And how do you deal with e.g. a three DSA system where A has an
entry updated (modified) at the same perceived time as the update in
B, and where DSA C has had an update before this but C's time is 2
seconds in front of A and B? What wins???
> > > By maintaining a separate
> > > information plane for each agreement the consumer DSA avoids the
> > > problems
> > > of updates arriving out of order.
> > Theoretically possible - but customers don't want that because it
> > creates problems.
> Keeping the information planes logically separated is a technical
> challenge for the implementors but I don't see it causing any
> problems for the users. However ignoring the planes exposes the
> users to the risk of permanent inconsistencies between the supplier
> and consumer DSAs.
The planes are a theoretical model which we appreciate and apply
in engineering mechanisms.
However, the real debate is about operational and system
consistency, and designing systems that are usable, do not create
conflicts in updates and provide the BEST integrity of information -
that businesses run on.
> > > DISP consistency depends on updates being
> > > sent and applied in order.
> > So why upset that engineering principle by this LDUP proposal?
> Because we have a requirement to permit updates on the same data to
> be performed at separate DSAs without having to establish a link
> between the DSAs at the time of the update. That necessarily
> introduces a situation where updates are potentially seen in
> different orders at different master DSAs. Since we have solved that
> problem for master DSAs it is a simple matter to apply the same
> solution in other awkward circumstances where updates can be seen
> out-of-order, i.e. consumer DSAs with overlapping areas of
> replication.
> > > The notional contents of the different
> > > planes can get temporarily out of synch but these transitory
> > > discrepancies
> > > are dealt with during query evaluation.
> > Oh really - how does one unwind a real DIB that has got out of
> > sync - with renames and deletes, and updates applied from many
> > sources that are not tightly synchronised? How does one DETECT
> > this from a user perspective????? See below.
> There is no need to unwind anything. Each DISP information plane in
> a shadow consumer is updated by its own independent stream of
> updates originating at the master DSA. Each plane in the consumer
> will eventually reach the same state as the master DSA, but at
> different times. The transitory discrepancies I'm talking about are
> the differences between the contents of the planes, which are simply
> the artifact of them being at different points in processing the
> stream of updates.
Please - I understand the theory. Can you please tell the
world how ten DSAs acting in your model deal with dispersed updates
at the same physical time, and also at the same perceived logical
time but different physical times, when that is open to tolerances of
+-?
Can you please tell the world how 100 million users of the
directory system who rely on the integrity of information detect or
deal with values that are changing from multiple sources, that can be
applied in order or in the wrong order.
i.e. are these artifacts in the wrong or right order: a user's
information, a bank account, X-rays, health records, fingerprints...
or just email addresses?
> > > In the URP draft we don't need information planes because we
> > > have assumed from the outset that updates can arrive
> > > out-of-order and have built a framework to handle that.
> > Again this is a theoretical view.
> At Telstra we have been putting the ideas in the URP draft to the
> test with a replication simulator to show the procedures work in
> practice as well as in theory. The simulator gives us confidence in
> the correctness of the procedures and provides insight on
> implementation issues.
I am all for simulation and ideas. I am also very much
acquainted (for the last twenty years) with the cost of putting
theory into practice and what the end user ends up with. I don't want
our users to have unworkable conflict models - particularly when we
are dealing with very large distributed information infrastructures.
We have a draft paper on our web site that describes our
approach, and it is based on delivered systems that run businesses.
It describes Directed Multi Master mode - and all the features
for fault tolerant, load balanced systems.
> > I have yet to see any answers
> > on the issues of loosely coupled systems where updates can
> > conflict in time and space - across e.g. 10 interconnected systems
> > of millions of entries with thousands of updates per second - how
> > is entry level conflict detection processed?
> Conflicts in time don't occur because of the way CSNs (timestamps)
> are generated for updates. One of the components of the CSN is a
> replica ID which is unique for each DSA. Thus updates performed at
> different DSAs will never be given exactly the same CSN. In a
> comparison of the CSNs of any two updates from different DSAs, one
> will always be before the other and all DSAs will agree on this
> relative ordering.
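The tie-breaking argument above is easy to demonstrate. A minimal sketch, assuming a CSN is a (timestamp, replica ID) pair (the real LDUP CSN has more fields; this layout is illustrative): pairs compare lexicographically, so two updates with the same timestamp are still deterministically ordered by replica ID, and every DSA computes the same order.

```python
# Illustrative CSNs as (timestamp, replica ID) tuples.
# Python tuples compare lexicographically: timestamp first,
# then replica ID breaks any timestamp tie deterministically.
csn_a = (946684800, "replica-A")   # same timestamp...
csn_b = (946684800, "replica-B")   # ...different replica ID
csn_c = (946684802, "replica-C")   # a later timestamp always wins

ordered = sorted([csn_c, csn_a, csn_b])
# every DSA sorting these CSNs gets the same total order
```

Because the replica ID is unique per DSA, no two CSNs from different DSAs ever compare equal, which is the property the quoted text relies on.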
This last statement to me hits the nail on the head and just
proves how BROKEN the whole concept is.
e.g. please supply a multi-partitioned directory system with
thousands of DSAs and interconnected LDAP servers where all the DSAs
are set with a globally unique set of ordering rules, and tell us
what the management of these ordering rules is in fault tolerant DSA
systems when DSAs go off and on line.
Please also supply dates of availability.
Directories are simple - they have named objects in them.
There is a master object (that can be replicated) which is updated.
And the engineering under this should a) not generate conflicts and
b) be responsible for the update propagation - once the master has
accepted the update.
> I've had a look at the Directed Multi Master Mode in
> http://www.opendirectory.com/Whitepapers/Multi-Master.html and it
> didn't impress me.
Is that because it works?
I was not out to impress you - we just wanted to provide the
directory industry with our approaches to building real operational
systems and the features that we have and that work, plus an insight
into the fact that it's not just the "protocols" that make a (multi
master) system. It also defines features like load balancing,
alternate DSAs, security regimes and information integrity at the
user and operational levels.
But please note the document is a draft based on deliverables
and operational testing.
It is not a theory paper. We went through the theory a number
of years ago.
> Section 2.2.3 describes an architecture where updates are directed
> to a primary master DSA, which then propagates the updates to the
> other masters through DSP. Isn't this exactly the functionality that
> can be achieved with DISP? The non-primary "masters" look to be just
> like shadow consumers. The paper doesn't say what happens when the
> primary master isn't available.
Well actually it's a question of operational integrity and
efficiency. DSP is more efficient because it does not need shadow
agreements, which can reach N*N proportions in a real system, i.e.
it's less for the business user and system designer to worry about.
When the primary master goes off line, we automatically switch
to the secondary or tertiary - in that order - to preserve the
service.
We then, as reality dictates, deal with the off line master by
replaying transactions from logs or rebuilding the DIB from utilities
or DISP, etc. - it all depends how long the primary master was out
for. In between times the secondary master has assumed the role of
the primary. So it's a service level thing we target - not protocols
and their theory - and in our books that's what customers want.
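The failover order described above (primary, then secondary, then tertiary) amounts to picking the first reachable master in a fixed preference list. A minimal sketch, with invented names (the paper does not specify an algorithm):

```python
# Illustrative failover: writes go to the first reachable master
# in the fixed preference order primary, secondary, tertiary.
MASTERS = ["primary", "secondary", "tertiary"]

def pick_master(online):
    for dsa in MASTERS:
        if dsa in online:
            return dsa
    raise RuntimeError("no master DSA reachable")

choice = pick_master({"secondary", "tertiary"})  # primary is off line
```

Because the preference order is fixed and globally known, every node in the system agrees on which DSA is currently acting as the master, which is what preserves the single point of update.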
> Section 126.96.36.199 suggests that there may be peer master DSAs
> which can send the locally invoked updates to each other using "DSP
> Write Through". There is no hint of a mechanism for even ensuring
> that updates performed at different masters are executed in the same
> relative order at all DSAs, let alone resolving any update conflicts
> which may arise.
All writes from anywhere in the system, including secondary
and tertiary masters, are directed to the nominated primary master.
Once the master has updated THE MASTER OBJECT, then the write is
propagated to the secondary and tertiary. This is performed because,
in X.500 -
I will quote some of the theory - WE WANT TO BE ABLE TO DEAL
WITH THE DAP/DSP PARAMETER "DON'T USE COPY" - TO GET THE MASTER
INSTANCE OF AN OBJECT FROM ANYWHERE IN THE SYSTEM - the true master.
- WE DO THIS SO WE CAN SUPPORT HIGH INTEGRITY, COMMAND AND
CONTROL TYPE, OPERATIONAL SYSTEMS FOR GLOBAL BUSINESSES AND DEFENCE
ORGANISATIONS.
IN YOUR MODEL YOUR ABILITY TO SUPPORT THIS FUNDAMENTAL
SELECTION PARAMETER IS BROKEN.
SIMPLY BECAUSE YOU DON'T KNOW WHERE THE MASTER OBJECT IS...
AND WHAT STATE IT'S IN.
> For example, what happens if one of the "Written Through" DSP
> operations fails with an error when sent to the other masters?
Described above - retried a few times in the queue or, if the
thing has died, recover it later - this is the way of the world for
all things - it's obvious.
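The retry-then-recover behaviour being described can be sketched as follows. All names are illustrative assumptions, not anything from the product: a write-through is retried a few times from a queue, and if the peer stays down the update is parked for later recovery (e.g. log replay or a DIB rebuild).

```python
# Sketch of retry-then-recover delivery of a write-through operation.
def write_through(send, update, retries=3):
    for _ in range(retries):
        if send(update):            # attempt delivery to the peer
            return "delivered"
    return "parked-for-recovery"    # peer stayed down; recover later

attempts = []
def flaky(update):
    # simulated peer that fails once, then accepts the update
    attempts.append(update)
    return len(attempts) >= 2

result = write_through(flaky, "mod cn=Alan")
```

A transient failure is absorbed by the retries; only a peer that stays down through all attempts ends up in the recovery path.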
However, experience is showing that:
a) good commercial products don't tend to fall over very much
b) having the ability to use a standard LDAP server as a
backup/archive system with the X.500 directory service is a desirable
feature. And of course, with the DX link, we support that.
> > regards alan
> > PS - at the end of the day - it's the working systems that win.
> > This LDUP approach will be a 5 year pain and expense - for those
> > who boldly go and build it. That could be good news to those who
> > don't implement it. :-)
> > snip