Re: draft-ietf-ltans-dssc-00 comments
----- Original Message -----
From: "Thomas Kunz" <thomas.kunz@xxxxxxxxxxxxxxxxx>
To: "Carl Wallace" <CWallace@xxxxxxxxxxxx>
Sent: Monday, August 27, 2007 7:10 AM
Subject: Re: draft-ietf-ltans-dssc-00 comments
Sorry for the late answer.
Carl Wallace schrieb:
Here are a few comments on DSSC. I'll send an off-list email with some
- In section 4.1, the fifth element in the sequence should be named
SuitableAlgorithm to be consistent with the schema in Appendix B.
- The draft should provide some guidance regarding constraints. For
example, should one define key size constraints per public key algorithm
or per signature algorithm?
That is a decision controlled by the Hosting Entity's Operations Policies
and needs to be retained as such. The answer is that both methods should
be supported.
> For policy brevity,
which IS NOT a good thing here... Policy is what allows NEA to be used in a
number of different scenarios.
> the former would be better. Perhaps an alternative would be to bind
> constraints and validity periods within SuitableAlgorithm.
I would also prefer the definition of constraints per public key algorithm.
But how is that to be represented? Is it part of the internal policy
controlled by SuitableAlgorithm, or where, then, is it enumerated?
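For concreteness, the alternative of binding constraints and validity periods directly inside a SuitableAlgorithm entry might be sketched as follows. This is a rough model, not the draft's actual schema; all field and constraint names here (e.g. "minKeySize") are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Constraint:
    # Constraint semantics are policy-defined; this name/value shape
    # is purely illustrative (e.g. name="minKeySize", value=1024).
    name: str
    value: int

@dataclass
class SuitableAlgorithm:
    # OID (or URI) identifying the algorithm.
    identifier: str
    # Constraints bound directly to this algorithm entry.
    constraints: list[Constraint] = field(default_factory=list)
    # Validity period bound to the same entry.
    valid_from: Optional[date] = None
    valid_until: Optional[date] = None

# One entry expressing "RSA is suitable until end of 2007,
# provided the key is at least 1024 bits":
rsa = SuitableAlgorithm(
    identifier="1.2.840.113549.1.1.1",
    constraints=[Constraint("minKeySize", 1024)],
    valid_until=date(2007, 12, 31),
)
```

Binding constraints and the validity period in the same entry would make per-algorithm evaluation self-contained, at the cost of repeating key-size rules across related signature algorithms.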
- Section 5 should include processing for constraints.
- The spec should prohibit including multiple instances of the same
algorithm identifier with the same constraints.
If they have different policy constraints tagged to them, then they are
different policy instances. Hence it would be reasonable to have multiple
policies around the same PKI model.
- If an ASN.1 version is to be produced, using an enveloping signature
would make the mapping to CMS easier.
OK. An ASN.1 version should come with the next version of the draft.
- The assumption in Section 3.2 that one must find an old policy in order
to determine if an algorithm was valid at a point in the past is too
complicated. Suitability definitions should accumulate in a single
policy definition. An enterprise could maintain several policies. For
example, one complete, one current, and one past policy could be
maintained.
Uh, policies could be set up based on Role and Responsibility Separation
Models, and as such it's likely that similar policies would exist in the same
In our notion, the policies are published by specific institutions (e.g.
annually), and one policy represents the evaluations based on current
knowledge (e.g., based on current findings, RSA with a 1024-bit key length
could be valid until the end of 2007, but next year a new policy could be
published which states that RSA-1024 is valid until the end of 2008).
Expecting a policy to also contain all past evaluations of an
algorithm could be error-prone.
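The annual-publication model described above can be sketched as follows: each policy records the publisher's current evaluation, and the most recently issued policy is what governs. The dictionary layout and names here are illustrative assumptions, not structures from the draft.

```python
from datetime import date

# Each annually published policy records the issuer's *current*
# evaluation of an algorithm's suitability (an end-of-validity date).
# Structure and names are illustrative only.
policy_2006 = {"issued": date(2006, 1, 1),
               "evaluations": {"RSA-1024": date(2007, 12, 31)}}
# A year later, new findings allow the evaluation to be extended:
policy_2007 = {"issued": date(2007, 1, 1),
               "evaluations": {"RSA-1024": date(2008, 12, 31)}}

def current_validity(policies, algorithm):
    """Return the end-of-validity date per the most recently issued policy."""
    latest = max(policies, key=lambda p: p["issued"])
    return latest["evaluations"].get(algorithm)

print(current_validity([policy_2006, policy_2007], "RSA-1024"))
# The 2007 policy is in force, so RSA-1024 is evaluated as valid
# until the end of 2008.
```

Note that the 2007 policy simply supersedes the 2006 one; it does not need to restate the 2006 evaluation, which is the point being argued above.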
Customized policies are the creation of those that adopt the NEA system, but
basic operations policies are what the framework should provide out of the box.
In our opinion, the question of whether the evaluation of an algorithm in an
old policy differs from that in the current policy is primarily important
in legal cases.
Nope... I don't think this has anything to do with "law cases". I think it
has to do with logging and assurance models, and that's about it. What's
missing is to qualify NEA under the new Judge Grimm ruling so that NEA-based
testimony is admissible in the US and foreign Courts that support
similar standards or MRAs.
You want to know what is missing here?
- Relevance; i.e. meets F.R.E. 401
So a process for determining the relevancy of the information produced, and
its integrity, is key.
- Authentication; i.e. meets F.R.E. 901
Authentication as to where the information came from is key. Also, that the
material submitted is what was authenticated, meaning that content-integrity
processes are mandatory as well.
- Hearsay (if offered for truth); i.e. meets F.R.E. 801
The mechanical processes for how this information is produced must be mapped
out such that it can be demonstrated to a Court that this data is either
beyond hearsay, or that the hearsay exceptions in re statements against
interest are brought into play, such that this evidence is admissible.
- Original or Duplicate; i.e. meets F.R.E. 1001-1008
OK, so is this the original data from the system or a copy/fake of it?
- Probative Value Outweighed by Prejudicial Value; i.e. meets F.R.E. 403
This actually is a set of pre-built policy statements which are designed to
meet the requirements of F.R.E. 403. These would be part of the NEA boiler
plate and would be the basis of NEA data's being admitted into the Courts as
evidence.
As noted in the above list, these are the five key tests for digital
evidence being admitted to the US Federal Courts. Of these, tests 1, 2, 3
and 4 are probably the key ones. Test 5 would be something that happens at
hearing time, if the rest of the submission actually meets the other tests
and the opposing counsel doesn't object to 'how it meets those tests', which
this group is directly responsible for designing. Otherwise, each system
fielded will have to be proven in Court under a Daubert-type hearing, and if
that is true then the IETF will have totally failed there.
As to old policy and its review: the review of old policy relative to new
policy should only come into play when a new system is installed into a
network hosting NEA service-qualifications, or when an existing policy
expires and a new one comes into play, and that only has to do with changes
in the Entitlement and Logging Constraints for that "New Policy" relative to
the old one.
And there you cannot trust that the current policy correctly quotes past
evaluations; instead you will have to look in the old policy.
Only if there is a policy saying that. I can think of any number of NEA
scenarios where the current client doesn't care what the past policies were.
So I disagree that the old policies will always need to be reviewed
or be maintained as 'reviewable'.
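The disagreement above can be made concrete with a small sketch: determining whether an algorithm was suitable at a past point in time by consulting the policy that was in force then, rather than the current one. The policy structure here is an illustrative assumption, not the draft's format.

```python
from datetime import date

# Illustrative policies, each recording when it was issued and its
# evaluated end-of-validity date for an algorithm at that time.
policies = [
    {"issued": date(2006, 1, 1), "evaluations": {"RSA-1024": date(2007, 12, 31)}},
    {"issued": date(2007, 1, 1), "evaluations": {"RSA-1024": date(2008, 12, 31)}},
]

def valid_at(policies, algorithm, when):
    """Was the algorithm considered valid at `when`, per the policy in
    force at that time (the latest policy issued on or before it)?"""
    in_force = [p for p in policies if p["issued"] <= when]
    if not in_force:
        return None  # no policy covered that point in time
    policy = max(in_force, key=lambda p: p["issued"])
    end = policy["evaluations"].get(algorithm)
    return end is not None and when <= end

# Under the 2006 policy, RSA-1024 was evaluated as valid through 2007:
print(valid_at(policies, "RSA-1024", date(2006, 6, 1)))  # True
```

This is the lookup Section 3.2 implies; the counter-position in the thread is that many NEA clients only ever need the current-policy lookup, so retaining old policies as "reviewable" should be a policy choice rather than a requirement.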