I am no expert at tying test specifications to design specifications,
but I would like to express some words of caution. In a nutshell:
testing implies to me some sort of guarantee, and guarantees imply some
additional inaccuracy and some additional schedule time. I would rather
have the design engineers' best guess as to the performance of their
devices, once they have the design far enough along to make that
determination.
Testing to a specification means that you have to account for the tester
tolerances, the noise in the tester environment, and, if you have a
sampled test scheme, some additional guardband for the non-sampled
parts. All of this is to prevent 'bad' parts from testing as good. In my
experience, there is a fair amount of pressure to have the test
specification guardband the desired device performance, and to have the
published specification guardband the test specification. Users end up
with published specifications that predict performance dramatically
different from that of the delivered parts.
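To make the stacking concrete, here is a rough Python sketch with
made-up numbers; the parameter, the margins, and the variable names are
all hypothetical, not taken from any real part:

    # Hypothetical illustration only -- every number here is made up.
    # Assume a parameter where higher is better, say maximum clock in MHz.

    design_best_guess = 100.0   # the design engineer's estimate of real performance

    # The test limit sits below the design estimate so real parts pass reliably,
    # yet above the published spec so tester error cannot pass a bad part.
    tester_tolerance = 2.0      # measurement uncertainty of the tester
    tester_noise     = 1.0      # noise in the tester environment
    sample_guardband = 2.0      # extra margin for the non-sampled parts

    test_spec = design_best_guess - tester_tolerance - tester_noise - sample_guardband
    published_spec = test_spec - 5.0   # the published number guardbands the test spec again

    print(test_spec, published_spec)   # 95.0 90.0 -- the datasheet says 90,
                                       # while delivered parts run near 100

The point is only that each layer of guardband compounds, so the
published number drifts further and further from what the delivered
parts actually do.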
The additional schedule time comes because you have to involve test
organizations, and some non-trivial sample of hardware, before your
test specification becomes firm. Invariably, the test specification
changes as the design engineer and the test organization learn about a
particular part.
To close the loop, I would suggest you consider the hardware your
silicon modellers use to build the simulation models for the devices,
and the prototype hardware you build.
Joe Cahill - VLSI Development
Received on Mon Jul 1 07:16:19 1996