Well, you would rather capture all the data with full fidelity and disposition it against limits than turn it all into black-and-white before you even get to see it. In my opinion, there might be more to learn than just "Fail".
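To make that concrete, here is a minimal sketch of the idea: keep the raw measurements and apply the limits as a separate, later step, so the measured values survive for analysis. Parameter names and limits here are made up for illustration.

```python
# Hypothetical raw parametric data; the point is that we keep the
# measured values rather than storing only pass/fail.
measurements = [
    {"unit": 1, "vth_mV": 412.0},
    {"unit": 2, "vth_mV": 455.3},
    {"unit": 3, "vth_mV": 389.7},
]

# Limits are applied afterwards, as a disposition step.
limits = {"vth_mV": (400.0, 450.0)}  # (lower, upper), illustrative only

def disposition(rec, limits):
    """Compare a record against limits without discarding the values."""
    results = {}
    for param, (lo, hi) in limits.items():
        results[param] = "PASS" if lo <= rec[param] <= hi else "FAIL"
    return results

for rec in measurements:
    print(rec["unit"], rec["vth_mV"], disposition(rec, limits))
```

The raw numbers stay on disk either way; tightening a limit later is a re-disposition, not a re-test.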
Statistics should be the last thing you apply, because they are a convenient substitute for understanding, not a source of it. I have had product engineers tell me with absolute certainty that some process param was the problem, and later found it was something else entirely that they did not think to look at; we only caught it because that other thing happened to be tracked short-term. The two were not even related at all: a thin-film resistor and a BJT hFE, and the BJT was not even involved in the failing parameter. Statistics are only as good as your assumptions, including the ones you don't know you're making. Don't let them keep you from understanding "why" by fixating on the precision of "what".
In particular, knowing what the sensitivities are is the first step in getting yield back. If you have the real results and the process params, you can regress a lot better than by looking at the sensitivity of a bare "-1" fail flag.
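Here is a small sketch of why that is (all names and numbers are invented): a measured parameter that depends linearly on a process param recovers its true sensitivity under a simple regression, while the binned ±1 pass/fail flag of the same data yields only an attenuated, limit-dependent number.

```python
import random

random.seed(0)

# Hypothetical: a measured parameter depends on some process param
# (say, a film thickness) with true sensitivity 0.5, plus noise.
process = [random.uniform(90, 110) for _ in range(200)]
measured = [0.5 * p + 10 + random.gauss(0, 1) for p in process]

def ols_slope(x, y):
    """One-variable least-squares slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Regressing the real measurements recovers the sensitivity (~0.5):
slope_real = ols_slope(process, measured)

# Regressing the binned flag (-1 = fail) against the same process param
# gives a number that depends mostly on where the limit sits:
flags = [1 if m <= 61.0 else -1 for m in measured]  # arbitrary limit
slope_flag = ols_slope(process, flags)

print(slope_real)  # close to the true sensitivity of 0.5
print(slope_flag)  # attenuated and limit-dependent
```

The flag still carries a trace of the trend, but the magnitude and the shape of the dependence are gone, which is exactly what you need for getting yield back.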