
What specific characteristics should be emphasized when grading software maturity?


jani12

We would like to use some process to grade embedded software maturity for our ADAS application. On any given day, we want to know how much of the V-cycle work has been completed. For all ADAS features, such as Lane Departure Warning (LDW), Adaptive Cruise Control (ACC), and many more, we want to know the following:

  1. In calculating software maturity for our ADAS software application, what specific requirements characteristics should be emphasized when grading maturity, and why? The candidate characteristics are testability, measurability, completeness, traceability, and stability (one possible way to roll these up into a grade is sketched after this list).
  2. With respect to software architecture documents, software design documents, state diagrams, and any other V-cycle documents created to understand software operation, what specific characteristics should be emphasized when grading software maturity, and why?
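For concreteness, here is a minimal sketch (Python, with hypothetical weights, a 0-4 scale, and made-up requirement scores) of one possible way to score each requirement against those five characteristics and roll the scores up into a per-feature maturity grade; the weights and scale are illustrative assumptions, not a standard.

```python
# Hedged sketch: score each requirement 0-4 on five characteristics,
# then roll up a weighted maturity grade per ADAS feature.
# The weights and the 0-4 scale are illustrative assumptions only.

CHARACTERISTICS = ["testability", "measurability", "completeness",
                   "traceability", "stability"]

# Hypothetical emphasis: testability and stability weighted highest.
WEIGHTS = {"testability": 0.30, "measurability": 0.15, "completeness": 0.20,
           "traceability": 0.15, "stability": 0.20}

def requirement_score(scores: dict) -> float:
    """Weighted score (0..1) for one requirement; scores are 0-4 per characteristic."""
    return sum(WEIGHTS[c] * scores[c] / 4.0 for c in CHARACTERISTICS)

def feature_maturity(requirements: list) -> float:
    """Average weighted score over all requirements of a feature, as a percentage."""
    if not requirements:
        return 0.0
    return 100.0 * sum(requirement_score(r) for r in requirements) / len(requirements)

# Example: two made-up LDW requirements scored during a review.
ldw_reqs = [
    {"testability": 4, "measurability": 3, "completeness": 3, "traceability": 4, "stability": 2},
    {"testability": 2, "measurability": 2, "completeness": 4, "traceability": 3, "stability": 3},
]
print(f"LDW requirements maturity: {feature_maturity(ldw_reqs):.0f}%")
```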
 

Allow me to draw an analogy by comparing ADAS to an old HDD graded on its data-transfer error rate.

We had 3 metrics.

1. Window margin was designed from error budgets and verified with instruments of sufficient timing resolution (<=1%). The window was measured as a linear percentage of the clock interval for a burst of roughly 10 kBytes at a time from one track (mid-'80s). Many variables contributed to loss of the window margin for "making correct binary decisions during a read": worst-case, random, and best-case data patterns used to measure offset or bias, magnetic defects, group delay distortion of the bits, and pre-compensation to account for density-dependent delays.
The fundamentals were easy to measure by degrading the data until an error occurred; we called that your margin, found by actually shrinking the timing window for detecting a bit transition. More importantly, I used it to determine sensitivity to supply tolerance, temperature range or dT/dt, vibration, shock, and EMI (conducted and radiated), CW, pulsed RF, ESD, and impulse transients. Each would have the effect of reducing window margin.

2. Soft Error Rate (SER, recoverable): in the old days it was 1e-9, with a certain number of retries. This is indirectly related to window margin and SNR, as well defined in theory by Shannon et al. (a rough margin-to-error-rate sketch follows this list).
3. Hard Error Rate (HER, unrecoverable after retries): it was 1e-12, but Forward Error Correction (FEC), depending on whether the errors came in a burst, could improve HER by several orders of magnitude.
4. Seek Error Rate (OK, so 4 metrics).
- A head-crash drop test was one of the DVT tests, and that was a fatal stat if it failed. More often, that stress might accelerate the 10k-cycle start/stop MTBF.
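As a rough illustration of how shrinking the timing window maps to error rate, here is a sketch that assumes an idealized zero-mean Gaussian jitter model (a textbook approximation, not what the instruments actually did): an error is counted whenever the jitter exceeds half of the remaining window.

```python
# Rough sketch: idealized Gaussian-jitter model relating window margin to BER.
# Assumes jitter is zero-mean Gaussian with std dev sigma (as a % of the clock
# interval); an error occurs when jitter exceeds half of the remaining window.
import math

def ber_from_margin(window_pct: float, sigma_pct: float) -> float:
    """Approximate bit-error rate for a given remaining window (% of clock)."""
    half_window = window_pct / 2.0
    # Two-sided Gaussian tail probability: erfc(x / (sigma * sqrt(2)))
    return math.erfc(half_window / (sigma_pct * math.sqrt(2.0)))

# Example: nominal window vs. windows degraded by accumulated stress losses.
sigma = 7.0          # assumed jitter std dev, % of clock interval
for window in (100.0, 80.0, 60.0):
    print(f"window {window:5.1f}%  ->  BER ~ {ber_from_margin(window, sigma):.1e}")
```

With these assumed numbers, losing 40% of the window moves the error rate by roughly seven orders of magnitude, which is the sense in which margin and error rate are tied together.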

What I would propose is testing each environmental stress like a Taguchi sensitivity calculation at +/- x% and determining its contribution to loss of margin, or loss of confidence in feedback amplitude or phase margin. Then accumulate the total, along with the likelihood of coherent stress factors pairing up, and write a Design Verification Test (DVT) plan that validates each performance assumption, each on a single page, with a design spec, acceptance criteria, a diagram of the test method, and the stress limits imposed. Then perform the tests and summarize the results, e.g. 100 klux of mirrored sun flicker reflected into the cameras, a giant spark-plug noise emitter on a loop antenna or Yagi driven by an oscillating relay and coil (TBD noise maker / jammer), or an adjacent-car creep test from a sleepy driver.
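Here is a sketch of that accumulation step, with made-up stress factors, margin losses, and budget: a plain sum for the coherent worst-case stack-up, a root-sum-square estimate when the stresses are assumed independent, and one DVT page per stress.

```python
# Sketch of accumulating per-stress margin losses (all values are made up).
# Each stress is exercised at +/- x% and its measured loss of window margin
# (in % of the clock interval) is recorded; worst-case stack-up (plain sum)
# is compared against an RSS estimate for independent stresses.
import math

margin_losses = {            # % of window lost per stress, hypothetical numbers
    "supply tolerance +/-5%": 4.0,
    "temperature dT/dt":      6.0,
    "vibration":              3.0,
    "radiated EMI":           8.0,
    "ESD / impulse":          5.0,
}

budget = 20.0                                 # assumed total margin budget, %
worst_case = sum(margin_losses.values())      # all stresses pairing up coherently
rss = math.sqrt(sum(v * v for v in margin_losses.values()))  # independent stresses

print(f"worst-case stack-up: {worst_case:.1f}%  (budget {budget:.1f}%)"
      f"  -> {'PASS' if worst_case <= budget else 'FAIL'}")
print(f"RSS estimate:        {rss:.1f}%"
      f"  -> {'PASS' if rss <= budget else 'FAIL'}")
for stress, loss in sorted(margin_losses.items(), key=lambda kv: -kv[1]):
    print(f"  DVT page: {stress:<24} loss {loss:.1f}%")
```

The interesting case is when the worst-case sum busts the budget but the RSS estimate does not: that is exactly where you need to argue (and test) whether the stresses really can pair up coherently.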

Well, you get the idea.

It's a metric of good performance with active corrections versus margin to the analog failure modes of the sensors and the real noise conditions of the environment versus speed, RPM, rush hour, whatever.

Your final acceptance criteria depend on your design specs and their verification, and then on the customer requirements or expectations and the validation of those.

P.S. All of our test gear was outrageously expensive, but we at Burroughs/Unisys had big budgets. The cheapest, simplest solution was a box placed in series with the data (before clock recovery) that added random noise with a predefined jitter percentage and limits, to reduce the "window margin" of error. This was equivalent to accelerating the BER or error rates in time by several orders of magnitude, as a cheap, time-saving parallel test.
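Under the same idealized Gaussian-jitter assumption as the earlier sketch, and with purely illustrative numbers, the acceleration factor of such a noise box can be estimated as the ratio of error rates with and without the injected jitter.

```python
# Sketch: estimate the test-time acceleration factor from injecting jitter
# (same idealized Gaussian model as above; all numbers are illustrative).
import math

def ber(window_pct: float, sigma_pct: float) -> float:
    return math.erfc((window_pct / 2.0) / (sigma_pct * math.sqrt(2.0)))

window = 100.0
sigma_nominal = 7.0                                # intrinsic jitter, % of clock
sigma_injected = math.hypot(sigma_nominal, 10.0)   # add 10% RMS jitter in quadrature

acceleration = ber(window, sigma_injected) / ber(window, sigma_nominal)
print(f"BER nominal:  {ber(window, sigma_nominal):.1e}")
print(f"BER injected: {ber(window, sigma_injected):.1e}")
print(f"acceleration: ~{acceleration:.0e}x fewer bits needed to see an error")
```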
 
P.S. A significant caveat to my analogy.

Although an HDD is possibly the most complex set of technologies in a tiny box, it still pales against the worst-case challenges facing an automotive driver. The HDD also rarely gives you a warning, because drives are pretty reliable and have sophisticated error correction and some redundancy, whereas car drivers need feedback that warns of low margins so they can gauge their trust in the car and road conditions. Failure is not an option.

The HDD has solved most of its aerodynamic, electromagnetic, and antistatic problems inside a class-10 HEPA-filtered environment, with precision ultra-high-speed rotary servos and heads flying at over 100 MPH on the highest-RPM drives at sub-micron flying heights. Each year has brought a new generation of technology evolution, so that instead of the frequent errors of decades ago, drives are now expected to perform error-free, that is, until they die with no warning from the servo click-of-death, usually after 5 to 10 years. Yet humans expect better than this from driving. Accidents happen every day in major cities, and I know ADAS features will reduce accident rates, but a more aggressive driver might still cause errors for those relying on ADAS, and drivers with ADAS features may still want to be in control during rush hour, with 6 lanes of dynamic traffic at speeds where air bags may not protect them.

My point is that autonomous driving tools must become measurable and must report a better indication of road-safety margins to the driver, from all the sensors, than HDDs do. Complacent drivers with no steering wheel might only be comfortable where there are no unexpected conditions. Otherwise drivers can become complacent, like Windows users who often blame failures on apps when their own setup is to blame, and who get no warning from the infamous S.M.A.R.T. error tracking, which rarely predicts failure. Yet some users still lose all their data, without backup, to a servo click-of-death (missing embedded servo signals due to contaminants or scratches).

 
