
Modelsim "Coverage"


shaiko

Hello,

Has anyone here used Modelsim "Coverage" for FPGA design?
If so, how helpful is it?
How easy is it to use?
 

I've used it in the past; it wasn't hard to use. It would give you a good idea of whether your test case coverage was good enough that you didn't miss any logical branches in your code. I've found that most of the problems I had came from not quite knowing the exact system behavior, so I would miss some really subtle cases of burstiness (over/underflow of FIFOs) that only showed up after the code was running in a live system, none of which would be indicated by the code coverage.
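
For anyone who hasn't tried it, the basic flow is only a handful of commands. The sketch below assumes Verilog sources and makes up the file and module names; the exact +cover options depend on your Modelsim/Questa version, so treat it as a starting point rather than gospel.

    # Minimal sketch of enabling code coverage in Modelsim (Verilog sources assumed;
    # the +cover letters -- b=branch, c=condition, s=statement, t=toggle -- and all
    # file/module names here are examples, so check the docs for your version).
    vlib work
    vlog +cover=bcst rtl/my_dut.v tb/tb_top.v   ;# compile with coverage instrumentation
    vsim -coverage work.tb_top                  ;# simulate with coverage collection enabled
    run -all
    coverage save tb_top.ucdb                   ;# save the coverage database for reporting/merging
    quit -f
    # Afterwards, from the OS shell:
    #   vcover report tb_top.ucdb               (summary of what the run actually hit)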

The most annoying thing about using it was that management made it a point that you had to show your coverage was over 95%. The problem was that large portions of the logic (~10%) were for dealing with errors that should not occur, were added for recovery purposes, and were extremely difficult or impossible to trigger from a top-level testbench. This meant using Modelsim force commands to make some of those errors show up (what a pain in the a**), especially as many of the simulations used random numbers to decide when packets would be transferred, so of course changes could cause the random sequence to change (had to record the random start seed for runs :-()
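
In case it saves someone the same pain, the error injection and seed bookkeeping looked roughly like the sketch below. The signal path, run times and seed value are made up for illustration, and -sv_seed only helps if your randomization is SystemVerilog based; otherwise record whatever seed mechanism your testbench uses.

    # Sketch of injecting a "should never happen" error with force and pinning the
    # random seed so the run is repeatable.  Signal paths, times and the seed are
    # placeholders, not from a real design.
    vsim -coverage -sv_seed 123456789 work.tb_top
    run 10 us
    force -freeze sim:/tb_top/dut/crc_error 1   ;# drive the error condition the TB can't create
    run 1 us
    noforce sim:/tb_top/dut/crc_error           ;# release the signal back to the design
    run -all
    coverage save tb_top_err.ucdb               ;# keep this run's coverage for the merge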

I haven't used it much lately and haven't seen any difference in the number and types of problems to debug in the lab during system integration (those problems are still due to not knowing the system behavior perfectly).
 
It is a useful feature, and easy to use too. But one problem with Modelsim is that coverage merging is not accurate...
 

Re: Modelsim "Coverage"

One problem with Modelsim is that coverage merging is not accurate...
Please elaborate.

- - - Updated - - -

The most annoying thing about using it was that management made it a point that you had to show your coverage was over 95%

I see your point.
Instead of a "code coverage tool" it quickly became a "bureaucratic **s coverage tool"
 

What you usually do is write N test cases to cover all N branches, so each test case covers its respective branch. Finally you have to merge all the coverage results to get 100% coverage. But what I have observed with Modelsim is that after merging the individual results, you don't get 100%. What you get is around 80 or 90%...
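
To be concrete, the flow I mean is roughly the sketch below (the +TESTNAME plusarg and file names are placeholders for however your environment selects a test). The surprise is in the very last report: the merged total comes out lower than the individual runs suggest.

    # Roughly the per-test flow: each run saves its own UCDB, then vcover merges them.
    # +TESTNAME is a made-up plusarg standing in for however the environment picks a test.
    vsim -coverage -c work.tb_top +TESTNAME=test1 -do "run -all; coverage save test1.ucdb; quit -f"
    vsim -coverage -c work.tb_top +TESTNAME=test2 -do "run -all; coverage save test2.ucdb; quit -f"
    # ...one run per branch/test case, then merge and report offline:
    vcover merge merged.ucdb test1.ucdb test2.ucdb
    vcover report merged.ucdb    ;# this total is where I see 80-90% instead of the expected 100%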
 
Re: Modelsim "Coverage"

What you usually do is write N test cases to cover all N branches, so each test case covers its respective branch. Finally you have to merge all the coverage results to get 100% coverage. But what I have observed with Modelsim is that after merging the individual results, you don't get 100%. What you get is around 80 or 90%...

That's odd. And "odd" is the polite form of "big fat bug" if what you say is really what's happening. Are you certain you covered all the cases and merged them correctly? Because running separate test cases and then adding up all the bins to see which branches got hit is pretty much an elementary operation for a test coverage tool, IMO.

- - - Updated - - -

The most annoying thing about using it was that management made it a point that you had to show your coverage was over 95%.
That's because the typical management types didn't pay attention during non-linear physics in kindergarten. 100% test coverage can only be 2 times as much work as 50% coverage, right? And possibly it's even less work than that factor of 2 due to economies of scale. ;-)

But yes, I know the sort of problem. And in a previous job I got so many of those on a daily basis that I learned to not fight it. Go with the flow. You want a 99.9999% guarantee of Whatever It Is Today? No worries, we can do that. The consequences of that request are such & such, and it will cost you roughly this much. Usually followed by a WHAAAAAAAT?!? Give a rough breakdown of the time & costs and that's that. And after they settle down a little you explain a couple of more reasonable strategies + time/cost, and then your favorite management person gets to choose which one he wants. You sure? Yes. Confirm over e-mail with your management dude on the CC (aka no backpedaling :p) and Bob's your uncle.

I've found that by addressing the mismatch between expectation & reality you can usually fix this sort of problem. There's always the difficult person, but those are significantly less than 50% of the PMs I've had to deal with.
 

Re: Modelsim "Coverage"

But yes, I know the sort of problem. And in a previous job I got so many of those on a daily basis that I learned to not fight it. Go with the flow. You want a 99.9999% guarantee of Whatever It Is Today? No worries, we can do that. The consequences of that request are such & such, and it will cost you roughly this much. Usually followed by a WHAAAAAAAT?!? Give a rough breakdown of the time & costs and that's that. And after they settle down a little you explain a couple of more reasonable strategies + time/cost, and then your favorite management person gets to choose which one he wants. You sure? Yes. Confirm over e-mail with your management dude on the CC (aka no backpedaling :p) and Bob's your uncle.

I've found that by addressing the mismatch between expectation & reality you can usually fix this sort of problem. There's always the difficult person, but those are significantly less than 50% of the PMs I've had to deal with.
Yes, this is how I've normally dealt with the problem, but it's really annoying when management gets fixated on the latest Holy Grail of verification because some smooth-talking salesperson says it will fix all the problems, without understanding the implications of using it. Then it takes the engineering staff months to convince management it wasn't the panacea they thought it was.

- - - Updated - - -

I just realized I'm probably coming off as a cynical old fuddy duddy. ;-)
 

Re: Modelsim "Coverage"

Yes, this is how I've normally dealt with the problem, but it's really annoying when management gets fixated on the latest Holy Grail of verification because some smooth-talking salesperson says it will fix all the problems, without understanding the implications of using it. Then it takes the engineering staff months to convince management it wasn't the panacea they thought it was.

Heh. I submit that six sigma is for losers. We want twelve sigma! Much better. 12 > 6, so must be better!

But here as well the thing is not to convince people as if you have a vested interest. I found that a professionally channeled lack of interest works well. Of course I am interested in a good technical result, but you've got to keep the problem owner the problem owner. Aka That Guy who is also not me. You get paid for managing the project? Excellent, your problem, you manage it. I just tell you what does what and what costs how much. Then you pick whichever option best fits your constraints.

The only trick there is to make sure that you & your management person are on the same page. Otherwise things can become an uphill battle. As in, you have to have some mandate to "be difficult in a friendly fashion" towards project managers at times. ;)

- - - Updated - - -

I just realized I'm probably coming off as a cynical old fuddy duddy. ;-)
The mark of a true experienced person. ;-)
 

Another one to watch out for is daily TNS (total negative slack) figures from your overnight seed sweep.
The problem is you can cut the numbers down quite quickly with some sweeping timing specs or area constraints, but you get the pain when the last few ns refuse to go down by much on a day-to-day basis.
 

Another one to watch out for is daily TNS (total negative slack) figures from your overnight seed sweep.
The problem is you can cut the numbers down quite quickly with some sweeping timing specs or area constraints, but you get the pain when the last few ns refuse to go down by much on a day-to-day basis.

I "trained" my last manager to understand that I would run those seed sweeps to verify the timing stability of the design and that anything under 75% of the seeds passing timing with a 0 TNS was probably a bit unstable and that future changes would result in problems with TNS in subsequent runs. Then I would give them the option of having me poke around in the design to see where it was having problems and fix them or just work on the next project. Depending on the chances of future changes, they sometimes would have me look at making it closer to a 100% passing.
 

Re: Modelsim "Coverage"

That's odd. And "odd" is the polite form of "big fat bug" if what you say is really what's happening. Are you certain you covered all the cases and merged them correctly? Because running separate test cases and then adding up all the bins to see which branches got hit is pretty much an elementary operation for a test coverage tool, IMO.

- - - Updated - - -

This is what my experience has been. Maybe others can also share their experiences with Modelsim coverage here...
 

Re: Modelsim "Coverage"

This is what my experience has been. Maybe others can also share their experiences with Modelsim coverage here...
Curious. Were there any particular methods you used to work around this behavior?
 

Re: Modelsim "Coverage"

Curious. Were there any particular methods you used to work around this behavior?

So what I did was to test all possible scenarios in a single test case. Our test environment consisted of a set of tasks, so I developed one case where I called all possible tasks one after the other. Maybe some corner cases could not be covered, but I got a good number at the end of the case...

- - - Updated - - -

Maybe not an elegant solution, but it definitely made my manager happy :)
 

So basically 2 types of test runs?

1) Separate cases: gets you the test results and doesn't require you to re-run everything when you make a small change, but the drawback is that the coverage numbers don't always add up.

2) All in one huge test that includes all the separate cases: takes annoyingly long, but at least the coverage numbers are correct, and you'd only run it every once in a while to make sure your coverage is according to plan.

Something like that?
 
