
SystemVerilog testbench requires a model, how does one verify the model itself?


matrixofdynamism
Let's take the example of a microprocessor design. This is a complex design and can benefit from constrained-random, transaction-level, self-checking testbenches in SystemVerilog. When using constrained-random tests, it is important to use a model that generates the expected output. This is then compared with the actual output of the RTL design of the microprocessor for verification.
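To make it concrete, this is roughly the kind of self-checking loop I have in mind (the class, model function and driver task names below are just placeholders, not from any real design):

Code:
// Rough sketch of a constrained-random, self-checking check loop.
// cpu_model_execute() and drive_dut() are placeholders for the
// reference model and the DUT driver/monitor respectively.
class Instruction;
  rand bit [31:0] opcode;
  // Keep the randomization inside a legal subset of opcodes.
  constraint legal_c { opcode[31:26] inside {6'h00, 6'h08, 6'h23, 6'h2B}; }
endclass

task automatic run_random_test(int unsigned num_trans);
  Instruction tr = new();
  bit [31:0] exp_result, act_result;
  repeat (num_trans) begin
    if (!tr.randomize()) $fatal(1, "randomize() failed");
    exp_result = cpu_model_execute(tr.opcode); // expected output from the model
    drive_dut(tr.opcode, act_result);          // actual output from the RTL
    if (exp_result !== act_result)
      $error("Mismatch: opcode=%h exp=%h act=%h",
             tr.opcode, exp_result, act_result);
  end
endtask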

If we get a mismatch between the output of the model and the RTL design, it is possible that there is a bug in the design. However, it is also possible that there is a bug in the model, or even in both.

How does one verify the model used in simulation?
Somebody told me long ago that if a testbench is too complex, we may need to verify the testbench itself. How would one do that?
 


You would have additional testbenches to help validate the model, just like you have testbenches to validate your design using a model. No difference.
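As a minimal sketch (the model function and the expected values below are only placeholders), such a model-only testbench can just be a set of directed, known-answer checks taken from the specification:

Code:
// Directed unit test for the reference model alone, independent of the DUT.
// cpu_model_execute() and the opcode/result constants are placeholders.
module cpu_model_tb;
  initial begin
    assert (cpu_model_execute(32'h2002_0005) == 32'h0000_0005)
      else $error("model: ADDI result is wrong");
    assert (cpu_model_execute(32'h0000_0000) == 32'h0000_0000)
      else $error("model: NOP should leave the result at zero");
    $display("cpu_model directed tests done");
    $finish;
  end
endmodule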

Kevin Jennings
 

That is fine. But the question arises because the design is too complex to be verified using directed tests, which is why we need constrained-random tests and, therefore, a model. But the model itself will be quite complex too, so how does one verify the model first?

Since you have pointed out that there should be a testbench for the model itself, that means it would use directed tests to verify the model rather than constrained-random tests. But since the model is complex, we would need a lot of directed tests. Why not run these directed tests on the actual design itself rather than spend precious time verifying a model that is equivalent to the design at its interface? I am even more confused now.
 

I can sort of see the point. You have the main testbench, which is so complex that it needs to be checked and may even need debugging. So to check/debug it you need another testbench to check the main one. But this secondary testbench can probably be a lot simpler.

It is just one step further from what you do in simpler cases. In simpler cases you have a complex but not-too-complex testbench, and you also check whether it is working properly, but just with a human in the loop to see if the output looks like it should. When that becomes unwieldy you take it one step further and testbench the testbench.

That said, I've never had to really do that, so is there some reading material on this particular topic?

Well, I've sort of done the test-the-test thing, but only by dumping values and then doing statistical analysis. The "if I don't get a Gaussian for this and that, the testbench is broken" and "thou shalt not cross-correlate" type of thing. But never SystemVerilog checking SystemVerilog. So links to any reading material would be much appreciated. :)
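For what it's worth, here is a rough sketch of that kind of distribution check done directly in SystemVerilog instead of offline; the names and the assumption of a flat distribution are made up for illustration:

Code:
// Dump randomized stimulus into a histogram and sanity-check the spread.
module stimulus_distribution_check;
  class Stim;
    rand bit [7:0] value;   // unconstrained, so it should be roughly uniform
  endclass

  int unsigned hist[256];

  initial begin
    Stim s = new();
    repeat (100_000) begin
      void'(s.randomize());
      hist[s.value]++;
    end
    // With ~390 expected hits per bin, an empty bin means the
    // randomization (or the constraint solver setup) is broken.
    foreach (hist[i])
      if (hist[i] == 0)
        $error("bin %0d was never hit", i);
  end
endmodule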
 

That is fine. But the question arises because the design is too complex to be verified using directed tests, which is why we need constrained-random tests and, therefore, a model. But the model itself will be quite complex too, so how does one verify the model first?

At some level, there should be a 'golden model' which you cannot modify and whose expected output you must be able to match. I don't know if that exists in the particular case that you describe, but let's assume that it does for the moment. Then you would push inputs into the 'golden model' and capture what comes out, perhaps to an output file. The testbench would push the same inputs into the DUT model, capture what comes out, and compare it to the output file, asserting as soon as there is a mismatch.
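A rough sketch of that compare step, assuming the golden model has already written its expected outputs as hex values to a text file (the file format and the get_next_dut_output() task are placeholders):

Code:
// Read the golden model's expected outputs back and compare them,
// one by one, against what the DUT produces.
task automatic check_against_golden(string fname);
  int fd;
  bit [31:0] exp_val, act_val;
  fd = $fopen(fname, "r");
  if (fd == 0) $fatal(1, "cannot open %s", fname);
  while ($fscanf(fd, "%h", exp_val) == 1) begin
    get_next_dut_output(act_val);  // placeholder: sample the next DUT output
    assert (act_val === exp_val)
      else $fatal(1, "mismatch: expected %h, got %h", exp_val, act_val);
  end
  $fclose(fd);
endtask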

Since you have pointed out that there should be a testbench for the model itself, that means it would use directed tests to verify the model rather than constrained-random tests.
Maybe, but maybe not. Kind of depends on what we're talking about for a model.
But since the model is complex, we would need a lot of directed tests. Why not run these directed tests on the actual design itself rather than spend precious time verifying a model that is equivalent to the design at its interface? I am even more confused now.
Since you stated you wanted to use a microprocessor design as an example, here is a basic flow of how I might go about it if there is no 'golden model':
- Microprocessors do not do useful work in a vacuum. They are surrounded by other ICs such as memory, controllers, interface controllers, etc., that are all assembled onto a PCBA.
- All of those other ICs can have a model produced to emulate their activity. Step 1, then, is to assemble all of the models that you will need to model your PCBA.
- Some of those models you can fairly easily create by writing code. That code, for each IC, should be a standalone entity that will later be instantiated. If you create the model for a part, you should also create a testbench for that part and test it (see the small sketch after this list).
- Other models of commercial parts, such as memories, are readily available. Here you are trusting that the person who modeled those parts did their work correctly and produced a useful model.
- Create a model for the PCBA which instantiates each of the components and interconnects them per the schematic/netlist. If you're creating the PCBA design yourself, then your CAD package can probably output a complete VHDL/Verilog golden model for the PCBA.
- The microprocessor will need code to execute, so model that as well.
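As an example of the third bullet, a trivial stand-alone component model and its own small testbench might look like this (the memory size and the test values are made up):

Code:
// A minimal synchronous RAM model that can later be instantiated
// in the board-level model, plus its own stand-alone testbench.
module sram_model #(parameter AW = 10, DW = 8)
  (input  logic          clk, we,
   input  logic [AW-1:0] addr,
   input  logic [DW-1:0] wdata,
   output logic [DW-1:0] rdata);
  logic [DW-1:0] mem [0:(1<<AW)-1];
  always_ff @(posedge clk) begin
    if (we) mem[addr] <= wdata;
    rdata <= mem[addr];           // registered read
  end
endmodule

module sram_model_tb;
  logic clk = 0, we;
  logic [9:0] addr;
  logic [7:0] wdata, rdata;
  sram_model dut (.*);
  always #5 clk = ~clk;
  initial begin
    // Write one location, read it back, check the value.
    we = 1; addr = 10'h3A; wdata = 8'h5C; @(posedge clk);
    we = 0;                               @(posedge clk);
    #1 assert (rdata == 8'h5C) else $error("SRAM model readback failed");
    $finish;
  end
endmodule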

Run your simulator until you're satisfied that the design is complete.

Truthfully, I'm just guessing that this is what you're getting at when you say that the model is 'complex', so maybe clarify which parts of the testbench model you think are too complex. The basic approach here is that you build up higher-level system models from smaller, lower-level models that have already been tested and are easier to test on their own. Testing at the system level for many conditions is sometimes not practical.

If you have a complex system, then do expect that you'll be spending more time and effort in verification than you are with your actual design. That's the way verification testing goes.

Kevin Jennings
 

