Verifying Large ASIC


khtsoi

Hi all,

My previous approach is to break the system into small pieces (maybe hierarchically) down to a stage where the feature points cannot be further decomposed, then create a collection of test cases to cover all these feature points.

Now we have a system with about 10 modules (each is a DSP implemented in Verilog). Each module has 3~10 parameters, which by itself gives a large number of combinations. These modules are then connected through a routing network (NoC) so that data can flow through them in many different orders, which is another large combination; we have counted over 40 possible data paths. Since the parameters are orthogonal, and the modules alter the data size, which changes the parameters further down the flow, it is unlikely that a single test case can cover many different combinations.

The number of test cases for the individual modules has grown to over 500 now. Combined with the possible orderings of the data flow, we would never tape out before fully covering every possible use case. Any suggestions on how we should manage the verification of this type of system? Thanks in advance!

Regards,

Brittle
 

The top-level test bench should focus on connectivity. All wires/signals between the modules must be verified, but there are too many possible values/combinations to test them all.
Make sure that you can detect missing or swapped connections.
You must rely on the test bench for each module to detect internal module errors.
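
To make that concrete, here is a minimal sketch of such a connectivity check: drive a unique signature onto each inter-module link and check that it arrives at the expected destination port, which catches missing, stuck, or swapped wires without exercising any DSP functionality. This is only an illustration; the link count and names are made up, and in a real bench the pass-through wiring below would be replaced by hierarchical force/observe references into the DUT.

Code:
// Connectivity smoke test (sketch): give every inter-module link its own
// signature value and check it shows up at the expected destination.
module tb_top_connectivity;
  localparam int NUM_LINKS = 8;          // hypothetical link count

  logic [31:0] link_src [NUM_LINKS];     // driven where a module output would be
  wire  [31:0] link_dst [NUM_LINKS];     // sampled where the next module's input is

  // In a real bench these would be hierarchical references into the DUT
  // (e.g. force u_dut.u_dsp0.data_out, observe u_dut.u_noc.port_in[0]).
  // Here the "DUT" is a simple pass-through so the example is self-contained.
  for (genvar g = 0; g < NUM_LINKS; g++) begin : g_wires
    assign link_dst[g] = link_src[g];
  end

  initial begin
    for (int i = 0; i < NUM_LINKS; i++)
      link_src[i] = 32'hA5A5_0000 | i;   // unique per-link signature
    #1;
    for (int i = 0; i < NUM_LINKS; i++)
      assert (link_dst[i] == (32'hA5A5_0000 | i))
        else $error("link %0d is missing, stuck, or swapped", i);
    $display("connectivity check done");
    $finish;
  end
endmodule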
 


This is good advice. For really big SoCs, you will see people doing verification by actually running the intended application since it is just not feasible to cover all combinations of modules and their parameters, use cases, states, etc.
 

If I am not wrong, this is where FPGA prototyping for ASICs comes in. As mentioned by std_match, after the top-level TB does what it is intended for, FPGA prototyping can save a lot of time.
 

Thanks for the advice from both of you. Yes, we have unit tests for all the individual modules, where the parameter ranges, margins, typical values, etc. are tested to cover the combinations. It is nice to know that others have worked out how to address these large parameter-space issues. Thanks!
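
For the module-level side, one way to keep the parameter cross-product under control is to elaborate the parameterized DSP only at its corner settings (min/typical/max per parameter) and leave the data values to randomization. A rough sketch, assuming a hypothetical dsp_core module with WIDTH and DEPTH parameters (the instantiation is commented out since that module is not shown here):

Code:
// Parameter-corner bench (sketch): 3 x 3 = 9 corner elaborations of a
// hypothetical dsp_core instead of the full WIDTH x DEPTH cross-product.
module tb_dsp_corners;
  localparam int WIDTHS [3] = '{8, 16, 32};    // assumed min / typ / max
  localparam int DEPTHS [3] = '{4, 64, 1024};  // assumed min / typ / max

  for (genvar w = 0; w < 3; w++) begin : g_w
    for (genvar d = 0; d < 3; d++) begin : g_d
      // dsp_core #(.WIDTH(WIDTHS[w]), .DEPTH(DEPTHS[d])) u_dut ( /* ports */ );
      // One driver/checker pair per corner instance would go here,
      // with the data values left to constrained randomization.
    end
  end
endmodule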
 

We have an FPGA prototype, but due to limited budget and team experience we don't have a proper emulator to run the full design. We can only fit 1/4 of the core design (excluding the peripherals) into the largest available Altera S10 FPGA. That is why we are worried about the quality of our verification using the prototyping approach. Thanks!
 

If I am not wrong, this is where FPGA prototyping for ASICs comes in. As mentioned by std_match, after the top-level TB does what it is intended for, FPGA prototyping can save a lot of time.

Yes and no. Emulation is good for speeding up verification, but unless you have a really good emulation platform (e.g. Palladium), debugging gets messy. Simulation at least gives you full visibility.

The issue I was hinting at is that verification of complex SoCs is so overwhelming that you try to answer the question "does it run my application?" instead of "does it run any application?". This is, of course, assuming the SoC is CPU-centric and you can exercise the intended functionality by writing software. Another issue in verifying modern SoCs is that you need verification for performance and verification for power. These are hard-to-grasp concepts at first... but essentially you need to make sure the intended application runs at a speed X while burning power Y. Verifying this in a simple SoC is already tough (bus contention makes the analysis somewhat chaotic). In a multi-tasking, cache-enhanced CPU it gets even tougher. Too many unknowns.

All of this being said, we still try to do block-level verification and top-level verification. Throw as much manpower and machine power as you have at your disposal... but accept that while the block-level verification can be creative and constrained-random, the top level is most likely going to be very directed.
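
For what it is worth, here is a sketch of what the block-level constrained-random side could look like in SystemVerilog: randomized transactions biased toward the data sizes that matter, plus a covergroup cross so you can stop when the combinations you actually care about have been hit. All names, ranges, and the transaction count are made up for illustration.

Code:
// Block-level constrained-random stimulus with functional coverage (sketch).
class dsp_txn;
  rand int unsigned frame_len;
  rand bit [1:0]    mode;

  constraint c_legal {
    frame_len inside {[16:4096]};
    // Bias toward the frame sizes that change parameters downstream.
    frame_len dist { [16:64] := 3, [65:1024] := 1, [1025:4096] := 3 };
  }
endclass

module tb_dsp_block;
  dsp_txn txn = new();

  covergroup cg with function sample (bit [1:0] m, int unsigned len);
    cp_mode  : coverpoint m;
    cp_len   : coverpoint len {
      bins small = {[16:64]};
      bins mid   = {[65:1024]};
      bins large = {[1025:4096]};
    }
    cp_cross : cross cp_mode, cp_len;  // the combinations you need evidence for
  endgroup

  cg cov = new();

  initial begin
    repeat (1000) begin
      void'(txn.randomize());
      cov.sample(txn.mode, txn.frame_len);
      // drive txn into the DSP and check against a reference model here
    end
    $display("functional coverage = %0.1f%%", cov.get_coverage());
    $finish;
  end
endmodule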
 
