
A way to reduce simulation time

Raeiu

Newbie level 5
Joined
Jan 19, 2024
Messages
8
Helped
0
Reputation
0
Reaction score
0
Trophy points
1
Activity points
83
Hi everyone. I have a component which is simulated for 200 ms, where 200 ms is approximately equal to 255*2880*271 ns:
- 271 is the clock period in ns.
- 255 is the clock divider (an 8-bit counter).
- 2880 is the number of counts the simulation must perform (a 12-bit counter).
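(Working that product out: 255 × 2880 × 271 ns = 199,022,400 ns ≈ 199 ms, consistent with the 200 ms quoted above.)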

I need a way to shorten the counters to reduce simulation time and make the output more readable, because as it is I get something like 1,470,000 lines of output (one line is written on every clock hit).

I cannot modify the code.
I need to do this in ModelSim or in some other way. I thought about using Tcl to stop the counters and make them start counting from 0 again, but I don't know whether there is a command for that.
Any ideas?
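For what it's worth, ModelSim's Tcl commands when and force -deposit can do something close to this from the simulator prompt, without touching the source. Below is a minimal sketch, assuming a hypothetical signal path /tb/dut/clk_div_cnt for the 8-bit divider and a design in which the divider's rollover is what advances the 12-bit counter; the literal and radix syntax may need adjusting depending on whether the design is VHDL or Verilog.

```tcl
# Each time the 8-bit divider reaches 4, deposit 250 into it so that it
# still rolls over (and still produces its terminal-count tick), but each
# division now takes about 10 clocks instead of 255.
when {/tb/dut/clk_div_cnt == 00000100} {
    force -deposit /tb/dut/clk_div_cnt 11111010
}
run -all
```

The -deposit option overwrites the current value but lets the design's own logic keep driving the signal afterwards, so the counter simply continues from the deposited value. This distorts the divided-clock period, so it only helps with readability of the output, not with checking real timing.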
 
It's hard for us humans to immerse ourselves in the internal workings of a circuit. In hardware your component finishes in 1/5 of a second. Simulators, however, have an inherent shortcoming: you're simultaneously monitoring a fast process and a slow process in one simulation, so a shortcut needs to be found.

At human speed you cannot possibly watch several minutes of counting up to a million and a half. Even the routine that prints on screen takes a bit of time (whether in hardware or in simulation), and scrolling up a line takes a bit of time too. Maybe it saves time to clear the screen each loop.

Or instead of incrementing by one each loop, can you increment by 10000?

Or, instead of counting up and comparing to a specific number after each loop, see if it's quicker to load that number into the counter at the start, then decrement and compare to zero after each loop.
 
271 is the clock period in ns.
Am I reading it correctly?
You are trying to do something with a 36.9 Hz clock?

If this is correct, there might be one idea to reduce simulation time...
For simulation, use a clock frequency that is 10x, 100x, or 1000x the current one (depending on how fast you want to simulate), i.e. a higher clock frequency that has a definite relationship to your actual/intended clock frequency.
 
Am I reading it correctly?
You are trying to do something with a 36.9 Hz clock?

If this is correct, there might be one idea to reduce simulation time...
For simulation, use a clock frequency that is 10x, 100x, or 1000x the current one (depending on how fast you want to simulate), i.e. a higher clock frequency that has a definite relationship to your actual/intended clock frequency.
I guess you meant 3.69 MHz.
Anyway, how long the simulation takes does not depend on the physical clock period; it depends on things like the simulation resolution and the platform.
I don't know why one would go that far, checking every counting stage in text lines or by eyeballing. I would just check samples of the counting transition points.
The alternative of scaling down all variables/signals is worth it for critical cases.
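As an illustration of sampling only the transition points instead of reading every clock edge, ModelSim's when command can be used to log one transcript line per change of the main counter. A minimal sketch, again with hypothetical signal paths (the expression syntax can differ slightly between VHDL and Verilog designs):

```tcl
# Print one transcript line per main-counter transition (roughly 2880 lines)
# instead of one line per clock edge.
when {/tb/dut/main_cnt'event} {
    echo "$now : main_cnt = [examine /tb/dut/main_cnt]"
}
run -all
```

This leaves the original testbench output file untouched; it just produces a much smaller, sampled view alongside it.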
 
Am I reading it correctly?
You are trying to do something with a 36.9 Hz clock?

If this is correct, there might be one idea to reduce simulation time...
For simulation, use a clock frequency that is 10x, 100x, or 1000x the current one (depending on how fast you want to simulate), i.e. a higher clock frequency that has a definite relationship to your actual/intended clock frequency.
Yes. But, my bad, I explained myself badly. The testbench saves its output to a file. I need to reduce that output (the product 256*2880 = 737280 lines of output). The simulation time is a consequence of this product. I hope that is clearer.
 
Have you tried using an alternative simulator for this task?

If you only need console output from the simulation, maybe you should try something like Icarus Verilog or Verilator; they are much faster than ModelSim. But you might need to adapt your code somewhat.
 
Have you tried using an alternative simulator for this task?

If you only need console output from the simulation, maybe you should try something like Icarus Verilog or Verilator; they are much faster than ModelSim. But you might need to adapt your code somewhat.
I can only use ModelSim. I have to follow the test guidelines.
 
Although a professor doesn't need to know or care how much time you spend on an exercise...
In the workaday world 'time is money'. Your employer won't enjoy seeing you pore over lengthy logs of a simulation.

We could say this is a test of your resourcefulness, your ability to save time. You need every strategy you can think of to focus on the bottlenecks in your program and get through them as quickly as you can. Break up the tasks and create several simulations:
a) one to slow down events that take place at high speed (nanoseconds)
b) another to speed up events that happen at low speed (milliseconds)
 
You should look at the French solution from Aniah (aniah.fr). Their support allows a significant gain in simulation speed.
 
SPICE has trouble when you mix very fast and very slow signals. All fast, or all slow, no problem. But the presence of anything fast drives short timesteps, while the slow part dictates how many of those timesteps are needed, and hence the data volume.

Consider just what you really want from this simulation. I expect that counts 1-2046 are probably of zero interest after you prove them once. From there, find the critical path and the from-to step that delivers the worst timing case, and exercise just that, starting from an initial condition that gets you to the "from" state, at the worst-case timing PVT corner.
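In the ModelSim flow the original poster is tied to, one way to reuse an initial condition that gets you to the "from" state is the simulator's checkpoint/restore mechanism: run the long preamble once, save the simulator state, and start later runs from that saved state. A rough sketch (the times and file name are made up, and checkpoint/restore has tool-specific restrictions worth checking in the ModelSim documentation):

```tcl
# First session: run up to just before the window of interest, then save state.
run 190 ms
checkpoint before_window.cpt

# Later sessions: restore the saved state and simulate only the window of
# interest, for example:
#   vsim -restore before_window.cpt
#   run 10 ms
```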
 
