
[SOLVED] [DISCUSSION] Maximum ram usage for Sonnet/HFSS simulation


JLHW

Hi, I'm just curious: what is the maximum RAM usage you've seen for the projects you've simulated in either or both of these programs?

Is it possible to reach extreme RAM usage, say 64 GB or more, for a single project file?
 

This is a strange question. With a dense enough mesh, you can always push the RAM requirement to the limit.

You asked about Sonnet and spiral inductors earlier - for that application, using conformal mesh instead of staircase mesh can often reduce the memory requirement.
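To illustrate why mesh density drives RAM so hard: a planar method-of-moments solver such as Sonnet fills a dense complex matrix over its N subsections, so matrix memory grows with N squared. A minimal back-of-envelope sketch, assuming one complex double (16 bytes) per matrix entry; real solvers add workspace on top, so treat these numbers as a lower bound:

```python
# Rough estimate: dense MoM matrix memory vs. subsection count.
# Assumes 16 bytes (complex double) per entry -- an assumption,
# not a published Sonnet figure.
def mom_matrix_gib(n_subsections: int, bytes_per_entry: int = 16) -> float:
    """Memory in GiB for a dense n x n complex matrix."""
    return n_subsections**2 * bytes_per_entry / 2**30

for n in (10_000, 30_000, 60_000):
    print(f"{n:>6} subsections -> {mom_matrix_gib(n):6.1f} GiB")
```

On this estimate, roughly 60,000 subsections is already enough to fill 64 GB with the matrix alone, which is why halving the subsection count (e.g. via conformal mesh) pays off quadratically.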
 

Thanks for the reply! I'm aware that I can reduce RAM usage by using conformal mesh and increasing the mesh size; I'm just curious whether anyone here has hit high RAM usage on their projects, with or without making an effort to reduce the memory requirement. This thread isn't actually related to my previous question on inductors.
 

"I'm just curious whether anyone here has hit high RAM usage on their projects"

Yes, up to 48 GB, because that was the RAM limit of my machine back then. I used that to verify mesh density at a few frequency points, i.e. to make sure that the difference to the coarse mesh was small enough.

"with or without making an effort to reduce the memory requirement"

When I started using Sonnet, they told me this: "Simulation always takes one night, no matter how fast the computer is." In other words, users start spending time on smarter modelling once that magic (overnight) limit is exceeded. Indeed, I've found that to be true for my own work, and also for a lot of customer models.
 
I use an off-the-shelf server with 1 TB of RAM for simulating large arrays of complex structures. One HFSS instance; the machine has 24 cores, so I can usually run up to 48 simultaneous parametric sweeps for small structures, but for the large arrays I usually run just one task. It's not uncommon to have > 15M tetrahedra. No problems with single files containing multiple projects (running Scientific Linux).

I have run out of RAM; sadly that happens no matter how much you have -- but that was for a 12x12x12 array of intricate 3D objects.
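For a finite-element solver like HFSS, memory scales roughly linearly with tetrahedron count rather than quadratically. A hypothetical ballpark sketch, assuming ~2 kB of solver memory per tetrahedron (an assumed average, not an HFSS-published figure; basis order and direct-solver fill-in can push it several times higher):

```python
# Very rough FEM memory estimate. KB_PER_TET is an assumed ballpark,
# not a vendor-published constant -- adjust for your solver settings.
KB_PER_TET = 2.0

def fem_memory_gib(n_tets: int, kb_per_tet: float = KB_PER_TET) -> float:
    """Approximate solver memory in GiB for a mesh of n_tets tetrahedra."""
    return n_tets * kb_per_tet * 1024 / 2**30

print(f"15M tets -> ~{fem_memory_gib(15_000_000):.0f} GiB")
```

Under that assumption, a 15M-tetrahedron mesh lands in the tens of GiB, which is consistent with large array problems being the ones that exhaust even a 1 TB machine.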
 
The structure in that case consisted of a number of small 100 µm features, but was cubic and approximately 20 mm on each side.

I still can't overemphasize what has been said: more RAM doesn't fix everything. It's critical to be able to simulate smarter, rather than trying to brute force a simulation with more resources.
 
Agreed. Especially since I have only 4 GB of soldered RAM on my laptop, I had to configure my simulations to produce reasonably accurate results while staying just below my RAM limit. Compromises had to be made.
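Working that kind of budget backwards gives a feel for what a tight RAM limit allows. A sketch for the planar-MoM case, assuming a dense complex-double matrix at 16 bytes per entry (same hedge as before: solvers need workspace beyond the matrix, so these counts are optimistic):

```python
import math

# Invert the dense-matrix memory formula: largest N x N complex-double
# matrix (16 bytes/entry, an assumption) that fits in a given RAM budget.
def max_unknowns(ram_gib: float, bytes_per_entry: int = 16) -> int:
    """Largest unknown count whose dense matrix fits in ram_gib GiB."""
    return math.isqrt(int(ram_gib * 2**30) // bytes_per_entry)

print(max_unknowns(4))    # 4 GiB laptop  -> 16384
print(max_unknowns(64))   # 64 GiB budget -> 65536
```

So a 4 GB laptop caps out around 16k unknowns for the matrix alone, while 64 GB only buys about 4x more, which is why meshing smarter beats adding RAM.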
 
