
Duplicate points (and values) in HFSS fields export


Hash

Dear HFSS users,

A colleague of mine spotted that when exporting a field quantity on a surface or a volume with the field calculator (using the "Write" button), the ASCII file generated by HFSS contains many duplicate points: the same (x,y,z) coordinates and, more disturbing, different values associated with them! According to ANSYS support this is normal behaviour. However, depending on the geometry, the dispersion between the values for the same point can be very large, so if you use these files as input to further processing, you may be in for surprises.

HFSS uses tetrahedra, so a point of the mesh is connected either to 6 vertices, or to 2 when it lies on an edge. And indeed, searching for duplicate entries in the .fld file returns either 6 or 2 duplicates per point (but sometimes other counts as well... I don't know why). The HFSS documentation says that a quantity is stored from tangential values at the vertices, so I understand that the values given in the .fld file correspond to these tangential values.

So, I would like to ask you:
1) are you aware of this behaviour?
2) how do you deal with it?

To illustrate the problem, here is a short Python script to analyze an HFSS export file (.fld):

Python:
import pandas as pd

filename = 'E_inner_conductor.fld'
# import the .fld file as a pandas dataframe
# (the first two lines of the file are header lines, hence skiprows=2)
df = pd.read_csv(filename, skiprows=2, delimiter=' ', index_col=False,
                 names=['x', 'y', 'z', 'Ex', 'Ey', 'Ez'])

# Find the number of duplicated (x,y,z) points
# keep only the (x,y,z) coordinates
points = df[['x', 'y', 'z']]
# remove duplicates, if any
unique_points = points.drop_duplicates()
# did it find some duplicated (x,y,z) points?
n_dup = len(points) - len(unique_points)
print(f'Number of duplicate points in {filename}: {n_dup} '
      f'(i.e. {n_dup / len(points) * 100:.1f}% of {len(points)} points)')

For this example, I obtain 33% of points that are duplicates!

Analysing the data, most points are duplicated 2 or 6 times, but not only:
Figure_2.png
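
For reference, here is a minimal sketch (reusing the df loaded by the script above) to compute this distribution of duplicate counts without plotting:

Python:
# how many times does each (x,y,z) point appear in the file?
multiplicity = df.groupby(['x', 'y', 'z']).size()
# distribution of these multiplicities (1 = unique point, 2, 6, ...)
print(multiplicity.value_counts().sort_index())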


The worst thing is that the duplicated values are not the same!

Sometimes the relative error between the mean of these duplicates and the duplicated values is negligible:

dispersion1.jpg

but sometimes not, depending on the geometry!
dispersion.jpg
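
In case it helps the discussion, here is one way I could imagine quantifying this dispersion and collapsing the duplicates. It is only a sketch, assuming the Ex, Ey, Ez columns of the export are real-valued magnitudes as above; averaging the duplicates is just one possible workaround, not necessarily the physically correct one:

Python:
import numpy as np

# field magnitude at each exported row
df['E'] = np.sqrt(df.Ex**2 + df.Ey**2 + df.Ez**2)

# relative spread of the duplicated values around their mean, per (x,y,z) point
grouped = df.groupby(['x', 'y', 'z'])['E']
rel_spread = (grouped.max() - grouped.min()) / grouped.mean()
print(rel_spread.describe())

# possible workaround: collapse the duplicates by averaging them
df_avg = df.groupby(['x', 'y', 'z'], as_index=False).mean()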
 

