Issue with get_quality_stats_at_geometry

Cubit: 15.5
MacOS 11.4 Big Sur

Hello! I am trying to use the Cubit Python API to identify malformed elements in my model. I am using get_quality_stats_at_geometry(), and I would like to threshold on scaled Jacobian values between -1 and 0. I have tried all kinds of settings to adjust the threshold so that only elements with negative Jacobians are returned, but the function always returns the opposite set. Any ideas on how to address this issue?

Btw, I have no problem doing this in plain Cubit command syntax:

quality volume all scaled jacobian global high 0 low -1 draw mesh

Is there a way for me to use a cubit.cmd() call and get the output from this process?

Thanks,
Andy

Hi Andy,

I thought I knew what this function was doing. I don't get it either. I'm going to have to debug the threshold settings to see if there is a problem there. There is not a simple way to get the contents of the command window.

One possible workaround would be to do
cubit.cmd("set logging on file ā€˜quality.logā€™ ")
cubit.cmd(ā€œquality volume 1 scaled jacobian global high 0 low -1ā€)
cubit.cmd(ā€œset logging offā€)

Then open and read the data out of the file. That's the best I have for you right now.
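
For example, a minimal sketch of the read-back step might be (the log is just whatever the quality command prints to the command window, so any parsing of element ids or values out of it is up to you):

with open("quality.log") as f:
    quality_log = f.read()

# quality_log now holds the captured command-window text; scan or parse it
# for the scaled Jacobian summary that the quality command printed.
print(quality_log)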

Karl

Thank you, Karl. I appreciate you looking into the issue. I'll try your suggestion of using the log file.

Best regards,
Andy

You should be able to loop over each element and query its quality using the get_quality_value() method:

cubit.cmd("reset")
cubit.cmd("sphere r 1")
cubit.cmd("brick x 1")
cubit.cmd("cylinder radius 1 height 4")

cubit.cmd("mesh volume all")

query_volume = 3
HEX = cubit.parse_cubit_list("hex", f"in volume {query_volume}")

hex_quality = []
for hex_id in HEX:
  hex_quality.append(cubit.get_quality_value("hex", hex_id, "scaled jacobian"))

Does that do what you need?

I considered this type of approach, but wondered if it could be computationally expensive for large meshes (500K-1M elements). It's worth looking into. I will give it a try and report on my findings.

Thanks!
Andy

Hereā€™s a benchmark:

import time

cubit.cmd("reset")
cubit.cmd("sphere r 1")
cubit.cmd("bri x 1")
cubit.cmd("cylinder radius 1 height 4")

cubit.cmd("volume all size 0.0125")
cubit.cmd("mesh vol all")

print(f"Total Hexes: {cubit.get_hex_count()}")

query_volume = 3
HEX = cubit.parse_cubit_list("hex", f"in vol {query_volume}")

print(f"Query Hexes: {len(HEX)}")

t0 = time.perf_counter()
hex_quality = []
for hex_id in HEX:
  hex_quality.append(cubit.get_quality_value("hex", hex_id, "scaled jacobian"))

t1 = time.perf_counter()
print(f"Elapsed Time: {t1-t0} seconds")
print(f"Rate: {len(HEX)/(t1-t0)} elements/second")

Which results in:

Total Hexes: 7,530,560
Query Hexes: 6,733,760
Elapsed Time: 9.475 seconds
Rate: 710,652 elements/second

So, I wrote a test function that emulates the get_quality_stats_at_geometry function based on gvernon's sample code.

import numpy as np

def cubit_get_quality_stats(geom_type, mesh_type, geom_ids, metric_name, low_threshold, high_threshold):
    mids = []; quality = []; th_set = []; th_set_vals = []
    for geom in geom_ids:
        elements = cubit.parse_cubit_list(mesh_type, f"in {geom_type} {geom}")
        for mesh_id in elements:
            val = cubit.get_quality_value(mesh_type, mesh_id, metric_name)
            quality.append(val)
            mids.append(mesh_id)
            # Keep the elements whose metric falls inside the threshold window.
            if low_threshold <= val <= high_threshold:
                th_set.append(mesh_id)
                th_set_vals.append(val)
    if quality:
        q = np.array(quality)
        output = [np.min(q), np.max(q), np.mean(q), np.std(q),
                  mids[np.argmin(q)], mids[np.argmax(q)],
                  0, 0, len(th_set), th_set, th_set_vals]
    else:
        output = [0]*9 + [[], []]
    return output
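
For reference, a minimal usage sketch (the volume ids here are placeholders; the -1 to 0 window is the "bad element" range from my original question):

stats = cubit_get_quality_stats("volume", "hex", [1, 2, 3],
                                "scaled jacobian", -1.0, 0.0)
# Unpack in the same order the function builds its output list.
q_min, q_max, q_mean, q_std, min_id, max_id, _, _, n_bad, bad_ids, bad_vals = stats
print(f"{n_bad} elements in threshold range; worst scaled jacobian {q_min} at hex {min_id}")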

I am looping over about 30 files with various components/assemblies and calling the function to perform a system-wide quality analysis. I'm executing the calls with a ProcessPoolExecutor so that I can process files in parallel (nprocs=20); a rough sketch of the driver is included after the timings. I timed the calls using both the built-in get_quality_stats_at_geometry function and my test emulator function. The performance comparison is below:

Built-in get_quality_stats_at_geometry():

Elapsed Time:      5.553 seconds
Rate: 674290.049 elements/second
Total element count is: 3744428

Test function: cubit_get_quality_stats():

Elapsed Time:      7.292 seconds
Rate: 513496.370 elements/second
Total element count is: 3744428
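
For context, a rough sketch of the driver loop (the glob pattern, file locations, and the open command are placeholders for my actual setup; each worker process gets its own Cubit session and reuses the cubit_get_quality_stats function above):

import glob
from concurrent.futures import ProcessPoolExecutor

def analyze_file(cub_file):
    # Each worker imports and initializes its own Cubit instance.
    import cubit
    cubit.init([''])
    cubit.cmd("reset")
    cubit.cmd(f'open "{cub_file}"')
    vols = cubit.parse_cubit_list("volume", "all")
    return cubit_get_quality_stats("volume", "hex", vols,
                                   "scaled jacobian", -1.0, 0.0)

if __name__ == "__main__":
    cub_files = sorted(glob.glob("models/*.cub"))  # placeholder path for the ~30 files
    with ProcessPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(analyze_file, cub_files))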

As you can see, the built-in function is slightly faster than my test function, but it's not too bad. I'm pleased to see how fast the test function performed. I think my processing rate is lower (even with the multiprocessing) because I'm working on multiple files.

Btw, I've been using Cubit Python in a Jupyter Lab notebook and initializing with cubit.init(['']) for each instance. It's a really nice environment for building complex scripts.
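
In case it helps anyone else, my per-notebook setup is basically the following (the sys.path entry is a placeholder for wherever your Cubit installation keeps its Python bindings):

import sys
sys.path.append("/path/to/Cubit/bin")  # placeholder: directory containing the cubit module
import cubit
cubit.init([''])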
