So, I wrote a test function that emulates the get_quality_stats_at_geometry function based on gvernon's sample code.
import numpy as np
import cubit

def cubit_get_quality_stats(geom_type, mesh_type, geom_ids, metric_name,
                            low_threshold, high_threshold):
    mids = []          # mesh element ids, parallel to quality
    quality = []       # quality value for each element
    th_set = []        # ids of elements within the threshold window
    th_set_vals = []   # their quality values
    for geom in geom_ids:
        elements = cubit.parse_cubit_list(mesh_type, f"in {geom_type} {geom}")
        for mesh_id in elements:
            val = cubit.get_quality_value(mesh_type, mesh_id, metric_name)
            quality.append(val)
            mids.append(mesh_id)
            if low_threshold <= val <= high_threshold:
                th_set.append(mesh_id)
                th_set_vals.append(val)
    if quality:
        q = np.array(quality)
        output = [np.min(q), np.max(q), np.mean(q), np.std(q),
                  mids[np.argmin(q)], mids[np.argmax(q)],
                  0, 0, len(th_set), th_set, th_set_vals]
    else:
        output = [0]*9 + [[], []]
    return output
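The stats assembly at the end relies on np.argmin/np.argmax returning positional indices that index back into the parallel mids list. A small self-contained check with made-up ids and values (no Cubit session needed):

```python
import numpy as np

# Hypothetical element ids and quality values (not from a real mesh)
mids = [101, 102, 103, 104]
quality = [0.91, 0.42, 0.77, 0.99]

q = np.array(quality)
# argmin/argmax return positions, which map back into the parallel mids list
worst_id = mids[np.argmin(q)]   # element with the lowest quality value
best_id = mids[np.argmax(q)]    # element with the highest quality value
print(worst_id, best_id)        # 102 104
```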
I am looping over about 30 files with various components/assemblies and calling the function to perform a system-wide quality analysis. I'm executing the calls using a ProcessPoolExecutor so that I can process files in parallel (nprocs=20). I timed the call using the built-in get_quality_stats_at_geometry function and my test emulator function. Performance comparison is below:
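For reference, the parallel driver is roughly this shape; the file names and the analyze_file worker below are placeholders, since the real worker would initialize its own Cubit session, open the file, and run the quality-stats function:

```python
from concurrent.futures import ProcessPoolExecutor

def analyze_file(path):
    # Placeholder worker: the real script would cubit.init(['']),
    # open `path`, and run the quality-stats function on its geometry.
    return (path, "ok")

if __name__ == "__main__":
    files = [f"model_{i}.cub" for i in range(30)]  # hypothetical file list
    with ProcessPoolExecutor(max_workers=20) as pool:
        # One worker process per file, up to 20 at a time
        results = list(pool.map(analyze_file, files))
    print(len(results))  # one result per file
```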
Built-in get_quality_stats_at_geometry():
Elapsed Time: 5.553 seconds
Rate: 674290.049 elements/second
Total element count is: 3744428
Test function: cubit_get_quality_stats():
Elapsed Time: 7.292 seconds
Rate: 513496.370 elements/second
Total element count is: 3744428
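The elapsed-time and rate summaries above can be produced with a simple time.perf_counter wrapper; the report_rate helper here is my own illustration, not part of Cubit:

```python
import time

def report_rate(elapsed_seconds, element_count):
    # Format the same elapsed/rate summary as in the comparison above
    rate = element_count / elapsed_seconds
    return (f"Elapsed Time: {elapsed_seconds:.3f} seconds\n"
            f"Rate: {rate:.3f} elements/second\n"
            f"Total element count is: {element_count}")

start = time.perf_counter()
# ... run the quality analysis here ...
elapsed = time.perf_counter() - start
print(report_rate(elapsed if elapsed > 0 else 1.0, 3744428))
```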
As you can see, the built-in function is slightly faster than my test function, but the gap isn't bad, and I'm pleased with how fast the test function performed. I think my processing rate is lower (even with the multiprocessing) because I'm working across multiple files.
Btw, I've been using Cubit Python in a JupyterLab notebook, initializing with cubit.init(['']) for each instance. It's a really nice environment for building complex scripts.