How to use Coreform Cubit and Python scripting to perform a refinement study (with openCFS)

On January 13, 2022, we will be hosting a webinar with guest presenter Klaus Roppert on how to use Coreform Cubit and Python scripting to perform a refinement study with openCFS.

Klaus was kind enough to share the slides, journal file, and Python scripts he will be using in the webinar here. Feel free to download them in advance if you’d like, and register to watch the webinar here: https://us02web.zoom.us/webinar/register/WN_qkK8xpMXRgy1uaNCkg1ACw
presentation.pdf (2.0 MB)
eddy_quarter.jou (4.7 KB) eddy_master.py (455 Bytes) create_mesh.py (9.1 KB)

Thanks Klaus!


Hello Matt,

I wanted to thank you and Klaus for the interesting webinar today! While watching it, I noticed a few things that have already caused me many headaches in the past. Therefore, I want to share my insights on how to avoid unnecessary frustration!

My experience is that it is not a good idea to extend the sys.path of an external Python interpreter with the path to the Cubit modules, as shown in create_mesh.py! This is because you have to make sure that your Python interpreter is 100% binary-compatible with Cubit’s interpreter (i.e. Python minor version, OS, bitness, UCS2 vs. UCS4 strings, etc.). And even if that is the case, other binary Python packages (e.g. VTK) can still cause the external interpreter to crash due to incompatible DLLs when imported at the same time!

The background is that the Cubit Python package is still implemented (at least in 2022) as a binary wrapper around the Cubit runtime. And mixing too many runtimes inside the address space of a single process, namely that of Python, is never a good idea! The bottom line is that Cubit forces you to use a certain version of the Python interpreter. If you need a second package in your workflow that depends on a different version of Python, you are screwed!

In my opinion, extending sys.path is okay as long as you just want to do some quick-and-dirty trials. But if you want to create stable workflows that need to keep working for years to come, you absolutely cannot depend on the binary compatibility of the Cubit Python modules with an arbitrary external Python interpreter! This is because you have absolutely no control over which Python interpreter Cubit will use in the future (Python 2.7, 3.6, 3.8, who but Coreform knows that??!).
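If you do go the sys.path route anyway, it is worth at least guarding against the most obvious mismatch before importing anything. A minimal sketch (the expected version tuple is an assumption; look up which Python your Cubit installation actually bundles):

```python
import sys

# ASSUMPTION: (3, 8) is only an example value; check the interpreter
# version that ships with your Cubit installation.
CUBIT_PYTHON = (3, 8)

def interpreter_matches(expected=CUBIT_PYTHON):
    """True if this interpreter's major.minor version matches the one Cubit
    bundles. Necessary but NOT sufficient: OS, bitness, and compiler ABI
    must match as well."""
    return sys.version_info[:2] == tuple(expected)

# Gate the path extension on the check instead of appending blindly:
if interpreter_matches():
    sys.path.append('/path/to/Cubit/bin')
else:
    print("Interpreter is not binary-compatible with Cubit; "
          "using the batch interface is safer.")
```

This only catches the version mismatch; crashes from incompatible DLLs of other binary packages can of course still occur.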

For the moment, I suggest that one rather just uses the features available in the Python interpreter inside Cubit. You can then start all your Cubit-related Python tasks using Cubit’s batch interface:

coreform_cubit -batch -nographics -nojournal -nooverwritecheck my_cubit_script.py
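If you need to orchestrate such batch runs from an external Python interpreter (e.g. for a refinement loop), a safe pattern is to launch this command as a subprocess, so Cubit’s binary runtime never enters your interpreter’s address space. A minimal sketch (the script name is a placeholder; the executable name is the one from the command above):

```python
import subprocess

def cubit_batch_command(script_path, cubit_exe="coreform_cubit"):
    # Build the same command line as shown above; cubit_exe must be on
    # PATH or given as an absolute path.
    return [cubit_exe, "-batch", "-nographics", "-nojournal",
            "-nooverwritecheck", script_path]

def run_cubit_batch(script_path, **kwargs):
    # Run Cubit in its own process and capture its output, so the external
    # interpreter and Cubit never share binary runtimes.
    result = subprocess.run(cubit_batch_command(script_path, **kwargs),
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("Cubit run failed:\n" + result.stderr)
    return result.stdout

# Example: run_cubit_batch("my_cubit_script.py")
```

Results can then be exchanged through files the script exports (meshes, APREPRO dumps), and failures detected from the return code and captured stderr.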

For the future, I would find it really helpful if Cubit could provide the Python scripting interface in, e.g., a client/server-based way. Then it would no longer be necessary to mix binary runtimes inside an external Python interpreter.

The other suggestion I wanted to make, concerning advanced APREPRO scripting, was about the SurfaceAt() APREPRO function. It comes in handy if you do not have full control over the geometry (maybe you imported it from a STEP file) and need to find surface ids based on coordinate positions:

https://coreform.com/cubit_help/appendix/aprepro/aprepro_functions.htm

create volume loft surface {SurfaceAt(0, 0, 0.73, 2)} {SurfaceAt(0, 0, 0.731, 1)}

I also found the more general Sandia APREPRO manual quite helpful:

https://gsjaardema.github.io/seacas-docs/aprepro.pdf

If one absolutely needs to import the Cubit module into an external Python interpreter, I found that there are some things to keep in mind:

  1. Only one cubit.init() / cubit.destroy() cycle is possible, because the Python interpreter will crash if any Cubit command is called after the destroy.
  2. Even if cubit.destroy() has already been called, the license will still stay occupied until the Python process exits!
  3. It is good practice to
    • do as much as possible with journal files and APREPRO, since these run mostly independently of the Cubit or Python version
    • introduce an extra APREPRO variable that contains the success state of the model, since some journal commands might indicate errors even if the model has been built correctly.
    • check the error state after a journal file has been processed from Python.
import sys

# Append path to Cubit module to sys.path
sys.path.append('/path/to/Cubit/bin')
import cubit

# Initialize Cubit for batch mode
cubit.init(['-batch', '-nographics', '-nojournal'])

# If you are running inside a "reset and build anew" loop, you have to make
# sure to manually delete all previously existing APREPRO variables, since
# these will not be deleted by the journal reset command!
aprepro_vars = list(cubit.get_aprepro_vars())
for varName in aprepro_vars:
    cubit.cmd('#{delete(\'' + varName + '\')}')

# Set some new APREPRO variables
cubit.cmd('#{my_string_var = "' + 'Georg' + '"}')
cubit.cmd('#{my_int_var = ' + str(120) + '}')
cubit.cmd('#{my_float_var = ' + str(345.67) + '}')

# Play back a journal file, which should run from the Cubit GUI, from batch
# mode, or even from an external Python interpreter. Moreover, it is much
# easier to debug a journal file than calls to the Python API in the Cubit GUI.
cubit.cmd('Playback "my-model.jou"')

# Iterate over all current APREPRO variables to print them or do something else
# with them.
success = False
aprepro_vars = list(cubit.get_aprepro_vars())
for varName in aprepro_vars:
    varValue = cubit.get_aprepro_value_as_string(varName)
    print(varName, varValue)
    if 'Centroid' in varName:
        centroid = float(varValue)
    
    # I found it to be good practice to introduce an additional APREPRO
    # variable (e.g. here msg), which contains the success state of the
    # model. Some journal commands (e.g. undo commands in batch mode) can
    # set the error state (cf. cubit.get_error_count()) even if the model
    # has been built successfully.
    if varName == 'msg' and 'Everything is fine' in varValue:
        success = True

# https://coreform.com/cubit_help/appendix/python/namespace_cubit_interface.htm
surface_inlet = int(cubit.get_aprepro_value_as_string('surface_inlet'))
surface_outlet = int(cubit.get_aprepro_value_as_string('surface_outlet'))
surf_inlet_normal = cubit.get_surface_normal(surface_inlet)
surf_outlet_normal = cubit.get_surface_normal(surface_outlet)
cubit_error_count = cubit.get_error_count()
cubit.cmd('reset')

# Normally, one would call cubit.destroy() here to release any resources
# associated with Cubit. But any further Cubit command after cubit.destroy()
# will crash the external Python interpreter! Therefore, only call destroy
# once you are really sure you do not need Cubit anymore! Another shortcoming
# of the current Cubit Python interface is that the Cubit license stays
# occupied even after cubit.destroy() has been called!
cubit.destroy()

Here is an example of an accompanying journal file (i.e. my-model.jou):

# Set some new APREPRO variables
#if(!my_string_var)
#{my_string_var = "Franz"}
#endif
#if(!my_int_var)
#{my_int_var = 80}
#endif
#if(!my_float_var)
#{my_float_var = 500.56}
#endif

# https://coreform.com/cubit_help/appendix/aprepro/aprepro_functions.htm
create Cylinder height {my_float_var} radius {my_int_var} 
#{CentroidX = GeomCentroid_X("volume", 1)}
# #{volume_cylinder = VolumeAt(0, 0, 0, 1)}
#{volume_cylinder = Id("volume")}
volume {volume_cylinder} rename "cylinder1"
#{surface_outlet = SurfaceAt(0, 0, my_float_var/2, 1)}
surface {surface_outlet} rename "cylinder1_outlet"
#{surface_inlet = SurfaceAt(0, 0, -my_float_var/2, 1)}
surface {surface_inlet} rename "cylinder1_inlet"
export step "Geom.stp"  overwrite 

#{msg = "Everything is fine"}