I want to compare two approaches to mesh generation, but I am having difficulty deciding which metrics (from Cubit's quality assessment) I should use to compare them.
I have meshes generated for the same object using hexahedral elements and tetrahedral elements. The object is the wall of a blood vessel (a thin annular tube). One approach is to fill the volume across the thickness with tetrahedra; the other is to sweep a surface mesh across the thickness.
Some objectives I could think of from an FEA point of view were:
no slivers
average element quality close to the ideal (not sure which metric to use)
fewer elements (ideally, fewer degrees of freedom)
Can anyone suggest guidelines for making such a comparison?
If you look at the mesh quality from the simulation accuracy point of view, a lot will depend on the type of analysis you are running and the underlying element formulations.
For example, linear tetrahedral elements are generally a bad choice for bending-dominated problems (which may be the case for blood vessel walls), and mesh topology metrics alone would not tell you that.
This paper is still pretty current in the sense that there is not a similarly thorough analysis for non-simplicial meshes, higher order, or vector-valued problems.
I suggest that your best bet is to plot time versus accuracy for your particular problem. Ideally, use an error estimator to compute a good mesh density for your given problem. If you are interested in certain functionals of the solution rather than the error in some norm (e.g., the energy norm), use an adjoint-based error estimator to compute where elements are needed, and use accuracy in those functionals of interest for the time-versus-accuracy plot.
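To make the comparison concrete, here is a minimal sketch of how you might summarize such a work-precision study: fit the slope of log(error) versus log(time) for each mesh family and compare the curves. The numbers below are illustrative placeholders, not real measurements; substitute the times and functional errors from your own hex and tet runs.

```python
import math

# Hypothetical (wall-clock seconds, error in the functional of interest)
# samples for the two mesh families. Replace with your own measurements.
hex_runs = [(1.0, 1e-2), (4.0, 2.5e-3), (16.0, 6.2e-4)]
tet_runs = [(1.0, 3e-2), (4.0, 1.4e-2), (16.0, 7e-3)]

def observed_rate(runs):
    """Least-squares slope of log(error) vs log(time).

    A more negative slope means the error drops faster per unit of
    extra compute. Compare the whole curves too, not just slopes:
    one method may win only below a certain accuracy target.
    """
    xs = [math.log(t) for t, _ in runs]
    ys = [math.log(e) for _, e in runs]
    n = len(runs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

print(f"hex: error ~ time^{observed_rate(hex_runs):.2f}")
print(f"tet: error ~ time^{observed_rate(tet_runs):.2f}")
```

The same data plotted on log-log axes (error against time) is the standard work-precision diagram; the mesh that sits lower and to the left at your target accuracy is the one to use.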
Note that solver details can make a huge difference. For example, multigrid that handles anisotropy well will favor a stretched mesh for anisotropic problems/boundary layers while multigrid that does not handle anisotropy well may prefer an isotropic mesh, even if it requires orders of magnitude more degrees of freedom to reach the same accuracy.
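A back-of-envelope count shows why this matters. Suppose you must resolve a boundary layer of thickness `delta` in a unit cube with ten cells across the layer; the layer thickness, tangential resolution, and cell counts below are made-up illustrative values, not from any particular solver.

```python
# Illustrative element counts for resolving a boundary layer of
# thickness delta in a unit cube. All numbers are hypothetical.
delta = 1e-3          # layer thickness
h_tangential = 0.05   # resolution needed along the wall
n_normal = 10         # cells across the layer in the normal direction

# Anisotropic cells: wide in the two tangential directions,
# thin (delta / n_normal) in the normal direction.
aniso_cells = (1 / h_tangential) ** 2 * n_normal

# Isotropic cells: the normal spacing delta / n_normal must be used
# in all three directions, at least inside the layer region.
h_iso = delta / n_normal
iso_cells_in_layer = (1 / h_iso) ** 2 * n_normal

print(f"anisotropic cells in layer: {aniso_cells:.0f}")
print(f"isotropic cells in layer:   {iso_cells_in_layer:.0f}")
print(f"ratio: {iso_cells_in_layer / aniso_cells:.0f}x")
```

Even for these mild parameters the isotropic mesh needs several orders of magnitude more cells in the layer, so whether your multigrid tolerates the stretched alternative dominates the mesh choice.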
Similarly, methods based on explicit matrix assembly typically do not favor high order because the matrices lose sparsity so quickly (and smoothers become more complicated due to loss of h-ellipticity, etc.). But matrix-free methods can apply the operator at a cost that is roughly independent of the order, so if you can build a preconditioner without the assembled high-order matrix, a matrix-free method may enable higher-order methods to win on the accuracy-versus-time plot.
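To see how fast assembled sparsity degrades, a standard rough estimate: for continuous tensor-product (hex) elements of degree p in 3D, an interior node couples to roughly (2p+1)^3 degrees of freedom, so that many nonzeros per matrix row. Exact counts depend on the mesh and element type; this sketch only shows the asymptotic trend.

```python
# Rough nonzeros-per-row estimate for assembled continuous
# tensor-product elements in 3D: an interior node couples to all
# dofs of the elements sharing it, roughly (2p+1)^3.
# Exact counts depend on mesh topology; the trend is what matters.
for p in (1, 2, 4, 8):
    nnz_per_row = (2 * p + 1) ** 3
    print(f"order p={p}: ~{nnz_per_row} nonzeros per matrix row")
# By contrast, a matrix-free sum-factorized operator apply costs
# O(p) flops per dof, which is why high order can win on the
# time-vs-accuracy plot once you avoid assembly.
```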
The best method is also architecture-dependent, given different memory-bandwidth-to-compute ratios, which can still have significant indirect consequences for the choice of mesh.
I hope these examples give some perspective on how big a question you are asking. There is an enormous gap between efficient meshes for simulation and local quality metrics.