Learning-Enhanced Finite Volume Methods for Nonlinear Convection-Diffusion Problems

Authors

  • Mustafa Alasadi, Basra

DOI:

https://doi.org/10.65204/djes.v3i1.321

Keywords:

Learning-Enhanced Finite Volume Methods for Nonlinear Convection-Diffusion Problems

Abstract

Background: The nonlinear convection-diffusion equation is a cornerstone of computational fluid dynamics and applied mathematics, with applications in environmental modeling, heat transfer, and reactive flows. The finite volume method (FVM) is valued for its conservation properties and robustness, but it often struggles to resolve sharp gradients and discontinuities accurately without high computational cost. Physics-informed neural networks (PINNs) offer a mesh-free, data-driven alternative, but they can suffer from slow convergence and poor generalization. To address these shortcomings, this manuscript introduces Learning-Enhanced Finite Volume Methods (LE-FVM), a new class of approaches that bridges scientific computing and machine learning by integrating advanced deep learning models, including PINNs, finite volume graph networks (FVGNs), and Kolmogorov–Arnold networks (KANs), into an FVM framework.
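The core idea of LE-FVM, a conservative finite-volume update whose interface flux can be supplied by a learned model, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: a 1D explicit finite-volume step for viscous Burgers' equation where the convective flux is a swappable callable, here a simple Roe-style upwind flux standing in for a trained network.

```python
import numpy as np

def fv_step(u, dx, dt, nu, flux_fn):
    """One explicit finite-volume step for 1D viscous Burgers' equation.

    flux_fn(uL, uR) returns the convective interface flux; in an LE-FVM
    setting a trained neural network would play this role."""
    uL, uR = u[:-1], u[1:]                       # left/right cell states at interfaces
    f = flux_fn(uL, uR)                          # convective interface flux
    diff = nu * (uR - uL) / dx                   # diffusive interface flux
    total = f - diff
    un = u.copy()                                # boundary cells held fixed
    un[1:-1] -= dt / dx * (total[1:] - total[:-1])  # conservative update
    return un

def upwind_flux(uL, uR):
    # Roe-style upwind flux for f(u) = u^2 / 2 (stand-in for a learned model)
    return np.where(uL + uR > 0, 0.5 * uL**2, 0.5 * uR**2)

# Small demo: smooth initial data steepening into a front.
x = np.linspace(0.0, 1.0, 101)
u = np.sin(2 * np.pi * x) + 1.0
for _ in range(50):
    u = fv_step(u, dx=0.01, dt=0.002, nu=0.01, flux_fn=upwind_flux)
```

Because the update subtracts flux differences across cell faces, discrete conservation in the interior holds regardless of which flux model is plugged in; that is the property LE-FVM exploits to keep learned solvers physically consistent.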

Methods: We developed physics-constrained hybrids of machine learning models embedded directly within an FVM context to improve flux determination, support in situ adaptive mesh refinement, and build end-to-end solvers constrained by integral conservation laws. Key contributions include twice-message aggregation in FVGN for irregular meshes, and adaptive loss weighting based on the Neural Tangent Kernel (NTK) to balance PDE residuals against boundary conditions. The methodology was tested extensively on standard 1D and 2D nonlinear convection-diffusion-reaction benchmarks (Burgers’, Fisher’s, Burgers–Huxley, and Newell–Whitehead–Segel equations) using error norms (L₁, L₂, L∞), statistical tests (the Wilcoxon signed-rank test and coefficient of variation), and generalization tests on unseen geometries.
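The NTK-based adaptive loss weighting described above can be sketched in a minimal form. This is an illustration in the spirit of NTK-weighted PINN training, not the paper's implementation: it assumes per-sample parameter-gradient norms for each loss term are already available, approximates each term's NTK trace by the sum of squared gradient norms, and assigns weights so that slow-learning terms (small trace) are boosted.

```python
import numpy as np

def ntk_loss_weights(grads_per_term):
    """Adaptive loss weights from diagonal NTK estimates.

    grads_per_term: list of arrays, each holding per-sample parameter-gradient
    norms for one loss term (PDE residual, boundary condition, ...). The NTK
    trace of a term is approximated by the sum of squared gradient norms;
    each weight is total_trace / term_trace."""
    traces = np.array([np.sum(g**2) for g in grads_per_term])
    return traces.sum() / traces

# Hypothetical gradient norms: PDE-residual term trains fast (large norms),
# boundary term trains slowly (small norms) and therefore gets upweighted.
w = ntk_loss_weights([np.array([1.0, 2.0]), np.array([0.5, 0.5])])
```

With these weights, the composite loss becomes a weighted sum of the PDE-residual and boundary terms, recomputed periodically during training so neither term dominates.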

Results: The LE-FVM framework achieved a success rate above 95% in delivering high-fidelity solutions across all benchmarks. For the 1D Burgers’ equation at Re=1, PINN-based LE-FVM reduced the maximum absolute error by a factor of 173 and the RMSE by a factor of 60 compared to GFEM. In 2D unsteady flow simulations, FVGN demonstrated a 77% improvement in prediction accuracy for velocity fields and a 56% reduction in training time compared to purely data-driven graph networks. The framework also exhibited superior generalization, accurately predicting flow around unseen elliptical and airfoil geometries without retraining. Mean residual errors for LE-FVM solutions were consistently on the order of 10⁻⁴ to 10⁻⁵, outperforming traditional methods by one to two orders of magnitude.
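The error measures behind these comparisons are the standard discrete norms. A minimal sketch of their usual discrete forms, assuming a uniform grid spacing `dx` (the exact normalization used in the paper may differ):

```python
import numpy as np

def error_norms(u_num, u_ref, dx=1.0):
    """Discrete L1, L2, and L-infinity error norms on a uniform grid."""
    e = np.abs(u_num - u_ref)
    return {
        "L1":   dx * e.sum(),                 # integral of |error|
        "L2":   np.sqrt(dx * np.sum(e**2)),   # root of integral of error^2
        "Linf": e.max(),                      # worst-case pointwise error
    }

norms = error_norms(np.array([1.1, 0.8]), np.array([1.0, 1.0]), dx=0.5)
```

Reporting all three together, as the benchmarks here do, distinguishes methods that are accurate on average (L₁, L₂) from those that also control localized errors near sharp gradients (L∞).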

Conclusions: The proposed LE-FVM framework successfully bridges the gap between physical fidelity and data-driven adaptability, offering a robust, efficient, and highly accurate solver for complex nonlinear systems. By embedding FVM’s conservation principles into the loss functions and architectures of modern neural networks, LE-FVM ensures physically consistent solutions while dramatically accelerating computation for parametric studies and real-time applications.

Published

2026-03-22