Description
I introduce and study a class of neural network operators whose activation
mechanism is built from cardinal B-splines. The compact support and smoothness
of B-splines lead to localized approximation processes that fit naturally
into the theory of positive linear operators. I prove convergence results and investigate
qualitative features such as shape preservation, showing that several
geometric properties of the target function are inherited by the approximants.
A central part of the analysis is the derivation of Voronovskaja-type asymptotic
formulas, which provide a refined description of the local rate of convergence
and highlight the role of the underlying spline moments. The construction is
further extended to a bivariate tensor-product setting, where analogous convergence
and asymptotic results are obtained together with axial shape-preserving
properties. Finally, I illustrate the method with numerical experiments in
image processing, reporting the standard image-quality metrics SSIM (structural
similarity index) and PSNR (peak signal-to-noise ratio) to assess the
reconstructed images.
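The abstract leaves the precise operator definition to the talk itself. As a minimal sketch of the cardinal B-spline ingredient only (the function name `cardinal_bspline` and the recursion-based implementation are illustrative choices, not taken from the talk), the standard recursion for uniform integer knots gives:

```python
def cardinal_bspline(n, x):
    """Cardinal B-spline B_n of order n (degree n - 1), supported on [0, n].

    Standard recursion for uniform integer knots:
      B_1 = indicator of [0, 1),
      B_n(x) = (x * B_{n-1}(x) + (n - x) * B_{n-1}(x - 1)) / (n - 1).
    """
    if n == 1:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    return (x * cardinal_bspline(n - 1, x)
            + (n - x) * cardinal_bspline(n - 1, x - 1)) / (n - 1)

# Compact support and the partition of unity sum_k B_n(x - k) = 1 are the
# properties underlying localized, positive approximation processes.
```

For production use, `scipy.interpolate.BSpline.basis_element` with knots `range(n + 1)` yields the same function.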
Keywords: Neural network operators; positive linear operators; B-splines;
shape preserving approximation; Voronovskaja-type formula; image processing.
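Of the two reported metrics, PSNR is computed directly from the mean squared error. A short reference implementation (the helper name `psnr` and the `data_range` default are illustrative assumptions, not from the talk):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in decibels between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM, by contrast, compares local means, variances, and covariances over sliding windows; in practice `skimage.metrics.structural_similarity` from scikit-image is commonly used for it.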