Computational and Applied Mathematics Seminar
Title: Learning Fréchet Differentiable Operators Via Prespecified Neural Operators
Speaker: Linhao Song (Central South University)
Time: Friday, November 7, 2025, 14:00
Venue: Room 205, Building 2, Hainayuan (海纳苑)
Abstract: Neural operators, built on neural networks, have emerged as a crucial tool in deep learning for approximating nonlinear operators. The present work develops an approximation and generalization theory for neural operators with prespecified encoders and decoders, improving and extending previous work by focusing on target operators that are Fréchet differentiable. To extract the smoothness feature, we expand the target operator by the Taylor formula and apply a re-discretizing technique. This enables us to derive an upper bound on the approximation error for Fréchet differentiable operators, and to achieve improved rates of approximation, under some properly chosen classes of encoders and decoders, compared to those for Lipschitz continuous operators. Furthermore, we establish an upper bound on the generalization error for the empirical risk minimizer induced by prespecified neural operators. Explicit learning rates are derived when encoder-decoder pairs are chosen via polynomial approximation and principal component analysis. These findings quantitatively demonstrate how the reconstruction errors of infinite-dimensional spaces and the smoothness of target operators influence learning performance.
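As a rough sketch of the encoder-decoder pipeline described in the abstract (all names, dimensions, and the toy operator u ↦ u + 0.1u² below are illustrative assumptions, not the speaker's construction), one can prespecify PCA encoder/decoder maps and place a finite-dimensional surrogate between the latent spaces; here a linear least-squares fit stands in for the neural-network component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discretization grid and a small dictionary of sine modes.
grid = np.linspace(0.0, 1.0, 64)
modes = np.array([np.sin((k + 1) * np.pi * grid) for k in range(5)])

# Synthetic input functions: random combinations of the 5 modes, so the
# inputs live in a 5-dimensional subspace that PCA can encode exactly.
coeffs = rng.uniform(-1.0, 1.0, size=(200, 5))
U = coeffs @ modes

# Toy stand-in for a Frechet differentiable target operator: u -> u + 0.1 u^2.
V = U + 0.1 * U**2

def pca_maps(X, d):
    """Prespecified PCA encoder/decoder built from the top-d directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:d]
    encode = lambda x: (x - mean) @ comps.T   # function sample -> R^d
    decode = lambda z: z @ comps + mean       # R^d -> function sample
    return encode, decode

enc_in, dec_in = pca_maps(U, d=5)
enc_out, dec_out = pca_maps(V, d=20)

# Finite-dimensional map fitted between the two latent spaces; a linear
# least-squares fit stands in for the neural-network component.
W, *_ = np.linalg.lstsq(enc_in(U), enc_out(V), rcond=None)

def neural_operator(u):
    return dec_out(enc_in(u) @ W)

# Sanity check: the prespecified encoder/decoder reconstructs the training
# inputs up to rounding, since they span only a 5-dimensional subspace.
recon_err = np.max(np.abs(dec_in(enc_in(U)) - U))
print(f"input reconstruction error: {recon_err:.2e}")
```

The reconstruction error of the encoder-decoder pair is exactly the quantity whose effect on learning rates the abstract refers to; here it is near machine precision only because the synthetic inputs are finite-rank.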
Speaker bio: Linhao Song received his Ph.D. from the School of Mathematical Sciences, Beihang University, during which he was also a jointly supervised student at the School of Data Science, City University of Hong Kong. He is currently a lecturer at Central South University. His research interests focus on statistical learning theory and deep learning theory, and his papers have appeared in journals such as the Journal of Fourier Analysis and Applications and Neural Networks.