Lecture Title: Local Randomized Neural Networks Methods for Interface Problems
Speaker: Prof. Fei Wang
Host: Prof. Shengfeng Zhu
Start Time: 2024-06-13 10:00
Venue: Tencent Meeting, Meeting ID: 596578494
Organizer: School of Mathematical Sciences
Speaker Bio:
Fei Wang is a professor and doctoral supervisor at the School of Mathematics and Statistics, Xi'an Jiaotong University, and an associate editor of Commun. Nonlinear Sci. Numer. Simul. He received his Ph.D. in mathematics from Zhejiang University in 2010. From 2010 to 2012 he taught at Huazhong University of Science and Technology; from 2012 to 2013 he was a visiting assistant professor at the University of Iowa; and from 2013 to 2016 he was a Research Associate at Pennsylvania State University. He was selected for Xi'an Jiaotong University's Young Top Talent Program, Category B (associate professor) in 2015, Shaanxi Province's Hundred Young Talents Program in 2017, and Xi'an Jiaotong University's Young Top Talent Program, Category A (professor) in 2022. His research field is numerical analysis and scientific computing, with main interests including finite element analysis and its applications, numerical methods for variational inequalities, and neural network methods for solving partial differential equations. He has led two General Program projects and one Young Scientists Fund project of the National Natural Science Foundation of China, and has published fifty papers in international SCI journals, including top journals in computational mathematics such as SIAM J. Numer. Anal., IMA J. Numer. Anal., Numer. Math., and Comput. Methods Appl. Mech. Eng.
Abstract:
Accurate modeling of complex physical problems, such as fluid-structure interaction, requires multiphysics coupling across an interface, which often has intricate geometry and dynamic boundaries. Conventional numerical methods face challenges in handling interface conditions. Deep neural networks offer a mesh-free and flexible alternative, but they suffer from drawbacks such as time-consuming optimization and local optima. In this talk, we introduce a mesh-free approach based on Randomized Neural Networks (RNNs), which avoid iterative optimization solvers during training, making them more efficient than traditional deep neural networks. Our approach, called Local Randomized Neural Networks (LRNNs), uses different RNNs to approximate the solution in different subdomains. We discretize the interface problem into a linear system at randomly sampled points across the domain, boundary, and interface using a finite difference scheme, and then solve it by a least-squares method. For time-dependent interface problems, we use a space-time approach based on LRNNs. We show the effectiveness and robustness of the LRNNs methods through numerical examples of elliptic and parabolic interface problems. We also demonstrate that our approach can handle high-dimensional interface problems. Compared to conventional numerical methods, our approach achieves higher accuracy with fewer degrees of freedom, eliminates the need for complex interface meshing and fitting, and significantly reduces training time, outperforming deep neural networks.
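To illustrate the core idea described in the abstract, below is a minimal sketch of an LRNN-style solver for a 1D elliptic interface problem -(βu')' = f with piecewise-constant β and a single interface. It is not the speaker's exact formulation: the problem setup, tanh random features, sampling densities, and all parameter choices are illustrative assumptions, and derivatives of the features are computed analytically here rather than by the finite difference scheme mentioned in the abstract. Because the hidden weights are drawn randomly and frozen, only the linear output weights are unknown, so enforcing the PDE, boundary, and interface conditions at random collocation points yields one linear least-squares solve in place of gradient-based training.

```python
# Sketch of a Local Randomized Neural Network (LRNN) least-squares solver
# for a 1D elliptic interface problem (illustrative, not the talk's exact method):
#   -(beta u')' = f on (0,1), interface at alpha, beta piecewise constant,
#   with [u] = 0 and [beta u'] = 0 across the interface.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta1, beta2, f = 0.5, 1.0, 10.0, -2.0  # assumed test configuration

# Manufactured exact solution, used only to measure the error:
#   u = x^2/beta1 on [0,alpha],  u = (x^2-alpha^2)/beta2 + alpha^2/beta1 on [alpha,1]
def u_exact(x):
    return np.where(x <= alpha,
                    x**2 / beta1,
                    (x**2 - alpha**2) / beta2 + alpha**2 / beta1)

M = 50                                 # hidden neurons per local network
a = rng.uniform(-3, 3, size=(2, M))    # fixed random hidden weights (never trained)
b = rng.uniform(-3, 3, size=(2, M))    # fixed random hidden biases

def feats(k, x):
    """tanh features of local network k and their x-derivatives."""
    z = np.outer(x, a[k]) + b[k]
    t = np.tanh(z)
    phi = t
    dphi = (1 - t**2) * a[k]                 # d/dx tanh(ax+b)
    ddphi = -2 * t * (1 - t**2) * a[k]**2    # d^2/dx^2 tanh(ax+b)
    return phi, dphi, ddphi

# Randomly sampled collocation points in each subdomain.
x1 = rng.uniform(0, alpha, 100)
x2 = rng.uniform(alpha, 1, 100)

rows, rhs = [], []
# PDE residual -beta_k u_k'' = f at interior points of each subdomain.
for k, xk, bk in [(0, x1, beta1), (1, x2, beta2)]:
    _, _, dd = feats(k, xk)
    blk = np.zeros((len(xk), 2 * M))
    blk[:, k*M:(k+1)*M] = -bk * dd
    rows.append(blk); rhs.append(np.full(len(xk), f))

# Dirichlet boundary conditions at x = 0 and x = 1.
for k, xb in [(0, 0.0), (1, 1.0)]:
    phi, _, _ = feats(k, np.array([xb]))
    blk = np.zeros((1, 2 * M)); blk[:, k*M:(k+1)*M] = phi
    rows.append(blk); rhs.append(u_exact(np.array([xb])))

# Interface conditions at alpha: [u] = 0 and [beta u'] = 0.
p1, d1, _ = feats(0, np.array([alpha]))
p2, d2, _ = feats(1, np.array([alpha]))
rows += [np.hstack([p1, -p2]),                  # u1(alpha) - u2(alpha) = 0
         np.hstack([beta1 * d1, -beta2 * d2])]  # beta1 u1' - beta2 u2' = 0
rhs += [np.zeros(1), np.zeros(1)]

# One linear least-squares solve for the output weights replaces training.
A, y = np.vstack(rows), np.concatenate(rhs)
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the max error of each local network on a fine grid.
xs1, xs2 = np.linspace(0, alpha, 200), np.linspace(alpha, 1, 200)
err1 = feats(0, xs1)[0] @ w[:M] - u_exact(xs1)
err2 = feats(1, xs2)[0] @ w[M:] - u_exact(xs2)
print("max error:", max(np.abs(err1).max(), np.abs(err2).max()))
```

Using one network per subdomain lets each local approximation stay smooth while the jump conditions are imposed only through the two interface rows of the linear system; the accuracy of such sketches typically depends on the scaling of the random hidden parameters and the number of collocation points.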