# Academic Workshop Series on Recent Advances in Spectral Methods

Abstract: The Klein-Gordon (KG) equation describes the motion of a spinless particle. In the non-relativistic limit $\varepsilon\to 0^+$ (with $\varepsilon$ inversely proportional to the speed of light), the solution of the KG equation propagates waves with amplitude $O(1)$ and wavelength $O(\varepsilon^2)$ in time and $O(1)$ in space, which causes a significant numerical burden due to the high oscillation in time. By analyzing the non-relativistic limit of the KG equation, the KG equation can be asymptotically reduced to the nonlinear Schrödinger equation (NLS) with wave operator (NLSW), i.e., the NLS perturbed by a wave operator whose strength is described by a dimensionless parameter $\varepsilon\in(0,1]$. Starting from the error analysis of finite difference methods for the NLSW and their uniform error bounds with respect to $\varepsilon$, we will present the error analysis of an exponential wave integrator sine pseudospectral method for the NLSW, with improved uniform error bounds. Finally, a uniformly accurate multiscale time integrator method will be constructed for solving the KG equation in the non-relativistic limit based on the NLSW expansion, and rigorous error bounds are established.
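For readers unfamiliar with the models, a commonly used dimensionless form of the KG equation in the non-relativistic scaling, and of the corresponding NLSW, is sketched below; the precise scaling, signs, and nonlinearity treated in the talk may differ:

$$\varepsilon^2\,\partial_{tt} u - \Delta u + \frac{1}{\varepsilon^2}\,u + f(u) = 0, \qquad x\in\mathbb{R}^d,\ t>0,$$

whose solution oscillates in time with wavelength $O(\varepsilon^2)$, and

$$i\,\partial_t z^\varepsilon - \varepsilon^2\,\partial_{tt} z^\varepsilon + \Delta z^\varepsilon + f\big(|z^\varepsilon|^2\big)\,z^\varepsilon = 0, \qquad \varepsilon\in(0,1],$$

which formally reduces to the standard NLS as $\varepsilon\to 0^+$.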

Abstract: In this talk, we investigate discontinuous Galerkin (DG) methods for nonlinear vanishing-delay and state-dependent delay differential equations. Optimal global convergence and local superconvergence results are established. By suitably designing the partitions, optimal nodal superconvergence of the discontinuous Galerkin solutions is obtained. Numerical examples are provided to illustrate the theoretical results.
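For concreteness, a typical problem class covered by such analyses (the exact setting of the talk may be more general) is the initial value problem

$$u'(t) = f\big(t,\,u(t),\,u(\theta(t))\big), \qquad t\in(0,T], \qquad u(0)=u_0,$$

where the delay vanishes at $t=0$ in the pantograph case $\theta(t)=qt$ with $0<q<1$, and is state dependent when $\theta(t)=t-\tau\big(t,u(t)\big)$ depends on the solution itself.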

Abstract: Deep neural networks with rectified linear units (ReLU) have recently become very popular due to their universal representation power and ease of training. Some theoretical progress on the approximation power of deep ReLU networks for functions in Sobolev and Korobov spaces has recently been made by several groups. In this talk, we show that deep networks with rectified power units (RePU) can give better approximations of smooth functions than deep ReLU networks. Our analyses are based on classical polynomial approximation theory and on efficient algorithms we propose to convert polynomials into deep RePU networks of optimal size without any approximation error. Our constructive proofs clearly reveal the relation between the depth of the RePU network and the "order" of the polynomial approximation. Taking into account other good properties of RePU networks, such as their higher-order differentiability, we advocate the use of deep RePU networks for problems where the underlying high-dimensional functions are smooth or where derivatives are involved in the loss function.
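As a minimal sketch of why RePU units can reproduce polynomials exactly (the function names below are illustrative and not taken from the talk), the identity $x^2=\sigma_2(x)+\sigma_2(-x)$ for the degree-2 RePU $\sigma_2(x)=\max(0,x)^2$, combined with the polarization identity for products, realizes squaring and multiplication with a single hidden layer and no approximation error:

```python
import numpy as np

def repu(x, s=2):
    """Rectified power unit sigma_s(x) = max(0, x)**s."""
    return np.maximum(0.0, x) ** s

def square_via_repu(x):
    """Exact identity x**2 = sigma_2(x) + sigma_2(-x): one hidden RePU layer, no error."""
    return repu(x) + repu(-x)

def product_via_repu(x, y):
    """Exact product via polarization: x*y = ((x + y)**2 - (x - y)**2) / 4."""
    return (square_via_repu(x + y) - square_via_repu(x - y)) / 4.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    print(np.allclose(square_via_repu(x), x ** 2))     # True
    print(np.allclose(product_via_repu(x, y), x * y))  # True
```

Composing such product blocks yields higher-degree monomials, which gives a flavor of how a polynomial approximant can be turned into a RePU network without incurring any additional approximation error.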
