Abstract

It is well known that neural networks can approximate functions for many types of activation functions. Here, we treat only neural networks with a simple, particular activation function: the rectified linear unit (ReLU). The main aim of this paper is to introduce a type of constructive universal approximation theorem and to estimate the error of the universal approximation. We obtain an optimal approximation when the basis is independent of the target function. We also prove an analogue of Debao Chen's theorem for approximation.
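To illustrate the flavor of such a constructive result, the following sketch builds a piecewise-linear interpolant of a smooth function as a constant plus a sum of shifted ReLUs, using knots chosen independently of the target function. The function names, the target `sin`, and the choice of 20 knots are illustrative assumptions, not taken from the paper.

```python
import math

def relu(t):
    # Rectified linear unit
    return max(t, 0.0)

def relu_interpolant(f, a, b, n):
    """Piecewise-linear interpolant of f on [a, b] with n pieces,
    written as f(a) plus a linear combination of shifted ReLUs.
    The knots are fixed in advance, independent of f."""
    h = (b - a) / n
    knots = [a + i * h for i in range(n + 1)]
    vals = [f(x) for x in knots]
    slopes = [(vals[i + 1] - vals[i]) / h for i in range(n)]
    # Coefficient of relu(x - knots[k]) is the slope change at that knot.
    coefs = [slopes[0]] + [slopes[k] - slopes[k - 1] for k in range(1, n)]

    def g(x):
        return vals[0] + sum(c * relu(x - k) for c, k in zip(coefs, knots))

    return g

g = relu_interpolant(math.sin, 0.0, math.pi, 20)
err = max(abs(g(x) - math.sin(x)) for x in [i * math.pi / 400 for i in range(401)])
print(err)  # interpolation error, O(1/n^2) for twice-differentiable f
```

For a twice-differentiable target, the uniform error of this construction decays like O(1/n^2) in the number of ReLU units, which is the kind of quantitative rate the paper's error estimates address.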

Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.