Abstract
Neural networks with many types of activation functions are known to approximate functions well. Here we treat only neural networks with a simple, particular activation function: the rectified linear unit (ReLU). The main aim of this paper is to introduce a constructive universal approximation theorem and to estimate the error of the universal approximation. We obtain an optimal approximation when the basis is independent of the target function, and we prove an analogue of Debao Chen's approximation theorem.
Recommended Citation
Bhaya, Eman Samir and Sharba, Zainab Abdulmunim (2020) "L_p Approximation by ReLU Neural Networks," Karbala International Journal of Modern Science: Vol. 6, Iss. 4, Article 9.
Available at: https://doi.org/10.33640/2405-609X.2362
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.