Alternating Transfer Functions to Prevent Overfitting in Non-Linear Regression with Neural Networks
- In nonlinear regression with machine learning methods, neural networks (NNs)
are well suited due to their universal approximation property, which states
that they can approximate arbitrary nonlinear functions arbitrarily well.
Unfortunately, this property also creates the problem that data points
corrupted by measurement errors can be fitted too closely, so that the
estimate deviates far from the true value in regions of the input space not
covered by data (so-called overfitting). Various established methods aim to
reduce overfitting by modifying different aspects of the training process. In this work, we pursue the
question of how an NN behaves during training with respect to overfitting when
linear and nonlinear transfer functions (TFs) are alternated across the hidden
layers (HLs). The presented approach is applied to a synthetic dataset and
compared with established methods from the literature, both individually and in
combination. Comparable results are obtained, and the common practice of using
purely nonlinear transfer functions proves not to be generally advisable.
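
To illustrate the core idea of alternating TFs across HLs, the following is a minimal sketch assuming PyTorch, tanh as the nonlinear TF, and the identity as the linear TF; the concrete architecture, layer widths, and TF choices used in the paper are assumptions here, not taken from the abstract:

```python
import torch
import torch.nn as nn

class AlternatingTFNet(nn.Module):
    """Feed-forward regressor whose hidden layers alternate between a
    nonlinear (tanh) and a linear (identity) transfer function."""

    def __init__(self, in_dim=1, hidden=32, n_hidden_layers=4, out_dim=1):
        super().__init__()
        # First HL maps the input; the remaining HLs are hidden-to-hidden.
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim, hidden)]
            + [nn.Linear(hidden, hidden) for _ in range(n_hidden_layers - 1)]
        )
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            # Alternate TFs: nonlinear on even-indexed HLs, identity on odd ones.
            if i % 2 == 0:
                x = torch.tanh(x)
        return self.out(x)  # linear output layer for regression

model = AlternatingTFNet()
x = torch.randn(8, 1)
print(model(x).shape)  # torch.Size([8, 1])
```

Whether the alternation starts with the linear or the nonlinear TF, and how many HLs of each kind are used, are design choices a study like this one would compare against purely nonlinear baselines.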