DiffAct: A Unifying Framework for Activation Functions
Saha, Snehanshu, Pandey, Aditya, Mathur, Archana, and Arun Kumar, Harshith
In 2021 International Joint Conference on Neural Networks (IJCNN)
The evolution of activation functions in deep neural networks is usually driven by fixed goals and incremental steps toward solving specific problems. We introduce differential equation activations (DiffAct), a family of activation units that improves the process of learning activation functions in complex neural network structures on different data sets. Our objective is to enable a feed-forward neural network (FNN) to learn nonlinear activation functional forms from a family of solutions to an ordinary differential equation. In this paper, we also present new activation functions in parabolic forms, derived as members of the DiffAct family. We demonstrate that DiffAct-based units achieve comparable, and sometimes superior, performance against single, fixed activation functions. The goal is to discover the internal dynamics of such activation functions via a common framework rather than to produce State-of-the-Art (SOTA) results. The paper also hypothesizes that effective inferences can be drawn from simple, one-hidden-layer architectures wherever possible.
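To make the idea concrete, here is a minimal PyTorch sketch (not taken from the paper) of a learnable parabolic activation: the coefficients a and b select a member of the solution family of the ODE y'' = 2a, and are trained by backpropagation alongside the network weights. The class name, initial values, and the specific quadratic form f(x) = a x² + b x are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class ParabolicActivation(nn.Module):
    """Illustrative learnable activation of parabolic form.

    f(x) = a * x**2 + b * x is a member of the solution family of the
    ODE y'' = 2a (constant curvature); (a, b) are learned jointly with
    the network weights. The exact functional family used by DiffAct
    may differ -- this is only a sketch of the general idea.
    """
    def __init__(self, a_init: float = 0.1, b_init: float = 1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a_init))
        self.b = nn.Parameter(torch.tensor(b_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.a * x.pow(2) + self.b * x

# A one-hidden-layer feed-forward network, the simple architecture
# the abstract mentions for drawing inferences.
model = nn.Sequential(
    nn.Linear(16, 32),
    ParabolicActivation(),
    nn.Linear(32, 1),
)

x = torch.randn(8, 16)
print(model(x).shape)  # torch.Size([8, 1])
```

Because the activation's coefficients are ordinary parameters, any optimizer that updates the linear layers also reshapes the activation, which is the sense in which the functional form itself is "learned" here.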