Evaluating the Performance of TAAF for Image Classification Models
In this paper, we present the results of evaluating a custom activation function, The Analog Activation Function (TAAF), on the MNIST and CIFAR-10 datasets. TAAF is a novel activation function designed to improve neural network performance through a unique mathematical formulation. We evaluate TAAF in a convolutional neural network (CNN) architecture, comparing it against standard activation functions on MNIST and against ELU on CIFAR-10. TAAF achieves a test accuracy of 99.39% on MNIST, slightly surpassing standard activation functions, and 79.37% on CIFAR-10, significantly exceeding ELU's 72.06% in the same architecture and suggesting improved generalization capabilities. These results establish a solid performance baseline for TAAF across different image classification tasks.
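The comparison described above hinges on holding the architecture fixed and swapping only the activation. The sketch below, which is not the authors' code, illustrates how such an evaluation could be wired up in PyTorch; since the abstract does not state TAAF's formula, the `TAAF` module here is a hypothetical placeholder, and the CNN layout is a generic assumption rather than the paper's exact architecture.

```python
# Minimal sketch (assumptions: PyTorch, a generic small CNN, and a
# placeholder TAAF body) of evaluating a custom activation against ELU
# in an otherwise identical architecture.
import torch
import torch.nn as nn

class TAAF(nn.Module):
    """Hypothetical placeholder: the paper's TAAF formula is not given here."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x  # replace with TAAF's actual elementwise formulation

def make_cnn(act: nn.Module, in_channels: int = 3, num_classes: int = 10) -> nn.Sequential:
    # The activation module is injected so TAAF and the ELU baseline
    # share exactly the same architecture, isolating the activation's effect.
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), act,
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), act,
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.LazyLinear(num_classes),
    )

model_taaf = make_cnn(TAAF())   # candidate activation
model_elu = make_cnn(nn.ELU())  # baseline comparison, as on CIFAR-10
```

Under this setup, any accuracy gap between `model_taaf` and `model_elu` trained with identical data and hyperparameters can be attributed to the activation function alone, which is the form of comparison the abstract reports.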