Artificial neural networks loosely mimic the complex web of nearly 100 trillion connections in the human brain. Deep neural networks, and specifically convolutional neural networks, have recently demonstrated breakthrough performance in the pattern recognition community. Studies on network depth, regularization, filters, choice of activation function, and training parameters are numerous. With regard to activation functions, the rectified linear unit is favored over the sigmoid and tanh functions because its gradient does not saturate for large inputs, so the differentiation of larger signals is maintained. This
paper introduces multiple activation functions per single neuron. Libraries have been developed that allow individual neurons within a neural network to select among a multitude of activation functions, with the selection of each function made on a node-by-node basis to minimize classification error. Each node may use more than one activation function if doing so reduces the final classification error. The resulting networks have been trained on several commonly used datasets; they show increases in classification performance and are compared to recent findings in neuroscience research.
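The abstract does not specify the selection mechanism; purely as an illustrative sketch (not the paper's actual method), a per-neuron mixture over a small bank of activation functions could be expressed in PyTorch as below, where the module name, the choice of bank, and the softmax-weighting scheme are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiActivation(nn.Module):
    """Hypothetical per-neuron mixture over a bank of activations.

    Each of the `num_features` neurons learns softmax weights over
    ReLU, sigmoid, and tanh, so a node can favor a single function or
    blend several if that lowers the training loss.
    """
    def __init__(self, num_features):
        super().__init__()
        self.bank = [torch.relu, torch.sigmoid, torch.tanh]
        # One logit per (neuron, activation) pair, learned end to end.
        self.logits = nn.Parameter(torch.zeros(num_features, len(self.bank)))

    def forward(self, x):
        # x: (batch, num_features)
        weights = F.softmax(self.logits, dim=-1)            # (features, n_acts)
        acts = torch.stack([f(x) for f in self.bank], -1)   # (batch, features, n_acts)
        return (acts * weights).sum(dim=-1)                 # (batch, features)

# Usage: drop in where a fixed nonlinearity would normally go.
model = nn.Sequential(nn.Linear(784, 128), MultiActivation(128), nn.Linear(128, 10))
```

A soft, differentiable mixture like this is one simple way to let the selection be driven by the classification loss; a hard per-node selection would instead require a discrete search or pruning step.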