Two-Argument Activation Functions Learn Soft XOR Operations Like Cortical Neurons (Open Access)
- Authors
- Kim, Juhyeon; Orhan, Emin; Yoon, Kijung; Pitkow, Xaq
- Issue Date
- May-2022
- Publisher
- IEEE (Institute of Electrical and Electronics Engineers Inc.)
- Keywords
- Neurons; Computer architecture; Training; Task analysis; Licenses; Government; Transformers; Biological and artificial neurons; activation functions; exclusive-or operation; adversarial robustness
- Citation
- IEEE ACCESS, v.10, pp.58071 - 58080
- Indexed
- SCIE; SCOPUS
- Journal Title
- IEEE ACCESS
- Volume
- 10
- Start Page
- 58071
- End Page
- 58080
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/138366
- DOI
- 10.1109/ACCESS.2022.3178951
- ISSN
- 2169-3536
- Abstract
- Neurons in the brain are complex machines with distinct functional compartments that interact nonlinearly. In contrast, neurons in artificial neural networks abstract away this complexity, typically down to a scalar activation function of a weighted sum of inputs. Here we emulate more biologically realistic neurons by learning canonical activation functions with two input arguments, analogous to basal and apical dendrites. We use a network-in-network architecture where each neuron is modeled as a multilayer perceptron with two inputs and a single output. This inner perceptron is shared by all units in the outer network. Remarkably, the resultant nonlinearities often produce soft XOR functions, consistent with recent experimental observations about interactions between inputs in human cortical neurons. When hyperparameters are optimized, networks with these nonlinearities learn faster and perform better than conventional ReLU nonlinearities with matched parameter counts, and they are more robust to natural and adversarial perturbations.
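
Below is a minimal PyTorch sketch of the network-in-network idea the abstract describes: each unit computes two weighted sums (analogous to basal and apical dendrites) and combines them through one small inner MLP that is shared by every unit in the outer network. The layer widths, hidden size, and tanh hidden nonlinearity are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SharedTwoArgActivation(nn.Module):
    """A small MLP acting as a learned two-argument activation function.
    It maps a pair of pre-activations to one output and is applied
    elementwise. Hidden width/depth are illustrative, not the paper's."""
    def __init__(self, hidden=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a, b):
        # Stack the two arguments along a trailing dim: (..., 2) -> (..., 1)
        x = torch.stack([a, b], dim=-1)
        return self.net(x).squeeze(-1)

class TwoArgLayer(nn.Module):
    """Outer-network layer: two weighted sums per unit ('basal' and
    'apical'), combined by the shared inner perceptron."""
    def __init__(self, in_features, out_features, activation):
        super().__init__()
        self.basal = nn.Linear(in_features, out_features)
        self.apical = nn.Linear(in_features, out_features)
        self.activation = activation  # same object shared across all units

    def forward(self, x):
        return self.activation(self.basal(x), self.apical(x))

# Usage: one shared activation object for the entire outer network.
act = SharedTwoArgActivation()
model = nn.Sequential(
    TwoArgLayer(784, 128, act),
    TwoArgLayer(128, 10, act),
)
out = model(torch.randn(32, 784))  # -> shape (32, 10)
```

Because the inner perceptron is shared, it adds only a handful of parameters to the whole network while letting training discover a canonical two-argument nonlinearity, which the abstract reports often converges to a soft XOR.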
- Appears in Collections
- College of Engineering (Seoul) > Department of Electronic Engineering (Seoul) > 1. Journal Articles