Questions about feedforward neural networks

Short answers, drawn from the article.

Who invented the first working deep learning method for feedforward neural networks in 1965?

Alexey Grigorevich Ivakhnenko and Valentin Lapa published their Group Method of Data Handling algorithm in 1965, which became the first working deep learning method capable of training arbitrarily deep neural networks. They used Kolmogorov-Gabor polynomials as activation functions and pruned unnecessary hidden units using validation sets.

When did Seppo Linnainmaa publish the modern form of backpropagation for feedforward neural networks?

Seppo Linnainmaa published the modern form of backpropagation in his 1970 master's thesis, before G.M. Ostrovski and colleagues republished it in 1971. Paul Werbos applied the algorithm to neural networks in 1982, though his original 1974 PhD thesis did not yet describe the complete method.
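The algorithm itself, reverse-mode accumulation of the chain rule, can be sketched for a tiny network. This is a minimal illustration (not taken from the source): one sigmoid hidden unit, a linear output, and gradient descent on a single training pair; the weight names and learning rate are my own choices.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w1, w2, x, y, lr=0.1):
    # Forward pass: cache intermediate values for the backward pass.
    h = sigmoid(w1 * x)           # hidden activation
    y_hat = w2 * h                # linear output
    loss = 0.5 * (y_hat - y) ** 2

    # Backward pass: apply the chain rule from the loss toward the weights.
    d_yhat = y_hat - y            # dL/d(y_hat)
    d_w2 = d_yhat * h             # dL/dw2
    d_h = d_yhat * w2             # dL/dh
    d_w1 = d_h * h * (1 - h) * x  # dL/dw1 via the sigmoid derivative

    return w1 - lr * d_w1, w2 - lr * d_w2, loss

w1, w2 = 0.5, -0.3
for _ in range(200):
    w1, w2, loss = train_step(w1, w2, x=1.0, y=1.0)
```

After a few hundred steps the loss shrinks toward zero, which is all backpropagation guarantees here: efficient exact gradients, computed in one backward sweep.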

What is the difference between a multilayer perceptron and other feedforward neural network architectures?

A multilayer perceptron consists of fully connected neurons organized into at least three layers; the name is technically a misnomer for modern systems, which use continuous activation functions rather than the original perceptron's threshold units. These nonlinear activations let the network distinguish data that cannot be separated by a simple linear boundary.
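The classic demonstration of why the hidden layer's nonlinearity matters is XOR, which no single linear boundary can separate. A minimal sketch (weights hand-chosen for illustration, not learned):

```python
def relu(z):
    # Rectified linear unit: a common nonlinear activation.
    return max(0.0, z)

def xor_mlp(x1, x2):
    # Hand-chosen weights: h1 responds to OR, h2 to AND;
    # subtracting twice the AND unit reproduces XOR.
    h1 = relu(x1 + x2)       # values 0, 1, 1, 2 over the four inputs
    h2 = relu(x1 + x2 - 1)   # values 0, 0, 0, 1
    return h1 - 2 * h2       # values 0, 1, 1, 0 = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))
```

Dropping the nonlinearity collapses the two layers into one linear map, which provably cannot compute XOR.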

Which activation function gained prominence in recent deep learning developments for feedforward neural networks?

The rectified linear unit, or ReLU, gained prominence in recent deep learning developments as practitioners sought to avoid the vanishing-gradient problems inherent in sigmoidal activations. The smooth softplus function was proposed as a closely related alternative designed to overcome the same numerical issues.
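The numerical contrast can be shown directly. In this sketch (the specific test values are my own), the sigmoid's gradient nearly vanishes for large inputs, while ReLU passes a constant gradient of 1 for any positive input and softplus smoothly approximates ReLU:

```python
import math

def sigmoid_grad(z):
    # Derivative of the sigmoid: at most 0.25, and close to 0 for large |z|,
    # which is the root of the vanishing-gradient problem in deep stacks.
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)

def relu(z):
    # Gradient is exactly 1 for z > 0, so signals do not shrink layer by layer.
    return max(0.0, z)

def softplus(z):
    # Smooth approximation of ReLU: log(1 + e^z).
    return math.log1p(math.exp(z))

print(sigmoid_grad(10.0))          # tiny: the gradient has nearly vanished
print(relu(10.0), softplus(10.0))  # both close to 10
```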