Questions about Recurrent neural network

Short answers, pulled from the story.

When did Santiago Ramón y Cajal observe recurrent semicircles in the cerebellar cortex?

Santiago Ramón y Cajal observed recurrent semicircles in the cerebellar cortex in 1901. These structures, formed by parallel fibers, Purkinje cells, and granule cells, suggested that neural pathways could loop back on themselves.

Who invented Long Short-Term Memory networks to solve the vanishing gradient problem?

Hochreiter and Schmidhuber invented Long Short-Term Memory (LSTM) networks in 1995. This architecture effectively solved the vanishing gradient problem by preventing backpropagated errors from vanishing or exploding as they flow through the network's memory cells.
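
The mechanism is easiest to see in code. Below is a minimal sketch of a single LSTM step in NumPy; the stacked-weight layout and the names `W`, `U`, `b` are illustrative assumptions, not a reference implementation. The key line is the additive cell-state update, which lets error signals flow backward through `c` largely unchanged.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W (4n x m), U (4n x n), and b (4n,) stack the
    weights for the input, forget, and output gates and the candidate
    cell state; this stacked layout is an illustrative assumption."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # all four pre-activations at once
    i = sigmoid(z[0:n])               # input gate
    f = sigmoid(z[n:2*n])             # forget gate
    o = sigmoid(z[2*n:3*n])           # output gate
    g = np.tanh(z[3*n:4*n])           # candidate cell state
    c = f * c_prev + i * g            # additive update: gradients pass through c
    h = o * np.tanh(c)                # hidden state seen by the next layer
    return h, c
```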

What year were Gated Recurrent Units introduced as a computationally efficient alternative to LSTMs?

Gated Recurrent Units were introduced in 2014. They have fewer parameters than LSTMs because they lack an output gate, yet they performed comparably on polyphonic music modeling and speech signal modeling.
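
The parameter savings are visible in a single-step sketch. The NumPy code below uses separate, illustratively named weight matrices per gate; with hidden size n and input size m, a GRU carries three weight blocks of size n(m + n + 1) against the LSTM's four, and the hidden state itself serves as the output, so no output gate appears.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU step; the per-gate weight names are illustrative."""
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state
    h = (1 - z) * h_prev + z * h_tilde                   # interpolate old and new
    return h                                             # h doubles as the output
```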

How does a bidirectional recurrent neural network process input sequences?

Bidirectional recurrent neural networks use two RNNs that process the same input in opposite directions: one reads the sequence from start to end while the other reads it from end to start, and their output sequences are concatenated to give the total output.
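
A minimal sketch of that scheme in NumPy, assuming a plain tanh RNN for each direction (the function and parameter names are illustrative):

```python
import numpy as np

def rnn_pass(xs, W, U, b):
    """Run a simple tanh RNN over a list of input vectors."""
    h = np.zeros(U.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
        hs.append(h)
    return hs

def birnn(xs, fwd_params, bwd_params):
    """One RNN reads the sequence start-to-end, the other end-to-start;
    per-step outputs are concatenated to give the total output."""
    hs_f = rnn_pass(xs, *fwd_params)
    hs_b = rnn_pass(xs[::-1], *bwd_params)[::-1]  # re-align to input order
    return [np.concatenate([hf, hb]) for hf, hb in zip(hs_f, hs_b)]
```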

Which algorithm serves as the standard method for training recurrent neural networks by gradient descent?

Backpropagation through time is the standard method for training recurrent neural networks by gradient descent. The algorithm is a special case of general backpropagation that handles sequence data by unrolling the network across the time steps of the sequence.
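
As a sketch of the idea, the NumPy code below unrolls a vanilla tanh RNN over a sequence and backpropagates through the unrolled steps; the linear readout and squared-error loss are illustrative assumptions rather than part of the standard definition.

```python
import numpy as np

def bptt(xs, ys, W, U, b, V, c):
    """BPTT for h_t = tanh(W x_t + U h_{t-1} + b) with an assumed
    linear readout y_t = V h_t + c and squared-error loss."""
    # forward pass: unroll the network over the whole sequence
    hs = [np.zeros(U.shape[0])]
    for x in xs:
        hs.append(np.tanh(W @ x + U @ hs[-1] + b))
    # backward pass: ordinary backprop through the unrolled graph
    dW, dU, db = np.zeros_like(W), np.zeros_like(U), np.zeros_like(b)
    dV, dc = np.zeros_like(V), np.zeros_like(c)
    dh_next = np.zeros(U.shape[0])
    for t in reversed(range(len(xs))):
        dy = (V @ hs[t + 1] + c) - ys[t]     # d(loss)/d(prediction)
        dV += np.outer(dy, hs[t + 1]); dc += dy
        dh = V.T @ dy + dh_next              # from the readout and later steps
        dz = (1 - hs[t + 1] ** 2) * dh       # through the tanh nonlinearity
        dW += np.outer(dz, xs[t]); dU += np.outer(dz, hs[t]); db += dz
        dh_next = U.T @ dz                   # carries error to the previous step
    return dW, dU, db, dV, dc
```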