
Neural networks are known as ‘universal function approximators’ (Hornik et al., 1989). If they are indeed universal, it is fair to ask why we see so many different architectures in modern deep learning. The reason is that the problems we wish to solve vary in structure, and there are many inductive biases that, when encoded in the network architecture, help neural networks achieve impressive performance. One clear example is the convolutional neural network, which encodes translation invariance. In this post, we will consider recurrent neural networks, specifically a recurrent neural network imbued with…
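To make the inductive-bias point concrete, here is a minimal sketch (assuming a simple 1-D circular convolution written in NumPy, not any particular deep learning library) showing that convolution commutes with translation: shifting the input and then convolving gives the same result as convolving and then shifting the output, which is the structural property the convolutional architecture builds in.

```python
import numpy as np

def circular_conv(signal, kernel):
    """Circular cross-correlation: out[i] = sum_j signal[(i + j) % n] * kernel[j]."""
    n = len(signal)
    return np.array([
        sum(signal[(i + j) % n] * k_j for j, k_j in enumerate(kernel))
        for i in range(n)
    ])

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # a random 1-D input signal
k = rng.normal(size=3)    # a random convolutional kernel
shift = 5

# Translating the input then convolving ...
lhs = circular_conv(np.roll(x, shift), k)
# ... equals convolving then translating the output.
rhs = np.roll(circular_conv(x, k), shift)

print(np.allclose(lhs, rhs))  # True: the convolution respects translations
```

The same reasoning motivates other architectural choices: the goal is to bake known structure of the problem into the network rather than force it to rediscover that structure from data.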
