The advent of deep learning

Deep learning as we know it today is a machine learning algorithm that mimics the functioning of the human brain. Its actual foundation was laid in the mid-1960s, when Alexey Ivakhnenko and his associate, Valentin Grigorʹevich Lapa, used multiple layers of nonlinear features with polynomial activation functions, an approach similar to today's deep learning techniques. This was followed by the use of manually weighted neural networks and the application of back-propagation of errors to train deep models that could yield useful distributed representations. Convolutional Neural Networks (CNNs) were introduced in the 1990s, specifically for image recognition problems.

For further information, you can refer to the research paper cited in Appendix A.

At that time, the potential of deep learning remained unrealized and unexplored due to:

- The scarcity of large training datasets
- The limited computational power available
- The inefficiency of the training algorithms of the day

But as the age of big data, GPUs, and efficient algorithms arrived in the mid-2000s, these issues started to get resolved, and over the following decade the term deep learning came to be used almost as frequently as AI, thanks to its rapid adoption, which continues to this day.

The fundamental tenet of deep learning is to mimic the neural network of the human brain. This is done to create various neural networks that can be used to build self-learning machines. Such a network basically consists of inputs and outputs, with hidden layers in between. These concepts are the foundation of current approaches in deep learning. To get a holistic view, visit the following URL, which provides a pictorial representation of a neural network: https://www.needpix.com/photo/1315699/neural-network-networks-neuron.

Every artificial neural network model that we see in the image (at the preceding URL) is based on the artificial neuron, which is derived from its biological counterpart.
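The inputs-hidden-outputs structure described above can be sketched in a few lines of code. The following is a minimal illustration, not a production implementation: the layer sizes, random weights, and choice of a sigmoid activation are assumptions made purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Squash each value into the range (0, 1), like a neuron's activation."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 3 input features, 4 hidden units, 1 output.
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
b1 = np.zeros(4)               # hidden-layer biases
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)               # output bias

def forward(x):
    # The hidden layer sits between the inputs and the output.
    hidden = sigmoid(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

x = np.array([0.5, -0.2, 0.1])  # one example with 3 input features
print(forward(x).shape)          # (1,)
```

In a real network, the weights would not stay random; training adjusts them (for example, via back-propagation) so that the outputs match the desired targets.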

If we observe closely, we can infer that machine learning has always been a subset of AI, whereas deep learning is a subset of machine learning.

We can conclude that recent advancements in AI have eliminated the need to manually create features before running machine learning algorithms. Through the latest machine learning and deep learning techniques, these advancements have collectively evolved into representation learning, which allows a system to automatically discover the representations needed for feature detection or classification from raw data, with training and inference combined.

In our next section, we will learn about different machine learning frameworks in Python for GPUs.