Activation function

An activation function is typically introduced into a neural network in order to induce non-linearity. Without it, every layer computes a linear transformation of its input, and a composition of linear transformations is itself just another linear transformation, so the network can never learn a non-linear relationship. But why do we need non-linearity in the first place? Real-world relationships between inputs and outputs are rarely linear, so a model that treats every relationship as linear cannot do justice to the actual relationship, and its output will not generalize well.
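To make this concrete, here is a minimal NumPy sketch (the layer sizes and random weights are illustrative) showing that two stacked linear layers with no activation in between collapse into a single linear layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked linear layers with no activation in between.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)

# Forward pass through both layers.
two_layers = W2 @ (W1 @ x + b1) + b2

# An equivalent single linear layer, computed in closed form.
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

print(np.allclose(two_layers, one_layer))  # True
```

No matter how many linear layers we stack, the same collapse happens, so depth alone buys nothing without non-linearity in between.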

The main purpose of an activation function is to map a node's weighted input signal to its output signal. If we do away with it, every node produces a purely linear result. A linear function is a polynomial of the first degree; it is easy to work with, but it cannot capture the complex mappings between features that are very much required in the case of unstructured data such as images, audio, and text.
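Below is a minimal sketch of three widely used activation functions, assuming NumPy; the input values are illustrative. Each one takes the linear pre-activation and bends it into a non-linear output:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Zeroes out negative inputs, passes positive inputs through.
    return np.maximum(0.0, z)

# A linear pre-activation followed by a non-linear activation.
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(z))   # approx. [0.119 0.378 0.5   0.622 0.881]
print(np.tanh(z))   # squashes inputs into (-1, 1)
print(relu(z))      # [0.  0.  0.  0.5 2. ]
```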

Non-linear functions are those of degree higher than one (more generally, any function whose graph is not a straight line). We need a neural network model to learn and represent almost any arbitrarily complex function that maps inputs to outputs. Neural networks are often called universal function approximators, meaning that, given enough capacity, they can approximate virtually any function. Hence, the activation function is an integral part of a neural network: it is what makes learning complex functions possible.
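As a small taste of this, here is a hand-wired sketch (not a trained network; the weights are chosen by hand for illustration) showing that a single hidden ReLU layer can represent the non-linear function |x| exactly, via the identity |x| = relu(x) + relu(-x), something no purely linear model can do:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# One-hidden-layer ReLU network hand-wired to compute |x|.
W1 = np.array([[1.0], [-1.0]])  # hidden layer: two units with weights +1 and -1
w2 = np.array([1.0, 1.0])       # output layer: sum the two hidden units

x = np.linspace(-3, 3, 7).reshape(-1, 1)
hidden = relu(x @ W1.T)         # shape (7, 2): [relu(x), relu(-x)]
y = hidden @ w2                 # shape (7,): relu(x) + relu(-x)

print(np.allclose(y, np.abs(x).ravel()))  # True
```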