- Three main differences between machine learning and deep learning are as follows:
- Deep learning algorithms need high-end infrastructure to train properly: they rely heavily on powerful machines, whereas traditional machine learning techniques can work on low-end machines.
- When there is little domain understanding for feature introspection and engineering, deep learning techniques outperform traditional ones because you have to worry less about hand-crafting features.
- While both machine learning and deep learning can handle massive datasets, machine learning methods make much more sense when dealing with small ones. A rule of thumb is that deep learning tends to outperform other techniques when the dataset is large, while traditional machine learning algorithms are preferable when it is small.
- In the context of computer vision, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton published ImageNet Classification with Deep Convolutional Neural Networks (2012). This publication is also known as AlexNet, which is the name of the convolutional neural network the authors designed, and it is considered one of the most influential papers in computer vision. For this reason, 2012 is often considered the year of the deep learning explosion.
- This function creates a four-dimensional blob from the image. It indicates that we want to run the model on BGR images resized to 300x300, applying a mean subtraction of (104, 117, 123) to the blue, green, and red channels, respectively.
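A minimal sketch of such a call, assuming it refers to OpenCV's cv2.dnn.blobFromImage (the image file name is just a placeholder):

```python
import cv2

# Load an image (the file name is only an example)
image = cv2.imread("test_image.png")

# Create a 4D blob: resize to 300x300 and subtract the mean
# (104, 117, 123) from the B, G, and R channels, respectively
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(300, 300),
                             mean=(104, 117, 123), swapRB=False, crop=False)

print(blob.shape)  # (1, 3, 300, 300): N x C x H x W
```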
- The first line feeds the input blob to the network, while the second one performs inference; once the inference is done, we get the predictions.
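A hedged sketch of those two lines using OpenCV's DNN module (the model and prototxt file names are placeholders for illustration):

```python
import cv2

# Placeholder file names for a Caffe-style detector
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")

image = cv2.imread("test_image.png")
blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104, 117, 123))

net.setInput(blob)     # first line: feed the input blob to the network
preds = net.forward()  # second line: run inference and get the predictions
```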
- A placeholder is simply a variable that we will assign data to at a later date. When training/testing an algorithm, placeholders are commonly used for feeding training/testing data into the computation graph.
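A minimal sketch of this pattern, assuming the TensorFlow 1.x API (available as tf.compat.v1 in TensorFlow 2):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Placeholder: a variable whose data we will supply later via feed_dict
x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
y = x * 2.0

with tf.Session() as sess:
    # Feed data into the computation graph at run time
    result = sess.run(y, feed_dict={x: [[1.0], [2.0], [3.0]]})
    print(result)
```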
- When saving the final model (for instance, saver.save(sess, './linear_regression')), four files are created (see the sketch after this list):
- .meta files: Contain the TensorFlow graph
- .data files: Contain the values of the weights, biases, gradients, and all the other variables that were saved
- .index files: Identify the checkpoint
- checkpoint file: Keeps a record of the latest checkpoint files that were saved
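A hedged sketch of saving a checkpoint with the TensorFlow 1.x Saver API (the variables here are illustrative stand-ins for a linear regression model):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# A couple of trainable variables, as in a simple linear regression
W = tf.Variable(0.0, name="W")
b = tf.Variable(0.0, name="b")

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes linear_regression.meta, linear_regression.index,
    # linear_regression.data-00000-of-00001, and updates the
    # 'checkpoint' file in the current directory
    saver.save(sess, './linear_regression')
```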
- One-hot encoding means that labels have been converted from a single number into a vector whose length is equal to the number of possible classes. This way, all elements of the vector are set to zero except the i-th element, which is set to 1, corresponding to class i.
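A short sketch of one-hot encoding, here using tf.keras.utils.to_categorical (plain NumPy would work just as well); the labels and class count are arbitrary examples:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

labels = np.array([0, 2, 1, 2])           # class indices
one_hot = to_categorical(labels, num_classes=3)
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```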
- When using Keras, the simplest type of model is the sequential model, which can be seen as a linear stack of layers and is used in this example to create the model. Additionally, for more complex architectures, the Keras functional API, which allows you to build arbitrary graphs of layers, can be used.
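A minimal sketch of both styles; the layer sizes, activations, and 10-class output are arbitrary choices for illustration:

```python
from tensorflow.keras import Sequential, Model, Input
from tensorflow.keras.layers import Dense

# Sequential model: a linear stack of layers
seq_model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax'),
])

# Functional API: the same architecture built as a graph of layers
inputs = Input(shape=(784,))
x = Dense(64, activation='relu')(inputs)
outputs = Dense(10, activation='softmax')(x)
func_model = Model(inputs=inputs, outputs=outputs)
```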
- This method can be used for training the model for a fixed number of epochs (iterations on a dataset).
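Assuming this refers to Keras's fit() method, a short sketch of training for a fixed number of epochs (the data here is random, purely to make the example runnable):

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Random data just for illustration
x_train = np.random.rand(100, 784)
y_train = np.random.randint(0, 10, size=(100,))

model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train for a fixed number of epochs (iterations over the whole dataset)
model.fit(x_train, y_train, epochs=5, batch_size=32)
```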