Let's work through a use case that will help us understand the network. It is a time series problem: we have the Google stock price dataset, split into two CSV files, one for training and one for testing. We will use it to forecast Google's stock price:
- Let's start by importing the libraries:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
- Next, import the training set:
dataset_train = pd.read_csv('Google_Stock_Price_Train.csv')
training_set = dataset_train.iloc[:, 1:2].values  # column 1 holds the 'Open' price
- Next, apply feature scaling so that all values lie between 0 and 1:
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)
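To see what the scaler is doing: MinMaxScaler rescales each value to (x - min) / (max - min), so the training prices end up between 0 and 1. Here is a tiny standalone illustration; the numbers are made up and are not from the dataset:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

demo = np.array([[100.0], [150.0], [200.0]])                      # toy prices, for illustration only
demo_scaled = MinMaxScaler(feature_range = (0, 1)).fit_transform(demo)
print(demo_scaled.ravel())                                        # [0.  0.5 1. ]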
- Let's create a data structure with 60 time steps and 1 output:
X_train = []
y_train = []
for i in range(60, 1258):
    X_train.append(training_set_scaled[i-60:i, 0])  # the previous 60 scaled prices
    y_train.append(training_set_scaled[i, 0])       # the price at time step i
X_train, y_train = np.array(X_train), np.array(y_train)
- Next, reshape the data into the (samples, time steps, features) shape that the LSTM layer expects:
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
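As a quick sanity check, you can print the resulting shapes. Assuming the training CSV has 1,258 rows, as the loop above implies, you should see 1,198 samples of 60 time steps and 1 feature:
print(X_train.shape)  # expected: (1198, 60, 1)
print(y_train.shape)  # expected: (1198,)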
- Now, import the Keras libraries and packages:
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
- We will initialize the RNN as a sequential model, stored in a variable called regressor:
regressor = Sequential()
- Now, add the first LSTM layer and some dropout regularization; return_sequences = True is required because another LSTM layer follows, and input_shape is (time steps, features):
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
- Now, add the second LSTM layer and some dropout regularization:
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
- Add the third LSTM layer and some dropout regularization:
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
- Add the fourth and final LSTM layer and some dropout regularization; here return_sequences is left at its default of False, because no further LSTM layer follows:
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
- Finally, add the output layer:
regressor.add(Dense(units = 1))
- Next, we will compile the RNN with the Adam optimizer and a mean squared error loss, which suits a regression problem:
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
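At this point it can be useful to inspect the stacked architecture with Keras's built-in summary:
regressor.summary()  # prints the four LSTM layers, the Dropout layers, and the Dense output layer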
- We will fit the RNN to the training set:
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
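If you want to watch the loss decrease over the 100 epochs, you can write the same fit call so that it captures the returned History object and then plot it; this is a minimal sketch, and the variable name history is our own:
history = regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
plt.plot(history.history['loss'])
plt.title('Training loss')
plt.xlabel('Epoch')
plt.ylabel('MSE loss')
plt.show()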
- Now, load the real stock price of 2017 from the test set:
dataset_test = pd.read_csv('Google_Stock_Price_Test.csv')
real_stock_price = dataset_test.iloc[:, 1:2].values
- Next, obtain the predicted stock price of 2017. Each prediction needs the previous 60 days of prices, so we concatenate the training and test 'Open' columns, take the inputs from 60 days before the test period onwards, and scale them with the same scaler:
dataset_total = pd.concat((dataset_train['Open'], dataset_test['Open']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 80):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
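Again, a quick shape check can catch mistakes early; assuming the test CSV has 20 rows, as the range(60, 80) loop implies, you should see:
print(X_test.shape)  # expected: (20, 60, 1)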
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
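Before plotting, you can optionally quantify the gap between the real and predicted prices; this short sketch uses scikit-learn's mean_squared_error and is our addition, not part of the original recipe:
from sklearn.metrics import mean_squared_error

rmse = np.sqrt(mean_squared_error(real_stock_price, predicted_stock_price))
print('Test RMSE:', rmse)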
- Finally, we will visualize the results as shown:
plt.plot(real_stock_price, color = 'red', label = 'Real Google Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Google Stock Price')
plt.title('Google Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Google Stock Price')
plt.legend()
plt.show()