Stacked bidirectional model

Bidirectional models are good at picking up information from future states that can affect the current state. Stacked bidirectional models allow us to stack multiple bidirectional LSTM/GRU layers on top of one another, in much the same way as we stack multiple convolutional layers in computer vision tasks. The code for our stacked bidirectional LSTM model is in Chapter7/classify_keras7.R. The model uses a maximum sequence length of 250 and an embedding layer of size 32, and it was trained for 10 epochs:

library(keras)

# Load the Reuters word index and set the tokenization parameters
word_index <- dataset_reuters_word_index()
max_features <- length(word_index)  # vocabulary size
maxlen <- 250                       # maximum sequence length; longer sequences are truncated
skip_top <- 0                       # do not skip any of the most frequent words

..................

model <- keras_model_sequential() %>%
  layer_embedding(input_dim = max_features, output_dim = 32, input_length = maxlen) %>%
  layer_dropout(rate = 0.25) %>%
  # the first bidirectional LSTM returns the full sequence so that
  # a second bidirectional LSTM can be stacked on top of it
  bidirectional(layer_lstm(units = 32, dropout = 0.2, return_sequences = TRUE)) %>%
  bidirectional(layer_lstm(units = 32, dropout = 0.2)) %>%
  layer_dense(units = 1, activation = "sigmoid")

..................

history <- model %>% fit(
  x_train, y_train,
  epochs = 10,
  batch_size = 32,
  validation_split = 0.2
)

Here is the output from training the model:

Train on 8982 samples, validate on 2246 samples
Epoch 1/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.2854 - acc: 0.9006 - val_loss: 0.1945 - val_acc: 0.9372
Epoch 2/10
8982/8982 [==============================] - 66s 7ms/step - loss: 0.1795 - acc: 0.9511 - val_loss: 0.1791 - val_acc: 0.9484
Epoch 3/10
8982/8982 [==============================] - 69s 8ms/step - loss: 0.1586 - acc: 0.9557 - val_loss: 0.1756 - val_acc: 0.9492
Epoch 4/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.1467 - acc: 0.9607 - val_loss: 0.1664 - val_acc: 0.9559
Epoch 5/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.1394 - acc: 0.9614 - val_loss: 0.1775 - val_acc: 0.9533
Epoch 6/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.1347 - acc: 0.9636 - val_loss: 0.1667 - val_acc: 0.9519
Epoch 7/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.1344 - acc: 0.9618 - val_loss: 0.2101 - val_acc: 0.9332
Epoch 8/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.1306 - acc: 0.9647 - val_loss: 0.1893 - val_acc: 0.9479
Epoch 9/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.1286 - acc: 0.9646 - val_loss: 0.1663 - val_acc: 0.9550
Epoch 10/10
8982/8982 [==============================] - 70s 8ms/step - loss: 0.1254 - acc: 0.9669 - val_loss: 0.1687 - val_acc: 0.9492

The best validation accuracy came after epoch 4, at 95.59%, which is slightly worse than the 95.77% achieved by the single bidirectional model.
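
Rather than reading the best epoch off the training log, we can also pull it out of the history object returned by fit. A minimal sketch, using the val_acc metric name that appears in the log above:

# Find the epoch with the highest validation accuracy
best_epoch <- which.max(history$metrics$val_acc)
best_val_acc <- max(history$metrics$val_acc)
cat(sprintf("Best epoch: %d, validation accuracy: %.4f\n", best_epoch, best_val_acc))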