Hands On: Keeping It Simple

In this chapter, we applied L1 regularization to reduce overfitting on our four-layer neural network. Now it's up to you to try a few other regularization techniques. How does early stopping work on this network? What about removing a few nodes from each layer?
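If you want a starting point for early stopping, here is a minimal sketch. The chapter's training code isn't reproduced here, so `initialize_weights()`, `train_one_epoch()`, and `accuracy()` are hypothetical stand-ins: swap in the actual functions from your own code.

```python
# Minimal early-stopping sketch. initialize_weights(), train_one_epoch(),
# and accuracy() are hypothetical stand-ins for this chapter's code.
MAX_EPOCHS = 10000   # an upper bound; early stopping usually kicks in first
PATIENCE = 10        # epochs to wait for an improvement before giving up

weights = initialize_weights()
best_accuracy = 0.0
best_weights = weights
epochs_without_improvement = 0

for epoch in range(MAX_EPOCHS):
    weights = train_one_epoch(weights, data.X_train, data.Y_train)
    current = accuracy(weights, data.X_validation, data.Y_validation)
    if current > best_accuracy:
        best_accuracy = current
        best_weights = [w.copy() for w in weights]   # snapshot the best model
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= PATIENCE:
            break   # validation accuracy has stalled, so stop training

weights = best_weights   # keep the weights from the best epoch
```

The key idea is that training stops not after a fixed number of epochs, but as soon as the validation accuracy stops improving, before the network has time to overfit the training set.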

Try out those techniques, keeping an eye on the validation set accuracy. Maybe you'll get a better result than we did in this chapter. Don't worry if you don't! The point of this exercise is experimenting with regularization, not necessarily beating that 92% score.

Once you've optimized the network's accuracy on the validation set, there's one last thing to take care of. Do you remember what we talked about in A Testing Conundrum? By tuning the network's performance on the validation set, we run the risk of overfitting the validation set itself. There's only one way to find out whether we did: retrieve the Echidna test set, which we've been ignoring until now, and run a final test.

Edit the network’s code to replace the validation set (data.X_validation and data.Y_validation) with the test set (data.X_test and data.Y_test). Check out the network’s accuracy on the test set. Is it as good as the best accuracy you got on the validation set?
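Concretely, that final check could look like the sketch below. The `accuracy()` and `best_weights` names are the same hypothetical stand-ins used in the early-stopping sketch above.

```python
# Final check: evaluate the tuned network once on the held-out test set.
# accuracy() and best_weights are the hypothetical names from the sketch above.
validation_accuracy = accuracy(best_weights, data.X_validation, data.Y_validation)
test_accuracy = accuracy(best_weights, data.X_test, data.Y_test)
print("Validation accuracy: %.2f%%" % (validation_accuracy * 100))
print("Test accuracy:       %.2f%%" % (test_accuracy * 100))
```

If the test accuracy comes out noticeably lower than the validation accuracy, that gap is exactly the validation-set overfitting we just warned about.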