Summary

In this chapter, we wrote code to conduct reinforcement learning using deep Q-learning. We noted that while Q-learning is the simpler approach, it requires a limited and known environment; deep Q-learning allows us to solve problems at a much larger scale. We began by defining our agent as a class and instantiating an object with the attributes defined in that class to solve the reinforcement learning challenge. We then created a custom environment using functions that defined the boundaries, the range of moves the agent could take, and the target objective. Because deep Q-learning uses a neural network to select actions rather than relying on the Q matrix, as in Q-learning, we then added a neural network to our agent class.
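To make those pieces concrete, here is a minimal sketch of what such an agent class might look like, assuming the R6 and keras packages are available; the layer sizes, attribute names, and hyperparameter values are illustrative rather than the chapter's exact code:

```r
library(R6)
library(keras)

Agent <- R6Class("Agent",
  public = list(
    state_size  = NULL,
    action_size = NULL,
    epsilon = 1.0,   # exploration rate; would normally decay as the agent learns
    gamma   = 0.95,  # discount factor (used by the training step, omitted here)
    model   = NULL,
    initialize = function(state_size, action_size) {
      self$state_size  <- state_size
      self$action_size <- action_size
      self$model <- self$build_model()
    },
    build_model = function() {
      # the neural network that replaces the Q matrix: it maps a state
      # to one predicted Q-value per possible action
      model <- keras_model_sequential() %>%
        layer_dense(units = 24, activation = "relu",
                    input_shape = self$state_size) %>%
        layer_dense(units = 24, activation = "relu") %>%
        layer_dense(units = self$action_size, activation = "linear")
      model %>% compile(loss = "mse", optimizer = optimizer_adam())
      model
    },
    act = function(state) {
      # epsilon-greedy selection: explore with probability epsilon,
      # otherwise take the action with the highest predicted Q-value
      if (runif(1) <= self$epsilon) {
        return(sample(seq_len(self$action_size), 1))
      }
      q_values <- self$model %>% predict(state)
      which.max(q_values[1, ])
    }
  )
)
```

Keeping the network inside the class keeps action selection and the learned Q-value function together, which is what lets a single object both act in the environment and learn from it.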

Lastly, we put it all together by placing our agent object in our custom environment and letting it take actions until it solved the problem. We also discussed some choices we could make to improve the agent's performance. With this framework, you are ready to apply reinforcement learning to any number of environments using any number of possible agents. The process will remain largely consistent; what changes is how the agent is programmed to act and learn, and what rules govern the environment.
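As a hedged sketch of that final step, the following pairs the agent class above with a toy custom environment and an episode loop. The one-dimensional grid world, its reward values, and the function names env_reset() and env_step() are assumptions for illustration only, and experience replay and network training are omitted for brevity:

```r
# Illustrative one-dimensional grid world: positions 1 to 5, target at 5
env_reset <- function() 1L
env_step <- function(position, action) {
  # action 1 moves left, action 2 moves right, bounded to the grid
  new_pos <- max(1L, min(5L, position + if (action == 1L) -1L else 1L))
  list(state  = new_pos,
       reward = if (new_pos == 5L) 1 else -0.1,  # reward reaching the target
       done   = new_pos == 5L)
}

agent <- Agent$new(state_size = 1, action_size = 2)

for (episode in 1:50) {
  position <- env_reset()
  done <- FALSE
  while (!done) {
    state  <- matrix(position, nrow = 1)  # shape the state for the network
    action <- agent$act(state)
    result <- env_step(position, action)
    # a full implementation would store this transition and replay a batch
    # of past transitions to train the network; omitted for brevity
    position <- result$state
    done     <- result$done
  }
}
```

Swapping in a different problem means rewriting env_reset() and env_step() and resizing the network's input and output layers; the surrounding loop stays essentially the same.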

This completes Hands-On Deep Learning with R. Throughout this book, you have learned a wide variety of deep learning methods and applied them to a diverse set of tasks. The book was written with a bias toward action: the goal was to provide concise code that addresses practical projects. I hope that what you have learned here has prepared you well to begin solving real-world challenges using deep learning.