What You Just Learned

This chapter was all about multiple linear regression. We extended our program to datasets with more than one input variable, using one weight per input variable. We also got rid of the explicit bias, turning it into just another weight. Our learning program is now powerful enough to tackle real-world problems, although it doesn’t look any more complicated than it did before.
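Here is a minimal sketch of that idea in NumPy. The numbers in `X` and `w` below are made up for illustration; the point is the shape of the computation, not the data:

```python
import numpy as np

def predict(X, w):
    # One weighted sum per example: each row of X times the column of weights.
    return np.matmul(X, w)

# Hypothetical data: two examples, two input variables each.
X = np.array([[13.0, 26.0],
              [28.0, 14.0]])

# Fold the bias into the weights by prepending a column of ones to X,
# so the first weight acts as the bias.
X = np.insert(X, 0, 1, axis=1)

w = np.array([[2.0],    # the bias, now just another weight
              [0.5],
              [1.5]])

print(predict(X, w))    # prints [[47.5] [37. ]]
```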

Along the way, you also learned about matrix multiplication and matrix transpose. These pieces of math sit right at the core of ML. Now that they’re in your toolbox, they’ll serve you well for years to come.
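If you want a quick refresher, here is what those two operations look like in NumPy, with a couple of arbitrary example matrices:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])     # shape (2, 3)
B = np.array([[10, 20],
              [30, 40],
              [50, 60]])      # shape (3, 2)

# Matrix multiplication: (2, 3) times (3, 2) gives (2, 2).
print(np.matmul(A, B))        # prints [[220 280] [490 640]]

# Transpose swaps rows and columns: (2, 3) becomes (3, 2).
print(A.T)                    # prints [[1 4] [2 5] [3 6]]
```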

Finally, in this chapter we started delving deeper into NumPy. I personally have a love–hate relationship with NumPy: I love its power, but I keep getting confused by its interface. Say what you want about it, NumPy is a must-have ML tool, so it’s important to get familiar with it. We’ll keep using it throughout this book.

It’s time for a plot twist: everything that we talked about in these first few chapters was just groundwork for something different and cooler. In the next chapter, we’ll abandon the beaten track of linear regression and set out on the road less traveled… the one that leads to computer vision.