Making Smarter Apps with CoreML

Over the past few years, machine learning has grown enormously in popularity. In recent iOS releases, Apple has described how it used advanced machine learning to improve Siri, provide smart suggestions for the iOS keyboard, enhance Spotlight, and more. While this is all great, machine learning has never been easy to implement on a mobile device. And even if you do succeed in implementing it, chances are that your implementation is not the most performant or energy-efficient one possible.

Apple aims to solve this problem in iOS 11 with the CoreML framework. CoreML is Apple's answer to the problems it ran into while implementing machine learning on iOS itself. As a result, CoreML should offer fast, energy-efficient implementations for working with complex machine learning models through an interface that is as simple and flexible as possible.

In this chapter, you will learn what machine learning is, how it works, and how you can use trained models in your own apps. We'll also take a brief look at the new Vision framework, which works beautifully with CoreML to perform complex analysis on images.

By the end of this chapter, you will have updated the augmented reality application from the preceding chapters with automated content analysis for the images in your user's art gallery.
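To give a sense of where this chapter is headed, here is a minimal sketch of how Vision and CoreML cooperate to classify an image. It assumes a hypothetical `ArtClassifier` class that Xcode would generate from an `.mlmodel` file you add to your project; the Vision types (`VNCoreMLModel`, `VNCoreMLRequest`, `VNImageRequestHandler`) are part of Apple's Vision framework.

```swift
import CoreML
import UIKit
import Vision

// ArtClassifier is a hypothetical class generated by Xcode from a .mlmodel file.
func classify(image: UIImage) {
    guard let cgImage = image.cgImage,
        let visionModel = try? VNCoreMLModel(for: ArtClassifier().model)
        else { return }

    // The request wraps the CoreML model; its completion handler
    // receives classification observations ranked by confidence.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
            let best = results.first
            else { return }
        print("Best match: \(best.identifier) (\(best.confidence))")
    }

    // The handler performs the request on a single image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The details of preparing a model, generating its Swift interface, and wiring this into the gallery app are covered step by step later in the chapter.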