Summary

This chapter wraps up the work we'll do on the Augmented Reality app. By now you have built an app that implements some of iOS 11's most exciting new technologies. In this chapter we focused on CoreML, Apple's machine learning framework. You saw that adding a machine learning model to your app is as simple as dragging it into Xcode and adding it to your target. You also learned how to obtain models and where to look in order to convert existing models to the CoreML format. Training a machine learning model is hard, so it's great that Apple has made it this easy to put one to work by embedding trained models in your apps.
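To recap how little code this takes, here is a minimal sketch of using an embedded model. It assumes a hypothetical image classifier called SceneClassifier.mlmodel was added to the app target; Xcode generates the SceneClassifier class for it, and the image input and classLabel output used here are assumptions that depend entirely on the model's own metadata.

import CoreML
import CoreVideo

// Hypothetical model: Xcode generates this class when
// SceneClassifier.mlmodel is added to the target.
let model = SceneClassifier()

func classify(_ pixelBuffer: CVPixelBuffer) {
    do {
        // The input name ("image") and output name ("classLabel")
        // are defined by the model's metadata.
        let prediction = try model.prediction(image: pixelBuffer)
        print("Predicted label: \(prediction.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}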

In addition to CoreML, we also looked at the Vision framework. Vision combines the power of CoreML with smart image analysis to perform a wide range of tasks on images. Convenient requests such as facial landmark detection, text detection, and barcode detection are available out of the box, without adding any machine learning models to your app. And if you need custom analysis, you can always add your own CoreML model, just like we did for the Augmented Reality gallery app.
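As a reminder of how little setup Vision's built-in requests need, here is a minimal sketch of barcode detection. No CoreML model is involved; how you obtain the CGImage is up to you.

import Vision

// Detect barcodes in a CGImage using one of Vision's built-in requests.
func detectBarcodes(in image: CGImage) {
    let request = VNDetectBarcodesRequest { request, error in
        guard let barcodes = request.results as? [VNBarcodeObservation] else { return }
        for barcode in barcodes {
            print("Found barcode: \(barcode.payloadStringValue ?? "no payload")")
        }
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}

The next chapter introduces you to entirely different aspects of iOS that are partially backed by machine learning: Spotlight and Universal Links.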