Summary

In this chapter, you have seen how to make use of the machine learning capabilities that iOS provides. Adding a machine learning model to your app is straightforward: you drag it into Xcode and add it to your app target. You also learned how to obtain models, and where to look to convert existing models to CoreML models. Creating a machine learning model from scratch is hard, so it's convenient that Apple makes it so easy to implement machine learning by embedding trained models in your apps.
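As a quick refresher, the sketch below shows roughly what using an embedded model looks like in code. The class name `SentimentClassifier`, its `text` input, and its `label` output are hypothetical; Xcode generates a Swift class whose name and properties match your particular .mlmodel file.

```swift
import CoreML

// A minimal sketch, assuming a hypothetical generated model class
// named SentimentClassifier with a "text" input and a "label" output.
func classify(_ text: String) {
    do {
        // The generated class is initialized with a standard MLModelConfiguration.
        let model = try SentimentClassifier(configuration: MLModelConfiguration())
        // prediction(...) runs the model synchronously and returns a typed output.
        let prediction = try model.prediction(text: text)
        print("Predicted label: \(prediction.label)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```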

In addition to CoreML, you also learned about the Vision and Natural Language frameworks. Vision combines the power of CoreML with smart image analysis to create a compelling framework that can perform a wide range of work on images. Convenient requests, such as facial landmark detection, text detection, and more, are available out of the box without adding any machine learning models to your app. If you find that you need more power in the form of custom models, you now know how to use CreateML to train, export, and use your own custom CoreML models. CreateML makes training a model simple, but the quality of that model depends heavily on the quality of your training data.
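To illustrate how little code one of Vision's built-in requests needs, here is a small sketch of facial landmark detection on a `CGImage`. The function name is an assumption for this example; the Vision types and calls are the framework's own.

```swift
import CoreGraphics
import Vision

// A minimal sketch of a built-in Vision request: facial landmark
// detection, with no custom CoreML model involved.
func detectFaceLandmarks(in image: CGImage) {
    // The completion handler receives the finished request and any error.
    let request = VNDetectFaceLandmarksRequest { request, error in
        guard let observations = request.results as? [VNFaceObservation] else {
            print("No faces found: \(String(describing: error))")
            return
        }
        for face in observations {
            // boundingBox is in normalized coordinates; landmarks holds
            // points for features such as the eyes, nose, and mouth.
            print("Face at \(face.boundingBox), has landmarks: \(face.landmarks != nil)")
        }
    }

    // The request handler performs one or more requests on a single image.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```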

In the next chapter, you will learn how you can help your users live healthier lives with workouts and activity data.