© Wallace Wang 2018
Wallace Wang, Beginning ARKit for iPhone and iPad, https://doi.org/10.1007/978-1-4842-4102-8_16

16. Image Tracking and Object Detection

Wallace Wang (San Diego, CA, USA)

In previous chapters, we learned about image detection. That’s where an app can detect an image, stored in its AR Resources folder, and then respond when the iOS device camera recognizes that image in the real world. Image detection works with two-dimensional items such as pictures and photographs, but ARKit 2.0 and iOS 12 offer two additional features that expand on image detection: image tracking and object detection.

Right now, image detection works by linking text, a virtual object, or a video to the location of a detected image. However, if that detected image moves, the displayed text, virtual object, or video won’t move with it. That’s why ARKit 2.0 offers image tracking, which lets text, a virtual object, or a video follow the detected image as it moves.

While image detection might be impressive, it’s limited to two-dimensional items such as pictures or photographs. To overcome this limitation, ARKit 2.0 offers object detection. First, you scan a three-dimensional object. Then you store this three-dimensional scan in your augmented reality app.

When the camera later spots the same item, the augmented reality app can recognize the three-dimensional object and respond by displaying text, virtual objects, or video, just like image detection. Where image detection works with two-dimensional, flat items, object detection works with three-dimensional objects, allowing the user to get information about that object regardless of the iOS camera’s position or angle.

For this chapter, let’s create a new Xcode project by following these steps:
  1. Start Xcode. (Make sure you’re using Xcode 10 or greater.)

  2. Choose File ➤ New ➤ Project. Xcode asks you to choose a template.

  3. Click the iOS category.

  4. Click the Single View App icon and click the Next button. Xcode asks for a product name, organization name, organization identifier, and content technology.

  5. Click in the Product Name text field and type a descriptive name for your project, such as ImageTracking. (The exact name does not matter.)

  6. Click the Next button. Xcode asks where you want to store your project.

  7. Choose a folder and click the Create button. Xcode creates an iOS project.
Now modify the Info.plist file to allow access to the camera and to use ARKit by following these steps:
  1. Click the Info.plist file in the Navigator pane. Xcode displays a list of keys, types, and values.

  2. Click the disclosure triangle to expand the Required Device Capabilities category to display Item 0.

  3. Move the mouse pointer over Item 0 to display a plus (+) icon.

  4. Click this plus (+) icon to display a blank Item 1.

  5. Type arkit under the Value category in the Item 1 row.

  6. Move the mouse pointer over the last row to display a plus (+) icon.

  7. Click the plus (+) icon to create a new row. A popup menu appears.

  8. Choose Privacy – Camera Usage Description.

  9. Type AR needs to use the camera under the Value category in the Privacy – Camera Usage Description row.
Now it’s time to modify the ViewController.swift file to use ARKit and SceneKit by following these steps:
  1. Click the ViewController.swift file in the Navigator pane.

  2. Edit the ViewController.swift file so it looks like this:

     import UIKit
     import SceneKit
     import ARKit
     class ViewController: UIViewController, ARSCNViewDelegate {
         let configuration = ARImageTrackingConfiguration()
         override func viewDidLoad() {
             super.viewDidLoad()
             // Do any additional setup after loading the view, typically from a nib.
         }
     }
The most important line to notice is the one that defines an ARImageTrackingConfiguration:
let configuration = ARImageTrackingConfiguration()

Previously, we’ve only defined an ARWorldTrackingConfiguration, but we need an ARImageTrackingConfiguration to let our augmented reality app track a detected image when it moves.
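As a sketch (ARKit only runs on a physical iOS device, not in the simulator), the tracking configuration can also be told how many images to follow at once. The maximumNumberOfTrackedImages property defaults to 1:

```swift
import ARKit

// Image tracking configuration: anchors follow their images as they move.
let configuration = ARImageTrackingConfiguration()

// By default, ARKit tracks only one image at a time; raise this value if your
// app needs to follow several images at once (tracking more costs performance).
configuration.maximumNumberOfTrackedImages = 2
```

For this chapter’s single-image example, the default of 1 is all we need.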

To view augmented reality in our app, add a single ARKit SceneKit View (ARSCNView) so it fills the entire view. After you’ve designed your user interface, you need to add constraints. To add constraints, choose Editor ➤ Resolve Auto Layout Issues ➤ Reset to Suggested Constraints at the bottom half of the menu under the All Views in Container category.

The next step is to connect the user interface items to the Swift code in the ViewController.swift file. To do this, follow these steps:
  1. Click the Main.storyboard file in the Navigator pane.

  2. Click the Assistant Editor icon or choose View ➤ Assistant Editor ➤ Show Assistant Editor to display the Main.storyboard and the ViewController.swift file side by side.

  3. Move the mouse pointer over the ARSCNView, hold down the Control key, and Ctrl-drag under the class ViewController line.

  4. Release the Control key and the left mouse button. A popup menu appears.

  5. Click in the Name text field and type sceneView, then click the Connect button. Xcode creates an IBOutlet as shown here:

     @IBOutlet var sceneView: ARSCNView!

  6. Edit the viewDidLoad function so it looks like this:

     override func viewDidLoad() {
         super.viewDidLoad()
         // Do any additional setup after loading the view, typically from a nib.
         sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
         sceneView.delegate = self
         sceneView.session.run(configuration)
     }

Remember, ARKit can only recognize physical objects in the real world after you have stored images of those items in your app. In addition to storing an image, you must also specify the width and height of that real-world object. That way, when ARKit spots that actual item through the iOS device’s camera, it can compare that image with its stored image. If they match in both appearance and size, then ARKit can recognize that real-world item.

First, you must capture an image of the item you want to detect. Since these images need to be high resolution, you can capture public domain images off the Internet, such as at NASA ( www.nasa.gov ). Then you can display these images on your laptop or iPad screen for your iOS device to recognize.

To store one or more images that you want ARKit to recognize, follow these steps:
  1. Click the Assets.xcassets folder in the Navigator pane.

  2. Click the plus (+) icon at the bottom of the pane. A popup menu appears.

  3. Choose New AR Resource Group. Xcode creates an AR Resources folder.

  4. Drag and drop the images you want ARKit to recognize in the real world. Xcode displays a yellow alert icon in the bottom-right corner of your images.

  5. Click the Attributes Inspector icon or choose View ➤ Inspectors ➤ Show Attributes Inspector. An AR Reference Image pane appears, as shown in Figure 16-1.

Figure 16-1. Defining the width and height of the item to recognize

  6. Click in the Width and Height text fields and type the actual width and height of the real item. You can also click the Units popup menu to change the default measurement unit from meters to something else, such as inches or centimeters.
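If you prefer to keep the default meters unit, you can convert your measurements yourself before typing them in. These small helper functions are hypothetical (not part of ARKit) and simply illustrate the arithmetic:

```swift
import Foundation

// ARKit stores an AR Reference Image's physical width and height in meters.
// Hypothetical helpers to convert common measurement units to meters:
func metersFromInches(_ inches: Double) -> Double {
    return inches * 0.0254
}

func metersFromCentimeters(_ centimeters: Double) -> Double {
    return centimeters / 100.0
}

let width = metersFromInches(8.0)   // an 8-inch-wide picture
print(width)                        // about 0.2032 meters
```

Getting these physical sizes right matters, because ARKit uses them to judge whether the real-world item matches the stored image.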

Once we’ve added one or more images of the real objects we want ARKit to recognize, we need to write actual Swift code to recognize the image when spotted through an iOS device’s camera.

First, we need to access the folder containing the images of items to recognize. This folder can be called anything, such as AR Resources. This means using a guard statement to verify that the image folder even exists like this:
   guard let storedImages =  ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing AR Resources images")
   }
This code looks for a folder named AR Resources. If it fails to find it, it ends the program and displays "Missing AR Resources images". If it finds an AR Resources folder, then we can define where the detected images are stored, like this:
   configuration.trackingImages = storedImages

Notice that this line of code uses trackingImages instead of detectionImages. With detectionImages, an app will recognize an image only if it stays in one place. With trackingImages, the app can follow the detected image if it moves.

Finally, we need to use the didAdd renderer function, which runs every time the camera updates its view. If the camera detects a recognized image (ARImageAnchor) then we want to display a virtual object that appears near the detected image.

In our earlier examples, we would display a virtual object near the detected image by attaching it to the root node of the scene like this:
sceneView.scene.rootNode.addChildNode(objectNode)
However, this ties the virtual object to a specific location in the augmented reality view. What we want is to tie the virtual object to the detected image like this:
node.addChildNode(objectNode)
Now if the detected image moves, the virtual object will move as well. To detect and track the stored image, we need to use the didAdd renderer function. First, we need to make sure we detected a stored image as an ARImageAnchor like this:
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if anchor is ARImageAnchor {
        }
    }
Then we can create a virtual object to appear over the detected image. In this example, we’ll create text as follows:
   let movingImage = SCNText(string: "Moving Text", extrusionDepth: 0.0)
   movingImage.flatness = 0.1
   movingImage.font = UIFont.boldSystemFont(ofSize: 10)
Next, we’ll need to store this SCNText in a node, so we need to define a node by defining its color and scale:
   let titleNode = SCNNode()
   titleNode.geometry = movingImage
   titleNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
   titleNode.scale = SCNVector3(0.0015, 0.0015, 0.0015)
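Why such a tiny scale value? SCNText geometry roughly uses the font’s point size as scene units, and in ARKit one scene unit is one meter, so a 10-point font would be on the order of 10 meters tall. A quick illustrative check (plain Swift, using the values from the code above):

```swift
import Foundation

// SCNText geometry is measured in scene units (meters in ARKit), so the
// 10-point bold system font would render about 10 meters tall unscaled.
let fontSize = 10.0    // from UIFont.boldSystemFont(ofSize: 10)
let scale = 0.0015     // from SCNVector3(0.0015, 0.0015, 0.0015)

let approxHeightInMeters = fontSize * scale
print(approxHeightInMeters)  // roughly 0.015 meters, i.e. about 1.5 centimeters
```

Scaling by 0.0015 therefore shrinks the text to a size that looks natural next to a detected picture.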
Then we just need to add the node containing our SCNText to the detected image:
   node.addChildNode(titleNode)
The entire ViewController.swift file should look like this:
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    let configuration = ARImageTrackingConfiguration()
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
        sceneView.delegate = self
        guard let storedImages =  ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing AR Resources images")
        }
        configuration.trackingImages = storedImages
        sceneView.session.run(configuration)
    }
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if anchor is ARImageAnchor {
            let movingImage = SCNText(string: "Moving Text", extrusionDepth: 0.0)
            movingImage.flatness = 0.1
            movingImage.font = UIFont.boldSystemFont(ofSize: 10)
            let titleNode = SCNNode()
            titleNode.geometry = movingImage
            titleNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
            titleNode.scale = SCNVector3(0.0015, 0.0015, 0.0015)
            node.addChildNode(titleNode)
        }
    }
}
To test this app, display the picture that you stored in the AR Resources folder on a laptop or iPad screen placed on a table or floor. Then follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run. The first time you run this app, it will ask permission to access the camera, so give it permission.

  3. Display the same picture you stored in your project’s AR Resources folder on a laptop or iPad screen.

  4. Aim the iOS device’s camera at the screen that displays the picture you want ARKit to recognize. When ARKit recognizes the image, it displays the text “Moving Text” over the detected image, as shown in Figure 16-2.

Figure 16-2. Text appears over the detected image even when you move the detected image

  5. Move the laptop or iPad screen that’s displaying the detected image. Notice that as you move the detected image, the virtual object (“Moving Text”) moves along with it, maintaining its distance and orientation relative to the detected image at all times.

  6. Click the Stop button or choose Product ➤ Stop.

Detecting Objects

With image detection, we had to store the actual images we want the app to detect in a special AR Resources folder. Object detection works in a similar way, except that instead of storing a single image to detect, it stores the three-dimensional spatial features of an object in the AR Resources folder. To get the three-dimensional spatial features of an object we want to detect later, we have to scan that object ahead of time and store the scanned representation in our app.

To enable object detection in your app, you must follow these steps:
  1. Scan the object you want your app to detect using Apple’s scanning app. This stores the three-dimensional spatial representation of that object in an .arobject file.

  2. Store this .arobject file in the AR Resources folder of your own app.

  3. Write Swift code to make your app respond when it detects the object defined by the .arobject file.

Scanning an Object

The quality of your app’s object detection depends on accurately scanning that object beforehand. Scanning requires a device capable of running iOS 12 with at least an A9 processor (iPhone 6s, iPhone 6s Plus, iPhone SE, or the 2017 iPad, or newer). The more current the device (higher-resolution camera, faster processor, etc.), the better the reference data the scanning will capture.

To increase the accuracy of your scanning, place the object you want to detect on a flat surface free of any additional items. That way the scanning can focus solely on the object you want to detect.

First, you need to install Apple’s ARKit Scanner app on an iOS 12 device. To get this ARKit Scanner app, follow these steps:
  1. Visit https://developer.apple.com/documentation/arkit/scanning_and_detecting_3d_objects and download the ScanningApp project, which includes the Swift source code.

  2. Open this ScanningApp project in Xcode.

  3. Connect an iOS 12 device to your Macintosh using a USB cable.

  4. Click the Run button or choose Product ➤ Run. This installs the ARKit Scanner app on your iOS device.

  5. Click the Stop button or choose Product ➤ Stop. At this point, you can disconnect the iOS 12 device from your Macintosh.
Once you have Apple’s ARKit Scanner installed on an iOS 12 device, you can start scanning objects. To scan an item, follow these steps:
  1. Place the object you want your app to detect on a flat, well-lit surface that’s free of any other objects.

  2. Run the ARKit Scanner app on your iOS 12 device.

  3. Point the iOS device’s camera at the object until a bounding box encloses the object you want to detect, as shown in Figure 16-3.

Figure 16-3. A bounding box defines the area of the object to detect

  4. When the object appears centered inside the bounding box, tap the Next button. The ARKit Scanner app displays the bounding box around your object along with measurement information about that object, as shown in Figure 16-4.

Figure 16-4. The bounding box encloses the object you want to detect

  5. Tap the Scan button and move around the object. As you move, the app displays yellow planes along the sides and top of the bounding box to let you know which angles it can detect, as shown in Figure 16-5. The goal is to move your iOS device’s camera along all sides and the top of the object until yellow planes completely cover the bounding box.

Figure 16-5. As you scan the object, yellow planes show you which areas you’ve already scanned

  6. When yellow planes completely cover all sides of the bounding box, tap the Finish button. The ARKit Scanner app now displays the origin of the detected object, which you can move if you wish, as shown in Figure 16-6.

Figure 16-6. After scanning an object, you can move its origin

  7. Move the origin, if you wish, by swiping your finger on it, and then tap the Test button.

  8. Move the object to a new location and aim the iOS device’s camera at it to make sure the ARKit Scanner app can recognize the object, as shown in Figure 16-7.

Figure 16-7. Testing whether object detection is successful

  9. On your Macintosh, open a Finder window and choose Go ➤ AirDrop to open an AirDrop window, as shown in Figure 16-8.

Figure 16-8. Turning on AirDrop on a Macintosh

  10. Tap the Share button in the ARKit Scanner screen. A popup window appears, letting you choose how you want to share the .arobject file, as shown in Figure 16-9.

Figure 16-9. Accessing AirDrop from an iOS device

  11. Tap the top gray circle that represents your Macintosh to transfer the .arobject file to the Downloads folder of your Macintosh.

Detecting Objects in an App

Once you’ve transferred the .arobject file to your Macintosh, you need to create an app that can detect the object captured in that .arobject file. To detect objects, let’s create a new Xcode project by following these steps:
  1. Start Xcode. (Make sure you’re using Xcode 10 or greater.)

  2. Choose File ➤ New ➤ Project. Xcode asks you to choose a template.

  3. Click the iOS category.

  4. Click the Single View App icon and click the Next button. Xcode asks for a product name, organization name, organization identifier, and content technology.

  5. Click in the Product Name text field and type a descriptive name for your project, such as ObjectDetection. (The exact name does not matter.)

  6. Click the Next button. Xcode asks where you want to store your project.

  7. Choose a folder and click the Create button. Xcode creates an iOS project.
Now modify the Info.plist file to allow access to the camera and to use ARKit by following these steps:
  1. Click the Info.plist file in the Navigator pane. Xcode displays a list of keys, types, and values.

  2. Click the disclosure triangle to expand the Required Device Capabilities category to display Item 0.

  3. Move the mouse pointer over Item 0 to display a plus (+) icon.

  4. Click this plus (+) icon to display a blank Item 1.

  5. Type arkit under the Value category in the Item 1 row.

  6. Move the mouse pointer over the last row to display a plus (+) icon.

  7. Click the plus (+) icon to create a new row. A popup menu appears.

  8. Choose Privacy – Camera Usage Description.

  9. Type AR needs to use the camera under the Value category in the Privacy – Camera Usage Description row.
Now it’s time to modify the ViewController.swift file to use ARKit and SceneKit by following these steps:
  1. Click the ViewController.swift file in the Navigator pane.

  2. Edit the ViewController.swift file so it looks like this:

     import UIKit
     import SceneKit
     import ARKit
     class ViewController: UIViewController, ARSCNViewDelegate {
         let configuration = ARWorldTrackingConfiguration()
         override func viewDidLoad() {
             super.viewDidLoad()
             // Do any additional setup after loading the view, typically from a nib.
         }
     }

To view augmented reality in our app, add a single ARKit SceneKit View (ARSCNView) so it fills the entire view. After you’ve designed your user interface, you need to add constraints. To add constraints, choose Editor ➤ Resolve Auto Layout Issues ➤ Reset to Suggested Constraints at the bottom half of the menu under the All Views in Container category.

The next step is to connect the user interface items to the Swift code in the ViewController.swift file. To do this, follow these steps:
  1. Click the Main.storyboard file in the Navigator pane.

  2. Click the Assistant Editor icon or choose View ➤ Assistant Editor ➤ Show Assistant Editor to display the Main.storyboard and the ViewController.swift file side by side.

  3. Move the mouse pointer over the ARSCNView, hold down the Control key, and Ctrl-drag under the class ViewController line.

  4. Release the Control key and the left mouse button. A popup menu appears.

  5. Click in the Name text field and type sceneView, then click the Connect button. Xcode creates an IBOutlet, as shown here:

     @IBOutlet var sceneView: ARSCNView!

  6. Edit the viewDidLoad function so it looks like this:

     override func viewDidLoad() {
         super.viewDidLoad()
         // Do any additional setup after loading the view, typically from a nib.
         sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
         sceneView.delegate = self
         sceneView.session.run(configuration)
     }

ARKit can only recognize physical objects in the real world after you have stored .arobject files of those items in your app. To store one or more .arobject files that you want ARKit to recognize, follow these steps:

  1. Click the Assets.xcassets folder in the Navigator pane.

  2. Click the plus (+) icon at the bottom of the pane. A popup menu appears.

  3. Choose New AR Resource Group. Xcode creates an AR Resources folder.

  4. Drag and drop the .arobject file into the AR Resources folder, as shown in Figure 16-10.

Figure 16-10. Displaying an .arobject file in the AR Resources folder

Once we’ve added one or more .arobject files of the real objects we want ARKit to recognize, we need to write actual Swift code to recognize the object when spotted through an iOS device’s camera.

First, we need to access the folder containing the .arobject files of the items to recognize. This folder can be called anything, such as AR Resources. This means using a guard statement to verify that the AR Resources folder even exists, like this:
        guard let storedObjects =  ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing AR Resources images")
        }
This code looks for a folder named AR Resources. If it fails to find it, it ends the program and displays "Missing AR Resources images". If it finds an AR Resources folder, then we can define where the detected .arobject files are stored like this:
   configuration.detectionObjects = storedObjects

Notice that this line of code uses detectionObjects and the guard statement uses ARReferenceObject.referenceObjects.

Finally, we need to use the didAdd renderer function, which runs every time the camera updates its view. If the camera detects a recognized object (ARObjectAnchor), then we want to display a virtual object that appears near the detected object.

First, we need to make sure we detected a stored .arobject as an ARObjectAnchor like this:
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if let objectAnchor = anchor as? ARObjectAnchor {
        }
    }
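Because the cast gives us the ARObjectAnchor, we can also ask which stored .arobject was matched. As a device-only sketch (the name is whatever you called the resource in the AR Resources folder), a small hypothetical helper could report it:

```swift
import ARKit

// Hypothetical helper: report which stored .arobject an anchor matched.
// Returns nil if the anchor is not an object anchor at all.
func describeDetectedObject(for anchor: ARAnchor) -> String? {
    guard let objectAnchor = anchor as? ARObjectAnchor else { return nil }
    return objectAnchor.referenceObject.name ?? "unnamed object"
}
```

This can be handy when your AR Resources folder holds several .arobject files and you want to display different text for each one.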
Then we can create a virtual object to appear over the detected object. In this example, we’ll create text as follows:
   let movingImage = SCNText(string: "Object Detected", extrusionDepth: 0.0)
   movingImage.flatness = 0.1
   movingImage.font = UIFont.boldSystemFont(ofSize: 10)
Next, we’ll need to store this SCNText in a node, so we need to define a node by defining its color and scale:
   let titleNode = SCNNode()
   titleNode.geometry = movingImage
   titleNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
   titleNode.scale = SCNVector3(0.0015, 0.0015, 0.0015)
Then we just need to add the node containing our SCNText to the detected object:
   node.addChildNode(titleNode)
The entire ViewController.swift file should look like this:
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    let configuration = ARWorldTrackingConfiguration()
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
        sceneView.delegate = self
        guard let storedObjects =  ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing AR Resources images")
        }
        configuration.detectionObjects = storedObjects
        sceneView.session.run(configuration)
    }
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if let objectAnchor = anchor as? ARObjectAnchor {
            let movingImage = SCNText(string: "Object Detected", extrusionDepth: 0.0)
            movingImage.flatness = 0.1
            movingImage.font = UIFont.boldSystemFont(ofSize: 10)
            let titleNode = SCNNode()
            titleNode.geometry = movingImage
            titleNode.geometry?.firstMaterial?.diffuse.contents = UIColor.white
            titleNode.scale = SCNVector3(0.0015, 0.0015, 0.0015)
            node.addChildNode(titleNode)
        }
    }
}
To test this app, place the item that you scanned and stored as an .arobject file in the AR Resources folder on a table or floor. Then follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run. The first time you run this app, it will ask permission to access the camera, so give it permission.

  3. Place the object you want to detect on a flat surface.

  4. Aim the iOS device’s camera at the object you want ARKit to recognize. When ARKit recognizes the object, it displays the text “Object Detected” near the detected object, as shown in Figure 16-11.

Figure 16-11. Detecting an object

  5. Click the Stop button or choose Product ➤ Stop.

Summary

Image tracking lets your app not only recognize an image and display a virtual object nearby, but also keep that virtual object linked to that image even if that image moves. Object detection lets your app detect a pre-scanned object and display virtual objects around it, such as text.

When using image tracking, make sure you use ARImageTrackingConfiguration (not ARWorldTrackingConfiguration) like this:
let configuration = ARImageTrackingConfiguration()
Then use a guard statement to look for stored images in a special AR Resources folder:
   guard let storedImages =  ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing AR Resources images")
   }
Finally, use trackingImages to access the stored images in the AR Resources folder:
   configuration.trackingImages = storedImages
When using object detection, use a guard statement to define stored .arobject files in the AR Resources folder. Make sure you use ARReferenceObject.referenceObjects like this:
        guard let storedObjects =  ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing AR Resources images")
        }
This code looks for a folder named AR Resources. If it fails to find it, it ends the program and displays "Missing AR Resources images". If it finds an AR Resources folder, then we can define where the detected .arobject files are stored like this:
   configuration.detectionObjects = storedObjects

With both image tracking and object detection available in ARKit 2.0, augmented reality apps can become far more versatile than ever before.