© Wallace Wang 2018
Wallace Wang, Beginning ARKit for iPhone and iPad, https://doi.org/10.1007/978-1-4842-4102-8_13

13. Interacting with the Real World

Wallace Wang, San Diego, CA, USA

When we’ve placed virtual objects in an augmented reality view, those virtual objects can interact with each other in different ways. The simplest interaction occurs when one virtual object sits in front of another: the closer object blocks your view of the object behind it.

For greater realism within an augmented reality view, you can also have real-world objects appear to block virtual objects, which is called occlusion. We can mimic a real-world object blocking the view of a virtual object by creating an invisible virtual object that matches the position and size of the real-world object.

Another form of interaction occurs when virtual objects interact with real-world objects. The simplest example is horizontal or vertical plane detection, where ARKit can identify walls or floors. ARKit can also recognize individual points in the real world. It can, for example, measure the distance between two points that exist in the real world, such as the distance from one corner of a table to another corner.

To learn about occlusion with virtual objects, let’s create a new Xcode project by following these steps:
  1. Start Xcode. (Make sure you’re using Xcode 10 or greater.)

  2. Choose File ➤ New ➤ Project. Xcode asks you to choose a template.

  3. Click the iOS category.

  4. Click the Single View App icon and click the Next button. Xcode asks for a product name, organization name, organization identifier, and content technology.

  5. Click in the Product Name text field and type a descriptive name for your project, such as Occlusion. (The exact name does not matter.)

  6. Click the Next button. Xcode asks where you want to store your project.

  7. Choose a folder and click the Create button. Xcode creates an iOS project.
Now modify the Info.plist file to allow access to the camera and to use ARKit by following these steps:
  1. Click the Info.plist file in the Navigator pane. Xcode displays a list of keys, types, and values.

  2. Click the disclosure triangle to expand the Required Device Capabilities category to display Item 0.

  3. Move the mouse pointer over Item 0 to display a plus (+) icon.

  4. Click this plus (+) icon to display a blank Item 1.

  5. Type arkit under the Value category in the Item 1 row.

  6. Move the mouse pointer over the last row to display a plus (+) icon.

  7. Click on the plus (+) icon to create a new row. A popup menu appears.

  8. Choose Privacy – Camera Usage Description.

  9. Type AR needs to use the camera under the Value category in the Privacy – Camera Usage Description row.
Now it’s time to modify the ViewController.swift file to use ARKit and SceneKit by following these steps:
  1. Click on the ViewController.swift file in the Navigator pane.

  2. Edit the ViewController.swift file so it looks like this:

    import UIKit
    import SceneKit
    import ARKit

    class ViewController: UIViewController, ARSCNViewDelegate {

        let configuration = ARWorldTrackingConfiguration()
        var x: Float = 0
        var y: Float = 0
        var z: Float = 0

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
        }
    }

This code declares three variables x, y, and z. These variables will be used to store the location of a horizontal plane. Once our app detects a horizontal plane, we’ll draw a virtual object underneath the horizontal plane using these x, y, and z variables.

To view augmented reality in our app, add a single ARKit SceneKit View (ARSCNView) and expand it so it fills the entire user interface. Then add constraints by choosing Editor ➤ Resolve Auto Layout Issues ➤ Reset to Suggested Constraints at the bottom half of the menu under the All Views in Container category.

The next step is to connect the user interface items to the Swift code in the ViewController.swift file. To do this, follow these steps:
  1. Click the Main.storyboard file in the Navigator pane.

  2. Click the Assistant Editor icon or choose View ➤ Assistant Editor ➤ Show Assistant Editor to display the Main.storyboard and the ViewController.swift file side by side.

  3. Move the mouse pointer over the ARSCNView, hold down the Control key, and Ctrl-drag under the class ViewController line.

  4. Release the Control key and the left mouse button. A popup menu appears.

  5. Click in the Name text field and type sceneView, then click the Connect button. Xcode creates an IBOutlet as shown here:

    @IBOutlet var sceneView: ARSCNView!

  6. Edit the viewDidLoad function so it looks like this:

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
            sceneView.delegate = self
            configuration.planeDetection = .horizontal
            sceneView.session.run(configuration)
            let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse))
            sceneView.addGestureRecognizer(tapGesture)
        }

    The last two lines in the viewDidLoad function create a tap gesture recognizer, which means we’ll need a function to handle the tap gesture; we’ll name it tapResponse.
  7. Underneath the viewDidLoad function, write the following tapResponse function:

        @objc func tapResponse(sender: UITapGestureRecognizer) {
            let boxNode = SCNNode()
            boxNode.geometry = SCNBox(width: 0.08, height: 0.08, length: 0.08, chamferRadius: 0)
            boxNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
            boxNode.position = SCNVector3(x, y, z)
            sceneView.scene.rootNode.addChildNode(boxNode)
        }

    This tapResponse function runs whenever the user taps the screen. It displays a green box at the stored x, y, and z coordinates, which define a point underneath the center of the detected horizontal plane.
  8. Underneath the tapResponse function, write the following didAdd renderer function:

        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            guard anchor is ARPlaneAnchor else { return }
            let planeNode = detectPlane(anchor: anchor as! ARPlaneAnchor)
            node.addChildNode(planeNode)
        }

    This renderer function runs the first time ARKit detects a horizontal plane. Once it detects a horizontal plane, it runs the detectPlane function (which we’ll need to write later).
  9. Write the following didUpdate renderer function:

        func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
            guard anchor is ARPlaneAnchor else { return }
            node.enumerateChildNodes { (childNode, _) in
                childNode.removeFromParentNode()
            }
            let planeNode = detectPlane(anchor: anchor as! ARPlaneAnchor)
            node.addChildNode(planeNode)
            print("updating plane anchor")
        }

    This didUpdate renderer function resizes the horizontal plane as the iOS camera detects more of it. Notice that this function also calls the detectPlane function and prints “updating plane anchor” each time the detected plane grows.
  10. Finally, write the following detectPlane function:

        func detectPlane(anchor: ARPlaneAnchor) -> SCNNode {
            let planeNode = SCNNode()
            planeNode.geometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
            planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.yellow
            planeNode.position = SCNVector3(anchor.center.x, anchor.center.y, anchor.center.z)
            x = anchor.center.x
            y = anchor.center.y - 0.4
            z = anchor.center.z
            let ninetyDegrees = GLKMathDegreesToRadians(90)
            planeNode.eulerAngles = SCNVector3(ninetyDegrees, 0, 0)
            planeNode.geometry?.firstMaterial?.isDoubleSided = true
            return planeNode
        }

The first two lines in the detectPlane function create a node and then define the plane’s size based on the dimensions of the detected plane. The anchor’s extent property contains the detected horizontal plane’s width (extent.x) and depth (extent.z).

The next two lines define a yellow color for the plane and position the plane at the center of the detected plane anchor.

The next three lines store the plane’s x, y, and z position, except they subtract 0.4 meters from the plane’s y position. This places the stored y value 0.4 meters below the horizontal plane.

The next two lines use the GLKMathDegreesToRadians function to convert 90 degrees into radians. Then they rotate the plane 90 degrees around the x-axis, because an SCNPlane is initially drawn vertically. Rotating the plane 90 degrees around the x-axis makes the plane lie horizontally.
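As a quick sanity check, GLKMathDegreesToRadians simply applies the familiar degrees-to-radians formula. This standalone sketch shows the conversion alongside its plain-Swift equivalent:

```swift
import GLKit

// 90 degrees expressed in radians (π/2 ≈ 1.5708).
let ninetyDegrees = GLKMathDegreesToRadians(90)

// The same conversion without GLKit:
let alsoNinety = Float(90) * .pi / 180
```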

Finally, the last line defines the plane as double-sided so the yellow color appears on both its top and bottom.

To test this project, follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run. The first time you run this app, it will ask permission to access the camera, so give it permission.

  3. Aim the iOS device’s camera at a horizontal surface that has empty space underneath it, such as a table. When ARKit identifies enough feature points, it draws a yellow plane on top of the horizontal surface the camera is aimed at.

  4. Tap the screen. This places a green box 0.4 meters underneath the yellow plane. You may need to move to the side to see the green box underneath the yellow plane. Notice that the yellow plane blocks your view of the green box when the yellow plane appears over the green box, as shown in Figure 13-1.
Figure 13-1

The yellow plane blocks the green box from view

  5. Click the Stop button or choose Product ➤ Stop.

Occlusion works by displaying an invisible horizontal plane on a detected horizontal surface, such as a table top. Because the virtual plane can’t be seen, the scene looks unchanged, yet the plane still blocks the green box from view unless you move to the side. This creates the illusion that the horizontal surface (such as a table top) is actually blocking the view of the green box.

To create an invisible horizontal plane, just comment out the line in the detectPlane function that displays the plane as yellow. Then replace it with two lines like this:
        planeNode.geometry?.firstMaterial?.colorBufferWriteMask = []
        planeNode.renderingOrder = -1

This creates a plane that draws no color and has a renderingOrder of -1. Virtual objects have a default renderingOrder value of 0, and objects with lower renderingOrder values are drawn first. Because the invisible plane renders before the other virtual objects and still writes to the depth buffer, any virtual object behind it gets hidden. This helps create the illusion that the real horizontal surface blocks the view of the green virtual box, even though it’s really the invisible virtual plane that’s doing it.
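The occlusion recipe can be sketched on its own. This is a minimal stand-alone illustration (the plane size here is arbitrary rather than taken from a plane anchor): the material writes no color but the node still participates in depth testing, and renderingOrder = -1 makes it render before the default-order nodes:

```swift
import SceneKit

// Hypothetical stand-alone occluder node (sizes are arbitrary).
let occluderNode = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.5))
occluderNode.geometry?.firstMaterial?.colorBufferWriteMask = []  // draw no color
occluderNode.geometry?.firstMaterial?.isDoubleSided = true
occluderNode.renderingOrder = -1  // render before nodes with the default order of 0
```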

The entire detectPlane function should look like this:
    func detectPlane(anchor: ARPlaneAnchor) -> SCNNode {
        let planeNode = SCNNode()
        planeNode.geometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
        planeNode.geometry?.firstMaterial?.colorBufferWriteMask = []
        planeNode.renderingOrder = -1
        //planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.yellow
        planeNode.position = SCNVector3(anchor.center.x, anchor.center.y, anchor.center.z)
        x = anchor.center.x
        y = anchor.center.y - 0.4
        z = anchor.center.z
        let ninetyDegrees = GLKMathDegreesToRadians(90)
        planeNode.eulerAngles = SCNVector3(ninetyDegrees, 0, 0)
        planeNode.geometry?.firstMaterial?.isDoubleSided = true
        return planeNode
    }
To test this project, follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run. The first time you run this app, it will ask permission to access the camera, so give it permission.

  3. Aim the iOS device’s camera at a horizontal surface that has empty space underneath it, such as a table. When you see a lot of feature points on the surface and see “updating plane anchor” in the Xcode debug area, you’ll know that ARKit has detected a horizontal plane and placed an invisible virtual plane on top of it.

  4. Tap the screen. This draws a green box 0.4 meters underneath the invisible horizontal plane, but you won’t be able to see it unless you move to the side, as shown in Figure 13-2.
Figure 13-2

The green box appears underneath the real horizontal surface

  5. Move directly over the green box. Notice that the real horizontal surface appears to cut off your view of the green box, as shown in Figure 13-3.
Figure 13-3

The green box appears cut from view by the real horizontal surface

  6. Click the Stop button or choose Product ➤ Stop.

Detecting Points in the Real World

ARKit can detect horizontal and vertical planes, but it can also detect individual points. For example, a measuring app would let you aim the center of an iOS device’s camera at an object and tap to record that position. Then, as you move the camera and tap to identify another position, the app could determine the distance between the two points.

Let’s create an Xcode project to identify two points in the real world and calculate the distance between them by following these steps:
  1. Start Xcode. (Make sure you’re using Xcode 10 or greater.)

  2. Choose File ➤ New ➤ Project. Xcode asks you to choose a template.

  3. Click the iOS category.

  4. Click the Single View App icon and click the Next button. Xcode asks for a product name, organization name, organization identifier, and content technology.

  5. Click in the Product Name text field and type a descriptive name for your project, such as Ruler. (The exact name does not matter.)

  6. Click the Next button. Xcode asks where you want to store your project.

  7. Choose a folder and click the Create button. Xcode creates an iOS project.
Now modify the Info.plist file to allow access to the camera and to use ARKit by following these steps:
  1. Click the Info.plist file in the Navigator pane. Xcode displays a list of keys, types, and values.

  2. Click the disclosure triangle to expand the Required Device Capabilities category to display Item 0.

  3. Move the mouse pointer over Item 0 to display a plus (+) icon.

  4. Click this plus (+) icon to display a blank Item 1.

  5. Type arkit under the Value category in the Item 1 row.

  6. Move the mouse pointer over the last row to display a plus (+) icon.

  7. Click on the plus (+) icon to create a new row. A popup menu appears.

  8. Choose Privacy – Camera Usage Description.

  9. Type AR needs to use the camera under the Value category in the Privacy – Camera Usage Description row.
Now it’s time to modify the ViewController.swift file to use ARKit and SceneKit by following these steps:
  1. Click on the ViewController.swift file in the Navigator pane.

  2. Edit the ViewController.swift file so it looks like this:

    import UIKit
    import SceneKit
    import ARKit

    class ViewController: UIViewController, ARSCNViewDelegate {

        let configuration = ARWorldTrackingConfiguration()

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
        }
    }

  3. Click the Main.storyboard file in the Navigator pane.

  4. Drag and drop an ARSCNView and expand it to fill the entire view.

  5. Click the Add New Constraints icon near the bottom of the Xcode screen. A popup window appears.

  6. Make sure the values on the top, bottom, left, and right edges are all 0. Then click the constraint in each direction so they appear in red, as shown in Figure 13-4.
Figure 13-4

Defining constraints on the ARSCNView

  7. Click the Add 4 Constraints button to define constraints on the ARSCNView.

  8. Drag and drop a UILabel on the ARSCNView.

  9. Click on the UILabel and click the Attributes Inspector icon, or choose View ➤ Inspectors ➤ Show Attributes Inspector.

  10. Click in the text field that displays Label and type a plus sign (+). Press Return.

  11. Click on the T icon that appears on the far right of the Font popup menu. A popup window appears, as shown in Figure 13-5.
Figure 13-5

Defining a size for the text in the UILabel

  12. Click in the Size text field and type 24. Then click the Done button.

  13. Click the Align icon near the bottom of the Xcode screen to display a popup window.

  14. Select the Horizontally in Container and Vertically in Container check boxes. Then click the Add 2 Constraints button. Xcode centers your UILabel, displaying the plus sign, in the center of the screen, as shown in Figure 13-6.
Figure 13-6

Aligning the UILabel horizontally and vertically

The whole purpose of the plus sign in the label is to show where the center of the camera view is pointing within the augmented reality view.

The next step is to connect the user interface items to the Swift code in the ViewController.swift file. To do this, follow these steps:
  1. Click the Main.storyboard file in the Navigator pane.

  2. Click the Assistant Editor icon or choose View ➤ Assistant Editor ➤ Show Assistant Editor to display the Main.storyboard and the ViewController.swift file side by side.

  3. Move the mouse pointer over the ARSCNView, hold down the Control key, and Ctrl-drag under the class ViewController line.

  4. Release the Control key and the left mouse button. A popup menu appears.

  5. Click in the Name text field and type sceneView, then click the Connect button. Xcode creates an IBOutlet as shown here:

    @IBOutlet var sceneView: ARSCNView!

  6. Edit the viewDidLoad function so it looks like this:

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
            sceneView.delegate = self
            sceneView.session.run(configuration)
            let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse))
            sceneView.addGestureRecognizer(tapGesture)
        }
At this point, we’ve added a Tap Gesture Recognizer so we need to write a function to handle this tap gesture by following these steps:
  1. Click the ViewController.swift file in the Navigator pane.

  2. Type the following underneath the viewDidLoad function:

        @objc func tapResponse(sender: UITapGestureRecognizer) {
            print("Tapped screen")
        }

If you test this app, you’ll see the plus sign as a tiny black crosshair in the center of the screen. This center point is what we’ll use to define our two points to measure in the real world. For now, tapping the screen just displays “Tapped screen” in the Xcode debug area.

Defining a Point in the Real World

For our ruler app to work, we need to define two points in the real world and then measure the distance between them. That means placing two points in the real world and storing the location of those points.

Each time the user taps the screen, we want to display a small sphere that defines a point in the real world. Then the user will need to point and tap the iOS device at another point so the app can measure the distance between the two points.

Inside the tapResponse function, we need to get the center of the screen, which is where the camera is pointing and where the plus sign appears. To do that, we retrieve the view that the user tapped on and identify its center like this:
        let scene = sender.view as! ARSCNView
        let location = scene.center
Next, we need to use the hitTest method to identify a feature point where the camera is pointing. A feature point represents a real-world surface that ARKit can recognize, so the code looks like this:
        let hitTestResults = scene.hitTest(location, types: .featurePoint)
As long as ARKit can identify a feature point, we can proceed, so we need an if statement like this:
        if hitTestResults.isEmpty == false {
        }
Inside this if statement, we need to retrieve the first item from the hitTestResults constant:
   guard let hitTestResults = hitTestResults.first else { return }
Once we’ve identified a point that the camera is pointing at, we can create a green sphere to appear at that point:
   let sphereNode = SCNNode()
   sphereNode.geometry = SCNSphere(radius: 0.003)
   sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
To place the green sphere where the center of the camera is pointing, we need to retrieve the x, y, and z coordinates of the hit test result through its worldTransform property, which is a 4x4 matrix. The last column of the matrix (columns.3) contains the translation we need, so we can define the green sphere’s position like this:
   sphereNode.position = SCNVector3(hitTestResults.worldTransform.columns.3.x, hitTestResults.worldTransform.columns.3.y, hitTestResults.worldTransform.columns.3.z)
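To see why columns.3 holds the position, here is a small stand-alone sketch using the simd types that back worldTransform (the numeric values are made up for illustration):

```swift
import simd

// A 4x4 transform's last column stores the translation (x, y, z, 1).
var transform = matrix_identity_float4x4
transform.columns.3 = simd_float4(0.1, -0.4, 0.25, 1)

let x = transform.columns.3.x   // 0.1
let y = transform.columns.3.y   // -0.4
let z = transform.columns.3.z   // 0.25
```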

Finally we need to add the sphere to the scene like this:

    sceneView.scene.rootNode.addChildNode(sphereNode)

The entire tapResponse function should look like this:
    @objc func tapResponse(sender: UITapGestureRecognizer) {
        let scene = sender.view as! ARSCNView
        let location = scene.center
        let hitTestResults = scene.hitTest(location, types: .featurePoint)
        if hitTestResults.isEmpty == false {
            guard let hitTestResults = hitTestResults.first else { return }
            let sphereNode = SCNNode()
            sphereNode.geometry = SCNSphere(radius: 0.003)
            sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
            sphereNode.position = SCNVector3(hitTestResults.worldTransform.columns.3.x, hitTestResults.worldTransform.columns.3.y, hitTestResults.worldTransform.columns.3.z)
            sceneView.scene.rootNode.addChildNode(sphereNode)
        }
    }

If you test this app, you’ll be able to point the plus sign in the center of the screen at any real item and tap the screen to place a green sphere.

Measuring Distance Between Virtual Objects

Our ruler app will define two feature points detected in the real world, display green spheres to mark their locations, and then calculate the distance between the two virtual objects. Finally, it will display the result on the screen. First, we’ll need to keep track of the two points using an array of SCNNodes:
    var realPoints = [SCNNode]()
This array will store the location of the two feature points, identified by green spheres. After we place a green sphere in the augmented reality view, we need to store that sphere’s location in the array by using the append method like this:
    realPoints.append(sphereNode)
If the number of spheres added to the augmented reality view is exactly two, then we can calculate the distance between those two points. If the number of spheres added is only one or zero, then we don’t need to do anything, so we need an if statement that counts the number of spheres in the array like this:
   if realPoints.count == 2 {
   }
Inside this if statement, we need to retrieve the two stored spheres:
   if realPoints.count == 2 {
     let pointOne = realPoints.first!
     let pointTwo = realPoints.last!
   }
This code retrieves the first and last elements stored in the realPoints array. Now we need to get the x, y, and z differences by subtracting the position of the first sphere (pointOne) from the position of the second sphere (pointTwo):
   if realPoints.count == 2 {
      let pointOne = realPoints.first!
      let pointTwo = realPoints.last!
      let x = pointTwo.position.x - pointOne.position.x
      let y = pointTwo.position.y - pointOne.position.y
      let z = pointTwo.position.z - pointOne.position.z
   }
We can combine these x, y, and z differences into a single vector:
   if realPoints.count == 2 {
      let pointOne = realPoints.first!
      let pointTwo = realPoints.last!
      let x = pointTwo.position.x - pointOne.position.x
      let y = pointTwo.position.y - pointOne.position.y
      let z = pointTwo.position.z - pointOne.position.z
      let position = SCNVector3(x, y, z)
    }
Now we can calculate the distance using the Pythagorean theorem. Since we’re working in three dimensions, we need to use the x, y, and z coordinates to define the distance like this:
$$ distance=\sqrt{x^2+{y}^2+{z}^2} $$
In Swift, this equation looks like this:
    let distance = sqrt(position.x * position.x + position.y * position.y + position.z * position.z)
So the complete if statement looks like this:
  if realPoints.count == 2 {
     let pointOne = realPoints.first!
     let pointTwo = realPoints.last!
     let x = pointTwo.position.x - pointOne.position.x
     let y = pointTwo.position.y - pointOne.position.y
     let z = pointTwo.position.z - pointOne.position.z
     let position = SCNVector3(x, y, z)
     let distance = sqrt(position.x * position.x + position.y * position.y + position.z * position.z)
  }
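As a stand-alone check of the math (the coordinates here are made up), the same distance can also be computed with the simd library’s built-in helper, which applies the identical square-root-of-sum-of-squares formula:

```swift
import simd

// Two hypothetical points: 3 cm apart along x and 4 cm along y.
let p1 = simd_float3(0, 0, 0)
let p2 = simd_float3(0.03, 0.04, 0)

let d = simd_distance(p1, p2)   // 0.05 meters (a 3-4-5 right triangle)
```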

Now that we can accurately calculate the distance between two points, defined by green spheres in the augmented reality view, the final step is to display the results on the screen. To do this, we can create SCNText, which displays text as a virtual object.

We want to display the distance in between the two green spheres. To do that, we need to calculate the halfway point between the two green spheres, where the distance result will appear. At the end of the if statement, we calculate that location by averaging each coordinate:
   let x1 = (pointOne.position.x + pointTwo.position.x) / 2
   let y1 = (pointOne.position.y + pointTwo.position.y) / 2
   let z1 = (pointOne.position.z + pointTwo.position.z) / 2
   let centerPosition = SCNVector3(x1, y1, z1)
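A halfway point is just the per-component average of the two positions. With simd types (again using made-up coordinates), the whole calculation reduces to one line:

```swift
import simd

let a = simd_float3(0, 0, 0)
let b = simd_float3(0.2, 0.4, 0.6)

let midpoint = (a + b) * 0.5   // (0.1, 0.2, 0.3)
```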
Then we need to call a displayText function that takes the distance between the two points and a position to display the actual answer as virtual text:
   displayText(answer: distance, position: centerPosition)
This means we need to create a displayText function that defines SCNText and displays the distance as yellow text that floats in the augmented reality view:
    func displayText(answer: Float, position: SCNVector3) {
        let textDisplay = SCNText(string: "\(answer) meters", extrusionDepth: 0.5)
        textDisplay.firstMaterial?.diffuse.contents = UIColor.yellow
        let textNode = SCNNode()
        textNode.geometry = textDisplay
        textNode.position = position
        textNode.scale = SCNVector3(0.003, 0.003, 0.003)
        sceneView.scene.rootNode.addChildNode(textNode)
    }
The entire ViewController.swift file should look like this:
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    var realPoints = [SCNNode]()
    let configuration = ARWorldTrackingConfiguration()
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
        sceneView.delegate = self
        sceneView.session.run(configuration)
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse))
        sceneView.addGestureRecognizer(tapGesture)
    }
    @objc func tapResponse(sender: UITapGestureRecognizer) {
        let scene = sender.view as! ARSCNView
        let location = scene.center
        let hitTestResults = scene.hitTest(location, types: .featurePoint)
        if hitTestResults.isEmpty == false {
            guard let hitTestResults = hitTestResults.first else { return }
            let sphereNode = SCNNode()
            sphereNode.geometry = SCNSphere(radius: 0.003)
            sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
            sphereNode.position = SCNVector3(hitTestResults.worldTransform.columns.3.x, hitTestResults.worldTransform.columns.3.y, hitTestResults.worldTransform.columns.3.z)
            sceneView.scene.rootNode.addChildNode(sphereNode)
            realPoints.append(sphereNode)
            if realPoints.count == 2 {
                let pointOne = realPoints.first!
                let pointTwo = realPoints.last!
                let x = pointTwo.position.x - pointOne.position.x
                let y = pointTwo.position.y - pointOne.position.y
                let z = pointTwo.position.z - pointOne.position.z
                let position = SCNVector3(x, y, z)
                let distance = sqrt(position.x * position.x + position.y * position.y + position.z * position.z)
                let x1 = (pointOne.position.x + pointTwo.position.x) / 2
                let y1 = (pointOne.position.y + pointTwo.position.y) / 2
                let z1 = (pointOne.position.z + pointTwo.position.z) / 2
                let centerPosition = SCNVector3(x1, y1, z1)
                displayText(answer: distance, position: centerPosition)
            }
        }
    }
    func displayText(answer: Float, position: SCNVector3) {
        let textDisplay = SCNText(string: "\(answer) meters", extrusionDepth: 0.5)
        textDisplay.firstMaterial?.diffuse.contents = UIColor.yellow
        let textNode = SCNNode()
        textNode.geometry = textDisplay
        textNode.position = position
        textNode.scale = SCNVector3(0.003, 0.003, 0.003)
        sceneView.scene.rootNode.addChildNode(textNode)
    }
}
To test this app, follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run.

  3. Move the plus sign in the center of the screen to the corner or tip of the object you want to measure, such as a pencil, book, or table.

  4. Tap the screen to place the first green sphere on the screen.

  5. Move the plus sign in the center of the screen to another corner or tip of the object you want to measure.

  6. Tap the screen to place the second green sphere on the screen. Your app now displays the distance result, as shown in Figure 13-7.
Figure 13-7

Measuring a real-world object

  7. Click the Stop button or choose Product ➤ Stop.

Summary

Augmented reality combines reality with virtual objects. To create the illusion that virtual objects can interact with the real world, you can use occlusion. By displaying invisible planes that you place in the real world, occlusion can create the illusion that real objects like horizontal or vertical surfaces can actually cover and hide a virtual object. Without placing an invisible plane to block a user’s view, virtual objects always appear floating in mid-air no matter what real-world objects may seem to get in the way.

Another way to interact with the real world is to use feature points to identify locations in the real world. By placing virtual objects at those real-world points, you can measure the distance between two points in the real world.

By making virtual objects appear to interact with real-world items, you can create more realistic augmented reality experiences for the users of your app.