© Wallace Wang 2018
Wallace Wang, Beginning ARKit for iPhone and iPad, https://doi.org/10.1007/978-1-4842-4102-8_12

12. Physics on Virtual Objects

Wallace Wang1 
(1)
San Diego, CA, USA
 

So far when we’ve added virtual objects to an augmented reality view, those virtual objects simply float in mid-air. While this might be suitable in some cases, in other cases you might want the virtual objects to behave more like real objects that can fall, bounce, and collide with one another.

To give virtual objects the same characteristics as real objects, you can assign physical characteristics to any virtual object. That means you can define how a virtual object interacts with others (static, dynamic, or kinematic), the shape of a virtual object such as a box or sphere, and the force applied to a virtual object.

A virtual object can behave in one of three ways:
  • Static—Cannot move and is unaffected by collisions or force.

  • Dynamic—Affected by forces and collisions.

  • Kinematic—Not affected by forces or collisions, but can affect dynamic virtual objects.

Static virtual objects are fine for scenery that simply exists. All the virtual objects we’ve created so far have been static in that they don’t move or affect any other virtual objects.

Dynamic virtual objects are more interesting because they can move and be affected by gravity, which means they’ll fall to the ground along the y-axis. A dynamic virtual object can move and bounce off other dynamic or kinematic virtual objects.

Kinematic virtual objects don’t move but they can collide with dynamic virtual objects. In a video game, a stationary item like a road or an obstacle could be a kinematic virtual object.
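These three behaviors boil down to two questions: does the body respond to forces, and does it move when hit? Here is a plain-Swift sketch encoding those rules (this is not the SceneKit API; the enum and property names are made up for illustration — SceneKit’s real type is SCNPhysicsBodyType):

```swift
// Plain-Swift model of the three behaviors described above.
// Illustrative names only; not SceneKit's SCNPhysicsBodyType.
enum BodyBehavior {
    case staticBody, dynamicBody, kinematicBody

    // Only dynamic bodies respond to forces such as gravity.
    var respondsToForces: Bool { self == .dynamicBody }

    // Only dynamic bodies move when another body collides with them;
    // static and kinematic bodies stay put but can still deflect others.
    var movesWhenHit: Bool { self == .dynamicBody }
}
```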

To learn about applying physics to virtual objects, let’s create a new Xcode project by following these steps:
  1. Start Xcode. (Make sure you’re using Xcode 10 or greater.)

  2. Choose File ➤ New ➤ Project. Xcode asks you to choose a template.

  3. Click the iOS category.

  4. Click the Single View App icon and click the Next button. Xcode asks for a product name, organization name, organization identifiers, and content technology.

  5. Click in the Product Name text field and type a descriptive name for your project, such as Physics. (The exact name does not matter.)

  6. Click the Next button. Xcode asks where you want to store your project.

  7. Choose a folder and click the Create button. Xcode creates an iOS project.
Now modify the Info.plist file to allow access to the camera and to use ARKit by following these steps:
  1. Click the Info.plist file in the Navigator pane. Xcode displays a list of keys, types, and values.

  2. Click the disclosure triangle to expand the Required Device Capabilities category to display Item 0.

  3. Move the mouse pointer over Item 0 to display a plus (+) icon.

  4. Click this plus (+) icon to display a blank Item 1.

  5. Type arkit under the Value category in the Item 1 row.

  6. Move the mouse pointer over the last row to display a plus (+) icon.

  7. Click on the plus (+) icon to create a new row. A popup menu appears.

  8. Choose Privacy – Camera Usage Description.

  9. Type AR needs to use the camera under the Value category in the Privacy – Camera Usage Description row.
Now it’s time to modify the ViewController.swift file to use ARKit and SceneKit by following these steps:
  1. Click on the ViewController.swift file in the Navigator pane.

  2. Edit the ViewController.swift file so it looks like this:

    import UIKit
    import SceneKit
    import ARKit

    class ViewController: UIViewController, ARSCNViewDelegate {
        let configuration = ARWorldTrackingConfiguration()

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
        }
    }
     

To view augmented reality in our app, add a single ARKit SceneKit View (ARSCNView) and expand it so it fills the entire user interface. Then add constraints by choosing Editor ➤ Resolve Auto Layout Issues ➤ Reset to Suggested Constraints at the bottom half of the menu under the All Views in Container category.

The next step is to connect the user interface items to the Swift code in the ViewController.swift file. To do this, follow these steps:
  1. Click the Main.storyboard file in the Navigator pane.

  2. Click the Assistant Editor icon or choose View ➤ Assistant Editor ➤ Show Assistant Editor to display the Main.storyboard and the ViewController.swift file side by side.

  3. Move the mouse pointer over the ARSCNView, hold down the Control key, and Ctrl-drag under the class ViewController line.

  4. Release the Control key and the left mouse button. A popup menu appears.

  5. Click in the Name text field and type sceneView, then click the Connect button. Xcode creates an IBOutlet as shown here:
    @IBOutlet var sceneView: ARSCNView!
     
  6. Edit the viewDidLoad function so it looks like this:
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
            sceneView.delegate = self
            configuration.planeDetection = .horizontal
            sceneView.session.run(configuration)
            let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse))
            sceneView.addGestureRecognizer(tapGesture)
        }

    The last two lines in the viewDidLoad function create a tap gesture recognizer, which means we’ll need a function to handle the tap gesture, called tapResponse.

     
  7. Underneath the viewDidLoad function, write the following tapResponse function:
        @objc func tapResponse(sender: UITapGestureRecognizer) {
            let scene = sender.view as! ARSCNView
            let tapLocation = sender.location(in: scene)
            let hitTest = scene.hitTest(tapLocation, types: .existingPlaneUsingExtent)
            if hitTest.isEmpty{
                print ("no plane detected")
            } else {
                print("found a horizontal plane")
                guard let hitResult = hitTest.first else { return }
                addObject(hitResult: hitResult)
            }
        }

    This tapResponse function identifies the location on the screen where the user tapped and then sends this information to an addObject function, which means we need to write the addObject function next.

     
  8. Underneath the tapResponse function, write the following addObject function:
        func addObject(hitResult: ARHitTestResult) {
            let objectNode = SCNNode()
            objectNode.geometry = SCNSphere(radius: 0.1)
            objectNode.geometry?.firstMaterial?.diffuse.contents = UIColor.orange
            objectNode.position = SCNVector3(hitResult.worldTransform.columns.3.x, hitResult.worldTransform.columns.3.y + 0.5, hitResult.worldTransform.columns.3.z)
            objectNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
            sceneView.scene.rootNode.addChildNode(objectNode)
        }
     
The addObject function creates an orange sphere. Normally this orange sphere would float in mid-air but we’ve given it a physicsBody with this line:
   objectNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)

This code defines the orange sphere as dynamic, which means it can be affected by forces and collisions with other virtual objects. Also its shape is defined as nil, which means ARKit will treat the sphere’s boundaries as its body when calculating collisions with other virtual objects.

Now we need to detect a horizontal plane and draw a virtual plane in that spot. To do that, we need a didAdd renderer function like this:
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let planeNode = displayPlane(anchor: anchor as! ARPlaneAnchor)
        node.addChildNode(planeNode)
    }
Notice that this renderer function runs each time ARKit detects a new anchor. When the anchor is a horizontal plane, the function calls a displayPlane function to create the virtual plane. That means we need to write a displayPlane function like this:
    func displayPlane(anchor: ARPlaneAnchor) -> SCNNode {
        let planeNode = SCNNode()
        planeNode.geometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
        planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.yellow
        planeNode.position = SCNVector3(anchor.center.x, anchor.center.y, anchor.center.z)
        let ninetyDegrees = GLKMathDegreesToRadians(90)
        planeNode.eulerAngles = SCNVector3(ninetyDegrees, 0, 0)
        planeNode.physicsBody = SCNPhysicsBody(type: .kinematic, shape: nil)
        planeNode.geometry?.firstMaterial?.isDoubleSided = true
        return planeNode
    }
This displayPlane function receives information about a horizontal plane stored in an ARPlaneAnchor, which defines the plane’s size and position. So we need to create a plane node and give it a size based on the ARPlaneAnchor information:
planeNode.geometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))

Then we color the virtual plane yellow and position it at the center of the horizontal plane that ARKit recognized. Next, we rotate the plane 90 degrees around the x-axis so that, instead of appearing vertically like a wall, it appears horizontally like a floor.
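The 90-degree rotation relies on GLKMathDegreesToRadians, which is just a multiply by pi/180. A plain-Swift equivalent, in case you want to avoid the GLKit dependency (the function name here is our own):

```swift
// Hypothetical stand-in for GLKMathDegreesToRadians: degrees times pi/180.
func degreesToRadians(_ degrees: Float) -> Float {
    return degrees * .pi / 180
}

let ninetyDegrees = degreesToRadians(90)  // pi/2, roughly 1.5708
```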

Most importantly, we need to give this virtual plane a physics body like this:
planeNode.physicsBody = SCNPhysicsBody(type: .kinematic, shape: nil)

This code defines the virtual plane as kinematic, which means it won’t move when colliding with virtual objects, but it will affect any virtual objects colliding with it. We define its shape as nil, which tells ARKit to define the entire virtual plane as its physical body when calculating collisions with other virtual objects.

Finally, we need a renderer didUpdate function to expand the size of the virtual plane if the user moves the iOS device’s camera to capture more of the horizontal plane. This renderer didUpdate function looks like this:
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        node.enumerateChildNodes { (childNode, _) in
            childNode.removeFromParentNode()
        }
        let planeNode = displayPlane(anchor: anchor as! ARPlaneAnchor)
        node.addChildNode(planeNode)
    }

This didUpdate renderer function removes the virtual plane and redraws a new virtual plane each time it detects that the horizontal plane is larger than it initially calculated. Then it calls the displayPlane function to draw a virtual plane in the augmented reality view.

To test this project, follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run. The first time you run this app, it will ask permission to access the camera, so give it permission.

  3. Aim the iOS device’s camera at a horizontal plane such as a table or the floor. The first time ARKit identifies a horizontal plane, the Xcode debug area displays the message “found a horizontal plane”.

  4. Move the iOS device around to capture more of the horizontal plane. Each time ARKit recognizes a new part of the horizontal plane, the yellow plane grows in size.

  5. Tap the screen. Each time you tap the screen, an orange sphere should appear, as shown in Figure 12-1. Because the orange sphere is defined as a .dynamic physics body, it’s affected by forces such as gravity, which causes it to drop. If the orange sphere hits the yellow plane, it either bounces off it or rests on top of it. If you keep tapping the screen to add more orange spheres, they will bounce off each other and the yellow plane. That’s because the yellow plane is defined as a .kinematic physics body, which means forces such as gravity do not affect it, but it can collide with other virtual objects such as the orange spheres.

     
Figure 12-1

The orange sphere can fall and hit the yellow plane

  6. Click the Stop button or choose Product ➤ Stop.

     

Because the orange sphere is defined as a .dynamic physics body, it’s affected by gravity. Because the yellow plane is defined as a .kinematic physics body, it is not affected by gravity but can interact with other virtual objects like the orange spheres.

If you ever create a virtual object defined as a .dynamic physics body and don’t want it affected by gravity, you can set its isAffectedByGravity property to false like this:
objectNode.physicsBody?.isAffectedByGravity = false

If you add this line to the addObject function, each time you tap on the screen to add an orange sphere, the orange sphere will just hover in mid-air because it won’t be affected by gravity, even though it’s defined as a .dynamic physics body.

Applying Force on Virtual Objects

So far, we’ve created virtual objects that either hover in mid-air or respond to gravity by falling. Another way to interact with virtual objects is by applying a force to them. To apply a force on a virtual object, you need to define the force’s direction and whether you want it to be instantaneous or not.

Let’s create a new Xcode project to display three targets and fire a projectile at those three targets by following these steps:
  1. Start Xcode. (Make sure you’re using Xcode 10 or greater.)

  2. Choose File ➤ New ➤ Project. Xcode asks you to choose a template.

  3. Click the iOS category.

  4. Click the Single View App icon and click the Next button. Xcode asks for a product name, organization name, organization identifiers, and content technology.

  5. Click in the Product Name text field and type a descriptive name for your project, such as PhysicsForce. (The exact name does not matter.)

  6. Click the Next button. Xcode asks where you want to store your project.

  7. Choose a folder and click the Create button. Xcode creates an iOS project.
Now modify the Info.plist file to allow access to the camera and to use ARKit by following these steps:
  1. Click the Info.plist file in the Navigator pane. Xcode displays a list of keys, types, and values.

  2. Click the disclosure triangle to expand the Required Device Capabilities category to display Item 0.

  3. Move the mouse pointer over Item 0 to display a plus (+) icon.

  4. Click this plus (+) icon to display a blank Item 1.

  5. Type arkit under the Value category in the Item 1 row.

  6. Move the mouse pointer over the last row to display a plus (+) icon.

  7. Click on the plus (+) icon to create a new row. A popup menu appears.

  8. Choose Privacy – Camera Usage Description.

  9. Type AR needs to use the camera under the Value category in the Privacy – Camera Usage Description row.
Now it’s time to modify the ViewController.swift file to use ARKit and SceneKit by following these steps:
  1. Click on the ViewController.swift file in the Navigator pane.

  2. Edit the ViewController.swift file so it looks like this:
    import UIKit
    import SceneKit
    import ARKit

    class ViewController: UIViewController, ARSCNViewDelegate {
        let configuration = ARWorldTrackingConfiguration()

        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
        }
    }
     

To view augmented reality in our app, add a single ARKit SceneKit View (ARSCNView) so it fills the entire user interface (see Figure 11-1 in Chapter 11).

After you’ve designed your user interface, you need to add constraints. To add constraints, choose Editor ➤ Resolve Auto Layout Issues ➤ Reset to Suggested Constraints at the bottom half of the menu under the All Views in Container category.

The next step is to connect the user interface items to the Swift code in the ViewController.swift file. To do this, follow these steps:
  1. Click the Main.storyboard file in the Navigator pane.

  2. Click the Assistant Editor icon or choose View ➤ Assistant Editor ➤ Show Assistant Editor to display the Main.storyboard and the ViewController.swift file side by side.

  3. Move the mouse pointer over the ARSCNView, hold down the Control key, and Ctrl-drag under the class ViewController line.

  4. Release the Control key and the left mouse button. A popup menu appears.

  5. Click in the Name text field and type sceneView, then click the Connect button. Xcode creates an IBOutlet as shown here:
    @IBOutlet var sceneView: ARSCNView!
     
  6. Edit the viewDidLoad function so it looks like this:
        override func viewDidLoad() {
            super.viewDidLoad()
            // Do any additional setup after loading the view, typically from a nib.
            sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
            sceneView.delegate = self
            sceneView.session.run(configuration)
            let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse))
            sceneView.addGestureRecognizer(tapGesture)
        }
     
At this point, we’ve added a tap gesture recognizer so we need to write a function to handle this tap gesture by following these steps:
  1. Click the ViewController.swift file in the Navigator pane.

  2. Type the following underneath the viewDidLoad function:
        @objc func tapResponse(sender: UITapGestureRecognizer) {
        }
     
Each time the user taps the screen, we want a sphere to shoot out from the center of the screen and away from the user. To do this, we must first get the camera’s current orientation and location. That means making sure the user tapped an augmented reality view and then retrieving the camera’s current orientation and location as a transform matrix using these three lines of code:
   guard let scene = sender.view as? ARSCNView else { return }
   guard let pov = scene.pointOfView else { return }
   let transform = pov.transform
The transform constant stores a 4 by 4 matrix that contains information about the camera’s position and orientation. To retrieve the orientation information, we need to access the third column of this matrix:
   let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)

All of this information needs to be reversed (hence the negative signs) because the orientation faces toward us and we need it to face the opposite direction away from us.

The location of the camera can be retrieved in the fourth column of the matrix like this:
   let location = SCNVector3(transform.m41, transform.m42, transform.m43)
To get the final position of the camera, we need to combine the orientation with the location like this:
   let position = SCNVector3(orientation.x + location.x, orientation.y + location.y, orientation.z + location.z)
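To see the arithmetic concretely, here is a plain-Swift sketch of the same calculation, using a hypothetical Vec3 struct in place of SCNVector3 and made-up matrix values standing in for a camera one meter up, looking down the negative z-axis:

```swift
// Hypothetical stand-in for SCNVector3, so the arithmetic is visible.
struct Vec3 { var x, y, z: Float }

// Made-up transform values: m31...m33 hold the camera's backward axis,
// m41...m43 its position (one meter up, facing down -z).
let m31: Float = 0, m32: Float = 0, m33: Float = 1
let m41: Float = 0, m42: Float = 1, m43: Float = 0

// Negate the third column so the vector points away from the user.
let orientation = Vec3(x: -m31, y: -m32, z: -m33)
let location = Vec3(x: m41, y: m42, z: m43)

// The final position is the camera location nudged one unit forward.
let position = Vec3(x: orientation.x + location.x,
                    y: orientation.y + location.y,
                    z: orientation.z + location.z)
```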
Once we know this position, we need to create a projectile, which will be a purple sphere that appears at the center of the screen:
   let projectile = SCNNode()
   projectile.geometry = SCNSphere(radius: 0.35)
   projectile.geometry?.firstMaterial?.diffuse.contents = UIColor.purple
   projectile.position = position
This creates a purple sphere that will float in mid-air at the center of the screen when the user taps the screen. We need to give the projectile a physics body that defines its type as .dynamic, which means it can collide with other virtual objects:
   projectile.physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(node: projectile, options: nil))
   projectile.physicsBody?.isAffectedByGravity = false

The first line defines the purple sphere as a physics body capable of moving and colliding, and the second line turns off its gravity. Otherwise gravity would just make the purple sphere plummet to the ground.

Now it’s time to apply a force to the purple sphere. First, declare a constant named force and set its value to an arbitrary value of 50. Then apply that force on the project using the applyForce method like this:
   let force: Float = 50
   projectile.physicsBody?.applyForce(SCNVector3(orientation.x * force, orientation.y * force, orientation.z * force), asImpulse: true)

This code applies a force along the camera’s orientation. The orientation vector on its own is relatively weak (each component is at most 1), so we multiply it by the arbitrary force constant (50). The asImpulse value is true to apply an instantaneous force to the projectile. If this asImpulse value were false, the force would be applied continuously to the projectile.
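The scaling step is simple component-wise multiplication; a plain-Swift sketch (Vec3 is a hypothetical stand-in for SCNVector3, with the orientation pointing straight down the negative z-axis):

```swift
// Hypothetical stand-in for SCNVector3.
struct Vec3 { var x, y, z: Float }

// A unit orientation vector pointing straight down -z.
let orientation = Vec3(x: 0, y: 0, z: -1)
let force: Float = 50

// Scaling each component by the force constant gives the vector
// passed to applyForce(_:asImpulse:).
let impulse = Vec3(x: orientation.x * force,
                   y: orientation.y * force,
                   z: orientation.z * force)
```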

The entire tapResponse function should look like this:
    @objc func tapResponse(sender: UITapGestureRecognizer) {
        guard let scene = sender.view as? ARSCNView else { return }
        guard let pov = scene.pointOfView else { return }
        let transform = pov.transform
        let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
        let location = SCNVector3(transform.m41, transform.m42, transform.m43)
        let position = SCNVector3(orientation.x + location.x, orientation.y + location.y, orientation.z + location.z)
        let projectile = SCNNode()
        projectile.geometry = SCNSphere(radius: 0.35)
        projectile.geometry?.firstMaterial?.diffuse.contents = UIColor.purple
        projectile.position = position
        projectile.physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(node: projectile, options: nil))
        projectile.physicsBody?.isAffectedByGravity = false
        let force: Float = 50
        projectile.physicsBody?.applyForce(SCNVector3(orientation.x * force, orientation.y * force, orientation.z * force), asImpulse: true)
        sceneView.scene.rootNode.addChildNode(projectile)
    }
To test this code, follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run. The first time you run this app, it will ask permission to access the camera, so give it permission.

  3. Aim the iOS device’s camera and tap the screen. Each time you tap the screen, a purple sphere should shoot out and gradually disappear.

  4. Click the Stop button or choose Product ➤ Stop.

     

Modify the force constant with different values such as 20 or 75 to see the effect it has on the force applied to the purple sphere.

Colliding with Virtual Objects

To make a virtual object collide with another one, the two colliding virtual objects need to be either .static or .dynamic physics body types. At the end of the viewDidLoad function, add this line to call a function called addTargets:
      addTargets()
The purple projectile sphere is defined as a .dynamic physics body, which means that any other virtual objects that we want to collide with it must be .dynamic or .static physics bodies. First, let’s create an empty addTargets function:
    func addTargets() {
    }
Add a pyramid in the addTargets function that defines an orange color, specific dimensions, and a position based on the world origin. Then define the pyramid as a .static physics body and add it to the scene like this:
   let pyramidNode = SCNNode()
   pyramidNode.geometry = SCNPyramid(width: 4, height: 4.5, length: 4)
   pyramidNode.geometry?.firstMaterial?.diffuse.contents = UIColor.orange
   pyramidNode.position = SCNVector3(-3, 1, -15)
   pyramidNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
   sceneView.scene.rootNode.addChildNode(pyramidNode)
Create a green box with specific dimensions and position it nearby like this:
   let boxNode = SCNNode()
   boxNode.geometry = SCNBox(width: 3.5, height: 3.5, length: 3.5, chamferRadius: 0)
   boxNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
   boxNode.position = SCNVector3(5, 1, -15)
   boxNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
   sceneView.scene.rootNode.addChildNode(boxNode)
Notice that when defining the physics body of the pyramid and box, the shape is defined as nil, which tells SceneKit to use the shape of the virtual object itself as its collision boundary, like this:
   pyramidNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
   boxNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
The final virtual object to create inside the addTargets function is a torus, which looks like a doughnut or a hoop. Creating a blue torus involves defining physical dimensions, a color, and a position:
   let torusNode = SCNNode()
   torusNode.geometry = SCNTorus(ringRadius: 2, pipeRadius: 0.5)
   torusNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
   torusNode.position = SCNVector3(0, -2, -15)
First, we’ll need to rotate the torus 90 degrees around the x-axis or else it will appear as a flat disk. To rotate the torus, we need to first convert 90 degrees into radians and then apply the value in radians into rotating the torus around its x-axis like this:
   let ninetyDegrees = GLKMathDegreesToRadians(90)
  torusNode.eulerAngles = SCNVector3(ninetyDegrees, 0, 0)
Now we need to define the physics body of the torus. If we simply define its shape as nil like this:
torusNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
then we’ll create a torus that looks like it has a hole in the middle but really doesn’t. That’s because the nil value for its shape uses the outer boundary of the torus for detecting collisions, including the inner hole. To make the hole behave like empty air, we need to define the torus’s physics body to use the boundaries of the actual shape itself, not just the outer boundary. To do this, we can use this code:
   torusNode.physicsBody = SCNPhysicsBody(type: .static, shape: SCNPhysicsShape(node: torusNode, options: [SCNPhysicsShape.Option.type: SCNPhysicsShape.ShapeType.concavePolyhedron]))
The entire ViewController.swift file should look like this:
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate  {
    @IBOutlet var sceneView: ARSCNView!
    let configuration = ARWorldTrackingConfiguration()
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
        sceneView.delegate = self
        sceneView.session.run(configuration)
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse))
        sceneView.addGestureRecognizer(tapGesture)
        addTargets()
    }
    @objc func tapResponse(sender: UITapGestureRecognizer) {
        guard let scene = sender.view as? ARSCNView else { return }
        guard let pov = scene.pointOfView else { return }
        let transform = pov.transform
        let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
        let location = SCNVector3(transform.m41, transform.m42, transform.m43)
        let position = SCNVector3(orientation.x + location.x, orientation.y + location.y, orientation.z + location.z)
        let projectile = SCNNode()
        projectile.geometry = SCNSphere(radius: 0.35)
        projectile.geometry?.firstMaterial?.diffuse.contents = UIColor.purple
        projectile.position = position
        projectile.physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(node: projectile, options: nil))
        projectile.physicsBody?.isAffectedByGravity = false
        let force: Float = 50
        projectile.physicsBody?.applyForce(SCNVector3(orientation.x * force, orientation.y * force, orientation.z * force), asImpulse: true)
        sceneView.scene.rootNode.addChildNode(projectile)
    }
    func addTargets() {
        let pyramidNode = SCNNode()
        pyramidNode.geometry = SCNPyramid(width: 4, height: 4.5, length: 4)
        pyramidNode.geometry?.firstMaterial?.diffuse.contents = UIColor.orange
        pyramidNode.position = SCNVector3(-3, 1, -15)
        pyramidNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
        sceneView.scene.rootNode.addChildNode(pyramidNode)
        let torusNode = SCNNode()
        torusNode.geometry = SCNTorus(ringRadius: 2, pipeRadius: 0.5)
        torusNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
        torusNode.position = SCNVector3(0, -2, -15)
        torusNode.physicsBody = SCNPhysicsBody(type: .static, shape: SCNPhysicsShape(node: torusNode, options: [SCNPhysicsShape.Option.type: SCNPhysicsShape.ShapeType.concavePolyhedron]))
        let ninetyDegrees = GLKMathDegreesToRadians(90)
        torusNode.eulerAngles = SCNVector3(ninetyDegrees, 0, 0)
        sceneView.scene.rootNode.addChildNode(torusNode)
        let boxNode = SCNNode()
        boxNode.geometry = SCNBox(width: 3.5, height: 3.5, length: 3.5, chamferRadius: 0)
        boxNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
        boxNode.position = SCNVector3(5, 1, -15)
        boxNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
        sceneView.scene.rootNode.addChildNode(boxNode)
    }
}
To test this code, follow these steps:
  1. Connect an iOS device to your Macintosh through its USB cable.

  2. Click the Run button or choose Product ➤ Run.

  3. The world origin should appear along with an orange pyramid, a blue torus, and a green box. Aim the center of the screen at a virtual object and tap the screen to shoot a purple projectile. Each time the purple projectile hits a virtual object, it should ricochet off it, as shown in Figure 12-2. Make sure you aim for all three virtual objects, and aim for the center of the torus to see the purple sphere shoot through its hole.

     
Figure 12-2

The purple sphere should bounce off the other three virtual objects when they collide

  4. Click the Stop button or choose Product ➤ Stop.

     

Detecting Collisions

Turning virtual objects into .dynamic and .static physics bodies lets them collide against each other. However, in many cases you may want to identify when a virtual object collides with another one. To do that, you must first create an enumeration structure that assigns each virtual object an arbitrary numeric value.

In our example, we have a purple sphere that acts as a projectile, an orange pyramid, a blue torus, and a green box. So we can define an enumeration structure underneath the IBOutlet like this:
    enum contactType: Int {
        case projectile = 1
        case target = 2
    }
Next we have to assign the enum value of each virtual object to its categoryBitMask property (the projectile’s in the tapResponse function, and the targets’ in the addTargets function) like this:
projectile.physicsBody?.categoryBitMask = contactType.projectile.rawValue
pyramidNode.physicsBody?.categoryBitMask = contactType.target.rawValue
torusNode.physicsBody?.categoryBitMask = contactType.target.rawValue
boxNode.physicsBody?.categoryBitMask = contactType.target.rawValue
Once we’ve assigned an arbitrary value to each virtual object’s categoryBitMask property, we need to adopt the SCNPhysicsContactDelegate protocol in our class like this:
class ViewController: UIViewController, ARSCNViewDelegate, SCNPhysicsContactDelegate  {
This delegate allows us to receive notifications when virtual objects collide. After adopting SCNPhysicsContactDelegate, we must also assign the class as the physics world’s contact delegate like this:
   sceneView.scene.physicsWorld.contactDelegate = self
We also need to define the contactTestBitMask for the projectile in the tapResponse function. This property defines which collisions we want to be notified about. Since we want to be notified when the projectile hits any of the three virtual objects (pyramid, torus, or box), we can use the following:
projectile.physicsBody?.contactTestBitMask = contactType.target.rawValue
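Under the hood, SceneKit compares these masks with a bitwise AND: a contact between two bodies is reported when either body’s contactTestBitMask overlaps the other body’s categoryBitMask. Here is a minimal sketch of that rule, using a hypothetical Body struct and reportsContact function rather than any SceneKit types:

```swift
// Model of the mask-overlap rule SceneKit applies when deciding whether
// to report a contact between two physics bodies (hypothetical types).
struct Body {
    var categoryBitMask: Int
    var contactTestBitMask: Int
}

func reportsContact(_ a: Body, _ b: Body) -> Bool {
    (a.contactTestBitMask & b.categoryBitMask) != 0 ||
    (b.contactTestBitMask & a.categoryBitMask) != 0
}

// Mirrors the chapter's setup: the projectile tests against targets,
// while the targets don't test for anything.
let projectile  = Body(categoryBitMask: 1, contactTestBitMask: 2)
let target      = Body(categoryBitMask: 2, contactTestBitMask: 0)
let otherTarget = Body(categoryBitMask: 2, contactTestBitMask: 0)

print(reportsContact(projectile, target))   // true: projectile tests targets
print(reportsContact(target, otherTarget))  // false: neither tests the other
```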
Since all of our targets (pyramid, torus, and box) are assigned the same contactType.target value, we also need a way to tell which particular virtual object the projectile hit. That means we need to give each virtual object a unique name in the addTargets function like this:
projectile.name = "Projectile"
pyramidNode.name = "Pyramid"
torusNode.name = "Torus"
boxNode.name = "Box"
Now we need to write the didBegin physicsWorld function to detect collisions like this:
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    }
This function runs every time two virtual objects collide. When they do, the didBegin physicsWorld function identifies the two objects as nodeA and nodeB. Unfortunately, we don’t know which node represents the projectile and which represents the target, so first we declare a variable to hold the target node:
var targetNode : SCNNode!
Now we need to determine if nodeA is the projectile or the target that was hit. To determine this information, we just need to look for the name of the node:
        if contact.nodeA.name == "Projectile" {
            targetNode = contact.nodeB
        } else {
            targetNode = contact.nodeA
        }

If nodeA is named Projectile, then we know that nodeB contains the target. If nodeA is not named Projectile, then we know nodeA contains the target.
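This nodeA/nodeB check can be expressed as a small pure function, shown here with plain strings instead of SCNNodes (targetName is a hypothetical helper). Note that it assumes one of the two nodes is always the projectile, which holds in this app because the static targets never collide with each other:

```swift
// Given the names of the two colliding nodes, return the name of the
// node that is not the projectile (hypothetical helper).
func targetName(nodeA: String?, nodeB: String?) -> String? {
    nodeA == "Projectile" ? nodeB : nodeA
}

print(targetName(nodeA: "Projectile", nodeB: "Torus") ?? "none") // Torus
print(targetName(nodeA: "Box", nodeB: "Projectile") ?? "none")   // Box
```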

Now, depending on the targetNode’s name, we can change the color of the virtual object the projectile hit using a switch statement. Because a node’s name property is an optional String, each case pattern needs a trailing ? to match it. If the projectile hits the pyramid, the pyramid changes to magenta; if it hits the torus, the torus changes to yellow; and if it hits the box, the box changes to red:
       switch targetNode.name {
        case "Pyramid"?:
            targetNode.geometry?.firstMaterial?.diffuse.contents = UIColor.magenta
        case "Torus"?:
            targetNode.geometry?.firstMaterial?.diffuse.contents = UIColor.yellow
        case "Box"?:
            targetNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        default:
            return
        }
The entire ViewController.swift file should look like this:
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate, SCNPhysicsContactDelegate  {
    @IBOutlet var sceneView: ARSCNView!
    let configuration = ARWorldTrackingConfiguration()
    enum contactType : Int {
        case projectile = 1
        case target = 2
    }
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]
        sceneView.delegate = self
        sceneView.scene.physicsWorld.contactDelegate = self
        sceneView.session.run(configuration)
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(tapResponse))
        sceneView.addGestureRecognizer(tapGesture)
        addTargets()
    }
    @objc func tapResponse(sender: UITapGestureRecognizer) {
        guard let scene = sender.view as? ARSCNView else { return }
        guard let pov = scene.pointOfView else { return }
        let transform = pov.transform
        let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
        let location = SCNVector3(transform.m41, transform.m42, transform.m43)
        let position = SCNVector3(orientation.x + location.x, orientation.y + location.y, orientation.z + location.z)
        let projectile = SCNNode()
        projectile.geometry = SCNSphere(radius: 0.35)
        projectile.geometry?.firstMaterial?.diffuse.contents = UIColor.purple
        projectile.position = position
        projectile.physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(node: projectile, options: nil))
        projectile.physicsBody?.isAffectedByGravity = false
        projectile.physicsBody?.categoryBitMask = contactType.projectile.rawValue
        projectile.physicsBody?.contactTestBitMask = contactType.target.rawValue
        projectile.name = "Projectile"
        let force: Float = 50
        projectile.physicsBody?.applyForce(SCNVector3(orientation.x * force, orientation.y * force, orientation.z * force), asImpulse: true)
        sceneView.scene.rootNode.addChildNode(projectile)
    }
    func addTargets() {
        let pyramidNode = SCNNode()
        pyramidNode.geometry = SCNPyramid(width: 4, height: 4.5, length: 4)
        pyramidNode.geometry?.firstMaterial?.diffuse.contents = UIColor.orange
        pyramidNode.position = SCNVector3(-3, 1, -15)
        pyramidNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
        pyramidNode.physicsBody?.categoryBitMask = contactType.target.rawValue
        pyramidNode.name = "Pyramid"
        sceneView.scene.rootNode.addChildNode(pyramidNode)
        let torusNode = SCNNode()
        torusNode.geometry = SCNTorus(ringRadius: 2, pipeRadius: 0.5)
        torusNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue
        torusNode.position = SCNVector3(0, -2, -15)
        torusNode.physicsBody = SCNPhysicsBody(type: .static, shape: SCNPhysicsShape(node: torusNode, options: [SCNPhysicsShape.Option.type: SCNPhysicsShape.ShapeType.concavePolyhedron]))
        torusNode.physicsBody?.categoryBitMask = contactType.target.rawValue
        torusNode.name = "Torus"
        let ninetyDegrees = GLKMathDegreesToRadians(90)
        torusNode.eulerAngles = SCNVector3(ninetyDegrees, 0, 0)
        sceneView.scene.rootNode.addChildNode(torusNode)
        let boxNode = SCNNode()
        boxNode.geometry = SCNBox(width: 3.5, height: 3.5, length: 3.5, chamferRadius: 0)
        boxNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
        boxNode.position = SCNVector3(5, 1, -15)
        boxNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
        boxNode.physicsBody?.categoryBitMask = contactType.target.rawValue
        boxNode.name = "Box"
        sceneView.scene.rootNode.addChildNode(boxNode)
    }
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        var targetNode : SCNNode!
        if contact.nodeA.name == "Projectile" {
            targetNode = contact.nodeB
        } else {
            targetNode = contact.nodeA
        }
        switch targetNode.name {
        case "Pyramid"?:
            targetNode.geometry?.firstMaterial?.diffuse.contents = UIColor.magenta
        case "Torus"?:
            targetNode.geometry?.firstMaterial?.diffuse.contents = UIColor.yellow
        case "Box"?:
            targetNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        default:
            return
        }
    }
}
To test this code, follow these steps:

1.  Connect an iOS device to your Macintosh through its USB cable.

2.  Click the Run button or choose Product ➤ Run.

3.  The world origin should appear along with an orange pyramid, a blue torus, and a green box. Aim the center of the screen at a virtual object and tap the screen to shoot a purple projectile. Each time the purple projectile hits a virtual object, that object should change to a different color so you can see that it was hit.

4.  Click the Stop button or choose Product ➤ Stop.

Summary

By default, virtual objects simply hover in mid-air within an augmented reality view. By applying a physics body to a virtual object, you can have it be affected by gravity so it falls down, or have it interact and collide with other virtual objects.

You can define a virtual object with different types of physics bodies that define how it reacts to collisions. To initiate a collision, you can apply a force to a virtual object along the x-, y-, and z-axes. To determine what a virtual object may have hit, you need to define an enumeration structure that identifies different virtual objects that might collide. Then you need to write a didBegin physicsWorld function to respond to that collision.

Adding physics, force, and collision detection gives your app a chance to make virtual objects respond like real-life items and notify you when they physically touch in a collision.