In Chapter 14, we learned how to use image detection to detect specific items and then respond by displaying text related to that particular image. In this chapter, we’ll continue using image detection but instead of displaying text as virtual objects, we’re going to learn how to display videos or virtual models.
When using videos with image detection, users will be able to aim an iOS device’s camera at a still image and the augmented reality view will display a video over that image.
When using virtual models with image detection, users will be able to aim an iOS device’s camera at a still image and see a three-dimensional virtual model float in mid-air in front of the image.
To learn about using video and virtual models with image detection, we’re going to modify the image detection project created in Chapter 14. You can make a copy of that project or just modify the existing code.
First, let’s edit the code in the project to make room for adding code to display video and virtual models. Delete all the code inside the switch statement in the nodeFor renderer function so it looks like this:
switch imageAnchor.referenceImage.name {
case "CRS-15":
    print("1st image")
case "SaturnV":
    print("2nd image")
default:
    print("Nothing found")
}
Now we need to drag and drop an .scn file into our Xcode project. The simplest way to do that is to create an Augmented Reality App by choosing File ➤ New ➤ Project. This creates a sample augmented reality app that includes the ship.scn SceneKit file inside an art.scnassets folder.
With two Xcode windows open side by side, drag and drop ship.scn and its accompanying texture.png file from the sample augmented reality project Xcode window into the Navigator pane of your current project’s Xcode window.
Now you’ll need to drag and drop a .mov video file into the Navigator pane as well. This video can be anything you want, such as a video captured off your iPhone or a video file downloaded from the Internet.
When you have both the ship.scn and a .mov file in the Navigator pane of your current project, click on each file and make sure the check box is selected under the Target Membership category, as shown in Figure 15-1. If the Target Membership check box is not selected, Xcode won’t include the file in your current project even though it appears in the Navigator pane.
Figure 15-1
The Target Membership check box must be selected for your .scn and .mov files
Displaying Virtual Objects in Mid-Air
The goal is to have ARKit use image detection to recognize an image in the real world. As soon as it recognizes an image, it displays a virtual object in front of that image. All code will be written inside the switch statement in the nodeFor renderer function. To display a virtual object after recognizing an image, follow these steps:
1.
Click on the ship.scn file in the Navigator pane. Xcode displays both the image and a list of the nodes that make up that image, as shown in Figure 15-2. The rootnode name for the ship.scn file is “ship”.
Figure 15-2
Identifying the rootnode name of an image
2.
Write the following inside the first case statement inside the switch statement to create a variable that represents the ship.scn virtual object like this:
let item = SCNScene(named: "ship.scn")
3.
Now we need to identify the parent or rootnode of the virtual object by writing the following line:
let itemNode = item?.rootNode.childNode(withName: "ship", recursively: true)
4.
Now we need to position the virtual object node in the augmented reality view by retrieving the position from the ARImageAnchor:
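The positioning code itself is missing here; one common approach, sketched below, reads the position out of the image anchor’s transform and attaches the node to the scene (the exact lines in the original project may differ):

```swift
// Sketch (assumption): place the virtual object at the image anchor's position
if let itemNode = itemNode {
    let position = imageAnchor.transform.columns.3
    itemNode.position = SCNVector3(position.x, position.y, position.z)
    sceneView.scene.rootNode.addChildNode(itemNode)
}
```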
At this point, the code will recognize the first image and then display ship.scn in mid-air. However, to create a more interesting visual effect, we can also rotate the virtual object around.
5.
Add the following two lines above the code from Step 4. The following code defines rotation around the x-, y-, and z-axes and keeps this rotation animation going forever:
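The two lines themselves are missing here; a sketch using SCNAction, with placeholder rotation values:

```swift
// Sketch: the x, y, z rotation values are arbitrary placeholders
let rotation = SCNAction.rotateBy(x: 0.5, y: 1, z: 0.3, duration: 5)
itemNode?.runAction(SCNAction.repeatForever(rotation))
```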
The rotation values for the x-, y-, and z-axes are arbitrary, so feel free to experiment with different values so you can see how they change the rotation of the virtual object.
The entire ViewController.swift file should look like this:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    let configuration = ARWorldTrackingConfiguration()

    struct Images {
        var title: String
        var info: String
    }

    var imageArray: [Images] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        let item1 = Images(title: "CRS-15", info: "SpaceX Dragon resupply mission")
        let item2 = Images(title: "Saturn V rocket", info: "Apollo moon launch vehicle")
        imageArray.append(item1)
        imageArray.append(item2)
    }
}
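The listing above omits the renderer(_:nodeFor:) method built in Steps 1 through 5. Assembled from those steps, it might look like the following sketch (the positioning approach and rotation values are assumptions, not necessarily the book’s exact code):

```swift
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let imageAnchor = anchor as? ARImageAnchor else { return nil }
    let node = SCNNode()
    switch imageAnchor.referenceImage.name {
    case "CRS-15":
        // Steps 2-3: load ship.scn and find its root node
        let item = SCNScene(named: "ship.scn")
        let itemNode = item?.rootNode.childNode(withName: "ship", recursively: true)
        // Step 5: rotate forever (rotation values are arbitrary)
        let rotation = SCNAction.rotateBy(x: 0.5, y: 1, z: 0.3, duration: 5)
        itemNode?.runAction(SCNAction.repeatForever(rotation))
        // Step 4: position the object at the image anchor's location
        if let itemNode = itemNode {
            let position = imageAnchor.transform.columns.3
            itemNode.position = SCNVector3(position.x, position.y, position.z)
            sceneView.scene.rootNode.addChildNode(itemNode)
        }
    case "SaturnV":
        print("2nd image")
    default:
        print("Nothing found")
    }
    return node
}
```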
To test this app, follow these steps:
1.
Connect an iOS device to your Macintosh through its USB cable.
2.
Click the Run button or choose Product ➤ Run.
3.
Display the first picture stored in your project’s AR Resources folder on your computer screen.
4.
Aim the iOS device’s camera at the screen that displays the picture you want ARKit to recognize. When ARKit recognizes the image, it displays the ship.scn virtual object that rotates slowly in the air, as shown in Figure 15-3.
Figure 15-3
The ship.scn virtual object appears and rotates when the app recognizes the first image
5.
Click the Stop button or choose Product ➤ Stop.
Repeat these steps, except point the iOS device’s camera at the second image stored in the AR Resources folder. Notice that the virtual object does not appear for this second image.
Displaying Video on a Plane
After detecting an image, an app can present additional information through text or by displaying a virtual object. Yet another way an app can respond is by displaying a video where the video appears over the detected image so the image appears to come to life.
The first step is to convert your video file into the QuickTime .mov file format. To do that, open the QuickTime Player program and load the video file that you currently have stored in a different file format, such as .mp4. Then choose File ➤ Export and choose a video resolution such as 480p or 720p, as shown in Figure 15-4. Keep in mind that higher resolution means a larger video file size. After you choose a video resolution, QuickTime Player saves your video as a QuickTime movie.
Figure 15-4
The QuickTime Player program can export QuickTime .mov files
Before we even attempt to display video, we need to make sure we can view the augmented reality view through the iOS device’s camera:
guard let currentFrame = sceneView.session.currentFrame else { return nil }
To play video, we can use a SpriteKit class called SKVideoNode that allows us to load a QuickTime .mov file and play it immediately, like this:
let videoNode = SKVideoNode(fileNamed: "SaturnV.mov")
videoNode.play()
Next, we need to define this videoNode in an SKScene, where we set the video’s size, choose a scale mode (such as aspectFill, which maintains the aspect ratio of the original video regardless of the size of the plane it appears on), and add the videoNode to the scene. Finally, we need to position the videoNode in the middle of the scene, like this:
let videoScene = SKScene(size: CGSize(width: 640, height: 480))
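The remaining setup lines are missing here; a sketch of what the paragraph describes (the centering arithmetic and ordering are assumptions):

```swift
// Sketch: size and center the video inside the scene, then add it
videoNode.size = videoScene.size
videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
videoScene.scaleMode = .aspectFill
videoScene.addChild(videoNode)
```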
Now we need to create a plane that will appear over the detected image. This plane needs to be the exact same size as the detected image and then display the video. The video also needs to play on both sides of the plane since we’re going to be flipping the plane:
let videoPlane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
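The lines that put the video on the plane and make it play on both sides are missing; a sketch using standard SceneKit material properties:

```swift
// Sketch: use the SpriteKit video scene as the plane's material,
// and render both sides since the plane will be flipped
videoPlane.firstMaterial?.diffuse.contents = videoScene
videoPlane.firstMaterial?.isDoubleSided = true
```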
Now we need to create an SCNNode to hold the plane (which displays the video). After creating this SCNNode, we need to flip the plane so it lies directly over the detected image. Finally, we need to add the plane’s node to the node that ARKit returns for the anchor:
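The planeNode code itself is missing here; a sketch of what the paragraph describes (rotating the plane by -.pi/2 around the x-axis is the usual way to lay a SceneKit plane flat over a detected image):

```swift
// Sketch: wrap the plane in a node, flip it over the detected image, and attach it
let planeNode = SCNNode(geometry: videoPlane)
planeNode.eulerAngles.x = -.pi / 2
node.addChildNode(planeNode)
```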
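Assembled from the fragments above, the video version of the renderer(_:nodeFor:) method might look like this sketch (it also requires import SpriteKit for SKVideoNode and SKScene; details may differ from the book’s final code):

```swift
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // Make sure the camera feed is available before doing anything else
    guard sceneView.session.currentFrame != nil else { return nil }
    guard let imageAnchor = anchor as? ARImageAnchor else { return nil }
    let node = SCNNode()

    // Load and start the QuickTime movie
    let videoNode = SKVideoNode(fileNamed: "SaturnV.mov")
    videoNode.play()

    // Build a SpriteKit scene that displays the video, centered
    let videoScene = SKScene(size: CGSize(width: 640, height: 480))
    videoNode.size = videoScene.size
    videoNode.position = CGPoint(x: videoScene.size.width / 2, y: videoScene.size.height / 2)
    videoScene.addChild(videoNode)

    // Create a plane the same physical size as the detected image
    let videoPlane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                              height: imageAnchor.referenceImage.physicalSize.height)
    videoPlane.firstMaterial?.diffuse.contents = videoScene
    videoPlane.firstMaterial?.isDoubleSided = true

    // Flip the plane so it lies over the detected image
    let planeNode = SCNNode(geometry: videoPlane)
    planeNode.eulerAngles.x = -.pi / 2
    node.addChildNode(planeNode)
    return node
}
```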
To test this code, follow these steps:
1.
Connect an iOS device to your Macintosh through its USB cable.
2.
Click the Run button or choose Product ➤ Run.
3.
Display one of the images stored in the app’s AR Resources folder on your computer screen.
4.
Point the iOS device’s camera at one of the images stored in the app’s AR Resources folder.
5.
Your video starts playing over the detected image. Notice that the video is the same size as the detected image.
6.
Click the Stop button or choose Product ➤ Stop.
Summary
Image detection can identify recognized images, but your app still needs a way to respond to the user with more information. Displaying text can be suitable in many cases, but two other ways to provide additional information are displaying virtual objects and playing video.
Displaying a virtual object can provide a three-dimensional view of an item while also using animation to make an augmented reality view come to life. Displaying video can make a detected image change from a static image to a video that provides further information.
Virtual objects and video are just two more ways an augmented reality app can respond to detected images and provide additional information about a particular image.