Vuforia is an augmented reality platform that you can use to supplement your HoloLens and mobile apps with robust AR experiences. Vuforia uses computer vision algorithms to recognize and track real objects that you select. Vuforia can track various objects, which it calls targets, including images, 3D models, and VuMarks, which can be understood as colorful two-dimensional barcodes. For UWP apps running on a Surface, Vuforia can also be used to detect ground planes.
When Vuforia recognizes a target, it can automatically attach digital content that you specify, essentially augmenting the target with virtually generated objects that move with it. Vuforia is great for developing mixed reality (MR) apps in which digital content should be connected with real objects—for example, to provide instructions or to associate a real object with a hologram. Unity version 2017.2 and beyond include Vuforia 7, so you can start using Vuforia capabilities in your HoloLens apps straight away.
Note
The Vuforia website features plenty of mobile apps. However, there are far fewer examples of HoloLens apps. This leaves you plenty of room to let your imagination run wild!
This chapter shows you how to use Vuforia to attach digital content to a target. When you complete this chapter, you will end up with an app that recognizes a specific image and displays the Ethan character on top of it. The Ethan character will also walk between two opposite corners of the recognized image to produce the illusion that the digitally created character is walking on the real surface. (See Figure 14-1.) To test this app, I displayed the X sign on my smartphone. Despite the relatively low image quality, Vuforia correctly recognized the object and attached the Ethan hologram to it. Ethan’s position changes when I move the smartphone, thanks to the robust tracking provided by Vuforia.
Your first step is to create the Unity project for the app. Follow these instructions:
The HoloLens project is configured with default MR settings. Your next step is to extend it to support Vuforia capabilities. To that end, you need to add at least two Vuforia prefabs:
To add the AR Camera prefab, follow these steps:
Click the ARCamera object in the Hierarchy to investigate its properties in the Inspector. As shown in Figure 14-4, the ARCamera prefab has the familiar Transform, Camera, and Audio Listener components. For now, deselect the Audio Listener checkbox to avoid conflicts with the listener attached to the MixedRealityCamera object. If both listeners are enabled, Unity displays the following error in the console:
“There are 2 audio listeners in the scene. Please ensure there is always exactly one audio listener in the scene.”
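If you prefer a run-time safeguard as well, a small script such as the following sketch (not part of the chapter’s project; the class name is mine) keeps only the first AudioListener enabled:

using UnityEngine;

// Hypothetical safeguard: disables every AudioListener except the first one
// found, preventing the "2 audio listeners" error at run time.
public class SingleAudioListener : MonoBehaviour
{
    private void Start()
    {
        var listeners = FindObjectsOfType<AudioListener>();
        for (int i = 1; i < listeners.Length; i++)
        {
            listeners[i].enabled = false; // keep listeners[0] only
        }
    }
}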
Notice that ARCamera has two scripts attached:
As shown in Figure 14-4, the properties of the Vuforia Behaviour script are inactive because Vuforia is currently disabled. To enable Vuforia, you must configure Player Settings. Follow these steps:
The AR Camera prefab is ready. Next, let’s learn how to use the Image prefab to add the image target.
Note
Because you used Mixed Reality Toolkit for Unity to configure the project, the platform is already set to UWP. Similarly, the scripting backend is set to .NET.
You add the Image prefab the same way you did the AR Camera prefab: In the Hierarchy, click Create, choose Vuforia, and select Image. (Refer to Figure 14-2.) A new object appears in the Hierarchy: ImageTarget. Click this object to investigate its properties using the Inspector. (See Figure 14-6.) For now, leave all options at their default values. By default, the image target is a picture of an astronaut.
The Vuforia engine will try to recognize and track the image target you just created. You can go even further and display a hologram whenever the target is recognized. In this section, you will attach the Ethan character to the astronaut image. Follow these steps:
To test the app, you need some way to place the image target in the real world. There are a couple of ways to do this:
Once you have the image target ready, follow these steps:
In practical applications, you will most likely supplement the real object with digital content that provides textual instructions or some description of the detected object. To add text, you can use the 3DTextPrefab from the MRTKu package. Let’s see how this works:
Vuforia automatically displays the selected hologram when the target is recognized. However, the digital content disappears when the image target moves out of the camera’s field of view (FOV). Such behavior is not desirable when you want to display larger holograms. You don’t want the hologram to disappear as the user gazes around the scene. Instead, you would like to let the user see the whole hologram. This is where extended tracking comes in. With extended tracking, Vuforia extrapolates the target position based on its past trajectory, as shown here: http://bit.ly/extended_tracking.
The best way to visualize this effect is to use an example. Follow these steps:
This section shows you how to extend the HoloAugmentedExperience app so that Ethan will walk on the target recognized by Vuforia. (See Figure 14-9.) In Chapter 11, you learned how to implement natural character movement using a navigation mesh. Here, you will use the EthanAnimatorController you developed in that chapter to configure Ethan to switch between idle and walking states.
In the app, Ethan will walk on the target whenever it is actively tracked. To achieve this, you need to know when Vuforia recognizes the target. You can obtain this information from the DefaultTrackableEventHandler script. This script is automatically attached to every image target, which you can quickly verify in the ImageTarget Inspector.
As shown in Listing 14-1, the DefaultTrackableEventHandler class, like any other C# script, derives from the UnityEngine.MonoBehaviour class. The script implements the Vuforia.ITrackableEventHandler interface, which declares the OnTrackableStateChanged method. This method is invoked when the tracking state changes. The object being tracked (here, the image target) is represented by an instance of the Vuforia.ImageTargetBehaviour class. This class derives from Vuforia.DataSetTrackableBehaviour, which in turn derives from Vuforia.TrackableBehaviour. DefaultTrackableEventHandler stores a reference to the image target in an mTrackableBehaviour field of type TrackableBehaviour. A reference of this type is obtained using the GetComponent generic method. (See the Start method in Listing 14-1.) After that, the current instance of DefaultTrackableEventHandler is registered as a handler that tracks state changes of the image target. (See the last statement of the Start method in Listing 14-1.) In practice, this means that whenever the tracking state changes, OnTrackableStateChanged will be invoked.
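For orientation, the inheritance chain described here can be summarized in the following simplified sketch (the actual Vuforia declarations contain many more members):

namespace Vuforia
{
    // Base class for all trackables; derives from MonoBehaviour.
    public class TrackableBehaviour : UnityEngine.MonoBehaviour { /* ... */ }

    // Adds data set handling on top of TrackableBehaviour.
    public class DataSetTrackableBehaviour : TrackableBehaviour { /* ... */ }

    // Represents an image target, such as the one used in this chapter.
    public class ImageTargetBehaviour : DataSetTrackableBehaviour { /* ... */ }
}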
LISTING 14-1 Default definition of DefaultTrackableEventHandler
public class DefaultTrackableEventHandler : MonoBehaviour, ITrackableEventHandler
{
protected TrackableBehaviour mTrackableBehaviour;
protected virtual void Start()
{
mTrackableBehaviour = GetComponent<TrackableBehaviour>();
if (mTrackableBehaviour)
mTrackableBehaviour.RegisterTrackableEventHandler(this);
}
// Definition of the OnTrackableStateChanged method
// Definitions of the OnTrackingFound and OnTrackingLost methods
}
Listing 14-2 shows the default definition of the OnTrackableStateChanged method.
LISTING 14-2 Handling tracking state changes
public void OnTrackableStateChanged(
TrackableBehaviour.Status previousStatus,
TrackableBehaviour.Status newStatus)
{
if (newStatus == TrackableBehaviour.Status.DETECTED ||
newStatus == TrackableBehaviour.Status.TRACKED ||
newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
{
Debug.Log("Trackable " + mTrackableBehaviour.TrackableName + " found");
OnTrackingFound();
}
else if (previousStatus == TrackableBehaviour.Status.TRACKED &&
newStatus == TrackableBehaviour.Status.NOT_FOUND)
{
Debug.Log("Trackable " + mTrackableBehaviour.TrackableName + " lost");
OnTrackingLost();
}
else
{
OnTrackingLost();
}
}
As shown in Listing 14-2, this method accepts two arguments, previousStatus and newStatus. Both arguments are of type Vuforia.TrackableBehaviour.Status, which is an enumeration that defines the following values:
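For reference, the Status enumeration in Vuforia 7 declares values along these lines (a sketch; the four values used in Listing 14-2 appear in the handler code, and the full set should be verified against your SDK version):

public enum Status
{
    NOT_FOUND,         // the target is not currently detected
    UNKNOWN,           // the tracking state cannot be determined
    UNDEFINED,         // the tracking state is not defined
    DETECTED,          // the target has just been detected
    TRACKED,           // the target is actively tracked by the camera
    EXTENDED_TRACKED   // the pose is extrapolated by extended tracking
}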
The OnTrackableStateChanged method uses an if clause to check whether newStatus equals DETECTED, TRACKED, or EXTENDED_TRACKED. If so, the method executes two statements. The first statement uses the Debug.Log method to display a string in the console with information about the detected target. (It uses the TrackableName property of the TrackableBehaviour class instance to obtain the name of the target.) The second statement invokes the OnTrackingFound method (discussed later). If newStatus does not equal DETECTED, TRACKED, or EXTENDED_TRACKED, the OnTrackableStateChanged method uses another if clause to determine whether previousStatus was TRACKED and newStatus is NOT_FOUND. When this condition is true, the OnTrackableStateChanged method outputs a debug string that indicates which target’s tracking has been lost. Again, the target’s name is obtained from the TrackableName property. If none of the aforementioned conditions is true, the OnTrackingLost method is called. (See the statement under the else clause in Listing 14-2.)
Listing 14-3 presents the definitions of the OnTrackingFound and OnTrackingLost methods. These methods work in a similar manner. First, they obtain lists of child renderers, colliders, and canvases with respect to the target. Then, the OnTrackingFound method sets the enabled property of each element to true, while the OnTrackingLost method sets the enabled property to false. As a result, OnTrackingFound shows all child holograms of the target, while OnTrackingLost hides them when Vuforia is no longer able to track the target.
LISTING 14-3 Showing and hiding holograms when the tracking state changes
protected virtual void OnTrackingFound()
{
var rendererComponents = GetComponentsInChildren<Renderer>(true);
var colliderComponents = GetComponentsInChildren<Collider>(true);
var canvasComponents = GetComponentsInChildren<Canvas>(true);
foreach (var component in rendererComponents)
component.enabled = true;
foreach (var component in colliderComponents)
component.enabled = true;
foreach (var component in canvasComponents)
component.enabled = true;
}
protected virtual void OnTrackingLost()
{
var rendererComponents = GetComponentsInChildren<Renderer>(true);
var colliderComponents = GetComponentsInChildren<Collider>(true);
var canvasComponents = GetComponentsInChildren<Canvas>(true);
foreach (var component in rendererComponents)
component.enabled = false;
foreach (var component in colliderComponents)
component.enabled = false;
foreach (var component in canvasComponents)
component.enabled = false;
}
These definitions are the defaults. You can freely adjust them to your needs. Later in this chapter, you will use OnTrackingFound and OnTrackingLost to invoke statements that send a message to another script, which will make Ethan walk. Before doing that, however, let’s prepare the scene.
To prepare the scene, start by modifying Ethan’s properties. Follow these steps:
Now that you’ve configured Ethan, you’re ready to create two new objects: the plane used to define the navigation mesh and an empty object with a LineRenderer component. The LineRenderer component will draw a line connecting Ethan’s current position with the position he is moving toward.
To create the navigation mesh, proceed as follows:
Tip
If you do not see the navigation mesh, enlarge the ReferencePlane object—for example, change the X and Z Scale settings in the Transform group in the ReferencePlane Inspector to 10. Then re-bake the navigation mesh.
The walkable area is already defined. However, there is no need to render this plane when the target is displayed. The plane is used only to define the walkable area, and to easily find points that will be used to set destinations for patrolling. To easily hide the ReferencePlane object, you can use the transparent_background material from the MRTKu. Follow these steps:
Next, you will create the object that will indicate the path Ethan will follow when walking, which I will call the TargetIndicator. You will design this object as an empty GameObject with a LineRenderer component. This component draws a straight line between two or more points. To create the TargetIndicator, follow these steps:
Now you need to set the color used by the LineRenderer component by creating a material. Follow these steps:
You are now ready to implement the logic that will cause the Ethan model to continuously walk between two opposite corners of the ReferencePlane object. (Vuforia will automatically position this plane.) This type of continuous movement between given positions is called patrolling. Basically, whenever the tracking state of the image target changes, either the OnTrackingFound or OnTrackingLost method of the DefaultTrackableEventHandler script will send a message to the Patrolling script, which is responsible for controlling Ethan’s state.
To implement this logic, follow these steps:
Replace the using statements in the header of the Patrolling.cs file with the following two statements:
using UnityEngine;
using UnityEngine.AI;
Decorate the Patrolling class with two instances of UnityEngine.RequireComponentAttribute:
[RequireComponent(typeof(NavMeshAgent))]
[RequireComponent(typeof(Animator))]
These attributes ensure that the NavMeshAgent and Animator components will always be available to your script. If you do not add these components through the Editor, Unity automatically adds them to the object using the script.
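Put together, the top of the Patrolling script might look like this minimal skeleton (a sketch; the members come from Listings 14-4 through 14-10):

using UnityEngine;
using UnityEngine.AI;

// Both attributes guarantee the required components are present on Ethan.
[RequireComponent(typeof(NavMeshAgent))]
[RequireComponent(typeof(Animator))]
public class Patrolling : MonoBehaviour
{
    // Fields and methods from Listings 14-4 through 14-10 go here.
}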
In the Patrolling class, implement the helper method from Listing 14-4. This method obtains references to all the components that will be used later, including the following:
LISTING 14-4 Obtaining references to required components
private NavMeshAgent navMeshAgent;
private Animator animator;
private MeshRenderer referencePlaneRenderer;
private LineRenderer targetIndicator;
private void ObtainReferencesToRequiredComponents()
{
navMeshAgent = GetComponent<NavMeshAgent>();
animator = GetComponent<Animator>();
referencePlaneRenderer = GameObject.Find("ReferencePlane").GetComponent<MeshRenderer>();
targetIndicator = GameObject.Find("TargetIndicator").GetComponent<LineRenderer>();
}
Implement the ConfigureAgent helper method. (See Listing 14-5.) This method sets public properties of the NavMeshAgent instance to configure Ethan’s linear and angular speeds, stopping distance, and braking mode. (The autoBraking property is set to false, so the Ethan model will not slow down as it approaches its destination.)
LISTING 14-5 Configuring the agent’s properties
private void ConfigureAgent()
{
navMeshAgent.speed = 0.05f;
navMeshAgent.angularSpeed = 300.0f;
navMeshAgent.stoppingDistance = 0.01f;
navMeshAgent.autoBraking = false;
}
Invoke both helper methods in the Start method of the Patrolling script. (See Listing 14-6.)
LISTING 14-6 Initializing the Patrolling script
private void Start()
{
ObtainReferencesToRequiredComponents();
ConfigureAgent();
}
Implement the UpdatePatrollingStatus method. (See Listing 14-7.) This method handles messages sent from DefaultTrackableEventHandler. UpdatePatrollingStatus accepts one Boolean argument: isPatrolling. The value of this argument indicates whether or not Ethan should walk. Therefore, UpdatePatrollingStatus uses the input argument to set the local member isPatrolling, update the IsWalking animation parameter (refer to Chapter 11), and configure Ethan’s target position using another method, UpdateAgentDestination. Additionally, UpdatePatrollingStatus sets the isStopped property of the NavMeshAgent instance to !isPatrolling. This ensures that Ethan will not be moved by Unity’s navigation engine when the image target is not recognized.
LISTING 14-7 Updating the patrolling status
private bool isPatrolling;
private void UpdatePatrollingStatus(bool isPatrolling)
{
this.isPatrolling = isPatrolling;
animator.SetBool("IsWalking", isPatrolling);
navMeshAgent.isStopped = !isPatrolling;
UpdateAgentDestination();
}
As shown in Listing 14-8, the UpdateAgentDestination method uses the bounds property of the MeshRenderer associated with the ReferencePlane object. This property is an instance of the UnityEngine.Bounds struct and represents the bounding box of the plane. Bounds exposes min and max properties. They are both of type Vector3 and represent the minimum and maximum points of the bounding box. Practically, these points are calculated as follows, where center denotes the middle point of the bounding box and extents denotes half of its size:

min = center - extents
max = center + extents

The min and max points are used to initialize a two-element array named destinations. This collection stores objects of type Vector3, which define the locations between which Ethan will walk. To determine which of those points should be set as Ethan’s next destination, implement a private currentDestinationIndex member, which stores an array index. As shown in Listing 14-8, the value of currentDestinationIndex is incremented right after setting the new destination. However, the array index cannot exceed the length of the array. Accordingly, the incremented value is divided by the array length and currentDestinationIndex is set to the remainder of that division (the modulo operator).
LISTING 14-8 Updating the agent destination
private int currentDestinationIndex = 0;
private void UpdateAgentDestination()
{
if (referencePlaneRenderer != null)
{
var destinations = new Vector3[]
{
referencePlaneRenderer.bounds.min,
referencePlaneRenderer.bounds.max
};
navMeshAgent.destination = destinations[currentDestinationIndex];
currentDestinationIndex = (currentDestinationIndex + 1) % destinations.Length;
}
}
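If you want to confirm the relationship between center, extents, min, and max, a throwaway script like this sketch (the class name is mine) prints True twice when attached to any object with a MeshRenderer:

using UnityEngine;

// Hypothetical check: Unity derives min and max from center and extents.
public class BoundsDemo : MonoBehaviour
{
    private void Start()
    {
        var bounds = GetComponent<MeshRenderer>().bounds;
        Debug.Log(bounds.min == bounds.center - bounds.extents); // True
        Debug.Log(bounds.max == bounds.center + bounds.extents); // True
    }
}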
Implement the IndicateDestination helper method from Listing 14-9. This method dynamically sets the positions of the LineRenderer component to draw the line. (Refer to Figure 14-11.) One end of this line is set to Ethan’s current position, while the other is set to Ethan’s destination. So, IndicateDestination dynamically shows Ethan’s current path.
LISTING 14-9 Indicating the destination position
private void IndicateDestination()
{
if (targetIndicator != null)
{
targetIndicator.SetPositions(new Vector3[]
{
navMeshAgent.transform.position,
navMeshAgent.destination
});
}
}
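The chapter configures the TargetIndicator in the Editor, but the same setup could also be done from code along these lines (a sketch; the width values are illustrative, not taken from the chapter):

using UnityEngine;

// Hypothetical code-based setup for the TargetIndicator's LineRenderer.
public class TargetIndicatorSetup : MonoBehaviour
{
    private void Awake()
    {
        var lineRenderer = GetComponent<LineRenderer>();
        lineRenderer.positionCount = 2;    // SetPositions will supply two points
        lineRenderer.startWidth = 0.01f;   // thin line, sized for a small target
        lineRenderer.endWidth = 0.01f;
        lineRenderer.useWorldSpace = true; // positions are world-space coordinates
    }
}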
Implement the Update method of the Patrolling script as shown in Listing 14-10. This implementation checks whether the isPatrolling member is true (see the first if clause). If so, the second if clause determines whether the agent’s path is not currently being calculated (see the pathPending property) and whether the remaining distance is smaller than or equal to the agent’s stopping distance. If these logical conditions evaluate to true, the agent’s destination is modified using the UpdateAgentDestination method. (Refer to Listing 14-8.) Whenever the agent is patrolling the image target area, the IndicateDestination method is invoked to update the LineRenderer to reflect the agent’s path.
LISTING 14-10 Ethan’s destination is updated at every frame, provided the isPatrolling member is true
private void Update()
{
if (isPatrolling)
{
if (!navMeshAgent.pathPending
&& navMeshAgent.remainingDistance <= navMeshAgent.stoppingDistance)
{
UpdateAgentDestination();
}
IndicateDestination();
}
}
Modify the OnTrackingFound and OnTrackingLost methods of DefaultTrackableEventHandler as shown in Listing 14-11. When tracking is found or lost, the handler broadcasts the UpdatePatrollingStatus message to all child components (including Ethan). A Boolean parameter supplements the message to indicate whether Ethan should walk (true in the OnTrackingFound method) or not walk (false in the OnTrackingLost method).
LISTING 14-11 Broadcasting a message to the child components
protected virtual void OnTrackingFound()
{
var rendererComponents = GetComponentsInChildren<Renderer>(true);
var colliderComponents = GetComponentsInChildren<Collider>(true);
var canvasComponents = GetComponentsInChildren<Canvas>(true);
// Enable rendering:
foreach (var component in rendererComponents)
component.enabled = true;
// Enable colliders:
foreach (var component in colliderComponents)
component.enabled = true;
// Enable canvases:
foreach (var component in canvasComponents)
component.enabled = true;
BroadcastMessage("UpdatePatrollingStatus", true);
}
protected virtual void OnTrackingLost()
{
var rendererComponents = GetComponentsInChildren<Renderer>(true);
var colliderComponents = GetComponentsInChildren<Collider>(true);
var canvasComponents = GetComponentsInChildren<Canvas>(true);
// Disable rendering:
foreach (var component in rendererComponents)
component.enabled = false;
// Disable colliders:
foreach (var component in colliderComponents)
component.enabled = false;
// Disable canvases:
foreach (var component in canvasComponents)
component.enabled = false;
BroadcastMessage("UpdatePatrollingStatus", false);
}
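As a design note, BroadcastMessage locates the target method by its string name, so a typo compiles but fails only at run time. If you prefer compile-time checking, a type-safe alternative (a sketch, assuming UpdatePatrollingStatus is made public in the Patrolling script) is to call the component directly:

// Hypothetical replacement for the BroadcastMessage call in OnTrackingFound:
var patrolling = GetComponentInChildren<Patrolling>();
if (patrolling != null)
{
    patrolling.UpdatePatrollingStatus(true);
}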
Let’s test the app to see how it works so far. Follow these steps:
The gray plane is a product of the MixedRealityCameraParent object, which has a Boundary child object. The Boundary object renders the floor and the boundaries of the headset user’s play area. Specifically, the ReferencePlane is interpreted as the floor, so the Boundary object renders a gray plane.
You just learned how to use Vuforia to detect one of its own built-in image targets. For practical applications, however, you will likely want the app to recognize custom images rather than one of Vuforia’s predefined ones. To achieve this, you will need to create a custom image database. To find out how, read on.
Before you can create a custom image database, you must register as a Vuforia developer and obtain a Development License Key. Follow these steps:
To create your custom image database, follow these steps:
Notice in the Create Database dialog box that you can create three types of databases:
Initially, the custom database will contain no targets. To add a target, follow these steps:
An Add Target dialog box opens, which enables you to upload an image target. (See Figure 14-20.) You can choose from four types:
Your next step is to download the database containing the image target and import it into Unity. Follow these steps:
Now that you’ve imported the database, you can select the custom image target for use in your app. Follow these steps:
Tip
If the size of the custom image target differs significantly from that of the astronaut image, you will need to adjust the scale of the ReferencePlane as well as the width of the LineRenderer accordingly.
You’ve created a custom image target. Now all you need to do is enable the integration of Vuforia with HoloLens. Here’s how:
In this chapter you learned how to create augmented reality experiences with Vuforia. Specifically, you created an app that recognizes image targets and then overlays virtual content on top of them. In this example, the virtual content was a character that moved intelligently between two corners of a bounding box around the image target. The content presented here applies only to HoloLens, because immersive headsets do not provide AR experiences.