Custom properties and behaviors in a Unity project are defined through various types of files that are generically called assets. Our project has four remaining questions and requirements that we must address by creating and configuring assets:

- How should objects be rendered? We need shaders and materials.
- How should the simulated balls and lines behave in the physics simulation? We need physics materials.
- How do we define a simulated ball and a simulated line so that copies of them can be created at runtime? We need prefabs.
- How do we tie everything together? We need scripts—specifically, subclasses of MonoBehaviour—in order to control objects in the scene at various stages in their life cycle.

The following subsections tackle these requirements one by one.
A shader is a set of functions that run on the GPU. Although such functions can be applied to general-purpose computing, typically they are used for graphics rendering—that is, to define the color of output pixels on the screen based on the inputs that describe lighting, geometry, surface texture, and perhaps other variables such as time. Unity comes with many shaders for common styles of 3D and 2D rendering. We can also write our own shader.
Let's create a folder, Rollingball/Shaders, and then create a shader in it (by clicking on Shader under Create in the Project pane's context menu). Rename the shader DrawSolidColor. Double-click on it to edit it and replace the contents with the following code:
Shader "Draw/Solid Color" { Properties { _Color ("Main Color", Color) = (1.0, 1.0, 1.0, 1.0) } SubShader { Pass { Color [_Color] } } }
This humble shader has one parameter, a color. The shader renders pixels in this color regardless of conditions such as lighting. For the purposes of Inspector GUI, the shader's name is Draw | Solid Color and its parameter's name is Main Color.
A material has a shader and a set of parameter values for the shader. The same shader can be used by multiple materials, which might use different parameter values. Let's create a material that draws solid red. We will use this material to highlight detected circles and lines.
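To make this relationship concrete, here is a minimal sketch (not part of our project's scripts; the class name SolidColorExample is hypothetical) showing that a material wraps a shader plus parameter values, and that two materials can share one shader. It assumes the Draw/Solid Color shader defined above is included in the build:

using UnityEngine;

public class SolidColorExample : MonoBehaviour {

    void Start() {
        // Look up the shader by the name declared in its source.
        Shader solidColorShader = Shader.Find("Draw/Solid Color");

        // Two materials can share one shader but hold different
        // parameter values.
        Material redMaterial = new Material(solidColorShader);
        redMaterial.SetColor("_Color", Color.red);

        Material greenMaterial = new Material(solidColorShader);
        greenMaterial.SetColor("_Color", Color.green);

        // Assign one of the materials to this object's renderer.
        GetComponent<Renderer>().material = redMaterial;
    }
}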
Create a new folder, Rollingball/Materials, and then create a material in it (by clicking on Material under Create in the context menu). Rename the material DrawSolidRed. Select it and, in Inspector, set its shader to Draw/Solid Color and its Main Color to the RGBA value for red (255, 0, 0, 255). Inspector should now look like what is shown in the following screenshot:
We are going to create two more materials using shaders that come with Unity. First, create a material named Cyan and configure it so that its shader is Diffuse and its Main Color is cyan (0, 255, 255, 255). Leave the Base (RGB) texture as None. We will apply this material to the simulated balls and lines. Its Inspector should look like this:
Now, create a material named Video and configure it so that its shader is Unlit/Texture. Leave the Base (RGB) texture as None (Texture). Later, via code, we will assign the video texture to this material. Drag the Video material (from the Project pane) to VideoRenderer (in the Hierarchy pane) in order to assign the material to the quad. Select VideoRenderer and confirm that its Inspector includes the material component that is shown in the following screenshot:
We will assign the remaining materials once we create prefabs and scripts.
Now that we have made materials for rendering, let's look at the analogous concept of physics materials.
Although Unity's rendering pipeline can run custom functions that we write in shaders, its physics pipeline runs fixed functions. Nonetheless, we can configure the parameters of those functions via physics materials.
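Our project configures these parameters via Inspector, but as a sketch of the same idea, an equivalent physics material can be built from code. The class name BouncyFromCode is hypothetical; note that Unity's class is spelled PhysicMaterial in the versions this project targets (some newer releases spell it PhysicsMaterial):

using UnityEngine;

public class BouncyFromCode : MonoBehaviour {

    void Start() {
        // Build a physics material like our Bouncy asset:
        // bounciness of 1, other properties at their defaults.
        PhysicMaterial bouncy = new PhysicMaterial("Bouncy");
        bouncy.bounciness = 1f;

        // Apply it to this object's collider.
        GetComponent<Collider>().material = bouncy;
    }
}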
Let's create a folder, Rollingball/Physics Materials, and in it create a physics material (by clicking on Physics Material under Create in the context menu). Rename the physics material as Bouncy. Select it and note that it has the following properties in Inspector:
Careful! Are those physics materials explosive?
A physics simulation is said to explode when values grow continually and overflow the system's floating-point numeric limits. For example, if a collision's combined bounciness is greater than 1 and the collision occurs repeatedly, then over time the forces tend toward infinity. Ka-boom! We broke the physics engine.
Even without weird physics materials, numeric problems arise in scenes of an extremely large or small scale. For example, consider a multiplayer game that uses input from the Global Positioning System (GPS) such that objects in a Unity scene are positioned according to players' real-world longitude and latitude. The physics simulation cannot handle a human-sized object in this scene because the object and the forces acting on it are so small that they vanish inside the margin of floating-point error. This is a case where the simulation implodes (rather than explodes).
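To see why a combined bounciness greater than 1 diverges, note that each rebound multiplies the impact speed by the combined bounciness, so after n bounces the speed is roughly the original speed times bounciness to the power n. The following hypothetical snippet (not part of our project) just prints that growth:

using UnityEngine;

public class BouncinessDemo : MonoBehaviour {

    void Start() {
        float combinedBounciness = 1.1f; // greater than 1
        float speed = 1f;                // initial impact speed

        // Each rebound scales the impact speed, so the speed
        // grows without bound when the factor exceeds 1.
        for (int bounce = 1; bounce <= 100; bounce++) {
            speed *= combinedBounciness;
        }

        // 1.1 to the power 100 is roughly 13,781, so the ball
        // is now moving thousands of times faster than before.
        Debug.Log("Speed after 100 bounces: " + speed);
    }
}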
Let's set Bounciness to 1 (very bouncy!) and leave the other values at their defaults. Later, you can adjust everything to your taste if you wish. Inspector should look like this:
Our simulated lines will use default physics parameters, so they do not need a physics material.
Now that we have our rendering materials and physics materials, let's create prefabs for an entire simulated ball and an entire simulated line.
A prefab is an object that is not itself part of a scene but is designed to be copied into scenes during editing or at runtime. It can be copied many times to make many objects in the scene. At runtime, the copies have no special connection to the prefab or each other and all copies can behave independently. Although the role of a prefab is sometimes likened to the role of a class, a prefab is not a type.
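For a taste of how prefab copies are made at runtime (we do exactly this later in DetectAndSimulate), here is a minimal hypothetical sketch; the script name PrefabSpawnerExample and the ballPrefab field are illustrative, and the prefab would be assigned in Inspector:

using UnityEngine;

public class PrefabSpawnerExample : MonoBehaviour {

    // Assign a prefab to this field in Inspector.
    [SerializeField] GameObject ballPrefab;

    void Start() {
        // Each copy is an independent object in the scene.
        for (int i = 0; i < 3; i++) {
            GameObject ball = (GameObject)Instantiate(ballPrefab);
            ball.transform.position = new Vector3(i * 2f, 5f, 0f);
        }
    }
}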
Even though prefabs are not part of a scene, they are created and typically edited via a scene. Let's create a sphere in the scene by navigating to Game Object | Create Other | Sphere from the menu bar. An object named Sphere should appear in Hierarchy. Rename it as SimulatedCircle. Drag each of the following assets from the Project pane onto SimulatedCircle in Hierarchy:
- The Cyan material (from Rollingball/Materials)
- The Bouncy physics material (from Rollingball/Physics Materials)

Now select SimulatedCircle and, in the Rigidbody section of Inspector, expand the Constraints field and check Z under Freeze Position. The effect of this change is to constrain the sphere's motion to two dimensions. Confirm that Inspector looks like this:
Create a folder, Rollingball/Prefabs, and drag SimulatedCircle from Hierarchy into the folder in the Project pane. A prefab, also named SimulatedCircle, should appear in the folder. Meanwhile, the name of the SimulatedCircle object in Hierarchy should turn blue to indicate that the object has a prefab connection. Changes to the object in the scene can be applied back to the prefab by clicking on the Apply button in the scene object's Inspector. Conversely, changes to the prefab (at edit time, not at runtime) are automatically applied to instances in scenes, except for properties in which an instance has unapplied changes.
Now, let's follow similar steps to create a prefab of a simulated line. Create a cube in the scene by navigating to Game Object | Create Other | Cube from the menu bar. An object named Cube should appear in Hierarchy. Rename it as SimulatedLine. Drag Cyan from the Project pane onto SimulatedLine in Hierarchy. Select SimulatedLine and, in the Rigidbody section of its Inspector, tick the Is Kinematic checkbox, which means that the object is not moved by the physics simulation (even though it is part of the simulation for the purpose of other objects colliding with it). Recall that we want the lines to be stationary. They are just obstacles for the falling balls. Inspector should now look like this:
Let's clean up our scene by deleting the instances of the prefabs from Hierarchy (but we want to keep the prefabs themselves in the Project pane). Now, let's turn our attention to the writing of scripts, which among other things, are able to copy prefabs at runtime.
As mentioned earlier, a Unity script is a subclass of MonoBehaviour. A MonoBehaviour object can obtain references to objects in Hierarchy and components that we attach to these objects in Inspector. A MonoBehaviour object also has its own Inspector where we can assign additional references, including references to Project assets such as prefabs. At runtime, Unity sends messages to all MonoBehaviour objects when certain events occur. A subclass of MonoBehaviour can implement callbacks for any of these messages. MonoBehaviour supports more than 60 standard message callbacks. Here are some examples:
- Awake: This is called during initialization.
- Start: This is called after Awake but before the first call to Update, which is explained in the following bullet point.
- Update: This callback is called with every frame.
- OnGUI: This is called when the GUI overlay is ready for rendering instructions and the GUI events are ready to be handled.
- OnPostRender: This is called after the scene is rendered. This is an appropriate callback in which to implement post-processing effects.
- OnDestroy: This is called when the script is about to be deleted.

For more information on the standard message callbacks, and the arguments that some callbacks' implementations may optionally take, refer to the official documentation at http://docs.unity3d.com/ScriptReference/MonoBehaviour.html. Also note that we can send custom messages to all MonoBehaviour objects using the SendMessage method.
Implementations of these and Unity's other callbacks can be private, protected, or public. Unity calls them regardless of protection level.
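As a quick illustration of this life cycle (a hypothetical script named LifeCycleLogger, not part of our project), the following shows several of these callbacks implemented with different protection levels:

using UnityEngine;

public sealed class LifeCycleLogger : MonoBehaviour {

    // Called once, when the script instance is loaded.
    private void Awake() {
        Debug.Log("Awake");
    }

    // Called once, after Awake and before the first Update.
    void Start() {
        Debug.Log("Start");
    }

    // Called every frame.
    void Update() {
        // Frame-by-frame logic goes here.
    }

    // Called when the GUI overlay is ready.
    void OnGUI() {
        GUILayout.Label("Hello from OnGUI");
    }

    // Called when the script instance is about to be destroyed.
    void OnDestroy() {
        Debug.Log("OnDestroy");
    }
}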
To summarize, then, scripts are the glue—the game logic—that connects runtime events to various objects that we see in Project, Hierarchy, and Inspector.
Let's create a folder, Rollingball/Scripts, and in it create a script (by clicking on C# Script under Create in the context menu). Rename the script QuitOnAndroidBack and double-click on it to edit it. Replace its contents with the following code:
using UnityEngine;

namespace com.nummist.rollingball {

    public sealed class QuitOnAndroidBack : MonoBehaviour {

        void Update() {
            if (Input.GetKeyUp(KeyCode.Escape)) {
                Application.Quit();
            }
        }
    }
}
We are using a namespace, com.nummist.rollingball, to keep our code organized and to avoid potential conflicts between our type names and type names in other parties' code. Namespaces in C# are like packages in Java. Our class is called QuitOnAndroidBack. It extends Unity's MonoBehaviour class. We use the sealed modifier (similar to Java's final modifier) to indicate that we do not intend to create subclasses of QuitOnAndroidBack.
Thanks to Unity's callback system, the script's Update method gets called in every frame. It checks whether the user has pressed a key (or button) that is mapped to the Escape keycode. On Android, the standard back button is mapped to Escape. When the key (or button) is pressed, the application quits.
Save the script and drag it from the Project pane to the QuitOnAndroidBack object in Hierarchy. Click on the QuitOnAndroidBack object and confirm that its Inspector looks like this:
That was an easy script, right? The next one is a bit trickier—but more fun—because it handles everything except quitting.
In the Rollingball/Scripts folder, create another script (by clicking on C# Script under Create in the context menu). Rename the script DetectAndSimulate and double-click on it to edit it. Delete its default contents and begin the code with the following import statements:
using UnityEngine;

using System.Collections;
using System.Collections.Generic;
using System.IO;

using OpenCVForUnity;
Next, let's declare our namespace and class with the following code:
namespace com.nummist.rollingball {

    [RequireComponent (typeof(Camera))]
    public sealed class DetectAndSimulate : MonoBehaviour {
Note that the class has an attribute, [RequireComponent (typeof(Camera))], which means that the script can only be attached to a game object that has a camera (a game-world camera, not a video camera). We specify this requirement because we are going to highlight the detected shapes via an implementation of the standard OnPostRender callback, and this callback only gets called for scripts attached to a game object with a camera.
DetectAndSimulate needs to store representations of circles and lines in both 2D screen space and 3D world space. These representations do not need to be visible to any other class in our application, so it is appropriate to define their types as private inner structs. Our Circle type stores 2D coordinates that represent the circle's center in screen space, a float representing its diameter in screen space, and 3D coordinates representing the circle's center in world space. A constructor accepts all these values as arguments. Here is the Circle type's implementation:
        struct Circle {

            public Vector2 screenPosition;
            public float screenDiameter;
            public Vector3 worldPosition;

            public Circle(Vector2 screenPosition,
                          float screenDiameter,
                          Vector3 worldPosition) {
                this.screenPosition = screenPosition;
                this.screenDiameter = screenDiameter;
                this.worldPosition = worldPosition;
            }
        }
We will define another inner struct, Line, to store two sets of 2D coordinates representing endpoints in screen space and two sets of 3D coordinates representing the same endpoints in world space. A constructor accepts all these values as arguments. Here is the implementation of Line:
        struct Line {

            public Vector2 screenPoint0;
            public Vector2 screenPoint1;
            public Vector3 worldPoint0;
            public Vector3 worldPoint1;

            public Line(Vector2 screenPoint0,
                        Vector2 screenPoint1,
                        Vector3 worldPoint0,
                        Vector3 worldPoint1) {
                this.screenPoint0 = screenPoint0;
                this.screenPoint1 = screenPoint1;
                this.worldPoint0 = worldPoint0;
                this.worldPoint1 = worldPoint1;
            }
        }
Next, we will define member variables that are editable in Inspector. Such a variable is marked with the [SerializeField] attribute, which means that Unity serializes the variable despite it being non-public. (Alternatively, public variables are also editable in Inspector.) The following four variables describe our preferences for camera input, including the direction the camera faces, its resolution, and its frame rate:
        [SerializeField] bool useFrontFacingCamera = false;
        [SerializeField] int preferredCaptureWidth = 640;
        [SerializeField] int preferredCaptureHeight = 480;
        [SerializeField] int preferredFPS = 15;
At runtime, the camera devices and modes available to us might differ from these preferences.
We will also make several more variables editable in Inspector—namely, a reference to the video background's renderer, a reference to the material for highlighting detected shapes, a factor for adjusting the scale of the simulation's gravity, references to the simulated shapes' prefabs, and a font size for the button:
        [SerializeField] Renderer videoRenderer;
        [SerializeField] Material drawPreviewMaterial;
        [SerializeField] float gravityScale = 8f;
        [SerializeField] GameObject simulatedCirclePrefab;
        [SerializeField] GameObject simulatedLinePrefab;
        [SerializeField] int buttonFontSize = 24;
We also have a number of member variables that do not need to be editable in Inspector. Among them are a reference to the game world's camera, a reference to the real-world camera's video texture, matrices to store images and intermediate processing results, and measurements relating to camera images, the screen, simulated objects, and the button:
        Camera _camera;

        WebCamTexture webCamTexture;
        Color32[] colors;

        Mat rgbaMat;
        Mat grayMat;
        Mat cannyMat;

        float screenWidth;
        float screenHeight;
        float screenPixelsPerImagePixel;
        float screenPixelsYOffset;

        float raycastDistance;
        float lineThickness;
        UnityEngine.Rect buttonRect;
We will store a blob detector, a matrix of blob representations in OpenCV's format, and a list of circle representations in our own Circle format:
        FeatureDetector blobDetector;
        MatOfKeyPoint blobs = new MatOfKeyPoint();
        List<Circle> circles = new List<Circle>();
Similarly, we will store a matrix of Hough line representations in OpenCV's format, and a list of line representations in our own Line format:
        Mat houghLines = new Mat();
        List<Line> lines = new List<Line>();
We will hold a reference to the gyroscope input device, and we will store the magnitude of gravity to be used in our physics simulation:
        Gyroscope gyro;
        float gravityMagnitude;
We (and the Unity API) are using the terms "gyroscope" and "gyro" loosely. We are referring to a fusion of motion sensors that might or might not include a real gyroscope. A gyroscope can be simulated, albeit poorly, using other real sensors such as an accelerometer and/or gravity sensor.
Unity provides a property, SystemInfo.supportsGyroscope, to indicate whether the device has a real gyroscope. However, this information does not concern us. We just use Unity's Gyroscope.gravity property, which can be derived from a real gravity sensor or can be simulated using other real sensors such as an accelerometer and/or gyroscope. Unity Android apps are configured by default to require an accelerometer, so we can safely assume that at least a simulated gravity sensor is available.
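To see the distinction in code, here is a hypothetical check (our project does not need it; the script name GravitySensorCheck is illustrative) that logs whether a real gyroscope is present and reads the fused gravity vector either way:

using UnityEngine;

public class GravitySensorCheck : MonoBehaviour {

    void Start() {
        // True only if the device has a real gyroscope.
        Debug.Log("Real gyroscope: " +
                  SystemInfo.supportsGyroscope);

        // Gyroscope.gravity works even when the gravity vector
        // is derived from other sensors, but the Gyroscope
        // object must be enabled first.
        Input.gyro.enabled = true;
    }

    void Update() {
        Debug.Log("Gravity direction: " + Input.gyro.gravity);
    }
}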
We will keep track of a list of simulated objects and provide a property, simulating, that is true when the list is non-empty:
        List<GameObject> simulatedObjects =
                new List<GameObject>();
        bool simulating {
            get {
                return simulatedObjects.Count > 0;
            }
        }
Now, let's turn our attention to methods. We will implement the standard Start callback. The implementation begins by getting a reference to the attached camera, getting a reference to the gyro, and computing the magnitude of the game world's gravity, as seen in the following code:
        void Start() {

            // Cache the reference to the game world's
            // camera.
            _camera = camera;

            gyro = Input.gyro;

            gravityMagnitude = Physics.gravity.magnitude *
                               gravityScale;
MonoBehaviour provides getters for many components that might be attached to the same game object as the script. (Such components would appear alongside the script in Inspector.) For example, the camera getter returns a Camera object (or null if none is present). These getters are expensive because they use introspection. Thus, if you need to refer to a component repeatedly, it is more efficient to store the reference in a member variable using a statement such as _camera = camera;, as shown in the preceding code.
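The same caching pattern applies to any component lookup. In Unity versions where the camera shortcut property has been removed, GetComponent<Camera>() plays the same role, and caching the result once is still the efficient approach. A hypothetical sketch (the class name CachedCameraExample is illustrative):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class CachedCameraExample : MonoBehaviour {

    Camera _camera;

    void Start() {
        // Look the component up once and keep the reference.
        _camera = GetComponent<Camera>();
    }

    void Update() {
        // Use the cached reference instead of repeating the
        // lookup every frame.
        Debug.Log("Field of view: " + _camera.fieldOfView);
    }
}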
The implementation of Start proceeds by finding a camera that faces the required direction (either front or rear, depending on the value of the useFrontFacingCamera field, above). If no suitable camera is found, the method returns early, as seen in the following code:
            // Try to find a (physical) camera that faces
            // the required direction.
            WebCamDevice[] devices = WebCamTexture.devices;
            int numDevices = devices.Length;
            for (int i = 0; i < numDevices; i++) {
                WebCamDevice device = devices[i];
                if (device.isFrontFacing ==
                        useFrontFacingCamera) {
                    string name = device.name;
                    Debug.Log("Selecting camera with " +
                              "index " + i + " and name " +
                              name);
                    webCamTexture = new WebCamTexture(
                            name, preferredCaptureWidth,
                            preferredCaptureHeight,
                            preferredFPS);
                    break;
                }
            }

            if (webCamTexture == null) {
                // No camera faces the required direction.
                // Give up.
                Debug.LogError("No suitable camera found");
                Destroy(this);
                return;
            }
The Start callback concludes by activating the camera and gyroscope (including the gravity sensor) and launching a helper coroutine called Init:
            // Ask the camera to start capturing.
            webCamTexture.Play();

            if (gyro != null) {
                gyro.enabled = true;
            }

            // Wait for the camera to start capturing.
            // Then, initialize everything else.
            StartCoroutine(Init());
        }
Our Init coroutine begins by waiting for the camera to capture the first frame. Then, we determine the frame's dimensions and we create OpenCV matrices to match these dimensions. Here is the first part of the method's implementation:
        IEnumerator Init() {

            // Wait for the camera to start capturing.
            while (!webCamTexture.didUpdateThisFrame) {
                yield return null;
            }

            int captureWidth = webCamTexture.width;
            int captureHeight = webCamTexture.height;
            float captureDiagonal = Mathf.Sqrt(
                    captureWidth * captureWidth +
                    captureHeight * captureHeight);
            Debug.Log("Started capturing frames at " +
                      captureWidth + "x" + captureHeight);

            colors = new Color32[
                    captureWidth * captureHeight];

            rgbaMat = new Mat(captureHeight, captureWidth,
                              CvType.CV_8UC4);
            grayMat = new Mat(captureHeight, captureWidth,
                              CvType.CV_8UC1);
            cannyMat = new Mat(captureHeight, captureWidth,
                               CvType.CV_8UC1);
The coroutine proceeds by configuring the game world's orthographic camera and video quad to match the capture resolution and to render the video texture:
            transform.localPosition =
                    new Vector3(0f, 0f, -captureWidth);
            _camera.nearClipPlane = 1;
            _camera.farClipPlane = captureWidth + 1;
            _camera.orthographicSize =
                    0.5f * captureDiagonal;
            raycastDistance = 0.5f * captureWidth;

            Transform videoRendererTransform =
                    videoRenderer.transform;
            videoRendererTransform.localPosition =
                    new Vector3(captureWidth / 2,
                                -captureHeight / 2, 0f);
            videoRendererTransform.localScale =
                    new Vector3(captureWidth,
                                captureHeight, 1f);

            videoRenderer.material.mainTexture =
                    webCamTexture;
The device's screen and captured camera images likely have different resolutions. Moreover, remember that our application is configured for portrait orientation (in Player Settings). This orientation affects screen coordinates but not the coordinates in camera images, which will remain in landscape orientation. Thus, we need to calculate conversion factors between image coordinates and screen coordinates, as seen in the following code:
            // Calculate the conversion factors between
            // image and screen coordinates.
            // Note that the image is landscape but the
            // screen is portrait.
            screenWidth = (float)Screen.width;
            screenHeight = (float)Screen.height;
            screenPixelsPerImagePixel =
                    screenWidth / captureHeight;
            screenPixelsYOffset =
                    0.5f * (screenHeight -
                            (screenWidth * captureWidth /
                             captureHeight));
Our conversions will be based on fitting the video background to the width of the portrait screen, while either letterboxing or cropping the video at the top and bottom if necessary.
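For example, with assumed resolutions (not values measured from any particular device), a 1080x1920 portrait screen and a 640x480 landscape capture give the following factors, matching the formulas in the preceding code; the class and method names are hypothetical:

using UnityEngine;

public static class ConversionFactorExample {

    public static void Compute() {
        // Assumed values, for illustration only.
        float screenWidth = 1080f;           // portrait screen
        float screenHeight = 1920f;
        float captureWidth = 640f;           // landscape image
        float captureHeight = 480f;

        // The image's 480-pixel height is fitted to the
        // screen's 1080-pixel width: 1080 / 480 = 2.25.
        float screenPixelsPerImagePixel =
                screenWidth / captureHeight;

        // The fitted video is 640 * 2.25 = 1440 screen pixels
        // tall, so it is letterboxed by half of (1920 - 1440),
        // that is, 240 pixels, at the top and bottom.
        float screenPixelsYOffset =
                0.5f * (screenHeight -
                        (screenWidth * captureWidth /
                         captureHeight));

        Debug.Log(screenPixelsPerImagePixel + ", " +
                  screenPixelsYOffset);
    }
}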
The thickness of simulated lines and the dimensions of the button are based on screen resolution, as seen in the following code, which concludes the Init coroutine:
            lineThickness = 0.01f * screenWidth;

            buttonRect = new UnityEngine.Rect(
                    0.4f * screenWidth,
                    0.75f * screenHeight,
                    0.2f * screenWidth,
                    0.1f * screenHeight);

            InitBlobDetector();
        }
Our InitBlobDetector helper method serves to create a blob detector and set its blob detection parameters. The method begins by calling a factory method for a detector and validating that the returned detector is non-null, as seen in the following code:
        void InitBlobDetector() {

            // Try to create the blob detector.
            blobDetector = FeatureDetector.create(
                    FeatureDetector.SIMPLEBLOB);
            if (blobDetector == null) {
                Debug.LogError(
                        "Unable to create blob detector");
                Destroy(this);
                return;
            }
Unlike the Python API that we used for blob detection in Chapter 5, Equipping Your Car with a Rearview Camera and Hazard Detection, the OpenCV for Unity API requires the detector's parameters to be read from a YAML configuration file. (The official OpenCV Java API has the same limitation.) However, I prefer to keep the parameter values in the source code in case we ever decide that we want to compute them based on runtime data such as capture resolution. To work around the API's limitation, we can construct a string of parameters in YAML format and then save it to a temporary file. The YAML format is very simple. The first line declares the YAML version and each subsequent line consists of a variable name, a colon, and the variable's value. Let's continue with the method's implementation by declaring the following string:
            // The blob detector's parameters as a verbatim
            // string literal.
            // Do not indent the string's contents.
            string blobDetectorParams = @"%YAML:1.0
thresholdStep: 10.0
minThreshold: 50.0
maxThreshold: 220.0
minRepeatability: 2
minDistBetweenBlobs: 10.0
filterByColor: False
blobColor: 0
filterByArea: True
minArea: 50.0
maxArea: 5000.0
filterByCircularity: True
minCircularity: 0.8
maxCircularity: 3.4028234663852886e+38
filterByInertia: False
minInertiaRatio: 0.1
maxInertiaRatio: 3.4028234663852886e+38
filterByConvexity: False
minConvexity: 0.95
maxConvexity: 3.4028234663852886e+38
";
Now, let's try to save the string of parameters to a temporary file. If the file cannot be saved, the method returns early. Otherwise, the detector reads the parameters back from the file. Finally, the file is deleted. Here is the code, which concludes the InitBlobDetector method:
            // Try to write the blob detector's parameters
            // to a temporary file.
            string path = Application.persistentDataPath +
                          "/blobDetectorParams.yaml";
            File.WriteAllText(path, blobDetectorParams);
            if (!File.Exists(path)) {
                Debug.LogError(
                        "Unable to write blob " +
                        "detector's parameters to " +
                        path);
                Destroy(this);
                return;
            }

            // Read the blob detector's parameters from the
            // temporary file.
            blobDetector.read(path);

            // Delete the temporary file.
            File.Delete(path);
        }
We will implement the standard Update callback by processing gravity sensor input and processing camera input, provided that certain conditions are met. At the beginning of the method, if OpenCV objects are not yet initialized, the method returns early. Otherwise, the game world's direction of gravity is updated based on the real-world direction of gravity, as detected by the device's gravity sensor. Here is the first part of the method's implementation:
        void Update() {

            if (rgbaMat == null) {
                // Initialization is not yet complete.
                return;
            }

            if (gyro != null) {
                // Align the game-world gravity to
                // real-world gravity.
                Vector3 gravity = gyro.gravity;
                gravity.z = 0f;
                gravity = gravityMagnitude *
                          gravity.normalized;
                Physics.gravity = gravity;
            }
Next, if there is no new camera frame ready or if the simulation is currently running, the method returns early. Otherwise, we will convert the frame to OpenCV's format, convert it to gray, find edges, and call the two helper methods, UpdateCircles and UpdateLines, to perform shape detection. Here is the relevant code, which concludes the Update method:
            if (!webCamTexture.didUpdateThisFrame) {
                // No new frame is ready.
                return;
            }

            if (simulating) {
                // No new detection results are needed.
                return;
            }

            // Convert the RGBA image to OpenCV's format using
            // a utility function from OpenCV for Unity.
            Utils.WebCamTextureToMat(webCamTexture, rgbaMat,
                                     colors);

            // Convert the OpenCV image to gray, find its
            // edges, and equalize it.
            Imgproc.cvtColor(rgbaMat, grayMat,
                             Imgproc.COLOR_RGBA2GRAY);
            Imgproc.Canny(grayMat, cannyMat, 50.0, 200.0);
            Imgproc.equalizeHist(grayMat, grayMat);

            UpdateCircles();
            UpdateLines();
        }
Our UpdateCircles helper method begins by performing blob detection. We will clear the list of any previously detected circles. Then, we will iterate over the blob detection results. Here is the opening of the method's implementation:
        void UpdateCircles() {

            // Detect blobs.
            blobDetector.detect(grayMat, blobs);

            //
            // Calculate the circles' screen coordinates
            // and world coordinates.
            //

            // Clear the previous coordinates.
            circles.Clear();

            // Iterate over the blobs.
            KeyPoint[] blobsArray = blobs.toArray();
            int numBlobs = blobsArray.Length;
            for (int i = 0; i < numBlobs; i++) {
We will use a helper method, ConvertToScreenPosition, to convert the circle's center point from image space to screen space. We will also convert its diameter:
                // Convert blobs' image coordinates to
                // screen coordinates.
                KeyPoint blob = blobsArray[i];
                Point imagePoint = blob.pt;
                Vector2 screenPosition =
                        ConvertToScreenPosition(
                                (float)imagePoint.x,
                                (float)imagePoint.y);
                float screenDiameter =
                        blob.size *
                        screenPixelsPerImagePixel;
We will use another helper method, ConvertToWorldPosition, to convert the circle's center point from screen space to world space. Having done our conversions, we will instantiate a Circle and add it to the list. Here is the code that completes the UpdateCircles method:
                // Convert screen coordinates to world
                // coordinates based on raycasting.
                Vector3 worldPosition =
                        ConvertToWorldPosition(
                                screenPosition);

                Circle circle = new Circle(
                        screenPosition, screenDiameter,
                        worldPosition);
                circles.Add(circle);
            }
        }
Our UpdateLines helper method begins by performing probabilistic Hough line detection with step sizes of one pixel and one degree. For each line, we require at least 50 detected intersections with edge pixels, a length of at least 50 pixels, and no gaps of more than 10 pixels. We will clear the list of any previously detected lines. Then, we will iterate over the results of the Hough line detection. Here is the first part of the method's implementation:
        void UpdateLines() {

            // Detect lines.
            Imgproc.HoughLinesP(cannyMat, houghLines, 1.0,
                                Mathf.PI / 180.0, 50,
                                50.0, 10.0);

            //
            // Calculate the lines' screen coordinates and
            // world coordinates.
            //

            // Clear the previous coordinates.
            lines.Clear();

            // Iterate over the lines.
            int numHoughLines = houghLines.cols() *
                                houghLines.rows() *
                                houghLines.channels();
            int[] houghLinesArray = new int[numHoughLines];
            houghLines.get(0, 0, houghLinesArray);
            for (int i = 0; i < numHoughLines; i += 4) {
We will use our ConvertToScreenPosition helper method to convert the line's endpoints from image space to screen space:
                // Convert lines' image coordinates to
                // screen coordinates.
                Vector2 screenPoint0 =
                        ConvertToScreenPosition(
                                houghLinesArray[i],
                                houghLinesArray[i + 1]);
                Vector2 screenPoint1 =
                        ConvertToScreenPosition(
                                houghLinesArray[i + 2],
                                houghLinesArray[i + 3]);
Similarly, we will use our ConvertToWorldPosition helper method to convert the line's endpoints from screen space to world space. Having done our conversions, we will instantiate Line and add it to the list. Here is the code that completes the UpdateLines method:
                // Convert screen coordinates to world
                // coordinates based on raycasting.
                Vector3 worldPoint0 =
                        ConvertToWorldPosition(screenPoint0);
                Vector3 worldPoint1 =
                        ConvertToWorldPosition(screenPoint1);

                Line line = new Line(
                        screenPoint0, screenPoint1,
                        worldPoint0, worldPoint1);
                lines.Add(line);
            }
        }
Our ConvertToScreenPosition helper method takes account of the fact that our screen coordinates are in portrait format, whereas our image coordinates are in landscape format. The conversion from image space to screen space is implemented as follows:
        Vector2 ConvertToScreenPosition(float imageX,
                                        float imageY) {
            float screenX = screenWidth -
                            imageY *
                            screenPixelsPerImagePixel;
            float screenY = screenHeight -
                            imageX *
                            screenPixelsPerImagePixel -
                            screenPixelsYOffset;
            return new Vector2(screenX, screenY);
        }
Our ConvertToWorldPosition helper method uses Unity's built-in raycasting functionality and our specified target distance, raycastDistance, to convert the given 2D screen coordinates to 3D world coordinates:
        Vector3 ConvertToWorldPosition(
                Vector2 screenPosition) {
            Ray ray = _camera.ScreenPointToRay(
                    screenPosition);
            return ray.GetPoint(raycastDistance);
        }
We will implement the standard OnPostRender callback by checking whether any simulated balls or lines are present and, if not, by calling a helper method, DrawPreview. The code is as follows:
        void OnPostRender() {
            if (!simulating) {
                DrawPreview();
            }
        }
The DrawPreview helper method serves to show the positions and dimensions of detected circles and lines, if any. To avoid unnecessary draw calls, the method returns early if there are no objects to draw, as seen in the following code:
        void DrawPreview() {

            // Draw 2D representations of the detected
            // circles and lines, if any.

            int numCircles = circles.Count;
            int numLines = lines.Count;
            if (numCircles < 1 && numLines < 1) {
                return;
            }
Having determined that there are detected shapes to draw, the method proceeds by configuring the OpenGL context to draw in screen space using drawPreviewMaterial. This setup is seen in the following code:
            GL.PushMatrix();
            if (drawPreviewMaterial != null) {
                drawPreviewMaterial.SetPass(0);
            }
            GL.LoadPixelMatrix();
If there are any detected circles, we will do one draw call to highlight them all. Specifically, we will tell OpenGL to begin drawing quads, we will feed it the screen coordinates of squares that approximate the circles, and then we will tell it to stop drawing quads. Here is the code:
            if (numCircles > 0) {
                // Draw the circles.
                GL.Begin(GL.QUADS);
                for (int i = 0; i < numCircles; i++) {
                    Circle circle = circles[i];
                    float centerX =
                            circle.screenPosition.x;
                    float centerY =
                            circle.screenPosition.y;
                    float radius =
                            0.5f * circle.screenDiameter;
                    float minX = centerX - radius;
                    float maxX = centerX + radius;
                    float minY = centerY - radius;
                    float maxY = centerY + radius;
                    GL.Vertex3(minX, minY, 0f);
                    GL.Vertex3(minX, maxY, 0f);
                    GL.Vertex3(maxX, maxY, 0f);
                    GL.Vertex3(maxX, minY, 0f);
                }
                GL.End();
            }
Similarly, if there are any detected lines, we perform one draw call to highlight them all. Specifically, we will tell OpenGL to begin drawing lines, we will feed it the lines' screen coordinates, and then we will tell it to stop drawing lines. Here is the code, which completes the DrawPreview method:
            if (numLines > 0) {
                // Draw the lines.
                GL.Begin(GL.LINES);
                for (int i = 0; i < numLines; i++) {
                    Line line = lines[i];
                    GL.Vertex(line.screenPoint0);
                    GL.Vertex(line.screenPoint1);
                }
                GL.End();
            }

            GL.PopMatrix();
        }
We will implement the standard OnGUI callback by drawing a button. Depending on whether simulated balls and lines are already present, the button displays either Stop Simulation or Start Simulation. When the button is clicked, a helper method is called—either StopSimulation or StartSimulation. Here is the code for OnGUI:
        void OnGUI() {
            GUI.skin.button.fontSize = buttonFontSize;
            if (simulating) {
                if (GUI.Button(buttonRect,
                               "Stop Simulation")) {
                    StopSimulation();
                }
            } else {
                if (GUI.Button(buttonRect,
                               "Start Simulation")) {
                    StartSimulation();
                }
            }
        }
The StartSimulation helper method begins by pausing the video feed and placing copies of simulatedCirclePrefab atop the detected circles. Each instance is scaled to match a detected circle's diameter. Here is the first part of the method:
        void StartSimulation() {

            // Freeze the video background.
            webCamTexture.Pause();

            // Create the circles' representation in the
            // physics simulation.
            int numCircles = circles.Count;
            for (int i = 0; i < numCircles; i++) {
                Circle circle = circles[i];
                GameObject simulatedCircle =
                        (GameObject)Instantiate(
                                simulatedCirclePrefab);
                Transform simulatedCircleTransform =
                        simulatedCircle.transform;
                simulatedCircleTransform.position =
                        circle.worldPosition;
                simulatedCircleTransform.localScale =
                        circle.screenDiameter *
                        Vector3.one;
                simulatedObjects.Add(simulatedCircle);
            }
The method finishes by placing copies of simulatedLinePrefab atop the detected lines. Each instance is scaled to match a detected line's length. Here is the rest of the method:
            // Create the lines' representation in the
            // physics simulation.
            int numLines = lines.Count;
            for (int i = 0; i < numLines; i++) {
                Line line = lines[i];
                GameObject simulatedLine =
                        (GameObject)Instantiate(
                                simulatedLinePrefab);
                Transform simulatedLineTransform =
                        simulatedLine.transform;
                float angle = -Vector2.Angle(
                        Vector2.right,
                        line.screenPoint1 -
                        line.screenPoint0);
                Vector3 worldPoint0 = line.worldPoint0;
                Vector3 worldPoint1 = line.worldPoint1;
                simulatedLineTransform.position =
                        0.5f * (worldPoint0 + worldPoint1);
                simulatedLineTransform.eulerAngles =
                        new Vector3(0f, 0f, angle);
                simulatedLineTransform.localScale =
                        new Vector3(
                                Vector3.Distance(
                                        worldPoint0,
                                        worldPoint1),
                                lineThickness,
                                lineThickness);
                simulatedObjects.Add(simulatedLine);
            }
        }
The StopSimulation helper method simply serves to resume the video feed, delete all simulated balls and lines, and clear the list that contained these simulated objects. With the list empty, the conditions for the detectors to run (in the Update method) are fulfilled again. StopSimulation is implemented like this:
        void StopSimulation() {

            // Unfreeze the video background.
            webCamTexture.Play();

            // Destroy all objects in the physics
            // simulation.
            int numSimulatedObjects =
                    simulatedObjects.Count;
            for (int i = 0; i < numSimulatedObjects; i++) {
                GameObject simulatedObject =
                        simulatedObjects[i];
                Destroy(simulatedObject);
            }
            simulatedObjects.Clear();
        }
When the script's instance is destroyed (at the end of the scene), we will ensure that the webcam and gyroscope are released, as seen in the following code:
        void OnDestroy() {
            if (webCamTexture != null) {
                webCamTexture.Stop();
            }
            if (gyro != null) {
                gyro.enabled = false;
            }
        }
    }
}
Save the script and drag it from the Project pane to the Main Camera object in Hierarchy. Click on the Main Camera object and, in the Detect And Simulate (Script) section of its Inspector, drag the following objects to the following fields:
- DrawSolidRed (from Rollingball/Materials in the Project pane) to the Draw Preview Material field (in Inspector)
- SimulatedCircle (from Rollingball/Prefabs in the Project pane) to the Simulated Circle Prefab field (in Inspector)
- SimulatedLine (from Rollingball/Prefabs in the Project pane) to the Simulated Line Prefab field (in Inspector)

After these changes, the script's section in Inspector should look like this:
Our scene is complete! All that remains is to configure, build, and test it.