Chapter 13. Custom Views and View Controllers

Selfiegram is now themed and looking much nicer than before, but we still haven’t really done much customization beyond some color tweaks and font changes. Let’s take a look at building a custom UIView subclass and a custom UIViewController subclass to further improve our app.

The reason to make a custom view or view controller is to do something that isn’t available in the ones Apple or third-party libraries provide. In our case we are going to be making a replacement for the image picker we are currently using to take photos.

Note

There are reams of third-party camera view controllers out there, each with its own strengths and weaknesses, but we are going to be creating our own. This is both because it’s a good example of what’s involved and because it lets us talk a bit about working with the camera in iOS, beyond what we could do with the image picker.

We will be using the AVKit framework to handle the communication with, and presentation of, the camera. AVKit is a framework that includes a great many classes and functions for interacting with, displaying, and creating audio and video content—hence the name AVKit. Using AVKit you could build yourself an entire video and audio editing suite, but we’re going to start small and just capture images from the camera.

There is a lot to do in this chapter. We will need a custom view to show what the camera is seeing, a custom view controller to perform the role of talking to the camera and returning an image, and we need to hook both into our existing application structure. Let’s get started!

A Camera View

The first step in building our own camera controller is showing what the camera is seeing, as it is going to be terrifically hard to take a selfie if we can’t see what the camera will end up saving. First, though, we are going to need somewhere for our custom view code to exist. To get started, we are going to make a new view controller to hold our custom camera view:

  1. Create a new Cocoa Touch Class file.

  2. Name it CaptureViewController.swift.

  3. Make it a subclass of UIViewController.

  4. Save it into the project and ensure the Selfiegram target is selected.

  5. Import the AVKit framework:

    import AVKit
  6. Create a new UIView subclass called PreviewView:

    class PreviewView : UIView {
    
    }
  7. Create the previewLayer property:

    var previewLayer : AVCaptureVideoPreviewLayer?

This property holds an optional AVCaptureVideoPreviewLayer object, which is itself a subclass of CALayer. A CALayer comes from the Core Animation framework and is an important component of drawing views in iOS. Each UIView has a layer inside of it, which is responsible for the actual drawing of the view. They are called layers because they can be laid on top of one another to create a drawing hierarchy. In our code here, the AVCaptureVideoPreviewLayer is a layer designed to show video content. Later we will be configuring our code so that this layer shows what the camera is seeing and adding it as a sublayer to this view’s main layer.

  8. Now we need to give our preview layer something to actually preview. Implement the setSession function:

    func setSession(_ session: AVCaptureSession) {
        // Ensure that we only ever do this once for this view
        guard self.previewLayer == nil else {
            NSLog("Warning: \(self.description) attempted to set its"
            + " preview layer more than once. This is not allowed.")
            return
        }
    
        // Create a preview layer that gets its content from the
        // provided capture session
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    
        // Fill the contents of the layer, preserving the original
        // aspect ratio
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    
        // Add the preview layer to our layer
        self.layer.addSublayer(previewLayer)
    
        // Store a reference to the layer
        self.previewLayer = previewLayer
    
        // Ensure that the sublayer is laid out
        self.setNeedsLayout()
    }

This method does a fair amount of work, and is intended to be called once we have set up the camera and it is ready to display its content. The heart of the method is the session parameter that’s passed into it. This is an AVCaptureSession object and represents the current capture session—essentially, it has all the information from the camera bundled up inside of it. This will be configured elsewhere; in our code here all we need to worry about is grabbing the camera data from it.

    First we do a check to make sure that we haven’t already set up the layer, and if we have, we abort. If we didn’t have this check we could end up with multiple video layers being drawn into our view at once. Then we set the gravity of the layer so that it always keeps the input aspect ratio and resizes itself to fit the entire layer. Without this we would get distorted and squished video.

    Finally, we add our video layer to the layer hierarchy of the view and then tell the view to redraw.

    Tip

Technically we are telling the view to lay out itself and all of its child views, which isn’t the same as a redraw but works well enough for our purposes. Additionally, the call to setNeedsLayout won’t force an immediate drawing of the view and all its children. What it does is schedule a redraw, and in the next update of the drawing system iOS will cause the view to be redrawn. Most of the time this all happens so fast you won’t even see it, but it is worth knowing that the redraw isn’t actually instantaneous, even though the method itself returns immediately.

    The real advantage of this is that you can bundle up all your requests at once and get iOS to do all the redrawing in the next update, without having to wait for each call to finish drawing before moving on to the next.
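    As a tiny illustration (this snippet is our own, not part of Selfiegram, and view here is any UIView), repeated requests coalesce into a single layout pass, and UIKit also offers a way to force that pass immediately:

    // Both calls simply mark the view as needing layout; UIKit
    // performs one layout pass on the next screen update.
    view.setNeedsLayout()
    view.setNeedsLayout() // no extra work; the view is already marked

    // If you genuinely need the pending layout to happen right now,
    // layoutIfNeeded() forces it immediately.
    view.layoutIfNeeded()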

    With a call ready to redraw our view, we have to write the code to handle that redraw. We’ll be overriding a UIView method call for this, layoutSubviews, which will be called by iOS when it comes time to lay out the view and its subviews. In our case this will be triggered by our call to setNeedsLayout.

    Warning

    You shouldn’t ever call this method yourself. Trying to interrupt when and how iOS draws views is risky and will very likely result in unexpected behavior and crashes.

  9. Implement the layoutSubviews method:

    override func layoutSubviews() {
        previewLayer?.frame = self.bounds
    }

    All we do here is make the size of the preview layer the same as that of the view itself, essentially filling it completely.

Finally, we need to handle what happens when the device rotates. As it currently stands we aren’t dealing with that, which could result in some very strange-looking results being shown on the preview layer. Correctly handling resizing and laying out a video layer based on orientation isn’t an easy task, but luckily Apple has already handled all of that for us.

  10. Implement the setCameraOrientation method:

    func setCameraOrientation(_ orientation : AVCaptureVideoOrientation) {
        previewLayer?.connection?.videoOrientation = orientation
    }

All we do here is set the orientation of the video layer to the orientation passed in as the parameter—the video layer knows how to handle things from here, and we don’t have to think about it. This method will be called by the view controller whenever the orientation changes on the device.

With that done, our custom view is ready to be used.
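For reference, here is the complete PreviewView class, assembled from the steps above:

    class PreviewView : UIView {

        // The layer that displays the camera's video feed
        var previewLayer : AVCaptureVideoPreviewLayer?

        func setSession(_ session: AVCaptureSession) {
            // Ensure that we only ever do this once for this view
            guard self.previewLayer == nil else {
                NSLog("Warning: \(self.description) attempted to set its"
                + " preview layer more than once. This is not allowed.")
                return
            }

            // Create a preview layer that gets its content from the
            // provided capture session
            let previewLayer = AVCaptureVideoPreviewLayer(session: session)

            // Fill the contents of the layer, preserving the original
            // aspect ratio
            previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

            // Add the preview layer to our layer
            self.layer.addSublayer(previewLayer)

            // Store a reference to the layer
            self.previewLayer = previewLayer

            // Ensure that the sublayer is laid out
            self.setNeedsLayout()
        }

        override func layoutSubviews() {
            // Make the preview layer fill the view
            previewLayer?.frame = self.bounds
        }

        func setCameraOrientation(_ orientation : AVCaptureVideoOrientation) {
            previewLayer?.connection?.videoOrientation = orientation
        }
    }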

The Camera View Controller

The time has come to start creating our camera capture view controller. There is a fair amount of work to do in this section, so we’ll get started with the UI.

Building the UI

  1. Open Main.storyboard and drag a navigation controller into the scene.

  2. Delete the table view controller that came with the navigation controller.

  3. Drag a view controller into the scene.

  4. Control-drag from the navigation controller onto the view controller, and select Root View Controller from the Relation Segue section.

    Note

    We are using a navigation controller here because later we will be adding another view controller after the capture view controller, and we might as well set up the basics now rather than have to change the controller hierarchy later on.

  5. Drag an empty UIView into the main view of the new view controller (the root of the navigation controller).

  6. Resize the view so that it takes up the entirety of the screen.

  7. Using the Add New Constraints menu, add the following constraints to the view:

    • Top edge: 0
    • Bottom edge: 0
    • Left edge: 0
    • Right edge: 0

    Now our view is fully pinned to its parent view and will always be the same size as it. This view will be our camera preview view, and as such it needs to take up all the space available.

  8. Select the new view and open the Identity inspector.

  9. Change the type of the view from UIView to PreviewView.

Note

We could have changed the class of the default view that every view controller has to our custom preview class, but as we plan on having other UI elements laid out on top, it makes more sense to have a separate view perform the role of the preview.

With that done, our preview view is ready to be configured. It’s time for the rest of the UI:

  1. Drag a button into the navigation bar.

  2. Select the button and, using the Attributes inspector, configure it in the following way:

    • Set Style to Bordered.

    • Set the System Item to Cancel.

  3. Select the navigation bar and, using the Attributes inspector, set its title to “Selfie!”.

  4. In the object library, search for a “Visual Effect View with Blur” view.

  5. Drag this into the main view (not the preview view!).

  6. In the Attributes inspector, change the Blur Style to Dark.

  7. Using the Add New Constraints menu, add the following constraints:

  8. Drag a label into the effect view.

  9. Set the text of the label to “Tap to take a selfie”.

  10. Using the Attributes inspector, change the color of the label to white.

  11. Control-drag from the label into the effect view, and in the menu that appears add the following constraints:

    • Center vertically

    • Center horizontally

  12. Drag a tap gesture recognizer into the view controller.

With this done, our UI is complete.
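The code in the following sections assumes the usual outlet and action connections have been made in Interface Builder; the names below are taken from the code later in this chapter. As stubs inside CaptureViewController, they look something like this:

    // The PreviewView we added to the storyboard
    @IBOutlet weak var cameraPreview : PreviewView!

    // Connected to the Cancel button
    @IBAction func close(_ sender: Any) {
        // filled in later in this chapter
    }

    // Connected to the tap gesture recognizer
    @IBAction func takeSelfie(_ sender: Any) {
        // filled in later in this chapter
    }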

Talking to the Camera

Our UI is complete and hooked up; now it’s time to give it some functionality. There are quite a few steps involved in this, however, and a lot of new libraries we haven’t touched on so far. Our first step will be configuring a few properties for later use:

  1. Create a completion handler property:

    typealias CompletionHandler = (UIImage?) -> Void
    var completion : CompletionHandler?

    This will be used later on as a way to signal to the rest of the application that we have successfully grabbed an image for use in the selfie. As this view controller is a replacement for the image picker, we will use this handler as the way of passing back information to the list view controller. When we have a photo, or the cancel button is pressed, we will call this completion handler and pass in the image (or nil in the case of a cancellation) to let the list view controller continue in its job of creating the rest of the selfie.

  2. Create a session and an output property:

    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    These two properties are our main interface into the camera (or will be). The capture session represents a live stream of what the camera sees, and the output provides an interface for taking a still photo from the camera. We will use the session for displaying into our custom preview view what the camera sees, and the output will give us our selfie image.

  3. Create an orientation computed property:

    var currentVideoOrientation : AVCaptureVideoOrientation {
        let orientationMap : [UIDeviceOrientation : AVCaptureVideoOrientation] = [
            .portrait: .portrait,
            .landscapeLeft: .landscapeRight,
            .landscapeRight: .landscapeLeft,
            .portraitUpsideDown: .portraitUpsideDown
        ]

        let currentOrientation = UIDevice.current.orientation

        let videoOrientation =
            orientationMap[currentOrientation, default: .portrait]

        return videoOrientation
    }

    This property uses the device’s orientation to work out what the correct orientation for both the video and the photo will be. This is to prevent taking a photo and having it present sideways or upside down. The code first maps device orientations (of type UIDeviceOrientation) to AVKit orientations (of type AVCaptureVideoOrientation). Then we get the device’s current orientation and use our mapping to return the corresponding AVKit orientation, using portrait as the default.
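    As a quick illustration (this snippet is ours, not part of the app), the default: form of the dictionary subscript is what supplies .portrait for any orientation we didn’t map, such as .faceUp:

    let map : [UIDeviceOrientation : AVCaptureVideoOrientation] =
        [.portrait: .portrait]

    // .faceUp isn't a key in the map, so the default value is returned
    let orientation = map[.faceUp, default: .portrait] // .portrait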

Setting up the session

Our properties are done; our next moves are to set up our capture session and our preview to show what the camera is seeing:

  1. Replace viewDidLoad with the following:

    override func viewDidLoad() {
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera],
            mediaType: AVMediaType.video,
            position: AVCaptureDevice.Position.front)
    
        // Get the first available device; bail if we can't find one
        guard let captureDevice = discovery.devices.first else {
            NSLog("No capture devices available.")
            self.completion?(nil)
            return
        }
    
        // Attempt to add this device to the capture session
        do {
            try captureSession.addInput(AVCaptureDeviceInput(device:
               captureDevice))
        } catch let error {
            NSLog("Failed to add camera to capture session: \(error)")
            self.completion?(nil)
            return
        }
    
        // Configure the camera to use high-resolution
        // capture settings
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
    
        // Begin the capture session
        captureSession.startRunning()
    
        // Add the photo output to the session, so that
        // it can receive photos when it wants them
        if captureSession.canAddOutput(photoOutput) {
            captureSession.addOutput(photoOutput)
        }
    
        self.cameraPreview.setSession(captureSession)
    
        super.viewDidLoad()
    }

    The first thing we do here is ask AVKit for all capture devices that are wide-angle cameras, are capable of capturing video, and are on the front. Essentially, we are asking for the front-facing camera. This returns a list of all devices that match these requirements.

    Note

On every device Apple makes there is only a single camera that matches these requirements, but it is possible that Apple may add more front cameras in the future. This is why the API is set up in this way.

  2. Then we try and get the first camera out of this list. If this fails (for example, if the front camera is unavailable or perhaps damaged) we bail out: we call the completion handler with nil and end here.

  3. Once we have a device, we try and add it as the device for our capture session, essentially telling the session to be ready to stream video from the front camera. If we fail to do that, we again run the completion handler.

  4. Next, we set the session preset (the quality) of the session to a level appropriate for taking photos (that is to say, a high level of quality) and then start the session running. This will begin the streaming of the camera data into the session property.

    Note

    The call to startRunning can take some time to return and will block the rest of the code that follows it until it finishes. We generally want our UIs to be snappy and responsive at all times, so having something block the rest of the method from completing is a bad thing. In our case, as we can’t continue the loading of the view without the camera, we don’t have to worry about it slowing down the view’s setup—we need it to finish before continuing. Normally, however, it is worth running a call like this on a different queue to prevent it from slowing down your UI (see the sketch after these steps).

  5. With our session running, we then configure it to use our output property as a valid output for the session. We need to do this so we can grab photos out of the session while it is running.

  6. Finally, we configure our custom preview view to show the session.
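If you did want to keep a blocking call like startRunning off the main queue, a minimal sketch might look like the following (the dispatch queue here is our own addition, not part of the chapter’s code):

    // A private serial queue for session work (our own addition)
    let sessionQueue = DispatchQueue(label: "SelfiegramSessionQueue")

    sessionQueue.async {
        // startRunning blocks until the session is live, so we run it
        // off the main queue to avoid stalling the UI
        self.captureSession.startRunning()
    }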

Handling interaction

Our session is now configured, and our UI is ready to show the camera. So how do we interact with it all?

  1. Create the viewWillLayoutSubviews method:

    override func viewWillLayoutSubviews() {
        self.cameraPreview?.setCameraOrientation(currentVideoOrientation)
    }

    This will be called whenever the view controller lays out its views, including when the device orientation changes; all we do is pass the current orientation along to the video preview.

  2. Now we need to handle when the cancel button is tapped. Add the following to the close method:

    self.completion?(nil)

    If the user taps the cancel button all we need to do is call the completion handler with nil passed in. When we replace the image picker in the list view controller (see “Calling the Capture View Controller”) we will be using the value from the completion handler to work out what to do. This means that we don’t have to worry about dismissing ourselves from the view hierarchy, as the selfie list view controller will be handling that task.

  3. Next we need to handle when the user wants to take a selfie. In our app we are making it so that when the user taps on the screen it will take the picture. This is different from the image picker, where there was a dedicated button for taking the photo. Add the following to the takeSelfie method:

    // Get a connection to the output
    guard let videoConnection
       = photoOutput.connection(with: AVMediaType.video) else
    {
        NSLog("Failed to get camera connection")
        return
    }
    
    // Set its orientation, so that the image is oriented correctly
    videoConnection.videoOrientation = currentVideoOrientation
    
    // Indicate that we want the data it captures to be in JPEG format
    let settings =
     AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    
    // Begin capturing a photo; it will call
    // photoOutput(_, didFinishProcessingPhoto:, error:) when done
    photoOutput.capturePhoto(with: settings, delegate: self)

The first thing we are doing in this code is getting a connection to the video stream inside our output property. We are getting this so we can then set its orientation correctly.

Then we set up the settings we want for our output—in this case, a single JPEG photo.

Tip

There’s a rather impressive amount of flexibility in the settings format. While we are just creating a JPEG, it is worth checking out the documentation on the available format options if you ever need more precise output control. Amongst the different formats, Apple supports both JPEG and HEIF (High Efficiency Image File Format), both of which are great for photos. As HEIF gains more adoption, it will be well worth considering supporting it in your apps in places where you would otherwise use JPEG. Until that future, though, we’ll stick with JPEG.
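If you do want to experiment with HEIF today, a hedged sketch (our own code, not the chapter’s) might look like this; note the availability check, as not every device can encode HEVC:

    // Prefer HEVC/HEIF when the device supports it; otherwise use JPEG
    let codec : AVVideoCodecType =
        photoOutput.availablePhotoCodecTypes.contains(.hevc) ? .hevc : .jpeg

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: codec])
    photoOutput.capturePhoto(with: settings, delegate: self)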

Finally, we tell the output that we want it to capture a photo. Its delegate (soon to be the CaptureViewController class) will take it from here.
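Because we pass self as the delegate, CaptureViewController needs to conform to the AVCapturePhotoCaptureDelegate protocol. That conformance isn’t shown in this section; a minimal sketch of what it might look like (our own, assuming the JPEG settings above) is as follows:

    extension CaptureViewController : AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto,
                         error: Error?) {

            if let error = error {
                NSLog("Failed to capture the photo: \(error)")
                self.completion?(nil)
                return
            }

            // Convert the captured data into a UIImage and hand it
            // back through the completion handler
            guard let jpegData = photo.fileDataRepresentation(),
                  let image = UIImage(data: jpegData) else {
                NSLog("Failed to get an image from the encoded data")
                self.completion?(nil)
                return
            }

            self.completion?(image)
        }
    }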

Calling the Capture View Controller

Our capture view controller is now complete—it is time to hook it into our existing app. Currently, we are using the prebuilt image picker to collect images. We need to strip that part of our code out and add code to call our view controller in its place.

Now, we could create a normal segue in the storyboard to present the capture view controller, but we’ve already seen how to do that. Instead, we’ll take a look at how to create and display a view controller from a storyboard through code:

  1. Open Main.storyboard and select the navigation controller of the capture view controller.

  2. In the Identity inspector, set the storyboard ID to CaptureScene.

    Tip

    Setting a Storyboard ID will also get rid of the warning Xcode is showing about an unreachable scene.

    We will be using the storyboard ID as a means of finding the view controller inside the storyboard to instantiate a copy of it.

  3. Open SelfieListViewController.swift.

  4. Inside the createNewSelfie method, delete all references to the image picker controller, its instantiation, its configuration, and its presentation.

  5. Add the following to the createNewSelfie method:

    guard let navigation = self.storyboard?
            .instantiateViewController(withIdentifier: "CaptureScene")
            as? UINavigationController,
          let capture = navigation.viewControllers.first
            as? CaptureViewController
    else {
        fatalError("Failed to create the capture view controller!")
    }

    We are using the instantiateViewController(withIdentifier:) method call to instantiate a copy of the navigation controller that encapsulates our capture view. This works by diving into the storyboard, finding a scene with that identifier, and returning an instance of its view controller. Once we have that, we get a reference to the capture view controller inside of it. If either of these two actions fails, we exit using the fatalError call, as failure to load either of them means the flow of the app is compromised. However, as we’ve named our storyboard scene correctly and the first view controller of our navigation controller is the capture view, the else should never run. As you start writing your own apps, bear in mind that while using fatalError and similar crashing calls is fine at the start of development, as you get closer to releasing your app into the world you’ll want to limit them, even in cases where you are sure they can’t happen; a crash rarely looks good. What to do when something unexpected happens will change on an app-by-app basis, so we haven’t implemented anything here.

    Note

    Most of the time using the built-in means of linking scenes together through storyboard segues is the correct way of moving between view controllers. We’re doing it differently to show off the technique, but normally you wouldn’t do this and would instead use a segue. As a rule of thumb, you should only be creating view controllers through code when it isn’t easily done or logically sensible via segues.

  6. Below the guard statement we just wrote, add the following:

    capture.completion = {(image : UIImage?) in
    
        if let image = image {
            self.newSelfieTaken(image: image)
        }
    
        self.dismiss(animated: true, completion: nil)
    }

    Here we are setting the completion handler closure of the capture view controller. We check if we have an image, and if we do we run the method to create a new selfie. Then we dismiss the capture view controller. As we are using the image parameter to determine whether the capture worked, our completion handler is thankfully very small. Now we need to present our newly configured view controller.

  7. Add the following after the closure we just wrote:

    self.present(navigation, animated: true, completion: nil)

    With that done, our capture view controller will now be presented, allow us to take photos, and then be dismissed. The last thing we need to do is a bit of cleanup.

    As the image picker required delegate callbacks and our capture view instead uses a completion handler, we no longer need that part of our codebase.

    Tip

    We know deleting code can seem a little scary, but as you are of course using version control in your apps (right?), you should never be afraid of deleting code. If it turns out you need it back, just use your version control system to bring it back.

  8. Delete the SelfieListViewController extension that made it conform to the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols.

Now our replacement of the image picker is complete! Check out the new functionality in Figure 13-1.

Figure 13-1. Our custom capture view controller
Warning

At this point we’ve reached the stage where we can’t test the functionality of Selfiegram without a physical device. The simulator has no cameras, only a photo library. The image picker we originally used had the photo library as a fallback, but our capture view controller does not.