Selfiegram is now themed and looking much nicer than before, but we still haven't really done much customization beyond some color tweaks and font changes. Let's take a look at building a custom UIView and UIViewController subclass to further improve our app.
The reason to make a custom view or view controller is to do something that isn’t available in the ones Apple or third-party libraries provide. In our case we are going to be making a replacement for the image picker we are currently using to take photos.
There are reams of third-party camera view controllers out there, each with its own strengths and weaknesses, but we are going to be creating our own. This is both because it is a good example of what it involves and because it lets us talk a bit about working with the camera in iOS, beyond what we could do with the image picker.
We will be using the AVKit framework to handle the communication with, and presentation of, the camera. AVKit is a framework that includes a great many classes and functions for interacting with, displaying, and creating audio and video content (hence the name AVKit). Using AVKit you could build yourself an entire video and audio editing suite, but we're going to start small and just capture images from the camera.
There is a lot to do in this chapter. We will need a custom view to show what the camera is seeing, we are going to need a custom view controller to perform the role of talking to the camera and returning an image, and we need to hook it into our existing application structure. Let's get started!
The first step in building our own camera controller is showing what the camera is seeing, as it is going to be terrifically hard to take a selfie if we can’t see what the camera will end up saving. But we are going to need somewhere for our custom view code to exist. To get started, we are going to make a new view controller to hold our custom camera view:
Create a new Cocoa Touch Class file.
Name it CaptureViewController.swift.
Make it a subclass of UIViewController.
Save it into the project and ensure the Selfiegram target is selected.
Import the AVKit framework:

    import AVKit
Create a new UIView subclass called PreviewView:

    class PreviewView: UIView {
    }
Create the previewLayer property:

    var previewLayer: AVCaptureVideoPreviewLayer?
This property holds an optional AVCaptureVideoPreviewLayer object, which is itself a subclass of CALayer. A CALayer comes from the Core Animation framework and is an important component of drawing views in iOS. Each UIView has a layer inside it, which is responsible for the actual drawing of the view. They are called layers because they can be laid on top of one another to create a drawing hierarchy. In our code here, the AVCaptureVideoPreviewLayer is a layer designed to show video content. Later we will configure our code so that this layer shows what the camera is seeing, and add it as a sublayer to this view's main layer.
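To make the layer idea concrete, here is a minimal, hypothetical sketch of stacking a plain CALayer on top of a view's own layer (someView stands in for any UIView you already have; it isn't part of Selfiegram):

    import UIKit

    // Hypothetical sketch: every UIView owns a root CALayer, and extra
    // layers can be stacked on top of it as sublayers.
    let badge = CALayer()
    badge.backgroundColor = UIColor.red.cgColor
    badge.frame = CGRect(x: 8, y: 8, width: 12, height: 12)

    // someView is a placeholder for an existing view. Its layer draws the
    // view's own content; the badge layer is drawn on top of that.
    someView.layer.addSublayer(badge)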
We are going to be showing a video layer because we want the view that appears to the user to be dynamic and reflect what the camera is seeing in real time. We could cobble together something ourselves using UIImageViews to create an approximation of what the camera sees, but it would be a bad hack and look terrible. It's much easier to stream in the camera output as a video and display it live, capturing only the relevant moments when the user wants them.
Now we need to give our preview layer something to actually preview. Implement the setSession function:

    func setSession(_ session: AVCaptureSession) {
        // Ensure that we only ever do this once for this view
        guard self.previewLayer == nil else {
            NSLog("Warning: \(self.description) attempted to set its"
                + " preview layer more than once. This is not allowed.")
            return
        }

        // Create a preview layer that gets its content from the
        // provided capture session
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)

        // Fill the contents of the layer, preserving the original
        // aspect ratio
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

        // Add the preview layer to our layer
        self.layer.addSublayer(previewLayer)

        // Store a reference to the layer
        self.previewLayer = previewLayer

        // Ensure that the sublayer is laid out
        self.setNeedsLayout()
    }
This method does a fair amount of work, and is intended to be called once we have set up the camera and it is ready to display its content. The key to the method is the session parameter that's passed into it. This is an AVCaptureSession object and represents a current capture session; essentially, it has all the information from the camera bundled up inside of it. This will be configured elsewhere; in our code here, all we need to worry about is grabbing the camera data from it.
First we do a check to make sure that we haven’t already set up the layer, and if we have, we abort. If we didn’t have this check we could end up with multiple video layers being drawn into our view at once. Then we set the gravity of the layer so that it always keeps the input aspect ratio and resizes itself to fit the entire layer. Without this we would get distorted and squished video.
Finally, we add our video layer to the layer hierarchy of the view and then tell the view to redraw.
Technically we are telling the view to lay itself and all of its child views out, which isn't the same as a redraw but works well enough for our purposes. Additionally, the call to setNeedsLayout won't force an immediate drawing of the view and all its children. What it does is schedule a redraw; on the next update of the drawing system, iOS will cause the view to be redrawn. Most of the time this all happens so fast you won't even see it, but it is worth knowing that it isn't actually instantaneous, although the method itself does return instantly.
The real advantage of this is that you can bundle up all your requests at once and get iOS to do all the redrawing in the next update, without having to wait for each call to finish drawing before moving on to the next.
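As a quick illustration of that batching, here is a small sketch (view here is any hypothetical view, not one of Selfiegram's):

    // Both calls just mark the view as needing layout; the actual
    // layout pass happens once, on the next update cycle.
    view.setNeedsLayout()
    view.setNeedsLayout() // no extra work; a pass is already scheduled

    // If you genuinely need the pending layout to happen right now
    // (rarely necessary), you can force it:
    view.layoutIfNeeded()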
With a call ready to redraw our view, we have to write the code to handle that redraw. We'll be overriding a UIView method for this, layoutSubviews, which will be called by iOS when it comes time to lay out the view and its subviews. In our case this will be triggered by our call to setNeedsLayout.
You shouldn’t ever call this method yourself. Trying to interrupt when and how iOS draws views is risky and will very likely result in unexpected behavior and crashes.
Implement the layoutSubviews method:

    override func layoutSubviews() {
        previewLayer?.frame = self.bounds
    }
All we do here is make the size of the preview layer the same as that of the view itself, essentially filling it completely.
Finally, we need to handle what happens when the device rotates. As it currently stands we aren't dealing with that, which could produce some very strange-looking output on the preview layer. Correctly resizing and laying out a video layer based on orientation isn't an easy task, but luckily Apple has already handled all of that for us.
Implement the setCameraOrientation method:

    func setCameraOrientation(_ orientation: AVCaptureVideoOrientation) {
        previewLayer?.connection?.videoOrientation = orientation
    }
All we do here is set the orientation of the video layer to the orientation passed in as the parameter; the video layer knows how to handle it from there, and we don't have to think about it. This method will be called by the view controller whenever the orientation of the device changes.
The time has come to start creating our camera capture view controller. There is a fair amount of work we have to do in this section to build our view controller. We’ll get started with the UI.
Open Main.storyboard and drag a navigation controller into the scene.
Delete the table view controller that came with the navigation controller.
Control-drag from the navigation controller onto the view controller, and select Root View Controller from the Relation Segue section.
We are using a navigation controller here because later we will be adding another view controller after the capture view controller, and we might as well set up the basics now rather than have to change the controller hierarchy later on.
Drag an empty UIView into the main view of the new navigation controller.
Resize the view so that it takes up the entirety of the screen.
Using the Add New Constraints menu, add the following constraints to the view, pinning it to all four edges of its parent:

Top space: 0 points

Leading space: 0 points

Trailing space: 0 points

Bottom space: 0 points
When moving around and resizing regular views it can help to change their background color to something very obvious, such as full green or red, so that you can easily see how they differ from the views around them. Just make sure to set the background color back to the appropriate default when you finish.
Now our view is fully pinned to its parent view and will always be the same size as it. This view will be our camera preview view, and as such it needs to take up all the space available.
Select the new view and open the Identity inspector.
Change the type of the view from UIView to PreviewView.
We could have just changed the default view that all view controllers have to be our custom preview class, but as we plan on having other UI elements laid out on top, it makes more sense to have a separate view perform the role of the preview.
With that done, our preview view is ready to be configured. It’s time for the rest of the UI:
Drag a button into the navigation bar.
Select the button and, using the Attributes inspector, configure it in the following way:
Set Style to Bordered.
Set the System Item to Cancel.
Select the navigation bar and, using the Attributes inspector, set its title to “Selfie!”.
In the object library, search for a “Visual Effect with Blur” view.
Drag this into the main view (not the preview view!).
In the Attributes inspector, change the Blur Style to Dark.
Using the Add New Constraints menu, add the following constraints:
Height: 40 points
Leading space: 0 points
Trailing space: 0 points
Bottom space: 0 points
This will pin our effect view to the bottom of the view, making it 40 points high and as wide as the main view.
UIVisualEffectView is a view class designed for presenting special visual effects as a mask over the top of a view and its contents. In our case we are using the blur form of the view to create a dark blurred area, so that we don't have a harsh line between our camera preview and the instructions on its use (which we are about to create).
Visual effect views currently support both blur and vibrancy masks, but it is possible that more effects will be created over time.
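If you ever need the same effect without the storyboard, a minimal sketch looks like this (instructionsLabel is a hypothetical label, not something we've created in Selfiegram):

    import UIKit

    // Sketch: creating a dark blur effect view in code.
    let blur = UIBlurEffect(style: .dark)
    let effectView = UIVisualEffectView(effect: blur)

    // Subviews must go into contentView; adding them directly to the
    // effect view itself is not supported.
    effectView.contentView.addSubview(instructionsLabel)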
Drag a label into the visual effect view, and set its text to "Tap to take a selfie".
Using the Attributes inspector, change the color of the label to white.
Control-drag from the label into the effect view, and in the menu that appears add the following constraints:
Center vertically
Center horizontally
Drag a tap gesture recognizer into the view controller.
UITapGestureRecognizer is a specialized subclass of gesture recognizer designed to trigger when a tap is encountered. You can configure them to respond to single or multiple taps, as well as single or multiple fingers. The UIGestureRecognizer class in general can be used to recognize almost any form of gesture, from taps, to presses, to swipes.
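For reference, here is a sketch of wiring up the same recognizer in code rather than in the storyboard (it assumes the takeSelfie action we connect shortly, taking the recognizer as its sender):

    // Sketch: creating the tap gesture recognizer programmatically.
    let tap = UITapGestureRecognizer(target: self,
                                     action: #selector(takeSelfie(_:)))
    tap.numberOfTapsRequired = 1    // a single tap...
    tap.numberOfTouchesRequired = 1 // ...with a single finger
    view.addGestureRecognizer(tap)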
With this done, our UI is complete.
The next step is to hook up our freshly made UI to our code:
Select the view controller and, in the Identity inspector, set the class to CaptureViewController.
Open the assistant editor and make sure that the CaptureViewController.swift file is open.
Control-drag from the preview view into the CaptureViewController class and create a new outlet called cameraPreview.

Control-drag from the cancel button into the CaptureViewController class and create a new action called close.

Control-drag from the tap gesture recognizer into the CaptureViewController class and create a new action called takeSelfie.
Now we can start writing some code to set all this into action.
Our UI is complete and hooked up; now it’s time to give it some functionality. There are quite a few steps involved in this, however, and a lot of new libraries we haven’t touched on so far. Our first step will be configuring a few properties for later use:
Create a completion handler property:

    typealias CompletionHandler = (UIImage?) -> Void
    var completion: CompletionHandler?
This will be used later on as a way to signal to the rest of the application that we have successfully grabbed an image for use in the selfie. As this view controller is a replacement for the image picker, we will use this handler as the way of passing information back to the list view controller. When we have a photo, or the cancel button is pressed, we will call this completion handler and pass in the image (or nil in the case of a cancellation) to let the list view controller continue its job of creating the rest of the selfie.
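To preview how this will be used, here is a sketch of the eventual call site (captureViewController is hypothetical here; the real wiring appears later, in "Calling the Capture View Controller"):

    // Sketch: the presenting controller decides what to do with the result.
    captureViewController.completion = { (image: UIImage?) in
        if let image = image {
            // a selfie was taken; hand the image on
        } else {
            // nil means the user cancelled (or capture failed)
        }
    }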
Create a session and an output property:

    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
These two properties are our main interface into the camera (or will be). The capture session represents a live stream of what the camera sees, and the output provides an interface for taking a still photo from the camera. We will use the session for displaying into our custom preview view what the camera sees, and the output will give us our selfie image.
Create an orientation computed property:

    var currentVideoOrientation: AVCaptureVideoOrientation {
        let orientationMap: [UIDeviceOrientation: AVCaptureVideoOrientation]
        orientationMap = [
            .portrait: .portrait,
            .landscapeLeft: .landscapeRight,
            .landscapeRight: .landscapeLeft,
            .portraitUpsideDown: .portraitUpsideDown
        ]

        let currentOrientation = UIDevice.current.orientation

        let videoOrientation = orientationMap[currentOrientation,
                                              default: .portrait]

        return videoOrientation
    }
This property uses the device's orientation to work out the correct orientation for both the video and the photo. This is to prevent taking a photo and having it presented sideways or upside down. The code first maps device orientations (of type UIDeviceOrientation) to AVKit orientations (of type AVCaptureVideoOrientation). Then we get the device's current orientation and use our mapping to return the matching AVKit orientation, using portrait as the default.
You might be wondering why iOS has both a device and an AV orientation. This is so that, if you want, you can change the orientation of the device without the video orientation changing along with it.
Our properties are done; next we'll set up our capture session and our preview to show what the camera is seeing:
Replace viewDidLoad with the following:

    override func viewDidLoad() {
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera],
            mediaType: AVMediaType.video,
            position: AVCaptureDevice.Position.front
        )

        // Get the first available device; bail if we can't find one
        guard let captureDevice = discovery.devices.first else {
            NSLog("No capture devices available.")
            self.completion?(nil)
            return
        }

        // Attempt to add this device to the capture session
        do {
            try captureSession.addInput(
                AVCaptureDeviceInput(device: captureDevice))
        } catch let error {
            NSLog("Failed to add camera to capture session: \(error)")
            self.completion?(nil)
        }

        // Configure the camera to use high-resolution
        // capture settings
        captureSession.sessionPreset = AVCaptureSession.Preset.photo

        // Begin the capture session
        captureSession.startRunning()

        // Add the photo output to the session, so that
        // it can receive photos when it wants them
        if captureSession.canAddOutput(photoOutput) {
            captureSession.addOutput(photoOutput)
        }

        self.cameraPreview.setSession(captureSession)

        super.viewDidLoad()
    }
The first thing we do here is ask AVKit for all capture devices that are wide-angle cameras, are capable of capturing video, and are on the front. Essentially, we are asking for the front-facing camera. This returns a list of all devices that match these requirements.
On every device Apple currently makes there is only a single camera that matches these requirements, but it is possible that Apple may add more front cameras in the future. This is why the API is set up in this way.
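To see how that flexibility looks in practice, here is a sketch that asks for more than one device type at once. (The TrueDepth camera type exists only on some devices and requires iOS 11.1 or later, so the wide-angle camera acts as a fallback; this isn't something Selfiegram needs.)

    // Sketch: a discovery session can match several device types; the
    // returned devices are ordered to match the deviceTypes array.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInTrueDepthCamera, .builtInWideAngleCamera],
        mediaType: .video,
        position: .front
    )

    // Picks the TrueDepth camera when available, else the wide-angle one.
    let frontCamera = discovery.devices.first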
Then we try to get the first camera out of this list. If this fails (for example, if the front camera is unavailable or damaged), we bail out: we call the completion handler with nil and end there.

Once we have a device, we try to add it as the input device for our capture session, essentially telling the session to be ready to stream video from the front camera. If we fail to do that, we again run the completion handler.
Next, we set the session preset (the quality) of the session to a level appropriate for taking photos (that is to say, a high level of quality) and then start the session running. This will begin the streaming of the camera data into the session property.
The call to startRunning can take some time to return, and it blocks the code that follows it until it finishes. We generally want our UIs to be snappy and responsive at all times, so having something block the rest of the method from completing is a bad thing. In our case, as we can't continue loading the view without the camera, we don't have to worry about it slowing down the view's setup; we need it to finish before continuing. Normally, however, it is worth running a call like this on a different queue to prevent it from slowing down your UI.
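If you did want to take that advice, a minimal sketch might look like this (sessionQueue is a hypothetical serial queue you would own in the view controller):

    // Sketch: starting the session off the main thread.
    let sessionQueue = DispatchQueue(label: "SelfiegramCaptureSession")

    sessionQueue.async {
        // startRunning() blocks until the session is up, so do it here
        // rather than on the main queue.
        self.captureSession.startRunning()
    }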
With our session running, we then configure it to use our output property as a valid output for the session. We need to do this so we can grab photos out of the session while it is running.
Finally, we configure our custom preview view to show the session.
Our session is now configured, and our UI is ready to show the camera. So how do we interact with it all?
Create the viewWillLayoutSubviews method:

    override func viewWillLayoutSubviews() {
        self.cameraPreview?.setCameraOrientation(currentVideoOrientation)
    }
This will be called whenever the device orientation changes; all we do is update the orientation of the video preview.
Now we need to handle when the cancel button is tapped. Add the following to the close method:

    self.completion?(nil)
If the user taps the cancel button, all we need to do is call the completion handler with nil passed in. When we replace the image picker in the list view controller (see "Calling the Capture View Controller"), we will be using the value from the completion handler to work out what to do. This means that we don't have to worry about dismissing ourselves from the view hierarchy, as the selfie list view controller will be handling that task.
Next we need to handle when the user wants to take a selfie. In our app, tapping on the screen takes the picture. This is different from the image picker, which had a dedicated button for taking the photo. Add the following to the takeSelfie method:
    // Get a connection to the output
    guard let videoConnection =
        photoOutput.connection(with: AVMediaType.video) else {
        NSLog("Failed to get camera connection")
        return
    }

    // Set its orientation, so that the image is oriented correctly
    videoConnection.videoOrientation = currentVideoOrientation

    // Indicate that we want the data it captures to be in JPEG format
    let settings = AVCapturePhotoSettings(
        format: [AVVideoCodecKey: AVVideoCodecType.jpeg])

    // Begin capturing a photo; it will call
    // photoOutput(_, didFinishProcessingPhoto:, error:) when done
    photoOutput.capturePhoto(with: settings, delegate: self)
You are probably getting an error warning you that the CaptureViewController class doesn't conform to the AVCapturePhotoCaptureDelegate protocol. Don't worry about it for now; we are just about to fix that.
The first thing we are doing in this code is getting a connection to the video stream inside our output property. We are getting this so we can then set its orientation correctly.
Then we set up the settings we want for our output—in this case, a single JPEG photo.
There’s a rather impressive amount of flexibility in the settings format. While we are just creating a JPEG, it is worth checking out the documentation on the available format options if you ever need more precise output control. Amongst the different formats Apple supports both JPEG and HEIF, both of which are great for photos. As it gains more adoption in the future it will be well worth considering supporting the newer HEIF (High Efficiency Image File Format) in your apps, in places where you would otherwise use JPEG. Until that future, though, we’ll stick with JPEG.
Finally, we tell the output that we want it to capture a photo. Its delegate (soon to be the CaptureViewController class) will take it from here.
Earlier, we made a call on our output property to capturePhoto(with:delegate:), which requires a delegate to handle the results of trying to save out an image. Now we need to actually conform to this protocol:
Create a new extension on the CaptureViewController class:

    extension CaptureViewController: AVCapturePhotoCaptureDelegate {
    }
Implement the photoOutput(_:didFinishProcessingPhoto:error:) delegate method:

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        if let error = error {
            NSLog("Failed to get the photo: \(error)")
            return
        }

        guard let jpegData = photo.fileDataRepresentation(),
              let image = UIImage(data: jpegData) else {
            NSLog("Failed to get image from encoded data")
            return
        }

        self.completion?(image)
    }
This method will be called once the output has either managed to create an image or failed to do so. If it fails, the error variable will have a value; in our case all we are doing is logging this, but you could present a dialog box to the users to let them know. If it doesn't fail, we instead get the image out of the data that was sent over, in the form of an AVCapturePhoto object, and convert it into a UIImage for use in creating a selfie.
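For example, inside the error check above you could present an alert instead of only logging; a minimal sketch:

    // Sketch: letting the user know the capture failed. This would sit
    // inside the "if let error = error" block, where error is available.
    let alert = UIAlertController(title: "Couldn't take a photo",
                                  message: error.localizedDescription,
                                  preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: .default))
    self.present(alert, animated: true, completion: nil)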
Then we call the completion handler, passing in the image we just collected. From this point onward it is the responsibility of the selfie list view controller.
Our capture view controller is now complete—it is time to hook it into our existing app. Currently, we are using the prebuilt image picker to collect images. We need to strip that part of our code out and add code to call our view controller in its place.
Now, we could create a normal segue in the storyboard to present the capture view controller, but we’ve already seen how to do that. Instead, we’ll take a look at how to create and display a view controller from a storyboard through code:
Open Main.storyboard and select the navigation controller of the capture view controller.
In the Identity inspector, set the storyboard ID to CaptureScene.
Setting a Storyboard ID will also get rid of the warning Xcode is showing about an unreachable scene.
We will be using the storyboard ID as a means of finding the view controller inside the storyboard to instantiate a copy of it.
Open SelfieListViewController.swift.
Inside the createNewSelfie method, delete all references to the image picker controller: its instantiation, its configuration, and its presentation.
Add the following to the createNewSelfie method:

    guard let navigation = self.storyboard?
            .instantiateViewController(withIdentifier: "CaptureScene")
            as? UINavigationController,
          let capture = navigation.viewControllers.first
            as? CaptureViewController
    else {
        fatalError("Failed to create the capture view controller!")
    }
We are using the instantiateViewController(withIdentifier:) method to instantiate a copy of the navigation controller that encapsulates our capture view. This works by diving into the storyboard, finding a scene with that identifier, and returning an instance of that view controller. Once we have that, we get a reference to the capture view controller inside of it. If either of these two actions fails, we exit using the fatalError call, as failure to load either of these means the flow of the app is compromised. However, as we've named our storyboard scene correctly and the first view controller of our navigation controller is the capture view, the else branch should never run.
As you start writing your own apps, bear in mind that while using fatalError and similar crashing calls is fine at the start of development, as you get closer to releasing your app into the world you'll want to limit this, even in cases where you are sure it can't happen; a crash rarely looks good. What to do when something unexpected happens will change on an app-by-app basis, so we haven't implemented anything here.
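As one hypothetical alternative for a shipping build, the same guard could fail softly instead of crashing:

    // Sketch: log the problem and abandon the action instead of crashing.
    guard let navigation = self.storyboard?
            .instantiateViewController(withIdentifier: "CaptureScene")
            as? UINavigationController,
          let capture = navigation.viewControllers.first
            as? CaptureViewController
    else {
        NSLog("Failed to create the capture view controller; ignoring.")
        return
    }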
Most of the time using the built-in means of linking scenes together through storyboard segues is the correct way of moving between view controllers. We’re doing it differently to show off the technique, but normally you wouldn’t do this and would instead use a segue. As a rule of thumb, you should only be creating view controllers through code when it isn’t easily done or logically sensible via segues.
Below the guard statement we just wrote, add the following:

    capture.completion = { (image: UIImage?) in
        if let image = image {
            self.newSelfieTaken(image: image)
        }

        self.dismiss(animated: true, completion: nil)
    }
Here we are setting the completion handler closure of the capture view controller. We check if we have an image, and if we do we run the method to create a new selfie. Then we dismiss the capture view controller. As we are using the image property to determine if the image creation worked, our completion handler is thankfully very small. Now we need to present our newly configured view controller.
Add the following after the closure we just wrote:

    self.present(navigation, animated: true, completion: nil)
With that done our capture view controller will now be presented, allow you to take photos, and then be dismissed. The last thing we need to do is a bit of cleanup.
As the image picker required delegate callbacks and our capture view instead uses a completion handler, we no longer need that part of our codebase.
We know deleting code can seem a little scary, but as you are of course using version control in your apps (right?), you should never be afraid to delete it. If it turns out you need it back, just use your version control system to restore it.
Delete the SelfieListViewController extension that made it conform to the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols.
Now our replacement of the image picker is complete! Check out the new functionality in Figure 13-1.