The still photos and movies accessed by the user through the Photos app constitute the photo library. Your app can give the user an interface for exploring this library through the UIImagePickerController class.
In addition, the Assets Library framework lets you access the photo library and its contents programmatically. You’ll need to @import AssetsLibrary.
At a deeper level, the AV Foundation framework (Chapter 15) provides direct control over the camera hardware. You’ll need to @import AVFoundation (and probably CoreMedia).
Constants such as kUTTypeImage, referred to in this chapter, are provided by the Mobile Core Services framework; you’ll need to @import MobileCoreServices.
UIImagePickerController is a view controller (a UINavigationController subclass) providing a navigation interface in which the user can choose an item from the photo library. There are two ways to show this interface: on the iPhone, as a presented view controller; on the iPad, in a popover. Here’s the presented view controller approach:
UIImagePickerControllerSourceType type = UIImagePickerControllerSourceTypePhotoLibrary;
BOOL ok = [UIImagePickerController isSourceTypeAvailable:type];
if (!ok) { NSLog(@"alas"); return; }
UIImagePickerController* picker = [UIImagePickerController new];
picker.sourceType = type;
picker.mediaTypes =
    [UIImagePickerController availableMediaTypesForSourceType:type];
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil]; // iPhone
Your app is now attempting to access the photo library. The very first time your app does that, a system alert will appear, prompting the user to grant your app permission (Figure 17-1). You can modify the body of this alert by setting the “Privacy — Photo Library Usage Description” key (NSPhotoLibraryUsageDescription) in your app’s Info.plist to tell the user why you want to access the photo library. This is a kind of “elevator pitch”; you need to persuade the user in very few words.
If the user denies your app access, you’ll still be able to present the UIImagePickerController, but it will be empty (with a reminder that the user has denied your app access to the photo library) and the user won’t be able to do anything but cancel (Figure 17-2). Thus, your code is unaffected. You can check beforehand to learn whether your app has access to the photo library — I’ll explain how later in this chapter — and opt to do something other than present the UIImagePickerController if access has been denied; but you don’t have to, because the user will see a coherent interface and will cancel, and your app will proceed normally from there.
If the user does what Figure 17-2 suggests, switching to the Settings app and enabling access for your app under Privacy → Photos, your app will be terminated in the background! This is unfortunate, but is probably not a bug; Apple presumably feels that in this situation your app cannot continue coherently and should start over from scratch.
On the iPhone, the delegate will receive one of these messages:
imagePickerController:didFinishPickingMediaWithInfo:
imagePickerControllerDidCancel:
On the iPad, if you’re using a popover, there’s no Cancel button, so there’s no imagePickerControllerDidCancel:; you can detect the dismissal of the popover through the popover delegate. With a presented view controller, if a UIImagePickerControllerDelegate method is not implemented, the view controller is dismissed automatically at the point where that method would be called; but rather than relying on this, you should probably implement both delegate methods and dismiss the view controller yourself in each.
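For instance, a minimal imagePickerControllerDidCancel: for the presented view controller case need only dismiss the picker (a sketch):

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [self dismissViewControllerAnimated:YES completion:nil]; // nothing was chosen
}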
The didFinish... method is handed a dictionary of information about the chosen item. The keys in this dictionary depend on the media type:
For an image, the keys are:
UIImagePickerControllerMediaType
An NSString such as @"public.image", which is the same as kUTTypeImage.
UIImagePickerControllerReferenceURL
UIImagePickerControllerOriginalImage
For a movie, the keys are:
UIImagePickerControllerMediaType
An NSString such as @"public.movie", which is the same as kUTTypeMovie.
UIImagePickerControllerReferenceURL
UIImagePickerControllerMediaURL
Optionally, you can set the view controller’s allowsEditing to YES. In the case of an image, the interface then allows the user to scale the image up and to move it so as to be cropped by a preset rectangle; the dictionary will include two additional keys:
UIImagePickerControllerCropRect
UIImagePickerControllerEditedImage
In the case of a movie, if the view controller’s allowsEditing is YES, the user can trim the movie just as with a UIVideoEditorController (Chapter 15). The dictionary keys are the same as before.
Here’s an example implementation of imagePickerController:didFinishPickingMediaWithInfo: that covers the fundamental cases:
-(void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info {
    NSURL* url = info[UIImagePickerControllerMediaURL];
    UIImage* im = info[UIImagePickerControllerOriginalImage];
    UIImage* edim = info[UIImagePickerControllerEditedImage];
    if (edim)
        im = edim;
    if (!self.currentPop) { // presented view
        [self dismissViewControllerAnimated:YES completion:nil];
    } else { // popover
        [self.currentPop dismissPopoverAnimated:YES];
        self.currentPop = nil;
    }
    NSString* type = info[UIImagePickerControllerMediaType];
    if ([type isEqualToString: (NSString*)kUTTypeImage] && im)
        [self showImage:im];
    else if ([type isEqualToString: (NSString*)kUTTypeMovie] && url)
        [self showMovie:url];
}
The Assets Library framework does for the photo library roughly what the Media Player framework does for the music library (Chapter 16), letting your code explore the library’s contents. One obvious use of the Assets Library framework might be to implement your own interface for letting the user choose an image, in a way that transcends the limitations of UIImagePickerController. But you can go further with the photo library than you can with the media library: you can save media into the Camera Roll / Saved Photos album, and you can even create a new album and save media into it.
A photo or video in the photo library is an ALAsset. Like a media entity, an ALAsset can describe itself through key–value pairs called properties. (This use of the word “properties” has nothing to do with Objective-C language properties.) For example, it can report its type (photo or video), its creation date, its orientation if it is a photo whose metadata contains this information, and its duration if it is a video. You fetch a property value with valueForProperty:; the properties have names like ALAssetPropertyType. An ALAsset’s actual image data is obtained through a representation; you’ll usually work with its defaultRepresentation, which is an ALAssetRepresentation.
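For example (a sketch, where asset is an ALAsset we’ve already obtained):

NSString* type = [asset valueForProperty:ALAssetPropertyType]; // e.g. ALAssetTypePhoto
NSDate* date = [asset valueForProperty:ALAssetPropertyDate]; // creation date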
Once you have an ALAssetRepresentation, you can interrogate it to get the actual image, either as raw data or as a CGImage (see Chapter 2). The simplest way is to ask for its fullResolutionImage or its fullScreenImage (the latter is more suitable for display in your interface, and is identical to what the Photos app displays); you may then want to derive a UIImage from this using imageWithCGImage:scale:orientation:. The original scale and orientation of the image are available as the ALAssetRepresentation’s scale and orientation. Alternatively, if all you need is a small version of the image to display in your interface, you can ask the ALAsset itself for its aspectRatioThumbnail. An ALAssetRepresentation also has a url, which is the unique identifier for the ALAsset.
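To illustrate, here’s a minimal sketch of deriving a UIImage from an asset’s representation, assuming asset is an ALAsset already in hand:

ALAssetRepresentation* rep = asset.defaultRepresentation;
CGImageRef imref = rep.fullResolutionImage; // raw image, not orientation-corrected
UIImage* im = [UIImage imageWithCGImage:imref
                                  scale:rep.scale
                            orientation:(UIImageOrientation)rep.orientation];
// ALAssetOrientation values parallel UIImageOrientation, hence the cast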
The photo library itself is an ALAssetsLibrary instance. It is divided into groups (ALAssetsGroup), which have types. For example, the user might have multiple albums; each of these is a group of type ALAssetsGroupAlbum. You also have access to the PhotoStream album. An ALAssetsGroup has properties, such as a name, which you can fetch with valueForProperty:; one such property, the group’s URL (ALAssetsGroupPropertyURL), is its unique identifier. To fetch assets from the library, you either fetch one specific asset by providing its URL, or you start with a group and enumerate the group’s assets. To obtain a group, you can enumerate the library’s groups of a certain type, in which case you are handed each group as an ALAssetsGroup, or you can provide a particular group’s URL. Before enumerating a group’s assets, you may optionally filter the group using a simple ALAssetsFilter; this limits any subsequent enumeration to photos only, videos only, or both.
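Filtering is a one-liner; for example (a sketch, where group is an ALAssetsGroup we’ve already obtained):

[group setAssetsFilter:[ALAssetsFilter allPhotos]]; // later enumerations see photos only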
The Assets Library framework uses Objective-C blocks for fetching and enumerating assets and groups. These blocks behave in a special way: at the end of the enumeration, they are called one extra time with a nil first parameter. Thus, you must code your block carefully to avoid treating the first parameter as real on that final call. I was initially mystified by this curious block enumeration behavior, but one day the reason for it came to me in a flash: these blocks are all called asynchronously (on the main thread), meaning that the rest of your code has already finished running, so you’re given an extra pass through the block as your first opportunity to do something with all the data you’ve presumably gathered in the previous passes.
As I mentioned in the previous section, the system will ask the user for permission the first time your app tries to access the photo library, and the user can refuse. You can learn directly beforehand whether access has been refused:
ALAuthorizationStatus stat = [ALAssetsLibrary authorizationStatus];
if (stat == ALAuthorizationStatusDenied ||
        stat == ALAuthorizationStatusRestricted) {
    NSLog(@"%@", @"No access");
    return;
}
There is, however, no need to do this, because all the block-based methods for accessing the library allow you to supply a failure block; thus, your code will be able to retreat in good order when it discovers that it can’t access the library.
We now know enough for an example! Given an album title, I’ll find that album, pull out the first photo, and display that photo in the interface. The first step is to find the album; I do that by cycling through all albums, stopping when I find the one whose title matches the target title. The block will then be called one last time; at that point, I call another method to pull the first photo out of that album:
- (void) findAlbumWithTitle: (NSString*) albumTitle {
    __block ALAssetsGroup* album = nil;
    ALAssetsLibrary* library = [ALAssetsLibrary new];
    [library enumerateGroupsWithTypes: ALAssetsGroupAlbum
        usingBlock: ^ (ALAssetsGroup *group, BOOL *stop) {
            if (group) {
                NSString* title = [group valueForProperty: ALAssetsGroupPropertyName];
                if ([title isEqualToString: albumTitle]) {
                    album = group;
                    *stop = YES;
                }
            } else { // afterward
                if (!album) {
                    NSLog(@"%@", @"failed to find album");
                    return;
                }
                [self showFirstPhotoOfGroup:album];
            }
        } failureBlock: ^ (NSError *error) {
            NSLog(@"oops! %@", [error localizedDescription]); // e.g. "Global denied access"
        }];
}
And here’s the second method; it starts to enumerate the items of the album, stopping immediately after the first photo and showing that photo in the interface. I don’t need a very big version of the photo, so I use the asset’s aspectRatioThumbnail:
- (void) showFirstPhotoOfGroup: (ALAssetsGroup*) group {
    __block ALAsset* photo;
    [group enumerateAssetsUsingBlock:
        ^(ALAsset *result, NSUInteger index, BOOL *stop) {
            if (result) {
                NSString* type = [result valueForProperty:ALAssetPropertyType];
                if ([type isEqualToString: ALAssetTypePhoto]) {
                    photo = result;
                    *stop = YES;
                }
            } else { // afterward
                if (!photo)
                    return;
                CGImageRef im = photo.aspectRatioThumbnail;
                UIImage* im2 = [UIImage imageWithCGImage:im scale:0
                                             orientation:UIImageOrientationUp];
                self.iv.image = im2; // put image into our UIImageView
            }
        }];
}
You can write files into the Camera Roll / Saved Photos album. The basic function for writing an image file to this location is UIImageWriteToSavedPhotosAlbum. Some kinds of video file can also be saved here; in an example in Chapter 15, I checked whether this was true of a certain video file by calling UIVideoAtPathIsCompatibleWithSavedPhotosAlbum, and I saved the file by calling UISaveVideoAtPathToSavedPhotosAlbum.
The ALAssetsLibrary class extends these abilities by providing five additional methods (a usage sketch follows the list):
writeImageToSavedPhotosAlbum:orientation:completionBlock:
writeImageToSavedPhotosAlbum:metadata:completionBlock:
The metadata parameter is a dictionary of image metadata, such as you might obtain through the UIImagePickerControllerMediaMetadata key when the user takes a picture using UIImagePickerController.
writeImageDataToSavedPhotosAlbum:metadata:completionBlock:
videoAtPathIsCompatibleWithSavedPhotosAlbum:
writeVideoAtPathToSavedPhotosAlbum:completionBlock:
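Here’s a sketch of the first of those methods in action, assuming im is a UIImage we want to copy into the Saved Photos album:

ALAssetsLibrary* library = [ALAssetsLibrary new];
// ALAssetOrientation values parallel UIImageOrientation, hence the cast
[library writeImageToSavedPhotosAlbum:im.CGImage
                          orientation:(ALAssetOrientation)im.imageOrientation
                      completionBlock:^(NSURL* assetURL, NSError* error) {
    if (error) // e.g. the user has denied photo library access
        NSLog(@"save failed: %@", [error localizedDescription]);
    else
        NSLog(@"saved; the new asset URL is %@", assetURL);
}];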
To use UIImagePickerController as an interface for letting the user take pictures or videos with the camera, first check isSourceTypeAvailable: for UIImagePickerControllerSourceTypeCamera; it will be NO if the user’s device has no camera or the camera is unavailable. If it is YES, call availableMediaTypesForSourceType: to learn whether the user can take a still photo (kUTTypeImage), a video (kUTTypeMovie), or both. Now instantiate UIImagePickerController, set its source type to UIImagePickerControllerSourceTypeCamera, and set its mediaTypes in accordance with which types you just learned are available. Finally, set a delegate (adopting UINavigationControllerDelegate and UIImagePickerControllerDelegate), and present the view controller. In this situation, it is legal (and preferable) to use a presented view controller even on the iPad. For example:
BOOL ok = [UIImagePickerController isSourceTypeAvailable:
              UIImagePickerControllerSourceTypeCamera];
if (!ok) { NSLog(@"no camera"); return; }
NSArray* arr = [UIImagePickerController availableMediaTypesForSourceType:
                   UIImagePickerControllerSourceTypeCamera];
if ([arr indexOfObject:(NSString*)kUTTypeImage] == NSNotFound) {
    NSLog(@"no stills");
    return;
}
UIImagePickerController* picker = [UIImagePickerController new];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.mediaTypes = @[(NSString*)kUTTypeImage];
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];
Additional UIImagePickerController properties and class methods apply only when the source type is the camera (a brief configuration sketch follows this list):
isCameraDeviceAvailable:
Checks to see whether the front or rear camera is available, using one of these values as argument: UIImagePickerControllerCameraDeviceFront or UIImagePickerControllerCameraDeviceRear.
cameraDevice
Lets you learn and set which camera is being used.
availableCaptureModesForCameraDevice:
Reports whether the given camera can capture a still image, a movie, or both.
cameraCaptureMode
Lets you learn and set the capture mode (still image or movie).
isFlashAvailableForCameraDevice:
Checks whether flash is available on the given camera.
cameraFlashMode
Lets you learn and set the flash mode (or, for a movie, toggles the LED “torch”). Your choices are UIImagePickerControllerCameraFlashModeOff, UIImagePickerControllerCameraFlashModeAuto, and UIImagePickerControllerCameraFlashModeOn.
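Here’s the configuration sketch promised above; it prefers the rear camera and, when possible, turns the flash off (picker is the UIImagePickerController configured earlier):

if ([UIImagePickerController isCameraDeviceAvailable:
        UIImagePickerControllerCameraDeviceRear]) {
    picker.cameraDevice = UIImagePickerControllerCameraDeviceRear;
    if ([UIImagePickerController isFlashAvailableForCameraDevice:
            UIImagePickerControllerCameraDeviceRear])
        picker.cameraFlashMode = UIImagePickerControllerCameraFlashModeOff;
}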
When the view controller’s view appears, the user will see the interface for taking a picture, familiar from the Camera app, possibly including flash options, camera selection button, digital zoom (if the hardware supports it), photo/video option (if your mediaTypes setting allows both), and Cancel and Shutter buttons. If the user takes a picture, the presented view offers an opportunity to use the picture or to retake it.
New in iOS 7, the first time your app tries to let the user capture video, a system dialog will appear requesting access to the microphone. You can modify the body of this alert by setting the “Privacy — Microphone Usage Description” key (NSMicrophoneUsageDescription) in your app’s Info.plist. See the discussion of privacy settings earlier in this chapter. You can learn whether microphone permission has been granted by calling the AVAudioSession method requestRecordPermission: (Chapter 14).
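For example, this minimal sketch merely logs if microphone access has been refused:

[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (!granted)
        NSLog(@"no microphone access"); // the user has denied permission
}];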
Allowing the user to edit the captured image or movie (allowsEditing), and handling the outcome with the delegate messages, is the same as I described earlier. There won’t be any UIImagePickerControllerReferenceURL key in the dictionary delivered to the delegate, because the image isn’t in the photo library. A still image might report a UIImagePickerControllerMediaMetadata key containing the metadata for the photo. The photo library was not involved in the process of media capture, so no user permission to access the photo library is needed; of course, if you now propose to save the media into the photo library (as I described in the previous section), you will need permission.
New in iOS 7, devices destined for some markets may request permission to access the camera itself. But I have not been able to test this feature.
You can customize the UIImagePickerController interface. If you need to do that, you should probably consider dispensing with UIImagePickerController altogether and designing your own image capture interface from scratch, based around AV Foundation and AVCaptureSession, which I’ll introduce in the next section. Still, it may be that a modified UIImagePickerController is all you need.
In the image capture interface, you can hide the standard controls by setting showsCameraControls to NO, replacing them with your own overlay view, which you supply as the picker’s cameraOverlayView. In this case, you’re probably going to want some means in your overlay view to allow the user to take a picture! You can do that through these methods:
takePicture
startVideoCapture
stopVideoCapture
The key to customizing the look and behavior of the image capture interface is that a UIImagePickerController is a UINavigationController; the controls shown at the bottom of the default interface are the navigation controller’s toolbar. In this example, I’ll remove all the default controls and use a gesture recognizer on the cameraOverlayView to permit the user to double-tap the image in order to take a picture:
// ... starts out as before ...
picker.delegate = self;
picker.showsCameraControls = NO;
CGRect f = self.view.window.bounds;
UIView* v = [[UIView alloc] initWithFrame:f];
UITapGestureRecognizer* t =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tap:)];
t.numberOfTapsRequired = 2;
[v addGestureRecognizer:t];
picker.cameraOverlayView = v;
[self presentViewController:picker animated:YES completion:nil];
self.picker = picker;
Our tap: gesture recognizer action handler simply calls takePicture:
- (void) tap: (id) g {
    [self.picker takePicture];
}
It would be nice, however, to tell the user to double-tap to take a picture; we also need to give the user a way to dismiss the image capture interface. We could put a button and a label into the cameraOverlayView, but here, I’ll take advantage of the UINavigationController’s toolbar. We are the UIImagePickerController’s delegate, meaning that we are not only its UIImagePickerControllerDelegate but also its UINavigationControllerDelegate; I’ll use a delegate method to populate the toolbar:
- (void)navigationController:(UINavigationController *)nc
        didShowViewController:(UIViewController *)vc animated:(BOOL)animated {
    [nc setToolbarHidden:NO];
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(10,10), NO, 0);
    [[[UIColor blackColor] colorWithAlphaComponent:0.1] setFill];
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0,0,10,10));
    UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [nc.toolbar setBackgroundImage:im
                forToolbarPosition:UIBarPositionAny
                        barMetrics:UIBarMetricsDefault];
    nc.toolbar.translucent = YES;
    UIBarButtonItem* b =
        [[UIBarButtonItem alloc] initWithTitle:@"Cancel"
                                         style:UIBarButtonItemStylePlain
                                        target:self
                                        action:@selector(doCancel:)];
    UILabel* lab = [UILabel new];
    lab.text = @"Double tap to take a picture";
    lab.textColor = [UIColor whiteColor];
    lab.backgroundColor = [UIColor clearColor];
    [lab sizeToFit];
    UIBarButtonItem* b2 = [[UIBarButtonItem alloc] initWithCustomView:lab];
    [nc.topViewController setToolbarItems:@[b, b2]];
}
When the user double-taps to take a picture, our imagePickerController:didFinishPickingMediaWithInfo: delegate method is called, just as before. We don’t automatically get the secondary interface where the user is shown the resulting image and offered an opportunity to use it or retake the image. But we can provide such an interface ourselves, by pushing another view controller onto the navigation controller:
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage* im = info[UIImagePickerControllerOriginalImage];
    if (!im)
        return;
    SecondViewController* svc =
        [[SecondViewController alloc] initWithNibName:nil bundle:nil image:im];
    [picker pushViewController:svc animated:YES];
}
(Designing the SecondViewController class is left as an exercise for the reader.)
Instead of using UIImagePickerController, you can control the camera and capture images using the AV Foundation framework (Chapter 15). You get no help with interface (except for displaying in your interface what the camera “sees”), but you get vastly more detailed control than UIImagePickerController can give you; for example, for stills, you can control focus and exposure directly and independently, and for video, you can determine the quality, size, and frame rate of the resulting movie. You can also capture audio, of course.
The heart of all AV Foundation capture operations is an AVCaptureSession object. You configure this and provide it as desired with inputs (such as a camera) and outputs (such as a file); then you call startRunning to begin the actual capture. You can reconfigure an AVCaptureSession, possibly adding or removing an input or output, while it is running — indeed, doing so is far more efficient than stopping the session and starting it again — but you should wrap your configuration changes in beginConfiguration and commitConfiguration.
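For instance, here’s a minimal sketch of reconfiguring a running session; the self.movieOutput property (an AVCaptureMovieFileOutput) is an assumption for the sake of illustration:

[self.sess beginConfiguration];
if ([self.sess canAddOutput:self.movieOutput])
    [self.sess addOutput:self.movieOutput]; // add an output while the session runs
self.sess.sessionPreset = AVCaptureSessionPresetHigh; // adjust the preset too
[self.sess commitConfiguration]; // both changes take effect together

To get started, though, all we need is a session with the camera as its input, plus an AVCaptureVideoPreviewLayer in our interface to display what the camera “sees”: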
self.sess = [AVCaptureSession new];
// add input
AVCaptureDevice* cam =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput* input =
    [AVCaptureDeviceInput deviceInputWithDevice:cam error:nil];
[self.sess addInput:input];
// create preview layer
AVCaptureVideoPreviewLayer* lay =
    [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.sess];
lay.frame = // whatever
[self.view.layer addSublayer:lay];
self.previewLayer = lay; // keep a reference so we can remove it later
// go!
[self.sess startRunning];
Presto! Our interface now contains a window on the world, so to speak. Next, let’s give the user a way to snap a still photo. To do that, when we configure the session we also add an output, an AVCaptureStillImageOutput set to produce JPEG data; we can also use the session’s preset to govern the size of the captured image:
self.sess = [AVCaptureSession new];
self.sess.sessionPreset = AVCaptureSessionPreset640x480;
self.snapper = [AVCaptureStillImageOutput new];
self.snapper.outputSettings =
    @{AVVideoCodecKey: AVVideoCodecJPEG, AVVideoQualityKey: @0.6};
[self.sess addOutput:self.snapper];
// ... and the rest is as before ...
When the user asks to snap a picture, we send captureStillImageAsynchronouslyFromConnection:completionHandler: to our AVCaptureStillImageOutput object. The first argument is an AVCaptureConnection; to find it, we ask the output for its connection that is currently inputting video. The second argument is the block that will be called, possibly on a background thread, when the image data is ready; in the block, we capture the data into a UIImage and, stepping out to the main thread (Chapter 25), we construct in the interface a UIImageView containing that image, in place of the AVCaptureVideoPreviewLayer we were displaying previously:
if (!self.sess || !self.sess.isRunning)
    return;
AVCaptureConnection *vc =
    [self.snapper connectionWithMediaType:AVMediaTypeVideo];
[self.snapper captureStillImageAsynchronouslyFromConnection:vc
    completionHandler: ^(CMSampleBufferRef buf, NSError *err) {
        NSData* data =
            [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buf];
        UIImage* im = [UIImage imageWithData:data];
        dispatch_async(dispatch_get_main_queue(), ^{
            UIImageView* iv =
                [[UIImageView alloc] initWithFrame:self.previewLayer.frame];
            iv.contentMode = UIViewContentModeScaleAspectFit;
            iv.image = im;
            [self.view addSubview: iv];
            [self.iv removeFromSuperview];
            self.iv = iv;
            [self.previewLayer removeFromSuperlayer];
            self.previewLayer = nil;
            [self.sess stopRunning];
        });
    }];
My favorite part of that example is that capturing the image emits, automatically, the built-in “shutter” sound!
Our code has not illustrated setting the focus, changing the flash settings, and so forth; doing so is not difficult (see the class documentation on AVCaptureDevice), but note that you should wrap such changes in calls to lockForConfiguration: and unlockForConfiguration. Also, always call the corresponding is...Supported: method before setting any feature of an AVCaptureDevice; for example, before setting the flashMode, call isFlashModeSupported: for that mode. You can turn on the LED “torch” by setting the back camera’s torchMode to AVCaptureTorchModeOn, even if no AVCaptureSession is running.
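For instance, here’s a hedged sketch of turning on the torch; it assumes that the default video capture device is the back camera (which it normally is):

AVCaptureDevice* cam =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([cam hasTorch] && [cam isTorchModeSupported:AVCaptureTorchModeOn]) {
    if ([cam lockForConfiguration:nil]) { // lock before configuring
        cam.torchMode = AVCaptureTorchModeOn; // the LED lights up
        [cam unlockForConfiguration];
    }
}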
You can stop the flow of video data by setting the AVCaptureConnection’s enabled to NO, and there are some other interesting AVCaptureConnection features, mostly involving stabilization of the video image (not relevant to the example, because a preview layer’s video isn’t stabilized). Plus, AVCaptureVideoPreviewLayer provides methods for converting between layer coordinates and capture device coordinates; without such methods, this can be a very difficult problem to solve. New in iOS 7, you can scan bar codes, shoot video at 60 frames per second (on some devices), and more.
AV Foundation’s control over the camera, and its ability to process incoming data — especially video data — goes far deeper than there is room to discuss here, so consult the documentation; in particular, see the “Media Capture” chapter of the AV Foundation Programming Guide. There are also excellent WWDC videos on AV Foundation, and some fine sample code; I found Apple’s AVCam example very helpful while preparing this discussion.