Chapter 2. Drawing

Many UIView subclasses, such as a UIButton or a UILabel, know how to draw themselves; sooner or later, though, you’re going to want to do some drawing of your own. You can draw an image in code, and then display it in your interface in a class that knows how to show an image, such as a UIImageView or a UIButton. A pure UIView is all about drawing, and it leaves that drawing largely up to you; your code determines what the view draws, and hence what it looks like in your interface.

This chapter discusses the mechanics of drawing. Don’t be afraid to write drawing code of your own! It isn’t difficult, and it’s often the best way to make your app look the way you want it to.

(For how to draw text, see Chapter 12.)

UIImage and UIImageView

The basic general UIKit image class is UIImage. UIImage can read a file from disk, so if an image does not need to be created dynamically, but has already been created before your app runs, then drawing may be as simple as providing an image file as a resource in your app’s bundle. The system knows how to work with many standard image file types, such as TIFF, JPEG, GIF, and PNG; when an image file is to be included in your app bundle, iOS has a special affinity for PNG files, and you should prefer them whenever possible. You can also obtain image data in some other way, such as by downloading it, and transform this into a UIImage. Conversely, you can draw your own image for display in your interface or for saving to disk (image file output is discussed in Chapter 23).

In the very simplest case, an image file in your app's bundle can be obtained through the UIImage class method imageNamed:. Now that there are asset catalogs, this method looks in two places for the image:

  • The asset catalog: an image set whose name matches the name you supply.
  • The top level of the app bundle: an image file whose name matches the name you supply; for a PNG file, the .png extension may be omitted from the name.

A nice thing about imageNamed: is that memory management is handled for you: the image data may be cached in memory, and if you ask for the same image by calling imageNamed: again later, the cached data may be supplied immediately. Alternatively, you can read an image file from anywhere in your app’s bundle directly and without caching, using the class method imageWithContentsOfFile: or the instance method initWithContentsOfFile:, both of which expect a pathname string; you can get a reference to your app’s bundle with [NSBundle mainBundle], and NSBundle then provides instance methods for getting the pathname of a file within the bundle, such as pathForResource:ofType:.
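
For example, here's a sketch that loads an image file directly, without caching (it assumes a file named Mars.png at the top level of the app bundle):

NSString* path =
    [[NSBundle mainBundle] pathForResource:@"Mars" ofType:@"png"];
UIImage* mars = [UIImage imageWithContentsOfFile:path];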

Methods that specify a resource in the app bundle, such as imageNamed: and pathForResource:ofType:, respond to suffixes in the name of an actual resource file. On a device with a double-resolution screen, when an image is obtained by name from the app bundle, a file with the same name extended by @2x, if there is one, will be used automatically, with the resulting UIImage marked as double-resolution by assigning it a scale property value of 2. In this way, your app can contain both a single-resolution and a double-resolution version of an image file; on the double-resolution display device, the double-resolution version of the image is used, and is drawn at the same size as the single-resolution image. Thus, on the double-resolution screen, your code continues to work without change, but your images look sharper.

Similarly, a file with the same name extended by ~ipad will automatically be used if the app is running on an iPad. You can use this in a universal app to supply different images automatically depending on whether the app runs on an iPhone or iPod touch, on the one hand, or on an iPad, on the other. (This is true not just for images but for any resource obtained by name from the bundle. See Apple’s Resource Programming Guide.)

One of the great benefits of an asset catalog, though, is that you can forget all about those name suffix conventions. An asset catalog knows when to use an alternate image within an image set, not from its name, but from its place in the catalog. Put the single- and double-resolution alternatives into the slots marked “1x” and “2x” respectively. For a distinct iPad version of an image, switch the Devices pop-up menu in the image set’s Attributes inspector from Universal to Device Specific and check the boxes for the cases you want to distinguish; separate slots for those device types will appear in the asset catalog.

Many built-in Cocoa interface objects will accept a UIImage as part of how they draw themselves; for example, a UIButton can display an image, and a UINavigationBar or a UITabBar can have a background image. I’ll discuss those in Chapter 12. But when you simply want an image to appear in your interface, you’ll probably hand it to a UIImageView, which has the most knowledge and flexibility with regard to displaying images and is intended for this purpose.

When you configure an interface object’s image in the nib editor, you’re instructing that interface object to call imageNamed: to fetch its image, and everything about how imageNamed: conducts the search for the image will be true of how the interface object finds its image at runtime. The nib editor supplies some shortcuts in this regard: the Attributes inspector of an interface object that can have an image will have a pop-up menu listing known images in your project, and such images are also listed in the Media library (Command-Option-Control-4). Media library images can often be dragged onto an interface object in the canvas to assign them, and if you drag a Media library image into a plain view, it is transformed into a UIImageView displaying that image.

A UIImageView can actually have two images, one assigned to its image property and the other assigned to its highlightedImage property; the value of the UIImageView’s highlighted property dictates which of the two is displayed at any given moment. A UIImageView does not automatically highlight itself merely because the user taps it, the way a button does. However, there are certain situations where a UIImageView will respond to the highlighting of its surroundings; for example, within a table view cell, a UIImageView will show its highlighted image when the cell is highlighted. You can, of course, also use the notion of UIImageView highlighting yourself however you like.
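
For instance, here's a sketch (the image names are hypothetical):

UIImageView* iv = [UIImageView new];
iv.image = [UIImage imageNamed:@"Frog"];
iv.highlightedImage = [UIImage imageNamed:@"FrogHighlighted"];
iv.highlighted = YES; // the highlighted image is now displayed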

A UIImageView is a UIView, so it can have a background color in addition to its image, it can have an alpha (transparency) value, and so forth (see Chapter 1). A UIImageView without a background color is invisible except for its image, so the image simply appears in the interface, without the user being aware that it resides in a rectangular host. An image may have areas that are transparent, and a UIImageView will respect this; thus an image of any shape can appear. A UIImageView without an image and without a background color is invisible, so you could start with an empty UIImageView in the place where you will later need an image and subsequently assign the image in code. You can assign a new image to substitute one image for another.

How a UIImageView draws its image depends upon the setting of its contentMode property. (The contentMode property is inherited from UIView; I’ll discuss its more general purpose later in this chapter.) For example, UIViewContentModeScaleToFill means the image’s width and height are set to the width and height of the view, thus filling the view completely even if this alters the image’s aspect ratio; UIViewContentModeCenter means the image is drawn centered in the view without altering its size. The best way to get a feel for the meanings of the various contentMode settings is to assign a UIImageView a small image in a nib and then, in the Attributes inspector, change the Mode pop-up menu, and see where and how the image draws itself.

You should also pay attention to a UIImageView’s clipsToBounds property; if it is NO, its image, even if it is larger than the image view and even if it is not scaled down by the contentMode, may be displayed in its entirety, extending beyond the image view itself.

When creating a UIImageView in code, you can take advantage of a convenience initializer, initWithImage: (or initWithImage:highlightedImage:). The default contentMode is UIViewContentModeScaleToFill, but the image is not initially scaled; rather, the view itself is sized to match the image. You will still probably need to position the UIImageView correctly in its superview. In this example, I’ll put a picture of the planet Mars in the center of the app’s interface (Figure 2-1):

UIImageView* iv =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Mars"]];
[mainview addSubview: iv];
iv.center = CGPointMake(CGRectGetMidX(iv.superview.bounds),
                        CGRectGetMidY(iv.superview.bounds));
iv.frame = CGRectIntegral(iv.frame);

What happens to the size of an existing UIImageView when you assign an image to it depends on whether the image view is using autolayout. If it isn't, or if its size is constrained absolutely, the image view's size doesn't change. But under autolayout, the size of the new image becomes the image view's new intrinsicContentSize, so the image view will adopt the image's size unless other constraints prevent it. (If a UIImageView is assigned both an image and a highlightedImage, and if they are of different sizes, the view's intrinsicContentSize adopts the size of the image.)

Resizable Images

A UIImage can be transformed into a resizable image by sending it the resizableImageWithCapInsets:resizingMode: message. The capInsets: argument is a UIEdgeInsets, a struct consisting of four floats representing inset values starting at the top and proceeding counterclockwise — top, left, bottom, right. They represent distances inward from the edges of the image. In a context larger than the image, a resizable image can behave in one of two ways, depending on the resizingMode: value:

UIImageResizingModeTile
The interior of the image (the region inside the cap insets) is tiled, repeated, to fill the interior of the result; each edge region is tiled to form the corresponding edge. The corner regions are drawn as is.
UIImageResizingModeStretch
The interior of the image is stretched once to fill the interior of the result; each edge region is stretched once to form the corresponding edge. The corner regions are drawn as is.

Certain places in the interface require a resizable image; for example, a custom image that serves as the track of a slider or progress view (Chapter 12) must be resizable, so that it can fill a space of any length. And there can frequently be other situations where you want to fill a background by tiling or stretching an existing image.

In these examples, assume that self.iv is a UIImageView with absolute height and width (so that it won’t adopt the size of its image) and with a contentMode of UIViewContentModeScaleToFill (so that the image will exhibit resizing behavior). First, I’ll illustrate tiling an entire image (Figure 2-2); note that the capInsets: is UIEdgeInsetsZero:

UIImage* mars = [UIImage imageNamed:@"Mars"];
UIImage* marsTiled =
    [mars resizableImageWithCapInsets: UIEdgeInsetsZero
                         resizingMode: UIImageResizingModeTile];
self.iv.image = marsTiled;

Now we’ll tile the interior of the image, changing the capInsets: argument from the previous code (Figure 2-3):

UIImage* marsTiled = [mars resizableImageWithCapInsets:
                      UIEdgeInsetsMake(mars.size.height/4.0,
                                       mars.size.width/4.0,
                                       mars.size.height/4.0,
                                       mars.size.width/4.0)
                      resizingMode: UIImageResizingModeTile];

Next, I’ll illustrate stretching. We’ll start by changing just the resizingMode: from the previous code (Figure 2-4):

UIImage* marsTiled = [mars resizableImageWithCapInsets:
                      UIEdgeInsetsMake(mars.size.height/4.0,
                                       mars.size.width/4.0,
                                       mars.size.height/4.0,
                                       mars.size.width/4.0)
                      resizingMode: UIImageResizingModeStretch];

A common stretching strategy is to make almost half the original image serve as a cap inset, leaving just a pixel or two in the center to fill the entire interior of the resulting image (Figure 2-5):

UIImage* marsTiled = [mars resizableImageWithCapInsets:
                      UIEdgeInsetsMake(mars.size.height/2.0 - 1,
                                       mars.size.width/2.0 - 1,
                                       mars.size.height/2.0 - 1,
                                       mars.size.width/2.0 - 1)
                      resizingMode: UIImageResizingModeStretch];

You should also experiment with different scaling contentMode settings. In the preceding example, if the image view’s contentMode is UIViewContentModeScaleAspectFill, and if the image view’s clipsToBounds is YES, we get a sort of gradient effect, because the top and bottom of the stretched image are outside the image view and aren’t drawn (Figure 2-6).

New in Xcode 5, you can configure a resizable image without code, in the project itself. This is a feature of asset catalogs, and another great reason to use them. It is often the case that a particular image will be used in your app chiefly as a resizable image, and always with the same capInsets: and resizingMode:, so it makes sense to configure this image once rather than having to repeat the same code. And even if an image is configured in the asset catalog to be resizable, it can appear in your interface as a normal image as well — for example, if you use it to initialize an image view, or assign it to an image view under autolayout, or if the image view doesn't scale its image (because it has a non-scaling contentMode, such as UIViewContentModeCenter).

To configure an image in an asset catalog as a resizable image, select the image and, in the Slicing section of the Attributes inspector, change the Slices pop-up menu to Horizontal, Vertical, or Horizontal and Vertical. You can specify the resizingMode with another pop-up menu. You can work numerically, or click Show Slicing at the lower right of the canvas and work graphically. The graphical editor is zoomable, so zoom in to work comfortably.

The reason this feature is called Slicing and not Resizing is that it can do more than resizableImageWithCapInsets:resizingMode: can do: it lets you specify the end caps separately from the tiled or stretched region, with the rest of the image being sliced out. The meaning of your settings is intuitively clear from the graphical slicing editor. In Figure 2-7, for example, the dark areas at the top left, top right, bottom left, and bottom right will be drawn as is. The narrow bands will be stretched, and the small rectangle at the top center will be stretched to fill most of the interior. But the rest of the image, the large central area covered by a sort of gauze curtain, will be omitted entirely. The result is shown in Figure 2-8.

Image Rendering Mode

Several places in an iOS app's interface automatically treat an image as a transparency mask, also known as a template. This means that the image's color values are ignored, and only the transparency (alpha) value of each pixel matters. The image shown on the screen is formed by combining the image's transparency values with a single tint color. Such, for example, is the behavior of a tab bar item's image.

New in iOS 7, the way an image will be treated is a property of the image, its renderingMode. This property is read-only; to change it, derive from an existing image a new image with a different rendering mode, by calling imageWithRenderingMode:. The rendering mode values are:

  • UIImageRenderingModeAutomatic
  • UIImageRenderingModeAlwaysOriginal
  • UIImageRenderingModeAlwaysTemplate

The default is UIImageRenderingModeAutomatic, which results in the old behavior: such an image is drawn normally everywhere except in certain limited contexts, where it is used as a transparency mask.

With the renderingMode property, you can force an image to be drawn normally, even in a context that would usually treat it as a transparency mask. You can also do the opposite: you can force an image to be treated as a transparency mask, even in a context that would otherwise treat it normally. Apple wants iOS 7 apps to adopt more of a transparency mask look throughout the interface; some of the icons in the Settings app, for example, appear to be transparency masks (Figure 2-9).

To accompany this feature, iOS 7 gives every UIView a tintColor, which will be used to tint any template images it contains. Moreover, this tintColor by default is inherited down the view hierarchy, and indeed throughout the entire app, starting with the UIWindow (Chapter 1). Thus, assigning your app’s main window a tint color is probably one of the few changes you’ll make to the window; otherwise, your app adopts the system’s blue tint color. (Alternatively, if you’re using a main storyboard, set the tint color in its File inspector.) Individual views can be assigned their own tint color, which is inherited by their subviews. Figure 2-10 shows two buttons displaying the same background image, one in normal rendering mode, the other in template rendering mode, in an app whose window tint color is red. (I’ll say more about template images and the tintColor in Chapter 12.)
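
For instance, here's a sketch combining the two features (the image name is hypothetical, and self.window is assumed to be the app delegate's window):

self.window.tintColor = [UIColor redColor]; // inherited throughout the app
UIImage* im = [[UIImage imageNamed:@"Smiley"]
    imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
UIImageView* iv = [[UIImageView alloc] initWithImage:im];
// iv now draws the image as a transparency mask, tinted red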

Graphics Contexts

UIImageView draws an image for you and takes care of all the details; in many cases, it will be all you'll need. Eventually, though, you may want to create some drawing yourself, directly, in code. To do so, you will always need a graphics context.

A graphics context is basically a place you can draw. Conversely, you can't draw in code unless you've got a graphics context. There are several ways in which you might obtain a graphics context; in this chapter I will concentrate on two, which have proven in my experience to be far and away the most common:

  • You create an image context, by calling UIGraphicsBeginImageContextWithOptions; what you draw there can then be captured as a UIImage.
  • Cocoa creates a context and hands it to you: this happens when your UIView subclass's drawRect: is called, and when its drawLayer:inContext: is called.

Moreover, at any given moment there either is or is not a current graphics context:

  • When you call UIGraphicsBeginImageContextWithOptions, the image context you create becomes the current graphics context.
  • When a UIView's drawRect: is called, the view's drawing context is already the current graphics context.
  • When drawLayer:inContext: is called, the context: argument is a graphics context, but it is not the current graphics context.

What beginners find most confusing about drawing is that there are two separate sets of tools with which you can draw, and they take different attitudes toward the context in which they will draw:

UIKit

Various Objective-C classes know how to draw themselves; these include UIImage, NSString (for drawing text), UIBezierPath (for drawing shapes), and UIColor. Some of these classes provide convenience methods with limited abilities; others are extremely powerful. In many cases, UIKit will be all you’ll need.

With UIKit, you can draw only into the current context. So if you’re in a UIGraphicsBeginImageContextWithOptions or drawRect: situation, you can use the UIKit convenience methods directly; there is a current context and it’s the one you want to draw into. If you’ve been handed a context: argument, on the other hand, then if you want to use the UIKit convenience methods, you’ll have to make that context the current context; you do this by calling UIGraphicsPushContext (and be sure to restore things with UIGraphicsPopContext later).

Core Graphics

This is the full drawing API. Core Graphics, often referred to as Quartz, or Quartz 2D, is the drawing system that underlies all iOS drawing — UIKit drawing is built on top of it — so it is low-level and consists of C functions. There are a lot of them! This chapter will familiarize you with the fundamentals; for complete information, you’ll want to study Apple’s Quartz 2D Programming Guide.

With Core Graphics, you must specify a graphics context (a CGContextRef) to draw into, explicitly, in every function call. If you’ve been handed a context: argument, then that’s probably the graphics context you want to draw into. But in a UIGraphicsBeginImageContextWithOptions or drawRect: situation, you have no reference to a context; to use Core Graphics, you need to get such a reference. Since the context you want to draw into is the current graphics context, you call UIGraphicsGetCurrentContext to get the needed reference.

So we have two sets of tools and three ways in which a context might be supplied; that makes six ways of drawing. To clarify, I’ll now demonstrate all six of them! Without worrying just yet about the actual drawing commands, focus your attention on how the context is specified and on whether we’re using UIKit or Core Graphics. First, I’ll draw a blue circle by implementing a UIView subclass’s drawRect:, using UIKit to draw into the current context, which Cocoa has already prepared for me:

- (void) drawRect: (CGRect) rect {
    UIBezierPath* p =
        [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0,0,100,100)];
    [[UIColor blueColor] setFill];
    [p fill];
}

Now I’ll do the same thing with Core Graphics; this will require that I first get a reference to the current context:

- (void) drawRect: (CGRect) rect {
    CGContextRef con = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(con, CGRectMake(0,0,100,100));
    CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
    CGContextFillPath(con);
}

Next, I’ll implement a UIView subclass’s drawLayer:inContext:. In this case, we’re handed a reference to a context, but it isn’t the current context. So I have to make it the current context in order to use UIKit:

- (void)drawLayer:(CALayer*)lay inContext:(CGContextRef)con {
    UIGraphicsPushContext(con);
    UIBezierPath* p =
        [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0,0,100,100)];
    [[UIColor blueColor] setFill];
    [p fill];
    UIGraphicsPopContext();
}

To use Core Graphics in drawLayer:inContext:, I simply keep referring to the context I was handed:

- (void)drawLayer:(CALayer*)lay inContext:(CGContextRef)con {
    CGContextAddEllipseInRect(con, CGRectMake(0,0,100,100));
    CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
    CGContextFillPath(con);
}

Finally, for the sake of completeness, let’s make a UIImage of a blue circle. We can do this at any time (we don’t need to wait for some particular method to be called) and in any class (we don’t need to be in a UIView subclass). The resulting UIImage (here called im) is suitable anywhere you would use a UIImage. For instance, you could hand it over to a visible UIImageView as its image, thus causing the image to appear onscreen. Or you could save it as a file. Or, as I’ll explain in the next section, you could use it in another drawing.

First, I’ll draw my image using UIKit:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(100,100), NO, 0);
UIBezierPath* p =
    [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0,0,100,100)];
[[UIColor blueColor] setFill];
[p fill];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// im is the blue circle image, do something with it here ...

Here’s the same thing using Core Graphics:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(100,100), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextAddEllipseInRect(con, CGRectMake(0,0,100,100));
CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
CGContextFillPath(con);
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// im is the blue circle image, do something with it here ...

You may be wondering about the arguments to UIGraphicsBeginImageContextWithOptions. The first argument is obviously the size of the image to be created. The second argument declares whether the image should be opaque; if I had passed YES instead of NO here, my image would have a black background, which I don’t want. The third argument specifies the image scale, corresponding to the UIImage scale property I discussed earlier; by passing 0, I’m telling the system to set the scale for me in accordance with the main screen resolution, so my image will look good on both single-resolution and double-resolution devices.

You don’t have to use UIKit or Core Graphics exclusively; on the contrary, you can intermingle UIKit calls and Core Graphics calls to operate on the same graphics context. They merely represent two different ways of telling a graphics context what to do.

UIImage Drawing

A UIImage provides methods for drawing itself into the current context. We know how to obtain a UIImage, and we know how to obtain an image context and make it the current context, so we can experiment with these methods. Here, I'll make a UIImage consisting of two pictures of Mars side by side (Figure 2-11):

UIImage* mars = [UIImage imageNamed:@"Mars"];
CGSize sz = mars.size;
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*2, sz.height), NO, 0);
[mars drawAtPoint:CGPointMake(0,0)];
[mars drawAtPoint:CGPointMake(sz.width,0)];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Observe that image scaling works perfectly in that example. If we have a single/double-resolution pair for our original Mars image, the correct one for the current device is used, and is assigned the correct scale value. Our call to UIGraphicsBeginImageContextWithOptions has a third argument of 0, so the image context that we are drawing into also has the correct scale. And the image that results from calling UIGraphicsGetImageFromCurrentImageContext has the correct scale as well. Thus, this same code produces an image that looks correct on the current device, whatever its screen resolution may be.

Additional UIImage methods let you scale an image into a desired rectangle as you draw, and specify the compositing (blend) mode whereby the image should combine with whatever is already present. To illustrate, I’ll create an image showing Mars centered in another image of Mars that’s twice as large, using the Multiply blend mode (Figure 2-12):

UIImage* mars = [UIImage imageNamed:@"Mars"];
CGSize sz = mars.size;
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*2, sz.height*2), NO, 0);
[mars drawInRect:CGRectMake(0,0,sz.width*2,sz.height*2)];
[mars drawInRect:CGRectMake(sz.width/2.0, sz.height/2.0, sz.width, sz.height)
       blendMode:kCGBlendModeMultiply alpha:1.0];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

There is no UIImage drawing method for specifying the source rectangle — that is, for specifying that you want to extract a smaller region of the original image. You can work around this by specifying a smaller graphics context and positioning the image drawing so that the desired region falls into it. For example, to obtain an image of the right half of Mars, you’d make a graphics context half the width of the mars image, and then draw mars shifted left, so that only its right half intersects the graphics context. There is no harm in doing this, and it’s a perfectly standard strategy; the left half of mars simply isn’t drawn (Figure 2-13):

UIImage* mars = [UIImage imageNamed:@"Mars"];
CGSize sz = mars.size;
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width/2.0, sz.height), NO, 0);
[mars drawAtPoint:CGPointMake(-sz.width/2.0, 0)];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGImage Drawing

The Core Graphics version of UIImage is CGImage (actually a CGImageRef). They are easily converted to one another: a UIImage has a CGImage property that accesses its Quartz image data, and you can make a UIImage from a CGImage using imageWithCGImage: or initWithCGImage: (in real life, you are likely to use their more configurable siblings, imageWithCGImage:scale:orientation: and initWithCGImage:scale:orientation:).

A CGImage lets you create a new image directly from a rectangular region of the original image, which you can’t do with UIImage. (A CGImage has other powers a UIImage doesn’t have; for example, you can apply an image mask to a CGImage.) I’ll demonstrate by splitting the image of Mars in half and drawing the two halves separately (Figure 2-14). Observe that we are now operating in the CFTypeRef world and must take care to manage memory manually:

UIImage* mars = [UIImage imageNamed:@"Mars"];
// extract each half as a CGImage
CGSize sz = mars.size;
CGImageRef marsLeft = CGImageCreateWithImageInRect([mars CGImage],
                       CGRectMake(0,0,sz.width/2.0,sz.height));
CGImageRef marsRight = CGImageCreateWithImageInRect([mars CGImage],
                        CGRectMake(sz.width/2.0,0,sz.width/2.0,sz.height));
// draw each CGImage into an image context
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*1.5, sz.height), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con,
                   CGRectMake(0,0,sz.width/2.0,sz.height), marsLeft);
CGContextDrawImage(con,
                   CGRectMake(sz.width,0,sz.width/2.0,sz.height), marsRight);
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(marsLeft); CGImageRelease(marsRight);

But there’s a problem with that example: the drawing is upside-down! It isn’t rotated; it’s mirrored top to bottom, or, to use the technical term, flipped. This phenomenon can arise when you create a CGImage and then draw it with CGContextDrawImage, and is due to a mismatch in the native coordinate systems of the source and target contexts.

There are various ways of compensating for this mismatch between the coordinate systems. One is to draw the CGImage into an intermediate UIImage and extract another CGImage from that. Example 2-1 presents a utility function for doing this.
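
Here's a minimal sketch of such a utility; drawing with CGContextDrawImage into a UIKit image context mirrors the image vertically, so the CGImage we extract is pre-flipped, and drawing it later with CGContextDrawImage flips it back:

CGImageRef flip (CGImageRef im) {
    CGSize sz = CGSizeMake(CGImageGetWidth(im), CGImageGetHeight(im));
    UIGraphicsBeginImageContextWithOptions(sz, NO, 0);
    CGContextDrawImage(UIGraphicsGetCurrentContext(),
                       CGRectMake(0, 0, sz.width, sz.height), im);
    // the result is owned by an autoreleased UIImage; retain it
    // if it must outlive the current autorelease pool
    CGImageRef result = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
    UIGraphicsEndImageContext();
    return result;
}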

Armed with the utility function from Example 2-1, we can fix our calls to CGContextDrawImage in the previous example so that they draw the halves of Mars the right way up:

CGContextDrawImage(con, CGRectMake(0,0,sz.width/2.0,sz.height),
                   flip(marsLeft));
CGContextDrawImage(con, CGRectMake(sz.width,0,sz.width/2.0,sz.height),
                   flip(marsRight));

However, we’ve still got a problem: on a double-resolution device, if there is a double-resolution variant of our image file, the drawing comes out all wrong. The reason is that we are obtaining our initial Mars image using imageNamed:, which returns a UIImage that compensates for the doubled size of a double-resolution image by setting its own scale property to match. But a CGImage doesn’t have a scale property, and knows nothing of the fact that the image dimensions are doubled! Therefore, on a double-resolution device, the CGImage that we extract from our Mars UIImage by calling [mars CGImage] is twice as large (in each dimension) as mars.size, and all our calculations after that are wrong.

So, in extracting a desired piece of the CGImage, we must either multiply all appropriate values by the scale or express ourselves in terms of the CGImage’s dimensions. Here’s a version of our original code that draws correctly on either a single-resolution or a double-resolution device, and compensates for flipping:

UIImage* mars = [UIImage imageNamed:@"Mars"];
CGSize sz = mars.size;
// Derive CGImage and use its dimensions to extract its halves
CGImageRef marsCG = [mars CGImage];
CGSize szCG = CGSizeMake(CGImageGetWidth(marsCG), CGImageGetHeight(marsCG));
CGImageRef marsLeft =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(0,0,szCG.width/2.0,szCG.height));
CGImageRef marsRight =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(szCG.width/2.0,0,szCG.width/2.0,szCG.height));
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*1.5, sz.height), NO, 0);
// The rest is as before, calling flip() to compensate for flipping
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0,0,sz.width/2.0,sz.height),
                   flip(marsLeft));
CGContextDrawImage(con, CGRectMake(sz.width,0,sz.width/2.0,sz.height),
                   flip(marsRight));
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(marsLeft); CGImageRelease(marsRight);

Another solution is to wrap a CGImage in a UIImage and draw the UIImage instead of the CGImage. The UIImage can be formed in such a way as to compensate for scale: call imageWithCGImage:scale:orientation: as you form the UIImage from the CGImage. Moreover, by drawing a UIImage instead of a CGImage, we avoid the flipping problem! So here's an approach that deals with both flipping and scale (without calling the flip utility):

UIImage* mars = [UIImage imageNamed:@"Mars"];
CGSize sz = mars.size;
// Derive CGImage and use its dimensions to extract its halves
CGImageRef marsCG = [mars CGImage];
CGSize szCG = CGSizeMake(CGImageGetWidth(marsCG), CGImageGetHeight(marsCG));
CGImageRef marsLeft =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(0,0,szCG.width/2.0,szCG.height));
CGImageRef marsRight =
    CGImageCreateWithImageInRect(
        marsCG, CGRectMake(szCG.width/2.0,0,szCG.width/2.0,szCG.height));
UIGraphicsBeginImageContextWithOptions(
    CGSizeMake(sz.width*1.5, sz.height), NO, 0);
[[UIImage imageWithCGImage:marsLeft
                     scale:mars.scale
               orientation:UIImageOrientationUp]
 drawAtPoint:CGPointMake(0,0)];
[[UIImage imageWithCGImage:marsRight
                     scale:mars.scale
               orientation:UIImageOrientationUp]
 drawAtPoint:CGPointMake(sz.width,0)];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(marsLeft); CGImageRelease(marsRight);

Yet another solution to flipping is to apply a transform to the graphics context before drawing the CGImage, effectively flipping the context’s internal coordinate system. This is elegant, but can be confusing if there are other transforms in play. I’ll talk more about graphics context transforms later in this chapter.
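
For the record, here's a sketch of that approach (con is the context, and r is the CGRect we're drawing into):

CGContextSaveGState(con);
// flip the context vertically around the horizontal midline of r
CGContextTranslateCTM(con, 0, CGRectGetMinY(r) + CGRectGetMaxY(r));
CGContextScaleCTM(con, 1.0, -1.0);
CGContextDrawImage(con, r, marsLeft); // now drawn right way up
CGContextRestoreGState(con);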

Snapshots

An entire view — anything from a single button to your whole interface, complete with its contained hierarchy of views — can be drawn into the current graphics context by calling the UIView instance method drawViewHierarchyInRect:afterScreenUpdates:. This method is new in iOS 7 (and is much faster than the CALayer method renderInContext:, which it effectively replaces). The result is a snapshot of the original view: it looks like the original view, but it's basically just a bitmap image of it, a lightweight visual duplicate. Snapshots are useful because of the dynamic nature of the iOS interface. For example, you might place a snapshot of a view in your interface in front of the real view to hide what's happening, or use it during an animation to present the illusion of a view moving when in fact it's just a snapshot.

Figure 2-15 shows how a snapshot is used in one of my apps. The user can tap any of three color swatches to edit that color. When the color-editing interface appears, I want the user to have the impression that it is just temporary, with the original interface still lurking behind it. So the color-editing interface shows the original interface behind it. But the original interface mustn’t be too distracting, so it’s blurred. In reality, what’s blurred is a snapshot of the original interface.

Here’s how the snapshot in Figure 2-15 is created:

UIGraphicsBeginImageContextWithOptions(vc1.view.frame.size, YES, 0);
[vc1.view drawViewHierarchyInRect: vc1.view.frame afterScreenUpdates:NO];
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The image im is then blurred, and the blurred image is made the image of a UIImageView inserted behind the color-editing interface. How to achieve a blur effect is another question. I might have used a CIFilter (the subject of the next section), but it’s too slow; instead, I used a UIImage category distributed by Apple as part of the Blurring and Tinting an Image sample code.

An even faster way to obtain a snapshot of a view is to use the UIView (or UIScreen) instance method snapshotViewAfterScreenUpdates:. The result is a UIView, not a UIImage; it’s rather like a UIImageView that knows how to draw only one image, namely the snapshot. Such a snapshot view will typically be used as is, but you can enlarge its bounds and the snapshot image will stretch. If you want the stretched snapshot to behave like a resizable image, call resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets: instead. It is perfectly reasonable to make a snapshot view from a snapshot view.
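
For instance, here's a sketch (v is some view already in the interface):

UIView* snap = [v snapshotViewAfterScreenUpdates:YES];
snap.frame = v.frame;
[v.superview addSubview:snap]; // the snapshot now covers the real view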

CIFilter and CIImage

The “CI” in CIFilter and CIImage stands for Core Image, a technology for transforming images through mathematical filters. Core Image started life on the desktop (OS X); some of the filters available on the desktop aren't available in iOS (perhaps because they are too intensive mathematically for a mobile device).

A filter is a CIFilter. The available filters (there are about 120 of them, with nearly two dozen being new in iOS 7) fall naturally into several categories, such as pattern and gradient generators, compositing (blend) filters, color adjustments, and geometric transformations and distortions.

The basic use of a CIFilter is quite simple; it essentially works as if a filter were a kind of dictionary consisting of keys and values. You create the filter with filterWithName:, supplying the string name of a filter; to learn what these names are, consult Apple’s Core Image Filter Reference, or call the CIFilter class method filterNamesInCategories: with a nil argument. Each filter has a small number of keys and values that determine its behavior. You can learn about these keys entirely in code, but typically you’ll consult the documentation. For each key that you’re interested in, you supply a key–value pair, either by calling setValue:forKey: or by supplying the keys and values as you specify the filter name by calling filterWithName:keysAndValues:. In supplying values, a number must be wrapped up as an NSNumber, and there are a few supporting classes such as CIVector (like CGPoint and CGRect combined) and CIColor, whose use is easy to grasp.

A CIFilter’s keys include any image or images on which the filter is to operate; such an image must be a CIImage. You can obtain a CIImage as the output of a filter; thus filters can be chained together. But what about the first filter in the chain? Where will its CIImage come from? You can obtain a CIImage from a CGImage with initWithCGImage:, and you can obtain a CGImage from a UIImage as its CGImage property.

Warning

Do not attempt, as a shortcut, to obtain a CIImage directly from a UIImage by calling the UIImage instance method CIImage. This method does not transform a UIImage into a CIImage! It merely points to the CIImage that already backs the UIImage, and your images are not backed by a CIImage, but rather by a CGImage. I’ll explain where a CIImage-backed UIImage comes from in just a moment.

As you build a chain of filters, nothing actually happens. The only calculation-intensive move comes at the very end, when you transform the final CIImage in the chain into a bitmap drawing. There are two ways to do this:

  • Create a CIContext (by calling contextWithOptions:) and then call createCGImage:fromRect:, handing it the final CIImage as the first argument. The only mildly tricky thing here is that a CIImage doesn’t have a frame or bounds; it has an extent. You will often use this as the second argument to createCGImage:fromRect:. The final output CGImage is ready for any purpose, such as for display in your app, for transformation into a UIImage, or for use in further drawing.
  • Create a UIImage directly from the final CIImage by calling one of these methods:

    • imageWithCIImage:
    • initWithCIImage:
    • imageWithCIImage:scale:orientation:
    • initWithCIImage:scale:orientation:

    You must then draw the UIImage into some graphics context. That last step is essential; the CIImage is not transformed into a bitmap until you do it. Thus, a UIImage generated from imageWithCIImage: is not suitable for display directly in a UIImageView; it contains no drawing of its own. It is useful for drawing, not for display.

To illustrate, I’ll start with an ordinary photo of myself (it’s true I’m wearing a motorcycle helmet, but it’s still ordinary) and create a circular vignette effect (Figure 2-16). We derive from the image of me (moi) a CGImage and from there a CIImage (moi2). We use a CIFilter to form a radial gradient between the default colors of white and black. Then we use a second CIFilter that treats the radial gradient as a mask for blending between the photo of me and a default clear background: where the radial gradient is white (everything inside the gradient’s inner radius) we see just me, and where the radial gradient is black (everything outside the gradient’s outer radius) we see just the clear color, with a gradation in between, so that the image fades away in the circular band between the gradient’s radii. From the last CIImage output by this CIFilter chain, we form a CGImage (moi3), which we transform into a UIImage (moi4):

UIImage* moi = [UIImage imageNamed:@"Moi"];
CIImage* moi2 = [[CIImage alloc] initWithCGImage:moi.CGImage];
CGRect moiextent = moi2.extent;
// first filter
CIFilter* grad = [CIFilter filterWithName:@"CIRadialGradient"];
CIVector* center = [CIVector vectorWithX:moiextent.size.width/2.0
                                       Y:moiextent.size.height/2.0];
[grad setValue:center forKey:@"inputCenter"];
[grad setValue:@85 forKey:@"inputRadius0"];
[grad setValue:@100 forKey:@"inputRadius1"];
CIImage *gradimage = [grad valueForKey: @"outputImage"];
// second filter
CIFilter* blend = [CIFilter filterWithName:@"CIBlendWithMask"];
[blend setValue:moi2 forKey:@"inputImage"];
[blend setValue:gradimage forKey:@"inputMaskImage"];
// extract a bitmap
CGImageRef moi3 =
    [[CIContext contextWithOptions:nil]
     createCGImage:blend.outputImage
     fromRect:moiextent];
UIImage* moi4 = [UIImage imageWithCGImage:moi3];
CGImageRelease(moi3);

Instead of generating a CGImage from the last CIImage in the chain and transforming that into a UIImage, we could capture that CIImage as a UIImage directly — but then we must draw with it in order to generate the bitmap output of the filter chain. For example, we could draw it into an image context:

UIGraphicsBeginImageContextWithOptions(moiextent.size, NO, 0);
[[UIImage imageWithCIImage:blend.outputImage] drawInRect:moiextent];
UIImage* moi4 = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

A filter chain can be encapsulated into a single custom filter by subclassing CIFilter. Your subclass just needs to implement outputImage (and possibly other methods such as setDefaults), with private properties to make it key–value coding compliant for any input keys. Here’s our vignette filter as a simple CIFilter subclass, where the only input key is the input image:

@interface MyVignetteFilter ()
@property (nonatomic, strong) CIImage* inputImage;
@end
@implementation MyVignetteFilter
-(CIImage *)outputImage {
    CGRect inextent = self.inputImage.extent;
    CIFilter* grad = [CIFilter filterWithName:@"CIRadialGradient"];
    CIVector* center = [CIVector vectorWithX:inextent.size.width/2.0
                                           Y:inextent.size.height/2.0];
    [grad setValue:center forKey:@"inputCenter"];
    [grad setValue:@85 forKey:@"inputRadius0"];
    [grad setValue:@100 forKey:@"inputRadius1"];
    CIImage *gradimage = [grad valueForKey: @"outputImage"];

    CIFilter* blend = [CIFilter filterWithName:@"CIBlendWithMask"];
    [blend setValue:self.inputImage forKey:@"inputImage"];
    [blend setValue:gradimage forKey:@"inputMaskImage"];
    return blend.outputImage;
}
@end

And here’s how to use our CIFilter subclass and display its output:

CIFilter* vig = [MyVignetteFilter new];
CIImage* im =
    [CIImage imageWithCGImage:[UIImage imageNamed:@"Moi"].CGImage];
[vig setValue:im forKey:@"inputImage"];
CIImage* outim = vig.outputImage;
UIGraphicsBeginImageContextWithOptions(outim.extent.size, NO, 0);
[[UIImage imageWithCIImage:outim] drawInRect:outim.extent];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.iv.image = result;

Drawing a UIView

The examples of drawing so far in this chapter have mostly produced UIImage objects, chiefly by calling UIGraphicsBeginImageContextWithOptions to obtain a graphics context, suitable for display by a UIImageView or any other interface object that knows how to display an image. But, as I've already explained, a UIView provides a graphics context; whatever you draw into that graphics context will appear in that view. The technique here is to subclass UIView and implement the subclass's drawRect: method. At the time that drawRect: is called, the current graphics context has already been set to the view's own graphics context. You can use Core Graphics functions or UIKit convenience methods to draw into that context.

So, for example, let’s say we have a UIView subclass called MyView. You would then instantiate this class and get the instance into the view hierarchy. One way to do this would be to drag a UIView into a view in the nib editor and set its class to MyView in the Identity inspector; another would be to create the MyView instance and put it into the interface in code. The result is that, from time to time, MyView’s drawRect: will be called. This is your subclass, so you get to write the code that runs at that moment. Whatever you draw will appear inside the MyView instance. There will usually be no need to call super, since UIView’s own implementation of drawRect: does nothing.

The need to draw in real time, on demand, surprises some beginners, who worry that drawing may be a time-consuming operation. This can indeed be a reasonable consideration, and where the same drawing will be used in many places in your interface, it may well make sense to draw a UIImage instead, once, and then reuse that UIImage. In general, however, you should not optimize prematurely. The code for a drawing operation may appear verbose and yet be extremely fast. Moreover, the iOS drawing system is efficient; it doesn’t call drawRect: unless it has to (or is told to, through a call to setNeedsDisplay), and once a view has drawn itself, the result is cached so that the cached drawing can be reused instead of repeating the drawing operation from scratch. (Apple refers to this cached drawing as the view’s bitmap backing store.) You can readily satisfy yourself of this fact with some caveman debugging, logging in your drawRect: implementation; you may be amazed to discover that your code is called only once in the entire lifetime of the app! In fact, moving code to drawRect: is commonly a way to increase efficiency. This is because it is more efficient for the drawing engine to render directly onto the screen than for it to render offscreen and then copy those pixels onto the screen.

Where drawing is extensive and can be compartmentalized into sections, you may be able to gain some additional efficiency by paying attention to the rect parameter passed into drawRect:. It designates the region of the view’s bounds that needs refreshing. Normally, this is the view’s entire bounds; but if you call setNeedsDisplayInRect:, it will be the CGRect that you passed in as argument. You could respond by drawing only what goes into those bounds; but even if you don’t, your drawing will be clipped to those bounds, so, while you may not spend less time drawing, the system will draw more efficiently.

When creating a custom UIView subclass instance in code, you may be surprised and annoyed to find that the view has a black background:

MyView* mv = [[MyView alloc] initWithFrame:CGRectMake(20,20,150,100)];
[self.view addSubview: mv]; // appears as a black rectangle

This can be frustrating if what you expected and wanted was a transparent background, and it's a source of considerable confusion among beginners. The black background arises when two things are true:

  • The view's backgroundColor is nil.
  • The view's opaque is YES.

Unfortunately, when creating a UIView in code, both those things are true by default! So if you don’t want the black background, you must do something about one or the other of them (or both). If a view isn’t going to be opaque, its opaque should be set to NO anyway, so that’s probably the cleanest solution:

MyView* mv = [[MyView alloc] initWithFrame:CGRectMake(20,20,150,100)];
[self.view addSubview: mv];
mv.opaque = NO;

Alternatively, this being your own UIView subclass, you could implement its initWithFrame: (the designated initializer) to have the view set its own opaque to NO:

- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.opaque = NO;
    }
    return self;
}

With a UIView created in the nib, on the other hand, the black background problem doesn’t arise. This is because such a UIView’s backgroundColor is not nil. The nib assigns it some actual background color, even if that color is [UIColor clearColor].

Of course, if a view fills its rectangle with opaque drawing or has an opaque background color, you can leave opaque set to YES and gain some drawing efficiency (see Chapter 1).

Graphics Context Settings

As you draw in a graphics context, the drawing obeys the context’s current settings. Thus, the procedure is always to configure the context’s settings first, and then draw. For example, to draw a red line followed by a blue line, you would first set the context’s line color to red, and then draw the first line; then you’d set the context’s line color to blue, and then draw the second line. To the eye, it appears that the redness and blueness are properties of the individual lines, but in fact, at the time you draw each line, line color is a feature of the entire graphics context. This is true regardless of whether you use UIKit methods or Core Graphics functions.

A graphics context thus has, at every moment, a state, which is the sum total of all its settings; the way a piece of drawing looks is the result of what the graphics context’s state was at the moment that piece of drawing was performed. To help you manipulate entire states, the graphics context provides a stack for holding states. Every time you call CGContextSaveGState, the context pushes the entire current state onto the stack; every time you call CGContextRestoreGState, the context retrieves the state from the top of the stack (the state that was most recently pushed) and sets itself to that state.

Thus, a common pattern is: call CGContextSaveGState; manipulate the context’s settings, thus changing its state; draw; call CGContextRestoreGState to restore the state and the settings to what they were before you manipulated them. You do not have to do this before every manipulation of a context’s settings, however, because settings don’t necessarily conflict with one another or with past settings. You can set the context’s line color to red and then later to blue without any difficulty. But in certain situations you do want your manipulation of settings to be undoable, and I’ll point out several such situations later in this chapter.
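
Schematically, the pattern looks like this (con is the current graphics context):

CGContextSaveGState(con); // push a copy of the current state
CGContextSetLineWidth(con, 10); // manipulate settings
CGContextSetBlendMode(con, kCGBlendModeClear);
// ... draw ...
CGContextRestoreGState(con); // settings are now as they were before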

Many of the settings that constitute a graphics context's state, and that determine the behavior and appearance of drawing performed at that moment, are similar to those of any drawing application. Here are some of them, along with some of the commands that determine them. I list Core Graphics functions, followed by some UIKit convenience methods that call them:

Line thickness and dash style
CGContextSetLineWidth, CGContextSetLineDash; UIBezierPath lineWidth, setLineDash:count:phase:
Line cap and join style
CGContextSetLineCap, CGContextSetLineJoin; UIBezierPath lineCapStyle, lineJoinStyle
Line color or pattern
CGContextSetStrokeColorWithColor, CGContextSetStrokePattern; UIColor setStroke
Fill color or pattern
CGContextSetFillColorWithColor, CGContextSetFillPattern; UIColor setFill
Shadow
CGContextSetShadow, CGContextSetShadowWithColor
Blend mode
CGContextSetBlendMode; determines how what you draw is composited with what is already present
Overall transparency
CGContextSetAlpha

Additional settings include:

Clipping area
Drawing outside the clipping area is simply not physically drawn; see “Clipping,” later in this chapter.
Transform (the CTM, or current transform matrix)
Changes how the points you specify are mapped onto the physical space of the context; I'll discuss transforms later in this chapter.

Many of these settings will be illustrated by examples later in this chapter.

Paths and Shapes

By issuing a series of instructions for moving an imaginary pen, you trace out a path. Such a path does not constitute drawing! First you provide a path; then you draw. Drawing can mean stroking the path or filling the path, or both. Again, this should be a familiar notion from certain drawing applications.

A path is constructed by tracing it out from point to point. Think of the drawing system as holding a pen. Then you must first tell that pen where to position itself, setting the current point; after that, you issue a series of commands telling it how to trace out each subsequent piece of the path. Each additional piece of the path starts at the current point; its end becomes the new current point.

Here are some path-drawing commands you're likely to give:

Position the current point
CGContextMoveToPoint
Trace a line
CGContextAddLineToPoint, CGContextAddLines
Trace a rectangle
CGContextAddRect, CGContextAddRects
Trace an ellipse or circle
CGContextAddEllipseInRect
Trace an arc
CGContextAddArcToPoint, CGContextAddArc
Trace a Bézier curve with one or two control points
CGContextAddQuadCurveToPoint, CGContextAddCurveToPoint
Close the current path
CGContextClosePath
Stroke or fill the current path
CGContextStrokePath, CGContextFillPath, CGContextEOFillPath, CGContextDrawPath

A path can be compound, meaning that it consists of multiple independent pieces. For example, a single path might consist of two separate closed shapes: a rectangle and a circle. When you call CGContextMoveToPoint in the middle of constructing a path (that is, after tracing out a path and without clearing it by filling, stroking, or calling CGContextBeginPath), you pick up the imaginary pen and move it to a new location without tracing a segment, thus preparing to start an independent piece of the same path. If you’re worried, as you begin to trace out a path, that there might be an existing path and that your new path might be seen as a compound part of that existing path, you can call CGContextBeginPath to specify that this is a different path; many of Apple’s examples do this, but in practice I usually do not find it necessary.
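
For instance, this sketch traces two independent triangles as pieces of a single compound path and fills them both with a single fill operation:

CGContextRef con = UIGraphicsGetCurrentContext();
// first piece
CGContextMoveToPoint(con, 50, 10);
CGContextAddLineToPoint(con, 90, 90);
CGContextAddLineToPoint(con, 10, 90);
CGContextClosePath(con);
// pick up the pen and start an independent piece of the same path
CGContextMoveToPoint(con, 150, 10);
CGContextAddLineToPoint(con, 190, 90);
CGContextAddLineToPoint(con, 110, 90);
CGContextClosePath(con);
CGContextFillPath(con); // one fill operation fills both triangles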

To illustrate the typical use of path-drawing commands, I’ll generate the up-pointing arrow shown in Figure 2-17. This might not be the best way to create the arrow, and I’m deliberately avoiding use of the convenience functions, but it’s clear and shows a nice basic variety of typical commands:

// obtain the current graphics context
CGContextRef con = UIGraphicsGetCurrentContext();
// draw a black (by default) vertical line, the shaft of the arrow
CGContextMoveToPoint(con, 100, 100);
CGContextAddLineToPoint(con, 100, 19);
CGContextSetLineWidth(con, 20);
CGContextStrokePath(con);
// draw a red triangle, the point of the arrow
CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
CGContextMoveToPoint(con, 80, 25);
CGContextAddLineToPoint(con, 100, 0);
CGContextAddLineToPoint(con, 120, 25);
CGContextFillPath(con);
// snip a triangle out of the shaft by drawing in Clear blend mode
CGContextMoveToPoint(con, 90, 101);
CGContextAddLineToPoint(con, 100, 90);
CGContextAddLineToPoint(con, 110, 101);
CGContextSetBlendMode(con, kCGBlendModeClear);
CGContextFillPath(con);

If a path needs to be reused or shared, you can encapsulate it as a CGPath, which is actually a CGPathRef. You can either create a new CGMutablePathRef and construct the path using various CGPath functions that parallel the CGContext path-construction functions, or you can copy the graphics context's current path using CGContextCopyPath. There are also a number of CGPath functions for creating a path based on simple geometry or based on an existing path, such as:

  • CGPathCreateWithRect
  • CGPathCreateWithEllipseInRect
  • CGPathCreateCopyByStrokingPath
  • CGPathCreateCopyByDashingPath
  • CGPathCreateCopyByTransformingPath

The UIKit class UIBezierPath wraps CGPath; it, too, provides methods parallel to the CGContext path-construction functions, such as:

  • moveToPoint:
  • addLineToPoint:
  • addArcWithCenter:radius:startAngle:endAngle:clockwise:
  • addQuadCurveToPoint:controlPoint:
  • addCurveToPoint:controlPoint1:controlPoint2:
  • closePath

Also, UIBezierPath offers one extremely useful convenience method, bezierPathWithRoundedRect:cornerRadius: — drawing a rectangle with rounded corners using only Core Graphics functions is rather tedious.
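
For instance (a sketch):

UIBezierPath* p =
    [UIBezierPath bezierPathWithRoundedRect:CGRectMake(10,10,100,60)
                               cornerRadius:15];
[[UIColor blueColor] setFill];
[p fill];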

When you call the UIBezierPath instance method fill or stroke (or fillWithBlendMode:alpha: or strokeWithBlendMode:alpha:), the current graphics context is saved, the wrapped CGPath path is made the current graphics context’s path and stroked or filled, and the current graphics context is restored.

Thus, using UIBezierPath together with UIColor, we could rewrite our arrow-drawing routine entirely with UIKit methods:

// shaft of the arrow
UIBezierPath* p = [UIBezierPath bezierPath];
[p moveToPoint:CGPointMake(100,100)];
[p addLineToPoint:CGPointMake(100, 19)];
[p setLineWidth:20];
[p stroke];
// point of the arrow
[[UIColor redColor] set];
[p removeAllPoints];
[p moveToPoint:CGPointMake(80,25)];
[p addLineToPoint:CGPointMake(100, 0)];
[p addLineToPoint:CGPointMake(120, 25)];
[p fill];
// snip out triangle in the tail
[p removeAllPoints];
[p moveToPoint:CGPointMake(90,101)];
[p addLineToPoint:CGPointMake(100, 90)];
[p addLineToPoint:CGPointMake(110, 101)];
[p fillWithBlendMode:kCGBlendModeClear alpha:1.0];

There’s no savings of code here over calling Core Graphics functions. UIBezierPath is particularly useful when you want to capture a CGPath and pass it around as an object; an example appears in Chapter 21.

Clipping

Another use of a path is to mask out areas, protecting them from future drawing. This is called clipping. By default, a graphics context’s clipping region is the entire graphics context: you can draw anywhere within the context.

The clipping area is a feature of the context as a whole, and any new clipping area is applied by intersecting it with the existing clipping area; so if you apply your own clipping region, the way to remove it from the graphics context later is to plan ahead and wrap things with calls to CGContextSaveGState and CGContextRestoreGState.

To illustrate, I’ll rewrite the code that generated our original arrow (Figure 2-17) to use clipping instead of a blend mode to “punch out” the triangular notch in the tail of the arrow. This is a little tricky, because what we want to clip to is not the region inside the triangle but the region outside it. To express this, we’ll use a compound path consisting of more than one closed area — the triangle, and the drawing area as a whole (which we can obtain with CGContextGetClipBoundingBox).

Both when filling a compound path and when using it to express a clipping region, the system follows one of two rules:

Winding rule
The fill or clipping area is denoted by an alternation in the direction (clockwise or counterclockwise) of the path demarcating each region.
Even-odd rule (EO)
The fill or clipping area is denoted by a simple count of the paths demarcating each region.

Our situation is extremely simple, so the even-odd rule is the easier to apply: we set up the clipping area using CGContextEOClip and then draw the arrow:

// obtain the current graphics context
CGContextRef con = UIGraphicsGetCurrentContext();
// punch triangular hole in context clipping region
CGContextMoveToPoint(con, 90, 100);
CGContextAddLineToPoint(con, 100, 90);
CGContextAddLineToPoint(con, 110, 100);
CGContextClosePath(con);
CGContextAddRect(con, CGContextGetClipBoundingBox(con));
CGContextEOClip(con);
// draw the vertical line
CGContextMoveToPoint(con, 100, 100);
CGContextAddLineToPoint(con, 100, 19);
CGContextSetLineWidth(con, 20);
CGContextStrokePath(con);
// draw the red triangle, the point of the arrow
CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
CGContextMoveToPoint(con, 80, 25);
CGContextAddLineToPoint(con, 100, 0);
CGContextAddLineToPoint(con, 120, 25);
CGContextFillPath(con);

The UIBezierPath equivalents are the usesEvenOddFillRule property and the addClip method.
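
For instance, here’s a sketch of the same triangular punch-out using those UIBezierPath facilities (we still call CGContextGetClipBoundingBox to get the drawing area):

UIBezierPath* p = [UIBezierPath bezierPath];
[p moveToPoint:CGPointMake(90,100)];
[p addLineToPoint:CGPointMake(100,90)];
[p addLineToPoint:CGPointMake(110,100)];
[p closePath];
// append the whole drawing area as a second closed region
CGRect b = CGContextGetClipBoundingBox(UIGraphicsGetCurrentContext());
[p appendPath:[UIBezierPath bezierPathWithRect:b]];
p.usesEvenOddFillRule = YES;
[p addClip]; // clip to everything outside the triangle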

Gradients

Gradients can range from the simple to the complex. A simple gradient (which is all I’ll describe here) is determined by a color at one endpoint along with a color at the other endpoint, plus (optionally) colors at intermediate points; the gradient is then painted either linearly between two points in the context or radially between two circles in the context.

You can’t use a gradient as a path’s fill color, but you can restrict a gradient to a path’s shape by clipping, which amounts to the same thing.

To illustrate, I’ll redraw our arrow, using a linear gradient as the “shaft” of the arrow (Figure 2-18):

// obtain the current graphics context
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSaveGState(con);
// punch triangular hole in context clipping region
CGContextMoveToPoint(con, 90, 100);
CGContextAddLineToPoint(con, 100, 90);
CGContextAddLineToPoint(con, 110, 100);
CGContextClosePath(con);
CGContextAddRect(con, CGContextGetClipBoundingBox(con));
CGContextEOClip(con);
// draw the vertical line, add its shape to the clipping region
CGContextMoveToPoint(con, 100, 100);
CGContextAddLineToPoint(con, 100, 19);
CGContextSetLineWidth(con, 20);
CGContextReplacePathWithStrokedPath(con);
CGContextClip(con);
// draw the gradient
CGFloat locs[3] = { 0.0, 0.5, 1.0 };
CGFloat colors[12] = {
    0.3,0.3,0.3,0.8, // starting color, transparent gray
    0.0,0.0,0.0,1.0, // intermediate color, black
    0.3,0.3,0.3,0.8 // ending color, transparent gray
};
CGColorSpaceRef sp = CGColorSpaceCreateDeviceRGB(); // four components (RGBA) per color
CGGradientRef grad =
    CGGradientCreateWithColorComponents (sp, colors, locs, 3);
CGContextDrawLinearGradient (
    con, grad, CGPointMake(89,0), CGPointMake(111,0), 0);
CGColorSpaceRelease(sp);
CGGradientRelease(grad);
CGContextRestoreGState(con); // done clipping
// draw the red triangle, the point of the arrow
CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
CGContextMoveToPoint(con, 80, 25);
CGContextAddLineToPoint(con, 100, 0);
CGContextAddLineToPoint(con, 120, 25);
CGContextFillPath(con);

The call to CGContextReplacePathWithStrokedPath pretends to stroke the current path, using the current line width and other line-related context state settings, but then, instead of painting the stroke, creates a new path representing the outline of what that stroke would have been. Thus, instead of a thick line we have a rectangular region that we can use as the clip region.

We then create the gradient and paint it. The procedure is verbose but simple; everything is boilerplate. We describe the gradient as a set of locations on the continuum between one endpoint (0.0) and the other endpoint (1.0), along with the colors corresponding to each location; in this case, I want the gradient to be lighter at the edges and darker in the middle, so I use three locations, with the dark one at 0.5. We must also supply a color space in order to create the gradient. Finally, we create the gradient, paint it into place, and release the color space and the gradient.

Colors and Patterns

A color is a CGColor (actually a CGColorRef). CGColor is not difficult to work with, and can be converted to and from a UIColor through UIColor’s colorWithCGColor: class method and CGColor property.
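
For instance:

CGColorRef redCG = [[UIColor redColor] CGColor]; // UIColor to CGColor
UIColor* red = [UIColor colorWithCGColor:redCG]; // CGColor back to UIColor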

A pattern, on the other hand, is a CGPattern (actually a CGPatternRef). You can create a pattern and stroke or fill with it. The process is rather elaborate. As an extremely simple example, I’ll replace the red triangular arrowhead with a red-and-blue striped triangle (Figure 2-19). To do so, remove this line:

CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);

In its place, put the following:

CGColorSpaceRef sp2 = CGColorSpaceCreatePattern(nil);
CGContextSetFillColorSpace (con, sp2);
CGColorSpaceRelease (sp2);
CGPatternCallbacks callback = {
    0, drawStripes, nil
};
CGAffineTransform tr = CGAffineTransformIdentity;
CGPatternRef patt = CGPatternCreate(nil,
                      CGRectMake(0,0,4,4),
                      tr,
                      4, 4,
                      kCGPatternTilingConstantSpacingMinimalDistortion,
                      true,
                      &callback);
CGFloat alph = 1.0;
CGContextSetFillPattern(con, patt, &alph);
CGPatternRelease(patt);

That code is verbose, but it is almost entirely boilerplate. To understand it, it almost helps to read it backward. What we’re leading up to is the call to CGContextSetFillPattern; instead of setting a fill color, we’re setting a fill pattern, to be used the next time we fill a path (in this case, the triangular arrowhead). The third parameter to CGContextSetFillPattern is a pointer to a CGFloat, so we have to set up the CGFloat itself beforehand. The second parameter to CGContextSetFillPattern is a CGPatternRef, so we have to create that CGPatternRef beforehand (and release it afterward).

So now let’s talk about the call to CGPatternCreate. A pattern is a drawing in a rectangular “cell”; we have to state both the size of the cell (the second argument) and the spacing between origin points of cells (the fourth and fifth arguments). In this case, the cell is 4×4, and every cell exactly touches its neighbors both horizontally and vertically. We have to supply a transform to be applied to the cell (the third argument); in this case, we’re not doing anything with this transform, so we supply the identity transform. We supply a tiling rule (the sixth argument). We have to state whether this is a color pattern or a stencil pattern; it’s a color pattern, so the seventh argument is true. And we have to supply a pointer to a callback function that actually draws the pattern into its cell (the eighth argument).

Except that that’s not quite what we supply as the eighth argument. To make matters more complicated, what we actually supply is a pointer to a CGPatternCallbacks struct. This struct consists of a version number (currently 0) and pointers to two functions: one called to draw the pattern into its cell, the other called when the pattern is released. We’re not specifying the second function here; it is for memory management, and we don’t need it in this simple example.

We have almost worked our way backward to the start of the code. It turns out that before you can call CGContextSetFillPattern with a colored pattern, you have to set the context’s fill color space to a pattern color space. If you neglect to do this, you’ll get an error when you call CGContextSetFillPattern. So we create the color space, set it as the context’s fill color space, and release it.

But we are still not finished, because I haven’t shown you the function that actually draws the pattern cell! This is the function whose address is taken as drawStripes in our code. Here it is:

void drawStripes (void *info, CGContextRef con) {
    // assume 4 x 4 cell
    CGContextSetFillColorWithColor(con, [[UIColor redColor] CGColor]);
    CGContextFillRect(con, CGRectMake(0,0,4,4));
    CGContextSetFillColorWithColor(con, [[UIColor blueColor] CGColor]);
    CGContextFillRect(con, CGRectMake(0,0,4,2));
}

As you can see, the actual pattern-drawing code is very simple. The only tricky issue is that the call to CGPatternCreate must be in agreement with the pattern-drawing function as to the size of a cell, or the pattern won’t come out the way you expect. We know in this case that the cell is 4×4. So we fill it with red, and then fill its lower half with blue. When these cells are tiled touching each other horizontally and vertically, we get the stripes that you see in Figure 2-19.

Note, finally, that the code as presented has left the graphics context in an undesirable state, with its fill color space set to a pattern color space. This would cause trouble if we were later to try to set the fill color to a normal color. The solution, as usual, is to wrap the code in calls to CGContextSaveGState and CGContextRestoreGState.
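
In outline, the wrapped version would look something like this (the elided middle is the pattern code shown above):

CGContextSaveGState(con); // preserves the fill color space, among other settings
CGColorSpaceRef sp2 = CGColorSpaceCreatePattern(nil);
CGContextSetFillColorSpace (con, sp2);
CGColorSpaceRelease (sp2);
// ... create the pattern, set the fill pattern, fill the triangle ...
CGContextRestoreGState(con); // the fill color space is back to normal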

You may have observed in Figure 2-19 that the stripes do not fit neatly inside the triangle of the arrowhead: the bottommost stripe is something like half a blue stripe. This is because a pattern is positioned not with respect to the shape you are filling (or stroking), but with respect to the graphics context as a whole. We could shift the pattern position by calling CGContextSetPatternPhase before drawing.
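
For instance, a call something like this before filling the arrowhead would nudge the stripes downward; the exact phase needed to make them fit the triangle would take experimentation:

CGContextSetPatternPhase(con, CGSizeMake(0,2)); // shift the pattern down 2 points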

For such a simple pattern, it would have been easier to take advantage of UIColor’s colorWithPatternImage:, which takes a UIImage:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(4,4), NO, 0);
drawStripes(nil, UIGraphicsGetCurrentContext());
UIImage* stripes = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIColor* stripesPattern = [UIColor colorWithPatternImage:stripes];
[stripesPattern setFill];
UIBezierPath* p = [UIBezierPath bezierPath];
[p moveToPoint:CGPointMake(80,25)];
[p addLineToPoint:CGPointMake(100,0)];
[p addLineToPoint:CGPointMake(120,25)];
[p fill];

Graphics Context Transforms

Just as a UIView can have a transform, so can a graphics context. However, applying a transform to a graphics context has no effect on the drawing that’s already in it; it affects only the drawing that takes place after it is applied, altering the way the coordinates you provide are mapped onto the graphics context’s area. A graphics context’s transform is called its CTM, for “current transform matrix.”

It is quite usual to take full advantage of a graphics context’s CTM to save yourself from performing even simple calculations. You can multiply the current transform by any CGAffineTransform using CGContextConcatCTM; there are also convenience functions (CGContextTranslateCTM, CGContextScaleCTM, and CGContextRotateCTM) for applying a translate, scale, or rotate transform to the current transform.
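
For instance, assuming con is the current graphics context, these two formulations (a sketch) have the identical effect:

CGContextConcatCTM(con, CGAffineTransformMakeTranslation(80, 0));
// ...is equivalent to the convenience version:
// CGContextTranslateCTM(con, 80, 0);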

The base transform for a graphics context is already set for you when you obtain the context; this is how the system is able to map context drawing coordinates onto screen coordinates. Whatever transforms you apply are applied to the current transform, so the base transform remains in effect and drawing continues to work. You can return to the base transform after applying your own transforms by wrapping your code in calls to CGContextSaveGState and CGContextRestoreGState.

For example, we have hitherto been drawing our upward-pointing arrow with code that knows how to place that arrow at only one location: the top left of its rectangle is hard-coded at {80,0}. This is silly. It makes the code hard to understand, as well as inflexible and difficult to reuse. Surely the sensible thing would be to draw the arrow at {0,0}, by subtracting 80 from all the x-values in our existing code. Now it is easy to draw the arrow at any position, simply by applying a translate transform beforehand, mapping {0,0} to the desired top-left corner of the arrow. So, to draw it at {80,0}, we would say:

CGContextTranslateCTM(con, 80, 0);
// now draw the arrow at (0,0)

A rotate transform is particularly useful, allowing you to draw in a rotated orientation without any nasty trigonometry. However, it’s a bit tricky because the point around which the rotation takes place is the origin. This is rarely what you want, so you have to apply a translate transform first, to map the origin to the point around which you really want to rotate. But then, after rotating, in order to figure out where to draw you will probably have to reverse your translate transform.

To illustrate, here’s code to draw our arrow repeatedly at several angles, pivoting around the end of its tail (Figure 2-20). Since the arrow will be drawn multiple times, I’ll start by encapsulating the drawing of the arrow as a UIImage. This is not merely to reduce repetition and make drawing more efficient; it’s also because we want the entire arrow to pivot, including the pattern stripes, and this is the simplest way to achieve that:

- (UIImage*) arrowImage {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(40,100), NO, 0.0);
    // obtain the current graphics context
    CGContextRef con = UIGraphicsGetCurrentContext();
    // draw the arrow into the image context
    // draw it at (0,0)! adjust all x-values by subtracting 80
    // ... actual code omitted ...
    UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return im;
}

We produce the arrow image once and store it somewhere — I’ll use an instance variable accessed as self.arrow. In our drawRect: implementation, we draw the arrow image multiple times:

- (void)drawRect:(CGRect)rect {
    CGContextRef con = UIGraphicsGetCurrentContext();
    [self.arrow drawAtPoint:CGPointMake(0,0)];
    for (int i=0; i<3; i++) {
        CGContextTranslateCTM(con, 20, 100);
        CGContextRotateCTM(con, 30 * M_PI/180.0);
        CGContextTranslateCTM(con, -20, -100);
        [self.arrow drawAtPoint:CGPointMake(0,0)];
    }
}

A transform is also one more solution to the “flip” problem we encountered earlier with CGContextDrawImage. Instead of reversing the drawing, we can reverse the context into which we draw. Essentially, we apply a “flip” transform to the context’s coordinate system: we move the context’s top downward and then reverse the direction of the y-coordinate by applying a scale transform whose y-multiplier is -1:

CGContextTranslateCTM(con, 0, theHeight);
CGContextScaleCTM(con, 1.0, -1.0);

How far down you move the context’s top (theHeight) depends on how you intend to draw the image.
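
For instance, here’s a sketch of drawing a CGImageRef right side up at the top left of the context, where im, w, and h are assumed to be the image and its dimensions in points, and con is the current graphics context:

CGContextSaveGState(con);
CGContextTranslateCTM(con, 0, h); // move the context's top down by the drawing height
CGContextScaleCTM(con, 1.0, -1.0); // flip the y-coordinate direction
CGContextDrawImage(con, CGRectMake(0,0,w,h), im); // im now draws right side up
CGContextRestoreGState(con);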

Shadows

To add a shadow to a drawing, give the context a shadow value before drawing. The shadow position is expressed as a CGSize, where the positive direction for both values indicates down and to the right. The blur value is an open-ended positive number; Apple doesn’t explain how the scale works, but experimentation shows that 12 is nice and blurry, 99 is so blurry as to be shapeless, and higher values become problematic.

Figure 2-21 shows the result of the same code that generated Figure 2-20, except that before we start drawing the arrow repeatedly, we give the context a shadow:

CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSetShadow(con, CGSizeMake(7, 7), 12);
[self.arrow drawAtPoint:CGPointMake(0,0)]; // ... and so on

However, there’s a subtle cosmetic problem with this approach. It may not be evident from Figure 2-21, but we are adding a shadow each time we draw. Thus the arrows are able to cast shadows on one another. What we want, however, is for all the arrows to cast a single shadow collectively. The way to achieve this is with a transparency layer; this is basically a subcontext that accumulates all drawing and then adds the shadow. Our code for drawing the shadowed arrows now looks like this:

CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSetShadow(con, CGSizeMake(7, 7), 12);
CGContextBeginTransparencyLayer(con, nil);
[self.arrow drawAtPoint:CGPointMake(0,0)];
for (int i=0; i<3; i++) {
    CGContextTranslateCTM(con, 20, 100);
    CGContextRotateCTM(con, 30 * M_PI/180.0);
    CGContextTranslateCTM(con, -20, -100);
    [self.arrow drawAtPoint:CGPointMake(0,0)];
}
CGContextEndTransparencyLayer(con);

Erasing

The function CGContextClearRect erases all existing drawing in a rectangle; combined with clipping, it can erase an area of any shape. The result can “punch a hole” through all existing drawing.

The behavior of CGContextClearRect depends on whether the context is transparent or opaque. This is particularly obvious and intuitive when drawing into an image context. If the image context is transparent — the second argument to UIGraphicsBeginImageContextWithOptions is NO — CGContextClearRect erases to transparent; otherwise it erases to black.

When drawing directly into a view (as with drawRect: or drawLayer:inContext:), if the view’s background color is nil or a color with even a tiny bit of transparency, the result of CGContextClearRect will appear to be transparent, punching a hole right through the view including its background color; if the background color is completely opaque, the result of CGContextClearRect will be black. This is because the view’s background color determines whether the view’s graphics context is transparent or opaque; thus, this is essentially the same behavior that I described in the preceding paragraph.

Figure 2-22 illustrates; the blue square on the left has been partly cut away to black, while the blue square on the right has been partly cut away to transparency. Yet these are instances of the same UIView subclass, drawn with exactly the same code! The difference between the views is that the backgroundColor of the first view is solid red with an alpha of 1, while the backgroundColor of the second view is solid red with an alpha of 0.99. This difference is utterly imperceptible to the eye (not to mention that the red color never appears, as it is covered with a blue fill), but it completely changes the effect of CGContextClearRect. The UIView subclass’s drawRect: looks like this:

CGContextRef con = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(con, [UIColor blueColor].CGColor);
CGContextFillRect(con, rect);
CGContextClearRect(con, CGRectMake(0,0,30,30));

Points and Pixels

A point is a dimensionless location described by an x-coordinate and a y-coordinate. When you draw in a graphics context, you specify the points at which to draw, and this works regardless of the device’s resolution, because Core Graphics maps your drawing nicely onto the physical output using the base CTM and anti-aliasing. Therefore, throughout this chapter I’ve concerned myself with graphics context points, disregarding their relationship to screen pixels.

However, pixels do exist. A pixel is a physical, integral, dimensioned unit of display in the real world. Whole-numbered points effectively lie between pixels, and this can matter if you’re fussy, especially on a single-resolution device. For example, if a vertical path with whole-number coordinates is stroked with a line width of 1, half the line falls on each side of the path, and the drawn line on the screen of a single-resolution device will seem to be 2 pixels wide (because the device can’t illuminate half a pixel).

You will sometimes encounter advice suggesting that if this effect is objectionable, you should try shifting the line’s position by 0.5, to center it in its pixels. This advice may appear to work, but it makes some simpleminded assumptions. A more sophisticated approach is to obtain the UIView’s contentScaleFactor property. This value will be either 1.0 or 2.0, so you can divide by it to convert from pixels to points. Consider also that the most accurate way to draw a vertical or horizontal line is not to stroke a path but to fill a rectangle. So this UIView subclass code will draw a perfect 1-pixel-wide vertical line on any device:

CGContextFillRect(con, CGRectMake(100,0,1.0/self.contentScaleFactor,100));

Content Mode

A view that draws something within itself, as opposed to merely having a background color and subviews (as in the previous chapter), has content. This means that its contentMode property becomes important whenever the view is resized. As I mentioned earlier, the drawing system will avoid asking a view to redraw itself from scratch if possible; instead, it will use the cached result of the previous drawing operation (the bitmap backing store). So, if the view is resized, the system may simply stretch or shrink or reposition the cached drawing, if your contentMode setting instructs it to do so.

It’s a little tricky to illustrate this point when the view’s content is coming from drawRect:, because I have to arrange for the view to obtain its content (from drawRect:) and then cause it to be resized without also causing it to be redrawn (that is, without drawRect: being called again). Here’s how I’ll do that. As the app starts up, I’ll create an instance of a UIView subclass that knows how to draw our arrow. Then I’ll use delayed performance to resize the instance after the window has shown and the interface has been initially displayed:

void (^resize) (void) = ^{
    CGRect f = mv.bounds; // mv is the MyView instance
    f.size.height *= 2;
    mv.bounds = f;
};
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC);
dispatch_after(popTime, dispatch_get_main_queue(), resize);

We double the height of the view without causing drawRect: to be called. The result is that the view’s drawing appears at double its correct height. For example, if our view’s drawRect: code is the same as the code that generated Figure 2-18, we get Figure 2-23.

Sooner or later, however, drawRect: will be called, and the drawing will be refreshed in accordance with our code. Our code doesn’t say to draw the arrow at a height that is relative to the height of the view’s bounds; it draws the arrow at a fixed height. Thus, the arrow will snap back to its original size.

A view’s contentMode property should therefore usually be in agreement with how the view draws itself. Our drawRect: code dictates the size and position of the arrow relative to the view’s bounds origin, its top left; so we could set its contentMode to UIViewContentModeTopLeft. Alternatively, we could set it to UIViewContentModeRedraw; this will cause automatic scaling of the cached content to be turned off — instead, when the view is resized, its setNeedsDisplay method will be called, ultimately triggering drawRect: to redraw the content.
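
For instance, a view like ours, which draws in a way that depends on its bounds, might opt in to redrawing when it’s created in code; a sketch:

- (id) initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // redraw from scratch whenever the view is resized
        self.contentMode = UIViewContentModeRedraw;
    }
    return self;
}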