Bill takes us into the heart of iOS 5, introducing some powerful new APIs that have not received as much exposure as the APIs behind the new user features.

iOS 5 is an amazing release for users, with tons of great features, like Wireless Updating, Notification Center, Newsstand, and iMessage. Of course you’ve heard about Siri, the virtual assistant built into the new iPhone 4S. Even if you don’t see Siri fitting into your daily work flow, it is definitely worth checking out.

But this latest release of iOS is even more significant for developers. With the introduction of iOS 5 we get access to some really powerful features of the SDK. I’d like to take you through three of the new APIs that have not received as much exposure as the APIs directly tied to the user features. Two of these are serious infrastructure to help us make better apps, while the third is just so much fun it’s hard not to talk about.

JavaScript Object Notation Serialization

If you have written an application that interfaces with a web service, you have no doubt had to decide whether to use XML or JSON. XML has been fully supported on iOS since version 2.0. Up until this release, though, we had to turn to third-party and open source libraries to parse JSON. Well, no longer. In iOS 5 we have the NSJSONSerialization class to make parsing JSON a one-line exercise.

Here is some sample JSON for a photograph:

 {
  "taken": "2011/07/13",
  "width": 3072,
  "height": 2304,
  "latitude": 39.52,
  "longitude": -106.05,
  "url": "http://mypictures.com/12345.png"
 }

Parsing this JSON is really simple. Here is the code:

 NSError *error = nil;
 NSData *data = [NSData dataWithContentsOfURL:webServiceURL];
 NSDictionary *photo = [NSJSONSerialization
  JSONObjectWithData:data
  options:NSJSONReadingMutableLeaves
  error:&error];
 NSNumber *width = [photo objectForKey:@"width"];
 NSNumber *height = [photo objectForKey:@"height"];

That’s it. No library to download, no build settings to configure, just code and go. The JSONObjectWithData:options:error: method turns the data argument into an NSDictionary or an NSArray, depending on the root of the JSON document. In our case it’s a dictionary. Once parsed, the data is available as ordinary Foundation objects for us to interact with.

NSJSONSerialization can parse data in memory, as we have done here, as well as from a stream. Using a stream can be very important for large data sets: instead of pulling all the content into memory at once, you can process the data in smaller chunks, so your maximum memory usage can be much smaller. In either case it’s just as easy to parse. Here is the code to load via a stream, which is more or less the same as before; the only change is to use the JSONObjectWithStream:options:error: variant of the method.

 NSError *error = nil;
 NSInputStream *stream = [self getMyStream];
 NSDictionary *photo = [NSJSONSerialization
  JSONObjectWithStream:stream
  options:NSJSONReadingMutableLeaves
  error:&error];
 NSNumber *width = [photo objectForKey:@"width"];
 NSNumber *height = [photo objectForKey:@"height"];

The JSON support in iOS goes even deeper: we can use it to produce JSON data. This makes posting data to web services via JSON just as easy as consuming JSON.
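
Going the other direction is just as short. Here is a minimal sketch (the dictionary contents are made up to mirror the example above) that turns a dictionary back into JSON data using the dataWithJSONObject:options:error: method:

 NSError *error = nil;
 NSDictionary *photo = [NSDictionary dictionaryWithObjectsAndKeys:
  [NSNumber numberWithInt:3072], @"width",
  [NSNumber numberWithInt:2304], @"height",
  @"2011/07/13", @"taken",
  nil];
 // dataWithJSONObject:options:error: is the inverse of
 // JSONObjectWithData:options:error:
 NSData *json = [NSJSONSerialization dataWithJSONObject:photo
  options:NSJSONWritingPrettyPrinted
  error:&error];
 // json is now ready to go in the body of an HTTP POST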

Automatic Reference Counting (ARC)

One of the more difficult challenges for developers moving from garbage-collected languages to Objective-C is understanding how memory management works. While garbage collection has been available on OS X for some time, Apple has resisted bringing it to iOS because of the power usage profile and the realtime nature of a mobile device. But with the release of iOS 5, Apple has provided automatic memory management without the adverse performance characteristics of a garbage collector.

ARC works by formalizing a set of rules that have been around on the Mac since the advent of OS X. Every time you want to hold onto an object you send it the retain message. When you are done with the object, you send release. These two calls have to be balanced. For every retain you must send release, or the object is leaked. Too much leaked memory and the OS will kill an app.
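
To make the balance concrete, here is a sketch of what that bookkeeping looks like without ARC; Photo and setCurrentPhoto: are hypothetical stand-ins:

 // Without ARC: every retain (or alloc/copy) must be balanced by a release.
 Photo *photo = [[Photo alloc] init];  // we own the object (+1)
 [self setCurrentPhoto:photo];         // the setter retains it for itself
 [photo release];                      // balance our alloc (-1)
 // Under ARC the compiler inserts these calls for us, and writing
 // retain or release by hand becomes a compile-time error.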

When iOS 4 was released, Apple included a tool called the Static Analyzer that analyzed code and pointed out potential leaks. Some clever soul asked the question, “If it can tell me where a leak is going to happen, why doesn’t it just fix it for me?” That is basically what Apple delivered with ARC.

In addition to managing the retain-and-release bookkeeping for our objects, ARC adds a new concept called “zeroing weak references.” A weak reference is automatically set to nil when the object it points to is deallocated. Since messages to nil are no-ops in Objective-C, we get a huge safety bonus: our weak references can never be left dangling; they are cleaned up for us automatically.
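
A delegate property is the classic home for a zeroing weak reference. Here is a minimal sketch, assuming a hypothetical PhotoViewDelegate protocol and photoViewDidFinishLoading: callback:

 @property (nonatomic, weak) id<PhotoViewDelegate> delegate;

 // ... later, even if the delegate has since been deallocated,
 // this is safe: the weak reference has been zeroed out to nil,
 // and messages to nil are simply ignored.
 [self.delegate photoViewDidFinishLoading:self];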

To use ARC on a new project, just check the box for “Use Automatic Reference Counting” in the new project panel, which turns on all the features of ARC for your entire project.

XcodeSettings.jpg

For existing projects, you can test the waters by turning on ARC one file at a time via the -fobjc-arc flag. Or, if you are ready to take the plunge, you can use the “Edit->Refactor->Convert to Objective-C ARC...” menu item built into Xcode to transform an existing project and its code over to ARC.

RefactorToARC.jpg

Core Foundation and ARC

There are a few small gotchas if you use Core Foundation objects and toll-free bridging, and large existing projects will likely require some tweaking. With ARC you must tell the compiler what you intend to do with a Core Foundation object; if you don’t, you get a compiler error. Here is an example that triggers the error:

 NSDictionary *values =
  [NSDictionary dictionaryWithObject:@"object" forKey:@"key"];
 CFDictionaryRef dictionary = (CFDictionaryRef)values;
 SomeFunctionCallThatNeedsACFDictionary(dictionary);

When ARC sees this code, it is unsure what the intent is for the CFDictionaryRef pointer and can’t know if it will be safe to release the values dictionary when it normally would. So, it insists that you tell it. You can specify one of two intents. Say you only want to use this dictionary as a CFDictionaryRef in this scope and won’t be keeping it around after that. In this case, use the __bridge modifier in the cast, like this:

 NSDictionary *values =
  [NSDictionary dictionaryWithObject:@"object" forKey:@"key"];
 CFDictionaryRef dictionary = (__bridge CFDictionaryRef)values;
 SomeFunctionCallThatNeedsACFDictionary(dictionary);

If, on the other hand, you want to keep the dictionary around for a longer time, you need to tell ARC that. Use the __bridge_retained modifier and ARC will add a retain on behalf of the Core Foundation (CF) side. That ensures the CF object will remain valid until it is explicitly released with the CFRelease function. Of course, that also means you must call CFRelease at some point or the dictionary will be leaked.
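
Here is the same example with the retained bridge; the function that keeps the dictionary is, as before, made up for illustration:

 NSDictionary *values =
  [NSDictionary dictionaryWithObject:@"object" forKey:@"key"];
 // __bridge_retained hands the CF side its own +1 reference
 CFDictionaryRef dictionary = (__bridge_retained CFDictionaryRef)values;
 SomeCodeThatKeepsTheCFDictionary(dictionary);
 // ... later, when we are truly done with it, balance that +1:
 CFRelease(dictionary);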

If you are casting from a CF type to an Objective-C object you can tell ARC that you intend the Objective-C version of the object to be the real owner of the pointer. Use __bridge_transfer in that case, like this:

 CFDataRef dataRef = CFDataCreate(NULL, longStringOfBytes, 0);
 NSData *data = (__bridge_transfer NSData *)dataRef;
 [someObject thatNeedsAnNSData:data];

Now ARC has taken over the retain/release responsibility for the data object, and the CF object will be released for you. If you don’t use CF objects then you don’t have to worry about these gotchas.

ARC is a fantastic new feature that has not gotten nearly the exposure that it deserves. It will greatly simplify the lives of current developers and smooth the learning curve for developers new to the platform. If you are starting a new project, definitely give ARC a go. It’s fantastic!

Core Image and Face Detection

The last feature I want to describe, Core Image and Face Detection, is just so cool it’s hard not to laugh out loud the first time you see it in practice. If you are a registered iOS developer it’s worth pushing the sample out to your device and playing with it. Enough talk, let’s get building. This example has three major features: choose or take a photo, detect facial features, and draw shapes around the detected features.

To choose or take a photo on iOS, you use UIImagePickerController. The picker is a view controller just like any other on iOS: we create it, configure it, then present it. To configure it, we ask what features are available (is there a camera, and so on) and set up our UI accordingly. If a camera is present, we display an action sheet that lets the user either take a new photo or choose an existing one. If there is no camera, we go straight to the user’s photo library. The details of how the action sheet and the picker are displayed are in the sample code, so take a look there if you need to brush up on either of those classes.
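
Stripped of the action sheet and delegate plumbing, the picker configuration looks roughly like this sketch:

 UIImagePickerController *picker = [[UIImagePickerController alloc] init];
 picker.delegate = self;
 if ([UIImagePickerController
      isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
  // a camera is available, so we could also offer to take a new photo
  picker.sourceType = UIImagePickerControllerSourceTypeCamera;
 } else {
  // no camera, fall back to the photo library
  picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
 }
 [self presentViewController:picker animated:YES completion:nil];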

Once the photo is chosen or taken, the image picker gives us the photo via the imagePickerController:didFinishPickingMediaWithInfo: delegate callback method. In this method we’ll grab the selected image and do the face detection. First, we get the image:

 self.image =
  [info objectForKey:UIImagePickerControllerOriginalImage];

The info dictionary passed in has a bunch of interesting keys. The documentation lists them all.

Now that we have an image, we need to get it into a format that Core Image can work with. Fortunately that’s easy: we create a CIImage from the image we pulled out of the info dictionary, like this:

 NSNumber *orientation = [NSNumber numberWithInt:
  [self.image imageOrientation]];
 NSDictionary *imageOptions =
  [NSDictionary dictionaryWithObject:orientation
   forKey:CIDetectorImageOrientation];
 CIImage *ciimage = [CIImage imageWithCGImage:[self.image CGImage]
  options:imageOptions];

We have to pass along the orientation, because CGImage does not contain the orientation information and Core Image needs to know that to recognize faces.

Next, we create a feature detector to scan the image for faces. Here is the code:

 NSDictionary *detectorOptions =
  [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
   forKey:CIDetectorAccuracy];
 CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
  context:nil
  options:detectorOptions];

The first argument must always be CIDetectorTypeFace. Since the type is a parameter, though, we can hope that someday we’ll be able to scan for other things in images as easily as we can scan for faces now. The options dictionary takes only one key, CIDetectorAccuracy, with one of two values: CIDetectorAccuracyHigh, as we have used here, or CIDetectorAccuracyLow. These values let us trade off speed against accuracy: the higher the accuracy, the longer the detection can take.

The last step is to grab the features from the detector, like this:

 self.features = [detector featuresInImage:ciimage];

Each feature that is returned is an instance of CIFaceFeature, one for each detected face. A face feature has four properties we are interested in: the rectangle containing the whole face, and the locations of the left eye, the right eye, and the mouth. We take each feature and draw a shape around it. I’ve provided a screen shot of William Shakespeare’s face with the highlighted shapes drawn.
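
The drawing boils down to a loop over the returned features. This is a sketch; the drawing helpers are hypothetical, and the real sample code also has to flip between Core Image and UIKit coordinates:

 for (CIFaceFeature *feature in self.features) {
  // bounds is the rectangle that contains the whole face
  [self drawShapeInRect:feature.bounds];
  if (feature.hasLeftEyePosition) {
   [self drawShapeAtPoint:feature.leftEyePosition];
  }
  if (feature.hasRightEyePosition) {
   [self drawShapeAtPoint:feature.rightEyePosition];
  }
  if (feature.hasMouthPosition) {
   [self drawShapeAtPoint:feature.mouthPosition];
  }
 }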

While this is a fun but silly use of face detection, it can be combined with other filters to do some really amazing stuff. For example, if you have upgraded to Lion on your Mac, fire up Photo Booth and take a look at the first set of effects. My kids are especially fond of Frog. That set of effects uses face and facial-feature detection to decide where to place each effect: Frog uses the eyes, while Nose Twirl uses both eyes and the mouth to approximate where your nose is. And while Photo Booth is often silly, the same capability shows up in iMovie in a seriously useful way: finding faces in a movie is what makes automated trailer generation possible.

ScreenShot.jpg

iOS 5 is a rich and powerful release. Apple has given us some serious tools to make us more productive as well as some really fun features to inspire our creative side. Now that you’ve seen a bit of what’s in iOS 5, I hope you’ll go out and build something beautiful!

Bill Dudney is co-author of iPhone SDK Development. He is a software developer and entrepreneur currently building software for the Mac. Bill started his computing career on a NeXT cube with a magneto-optical drive running NeXTStep 0.9. Over the years Bill migrated into the Java world where he worked for years on building cool enterprise software. But he never forgot his roots and how much fun it was to write software that did cool things for normal people. Bill is back to AppKit to stay. You can follow him on his blog.