One of the primary benefits of using something like MonoMac is the ability to share code across platforms, especially if you are already familiar with C#. Because we are writing C#, any common logic and data structures can be reused if we want to build a part of the same app for a different platform. By way of example, a popular app named iCircuit (http://icircuitapp.com), which was written using the Mono framework, has been published for iOS, Android, Mac, and also Windows Phone. The iCircuit app achieved nearly 90 percent code reuse on some of the platforms.
The reason that this figure was not 100 percent is that one of the guiding principles of the Mono framework is building applications against native frameworks and interfaces. One of the main points of contention with cross-platform toolkits in the past has been that the results never feel particularly native, because those toolkits settle for the lowest common denominator to maintain compatibility. With Mono, you are encouraged to use a platform's native APIs through C#, so that you can take advantage of all of the strengths of that platform.
The model is where you will find the most reuse, as long as you take care to keep platform-specific dependencies out of it wherever possible. To keep things organized, create a folder named Models in your project; we will use it to store all of our model classes.
As with the Windows 8 application that we built in Chapter 4, Creating a Windows Store App, the first thing we want to do is provide the ability to connect to a URL and download data from a remote server. In this case, though, we just want the HTML text so that we can parse it and look for various attributes. Add a class named WebHelper to the Models folder, as follows:
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

namespace SiteWatcher
{
    internal static class WebHelper
    {
        public static async Task<string> Get(string url)
        {
            var tcs = new TaskCompletionSource<string>();
            var request = WebRequest.Create(url);

            request.BeginGetResponse(o =>
            {
                var response = request.EndGetResponse(o);
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    var result = reader.ReadToEnd();
                    tcs.SetResult(result);
                }
            }, null);

            return await tcs.Task;
        }
    }
}
This is very similar to the WebRequest class that we built in Chapter 4, Creating a Windows Store App, except that it simply returns the HTML string we want to parse instead of deserializing a JSON object. Because the Get method performs remote I/O, we mark it with the async keyword. As a rule of thumb, any I/O-bound method that could take more than 50 milliseconds to complete should be asynchronous; 50 milliseconds is the threshold Microsoft used when deciding which OS-level APIs would be asynchronous.
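To see the shape of the API from the caller's side, here is a minimal, hypothetical usage sketch (the URL and the DumpPageLength method name are invented for illustration):

```csharp
using System;
using System.Threading.Tasks;

namespace SiteWatcher
{
    public static class WebHelperExample
    {
        // Hypothetical caller: awaiting Get does not block the calling
        // thread while the download is in flight.
        public static async Task DumpPageLength()
        {
            string html = await WebHelper.Get("http://example.com");
            Console.WriteLine("Downloaded {0} characters", html.Length);
        }
    }
}
```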
Now, we are going to build the backing storage model for the data that the user enters in the user interface. One of the things we want to do for the user is save their input so that they don't have to re-enter it the next time they launch the application. Thankfully, we can combine one of the built-in classes on Mac OS with the dynamic language features of C# to do this easily.
The NSUserDefaults class is a simple key/value storage API that persists the settings you put into it across application sessions. Programming against a "property bag" like this gives you a very flexible API, but it can be verbose and hard to read at a glance. To mitigate that, we are going to build a thin dynamic wrapper around NSUserDefaults so that our code at least looks strongly typed.
First, make sure that your project has a reference to the Microsoft.CSharp.dll assembly; if not, add it. Then, add a new class file named UserSettings.cs to your Models folder and inherit from the DynamicObject class. Take note of the MonoMac.Foundation namespace used in this class, as this is where the Mono bindings to the Mac's Foundation APIs reside.
using System;
using System.Dynamic;
using MonoMac.Foundation;

namespace SiteWatcher
{
    public class UserSettings : DynamicObject
    {
        NSUserDefaults defaults = NSUserDefaults.StandardUserDefaults;

        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            result = defaults.ValueForKey(new NSString(binder.Name));

            // a key that was never set comes back as an empty string
            if (result == null)
                result = string.Empty;

            return true;
        }

        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            defaults.SetValueForKey(NSObject.FromObject(value), new NSString(binder.Name));
            return true;
        }
    }
}
We only need to override two methods, TryGetMember and TrySetMember. In each, we use the NSUserDefaults class, a native Mac API, to get or set the given value. This is a great example of how we can bridge into the native platform we are running on while still having a C#-friendly API surface to program against.
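To illustrate what this buys us, here is a hypothetical snippet of calling code. The property names (Url here) are whatever the caller invents; the dynamic binder routes each access through the two overrides:

```csharp
// Hypothetical usage sketch: every property access on a dynamic
// UserSettings is persisted to, and read back from, NSUserDefaults.
dynamic settings = new UserSettings();

settings.Url = "http://example.com";  // routed through TrySetMember("Url")
var saved = settings.Url;             // routed through TryGetMember("Url")
```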
Of course, the astute reader will remember that, at the beginning of this chapter, I said we should keep platform-specific code out of the model where possible. That is, as these things usually are, more of a guideline. If we wanted to port this program to another platform, we could simply swap the internal implementation of this class for something appropriate to that platform, such as SharedPreferences on Android or ApplicationDataContainer on Windows RT.
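One way to formalize that guideline (a sketch, not code from this chapter; ISettingsStore and InMemorySettings are invented names) is to hide the storage behind a small interface that each platform implements, with the shared model depending only on the interface:

```csharp
using System.Collections.Generic;

// Hypothetical abstraction: the Mac project would implement this over
// NSUserDefaults, Android over SharedPreferences, and Windows RT over
// ApplicationDataContainer.
public interface ISettingsStore
{
    string Get(string key);
    void Set(string key, string value);
}

// Trivial in-memory implementation, useful for unit-testing the model.
public class InMemorySettings : ISettingsStore
{
    private readonly Dictionary<string, string> store =
        new Dictionary<string, string>();

    public string Get(string key)
    {
        string value;
        // mirror the UserSettings behavior: missing keys yield ""
        return store.TryGetValue(key, out value) ? value : string.Empty;
    }

    public void Set(string key, string value)
    {
        store[key] = value;
    }
}
```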
Next, we are going to build the class that encapsulates most of our primary business logic. In cross-platform development, this is a prime candidate for code shared across all platforms; the better you abstract your logic into self-contained classes like this one, the more of it you will be able to reuse.
Create a new file, called WebDataSource.cs, in the Models folder. This class will be responsible for going out over the Web and parsing the results. Once the class has been created, add the following two members to it:
private List<string> results = new List<string>();

public IEnumerable<string> Results
{
    get { return this.results; }
}
This list of strings is what will drive the user interface whenever we find a match in a website's source. To parse the HTML for those results, we can take advantage of a great open source library called the HTML Agility Pack, which you can find on CodePlex (http://htmlagilitypack.codeplex.com/).
When you download the package and unzip it, look in the Net45 folder for the file named HtmlAgilityPack.dll. This assembly will work on all CLR platforms, so you can copy it right into your project. Add the assembly as a reference by right-clicking on the References node in the Solution Explorer and choosing Edit References | .NET Assembly. Browse to the HtmlAgilityPack.dll assembly from the .NET Assembly tab and click on OK.
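Before wiring the library into the app, it may help to see it in isolation. This small sketch (the markup is made up for illustration) loads an HTML fragment and selects anchor tags with an XPath query, exactly the pattern we will use shortly:

```csharp
using System;
using HtmlAgilityPack;

class AgilityPackDemo
{
    static void Main()
    {
        var doc = new HtmlDocument();
        doc.LoadHtml("<html><body><a href='/post/1'>First post</a></body></html>");

        // SelectNodes takes an XPath expression, just like the
        // LinkXPath setting our app will ask the user for
        foreach (var link in doc.DocumentNode.SelectNodes("//a"))
        {
            Console.WriteLine(link.Attributes["href"].Value);
        }
    }
}
```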
Now that we have added this dependency, we can start writing the primary logic of the application. Remember, our goal is an interface that lets us spider a website looking for a particular piece of text. Add the following method to the WebDataSource class:
public async Task Retrieve()
{
    dynamic settings = new UserSettings();

    var htmlString = await WebHelper.Get(settings.Url);

    HtmlDocument html = new HtmlDocument();
    html.LoadHtml(htmlString);

    foreach (var link in html.DocumentNode.SelectNodes(settings.LinkXPath))
    {
        string linkUrl = link.Attributes["href"].Value;
        if (!linkUrl.StartsWith("http"))
        {
            linkUrl = settings.Url + linkUrl;
        }

        // get this URL
        string post = await WebHelper.Get(linkUrl);
        ProcessPost(settings, link, post);
    }
}
The Retrieve method, marked with the async keyword so that it can await asynchronous operations, starts by instantiating the UserSettings class as a dynamic object so that we can pull out the values entered in the UI. Next, we retrieve the initial URL and load the result into an HtmlDocument class, which lets us parse out all of the links we are looking for. Here is where it gets interesting: for each link, we retrieve that URL's content asynchronously and process it.
You might assume that, because we await inside the loop, each iteration will execute concurrently. But remember that asynchrony does not necessarily mean concurrency. In this case, the compiler rewrites the code so that the main thread is not held up while waiting for the HTTP calls to complete, but the loop does not continue iterating while waiting either, so each iteration completes in the correct sequence.
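If you did want the downloads to overlap, a common pattern (sketched here against our WebHelper; the linkUrls variable is hypothetical) is to start every request first, and only then await them all together:

```csharp
// Sequential, as written in Retrieve(): each await completes before
// the next request starts.
//
//     foreach (var url in linkUrls)
//         pages.Add(await WebHelper.Get(url));

// Hypothetical concurrent variant: kick off every request up front,
// then wait once for all of them; results come back in request order.
var tasks = new List<Task<string>>();
foreach (var url in linkUrls)
    tasks.Add(WebHelper.Get(url));

string[] pages = await Task.WhenAll(tasks);
```

Note that the sequential form is often the right choice here anyway, since firing dozens of simultaneous requests at one site is unfriendly to the server.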
Finally, we implement the ProcessPost method, which takes the contents of a single URL and searches it using the regular expression provided by the user:
private void ProcessPost(dynamic settings, HtmlNode link, string postHtml)
{
    // parse the doc to get the content area: settings.ContentXPath
    HtmlDocument postDoc = new HtmlDocument();
    postDoc.LoadHtml(postHtml);

    var contentNode = postDoc.DocumentNode.SelectSingleNode(settings.ContentXPath);
    if (contentNode == null)
        return;

    // apply settings.TriggerRegex
    string contentText = contentNode.InnerText;
    if (string.IsNullOrWhiteSpace(contentText))
        return;

    Regex regex = new Regex(settings.TriggerRegex);
    var match = regex.Match(contentText);

    // if found, add to results
    if (match.Success)
    {
        results.Add(link.InnerText);
    }
}
With the WebDataSource class completed, we have everything we need to start working on the user interface. It goes to show how a few good abstractions (WebHelper and UserSettings) and new language features such as async and await can be combined to produce relatively complex functionality, all while maintaining a good performance profile.
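Before moving on, here is a sketch of how a consumer of this class might drive it; this is purely illustrative (the UI code we write next will do the equivalent from a view controller):

```csharp
// Hypothetical consumer: download, parse, and list the matching links.
var dataSource = new WebDataSource();
await dataSource.Retrieve();

foreach (var title in dataSource.Results)
{
    Console.WriteLine(title);
}
```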