So far you have learned the various techniques that you can use to attack and exploit vulnerabilities within iOS applications. This chapter progresses from the offensive aspects of mobile app security to documenting the ways in which you can secure an application. Understanding the defensive strategies that an application can employ is essential knowledge for any security professional or developer; not only does it help you offer remedial and preventative advice, but understanding the intricacies of defense can also make you a better tester.
This chapter covers the ways in which you can protect the data in your application, not only at rest but also in transit. It also details how you can avoid some of the injection attacks that were detailed in Chapter 3, as well as how to begin building defenses into your application to slow down your adversary and hopefully make them consider softer targets.
In most mobile applications, the data is what is of most interest to an attacker. As such, it is important to consider how your data is received, processed, transmitted to other components and hosts, and ultimately destroyed. This section details how to protect data within your application and reduce the likelihood of it being intercepted or compromised by an attacker.
Prior to implementation, it is important to consider how your desired functionality may impact the security of your application. With a little thought and a carefully constructed design plan, you can avoid or mitigate many common vulnerabilities. Following are several factors that you might want to consider when designing your application:
- Data storage: Consider how and where your application stores its data. For example, storing sensitive data in NSUserDefaults will lead to its quickly being identified by an attacker, whereas data stored using steganography and embedded within an image file used by your application is likely to be discovered only through a significant amount of reverse engineering. In addition to how you store data, you should consider what data your application may be inadvertently storing as a consequence of the functionality you have built into it. A good example is if your application uses a UIWebView: you may not be aware that you are inadvertently caching web data, cookies, form input, and potentially other content just by virtue of using this class!
- Client-side authentication: If your application authenticates the user locally, consider the LocalAuthentication framework and TouchID, which can offer validation that the user is physically present, providing no tampering has taken place. You should also consider several important factors when implementing client-side authentication: namely, whether the passcode is stored and if so, where; how it is validated; the key space of the passcode; and how other application areas will be protected until the authentication has been completed.
- Network communications: Consider what data your application transmits; for example, avoid sending identifying data such as the device UDID and geolocation information to online resources.

These examples are just a handful of the key design considerations that you should assess prior to developing an application. In general, design is a critical stage in the software development lifecycle (SDL) for any application and you should use it to preempt vulnerabilities before development begins.
As you will know from the section “Understanding the Data Protection API” in Chapter 2, you can encrypt individual files on the filesystem using a key derived from the user’s passcode. However, the usual recommendation to secure sensitive information is to supplement this encryption with your own encryption implementation to give additional assurance against the following scenarios:
This section only briefly touches on the topic of encryption principles because a thorough examination is far beyond the scope of this book.
Implementing an encryption scheme in your application is often a daunting task, and one that you should not take lightly. You must consider many factors to avoid inadvertently exposing your data to unauthorized access. The following is a set of guidelines that you should follow when implementing encryption within your application:
https://s3.amazonaws.com/s3.documentcloud.org/documents/1302613/ios-security-guide-sept-2014.pdf).

Apple provides a number of APIs to help you accomplish many of the common tasks that you will likely need when implementing an encryption solution in your application, many of which come as part of the Security
framework or the Common Crypto library. You will find some example use cases in this section.
To obtain entropy or a cryptographically secure block of random bytes using the /dev/random
random-number generator, you can use the SecRandomCopyBytes
function. A sample implementation used to generate a 128-bit salt is shown here:
+ (NSData *)generateSalt:(size_t)length
{
    NSMutableData *data = [NSMutableData dataWithLength:length];
    int result = SecRandomCopyBytes(kSecRandomDefault, length,
                                    data.mutableBytes);
    if (result != 0) {
        NSLog(@"%@", @"Unable to generate salt");
        return nil;
    }
    return data;
}

+ (NSData *)salt
{
    return [self generateSalt:16];
}
Here is a simple implementation of how to generate a 256-bit AES key using PBKDF2 and the Common Crypto library by virtue of the CCKeyDerivationPBKDF
function:
+ (NSData *)generateKey:(NSString *)password salt:(NSData *)salt
                 rounds:(uint)rounds
{
    // The output buffer must match the requested key size
    // (kCCKeySizeAES256 is 32 bytes)
    NSMutableData *key = [NSMutableData dataWithLength:kCCKeySizeAES256];
    int result = CCKeyDerivationPBKDF(kCCPBKDF2, [password UTF8String],
        [password lengthOfBytesUsingEncoding:NSUTF8StringEncoding],
        [salt bytes], [salt length], kCCPRFHmacAlgSHA256, rounds,
        key.mutableBytes, kCCKeySizeAES256);
    if (result != kCCSuccess)
    {
        NSLog(@"%@", @"Unable to generate key");
        return nil;
    }
    return key;
}
A common problem faced by developers is how to go about encrypting content stored in a database, which often leads to you “rolling your own” encryption solution to encrypt content before it is inserted into the database. This has the obvious disadvantage of leaving the database metadata unencrypted. A popular solution to this problem is SQLCipher (https://www.zetetic.net/sqlcipher/
), which is an open-source SQLite database implementation that supports encryption. Using SQLCipher certainly makes encryption of SQLite databases relatively seamless. Here is a simple implementation:
- (void)openDatabaseConnection:(NSString *)dbName pass:(NSString *)password
{
    NSString *databasePath =
        [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
            NSUserDomainMask, YES) objectAtIndex:0]
            stringByAppendingPathComponent:dbName];
    sqlite3 *db;
    if (sqlite3_open([databasePath UTF8String], &db) == SQLITE_OK) {
        const char *key = [password UTF8String];
        sqlite3_key(db, key, (int)strlen(key));
        if (sqlite3_exec(db, "SELECT count(*) FROM sqlite_master;",
                         NULL, NULL, NULL) == SQLITE_OK) {
            // password is correct
        } else {
            // incorrect password!
        }
        sqlite3_close(db);
    }
}
In this example, a database relative to the application’s Documents folder can be opened using the appropriate database encryption password. Of course, the same principles apply as previously noted and the key should be derived from input that is taken from the user.
In summary, encryption is a key security control that you can use in your application to protect sensitive data (not just on the filesystem!), and in most cases you should implement your own form of encryption in addition to that of the Data Protection API. Although a number of pitfalls exist, implementing encryption securely is possible and when doing so you should use a password derived from the user to generate your encryption key instead of using a static or hard-coded key in your application.
So far you have learned how to secure your data at rest; however, more than likely you will at some point need to communicate your data to a server-side application. Chapter 3 detailed the need for a secure channel and covered some of the pitfalls that can occur when implementing one. You also learned how, with sufficient access to the operating system, you could bypass security controls such as certificate pinning. However, pinning remains an important security control and is generally recommended for any application. In case you skipped this section of Chapter 3, certificate pinning is the process of associating a particular host that you connect to with a known and expected certificate or public key. This protection gives you additional confidence that the host you are connecting to is who it claims to be and negates the impact of a compromised Certificate Authority. In short, the process requires you to embed a public key or certificate within your application, allowing you to compare it against what the server presents during your SSL session. The OWASP wiki provides an excellent write-up of the advantages of certificate pinning, including examples of how to implement it across different platforms (https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
). For completeness, a short example of how you would implement this, borrowed from the aforementioned resource, is described here.
Within the didReceiveAuthenticationChallenge
delegate method for your NSURLConnection
, you should include the following code, which reads the mahh.der
certificate from within the application’s bundle directory and does a binary comparison against the certificate presented by the server:
- (void)connection:(NSURLConnection *)connection
    didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    if ([[[challenge protectionSpace] authenticationMethod]
            isEqualToString:NSURLAuthenticationMethodServerTrust])
    {
        do
        {
            SecTrustRef serverTrust = [[challenge protectionSpace] serverTrust];
            if (nil == serverTrust)
                break; /* failed */

            OSStatus status = SecTrustEvaluate(serverTrust, NULL);
            if (errSecSuccess != status)
                break; /* failed */

            SecCertificateRef serverCertificate =
                SecTrustGetCertificateAtIndex(serverTrust, 0);
            if (nil == serverCertificate)
                break; /* failed */

            CFDataRef serverCertificateData =
                SecCertificateCopyData(serverCertificate);
            if (nil == serverCertificateData)
                break; /* failed */

            const UInt8 *const data = CFDataGetBytePtr(serverCertificateData);
            const CFIndex size = CFDataGetLength(serverCertificateData);
            NSData *cert1 = [NSData dataWithBytes:data length:(NSUInteger)size];
            CFRelease(serverCertificateData); /* copied into cert1; release it */

            NSString *file = [[NSBundle mainBundle] pathForResource:@"mahh"
                                                             ofType:@"der"];
            NSData *cert2 = [NSData dataWithContentsOfFile:file];

            if (nil == cert1 || nil == cert2)
                break; /* failed */

            const BOOL equal = [cert1 isEqualToData:cert2];
            if (!equal)
                break; /* failed */

            // The only good exit point
            return [[challenge sender]
                useCredential:[NSURLCredential credentialForTrust:serverTrust]
                forAuthenticationChallenge:challenge];
        } while (0);

        // Bad dog
        return [[challenge sender] cancelAuthenticationChallenge:challenge];
    }
}
Insecurely developed iOS applications can be plagued with a variety of injection-style vulnerabilities, much the same way as traditional web applications can. Injection vulnerabilities can occur any time an application accepts user-controlled input; however, they most commonly manifest when a response is received from a server-side application that contains tainted data. A simple example of this would be a social networking application that reads status updates of the user’s friends; in this instance the status updates should be regarded as potentially tainted data. This section details how to reliably avoid the two most common types of injection vulnerability: SQL injection and cross-site scripting (XSS).
One of the most common injection attacks is SQL injection, and those of you familiar with web application testing will undoubtedly have knowledge of it. This type of attack can happen any time an application directly populates tainted data into an SQL query, and although the consequences within a mobile application are likely to be much less serious, you should take appropriate preventative measures.
Much like the recommendations for an SQL injection vulnerability in a web application, you can achieve reliable avoidance using parameterized SQL queries in which you substitute placeholders for the strings you want to populate to your query. By far the most popular database in use by iOS applications is SQLite. SQLite provides sqlite3_prepare
, sqlite3_bind_text
, and similar functions to parameterize your queries and bind the relevant values to your parameters. Consider the following example, which constructs a query, parameterizes it, and then binds the user-controlled values to the query:
NSString *safeInsert = @"INSERT INTO messages(uid, message, username) "
                        "VALUES(?, ?, ?)";
if (sqlite3_prepare(database, [safeInsert UTF8String], -1, &statement, NULL)
        != SQLITE_OK)
{
    // Unable to prepare statement
}
if (sqlite3_bind_text(statement, 2, [status.message UTF8String], -1,
        SQLITE_TRANSIENT) != SQLITE_OK)
{
    // Unable to bind variables
}
This example shows how to bind the status.message
variable to a text column in the query. To add the remaining variables, you would use similar code and the function appropriate to the type of column you want to bind to.
Cross-site scripting (XSS) can occur any time that tainted data is populated into a UIWebView
, and the consequences can vary depending on how the web view is loaded, the permissions your application has, and whether your application exposes additional functionality using a JavaScript to Objective-C bridge.
A number of approaches can help you not only thwart cross-site scripting attacks, but also minimize the impact they can have if they do occur:

- Carefully consider the origin of any content that you load into a UIWebView, and always avoid loading it with the file:// protocol handler.
- Avoid passing tainted data into JavaScript executed via the UIWebView method stringByEvaluatingJavaScriptFromString.
- Take care when constructing dynamic HTML for a UIWebView using tainted data. Ensure appropriate sanitization and encoding takes place before loading your HTML into the web view. This problem is particularly common when using the UIWebView method loadHTMLString.

When working with HTML and XML you may need to dynamically populate potentially tainted data into a web view. In these scenarios you can achieve some confidence that cross-site scripting has been avoided by encoding any data that you believe could be tainted. The following rules determine how specific meta-characters should be encoded:
- Encode < as &lt; everywhere
- Encode > as &gt; everywhere
- Encode & as &amp; everywhere
- Encode " as &quot; inside attribute values
- Encode ' as &apos; inside attribute values

A relatively new consideration, binary protections were introduced into the OWASP mobile top ten in January 2014, and although their merit has been the subject of some controversy, they can undoubtedly provide a means to slow down your adversary. The term generically describes the security controls that can be implemented within a mobile application. These protections attempt to achieve the following goals:
According to a research study by Hewlett-Packard in 2013 (http://www8.hp.com/us/en/hp-news/press-release.html?id=1528865#.U_tU4YC1bFO
), 86% of the mobile applications that they reviewed lacked adequate binary hardening. Applications failing to implement any form of binary protection are typically an easier target for cybercriminals and can be more at risk of one or more of the following categories of attack:
If you have conducted mobile application security assessments on a regular basis, you have likely encountered some binary protections. Improving your understanding of the defenses that you’re trying to break or attack will always help you become a better attacker. In the subsequent sections we detail some of the protections that we have encountered, assisted in developing, and in some cases had to circumvent. You should be aware that on their own all of these protections are trivial to bypass, even by attackers with a basic knowledge of reverse engineering. However, when combined and correctly implemented they can significantly increase the complexity of reverse engineering and attacks against your application.
Before delving into this topic it is also important to stress that binary protections do not solve any underlying issues that an application might have and should by no means be used to plaster over any cracks that exist. Binary protections simply exist as a defense-in-depth control to slow down an attacker and perhaps shift them on to a softer target.
Perhaps the most commonly implemented of the different binary protections, jailbreak detection attempts to determine whether the application is running on a jailbroken or otherwise-compromised device. If the detection mechanisms are triggered, the application will typically implement some form of reactive measures; common reactions include:
You can use several techniques to perform jailbreak detection; however, be aware that these are often trivial to bypass unless other protections are also in place. At a high-level some of the common methods of detection that you might encounter include:
The following sections cover these detection methods and provide brief sample implementations and proof of concepts where applicable.
When a device is jailbroken, this process will almost always leave an imprint on the filesystem: typically, artifacts that will be used by the user post-jailbreak or residual content from the jailbreak process itself. Attempting to find this content can often be used as a reliable means of determining the status of a device.
To achieve the best and most reliable results, you should use a mixture of file-handling routines, both from SDK APIs such as NSFileManager fileExistsAtPath
and standard POSIX-like functions such as stat()
. Using a mixture of functions to determine the presence of a file or directory means that you may still achieve some success if your attacker is instrumenting only a subset of your functions. Where possible you should inline these functions, which causes the compiler to embed the full body of the function rather than a function call; inlining means that your attacker must identify and patch each instance of your jailbreak detection.
Here is a simple example of how to implement this:
inline int checkPath(char *path) __attribute__((always_inline));
int checkPath(char *path)
{
    struct stat buf;
    int exist = stat(path, &buf);
    if (exist == 0)
    {
        return 1;
    }
    return 0;
}
You could leverage this example by passing it paths associated with a jailbreak; assuming no tampering has occurred, the function will return 1 if the file exists. Some common paths that you can use to identify the presence of a jailbreak/root are
/bin/bash
/usr/sbin/sshd
/Applications/Cydia.app
/private/var/lib/apt
/pangueaxe
/System/Library/LaunchDaemons/io.pangu.axe.untether.plist
/Library/MobileSubstrate/MobileSubstrate.dylib
/usr/libexec/sftp-server
/private/var/stash
To avoid easy detection by reverse engineering, use encryption or obfuscation to disguise the paths that you validate.
Many users of jailbroken devices install remote access software to allow them to interactively access their device; this often causes a nondefault port to be opened on the device. The most popular software to achieve this is OpenSSH, which in its default configuration causes TCP port 22 to be opened on the device.
You can generally safely assume that if SSH or other non-default ports are open on a device that it may have been jailbroken. Therefore, an additional detection technique that you can employ is to scan the device’s interfaces for nondefault ports, performing banner grabbing for additional confidence where necessary. A simple example of how you might check the loopback interface to determine whether a given port is open is shown next; again, in a production application, you may want to encrypt or obfuscate strings to mitigate against easy identification through reverse engineering:
inline int isPortOpen(short port) __attribute__((always_inline));
int isPortOpen(short port)
{
    struct sockaddr_in addr;
    int sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (sock < 0)
        return 0;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr))
    {
        int result = connect(sock, (struct sockaddr *)&addr, sizeof(addr));
        if (result == 0) {
            close(sock); // don't leak the descriptor on success
            return 1;
        }
    }
    close(sock);
    return 0;
}
It is well documented that many mobile devices sandbox applications to prevent interaction with other applications on the device and the wider OS. On iOS devices you may also find that jailbreaking your device weakens the sandbox in some way. As an application developer, testing the constraints of the sandbox may give you some confidence as to whether the device has been jailbroken.
An example of sandbox behavior that differs between jailbroken and non-jailbroken devices is how the fork()
function operates; on a non-jailbroken device it should always fail because third-party applications are not allowed to spawn a new process; however, on some jailbroken devices the fork()
will succeed. You can use this behavior to determine whether the sandbox has weakened and the device has been jailbroken. The following is a simple example of how you can implement this:
inline int checkSandbox() __attribute__((always_inline));
int checkSandbox() {
    int result = fork();
    if (result == 0) {
        _exit(0); // child: exit immediately so two copies don't keep running
    }
    if (result > 0) {
        waitpid(result, NULL, 0);
        return 1; // fork succeeded: the sandbox has been weakened
    }
    return 0; // fork failed, as expected inside an intact sandbox
}
In some cases, applications installed through third-party application stores may also run with elevated privileges (for example, root) as opposed to those of the standard mobile user. As such, the sandbox restrictions may not be in force and you can use an attempt to write to a file outside of the sandbox as a test case for determining the integrity of the device. Here is a simple example of how to implement this:
inline int checkWrites() __attribute__((always_inline));
int checkWrites()
{
    FILE *fp = fopen("/private/shouldnotopen.txt", "w");
    if (!fp)
        return 1; // write failed: sandbox appears intact
    fclose(fp);
    remove("/private/shouldnotopen.txt"); // clean up the test file
    return 0; // write succeeded: sandbox restrictions are not in force
}
On iOS devices the disk is partitioned in a way such that the read-only system partition is often much smaller than the data partition. Stock system applications reside on the system partition under the /Applications
folder by default. However, as part of the jailbreaking process, many jailbreaks relocate this folder so that additional applications can be installed in it without consuming the limited disk space. This is typically achieved by creating a symbolic link to replace the /Applications
directory, and linking to a newly created directory within the data partition. Modifying the filesystem in this manner provides an opportunity for you to look for further evidence of a jailbreak; if /Applications
is a symbolic link as opposed to a directory you can be confident that the device is jailbroken. A simple example of how to implement this check is shown next; you should call this function with the path you want to check (such as /Applications
) as the argument:
inline int checkSymLinks(char *path) __attribute__((always_inline));
int checkSymLinks(char *path)
{
    struct stat s;
    if (lstat(path, &s) == 0)
    {
        if (S_ISLNK(s.st_mode))
            return 1;
    }
    return 0;
}
Aside from /Applications
, jailbreaks often create a number of other symbolic links that you should also validate for further confidence.
Frameworks such as Cydia Substrate (http://www.cydiasubstrate.com/
) and Frida (http://www.frida.re/
) make instrumentation of mobile runtimes a relatively straightforward process and can often be leveraged to modify application behavior and bypass security controls or to leak or steal sensitive data. In some cases they have also been abused by malware that targets jailbroken devices as was the case with the “Unflod Baby Panda malware” (https://www.sektioneins.de/en/blog/14-04-18-iOS-malware-campaign-unflod-baby-panda.html
). Instrumentation leads to a situation whereby an application cannot always trust its own runtime. For a secure application, additional validation of the runtime is recommended.
The typical approach to runtime hooking used by frameworks such as Cydia Substrate is to inject a dynamic library into the address space of your application and replace the implementation of a method that the attacker wants to instrument. This typically leaves behind a trail that you can use to gain some confidence as to whether your application is being instrumented. First, methods from within Apple's SDKs will typically originate from a finite set of locations, specifically:
/System/Library/TextInput
/System/Library/Accessibility
/System/Library/PrivateFrameworks/
/System/Library/Frameworks/
/usr/lib/
Furthermore, methods internal to your application should reside within your application binary itself. You can verify the source location of a method using the dladdr()
function, which takes a function pointer to the function that you want to retrieve information about. The following is a simple implementation that iterates a given class’ methods and checks the source location of the image against a set of known possible image locations. Finally, it checks whether the function resides within a path relative to the application itself:
int checkClassHooked(char *class_name)
{
    int n;
    Dl_info info;
    id c = objc_lookUpClass(class_name);
    Method *m = class_copyMethodList(c, &n);
    for (int i = 0; i < n; i++)
    {
        void *methodimp = (void *)method_getImplementation(m[i]);
        if (!dladdr((const void *)methodimp, &info)) {
            free(m);
            return YES;
        }
        // Implementations in the known system locations are fine
        if (strncmp(info.dli_fname, "/usr/lib/", 9) == 0)
            continue;
        if (strncmp(info.dli_fname, "/System/Library/Frameworks/", 27) == 0)
            continue;
        if (strncmp(info.dli_fname,
                    "/System/Library/PrivateFrameworks/", 34) == 0)
            continue;
        if (strncmp(info.dli_fname, "/System/Library/Accessibility", 29) == 0)
            continue;
        if (strncmp(info.dli_fname, "/System/Library/TextInput", 25) == 0)
            continue;
        // Finally, check the image name against the app's own image location
        // (image_name is assumed to hold the application binary's path)
        if (strcmp(info.dli_fname, image_name) == 0)
            continue;
        free(m);
        return YES; // method resides in an unexpected image: likely hooked
    }
    free(m);
    return NO;
}
When using this implementation in an application, you should obfuscate or encrypt the image paths to prevent easy identification from reverse engineering.
As previously noted, when the aforementioned frameworks are used to modify an application, they inject a dynamic library into the application’s address space. Scanning your application’s address space and retrieving the list of currently loaded modules is therefore also possible; scanning each of these modules for known signatures or image names can help you determine whether a library has been injected. Consider the following simple example that iterates the list of currently loaded images, retrieves the image name using _dyld_get_image_name()
, and looks for substrings of known injection libraries:
inline void scanForInjection() __attribute__((always_inline));
void scanForInjection()
{
    uint32_t count = _dyld_image_count();
    char *evilLibs[] =
    {
        "Substrate", "cycript"
    };
    for (uint32_t i = 0; i < count; i++)
    {
        const char *dyld = _dyld_get_image_name(i);
        int slength = strlen(dyld);
        int j;
        for (j = slength - 1; j >= 0; --j)
            if (dyld[j] == '/') break;
        char *name = strndup(dyld + ++j, slength - j);
        for (int x = 0; x < sizeof(evilLibs) / sizeof(char *); x++)
        {
            if (strstr(name, evilLibs[x]) || strstr(dyld, evilLibs[x]))
                fprintf(stderr, "Found injected library matching string: %s",
                        evilLibs[x]);
        }
        free(name);
    }
}
Another interesting technique for identifying hooking is to examine how hooks operate at a low level and attempt to locate similar signatures in your application. As an example, consider a simple hook that has been placed on the fork()
function; first retrieve the address of the fork()
function:
NSLog(@"Address of fork = %p", &fork);
This should print something similar to the following in the console log:
2014-09-25 19:09:28.619 HookMe[977:60b] Address of fork = 0x3900b7a5
Then run your application and examine the disassembly of the function without the hook in place (truncated for brevity):
(lldb) disassemble -a 0x3900b7a5
libsystem_c.dylib`fork:
0x3900b7a4: push {r4, r5, r7, lr}
0x3900b7a6: movw r5, #0xe86c
0x3900b7aa: add r7, sp, #0x8
0x3900b7ac: movt r5, #0x1d0
0x3900b7b0: add r5, pc
0x3900b7b2: ldr r0, [r5]
0x3900b7b4: blx r0
0x3900b7b6: blx 0x39049820
Repeating these steps again shows a different result when the fork()
function is being hooked:
(lldb) disassemble -a 0x3900b7a5
libsystem_c.dylib`fork:
0x3900b7a4: bx pc
0x3900b7a6: mov r8, r8
0x3900b7a8: .long 0xe51ff004
0x3900b7ac: bkpt #0x79
0x3900b7ae: lsls r5, r1, #0x6
0x3900b7b0: add r5, pc
0x3900b7b2: ldr r0, [r5]
0x3900b7b4: blx r0
As you can see, the opcode signature is entirely different. This can be attributed to the trampoline that is inserted at 0x3900b7a8
by the Cydia Substrate framework. In assembly, the opcode 0xe51ff004
equates to the ldr pc, [pc-4]
instruction that causes the application to jump to the location pointed to by the next word after the current value of the pc
register, in this case 0x018dbe79
.
Using this information you can now write a short routine to detect trampolines in your functions before you call them, and as a consequence, determine whether it is being hooked. This is demonstrated in the following simple example:
inline int checkFunctionHook(void *funcptr) __attribute__((always_inline));
int checkFunctionHook(void *funcptr)
{
    unsigned int *funcaddr = (unsigned int *)funcptr;
    if (funcaddr) {
        if (funcaddr[0] == 0xe51ff004) return 1;
    }
    return 0;
}
Note that additional checks may be required depending on the architecture that your application is running under. You can also use similar techniques to detect hooking of native code on the Android platform.
The tamperproofing protection mechanism is not widely deployed but can typically be found in applications that have the most sensitive operating environments. Integrity validation attempts to ensure that static application resources such as HTML files or shared libraries, as well as internal code structures, have not been modified. From a native code perspective, this protection specifically looks to thwart attackers that have “patched” the assembly for your application.
Integrity validation is often implemented using checksums, with CRC32 being a popular choice due to its speed and simplicity. To validate static application resources such as HTML or shared library files the developer would calculate a checksum for each resource (or indeed all resources combined) and embed it in the application along with a validation routine to recalculate and compare the stored checksum periodically during the application’s runtime. Similarly, to validate internal code structures, the application must have some means of calculating the stored checksum.
Implementing such protections without external resources (such as the compiler or Mach-O/ELF modification tools) typically means running the application and allowing it to self-generate a checksum of a function or set of functions, then manually embedding the calculated checksum into the binary. You can achieve some success with this method when you manually embed a “web” of checksum validation routines, but it has a number of drawbacks—primarily the inability to automatically randomize the protection across builds, as well as the manual effort required to implement and maintain it.
A more complex but significantly better approach is to use the power of the low-level virtual machine (LLVM) compiler and allow native code within iOS and Android applications to be self-validating. Using this approach you can create an optimization pass that leverages LLVM’s JIT compiler to programmatically compile and modify the LLVM bytecode. This strategy allows you to automatically calculate a checksum for your JIT-compiled function and insert validation routines across the binary during the application’s compilation process, without any modification to the code.
You should be aware that although integrity validation is a powerful protection mechanism, ultimately a knowledgeable adversary could always bypass it because all the validation routines occur within the binary itself. In the event that your checksum calculation functions can be easily identified—for example, through a specific signature or via cross references—the attacker could simply patch out your routines to leave the application unprotected.
Debugging is a popular technique used when reverse engineering mobile applications. It provides an insight into the internal workings of an application and allows an attacker to modify control flow or internal code structures to influence application behavior. This can have significant consequences for a security-conscious application; some example use cases where debugging might be applied are to extract cryptographic key material from an application, manipulate an application’s runtime by invoking methods on existing objects, or to understand the significance of an attacker-generated fault.
Although preventing a privileged attacker from debugging your application is conceptually impossible, you can take some measures to increase the complexity and time required for an attacker to achieve debugging results.
On iOS, debugging is usually achieved using the ptrace()
system call. However, you can call this function from within your third-party application and provide a specific operation that tells the system to prevent tracing from a debugger. If the process is currently being traced then it will exit with the ENOTSUP
status. As mentioned, this is unlikely to thwart a skilled adversary but does provide an additional hurdle to overcome. The following is a simple implementation of this technique. You should implement it not only throughout your application but also as close to the process start (such as in the main function or a constructor) as possible:
#include <dlfcn.h>
#include <sys/types.h>

/* PT_DENY_ATTACH is defined in <sys/ptrace.h>, which is not exposed in the iOS SDK */
#define PT_DENY_ATTACH 31
typedef int (*ptrace_ptr_t)(int request, pid_t pid, caddr_t addr, int data);

inline void denyPtrace(void) __attribute__((always_inline));
void denyPtrace(void)
{
    /* Resolve ptrace() at runtime so the symbol does not appear in the import table */
    ptrace_ptr_t ptrace_ptr = dlsym(RTLD_SELF, "ptrace");
    ptrace_ptr(PT_DENY_ATTACH, 0, 0, 0);
}
You may also want to implement a secondary measure of detecting whether your application is being debugged to add further resilience in the event that your PT_DENY_ATTACH
operation has been overcome. To detect whether a debugger is attached to your application you can use the sysctl()
function. This doesn’t explicitly prevent a debugger from being attached to your application but returns sufficient information about your process to allow you to determine whether it is being debugged. When invoked with the appropriate arguments, the sysctl()
function returns a structure with a kp_proc.p_flag
flag that indicates the status of the process and whether or not it is being debugged. The following is a simple example of how to implement this:
#include <sys/types.h>
#include <sys/sysctl.h>
#include <unistd.h>

inline int checkDebugger(void) __attribute__((always_inline));
int checkDebugger(void)
{
    int name[4];
    struct kinfo_proc info;
    size_t info_size = sizeof(info);
    info.kp_proc.p_flag = 0;

    /* Ask the kernel for information about this process */
    name[0] = CTL_KERN;
    name[1] = KERN_PROC;
    name[2] = KERN_PROC_PID;
    name[3] = getpid();

    if (sysctl(name, 4, &info, &info_size, NULL, 0) == -1) {
        /* Fail closed: treat an error as a possible debugger */
        return 1;
    }

    /* P_TRACED is set when the process is being traced by a debugger */
    return ((info.kp_proc.p_flag & P_TRACED) != 0);
}
These are just a few of the strategies that exist for debugger detection; many others are possible. Indeed, there is scope to be quite creative with more convoluted strategies such as execution timing, where you record the amount of time it takes to complete a set of operations; if the elapsed time falls outside a margin of acceptable execution times, you can have some assurance that your application is being debugged.
In its simplest definition, obfuscation is a technique used to complicate reverse engineering by making code difficult to understand. This principle is well understood throughout computer science, and the topic is far beyond the scope of this book; indeed, whole research projects have been dedicated to it alone. Instead, we focus on how it is relevant to mobile applications and how you can apply it to iOS applications.
It is common knowledge that without obfuscation, Objective-C is relatively simple to reverse engineer. As you discovered in Chapter 2, retrieving class, method, and variable names from the OBJC segment of a Mach-O binary is possible. This fact can be a thorn in the side of any developer who wants to protect their intellectual property, and therefore obfuscation is often used to disguise the operations of an application without modifying its expected outcomes. At a high level, some of the techniques used by obfuscators include:
Few options exist for obfuscating native code, with the exception of the Obfuscator-LLVM project, which can be used to obfuscate the Android NDK or iOS applications using an LLVM compiler optimization pass. Obfuscator-LLVM implements obfuscation passes using the following techniques:
Instruction substitution (-mllvm -sub)
Bogus control flow (-mllvm -bcf)
Control flow flattening (-mllvm -fla)
To use Obfuscator-LLVM within Xcode you must first create an Xcode plugin to reference the new compiler. For instructions on how to perform this and build the project, you should refer to the O-LLVM wiki (https://github.com/obfuscator-llvm/obfuscator/wiki/Installation
).
Unfortunately, while Obfuscator-LLVM is an extremely useful obfuscator, it lacks the functionality to obfuscate class and method names. However, an alternative solution can work in harmony with Obfuscator-LLVM, and together they can make a relatively formidable obfuscator: iOS Class Guard, an extension to the popular class-dump tool, parses your binary to generate an obfuscated symbol table that you can use in future builds. For details on how to implement iOS Class Guard in your application, you should refer to the wiki (https://github.com/Polidea/ios-class-guard
).
Securing an iOS application can be a relatively daunting task even for seasoned developers due to the large number of considerations and possible attack surfaces. Within this chapter you have learned how to secure your application data not only at rest but also in transit, as well as securely erase it when it is no longer in use.
Furthermore, you learned how to implement a variety of binary protections that can be used not only to decrease the pool of adversaries capable of attacking your application, but also to increase the amount of time needed to attack it. No silver bullet exists for securing an application, but with sufficient effort, building a self-defending application that cannot be easily tampered with is possible. You should also be aware that when securing an application using binary protections, you are not fixing any vulnerabilities that your application might have. Indeed, particular care should be given to ensure that these protections do not mask any issues that might have been identified without them.