Issues with Data Quantities

As discussed in Under the Hood, Core Data uses transaction logs to keep multiple persistent stores in sync with each other. Because of that design, and because of network latency, there is an upper limit to how frequently we can create entities and save them in a Core Data application that syncs with iCloud. The exact numbers are difficult to determine, but it’s safe to say that if we’re generating hundreds of entities per save, we may run into a performance problem.

Whenever we create an NSManagedObject and save the NSManagedObjectContext or the UIManagedDocument, a transaction log is created for that save. The more entities we create, the larger those transaction logs become. There is a threshold beyond which the creation and transmission of transaction logs can no longer keep up with the rate at which entities are generated. Once that threshold is crossed, iCloud syncing falls behind the data generation and eventually fails. That failure usually surfaces as a crash in your application.
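Because each save produces a transaction log, one partial mitigation is to batch changes so that many new objects share a single save (and therefore a single log). The sketch below illustrates that coalescing idea in plain Swift; the `SaveCoalescer` type is hypothetical, and the closure merely stands in for a real `context.save()` call.

```swift
// Coalesces many small changes into fewer saves, so each save
// (and therefore each transaction log) covers a batch of objects.
final class SaveCoalescer {
    private let batchSize: Int
    private var pendingChanges = 0
    private(set) var saveCount = 0
    private let save: () -> Void   // stands in for context.save()

    init(batchSize: Int, save: @escaping () -> Void) {
        self.batchSize = batchSize
        self.save = save
    }

    // Call once per created or modified object.
    func recordChange() {
        pendingChanges += 1
        if pendingChanges >= batchSize {
            flush()
        }
    }

    // Force a save of whatever is pending (e.g., when the app backgrounds).
    func flush() {
        guard pendingChanges > 0 else { return }
        save()
        saveCount += 1
        pendingChanges = 0
    }
}
```

With a batch size of 25, creating 100 objects triggers 4 saves instead of 100, which means 4 transaction logs instead of 100. Batching does not remove the threshold described above; it only raises the number of entities needed to reach it.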

Unfortunately, there’s no magic number of entities to keep under. The threshold is a combination of processor speed, the number of entities, the size of those entities, and network speed. The slower the processor or network, the fewer entities are needed to reach it. As an example, using an iPhone 4S on a fast Wi-Fi connection, it was possible to reach this threshold by generating one minimally sized entity per second. With larger entities or a poorer network, the threshold can be reached with fewer entities still.

At this time, the only known workaround is to decrease the amount of data being pushed to iCloud, either by generating less data in the first place or by “rolling up” many small entities into fewer, larger ones. Ideally, this issue will be resolved at some point soon.
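Rolling up means replacing many fine-grained records with a small number of aggregates before they are saved and synced. A minimal sketch of that idea, using plain Swift value types in place of managed objects; the `Reading` and `DailySummary` names are illustrative, not part of any Core Data model:

```swift
// A fine-grained sample, e.g. one sensor reading per second.
struct Reading {
    let day: String     // bucket key, e.g. "2013-07-04"
    let value: Double
}

// The rolled-up record that would actually be persisted and synced.
struct DailySummary {
    let day: String
    let count: Int
    let total: Double
}

// Collapse many readings into one summary per day, so far fewer
// entities (and far smaller transaction logs) are pushed to iCloud.
func rollUp(_ readings: [Reading]) -> [DailySummary] {
    var buckets: [String: (count: Int, total: Double)] = [:]
    for reading in readings {
        var bucket = buckets[reading.day] ?? (0, 0)
        bucket.count += 1
        bucket.total += reading.value
        buckets[reading.day] = bucket
    }
    return buckets
        .map { DailySummary(day: $0.key, count: $0.value.count, total: $0.value.total) }
        .sorted { $0.day < $1.day }
}
```

A day’s worth of per-second readings (86,400 entities) collapses to a single summary entity, at the cost of losing the individual samples, so roll-ups suit data you only need in aggregate.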