Factory: The Industrial Devolution on iPhone: a cruel joke

The “version” of Factory: The Industrial Devolution that can currently be found on the iOS App Store (no link; version 1.0, from Samuel Evans) bears the same name and graphics as the classic, beloved Mac game, but it is actually nothing but a cruel joke played on those of us who would be ecstatic at the idea of playing a proper port of Factory on the iPhone, given how ideally suited its gameplay would be to a touchscreen.

That iPhone app is a cruel joke because, besides being very user-unfriendly (“pause? what pause?”), featuring fundamentally game-altering gameplay changes, having no sound, and suffering a number of other limitations, its main characteristic is to simply crash at the end of the very first level (provided I manage to put at least one correctly made product in the truck), with no way to go further. Every time (which makes me remark that the App Store reviewer must have been asleep at the wheel, in more ways than one). On both the iPhone 5S and the iPad 2, running iOS 7.1, which represented the two extremes of available iOS devices at the time this app was released (June 2014).

There are in fact hints that the engine of this app has nothing to do with that of the original game (for instance, look at the objects while they enter a routing module: occlusion is not the same1); maybe it is just a quick and dirty post to the iOS App Store of this code, I can’t tell for sure. At any rate, this gives me sincere hope that Patrick Calahan, the author of Factory on the Mac, is not associated with this mockery of his game. Unfortunately, I do not know of any current way to contact him, so I am sending this out there in the hope that one of my readers does know, or will in turn forward this message so that it is seen by someone who does know, so that Patrick Calahan can be properly notified of the existence of that thing.

In the meantime, avoid that app; it’s not worth the bandwidth necessary for its download.

The person responsible for that iPhone app is being notified, and I am ready to publish any response, at most as long as this post, that he cares to send my way.


  1. Of course I can run the original version to compare with, do you think I keep this machine running solely for MacPaint?

I, for one, welcome our new, more inclusive Apple

In case you have not been following closely, at this year’s WWDC Apple introduced a number of technologies that reverse many long-standing policies on what iOS apps were, or to be more accurate, were not allowed to do: technologies such as app extensions, third-party keyboards, Touch ID APIs, manual camera controls, CloudKit, or simply the ability to sell apps in bundles on the iOS App Store. I would be remiss if I did not mention a few of my pet peeves that apparently remain unaddressed: searching inside third-party apps from the iOS Springboard, real support for trials on the iOS App Store and the Mac App Store (more on that in a later post), any way to distribute iOS apps outside the iOS App Store, or the fact that many of the changes in Mac OS X Yosemite are either better integration with iOS, or Lion and Mountain Lion-style “iOS-ification”, both of which would be better served by transitioning the Mac to iOS; etc.

But in the end, the attitude change from Apple matters more than the specifics of what will come in iOS 8. And it was (as Matt Drance wrote) not just the announcements themselves: for instance, with the video shown at the start of the keynote, in which iPhone and iPad users praise apps and the developers who made them, Apple wants us to know that they care for us developers and want us to succeed, which is a welcome change from the lack of visible consideration developers have been treated with so far (with the limitation that this video frames the situation as developers directly providing their wares to users: don’t expect any change to how Apple sees middleware suppliers).

So I welcome this attitude change from Apple, and like Matt Drance, I am glad it seems to be coming from a place of confidence rather than concession (indeed, while the Google Play Store is much more inclusive1, the limited willingness of Android users to pay for apps means Apple probably does not feel much pressure in this area), which means it is likely only the first step: what we did not get at this WWDC, we can always hope to get in iOS 9, and at least the situation is evolving in the right direction. I do not know where this change of heart comes from, and I do not think any obvious event triggered it; I am just thankful that the Powers That Be at Apple decided to be pragmatic and cling less tightly to principles that, while potentially justified five years ago, were these days holding back the platform.

A caveat, though, is that I see one case where a new iOS 8 functionality, rather than giving me hope for the future, will actually hamper future improvements: iCloud Drive. While that feature may appear to address one of my longstanding pet peeves, anyone who thinks we were clamoring for merely a return to the traditional files-and-folders organization hasn’t really read what I or others have written on the matter; but this is exactly what iCloud Drive proposes (even if only documents are present in there, and even for just the files shared between different iOS apps, we expected better than that). Besides not improving on the current desktop status quo, the issue is that shipping it as such will create compatibility constraints (both from a user interface and an API standpoint) that will make it hard for Apple to improve on it in the future, whereas Apple could have taken advantage of its experience, and of the hindsight coming from having been without that feature for all this time, to propose a better fundamental organization paradigm.

For instance, off the top of my head I can think of two ways to improve the experience of working on the same document from different apps:

  • Instead of (or on top of) “open in…”, have “also open in…”, which would also work by selecting an app among the ones supporting that document type. After that command, the document would appear in a specific section of the document picker of the first app, a section which would be marked with the icon of the second app: in other words, this section would contain all documents shared between the first and second app. The same would go for the second app: the shared document would appear in a section marked with the icon of the first app. That way some sort of intuitive organization would be provided automatically. A document shared between more than two apps could appear in two sections at the same time, or could be put in the area where documents are available to all apps.
  • Introduce see-through folders. A paradox of hierarchical filing is that, as you start creating folders to organize your documents so as to more easily find something in the future, you may make documents harder to locate because they become “hidden” behind a folder. With see-through folders, any folder you create would start as just a roundrect drawn around the documents it contains (say up to 4 contained documents), with the documents still visible in their full size from the top-level view, except there would be this roundrect around them. Then, as the folder comes to contain more and more documents, these documents would appear smaller and smaller from the top-level view, so in practice you would have to “focus” on the folder by tapping on its name, so as to list only the documents contained in that folder, in full size, in order to select one. With more than one level of folders, this would allow quickly scanning subfolders that contain only a few documents, since these documents would appear at full size when browsing the parent of these subfolders: a document could either quickly be found in there, or else we would know it is in the “big” subfolder.

There are of course many other ways this could be improved, such as document tagging, or other metadata-based innovations. There are so many ways hierarchical document storage could be improved that Apple announcing they would merely go with pretty much the status quo for multi-app document collaboration tells me that, in all likelihood, no one who matters at Apple really cares about document management, which I find sad: even if not all such concocted improvements are actually viable, there are bound to be some that are, and that they could have used instead.

(As for Swift, it is a subject with a very different scope that is deserving of its own post.)

But overall, these new developments seen at WWDC 2014 make me optimistic for the future of the Apple platforms and Apple in general. Even if it is not necessarily everything we wanted, change always starts with first steps like these.


  1. “Open” implies a binary situation, where a platform would be either “open” or “closed”; but situations are clearly more nuanced, with a whole continuum of “openness” between different cases such as game consoles, the iOS platform, the Android platform, or Windows. So I refer to platforms as being “more inclusive” or “less inclusive”, which allows for a range of, well, inclusiveness, rather than use “open” and the absolutes it implies.

Vote

This post, I’m afraid, will only be of interest to my European Union readers. If you are not an E.U. citizen, sorry, and thanks for visiting.

Fellow Europeans, it is very important for you to go and vote in the coming European elections. Indeed, the European Parliament in Strasbourg is the only democratically elected European institution, and this is a singular opportunity to make our voice and our vote heard.

  • It is important because, following the 2008 financial crisis, the Union had to take a more important role in order to help the countries most affected by the crisis, and this revealed the need for a more transparent and more democratic E.U. decision process.
  • It is important because we now realize that a common market will end up implying common food and product safety services, among others: when the next food safety crisis occurs, it will make no sense to have state-specific reactions for products that circulate in the whole of the Union.
  • It is important because subjects that matter a lot in technology, such as privacy, net neutrality, or competition regulation, are by necessity increasingly decided at the E.U. level, when they aren’t already.
  • It is important because, in contrast with the common market, we still have made little to no progress on common social protections and labor laws, and the contrast between these two situations is more and more noticeable, hampering development of a common personal services market.
  • It is important so that, as the geopolitical situation evolves, we can have a credible E.U. foreign affairs department that can claim to represent the European people.

But most importantly, it is important for as many of you as possible to cast your vote, in order to make the parliament’s legitimacy indisputable, so as to give it real decision power over E.U. affairs, rather than have our fates decided by less democratic, parochially charged negotiations between heads of state. So for these reasons, and many others, go vote for your European Parliament representatives in the next few days1.

And be wary of those sovereignty parties who promise you a better tomorrow but in fact offer, at best, unconstructive criticism of the unification process, and do not detail how they could possibly manage (depending on the situation) getting out of the Euro, out of the Schengen zone, or out of any other E.U. commitment; their cure could very well be worse than any disease.


  1. In fact, while I vote this Sunday, May 25th, 2014 in France, I learned today that some countries, including the United Kingdom, are already voting, which is another thing we may need to agree on: how can we have a pan-European election campaign if not every country votes at the same time?

Porting to the NEON intrinsics from experience

Hey you. Yes, you. Did you, inspired by my introduction to NEON on iPhone, write ARM NEON code, or are you maintaining ARM NEON code in some way? Is this NEON code written as ARM32 assembly? If you answered yes to both questions, then I hope you realize that any app that has your NEON code as a dependency is currently unable to take advantage of ARM64 on supported hardware (now there may or may not be any real benefit for the app in doing so, but that is beside the point). ARM64 is, at the very least, the future, so you will have to do something about that code so that it can run in ARM64 mode; but porting it to ARM64 assembly is not going to be straightforward, as the structure of the NEON register file has changed in ARM64 mode. Rather, I propose here porting your NEON ARM32 assembly algorithms to NEON intrinsics, which compile to both ARM32 and ARM64, and I present the outcome of my experience doing such a port, so that you can learn from it.

An introduction to the ARM NEON intrinsic support

The good thing about ARM NEON intrinsics is that they apply equally well in ARM32 and ARM64 mode; in fact, you don’t have to follow any specific rule to support both with the same intrinsics source file: correct NEON intrinsics code that works on ARM32 will also work on ARM64 for free. At the most fundamental level, NEON intrinsics code is simply a C source file that includes <arm_neon.h> and uses a number of specific functions and types. The documentation for the ARM NEON intrinsics can be found here, on the ARM Information Center. This documentation ostensibly covers ARM DS-5, but for iOS clang implements the same support; if you target other platforms in addition to or instead of iOS, you will have to check your toolchain’s compiler documentation, but if it supports ARM NEON intrinsics at all it ought to have the same support as ARM DS-5.
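To fix ideas, here is a minimal sketch of what such a source file can look like (the function and variable names are mine, for illustration; see the note further down on loads and stores by pointer cast):

#include <arm_neon.h>

// Add two arrays of eight uint16 elements each; the exact same source
// compiles for ARM32 and for ARM64.
void AddEight(uint16_t *out, const uint16_t *a, const uint16_t *b)
{
    uint16x8_t a_vec = *(const uint16x8_t *)a;    // plain vector load by pointer cast
    uint16x8_t b_vec = *(const uint16x8_t *)b;
    *(uint16x8_t *)out = vaddq_u16(a_vec, b_vec); // vector add, then vector store
}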

Unfortunately, the ARM documentation pretty much only documents the intrinsic function names and the types: for documentation on the operations these functions perform, it is still necessary to refer to the NEON instruction descriptions in the ARM instruction set document (don’t worry about the “registered ARM customers” part, you only need to create an account and agree to a license in order to download the PDF). Furthermore, most material online (including my introduction to NEON on iPhone, if you need to get up to speed with NEON) discusses NEON in terms of the instruction names rather than in terms of the C intrinsics, so it is a good idea to get used to locating the intrinsic function that corresponds to a given instruction. The most straightforward way is to open arm_neon.h in Xcode (add it as an include, compile once to refresh the index, then open it as one of this file’s includes in the “Related Files” menu) and do a search for the instruction name: this will turn up the different intrinsic function variants that implement the instruction’s functionality, as the intrinsic function name is based on the instruction name. There is a tricky situation, however, as for some instructions there is no matching intrinsic; these cases are documented here, along with what you should do to get the equivalent functionality.

The converse also exists, where some intrinsics provide a functionality not provided by a particular instruction, or where the name does not match any instruction, such as vcreate_tn (which reinterprets a 64-bit scalar as a D vector), vcombine_tnn (which assembles two D vectors into a Q vector), and vget_low_tnn/vget_high_tnn (which extract the low and high D halves of a Q vector).

In particular, the last two are what you will use to replace the parts of your ARM32 NEON algorithm where you would put results in, say, d6 and d7, and then have the next operation use q3 (or conversely), q3 being aliased to these two D registers. Indeed, it is important to realize (in particular if you are coming from NEON assembly coding) that these intrinsics work functionally, rather than procedurally over a register file; notably, the input variables are never modified. So stop worrying about placement and just write your NEON intrinsics code in functional fashion: factor_vec = vrsqrteq_f32(vmlaq_f32(vmulq_f32(x_vec, x_vec), y_vec, y_vec)); (assuming the initial reciprocal square root estimate is enough for your purposes). Things should come naturally once you integrate this way of thinking.
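To make this concrete, here is a sketch of that replacement (names mine):

#include <arm_neon.h>

// What ARM32 assembly would express as two vmovn results placed in d6 and
// d7, followed by an operation on q3, becomes an explicit vcombine.
uint16x8_t NarrowAndPack(uint32x4_t first_vec, uint32x4_t second_vec)
{
    return vcombine_u16(vmovn_u32(first_vec), vmovn_u32(second_vec));
}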

Variables should be reserved for results that you want to use more than once. Those need to be typed correctly, as the whole system is typed, with such fun variable type names as uint8x16_t; this explains the various vcombine_tnn variants, from vcombine_s8 to vcombine_p16, which in fact all come down to the same thing: the sole purpose of the variants is to preserve the correct element typing between the inputs and the output. I personally welcome the discipline: even if you think you know what you are doing, it’s so easy to get it subtly wrong in the middle of your algorithm, and you are left wondering at the end where you wrongly took a left turn (it was at Albuquerque. It is always at Albuquerque).

Less pleasant to use are the types that represent an array of vectors, of the form uint8x16x4_t for instance. Some intrinsics rely on these types, such as the transposition ones, but also the deinterleaving loads and stores vld#/vst# (I presented them in my introduction to NEON on iPhone), which are just as indispensable when using intrinsics as they are when programming in assembly. So when using these intrinsics you have to contend with variables that represent multiple vectors at once (and that you of course cannot directly use as the input of another intrinsic); fortunately, taking the individual vectors out of them (for further calculations) is done using normal C array subscripting: coords_vec_arr.val[1]. But this makes expressions less straightforward and elegant than they could otherwise have been.
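For instance, a sketch of deinterleaved RGBA processing (names mine):

#include <arm_neon.h>

// vld4q_u8 deinterleaves 64 RGBA bytes into a uint8x16x4_t holding one
// vector per channel; the individual vectors are reached through .val[].
void InvertRedChannel(uint8_t *rgba_pixels)
{
    uint8x16x4_t channels = vld4q_u8(rgba_pixels);
    channels.val[0] = vmvnq_u8(channels.val[0]); // bitwise NOT of the red plane
    vst4q_u8(rgba_pixels, channels);             // reinterleave and store back
}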

Note that loading and storing vectors to memory without deinterleaving is not performed with an intrinsic, but simply by casting the pointer (typically one to your input/output element arrays) to a pointer to the correct vector type, and dereferencing that; this will result in the correct vector loads and stores being generated.

In practice

I am not going to share the code I ported or the actual benchmark results, but I can share the experience of porting a non-trivial NEON algorithm from ARM32 assembly to NEON intrinsics.

First, if the assembly code is competently commented (in particular with a clear register map), porting it is just a matter of following the algorithm’s main sequence, and is rather straightforward: translate the instructions one by one, with the addition of the occasional vcombine when two D vectors become a Q vector. Your activity will mostly consist in finding the correct name of the intrinsic function for the given input element type, and finding variable names for these previously unnamed intermediate results (again, for intermediate results which are only used once, save yourself the trouble of defining a variable and directly use the intrinsic output as the input of the next intrinsic). This was completed quickly.

But this is only the start. The next order of business is running the original algorithm and the new one on test inputs, and comparing the results. For integer-only algorithms such as the one I ported, the results must match bit for bit between the original algorithm, the new one compiled as ARM32, and the new one compiled as ARM64; in my case they did. Algorithms that involve floating-point calculations might not match bit for bit because of the different rounding control in ARM64, so compare within a tolerance that is appropriate for your purposes.
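A trivial sketch of such a check (names mine; use a tolerance of 0 for integer-only algorithms):

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

// Compare the reference output with the ported algorithm's output,
// element by element, within a given tolerance.
bool ZPResultsMatch(const float *reference, const float *ported,
                    size_t count, float tolerance)
{
    for (size_t i = 0; i < count; i++)
    {
        if (fabsf(reference[i] - ported[i]) > tolerance)
            return false;
    }
    return true;
}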

Once this check is done, you might wish to take a look at the assembly code generated from your intrinsics. In my case, I discovered that the ARM32 compiled version needed more vector storage than there are available registers, and as a result was performing various extra vector loads and stores from memory at various points in the algorithm. The reason for this is that the automatic register allocation clang performed (at least in this case) just could not compare with my elaborate work in the original ARM32 NEON assembly code to tightly squeeze the necessary work data into never more than 12 Q vectors at any given time (even avoiding the use of q4-q7, saving the trouble of having to preserve and restore them). Also, it appears that, with clang, the intrinsics that use a scalar as one input do not actually generate the scalar-using instruction, but instead require the scalar to be splat on a vector register, harming register usage.

I have not been able to improve the situation by changing the way the intrinsic code was written; it seems it is the compiler which is going to have to improve. However, the ARM64 compiled version had no need for temporary storage beyond the NEON registers: twice as many vector registers are available in this mode, easing the pressure on the compiler register allocator.

But in the end what really matters is the actual performance of the code, so even if you take a look at the compiled code, it is only by benchmarking (again, comparing the original algorithm, the new version compiled as ARM32, and the new version compiled as ARM64) that you can reasonably decide which improvements are necessary. Don’t skimp on that part, you could be surprised. In my case, it turned out that the “inefficient”, ARM32-compiled version of the ported algorithm performed just as well as the original NEON ARM32 assembly. The probable reason is that my algorithm (and likely yours too) is in fact memory-bandwidth constrained: taking more time to perform the computations does not really matter when you then have to wait for the memory transfers to or from the level 3 cache or main memory to complete anyway.

As a result, in my case I could just replace the original algorithm with the new one without any performance regression. But that might not always be the case; if replacing it would result in a performance regression, one course of action would be to keep using the original NEON assembly version in ARM32 mode, and use the new intrinsics-based algorithm only in ARM64 mode, with conditional compilation selecting which code is used in each mode (I have a preprocessor macro defined for this purpose in the Xcode build settings, whose value depends on an architecture-dependent build setting). Fortunately, given the number of NEON registers available in ARM64, you should never see a performance regression on ARM64-capable hardware between the original ARM32 NEON assembly algorithm and the new one compiled as ARM64.
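Such a selection could look like this (the macro name is a stand-in for the one I define in the Xcode build settings):

#include <stddef.h>
#include <stdint.h>

void MyAlgorithmIntrinsics(const uint8_t *in, uint8_t *out, size_t count); // the intrinsics port
void MyAlgorithmARM32Asm(const uint8_t *in, uint8_t *out, size_t count);   // the original assembly

void MyAlgorithm(const uint8_t *in, uint8_t *out, size_t count)
{
    // __arm64__ is defined by clang when building the ARM64 slice;
    // MYPROJECT_USE_NEON_INTRINSICS stands for an architecture-dependent
    // macro defined in the build settings.
#if defined(__arm64__) || MYPROJECT_USE_NEON_INTRINSICS
    MyAlgorithmIntrinsics(in, out, count);
#else
    MyAlgorithmARM32Asm(in, out, count);
#endif
}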

It worked

So your mileage may vary, certainly. But in my experience, porting a NEON algorithm from ARM32 assembly to C intrinsics gave an adequate result, and was a quick and straightforward process, while writing an ARM64 assembly version would have been much more time-consuming and would have required maintaining both versions in the future. And remember, no app that depends on your NEON algorithms can ship as a 64-bit capable app as long as you only have an ARM32 assembly version of these algorithms; if they haven’t been ported already, by now you’d better get started.

By the way, I should mention that today I also updated Introduction to NEON on iPhone and A few things iOS developers ought to know about the ARM architecture to take into account ARM64 and the changes it introduces; please have a look.

Besides fused multiply-add, what is the point of ARMv7s?

This post is part of a series on why an iOS developer would want to add a new architecture target to his iOS app project. The posts in this series so far cover ARMv7, ARMv7s (this one), ARM64 (soon!).

You probably remember the kerfuffle when, at the same time the iPhone 5 was announced (it was not even shipping yet), Apple added ARMv7s as a default architecture in Xcode without warning. But just what is it that ARMv7s brings, and why would you want to take advantage of it?

One thing that ARMv7s definitely brings is support for VFPv4 and the VFPv3 half-precision extension, which consist of the following: fused floating-point multiply-add, and half-precision floating-point values (only for converting to and from single precision; no other operation supports the half-precision format), as well as the vector versions of these operations. Both of these have potential applications, even if they are not universally useful, and therefore it was indispensable for Apple to define an ARM architecture version so that apps could make use of them in practice if they desired: had Apple not defined ARMv7s, even though the iPhone 5 hardware would have been able to run these instructions, no one could have used them in practice, as there would have been no way to safely include them in an iOS app (that is, in a way that does not cause the app to crash when run on earlier devices).
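For illustration, here is what explicitly using the vector fused multiply-add can look like (a sketch; vfmaq_f32 is only available when the target supports VFPv4, so ARMv7s or ARM64):

#include <arm_neon.h>

// acc + (a * b) computed with a single rounding at the end: this is the
// fused multiply-add brought by VFPv4; as noted below, the compiler will
// not contract separate multiplies and adds into it on its own.
float32x4_t FusedMultiplyAdd(float32x4_t acc, float32x4_t a, float32x4_t b)
{
    return vfmaq_f32(acc, a, b);
}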

So we have determined that it was necessary for Apple to define ARMv7s, or this new functionality of the iPhone 5 processor would have been added for nothing, got it. But what if you are not taking advantage of these new floating-point instructions? It is important to realize that you are not taking advantage of them unless you know full well that you do: indeed, the compiler will never generate these instructions, so the only way to benefit from this functionality is if your project includes algorithms that were specifically developed to take advantage of these instructions. And if that is not actually the case, then as far as I can tell, using ARMv7s… is simply pointless. That is, there is no tangible benefit.

First, let us remember that adding an ARMv7s slice will almost double the executable binary size compared to shipping a binary with only ARMv7, which may or may not be a significant cost depending on whether other data (art assets, other resources) already dominates the size of the app, but it remains something to pay attention to. So already the decision to include an ARMv7s slice starts in the red.

Go forth and divide

So let us see what other benefits we can find. The other major improvement of ARMv7s is integer division in hardware. So let us try and see how much it improves things.

int ZPDivisions(void* context)
{
    uint32_t i, accum = 0;
    uint32_t iterations = *((uint32_t*)context);
    
    for (i = 0; i < 4*iterations; i+=1)
    {
        accum += (4*iterations)/(i+1);
    }
    
    return accum;
}
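
To time these functions, I use a small harness along these lines (a sketch, using mach_absolute_time; the names are mine):

#include <mach/mach_time.h>
#include <stdint.h>

// Time one run of a test function and return the duration in milliseconds.
double ZPTimeTest(int (*test)(void*), void* context)
{
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    uint64_t start = mach_absolute_time();
    test(context);
    uint64_t elapsed = mach_absolute_time() - start;

    return (double)elapsed * timebase.numer / timebase.denom / 1.0e6;
}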

OK, let us measure how much time it takes to execute (iterations equals 1000000, running on an iPhone 5S, averaged over three runs):

|           | ARMv7     | ARMv7s    |
|-----------|-----------|-----------|
| divisions | 24.951 ms | 25.028 ms |

…No difference. That can’t be?! The ARMv7 version includes this call to __udivsi3, which should be slower, let us see in the debugger what happens when this is called:

libcompiler_rt.dylib`__udivsi3:
0x3b767740:  tst    r1, r1
0x3b767744:  beq    0x3b767750                ; __udivsi3 + 16
0x3b767748:  udiv   r0, r0, r1
0x3b76774c:  bx     lr
0x3b767750:  mov    r0, #0
0x3b767754:  bx     lr

D’oh! When run on an ARMv7s device, this runtime function simply uses the hardware instruction to perform the division. Indeed, on other platforms such a function may be provided in a library statically linked into your app, whose code is then frozen. But that is not how Apple rolls. On iOS, even such a seemingly trivial functionality is actually provided by the OS: the call to __udivsi3 is a dynamic library call which is resolved at runtime, and it uses the most efficient implementation for the hardware, even if your binary only has ARMv7. In other words, on devices which would run your ARMv7s slice, you already benefit from the hardware integer division even without providing an ARMv7s slice.

Take 2

But wait, surely this dynamic library function call has some overhead compared to directly using the instruction; could we not reveal it by improving the test? We need to go deeper. Let’s find out by performing four divisions during each loop iteration, which will reduce the looping overhead:

int ZPUnrolledDivisions(void* context)
{
    uint32_t i, accum = 0;
    uint32_t iterations = *((uint32_t*)context);
    
    for (i = 0; i < 4*iterations; i+=4)
    {
        accum += (4*iterations)/(i+1) + (4*iterations)/(i+2) + (4*iterations)/(i+3) + (4*iterations)/(i+4);
    }
    
    return accum;
}

And the results are… drum roll… (iterations equals 1000000, running on an iPhone 5S, averaged over three runs):

|                    | ARMv7     | ARMv7s    |
|--------------------|-----------|-----------|
| unrolled divisions | 24.832 ms | 24.930 ms |

“Come on now, you’re messing with me here, right?” Nope. There is actually a simple explanation for this: even in hardware, integer division is very expensive. A good rule of thumb for the respective costs of execution for the elementary mathematical operations on integers is this:

| operation (on 32-bit integers) | + | - | ×      | ÷   |
|--------------------------------|---|---|--------|-----|
| cost in cycles                 | 1 | 1 | 3 or 4 | 20+ |

This approximation remains valid across many different processors. And the amortized cost of a dynamic library function call is pretty low (only slightly more than a regular function call), so it is dwarfed by the execution time of the division instruction itself.

Take 3

I had one last idea of where we could actually look to observe a penalty when function calls are involved. We need to go even deeper: having these calls to __udivsi3 forces the compiler to put the input variables into the same hardware registers before each division, so the processor is not going to be able to run the divisions in parallel. So let us modify the code so that, in ARMv7s, the divisions could actually run in parallel:

int ZPParallelDivisions(void* context)
{
    uint32_t i, accum1 = 0, accum2 = 0, accum3 = 0, accum4 = 0;
    uint32_t iterations = *((uint32_t*)context);
    
    for (i = 0; i < 4*iterations; i+=4)
    {
        accum1 += (4*iterations)/(i+1);
        accum2 += (4*iterations)/(i+2);
        accum3 += (4*iterations)/(i+3);
        accum4 += (4*iterations)/(i+4);
    }
    
    return accum1 + accum2 + accum3 + accum4;
}

(iterations equals 1000000, running on an iPhone 5S, averaged over three runs):

|                    | ARMv7     | ARMv7s    |
|--------------------|-----------|-----------|
| parallel divisions | 25.353 ms | 24.977 ms |

…I give up (the difference has no statistical significance).

There might be other benefits to avoiding a function call for each integer division, such as the compiler not needing to consider the values stored in caller-saved registers as being lost across the call, but honestly I do not see these effects as having any measurable impact on real-world code.

If you want to reproduce my results, you can get the source for these tests on Bitbucket.

What else?

We have already looked pretty far in trying to find benefits in directly using the new integer division instruction, what if we set that aside for now and try and see what else ARMv7s brings? Technically, nothing else: ARMv7s brings VFPv4 and VFPv3-HP, their vector counterparts, integer division in hardware, and that’s it for unprivileged instructions as far as anyone can tell.

However, when compiling an ARMv7s slice, Clang will apparently take advantage of it to optimize the code specifically for the Swift core (according to these patches, via Stack Overflow). These optimizations are of the tuning variety, so do not expect that much from them; but the main limitation is that not that many iOS devices run on Swift, in the grand scheme of things. If you check the awesome iOS Support Matrix (ARMv7s column), you will see for instance that no iPod touch model runs it, and that the iPad mini skipped it entirely (going directly from ARMv7 to ARM64). So is it worth optimizing specifically for the 4th generation iPad, the iPhone 5, and the iPhone 5C? Maybe not.

What compiling for ARMv7s won’t bring you

And now it’s time for our regular segment, “let us dispel some misconceptions about what a new ARM architecture version really brings”. Adding an ARMv7s slice will not:

  • make your code run more efficiently on ARMv7 devices, since those will still be running the ARMv7 compiled code; this means it could potentially improve your code only on devices where your app already runs faster.
  • improve the performance of the Apple frameworks and libraries: those are already optimized for the device they are running on, even if your code is compiled only for ARMv7 (we saw this effect earlier with __udivsi3).
  • change the few cases where ARMv7s devices run code less efficiently than ARMv7 ones: this will happen on those devices even if you only compile for ARMv7, so adding (or replacing by) an ARMv7s slice will not help or hurt this in any way.
  • make third-party dependencies faster: if a library provides only an ARMv7 slice (you can check with otool -vf <library name>), the code of that dependency won’t become more efficient just because you compile for ARMv7s (if it does provide an ARMv7s slice, compiling for ARMv7s will allow you to use it, maybe making it more efficient).

I need you

Seems clear-cut, right? Not so fast. Sure, we explored some places where we thought direct use of hardware integer division could have improved things, but maybe there are actual improvements in places I did not explore, places with a more complex mix of integer division and other operations. Maybe tuning for Swift does improve things for Cyclone too (which is represented by the ARM64-column devices in iOS Support Matrix), and maybe it could be worth it. Maybe I am wrong and Clang can take advantage of fused multiply-add without you needing to do a thing about it. Maybe I completely missed some other instructions that ARMv7s brings.

And most of all, I have not actually run any real benchmark here, for one good reason: I have little idea of the kind of algorithms iOS apps spend significant CPU time on (outside of the frameworks), so I do not know what kind of benchmark to run in the first place (as for Geekbench, I do not think it really represents tasks commonly done in iOS apps, and in addition I am wary of CPU benchmarks whose source code I cannot see). A good benchmark would keep us from missing the forest for the trees, in case that is what is happening here.

So I need you. I need you to run your app with, and without, an ARMv7s slice on a Swift device (as well as a Cyclone device, if you are so inclined), and report the outcome (such as increased performance or decreased processor usage; the latter is important for battery life). Failing that, I need you to tell me the improvements you remember seeing on Swift devices when you added an ARMv7s slice, or the conclusions of the evaluation of whether to add an ARMv7s slice, or at least what you can share of them. I need you to tell me if I missed something.

And that is why I am exceptionally going to allow comments on this post. In fact, they should appear immediately without having to go through moderation, in order to facilitate the conversation. But first, be wary of Akismet: if your comment is flagged as spam, try and rework it a bit and post again. Second, comments with nothing to do with the matter at hand will be subject to instant vaporization.

So have at it:

What benefits does the iPhone 5S get from being 64-bit?

Every fall, a new iPhone. The schedule of Apple’s mobile hardware releases has become pretty predictable by now, but they more than make up for this timing predictability with the sheer unpredictability of what they introduce each year, and this fall they outdid themselves. Between Touch ID and the M7 coprocessor, the iPhone 5S had plenty of surprises, but what most intrigued and excited many people in the development community was its new, 64-bit processor. Many have openly wondered what the point of this feature was, exactly, including some of the same people excited by it; there is no contradiction in that. So I set out to collect answers, and here is what I found.

Before we begin, I strongly recommend you read Friday Q&A 2013-09-27: ARM64 and You, then get The ARMv8-A Reference Manual (don’t worry about the “registered ARM customers” part, you only need to create an account and agree to a license in order to download the PDF): have it on hand to refer to whenever I mention an instruction or architectural feature.

Some Context

iOS devices have always been based on ARM architecture processors. So far, ARM processors have been strictly 32-bit machines: 32-bit general registers, addresses, most calculations, etc. But in 2011, ARM Holdings announced ARMv8, the first version of the ARM architecture to allow native 64-bit processing, and in particular 64-bit addresses in programs. It was clearly, and by their own admission, done quite ahead of the time when it would actually be needed, so that the whole ecosystem would have time to adopt it (board vendors, debug tools, ISVs, open-source projects, etc.); in fact, ARM Holdings did not announce at the time any processor implementation of their own for the new architecture (something they also do ahead of time), leaving to some of their partners, server processor ones in particular, the honor of releasing the first ARMv8 processor implementations. I’m not sure any device using an ARMv8 processor design from ARM Holdings has even shipped yet.

All that to say that while many people knew about 64-bit ARM for some time, almost no one expected Apple to release an ARMv8-based consumer device so far ahead of when everyone thought such a thing would actually be needed; Apple was not merely first to market with an ARMv8 handheld, but lapped every other handset maker and their processor suppliers in that regard. This naturally raises the question of what exactly Apple gets from having a 64-bit ARM processor in the iPhone 5S, since after all none of its competitors or their suppliers saw a benefit that would justify rushing to ARMv8 as Apple did. And this is a legitimate question, so let us see what benefits we can identify.

First Candidate: a Larger Address Space

Let us start with the obvious: the ability for a single program to address more than 4 GB of address space. Or rather, let us start by killing the notion that this is only about devices with more than 4 GB of physical RAM. You do not, absolutely not, need a 64-bit processor to build a device with more than 4 GB of RAM; for instance, the Cortex-A15, which implements ARMv7-A with a few extensions, is capable of using up to 1 TB of physical RAM, even though it is a decidedly 32-bit processor. What is true is that, with a 32-bit processor, no single program is able to use more than 4 GB of that RAM at once; so while such an arrangement is very useful for server or desktop multitasking scenarios, its benefits are more limited on a mobile device, where you don’t typically expect background programs to keep using a lot of memory. So handsets and tablets will likely need to go with a 64-bit processor when they start packing more than 4 GB of RAM, so that the frontmost program can actually make use of that RAM.

However, that does not mean the benefits of a large virtual address space are limited to that situation. Indeed, using virtual memory, an iOS program can very well use large datasets that do not actually fit in RAM, by using mmap(2) to map the files containing this data into virtual memory and letting the virtual memory subsystem handle the RAM as a cache. Currently, it is problematic on iOS to map files more than a few hundred megabytes in size, because the program address space is limited to 4 GB (and what is left is not necessarily in a big contiguous chunk you can map a single file in).
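For instance, a sketch of how a program would map such a dataset (error handling kept to a minimum):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map a (possibly multi-gigabyte) read-only dataset into virtual memory;
// the VM subsystem pages it in and out on demand.
const void *MapDataset(const char *path, size_t *out_size)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;

    struct stat info;
    if (fstat(fd, &info) < 0) { close(fd); return NULL; }

    void *base = mmap(NULL, (size_t)info.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping keeps its own reference to the file
    if (base == MAP_FAILED) return NULL;

    *out_size = (size_t)info.st_size;
    return base;
}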

That being said, in my opinion the usefulness of being able to map gigabyte-scale files on 64-bit iOS devices will for now be limited to a few niche applications: while these files won’t necessarily have to fit in RAM, they will have to fit on the device’s Flash storage in the first place, and with the largest capacity you can get on an iOS device at the time of this writing being 128 GB, and most people settling for less, you’d better not have too many such applications installed at once. Still, for those applications which need it, likely in vertical markets mostly, the sole feature of a larger address space makes 64-bit ARM a godsend.

One sometimes-heard benefit of Apple already pushing ordinary applications (those which don’t map big files) to adopt 64-bit ARM is that, the day Apple releases a device featuring more than 4 GB of RAM, these applications will benefit without needing to be updated. However, I don’t buy it. It is in the best interest of iOS applications not to spontaneously occupy too much memory, for instance so that they do not get killed first when they are in the background; so in order to take advantage of an iOS device with more RAM than you can shake a stick at, they would have to change behavior and be updated anyway, so…

Verdict: inconclusive for most apps, a godsend for some niche apps.

Second Candidate: the ARMv8 AArch64 A64 ARM64 Instruction Set

The ability of doing 64-bit processing on ARM comes as part of a new instruction set, called…

  • Well, it’s not called ARMv8: not only does ARMv8 also bring improvements to the ARM and Thumb instruction sets (more on that later), which for the occasion have been renamed A32 and T32, respectively; but ARMv9, whenever it comes out, will also feature 64-bit processing, so we can’t refer to 64-bit ARM as ARMv8.
  • AArch64 is the name of the execution mode of the processor in which you can perform 64-bit processing, and it’s a mouthful, so I won’t be using the term.
  • A64 is the name ARM Holdings gives to the new instruction set, a peer to A32 and T32; but this term does not make it clear we’re talking about ARM, so I won’t be using it either.
  • Apple in Xcode uses ARM64 to designate the new instruction set, so that’s the name we will be using.

The instruction set is quite a change from ARM/A32; for one, the instruction encodings are completely different, and a number of features, most notably pervasive conditional execution, have been dropped. On the other hand, you get access to 31 general purpose registers that are of course 64-bit wide (as opposed to the 14 general purpose registers which you get from ARM/A32, once you remove the PC and SP), some instructions can do more, you get access to much more 64-bit math, and what does remain supported has the same semantics as in ARM/A32 (for instance the classic NZCV flags are still here and behave the exact same way), so it’s not completely unfamiliar territory.

ARM64 could be the subject of an entire blog post, so let us stick to some highlights:

CRC32

Ah, let’s get that one out of the way first. You might have noticed this little guy in the ARMv8-A Reference Manual (page 449). Bad news, folks: support for this instruction is optional, and the iPhone 5S processor does not in fact support it (trust me, I tried). Maybe next time.

More Registers

ARM64 features 31 general-purpose registers, as opposed to 14, which helps the compiler avoid running out of registers and having to “spill” variables to memory, which can be a costly operation in the middle of a loop. If you remember, the expansion from 8 to 16 registers in the x86 64-bit transition on the Mac back in the day measurably improved performance; here, however, the impact will be smaller, as 14 registers were already enough for most tasks, and ARM already had a register-based parameter-passing convention in 32-bit mode. Your mileage will vary.

64-bit Math

While you could do some 64-bit operations in previous versions of the ARM architecture, this was limited and slow; in ARM64, 64-bit math is natively and efficiently supported, so programs using 64-bit math will definitely benefit from ARM64. But which programs are those? After all, most of the amounts a program ever needs to track do not go over a billion, and therefore fit in a 32-bit integer.

Besides some specialized tasks, two notable examples of tasks that do make use of 64-bit math are MP3 processing and most cryptography calculations. So those tasks will benefit from ARM64 on the iPhone 5S.

But on the other hand, all iPhones have always had dedicated MP3 processing hardware (which is relatively straightforward to use with Audio Queues and CoreAudio), which in particular is more power efficient to use than the main processor for this task, and ARMv8 also introduces dedicated AES and SHA accelerator instructions, which are theoretically available from ARM/A32 mode, so ARM64 was not necessary to improve those either.

But on the other other hand, there are other tasks that use 64-bit math and do not have dedicated accelerators. Moreover, standards evolve: some will be phased out and others will appear, and a perfect example is the recently announced SHA-3 standard, based on Keccak. Such new standards generally take time to make their way into dedicated accelerators, and obviously such accelerators cannot be introduced to devices released before the standardization. But software has no such limitation, and it just so happens that Keccak benefits from a 64-bit processor, for instance. 64-bit math matters for emerging and future standards which can benefit from it, even for those specialized enough to warrant their own dedicated hardware, as software will always be necessary to deploy them after the fact.
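To illustrate: Keccak’s permutation is built on XORs and rotations of 64-bit lanes, such as this one, which compiles to a single instruction on ARM64 but takes several operations on 32-bit ARM:

#include <stdint.h>

// Rotate a 64-bit lane left by n bits (the & 63 keeps the shift
// well-defined for n = 0).
static inline uint64_t RotateLeft64(uint64_t lane, unsigned n)
{
    return (lane << n) | (lane >> ((64 - n) & 63));
}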

NEON Improvements

ARMv8 also brings improvements to NEON, and while some of the new instructions are also theoretically available in 32-bit mode, such as VRINT, surprisingly some improvements are exclusive to ARM64: for instance, the ability to operate on double-precision floating-point data, and interesting new instructions to accumulate unsigned values to an accumulator which will saturate as a signed amount, and conversely (contrast with x86, where all vector extensions so far, including future ones like AVX-512, are equally available in 32-bit and 64-bit mode, even though for 32-bit mode this requires incredible contortions, given how saturated the 32-bit x86 instruction encoding map is). Moreover, in ARM64 the number of 128-bit vector registers increases from 16 to 32, which is much more useful than the similar increase in the number of general-purpose registers, as SIMD calculations typically involve many vectors. I will talk about this some more in a future update to my NEON post.

Pointer size doubling

It has to be mentioned: a tradeoff of ARM64 is that pointers are twice as large, taking more space in memory and the caches. Still, iOS programs are more media-heavy than pointer-heavy, so it shouldn’t be too bad (just make sure to monitor the effect when you will start building for ARM64).

Verdict: a nice win on NEON, native 64-bit math is a plus for some specialized tasks and the future, other factors are inconclusive: in my limited testing I have not observed performance changes from just switching (non-NEON, non-64 bit math) code from ARMv7 compilation to ARM64 compilation and running them on the same hardware (iPhone 5S).

Non-Candidate: Unrelated Processor Implementation Improvements

Speaking of which. Among the many dubious “evaluations” on the web of the iPhone 5S 64-bit feature, I saw some try to isolate the effect of ARM64 by comparing the run of an iOS App Store benchmark app (that had been updated to have an ARM64 slice) on an iPhone 5S to a run of that same app… on an iPhone 5. Facepalm. As if the processor and SoC designers at Apple had not been able to work on anything else than implementing ARMv8 in the past year. As a result, what was reported as ARM64 improvements was in fact mostly the effect of unrelated improvements such as better caches, faster memory, improved microarchitecture, etc. Honestly, I’ve been running micro-benchmarks of my own devising on my iPhone 5S, and as far as I can tell the “Cyclone” processor core of the A7 is smoking hot (so to speak), including when running 32-bit ARMv7 code, so completely independently of ARMv8 and ARM64. The Swift core of the A6 was already impressive for a first public release, but here Cyclone knocks my socks off; my hat is off to the Apple semiconductor architecture guys.

Third Candidate: Opportunistic Apple Changes

Mike Ash talked about those, and I have not attempted to measure their effect, so I will defer to him. I will just comment that, to an extent, these improvements can also be seen as Apple being stuck with inefficiencies they cannot get rid of for application compatibility reasons on ARM/A32, and having found a way to avoid these inefficiencies in the first place, but only for ARM64 (and therefore, ARM64 is better ;). I mean, we in the Apple development community are quick to point and laugh at Windows for being saddled with a huge number of application compatibility hacks and a culture of fear of breaking anything, which caused that OS to become prematurely fossilized (and I mean, I’m guilty as charged too), and I think it’s only fair that we don’t blindly give Apple a pass on these things.

So, remember the non-fragile Objective-C ABI? The iPhone has always had it (though we did not necessarily realize it, as the Simulator did not have it at first), so why can’t Apple use it to add an inline retain count to NSObject in 32-bit ARM? I’m willing to bet that for such a fundamental Cocoa object, non-direct effects start playing a role when attempting even such an ostensibly simple change; I’m sure, for instance, that some shipping iOS apps allocate an enormous number of small objects, and would therefore run out of memory if Apple added 4 bytes of inline retain count to each NSObject, and therefore to each such object. Mind you, the non-fragile ABI has likely been useful elsewhere, on derivable Apple classes that are under less app-compatibility pressure, but it was not enough to solve the inline retain count problem by itself.

Verdict: certainly a win, but should we give credit to ARM64 itself?

Fourth Candidate: the Floating-Point ABI

This one is a fun one. It could also be considered an inefficiency Apple is stuck with on 32-bit ARM, but since ARM touts that a hard float ABI is a feature of ARM64, I’m willing to cut Apple some slack here.

When the iPhone SDK was released, all iOS devices had an ARM11 processor, which supported floating-point in hardware; however, Apple allowed, and even set as the default, Thumb for iOS apps, and Thumb on the ARM11 could not access the floating-point hardware, not even to put a floating-point value in a floating-point register. In order to allow Thumb code to call the APIs, the APIs had to take their floating-point parameters from the general-purpose registers and return their floating-point result in a general-purpose register; in fact, all function calls, including those between ARM functions, had to behave that way, because they could always potentially be called from Thumb code. This is called the soft-float ABI. And when, with the iPhone 3GS, Thumb gained the ability to use the floating-point hardware, it had to remain compatible with existing code, and therefore had to keep passing floating-point parameters in general-purpose registers. Today on 32-bit ARM, parameters are still passed that way for compatibility with the original usages.

This can represent a performance penalty, as transferring from the floating-point register file to the general-purpose register file is typically expensive (and sometimes the converse too). It is often small in comparison to the time needed for the called function to execute, but not always. ARM64 does not allow such a soft-float ABI, and I wanted to see if I could make the overhead visible, and then whether switching to ARM64 would eliminate it.

I created a small function that successively adds 1.0f, 2.0f, 3.0f, etc., up to (1<<20)*1.0f, to a single-precision floating-point accumulator that starts at 0.0, and another function which does the same thing except that it calls another function to perform the addition, through a function pointer to decrease the risk of it being inlined. Then I compiled the code as ARMv7, ran it on the iPhone 5S, and measured and compared the time taken by each function; then the same process, except the code was compiled for ARM64. Here are the results:

|                           | ARMv7     | ARM64    |
|---------------------------|-----------|----------|
| Inlined addition          | 6.103 ms  | 5.885 ms |
| Addition by function call | 12.999 ms | 6.920 ms |

Yup, we have managed to make the effect observable and to evaluate the penalty, and on ARM64 the penalty is decimated; there is some overhead left, probably that of the function call itself, which would be negligible in a real situation.

Of course, this was a contrived example designed to isolate the effect, where the least possible work is done for each forced function call, so the effect won’t be so obvious in real code, but it’s worth trying to build for ARM64 and see if improvements can be seen in code that passes around floating-point values.
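To give an idea, here is a sketch of the second function, the one that forces a call for each addition (all names mine; the first function simply performs the addition inline):

#include <stdint.h>

static float AddFloats(float a, float b)
{
    return a + b;
}

// Each addition goes through a function pointer, forcing every
// floating-point operand through the calling convention.
float ZPAccumulateByCalls(void)
{
    float (*add_func)(float, float) = AddFloats;
    float accum = 0.0f;
    uint32_t i;

    for (i = 1; i <= (1u << 20); i++)
        accum = add_func(accum, (float)i);

    return accum;
}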

Verdict: A win, possibly a nice win, in the right situations.

Non-Candidate: Apple-Enforced Limitations

Ah, yes, I have to mention that before I go. Remember when I mentioned that some ARMv8 features were theoretically available in 32-bit mode? That’s because Apple won’t let you use them in practice: you cannot create a program slice for 32-bit ARM that will only run on Cyclone and later (with the other devices using another slice). It is simply impossible (and I have even tried dirty tricks to do so, to no avail). If you want to take advantage of ARMv8 features like the AES and SHA accelerator instructions, you have to port your code to ARM64. Period. So Sayeth Apple.

That means, for instance, that Anand’s comparison on the same hardware of two versions of Geekbench: the one just before and the one just after addition of ARM64 support, while clever and the best he could do, is not really a fair comparison of ARM64v8 and ARM32v8, but in fact a comparison between ARM64v8 and ARMv7. When you remove the impressive AES and SHA1 advantages from the comparison table, you end up with something that may be fair, though it’s still hard to know for sure.

So these Apple-enforced limitations end up making us conflate the difference between ARM32v8 and ARM64v8 with the jump from ARMv7 to ARM64v8. Now mind you, Apple (and ARM, to an extent) gets full credit for both of them, but it is important to realize what we are talking about.

Plus, just a few paragraphs earlier I was chastising Apple for their constraining legacy, and what they did here, by not only shipping a full 64-bit ARMv8 processor but also, immediately, a full 64-bit OS, is to say: No. Stop trying to split hairs and try and use these new processor features while staying in 32-bit. Go ARM64, or bust. You have no excuse: the 64-bit environment was available at the same time as the new processor. And that way, it’s one less “ARM32v8” slice type in the wild, so one less legacy to worry about.

Conclusion

Well… no. I’m not going to conclude one way or the other: neither that the 64-bit aspect of the iPhone 5S is a marketing gimmick, nor that it is everything Apple implied it would be. I won’t enter this game. Because what I do see here is the result of awesome work from the processor side, the OS side, and the toolchain side at Apple to seamlessly get us a full 64-bit ARM environment (with 32-bit compatibility) all at once, without us having to second-guess the next increment. The shorter the transition, the better we’ll all be, and Apple couldn’t have made it shorter than that. For comparison, on the Mac the first 64-bit machine, the Power Mac G5, shipped in 2003; and while Leopard ostensibly added support for 64-bit graphical applications, iTunes actually ended up requiring Lion, released in 2011, to run as a 64-bit process: overall, on the Mac the same transition took 8 years. ARM64 is the future, and the iPhone 5S + iOS 7 is a clear investment in that future. Plus, with the iPhone 5S I was able to tinker with ARMv8 way ahead of when I thought I would be able to, so I must thank Apple for that.

Let me put something while you’re waiting…

A short note to let you know that I am currently working on a copiously researched post about the big new thing of the iPhone 5S processor, 64-bit ARM, as well as related updates to keep “Introduction to NEON on iPhone” and of course “A few things iOS developers ought to know about the ARM architecture” up to date. I already have a lot of interesting information about that aspect of the iPhone 5S, either from experimenting on it or found online, and while searching I happened to stumble upon this post, which is so true, and so exactly what I have been practicing for so long, that I wish I had thought of posting it myself; so I’m sharing it with you here.

In fact, rather than compiling then disassembling as in that post, I go one step further and compile to assembly (either use clang -S, or use the view assembly feature of Xcode):

[screenshot of part of the Xcode window, with the menu item for the view assembly feature highlighted]

Indeed, sometimes I don't merely need to figure out the instructions for a particular task, but also the assembler directives for it: for instance, in order to generate the proper relocation when referencing a global variable, or sometimes simply to know what my ARM64 assembly files need to start with in order to work…
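
For instance, here is the kind of session I mean (a sketch: global.c is a hypothetical two-line file, I have trimmed the output, and the exact directives vary with the toolchain version, but on Apple's ARM64 toolchain it should look something like this); the @PAGE/@PAGEOFF annotations are exactly the sort of relocation information I am after:

    $ cat global.c
    int counter = 42;
    int get_counter(void) { return counter; }
    $ xcrun clang -arch arm64 -O2 -S global.c -o -
            .section __TEXT,__text,regular,pure_instructions
            .globl  _get_counter
            .p2align 2
    _get_counter:
            adrp    x8, _counter@PAGE       ; page of the global…
            ldr     w0, [x8, _counter@PAGEOFF] ; …then offset within it
            ret
            .section __DATA,__data
            .globl  _counter
            .p2align 2
    _counter:
            .long   42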

A highly recommended technique. Don’t leave home without it:

ARM Processors: How do I do <x> in assembler?

How Nintendo could release iOS games

So there have been a few exchanges lately in the Apple pundit sphere about whether Nintendo is in trouble with the 3DS and whether it had better start releasing games for the iPhone and iPad. I would be out of my depth analyzing Nintendo's woes (or lack thereof), but on a few specific points I feel I can contribute.

First of all, my personal experience, for what it's worth, mostly matches Lukas': when my little sister got a Game Boy Color in 1998 or so, we all used it at home (including her), with only a few exceptions like road trips (where we would take a lot of things with us anyway). On the other hand, these days I use my Nintendo DS Lite during my daily commute, and it seems from various online and offline interactions that I am not alone; it is hard for me to tell whether Nintendo portable consoles are being displaced in that market: I do see a lot of people playing on their iPhones and Android devices in the commuter train, but would (even some of) these people have been playing on portable consoles instead were it not for smartphones? I have no idea. I would carry my briefcase during my commute anyway, but it is possible that others did take advantage of their smartphone to forego the portable console, and the briefcase/backpack/attaché case along with it, for their commute; in that case, for them at least, the smartphone did in effect displace the portable console.

Now, on the matter of what Nintendo could do: I think that if they followed John's recommendation and started releasing games for the iOS platform, such a strategy would have the odds stacked against it, at least if all they do is release iOS games for the sake of releasing iOS games. Others went into some of the reasons why, but one factor I would like to add is that it would not sit well internally at Nintendo. Nintendo is a proud company. For starters, they have buried all of their predecessors and 80s contemporaries, and by "buried" I mean that Nintendo (at least) drove them out of the hardware business: Atari, Amstrad, Commodore, SNK (Neo Geo), and of course Sega, and this is but an incomplete list. And for those which did manage to keep releasing games, the results have been… I think I'll go with "sub-par". Now, I think Nintendo would do better than these guys as a pure game developer, but it is easy to see why Nintendo employees would internally consider releasing games for iOS, even "a couple of $9.99 iPhone/iPad games to test the water", to be losing face. Even if they were to present it to the outside as "giving a glass of ice water to someone in hell".

As a result, if Nintendo were to work on releasing games for iOS, it would not have its best employees working on them, and it could even cause some of those to leave the company. Teams working on games for iOS would be considered turncoats by other parts of the company, with the possibility of turf wars and withheld cooperation from those other parts. What's more, both technical and non-technical Nintendo employees working on these games would have to deal with Apple's technical and policy rules for the iOS App Store, which are already frustrating to most iOS developers, so I will let you imagine how much more frustrating they would be to people used to making the rules. Apple certainly is not going to bend the rules for Nintendo, regardless of (or possibly because of) the "ice water" Nintendo would bring. In the end, that would cause some of these employees to question why they should be dealing with this stuff at all, and drive them to quit.

The comparison with iTunes for Windows is not apt at all. For one, by that point Windows was an incumbent; but more importantly, in general-purpose computing other network effects exist besides "more users attract more applications, and more applications attract more users", so it was possible for Apple to convince its own people that it was worth stomaching Windows development because it would help the iPod, and even indirectly the Mac as well; and in the end it did. By contrast, video gaming platforms have very few such secondary network effects, so for Nintendo employees to be working on games for iOS would be seen as playing for the other team. So I think it is too early for Nintendo to be seen releasing iOS games.

However, if Nintendo were to try releasing iOS games at this time anyway, one way they could do it that would dampen this factor (and maybe others) is by taking advantage of the iPhone to do things they could not do with the 3DS anyway. In particular, back in the days of the Game Boy and Game Boy Advance, Nintendo would sometimes release games featuring additional hardware in the cart, like Kirby Tilt 'n' Tumble or Pokémon Pinball. But nowadays this is no longer possible, as the cartridge slot in the Nintendo DS and 3DS just does not reasonably allow anything to stick out like that. Lately I have been playing WarioWare: Twisted!, which packs a gyroscope and rumble (I mean, just look at that cart) and only ever uses the A button besides those, and I thought it would be ideal for the iPhone, which features both. Now, it turns out that the 3DS does have an internal gyroscope, among other things, but as far as I can tell it does not have any rumble feature. So Nintendo could get a start on the iOS platform by releasing back catalog portable games that for one reason or another relied on a feature not present in the 3DS.

But releasing back catalog games does not a strategy make. So Nintendo could use this as a starting point for making new, original games for iOS which they could not do on the 3DS. Probably not by relying on things like rumble or multitouch, though, as Nintendo might want to put those in their next handheld and reserve games relying on them for that console; instead, these games for iOS would rely on features games use less often, like the magnetometer and location services (to make augmented reality games, for instance): since these are not traditional gaming hardware features, Nintendo could use them while keeping face, and with less fear of squandering an opportunity to keep such mechanisms exclusive to its next handheld.

And the (fictional) banner under which the team at Nintendo would do so, kind of like the Macintosh team's pirate flag, could be WarioWare, Inc., the fictional company founded by the greedy character Wario and supposed to be behind the microgames in the WarioWare series. For a start, because Nintendo has already played with Wario giving out a mock press conference or a mock press release and fundraiser; then, because the first back catalog game they would release this way would be WarioWare: Twisted!. But also because WarioWare games have always been quite… experimental, and this would merely be one more way to experiment, the new frontier for Nintendo. I am not even the first one to suggest the association.

And, well, okay, let's admit it: it is also because I love the WarioWare microgames that remix old Nintendo games, and would love even more to see old Mac and Apple II games get the same treatment in a WarioWare for iOS: Oregon Trail ("cross!"), Zork ("get lamp!"), Ultima ("find the exit!"), Mystery House ("type!"), Alice ("capture!"), Lode Runner ("dig!"), Shufflepuck Café ("block!"), Beyond Dark Castle ("throw!"), Realmz ("cast!"), Ambrosia games and Escape Velocity in particular ("land!"), Glider ("escape!"), Marathon ("shoot!"), Factory ("dispatch!"), Jetpack ("dodge!"), or, heck, VisiCalc ("calculate!") or HyperCard ("flip!")…

More on Mac OS X being the new Classic

In Mac OS X is the new Classic, I made some bold predictions about Apple's operating system future, but forgot to attach any sort of deadline to them. Allow me to repair that omission by adding this: I believe Apple will announce to developers the operating system transition described in that blog post within the next five years, at WWDC 2018 at the latest. So if WWDC 2018 comes and goes and no such thing has been announced, you are free to point and laugh at me.

On the same subject, I omitted that not only do I believe that Apple will allow Developer ID apps on desktop iOS, but I believe Apple will in fact allow distribution of desktop iOS apps without requiring developers to pay Apple for that privilege (either through a free certificate, or with no certificate needed at all); though, as with Mac OS X Mountain Lion currently, the default setting will likely not allow running such apps. This is consistent with what I already expressed in my blog post about Developer ID.

Lastly, for those of you wondering, I finally remembered an example of behavior that apps ported to Mac OS X could technically get away with, but that ended up "sticking out" among native apps, even to the user base: drawing directly to the screen outside of full-screen mode. At the start of Mac OS X some games would do so, even in windowed mode, but it ended up being noticed, as it resulted in ugly interactions with the transparent on-screen volume and brightness change overlays, and then in very bad interactions with Exposé; as a result, developers were pressured not to do so, and by the time of Mac OS X 10.4 Tiger pretty much no newly released app was drawing directly to the screen. In the same way, Apple would not even need to mandate all aspects of what a good iOS desktop app would be: they could allow some things that are not completely iOS-like so that developers can more easily port their apps, knowing that over time the community will ostracize such exceptions.

How can we know whether we can effect change from the outside?

Marco Arment thinks we can have some influence, however measured, over Apple through our writing (or podcasts or YouTube channels or other media, as the case may be); and I think so too, but I have to wonder to what extent it is true, and as a consequence whether it is even worth keeping in mind.

The first issue is that we don't know whether a decision was influenced by externally produced elements or not. As Marco wrote, Apple is not a waterfall dictatorship. But it is also a peculiar company, one which has consistently eschewed the "common wisdom" thrown at them (by the tech press in particular) for quite some time now, instead doing what they feel is best for them and for the user; and the least we can say is that it seems to have worked okay for them (so far). So by necessity Apple employees have, consciously or not, filters with which they dismiss many of these external opinions. Of course, we Apple developers are quite attuned to these filters, but on the other hand we are both users with slight delusions as to how representative our needs are of the customer base at large, and a party in a relationship with Apple, and as such likely come off as biased, especially on the subject of relations between Apple and third-party developers (iOS App Store policies come to mind in particular). Paul Graham made me realize this when he wrote in Apple's mistake:

Actually I suppose Apple has a third misconception: that all the complaints about App Store approvals are not a serious problem. They must hear developers complaining. But partners and suppliers are always complaining. It would be a bad sign if they weren’t; it would mean you were being too easy on them.

How do I convince an Apple employee, whom I cannot hear, that my criticism of something does not (just) come from the inconvenience this something causes me as a developer, but from my belief that this something causes subtle but important inconveniences for the user? I do not know. I mean, we still do not have trials on the iOS App Store after 5 years, for instance, so there has got to be some reason some of our pleas do not work.

Of course, I don’t expect an Apple representative to just go on the record telling that external opinions influenced this and that decision. Duh. But there are a variety of officious channels through which this could be hinted at. For instance, in the particular domain of Radar (Apple’s bug reporting tool) Matt Drance, formerly of DTS, gave a talk at C4 (scroll down to “Drance – How to be a Good Developer”) about your relation as a developer with Apple, in particular about Radar, giving information that I believe is not available anywhere else, and my thanks go to him for doing this talk, and to Philippe Casgrain for transcribing it and publishing his notes. As far as I know, no one who works or used to work at Apple ever talked publicly about the influence of external opinions the way Matt Drance talked about how radars are seen from the inside.

The second issue is that the tech community in general seems pretty quick to decree every Apple reaction to be the result of press coverage/public outcry/radar duplicates/etc., without any evidence (other than circumstantial) supporting that affirmation. The prime example being, of course, Apple's eventual decision to release an SDK for the iPhone, which many developers will tell you happened because the request had however many duplicates in Radar, while things couldn't possibly have been that simple (personally, I believe conversations over private channels between Apple and important software vendors, game developers in particular, played the main role in convincing the relevant Apple executives). Even for the example Marco gives, the replacement of Helvetica Neue Light by Helvetica Neue in the current iOS 7 betas, I remain unconvinced: this could just as easily be explained by the usual iteration of the design. The only instances where we can say with some credibility that Apple reacted to external influences are when they reversed their position on some app rejections (the Mark Fiore incident in particular comes to mind), and when (along with some help from the FTC) Apple renounced their additions to Section 3.3.1. As a result, it is hard to have a conversation in the tech community about what works when it comes to publishing opinions Apple employees will see as valid; and since we are not going to have this conversation with Apple either, this leaves most everyone in the dark.

As a result, I am going to keep writing my opinions here for my audience, but I am not going to pay attention to whether anyone at Apple could actually take them into account, as I do not want to worry over something whose outcome I cannot know.