“translation layers”, externally sold content, and unsandboxed apps

So Apple ended up relenting on most of the requirements introduced at the same time as subscriptions. Apple does still require that apps not sell digital content in the app itself through means other than in-app purchases, or link to a place where this is done. I would say this is a reasonable way to provide an incentive for these products to be offered as in-app purchases, were it not, first, for the fact that the agency model used for ebooks in particular (though I’m sure other kinds of digital goods are affected) does not allow for 30% of the price to go to Apple, even if the in-app price is 43% higher than the out-of-app price, and second, for the fact that some catalogs (Amazon’s Kindle one, obviously, but it must be a pain for other actors too) cannot even be made to fit in Apple’s in-app database.

John Gruber thinks this is not Apple’s problem, but Apple still has to exist in reality at some point. Besides, I don’t think Apple is entitled to 30% of any purchase where the buying intent originated in the app, over the whole lifetime of that app. Regardless of whether you think it’s fair, competitors will eventually catch up in this area and offer better conditions to publishers, making it untenable for Apple to keep this requirement. But it’s not fair either for Apple to shoulder for free the cost of screening, listing, hosting, etc. these “free” clients that in fact enable a lot of business. Maybe apps could be required to ensure the first $10 of purchases made in the app can be paid only using tokens bought through in-app purchase (thus avoiding the issue of exposing all SKUs to Apple); only then could they take users’ money directly.

But whatever else this edict has done (besides making the Kobo, Kindle, etc. apps quite inscrutable by forcing them to remove links to their respective stores), it has hurt Apple’s credibility with respect to developer announcements. Last year Apple prohibited Flash “translation layers”, and this prohibition had already been enforced (to the extent that it could be, anyway) for a few months when they relented on it. This year they dictated these rules for apps selling digital content, rejecting new apps for breaking rules that had not even been announced yet, with existing apps given until the end of June to comply, only for Apple to significantly relax the rules at the beginning of June (and extend the compliance deadline to the end of July). In both cases, then, developers were actually better off doing nothing and waiting to see what Apple would actually end up enforcing. I was about to wonder how many Mac developers were scrambling to implement sandboxing, supposed to be mandatory in the Mac App Store by November, but it turns out Apple may have jumped the gun here too, at the very least, as they just extended that deadline to March. In the future, Apple may claim that they warned developers of such things in advance, but the truth is that most of the stuff they warned about did not come to pass in the way they warned it would; so why should developers heed these “warnings”?

Steve

I wasn’t sure I should write something, at first. Oh, sure, I could have written about the fact I didn’t dress specially Thursday morning or didn’t bring anything to an Apple Store, as I felt that for Steve I should either do something in the most excellent taste or nothing at all, and I couldn’t think of the former (and so I kicked myself Saturday when I went to the Opera Apple Store to buy a Lion USB key, saw them, and thought “Of course! An apple with a bite taken out of it… dummy!”). Or I could have written about the fact he was taken from his families at far too early an age. Or about the fact that, except for this one (and variants of it, though one would have been enough), I was appalled by the editorial cartoons about the event (“iDead”? Seriously?). Or about a few obituaries I read or heard where the author mixed some criticism in with the praise (which by itself I don’t mind, honestly, he was kind of a jerk), but in a way that suggested the good could have been kept without the flaws, while, for instance, in an industry where having different companies responsible for different aspects of the user experience of a single device is considered standard practice, being a control freak is essential to ensure the quality of user experience that has made Apple a success. Or about how his presence in the keynotes during his last leave of absence (while he stepped back from presentation duties during the previous one), and his resignation merely six weeks ago, both take on a whole new meaning today.

But at the end of the day, what would I have brought, given the outpouring of tributes and other content about Steve Jobs, many from people more qualified and better writers than I am? Not much. However, I read a piece whose author acknowledges the impact Steve Jobs had on his life, and I thought I should, too, pay my dues and render unto Steve that which is Steve’s, if only to help with the cathartic process. I hope it will contribute something for his family, his family at Apple, his family at Disney/Pixar, and the whole tech and media industries in this time of grief.

I was quite literally raised with Apple computers; from an Apple ][e to the latest Macs, there has always been Apple (and only Apple) hardware in the house, for which I cannot thank my father enough. As a consequence, while I had no idea who Steve Jobs was at the time, he was already having a huge impact on me. Not because I think he designed these computers all by himself, but because, by demanding seemingly impossibly high standards from those who designed them with him (or, in the case of later Macs, by having left enough of a mark at Apple that the effect was almost the same), he ensured a quality of user experience way beyond that of any competitor, which allowed my young self to do things he wouldn’t have been able to do otherwise, and taught him to expect, nay, demand similar excellence from his computing devices.

Then I started learning about him when he returned to Apple in 1997, first from a press cautiously optimistic that the “prodigal son” could get Apple out of trouble, then as he spectacularly did so. I indirectly learned from him (in particular through folklore.org) that it requires a great deal of effort to make something look simple; that there is no such thing as good enough, merely good enough to ship this once (because, on the other hand, real artists ship); and that the job of the software developer is to be in service of the user experience, not to make stuff that is only of interest to other software developers and remains in a closed circuit.

Imagining my life had Steve Jobs not made what he made is almost too ludicrous to contemplate. Assuming I would even have chosen a career in programming, I would be developing mediocre software on systems that would be as usable as a mid-nineties Macintosh, if that, and would have very little of the elegance (come on: setting aside any quibble about who copied whom, do you think Windows or any other operating system would be where it is today were it not for the Mac, at the very least to compete with it and push it to do one better in the usability department?). And the worst thing is that I would have been content with it and considered it as good as it gets, and it would have been the same for almost all of my peers.

It’s thus safe to say that as far as my influences go, Steve Jobs is second only to my closest family members. By envisioning the future, then making it happen through leadership, talent and just plain chutzpah (for good or ill, it doesn’t seem to be possible to make people believe in your predictions of what the future will be made of, other than by actually taking charge and realizing it), he showed us what computers (and portable music players, and mobile phones, etc.) could be rather than what most people thought they could be before he showed us. And by teaching a legion of users, multiple generations of developers, and everyone at Apple to never settle for great but always strive for the best, he has ensured the continuation of this ethic for a few decades, at least (this is, incidentally, the reason why I am not too worried about the future of Apple, Inc.).

Thank you Steve. Thank you for everything. See you at the crossroads.

Benefits (and drawback) of compiling your iOS app for ARMv7

In “A few things iOS developers ought to know about the ARM architecture”, I talked about ARMv6 and ARMv7, the two ARM architecture versions that iOS supports, but I didn’t touch on an important point: why you would want to compile for one or the other, or even both (thanks to Jasconius at Stack Overflow for asking that question).

The first thing you need to know is that you never need to compile for ARMv7: after all, apps last updated at the time of the iPhone 3G (and thus compiled for ARMv6) still run on the iPad 2 (provided they didn’t use private APIs…).

Scratch that, you may have to compile for ARMv7 in some circumstances: I have heard reports that if your app requires iOS 5, then Xcode won’t let you build the app ARMv6 only. – May 22, 2012

So you could keep compiling your app for ARMv6, but is that what you should do? It depends on your situation.

If your app is an iPad-only app, or if it requires a device feature (like video recording or magnetometer) that no ARMv6 device ever had, then do not hesitate and compile only for ARMv7. There are only benefits and no drawback to doing so (just make sure to add armv7 in the Required Device Capabilities (UIRequiredDeviceCapabilities) key in the project’s Info.plist, otherwise you will get a validation error from iTunes Connect when uploading the binary, such as: “iPhone/iPod Touch: application executable is missing a required architecture. At least one of the following architecture(s) must be present: armv6”).
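For reference, here is what the relevant fragment of the Info.plist might look like; this is only a sketch of the one key in the standard plist XML format (the rest of the Info.plist is omitted):

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>armv7</string>
</array>
```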

If you still want your app to run on ARMv6 devices, however, you can’t go ARMv7-only, so your only choices are to compile for ARMv6 alone, or for both ARMv6 and ARMv7, which generates a fat binary that still runs on ARMv6 devices while taking advantage of the new instructions on ARMv7 devices1. The latter will almost double the executable binary size compared to the former; executable binary size is typically dwarfed by the art assets and other resources in your application package, so this typically doesn’t matter, but make sure to check the increase. In exchange, you will get the following:

  • ability to use NEON (note that you will not automatically get NEON-optimized code from the compiler; you must explicitly write that code)
  • Thumb that doesn’t suck: if you follow my advice and disable Thumb for ARMv6 but enable it for ARMv7, this means your code on ARMv7 will be smaller than on ARMv6, helping with RAM and instruction cache usage
  • slightly more efficient compiler-generated code (ARMv7 brings a few new instructions besides NEON).
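To make the first point concrete, here is a minimal sketch of what explicitly written NEON code looks like (the function name and the choice of the vaddq_f32 intrinsic are mine, for illustration), with a scalar C fallback so the same file also builds for ARMv6 or any other target:

```c
#include <stddef.h>

#if defined(__ARM_NEON__)
#include <arm_neon.h>
#endif

/* Adds two float arrays pairwise. For simplicity, this sketch requires count
   to be a multiple of 4 so the NEON path covers every element. */
void AddFloatArrays(const float *a, const float *b, float *result, size_t count)
{
#if defined(__ARM_NEON__)
    /* NEON path: four floats per vector operation. */
    for (size_t i = 0; i < count; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(result + i, vaddq_f32(va, vb));
    }
#else
    /* Scalar fallback: the compiler will not turn this into NEON code for
       you just because you build for ARMv7; you have to opt in above. */
    for (size_t i = 0; i < count; i++)
        result[i] = a[i] + b[i];
#endif
}
```

When building fat, the ARMv6 slice gets the scalar loop and the ARMv7 slice gets the NEON loop, which is exactly the situation described in the bullet above.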

Given the tradeoff, even if you don’t take advantage of NEON it’s almost always a good idea to compile for both ARMv6 and ARMv7 rather than just ARMv6, but again make sure to check the size increase of the application package isn’t a problem.

Now I think it is important to mention what compiling for ARMv7 will not bring you.

  • It will not make your code run more efficiently on ARMv6 devices, since those will still be running the ARMv6 compiled code; this means it will only improve your code on devices where your app already runs faster. That being said, you could take advantage of these improvements to, say, enable more effects on ARMv7 devices.
  • It will not improve performance of the Apple frameworks and libraries: those are already optimized for the device they are running on, even if your code is compiled only for ARMv6.
  • There are a few cases where ARMv7 devices run code less efficiently than ARMv6 ones (double-precision floating-point code comes to mind); this will happen on these devices even if you only compile for ARMv6, so adding (or replacing by) an ARMv7 slice will not help or hurt this in any way.
  • If you have third-party dependencies with libraries that provide only an ARMv6 slice (you can check with otool -vf <library name>), the code of this dependency won’t become more efficient if you compile for ARMv7 (if they do provide an ARMv7 slice, compiling for ARMv7 will allow you to use it, likely making it more efficient).

So to sum up: you should likely compile for both ARMv6 and ARMv7, which will improve your code somewhat (or significantly, if you take advantage of NEON), but only when running on ARMv7 devices, while increasing your application download to a likely small extent; unless, that is, you only target ARMv7 devices, in which case you can drop compiling for ARMv6 and eliminate that drawback.


  1. Apple would very much like you to optimize for ARMv7 while keeping ARMv6 compatibility: at the time of this writing, the default “Standard” architecture setting in Xcode compiles for both ARMv6 and ARMv7.

China declined to join an earlier coalition, Russia reveals

The saga of France’s liquidation sale continues (read our previous report). Diplomatic correspondence released yesterday by Russia in response to China’s communiqué reveals that China was asked to join an earlier coalition to acquire South Africa’s nuclear arsenal (an acquisition China mentioned in its communiqué as evidence of a conspiracy), but China declined.

This would seem to undermine China’s argument of an international conspiracy directed against it; at the very least, it strengthens the earlier coalition’s claim that its only purpose was to figuratively bury these nuclear weapons. It should be noted that the high-profile countries Russia and the USA are members of both coalitions.

China then answered with an update to their communiqué (no anchor, scroll down to “UPDATE August 4, 2011 – 12:25pm PT”) stating the aim of this reveal was to “divert attention by pushing a false ‘gotcha!’ while failing to address the substance of the issues we raised.” The substance being, according to China, that both coalitions’ aim was to prevent China from getting access to these weapons for itself, weapons it would have been able to use to deter attacks, and that China joining the coalition wouldn’t have changed this.

Things didn’t stop there, as Russia then answered back (don’t you love statements spread across multiple tweets?) that this showed China wasn’t interested in partnering with the international community to help reduce the global nuclear threat.

For many geopolitical observers, the situation makes a lot more sense now. At the time the France sale was closed and the bids were made public, some wondered why China wasn’t in the winning consortium and had instead made a competing bid with Japan. China and Japan are relative newcomers to the nuclear club, and while China’s status as the world’s manufacturer pretty much guarantees it will never be directly targeted, its relative lack of nuclear weapons is, according to analysts, the reason it has less influence than its size and GDP would suggest. Meanwhile, China is subjected to a number of proxy attacks, so analysts surmise that increasing its nuclear arsenal would be a way for China to deter such attacks on its weaker allies.

So the conclusion reached by these observers is that, instead of joining alliances it perceived as designed to keep the weapons out of its reach, China played all or nothing. But the old boys’ nuclear club still has means China doesn’t have, so China lost on both counts, and is now taking the battle to the public relations scene.

Geopolitical analyst Florian Müller in particular was quoted pointing out that, given the recent expansion of its influence, it was to be expected for China to be targeted by proxy, and that the other countries were likely following their normal course of action rather than engaging in any organized campaign.

So to yours truly, it seems that while the rules of nuclear deterrence may be unfair, it is pointless to call out the other players for playing by these rules, and doing so makes China look like a sore loser. But the worst part may be that the Chinese officials seemingly believe their own, seemingly self-contradictory rhetoric (if they are so much in favor of a global reduction of nuclear armaments, why wouldn’t they contribute to coalitions designed to take some out of circulation?), which would mean the conflict could get even more bitter in the future.

France goes down, its nuclear weapons, and China

So France is going belly up. Kaput. Thankfully not after a civil war: the strife is more on the political side, though a few people unfortunately died in some of the riots. But after regions like Corsica, Brittany and Provence unilaterally declared independence, after Paris declared a real Commune in defiance of the government, and after Versailles, the usual fallback, did not seem safe either, it became clear there was no way out but the eventual dissolution of the old, proud French Republic; much like the USSR dissolved in 1991, but without an equivalent of Russia to pick up the main pieces, the Paris Commune being seen as too unstable.

Among the numerous geopolitical problems this raised, one stood out. Among its armed forces, the French Republic had under its control several nuclear warheads, the missiles to carry them, and a fleet of submarines to launch them. Legitimately terrified that these weapons could fall into the hands of a rogue state or terrorist group, the international community sustained the French government long enough for it to organize a liquidation sale of its nuclear armament and other strategic assets. But Russia certainly wasn’t going to let the USA buy them, and neither were the USA willing to see Russia get them. Realizing that keeping these weapons out of the wrong hands was more important than either party taking control of them itself, Russia, the USA, and a few other countries like India, the United Kingdom, etc. formed a coalition and jointly bid for, and won, the dangerous arsenal.

Though they agreed on a few principles before making this alliance, getting control of the arsenal was considered the urgent matter, and at the time the sale was closed the coalition had not yet agreed on what to do with these weapons. But most geopolitical observers and analysts agreed that the coalition would end up keeping the weapons around just in case, but inactive and offline, if it was not going to dismantle them outright; after all, for them to be used would require the joint agreement of all parties, an agreement that was extremely unlikely to ever be reached.

But China suddenly started publicly complaining that the members of the coalition were engaged in a conspiracy against it, citing military interventions by some of the coalition members in foreign countries, various international disagreements, and now this France liquidation sale (China did not take part in the coalition; it made, with one ally, a separate bid for these weapons but was eventually outbid by the coalition). Observers, however, were skeptical: these events did not seem connected in any way except for being mentioned together in that communiqué; besides, as if joint ownership didn’t already ensure at least immobilization by bureaucracy, the coalition includes one partner of China: Brazil. And the fact that the coalition spent quite a bit of money to acquire this arsenal, more than some initial estimates, probably only reflects the importance of keeping it out of the wrong hands, given the unstable international landscape, what with rogue states, terrorist groups, less than trustworthy states gaining importance, etc. Not to mention that China itself bid rather high in that auction.

In the end, it is suspected that, while Chinese officials may believe this conspiracy theory themselves, these complaints made in public view were actually intended to fire up nationalism in the country, or even better, in the whole of East Asia.

The saga, unsurprisingly, didn’t stop there: Russia answered, read all about it in the followup. – August 5, 2011

In support of the Lodsys patent lawsuit defendants

If you’re the kind of person who reads this blog, then you probably already know from other sources that an organization called Lodsys is suing seven “indie” iOS (and beyond!) developers for patent infringement, having started threatening them (and a few others) with such action about three weeks ago.

Independently of the number of reactions this warrants, I want to show my support, and I want you to show your support, to the developers who have been thus targeted. Apparently, in the USA, even defending yourself just to find out whether a claim is valid doesn’t merely cost an arm and a leg, but can put such developers completely out of business through the sheer cost of the litigation. So it must be pretty depressing when you work your ass off to ship a product, a real product with everything it entails (engine programming, user interface programming, design, art assets, testing, bug fixing, support, etc.), only to receive demands for part of your revenue just because someone claims to have come up with a secondary part of your app first, this someone being potentially anyone with half of a quarter of a third of a case and richer than you, since you’d be out of business by the time the claim was found to be invalid. It must be doubly depressing when the infringement is from your use of a standard part of the platform that you should (and in fact, in the case of iOS in-app purchase, have to) use as a good platform citizen.

I have known about iconfactory.com and enjoyed their work for fifteen, er, twelve1 years now; I use Twitterrific, I have bought Craig Hockenberry’s iPhone dev book, I follow him on Twitter, and I have met him once. I know the Iconfactory is an upstanding citizen of the Mac and iOS ecosystem and doesn’t deserve this. I am not familiar with the other defendants, but I am sure they do not deserve to be thus targeted, either.

So, to Craig, Gedeon, Talos, Corey, Dave, David, Kate and all the other Iconfactory guys and gals; to the fine folks of Combay, Inc.; to the no less fine folks of Illusion Labs AB; to Michael; to Richard; to the guys behind Quickoffice, Inc.; to the people of Wulven Games; I say this: keep faith, guys. Do not let this get you down; keep doing great work; know there are people who appreciate you for it and support you. I’m supporting you whatever you decide to do: if you decide to settle, that’s okay, maybe you don’t have a choice, you have my support; if you decide to fight, you have my support; and if you want to set up a legal defense fund to be able to defend yourselves, know that there are people who are ready to pitch in. I know I am.

And in the meantime, before the patent system in the USA gets the overhaul it so richly deserves (I seriously wonder how any remotely innovative product2 can possibly come out of the little guys in the USA, given such incentives), maybe we can get the major technology companies to stop selling their products in that infamous East Texas district (as well as in the other overly patent-friendly districts), such that the district becomes a technological blight where nothing more advanced than a corded phone is available. I don’t think this could or would prevent patent lawsuits over tech products from being filed there, but at least it would place the court there in a very uncomfortable position vis-à-vis the district population.


  1. Let’s just say my memories of my time online in the nineties are a bit fuzzy; it’s a recent browse in digital archives that made me realize I in fact only discovered iconfactory.com in 1999.

  2. The initial version of this post just read “innovation” instead of “remotely innovative product”; I felt I needed to clarify my meaning.

April Fools’ 2011

So, if you read my previous post before today… April fools! And not in the way you might think. This behavior of the iPad 2 is real; I did not make it up, and I did indeed verify it this week. The joke is that I claimed to be surprised, hoping to make people believe this unexpected behavior was an April fools’ joke. Posting strange-sounding yet true information on April the first: now that is the real prank.

It’s hard to tell how successful I was in tricking people into believing this was a joke; I did however get a few emails explaining (as I pretended to request) how such a thing was possible. Congratulations guys, you did not fall for it!

I completely expected this behavior of the iPad 2: I knew about ARM having a weakly ordered memory model, and had known for some time (this test code was prepared over the last few weeks, for instance). By pretending to be surprised, I attempted to raise awareness of this behavior, of which many people are completely unaware; indeed, programmers have rarely been exposed to weakly ordered memory systems so far: x86 is strongly ordered, and even those who have worked on ARM have only worked on single-core systems until now (the only consumer hardware I know of that exposed a weakly ordered memory model is the various dual-processor PowerPC Power Macs, which were not very common, and back then Mac code was mostly single-threaded). I had been thinking about ways to raise this awareness for some time, but it was hard to figure out how, since it was pretty much a theoretical concern as long as no mainstream multi-core ARM hardware was shipping. But now that the iPad 2, the Xoom, and other multi-core ARM tablets and handsets have shipped, I can show everyone that this does indeed occur.

Later today or tomorrow, I will replace the contents of that post with a more in-depth description and a few references; in other words, the post I intended to write in the first place, before I realized I could turn it into a small April fools’ prank. It will be at the same URL; in fact, you might have noticed the slug did not really match the title, which I intended as a small hint that something was off…

Whether you thought the iPad 2 behavior was a joke, you knew this behavior was real but believed I was genuinely surprised, or you saw right through my feigned surprise, thank you for reading!

(On that note, I should mention I have been sloppy in checking my spam filter’s catches so far, and my ISP deletes them automatically after one week. So if you ever wrote me and I never answered, this may be why. My apologies if this happened to you; please send the email again if you feel like doing so.)

ARM multicore systems such as the iPad 2 feature a weakly ordered memory model

At the time of this writing, numerous multicore ARM devices are either shipping or about to ship: handsets, of course, but more interestingly this wave of tablets, the iPad 2 in particular (but not only it), seems to be generally based on multicore ARM chips, be it the Tegra 2 from nVidia, the OMAP 4 from TI, etc. ARM multicore systems did exist before, as the ARM11 was MP-capable, but I’m not aware of it being used in many (or any) devices open for third-party development; this really seems to be exploding now with the Cortex A9.

These devices will also expose many programmers for the first time to a surprising system behavior, one which, if not understood, will cause crashes, or worse.

Let me show what I’m talking about:

BOOL PostItem(FIFO* cont, uint32_t item) /* Bad code, do not use in production */
{ /* Bad code, do not use in production */
#error This is bad code, do not use!
    size_t newWriteIndex = (cont->writeIndex+1)%FIFO_CAPACITY; /* Bad code, do not use in production */
    /* see why at http://wanderingcoder.net/2011/04/01/arm-memory-ordering/ */
    if (newWriteIndex == cont->readIndex) /* Bad code, do not use in production */
        return NO; /* notice that we could still fit one more item,
                    but then readIndex would be equal to writeIndex
                    and it would be impossible to tell from an empty
                    FIFO. */
                    
    cont->buffer[cont->writeIndex] = item; /* Bad code, do not use in production */
    cont->writeIndex = newWriteIndex; /* Bad code, do not use in production */
    
    return YES; /* Bad code, do not use in production */
}

BOOL GetNewItem(FIFO* cont, uint32_t* pItem) /* Bad code, do not use in production */
{
#error This is bad code, do not use!
    if (cont->readIndex == cont->writeIndex) /* Bad code, do not use in production */
        return NO; /* nothing to get. */
        
    *pItem = cont->buffer[cont->readIndex]; /* Bad code, do not use in production */
    /* see why at http://wanderingcoder.net/2011/04/01/arm-memory-ordering/ */
    cont->readIndex = (cont->readIndex+1)%FIFO_CAPACITY; /* Bad code, do not use in production */
    
    return YES; /* Bad code, do not use in production */
}

(This code is taken from the full project, which you can download from Bitbucket in order to reproduce my results.)

This is a lockless FIFO; it looks innocent enough. I tested it in the following setup: a first thread posts consecutive integers slightly more slowly (so that the FIFO is often empty) than a second thread, which gets them and checks that it gets consecutive integers. When this setup was run on the iPad 2, in every run the second thread very quickly (after about 100,000 transfers) got an integer that wasn’t consecutive with the previous one received; instead, it was the expected value minus FIFO_CAPACITY, in other words a leftover value from the buffer.

What happens is that the system allows writes performed by one core (the one running the first thread) to be seen out of order by another core. So the second core, running the second thread, first sees that writeIndex was updated, goes on to read the buffer at offset readIndex, and only after that sees the write to that location in the buffer; so it reads what was there before that write.

A processor architecture which, like ARM, allows this to happen is referred to as weakly ordered. This behavior might seem scandalous, but remember that your code runs on two processing units which, while they share the same memory, are not tightly synchronized, so you cannot expect everything to behave exactly as in the single-core case; this is what allows two cores to be faster than one. Many processor architectures permit writes to be reordered (PowerPC, for instance); among other things, permitting this allows an important reduction in cache synchronization traffic. While it also allows more freedom when designing out-of-order execution in the processor core, it is not necessary for it: a system made of in-order processors may reorder writes because of the caches, and it is possible to design a system of out-of-order processors that does not reorder writes.

x86, on the other hand, guarantees that writes won’t be reordered; that architecture is referred to as strongly ordered. This is not to say it doesn’t do any reordering: for instance, reads are allowed to happen ahead of writes that come “before” them, which breaks a few algorithms like Peterson’s algorithm. Since this architecture dominates the desktop, and since common mobile systems have only featured a single core so far and thus don’t display memory ordering issues, programmers have gotten used to a strongly ordered world and are generally unaware of these issues. But now that the iPad 2 and other mainstream multicore ARM devices are shipping, exposing a large number of programmers to a weakly ordered memory model for the first time, they can no longer afford to remain ignorant; and going from a strongly ordered memory model to a weakly ordered one breaks far more, and much more common, algorithms (like double-checked locking and this naive FIFO) than going from a single processor to a strongly ordered multiprocessor system ever did.
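To make the double-checked locking example concrete, here is a sketch of the naive pattern (the names and the trivial Singleton type are mine, for illustration), annotated with where weak ordering breaks it; like the FIFO above, this is bad code shown only to point at the problem:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct { int initialized_value; } Singleton;

static pthread_mutex_t gLock = PTHREAD_MUTEX_INITIALIZER;
static Singleton *gInstance = NULL; /* Bad code, do not use in production */

Singleton *GetSharedInstance(void) /* Bad code, do not use in production */
{
    if (gInstance == NULL) { /* first, unlocked check */
        pthread_mutex_lock(&gLock);
        if (gInstance == NULL) { /* second, locked check */
            Singleton *tmp = malloc(sizeof(Singleton));
            tmp->initialized_value = 42;
            /* On a weakly ordered system, the write to gInstance below may
               become visible to another core BEFORE the write to
               initialized_value above: a second thread can then pass the
               unlocked check and read an uninitialized Singleton. */
            gInstance = tmp;
        }
        pthread_mutex_unlock(&gLock);
    }
    return gInstance;
}
```

On x86 this particular reordering of writes cannot be observed, which is exactly why code like this can survive testing on a development machine and still fail on a weakly ordered device.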

Note that this can in fact cause regressions in iOS App Store apps that are already shipping (it is unclear whether existing apps are confined to a single core for compatibility or not) since, while very few iOS apps really take advantage of more than one core yet, some will nevertheless do so from time to time, as they are threaded for other reasons (e.g. to have tasks run in real time for games or audio/video playback). However, Apple certainly tested existing iOS App Store apps on the iPad 2 hardware and would have noticed if this caused many issues, so it probably affects only a limited number of apps and/or occurs rarely. Still, it is important to raise awareness of this behavior, as an unprecedented number of weakly ordered memory devices are now going to be in the wild, and programmers are expected to make use of these two cores.

What now?

So what if you have a memory ordering issue? Well, first, you don’t necessarily know that it is one; just as with threading bugs, the only thing you know is that you have an intermittent issue, and you won’t know it is memory ordering related until you find the root cause. And if you thought threading bugs were fun, wait until you investigate a memory ordering issue. Like threading issues, the scenarios in which memory ordering issues manifest themselves occur rarely, which makes them just as hard (if not harder) to track down.

To add to the fun, the fact that your code runs fine on a multicore x86 system (which practically all Intel Macs, and therefore practically all iOS development machines, are) does not prove at all that it will run correctly on a multicore ARM system, since x86, as we’ve seen, is strongly ordered. These memory ordering issues will therefore manifest themselves only on device, never in the Simulator. You have to debug on device.

Once you have found a plausible culprit, how do you fix it (since often the only way to show the root cause is where you suspect it is, is to fix the code anyway and see whether the symptoms disappear)? I advise against memory barriers. At least with threading bugs you can reason in terms of a sequence of events (instructions of one thread happening, one thread interrupting another, etc.); with memory ordering bugs there is no longer any such thing as a single sequence, as each core has its own; as in Einstein’s relativity, simultaneity in different reference frames is now meaningless. This makes memory ordering issues extremely hard to reason about, and the last thing you want is to leave the issue incorrectly resolved: it’s neither done nor to be done.

Instead, what I do is protect the code with a mutex, as it should have been in the first place. On top of its traditional role, the mutex ensures that a thread that takes it sees the writes made before it was last released elsewhere, which takes care of the problem. Your code won’t be called often enough for the mutex to have any performance impact (unless you’re one of the few working on the fundamental primitives of the operating system or of a game engine, in which case you don’t need my advice).
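As a sketch of this fix, here is the naive FIFO again with its operations serialized by a mutex; pthreads is used for illustration, but the principle is the same with any mutex implementation:

```c
#include <pthread.h>
#include <stdbool.h>

#define FIFO_SIZE 16

/* The same FIFO, protected by a mutex. Besides providing mutual
   exclusion, unlocking the mutex publishes the writes made under it,
   and locking it makes those writes visible to the core that takes
   it next, which resolves the ordering problem. */
static int buffer[FIFO_SIZE];
static unsigned readIndex = 0, writeIndex = 0;
static pthread_mutex_t fifoLock = PTHREAD_MUTEX_INITIALIZER;

bool fifo_push(int value)
{
    bool ok = false;
    pthread_mutex_lock(&fifoLock);
    unsigned next = (writeIndex + 1) % FIFO_SIZE;
    if (next != readIndex) {            /* not full */
        buffer[writeIndex] = value;
        writeIndex = next;
        ok = true;
    }
    pthread_mutex_unlock(&fifoLock);    /* publishes both writes */
    return ok;
}

bool fifo_pop(int *value)
{
    bool ok = false;
    pthread_mutex_lock(&fifoLock);      /* sees prior writes */
    if (readIndex != writeIndex) {      /* not empty */
        *value = buffer[readIndex];
        readIndex = (readIndex + 1) % FIFO_SIZE;
        ok = true;
    }
    pthread_mutex_unlock(&fifoLock);
    return ok;
}
```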

For new iOS code, especially code meant to run on more than one core at the same time, I suggest using Grand Central Dispatch, and using it in place of any other explicit or implicit thread communication mechanism. Even if you don’t want to tie yourself too much to iOS, coding in this way will make the various tasks and their dynamic relationships clear, making any future port easier. If you’re writing code for another platform, try to use similar task management mechanisms; if they exist, they’re very likely to be better than what you could come up with.

But the important thing is to be aware of this behavior, and spread the awareness in the organization. Once you’re aware of it, you’re much better equipped to deal with it. As we say in France, “Un homme averti en vaut deux” (a warned man is worth two).


This post was initially published with entirely different contents as an April’s fools. In the interest of historical preservation, the original content has been moved here.

PSA: “previous” and “next” links in archives

(PSA, for the readers not familiar with this bit of US culture, stands for Public Service Announcement; these are similar to ads, except that instead of promoting a product, they serve to forward a message of public interest, like “Don’t do drugs”)

On the web, many archives can be browsed chronologically; I’m not just thinking of blogs here, many web pages can be thought of as being in an archive, webmail for instance. And more often than not, the links to do so are labelled “previous (item)” and “next (item)”. And here lies the rub: “previous” will typically take you to a newer item, and “next” to an older item. Wait a minute…

Oh, I certainly see the faulty logic that leads there. It started innocently enough, with “next”, denoting a link to the next page to go to see more items in the archive, but then “previous” came in for the link to go in the opposite direction; and now many sites are content with this.

But there are much better choices: for instance, “earlier” and “later”. “Earlier” has the advantage of being relatively positive (compared to, say, “older”), which is good for encouraging the reader who has reached the bottom of the front page to dig deeper into your archives.

Now why am I calling attention to this? Because I am guilty of it myself. Indeed, when I posted “Raising the Level of Discourse” my blog gained a second page. I immediately went to check it and saw that it does indeed feature “Previous Page”. The technical reason is that this seems to be the default for the theme I’m using, but that’s hardly an excuse; even if I can’t modify the theme myself (I’m on WordPress.com and have to use the available themes; I can customize one or two things and add CSS, but that’s it), I have to own up to my choices: a good craftsman does not blame his tools, but if they are not up to the task, he either fixes them himself, gets them fixed, or changes tools altogether. However, this takes time, which I haven’t yet taken, so in the meantime it is still “Previous Page”; I apologize for the inconvenience.

The issue was resolved when I switched to a different theme in the beginning of 2012; I’m keeping this post for historical interest. – May 22, 2012

So it’s true

So it’s true. Along with announcing support for subscriptions, Apple has confirmed the policy changes that many suspected were behind the rejection of the Sony Reader app: apps can no longer link to a place to buy content for the app (there can still be such a place, it just must not be linked from the app), and must instead offer an in-app purchase that is at least as advantageous (Apple first released a press release announcing support for subscriptions and the new policy that would apply to them, then confirmed the policy would also apply to all paid content used by apps, not just subscriptions).

In the way it’s presented in the quote attributed to Steve Jobs in the press release, it sounds like a wager between two gentlemen: a friendly, interested contest to see who can bring the most people in, everything else being otherwise equal. That the publisher earns less in one case, and yet maintains equivalent prices, is not unheard of, either: for instance, books sold directly often have the same price on Amazon or in a bookstore (setting aside any shipping, of course), even though the retailer (and this includes Amazon) takes quite a bit of margin. In fact, in some cases, like the tabletop gaming world, publishers have a store on the web but downright encourage customers to buy from their friendly local game store, because they know the benefits these places provide. So on the face of it, this looks reasonable.

Except for this: what value, exactly, does Apple bring to the table here? For in-app purchases that unlock features, there is a quite justified case to be made that, since Apple distributes the app with the feature potentially present, just not unlocked yet, the hosting, screening, and to an extent, DRM services provided by Apple apply as well to in-app purchases, justifying the same 30% cut. However, none of these apply to content in-app purchases: new content can be added all the time, without the need to send a new binary to customers (as an aside, I wonder when Apple added the ability for in-app purchases to be added without the need for a new binary), so this content does not come out of Apple bandwidth, never goes through the approval process, and likely has its own DRM.

This leaves payment processing. Now don’t get me wrong, iTunes is quite awesome payment processing. Back when I was a teenager I really wanted to pay for shareware, but I had no way to do so, and my parents refused to do so for me, so I played pretend by printing the order form (this was even before we had Internet). But a teenager today can buy a prepaid iTunes card with his pocket money pretty much anywhere, or use iTunes credit gifted by his parents, and buy apps for his iPod Touch, the family iPad, or the family Mac. This is awesome. So I think Apple can justify having a payment processing fee slightly larger than, say, that of PayPal. But most definitely not 30%.

Oh, there is, of course, the immense privilege of your app being allowed to run on Apple devices, but it has been established again and again that it is a deadly sin to make it difficult for people to build on top of your platform, because a platform is only as good as what has been built on top of it. No successful platform has ever collected royalties from applications, except for game consoles; but you have to remember that consoles are sold at a loss or at zero margin (recouped on the games), which is not exactly the case for iDevices.

The end result is that, contrary to the cases I mentioned earlier where publishers would earn less, but still earn money, through another retailer, this leaves publishers with the prospect of selling to Apple device owners at a loss, due to a fee disproportionate to the value Apple brings, as it is certainly larger than the share of profits the publisher can afford to spare for payment processing alone (given that they still need to do pretty much everything else). Even if the publisher has a way to sell directly to these same consumers, and even if he was confident most of the sales would occur directly, uncontrolled selling at a loss is still way too big a risk for any publisher to take. I don’t think Apple is seeking rent as much as trying to establish a balanced system, but even with the best of intentions they have set up a policy that will drive publishers away.

Even if you see no or little problem on Apple’s side with this new policy, consider the following: the end result of this, whatever the reasons or whichever way the faults lie, will be that the Apple “ecosystem” will have its own specific publishers (of books, movies, comics, etc…), either Apple itself or ones that are, in practice, dependent on Apple; publishers different from the ones used by the rest of the world. Is it really what you want? Is it really what Apple wants?

Back in the 80s, Apple with the Mac was years ahead of everyone else, and made obscene profits exploiting its then-current strengths while positioning itself too narrowly, before this caught up with them in the end. While the mechanisms and the kind of partners (application developers/content providers) involved this time are different, I wonder if the same is not going to happen with iOS as well.