iOS app management removed from iTunes (a first reaction)

Perhaps a bit lost in the noise of last week’s announcements was the release of iTunes 12.7, which removes iOS app management (oh, and ringtones, too): you can no longer buy iOS apps on the desktop, or update them, or sync the ones you bought to your device except in an ad hoc way.

I admit I was taken by surprise. I use these features heavily: as a rule, I do not download apps or app updates directly to my iPhone or iPad (there are exceptions). If I am at home on WiFi, I figure I might as well use my Mac, and elsewhere I'd rather not eat into my cellular bandwidth cap, battery life, etc. Plus, I do find iOS app browsing in the built-in (iOS) App Store app to be a substandard experience. Yes, some of us still don't buy into the idea that the handheld device is necessarily self-sufficient; I'd very much like to see you add freely distributed music (which, as a result, is not in the iTunes Store) to your iPhone music library, or back up your iPhone to a non-Internet backup location, using only the iPhone itself. As long as I can't do that and have to sync, I might as well use sync for everything (and honestly, I don't mind sync per se).

And of course, speaking as a developer-adjacent person, I have to wonder what the impact is when potential customers who come across a link to an iOS app while browsing the web on their desktop… can no longer buy it there. There will be lost sales until Apple improves the situation (QR codes would be a start, for instance).


Now that I think about it more, I may be able to live without this feature. I don't reorder apps on the iPhone screen from iTunes; app thinning means that even with two devices, my bandwidth use should end up lower (compared to downloading the "fat" app or app update as I do today); I haven't tried to keep superseded versions of apps just in case an update ruined one; I will search for and discover apps on the device if I have to (the main trouble in this scenario being, for me, the lack of free trials, and that is not changing); and I haven't switched devices in a long time (though that might change in the next few months)…

So I'll try it out. After one last sync, I will update iTunes later today and see if I miss anything. Maybe iOS 11 will help, maybe it won't, maybe Apple will improve the experience (I wouldn't hold my breath: the iOS 11 feature set should be final by now).


But as with yesterday's post, my biggest worry is for historical preservation. What happens, in the long run, to apps that are no longer being sold in the iOS App Store? Will they only remain on the devices where they were bought, with no way to transfer them to a different device? But I fear this is the last of Apple's worries…

What benefits does iOS 11 get from being 64-bit only?

By now you have proooooooooobably heard that iOS 11 will consider apps that still have not been ported to 64-bit mode as obsolete. In practice, by refusing to run them.

Now this post is not about how to port to 64-bit (I mean, if that is your concern, Apple has been encouraging you to do so for years now…), but rather about why Apple did this. Why obsolete perfectly good 32-bit code and apps? I do not have all the answers, but I have a few. Let us first see why 64-bit is the better choice if we have to choose between the two, and why Apple chose not to maintain both.

Why is 64-bit only better than 32-bit only?

That one is an open-and-shut case: in this earlier post I already presented how the then-new iPhone 5S 64-bit environment was an overall benefit, and the benefits have only grown since then (as I wrote: "native 64-bit math is a plus for some specialized tasks and the future"), so there really is no question. If Apple had to drop one, it had to be the 32-bit environment.

Why not both?

hardware savings

In theory, Apple could save silicon area on their post-iOS 11 hardware designs (iPhone 8, 8+, and X) by omitting the parts of their processor design that serve only for ARM/A32 mode execution (which they have the power to do; remember, they design their own ARM CPUs now). Indeed, while the cleanup from ARM/A32 to ARM64 was not nearly as dramatic as from x86 to x86-64, some instructions and instruction semantics were dropped, though how much this could save in terms of execution units is way beyond my expertise; more important, probably, are the savings in the instruction decode circuits: not only is there no need to support Thumb, but the instruction formats were completely overhauled between ARM/A32 and ARM64, the former being quite convoluted (plenty of non-uniform formats, one-off cases, and split fields).

In practice, I wonder if this is worth the trouble. I think ARM processors are meant to start up in 32-bit mode before being switched to 64-bit anyway, and there may be additional compatibility constraints (e.g. with drivers or hypervisors). Even if they did take advantage of this, it is not the main driver.

software savings

That is where the real savings are. Through the equivalent of app thinning, Apple could already eliminate the 32-bit parts from their kernel and built-in applications, but they would still have had to provide the 32-bit slice of the library stack (everything from libSystem to UIKit) so that 32-bit apps could keep running. And that does take up some space on your iPhone or iPad storage (which I have not measured, to be honest)… but more importantly, this slice would take up space in RAM, next to its 64-bit equivalent (always present, since built-in apps use it), as soon as, and for as long as, any 32-bit app was running.

This is the message that Apple had already been not-so-subtly sending users when it warned that running 32-bit apps would slow down the device: iOS devices have traditionally been quite RAM-constrained, and even if that has eased a bit in recent years, any RAM savings are worth taking: they allow more tabs to remain active without having to be reloaded, more apps to remain frozen and merely be (quickly) thawed instead of having to be relaunched, etc., improving the overall experience. And so keeping the 32-bit library stack loaded in RAM on most iOS devices, right next to the 64-bit library stack, was starting to look like a waste of precious resources.

Was it worth it?

Heck if I know. I do not think I will be affected much, judging by the apps I own, but I am always worried about such obsolescence, especially from a digital preservation perspective. That being said, for the purposes of preserving such history it is best to rely on a historical device (such as one that can't be updated to iOS 11), because there are many other reasons why historical iOS software just stops running anyway. I keep my old iPhone 3GS for that purpose, and it is already loaded with a number of apps that simply don't run any more on my iPhone 5S running iOS 10.

WWDC 2017 Keynote not-quite-live tweeting

(Times are GMT-7 and their timestamps correspond to a real-time, though not live, viewing of the 2017 WWDC Keynote and Platforms State of the Union)

Apple to phase out usage of Imagination Technologies GPU in iOS devices

Big news dropped recently: via Daring Fireball, we learn that Apple has notified Imagination Technologies that it will no longer be using their products in new iPhone, iPad or iPod Touch designs within a 15- to 24-month timeframe.

For some time already, the GPU has been the biggest driver and bottleneck of iOS performance, if not since the beginning, at least starting with the iPad and Retina devices, compounded when iPads became Retina themselves: iOS SoCs have been characterized for some time as bandwidth monsters (relative to other mobile devices), with most of that bandwidth connected to the GPU so that it can feed the screen pixels. It is the GPU which is mostly responsible for scrolling smoothness, for the number of layers you can have on screen before performance takes a dive, for the performance of games, etc. Improvements in CPU performance, comparatively, improve the iOS experience much less (in the browser, mostly). If you've been curious enough to look at die shots of iPhone SoCs, for instance here for the iPhone 7, you know the GPU can take as much space as the multiple CPU cores combined, and for iPads a truly outrageous amount of silicon area is taken up by the GPU alone. And you are more than aware of Apple's reliance on graphical effects (not just partial transparency, but also now translucency, blurs, etc.) in the iOS interface, all of which are generated by the GPU. So the GPU in iPhones and iPads is of strategic importance.

If you need a refresher, Apple has been using PowerVR GPUs from Imagination ever since the original iPhone. More than that, though, it is the only outside technology (and a significant one, at that) that is, and has always been, an explicit dependency for iOS apps: readers of this blog don't need to be reminded of Apple's insistence on owning every single aspect of the iOS platform (if you missed the previous episodes, most of it is in my iPhone shenanigans category) so as not to let anyone (Microsoft, Adobe, whoever) get leverage over them, but graphical technologies have been a notable exception, being more than mere software. For instance, while Apple uses OpenGL ES, and now Metal, to abstract away the GPU, a number of PowerVR-specific extensions have always been available, and Apple encouraged their use. Even if Apple has recently tried to wean developers away from these extensions, and stopped advertising the GPUs to developers as PowerVR products (starting with the A7/iPhone 5S, if I recall correctly), iDevices are still using Imagination products, and PVRTC (PowerVR Texture Compression) textures are still a common sight in the bundles of iOS games and other apps, for instance.

So the first challenge here is the dependency on these extensions. I don't see Apple getting developers to make such a transition so quickly, especially as the first devices without Imagination tech are going to be available 12 months before the deadline (the iOS product lines have become too complex to perform the hardware transition all at once), which would leave developers 3 to 12 months to transition… So most likely, Apple is going to have to keep supporting those, and this is going to expose them to intellectual property issues (patents or otherwise). Besides the extensions developers explicitly use, there are all the performance characteristics and tradeoffs specific to PowerVR that iOS games have unwittingly become dependent upon (e.g. whether to use complex geometry or compensate with shaders, how to best obtain certain effects, etc.), which Apple would have to reproduce as best they can, or at least not regress on, in a new GPU.

And even if they started from a blank slate when it comes to third-party software, Apple has many technological challenges to overcome. Much like audio and video codecs, graphics processing technologies are patented to the hilt; but contrary to audio/video codecs, there is no FRAND licensing, no patent pool, no single licensing counter for GPU tech; instead, existing GPU companies live in an uneasy truce, given that they are all exposed to each other's patents. And mobile GPUs are a particular breed within this universe, with techniques adapted to living within such constraints, like Tile-Based Deferred Rendering (present in all PowerVR GPUs). Apple has managed to build its own CPU with great success, so I have little doubt that they will manage to develop their own GPU, especially given their expertise in SoC design as well. But I also see patent royalty payments in Apple's future.

So what does this mean for iOS developers? For now, nothing. There is nothing to justify scrambling to remove any PowerVR dependency at this point, and it's pointless to second-guess the performance characteristics of these future Apple GPUs. Best to wait for Apple to come forward. But there is some transition ahead, because at least some long-held assumptions about how iPhone graphics work are going to be challenged when the new Apple GPU eventually appears. If anything, I'm surprised that such a glaring external dependency in the iOS platform managed to remain for so long, and it will be interesting to see how this plays out and how Apple manages any necessary transition.

See also: Ryan Smith's take at AnandTech, the reference on this subject.

APFS’s “Bag of Bytes” Filenames (Michael Tsai – Blog)

I have sooooooooo many questions. I mean, first I have the same ones as Michael, but on top of that:

  • “bag of bytes”, but I hope at least that the file name, even if not normalized, is guaranteed to be valid UTF-8, right? Right? Right?
  • In some circumstances, it is possible for the user to type the beginning of a file name to select or at least winnow the file selection; is there going to be guidance on how to perform this?
  • Sorting file names for display. Oh, the fun we shall have with sorting. Again, will guidance/a standard function be provided?
  • Normally this should result in fewer issues for software that writes a file name using any valid UTF-8 string, and then expects a file with that exact name to appear in the directory listing, as that will at least be the case more often (I must admit I don't fully understand the issue that led to the Apple response in the first place, though I understand the Apple response even less). However, when performing manipulations with NSString/NSURL/Swift String, do those preserve composition enough that developers can rely on them for that? (A small sketch of the round-trip concern follows this list.)
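
To make the concern concrete, here is a minimal JavaScript sketch of what "bag of bytes" means for code that writes a name in one normalization and later looks for it in a listing (the findEntry helper and the listing itself are hypothetical, purely for illustration):

    // "café" composed (NFC) vs decomposed (NFD): the same text to a human,
    // but different byte sequences to a "bag of bytes" file system.
    const composed = "caf\u00E9";      // é as a single code point
    const decomposed = "cafe\u0301";   // e followed by a combining acute accent

    console.log(composed === decomposed);                                    // false
    console.log(composed.normalize("NFC") === decomposed.normalize("NFC"));  // true

    // On a store that only matches exact bytes, a lookup with the "other"
    // normalization fails; comparing under a single normalization form is
    // one way for application code to protect itself.
    function findEntry(listing, name) {
      return listing.find(entry => entry.normalize("NFC") === name.normalize("NFC"));
    }

    console.log(findEntry([decomposed], composed));  // found despite the byte-level mismatch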

Now, granted, I know two people this will make happy (or, OK, less unhappy)…

EDIT: One additional data point: in a similar situation, even Apple doesn't get it right (coincidentally, fixed in Safari 10.1 and iOS 10.3). Let me tell you, this issue was a bear to isolate.

I admit:

  • I have no idea where this was in Safari, though it is safe to say Apple has responsibility for that code,
  • Safari is already compensating for invalid data (the URL should be properly escaped in the first place), and
  • this is when using HTTP, not the filesystem.

Nevertheless, this shows that Apple themselves sometimes get it wrong and normalize strings in a way that causes issues because the underlying namespace has a dumb byte string as its key. So if they can get it wrong, then third-party developers will need all the help they can get to get it right.

EDIT: New info: there will be a case-insensitive variant for the Mac, which will also behave differently with respect to normalization.

I think “normalization-preserving, but not normalization-sensitive” means that (like HFS+ on the Mac, unlike APFS on iOS) you cannot have multiple files whose names differ only in normalization. And you can look up a file using the “wrong” normalization and still find it. Additionally, beyond what HFS+ offers, if you create a file and then read the directory contents, you’ll see the filename listed using the same normalization that you used.

This is my interpretation as well.

Curtain update

I took advantage of the recent update to JPS to experiment a bit with Curtain. I significantly retooled it towards one goal: separate the generation of the deployment package from the deployment itself.

While the initial version of Curtain benefitted from many influences, one I completely forgot to take into account was Alex Papadimoulis' teachings, more specifically those about release management and database changes. Especially the commandments that builds be immutable and that what gets deployed to production be the same thing that was deployed to the earlier environments.

When I recently re-read those two articles for inspiration at work, I thought: “Uh, oh.”

Indeed, with Curtain the deployment process is not only a function of the revision that we ultimately want there, but also of what was previously there, in order to support proper rollover of resources (itself necessary because of offline support). And as originally designed, Curtain would just adapt its deployment to whatever was previously there, which means that if I wasn't careful and did not double-check that staging had been properly rolled back to what is present in production (and let's admit it, we've all skipped that double check at some point), then the Curtain deployment to staging would not be representative of the eventual deployment to production. Oops.

So Curtain has been updated so that, rather than perform the deployment itself, it instead generates a package containing the generated files; this package doubles as a Python script which, when invoked, performs all the deployment steps against the target of choice. The script itself is dumb and makes no decisions, such that it can be invoked multiple times and always perform the same job, but it also checks, before operating, that the data previously present corresponds to the expectations it was generated with. That way, we can use the same script multiple times, once on staging and once on production, and be certain that the two deployments will be the same. And Alex will be happy.
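
Curtain's actual package is a Python script, but the shape of the idea can be sketched as follows (in JavaScript, with entirely made-up names and data; this is not Curtain's real code):

    // The generated package embeds both the expected prior state of the target
    // and the new files to write. It makes no decisions, so running it against
    // staging and then production does exactly the same thing, or refuses to
    // run if the target does not match what it was generated against.
    const EXPECTED_PRIOR = { "assets/style-v41.css": "digest-of-v41" };
    const FILES_TO_DEPLOY = { "assets/style-v42.css": "body { color: #333; }" };

    function deploy(target) {
      // `target` stands in for whatever storage the real script talks to:
      // here, a map from file name to { contents, digest }.
      for (const [name, digest] of Object.entries(EXPECTED_PRIOR)) {
        if (!target[name] || target[name].digest !== digest) {
          throw new Error("target does not match the expected prior state: " + name);
        }
      }
      for (const [name, contents] of Object.entries(FILES_TO_DEPLOY)) {
        target[name] = { contents, digest: "digest-of-v42" };
      }
    }

    // Same package, run once on staging and later once on production.
    const staging = { "assets/style-v41.css": { contents: "/* v41 */", digest: "digest-of-v41" } };
    deploy(staging);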

One more thing. In my initial post, I also completely forgot to mention another influence: Deployinator. Many aspects of Curtain come from Deployinator: deployment as a single operation, deploying assets as a layer separate from code, and versioning these assets as part of the URL, etc. The lessons from Deployinator were so obvious to me that it did not even occur to me to mention where they came from. That omission has now been repaired.

Simple File Cache: improve the performance of FileReader in the browser

When was the last time you obtained a 10x (ten times, 1000%) performance gain with a single improvement?

Not recently, I bet. Most optimizations work incrementally, eking out 3% here, 2% there, and only achieve an observable effect by iterating many such optimization steps. Even algorithmic improvements, such as replacing an O(n²) algorithm by an O(n·log(n)) one, typically get you on the order of 3 or 4 times performance improvement, at least on the data sizes in typical use at the time the improvement is made. So let me tell you how I improved performance of JPS, my web app to apply IPS patches, tenfold.

Once upon a time…

Soon after the initial public version of JPS, I started working on support for another format that (among other processing) requires the CRC32 of the whole file to be obtained, which is best done in blocks of, say, 1024 bytes rather than by reading from the file byte by byte. Given my prior experiences, I dreaded the performance penalty of having to (re-)visit every single byte of the file, but it turned out to perform surprisingly well. Why couldn't I get the same performance when processing IPS files?
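
For reference, reading a Blob in fixed-size chunks with FileReader looks something like this (a simplified sketch: the chunk size and the processChunk and done callbacks are placeholders, and error handling is omitted):

    // Read `file` (a Blob/File) in fixed-size chunks, feeding each chunk to
    // `processChunk` (e.g. a CRC32 update function), then call `done`.
    function readInChunks(file, chunkSize, processChunk, done) {
      let offset = 0;
      const reader = new FileReader();
      reader.onload = () => {
        processChunk(new Uint8Array(reader.result));
        offset += chunkSize;
        if (offset < file.size) readNext();
        else done();
      };
      function readNext() {
        reader.readAsArrayBuffer(file.slice(offset, Math.min(offset + chunkSize, file.size)));
      }
      readNext();
    }

Each chunk costs one asynchronous FileReader round trip, which is exactly the per-call overhead discussed at the end of this post; the point of reading in blocks is to pay it once per 1024 bytes rather than once per byte.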

So, as a proof of concept, I started developing a layer that would read from the file in blocks of 4096 bytes, then serve read requests from the loaded data whenever possible, entirely in JavaScript. In other words, a cache. Writing a cache is something you always end up learning in any Computer Science curriculum, and you always wonder why, given that it seems so simple and obvious that it need not be taught, and at the same time is something the platform will provide anyway (especially as modern caches tend to be very complex beasts, what with replacement policies, cache invalidation, and so forth). And Mac OS X, on which I develop, already aggressively caches filesystem reads at every level. Writing my own cache for file reads seemed too obvious to be something worth doing.

Photo of the Mont Blanc, lit by the sunset

From now on, my longer posts will have random photos from my various trips inserted to serve as breathers. This is the Mont Blanc, lit by the sunset.

As a way to test this anyway, I wrote the dumbest file cache you could possibly imagine: there is only one cache bucket, and it can only be loaded from whole, block-aligned ranges in the file, with the result that a number of requests, e.g. those that cross block-aligned boundaries, or those that read from the remainder of the file that can't form a whole block, have to sidestep the cache and be served from the file separately. Furthermore, JavaScript Blobs are supposed to be immutable, so I did not need to worry about invalidating my cache when the underlying storage changed. Even then, this was not a trivial thing: the asynchronous nature of the browser file reading API meant the cache had to provide an asynchronous API itself and maintain a "todo list" of read operations being processed.
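
In outline, the single-bucket cache looks like this (a simplified sketch with illustrative names, not the code that actually shipped):

    // One cache bucket: the most recently loaded block-aligned block of the file.
    const BLOCK_SIZE = 4096;

    function SimpleFileCache(file) {
      this.file = file;
      this.bucketStart = -1;   // file offset of the cached block, -1 if empty
      this.bucket = null;      // Uint8Array holding the cached bytes
    }

    // Only requests that fall entirely inside one whole block can use the cache;
    // everything else sidesteps it and goes straight to FileReader.
    SimpleFileCache.prototype.read = function (start, length, callback) {
      const blockStart = Math.floor(start / BLOCK_SIZE) * BLOCK_SIZE;
      const crossesBlocks = start + length > blockStart + BLOCK_SIZE;
      const pastWholeBlocks = blockStart + BLOCK_SIZE > this.file.size;
      if (crossesBlocks || pastWholeBlocks) {
        this.readUncached(start, length, callback);
      } else if (this.bucketStart === blockStart) {
        callback(this.bucket.subarray(start - blockStart, start - blockStart + length));
      } else {
        this.readUncached(blockStart, BLOCK_SIZE, (bytes) => {
          this.bucketStart = blockStart;
          this.bucket = bytes;
          callback(bytes.subarray(start - blockStart, start - blockStart + length));
        });
      }
    };

    SimpleFileCache.prototype.readUncached = function (start, length, callback) {
      const reader = new FileReader();
      reader.onload = () => callback(new Uint8Array(reader.result));
      reader.readAsArrayBuffer(this.file.slice(start, start + length));
    };

Note that in this naive form a cache hit calls back synchronously while a miss calls back asynchronously through FileReader; that asymmetry, and the "todo list" it forces on the real implementation, is what the rest of this post is about.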

And now I turn on the cache, and measure the performance improvement… and files that used to take Chrome 50 seconds to process now take 5 seconds! (Chrome being my reference browser for development of JPS). And the 10x factor is consistent, applying over various source files, often turning the processing time into “too short to measure”, and over various platforms: the same files which took around 200 seconds on Chrome for Android now take 20 (and the behavior of desktop Chrome on Windows was the same as on Mac OS X). Similar improvements could be observed with desktop Firefox, with processing times going from 20 seconds to 2 seconds.

Wow.

I reported these findings on the Chromium discussion forums (Chrome being the worst offender), because surely that meant something was wrong with Chrome somewhere. However, not much came out of it, so I decided to productize the cache so as to deploy these performance improvements in production.

From proof of concept to production-worthy code

The proof of concept assumed that, for every read operation except the first, it could just append the new read request from the client to its todo list, and that once control bubbled back up to the cache code, the request could be served there if it was in cache. That worked in most cases, at least enough to get performance measurements; but in some cases, a new request would be logged from code that was not called from a callback from our cache, so control would never bubble back up to our code, the request would never be served, and the pump would stall.

Photo of a young ibex

A young ibex.

Easy enough, I thought: I will get rid of the todo list and instead always defer processing by calling setTimeout(…, 0).

That worked.

But it was slow. Even slower than without the cache.

Turns out, the overhead of calling setTimeout(…, 0) and getting called back by it was killing this solution. What to do, what to do, what to do? Back to the drawing board, I came up with the solution: reinstate the todo list, and use it, but only if we can tell for sure that we are within code that is being called by cache code (which entails keeping track of that information); if we are not within code that is being called by cache code, only then use setTimeout(…, 0). That managed to both work in all cases and perform well.
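
The dispatch logic boils down to something like this (again a simplified sketch with made-up names; the real code also has to handle aborts and other bookkeeping):

    // True while client code is executing inside a callback issued by the cache;
    // only then is it safe to just queue the request, because the cache will
    // drain the todo list once that callback returns.
    let insideCacheCallback = false;
    const todoList = [];

    function scheduleRequest(request) {
      if (insideCacheCallback) {
        todoList.push(request);                          // fast path: no timer round trip
      } else {
        setTimeout(() => processRequest(request), 0);    // slower, but always safe
      }
    }

    function invokeClientCallback(callback, data) {
      insideCacheCallback = true;
      try {
        callback(data);                // client code may call scheduleRequest() here
      } finally {
        insideCacheCallback = false;
      }
      while (todoList.length > 0) {    // drain whatever the client just queued
        processRequest(todoList.shift());
      }
    }

    function processRequest(request) {
      // Serve from the cache bucket or fall back to FileReader, then call the
      // client back through invokeClientCallback(...).
    }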

And then I also had to support aborting requests, add a number of unit tests, and fix a few bugs… and then it was done.

Photo of the Grandes Jorasses

The Grandes Jorasses.

What have we learned?

  • Don’t diss CS or the CS curriculum. You never know when what you learn there might turn out to be useful.
  • Sometimes the obvious solution is the right one.
  • The source of slowness isn't reading files per se, but rather the shocking overhead of calling a Web API and getting called back by it (whether it be FileReader or setTimeout(…, 0)), which by my estimates is around 2 ms for each such operation with Chrome on a modern desktop machine. This is crazy. Other browsers (with the exception of Internet Explorer/Edge, which I have not been able to test) fare better, but still have enough overhead that you have to wonder what is going on in there. (A rough way to measure this for yourself is sketched below.)
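
If you want to get a feel for this per-call overhead yourself, here is a crude measurement you can paste into a browser console (not a rigorous benchmark; it simply reads the same tiny slice of a Blob over and over and averages the round-trip time):

    // Rough estimate of the per-call FileReader overhead.
    function measureFileReaderOverhead(iterations, done) {
      const blob = new Blob([new Uint8Array(1024)]);
      const start = performance.now();
      let remaining = iterations;
      (function next() {
        const reader = new FileReader();
        reader.onload = () => {
          if (--remaining > 0) next();
          else done((performance.now() - start) / iterations);
        };
        reader.readAsArrayBuffer(blob.slice(0, 1));
      })();
    }

    measureFileReaderOverhead(200, (ms) => console.log("~" + ms.toFixed(2) + " ms per FileReader round trip"));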

Get the code

I set up a specific project for the cache code: you can get the code on BitBucket, and I have also published it on NPM as simple-file-cache. It is free to use and modify (under the terms of the BSD license). If you find it useful, however, I ask that you consider donating to the ACLU and the UNHCR.


P.S.: While I've got your attention, I'm happy to report that JPS will soon support Safari, as this browser is finally about to get support for the download attribute and for downloading blobs, expected as part of Safari 10.1, which is meant to arrive with Mac OS X 10.12.4. Being usable on a stock install of Mac OS X will be a huge milestone for JPS, and for the viability of web apps in general as a way to circumvent Developer ID and Gatekeeper.

What the Joy of Tech means to me

I have a confession to make.

At the risk of ruining my credibility both as a webcomic specialist and as a long-time member of the Mac community, I have to admit I only discovered the Joy of Tech through this Foxtrot strip paying homage to webcomics. "The Ecstasy of Tech? I get the others, but what could this possibly refer to?"

Of course, I found out soon enough, and I've been following the Joy of Tech ever since. Through the various news stories around Apple, or around the tech industry in general, or even completely unrelated matters, they are there to bring a little joy to our lives. I used one of their comics once while paying homage to Fake Steve, and while I also wrote that they come close to, but aren't, the Penny Arcade of Apple and the tech industry, that is only because, in my opinion, Nitrozac and Snaggy are too damn nice for their work to be considered satirical…

Nevertheless, the Joy of Tech performs a duty that I haven't seen anyone else fulfill: topical humor on Apple and the tech industry in general. On that front, they are pretty much the only game in town, which means they get drafted (with little attribution…) whenever the media is looking for humorous commentary, especially of the graphical kind, on tech events. Take, for instance, the time France 5 (a French public TV channel) used them to illustrate the acquisition of Instagram (I was the one who tipped off Nitrozac and Snaggy about it). And yet, whenever I stroll down memory lane and browse old JoT strips, they hold up remarkably better than, say, Penny Arcade strips do; to me, that is because they are not just about the immediate event at hand, but more generally tell us things about ourselves as Mac and tech aficionados.

Nitrozac and Snaggy are at a difficult time in their careers. If you’ve ever appreciated what they do, consider contributing to their Patreon or otherwise supporting them in some fashion. Thank you.

Slight Pause

(Yes, as if I had published any post in the last three months, in the first place…)

If you’re looking for software development treatises or Apple nerdery, I’m afraid I haven’t been able to focus on writing on these matters recently. I do have posts in the pipeline, but I don’t know when I will manage to publish them.

Meanwhile, the action is happening elsewhere.

In-app purchases are in need of reform

The common wisdom about Apple, especially when it comes to explaining the unusual and apparently limiting ways in which they introduce features, is that, to better serve the user, they introduce features that solve the user need in a specific way for each task, instead of providing a generic, unrestricted feature that may not provide an optimal user experience.

At least, that’s how I have seen it expressed, e.g. in one Jesper post:

I am more out of my depth here, but just applying the output to what we know of the process, I think the iOS group sees files as something you are under pressure to manage. In particular, it sees files for everything as a generic solution, and by applying Apple philosophy, it thinks that most of the problems that can be solved using files and applications are instead better solved in a task-specific way for each task.

(a post which you may remember from our exchanges on the lack of a document filing system on iOS)

This applies very well to iOS multitasking too: instead of just allowing apps to run unconditionally in the background, Apple provided ways to fulfill (practically) each user need in a specific way, and granted background execution privileges commensurate with the need: frozen with no background execution in the general case, background execution for a limited time in the "complete a task" case (to e.g. finish an upload), background execution only as long as audio is playing in the "play audio while doing something else" case, etc. It is a list which Apple expanded a few years later with new specific privileges, which shows a willingness to revisit initial restrictions.

So I have to wonder why Apple is not applying this principle to in-app purchases. Currently, it is a generic feature that does not provide an optimal user experience for a variety of user needs:

  • digital content purchases (ebooks, comics, etc.)
  • apps that are downloaded for free with limited features for trial purposes, with a one-time fee to buy the app and get the full functionality (known to old-timers like me as the shareware model)
  • games with a base scenario supplemented by substantial expansions (think StarCraft/StarCraft Brood War)
  • games with more discrete, non-recurring downloadable content (extra weapons, extra maps, etc.)
  • apps with extra functionality obtainable through in-app purchase
  • coin-operated games, or games with consumables (ammunition, smurfberries, boosters, gems, etc.)

Yes, the purchase experience per se is optimized for each user need, by virtue of each app managing that experience entirely; where this is not the case is in the other places where in-app purchases have an impact, such as the top grossing list. In particular, the information in the iOS App Store about the presence of in-app purchases, and about how many and how expensive they are, is a completely generic solution to many specific problems, and in a way which is not very transparent, to say the least.

This results in warped incentives for app developers, which you probably know about already, since Apple has gotten into hot water in the press over them, especially the matter of children buying smurfberries amounting to hundreds of dollars or more (which they were able to do during the window, started by the initial purchase, in which the Apple ID password is not prompted for). Apple has fixed the most egregious issues, for instance by having separate timers for the initial iOS App Store download and for in-app purchases, but the fundamental incentive of appearing as an ordinary game, then tempting the user with "boosters" to get him out of a bind, or even possibly getting him addicted to these boosters, remains1.

Apple has more recently improved the situation, by changing the language when obtaining free apps (the button now reads "Get" rather than "Free"), including those with in-app purchases, and by featuring games that you "Pay Once and Play", i.e. without in-app purchases. While this is a step in the right direction, it is far from sufficient, as it excludes games like Monument Valley that feature a single, consistent expansion, and everyone (Apple included, since they featured Monument Valley in the WWDC intro video) wants to encourage apps like Monument Valley.

What can be done?

So what can be done? I think the most important thing is not to prohibit anything outright, because there may always be a legitimate use for a particular in-app purchase pattern. For instance, long ago, way before there even was an iPhone, I remember reading an article bemoaning that arcade games (back, you know, when arcade games mattered) were ported to consoles without any adaptation; that is, where the arcade version would prompt for a quarter after a game over, the port would simply allow unlimited continues, which sometimes made it absurdly easier. And the article imagined potential solutions, one of which was a system by which the player on his home console would actually pay 25¢ whenever he continued that way, the money being wired somehow to the game publisher; that sounded completely outlandish at the time. Not so outlandish now, eh?

But whatever is allowed, what matters is that the user is properly informed when he installs the app.

So the solution I propose is to keep in-app purchases as the common infrastructure behind the scenes, but for the iOS App Store to present each app in a specific way for each use case:

  • First, of course, apps (free or paid) without any in-app purchase, featured as they are currently.

  • Then, apps that you can try before buying. Those would be listed among paid apps, with a price tag that is the unlock price, but with a mention that you can try them out for free; and those would have two buttons rather than “Get”: something like “Try for free” and “Buy outright”, so that you could save yourself the trouble of going through the in-app purchase process if you know the app already and know you need it.

  • Then we would have apps, typically games, with a discrete and limited number of "tiers". They would be listed among paid apps with a price tag which is the first tier; and on the page for the app, the tiers would be shown in a clear way (instead of the current meaningless ranking of in-app purchases), e.g. as a series of "expansion" elements which visually combine, each with its name and price, as in:

    /-------------------\--------------\
    | StarCraft          \ Brood War    \
    | $20                / +$10         /
    \-------------------/--------------/
    

    or even:

    /-------------------\------------------\-------------------------\------------\----------------------\
    | World of Warcraft  \ Burning Crusade  \ Wrath of the Lich King  \ Cataclysm  \ Hey why not Narnia?  \
    | $30                / +$15             / +$15                    / +$15       / +$15                 /
    \-------------------/------------------/-------------------------/------------/----------------------/
    

    (Maybe with a shape that less suggests an arrow, but you get the drift)

  • Then apps that have unlockable features in a more complicated structure, but with no "ammunition" in-app purchases (what Apple refers to as "Consumable" in-app purchases). Those would just have their initial price, then a ranking of these in-app purchases close to what is done currently, but a "maximum cost", which is the price of obtaining all of them, would also be shown as an indication.

  • Then apps with content in-app purchases, such as Comixology before it removed them. For those, there would be no such “maximum cost”, because no one is going to buy the whole catalog.

  • And lastly, apps that do have "ammunition" in-app purchases. These would be listed with a special price tag mentioning no specific cost, and the page for the app would have the button say, not "Free", not "Get", but "Install coin-operated machine" or some such wording that makes it clear you would be inviting onto your device a box that belongs to the app developer, with a slot that takes money and sends it directly to them, because that is what these apps are. Such a decision wouldn't be popular with many app developers, but Apple has shown itself willing to take decisions that don't sit well with developers when it sincerely thinks it is acting for the benefit of the consumer, as shown for instance by the fact that Apple still doesn't allow paid upgrades.

  • And we would also have apps using recurring subscriptions, about which I don’t have much of an opinion so far.

Building on these distinctions, more changes would be possible: for instance, there could be separate top grossing lists, one for each category, which would keep legitimate hits in the first categories from being drowned out by the eternally grossing coin-operated machines of the App Store.

There you have it. At any rate, even if there could be completely different ways to go about it, this is certainly an area of the iOS App Store that could use some improvement (Mr. Schiller, if you're listening…), having barely changed for so long and showing none of Apple's apparent philosophy of "let's replace this confusing, generic solution by a number of specific solutions designed for each task".


  1. In fact, given the similarities with gambling, I can't rule out that these booster-laden games will eventually be regulated as such.