RIP, QuickTime for Windows

As you may have heard, Apple will no longer provide fixes for QuickTime for Windows, not even for two disclosed security vulnerabilities (this post doubles as a PSA: if for some reason you have QuickTime for Windows, uninstall it now). I wonder why anyone refers to QuickTime for Windows as being deprecated, since deprecated technologies don’t receive updates or fixes except for critical issues: the correct term for the no-fixes-at-all situation is unsupported; for all intents and purposes, QuickTime for Windows is dead. And while this has been coming for some time, that doesn’t make the news any less sad; so today, let us remember QuickTime for Windows.

While I think it existed earlier in some form, the real beginning for QuickTime for Windows was QuickTime 3.0, which had feature parity with the Mac OS version — imagine that! I know little about how it fared at the time, since my usage of Windows machines was limited; I only know that a number of game developers adopted it, eager for an acceptable media playback solution (e.g. for cutscenes): a number of games had you install QuickTime for Windows (bundled on the game CD) in order to run. QuickTime for Windows also came with an implementation of a subset of the Mac toolbox (though with some differences, e.g. in file name length), which helped with porting some Mac games to Windows.

Some of you might not really have known that time, so you will have to take my word for it: before YouTube in 2005-2006, there was no universal standard for distributing video online, and QuickTime with its browser plugin was the closest thing we had. So people posted videos in QuickTime format (e.g. this Apple switch ad campaign parody); this left Linux and Unix users out, and Windows users were a bit reluctant to install QuickTime, but it was miles better than any alternative such as Windows Media which, when it was supported at all on the Mac, was always incredibly crappy.

QuickTime also served, back then, as the basis for media playback in iTunes for Windows, which itself was the indispensable tool allowing anyone (not just Mac owners) to own an iPod, and later on an iPhone. For those purposes and many others, QuickTime for Windows carried the burden of making sure many Apple initiatives were at least viewable outside of Macs, playing no small part in keeping Apple relevant all these years. QuickTime for Windows was the symbol of Apple’s leadership in multimedia, and everything it enabled legitimized the Mac and Apple, even for die-hard Windows users, in a way that is impossible to overstate.

For instance, back when I worked at NXP Software, QuickTime Player was the standard test for determining whether a movie file was correctly formatted (among other reasons because we were working with 3GPP media files, whose format, like that of MPEG4 media files, was derived from the QuickTime movie format): if a file generated by our media recorder had an issue with QuickTime Player, which was necessarily on Windows (we did not use Macs, at least not before we developed iPhone apps), then there was a bug in our media recorder. This made for a fun investigation when I tried to understand a bug that turned out to actually be in QuickTime!

As far as users go, the average user now has a number of alternatives, starting with VLC, but a number of people working on Windows in media and media-related industries will miss having a reference media player on their machine (iTunes is just not the same thing). However, software developers who were still building against the QuickTime SDK and relying on QuickTime being installed on Windows should have seen it coming for some time: the writing has been on the wall for QuickTime for Windows since QuickTime X in 2009, when there was no corresponding update on the Windows side, which stayed on QuickTime 7. That said, I have not used Windows machines for media work for some time, and I missed the moment when iTunes for Windows became independent of QuickTime, so this nevertheless caught me a bit by surprise.

So long, QuickTime for Windows. We’ll miss you.

April Fools’ 2016

In case you missed it, for April Fools’ Day in 2016 I shuffled all my posts such that, at the URL for one post, you would find a completely different one: for instance, at the address http://wanderingcoder.net/2010/06/02/intro-neon/ you would find Apple’s Golden Path, at the address http://wanderingcoder.net/2010/06/21/golden-path/ you would find A few things iOS developers ought to know about the ARM architecture, etc. This mostly affected people visiting from search engines or who followed an external link, though it was not hard for them to then locate the post they were actually interested in; for people who visited from the front page, the only really visible effect was that my first post had rolled over and appeared as the latest.

I hope those of you who stumbled upon this appreciated it, and as always, thank you for reading Wandering Coder.

The Stela comics app

Stela is a new comics app for smartphones (iOS-only at the time of this writing), but it works nothing like, say, Comic Chameleon (which presents existing webcomics with a phone-adapted navigation) or Comixology (which presents comics you’d find in stores as digital products, with a phone-adapted navigation when not running on a tablet). Rather, once you use it, it becomes clear Stela’s purpose is to publish comics that embrace the 5 centimeters (that’s about 2 inches, for the metrically-challenged) width of today’s smartphone screens1.

These are comics that are native to that world: the panels are only as wide as the screen (nary a vertical gutter in sight) and can only extend vertically, but they can do so as much as desired because they are read by vertical scrolling. A panel may not necessarily fit on a screen (at least on an iPhone 5/5S/SE; I haven’t checked on the larger models)! An iPhone 5 screenful is a common size, but most of these comics have widely varying panel sizes, and at any rate have conversations, for instance, that extend over multiple screenfuls: they don’t follow a pattern of identically-sized pages. The result is a very fluid flow and a reading experience that is meant to be fast.

The essence of most iPhone apps since the beginning, as best seen for instance with Twitter clients, is a (potentially long) scrolling list of items (our friend the UITableView), with more or less drilldown or navigation between these lists. Stela is the comics embodiment of that2, and it’s very addictive.

The comics are updated chapter by chapter (chapters make for checkpoints as well); the economic model is that the first chapter of each story is free, and you can get a subscription (using Apple’s in-app subscription system) to read further. It is a single subscription global to the app, not per-series, so it works a bit like an anthology series. Comics are always loaded from the network, which bothers me a little: there is no way to preload while on WiFi to avoid eating into your phone data allotment, and no way to read at all if you are off the network. iPod Touches exist, you know.

The comics themselves are of good quality, and I enjoyed the series I read, though many are still developing their story (eagerly waiting for the next chapter of Crystal Fighters for instance) and it’s a bit early to tell how they will turn out.

Either way, whether you’re from my usual audience of iOS app developers, involved in comics, or neither, check it out: you’re bound to find some interesting lessons in this experiment in comics and app design.

~ Reactions ~

Over at Fleen, Gary Tyrrell cautions that, since it’s subscription-based, your access to the content will only last as long as you keep paying for it (I specifically allowed him to quote from this post as much as he wanted). It’s absolutely worth noting; maybe I’ve just become blasé about such things.


  1. The app works natively on iPad, but the comics are just scaled up, which makes for comically huge lettering.
  2. For instance, images are loaded dynamically and present a spinner if you scroll too fast before they have had time to load, as is traditional in iPhone apps: prioritize the flow, even if that means betraying some implementation realities.

Application Cache was fired for his douchebaggery

To all of you who enquired about the whereabouts of Application Cache, I regret that I have to inform you that he is no longer with our company. This was not an easy decision to take, but we believe it was the right one.

While it has been no secret for some time that Application Cache was a douchebag, this was not necessarily apparent at first. Application Cache promised so much, and we believed him because he could prove his claims to a large extent. However, his way of working was so much at odds with the way other web components work (especially long-time pillar of web infrastructure HTTP cache) that his core value proposition was harder to exploit than it should have been (with many unfortunate pitfalls, as Jake Archibald documented); and worse, his more advanced promises, while working in basic scenarios, had some ancillary troubles, which unexpectedly turned out to be intractable no matter how hard we tried, and so these promises never came to fruition.

Because he was useful despite the issues, we tried to work with him on these, with many counseling sessions with HR; however, Application Cache was adamant that this was his fundamental mode of operation and he could not work any other way, and that others would have to adapt to him. This, of course, was not remotely acceptable, but we could not find any way to make him change either, so little progress was made. There was some, as we did manage to make him more transparent; some claimed that made him no longer a douchebag, but in truth he remained one.

Still, we believed that it could be worth keeping him just for his core value proposition of using web apps while offline. But as time went on, it became clear that even that was not going to be worth the bother, again as a consequence of his fundamentally different way of working. Things came to a head when we tried to solve race conditions resulting from the possibility that a user loads the initial HTML page before the web app is updated, but its dependencies (including the manifest) after the web app is updated. The manifest has to be updated at the same URL (it acts as a fixed entry point of sorts for users who already have the web app in Application Cache), so we could not rely on the HTML pointing to a new manifest URL so that the update of the entry point would atomically result in the update of the web app. Even with the provision that the manifest be redownloaded after the entry point, and checked against the manifest downloaded before in the case of an app already in Application Cache (so as to try to have the manifest always loaded after the entry point, at least conceptually), we were stuck.

Some solutions were found, though limited to ideal situations; there was no solution available for the case of a serving infrastructure, such as content distribution networks, with only “eventually consistent” or other weak guarantees, and there was no solution either if even minimal use of FALLBACK: was required. Moreover, even in ideal situations those solutions place a lot of burden on the web developer, too much considering that offline web apps ought to work correctly in the face of these race conditions by default, or at least with minimal care. In the end, Application Cache was let go a few months ago.

If you were relying on the services provided by Application Cache, don’t worry. While there will be no future evolution (in particular, don’t expect bugs to get fixed), a new guy was hired to perform the tasks of Application Cache exactly as the latter did them. This new guy, Service Worker, will also provide a new service allowing web apps to work offline, this time in harmony with the other web components: for instance, out of the box he makes it possible to throttle checks for updated versions simply by setting a cache control header on the service worker (the period being a day at most); something which was exceedingly hard, if not impossible, with Application Cache due to his bad interactions with HTTP cache. He was already available in Chrome, and with the recently released Firefox 44, two independent, non-experimental implementations have now shipped, so you should take the time to make his acquaintance.
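
To make this concrete, here is a minimal sketch of what using him looks like (the /sw.js URL and the header value are illustrative, not from any particular deployment):

    // A minimal sketch of service worker registration; "/sw.js" is illustrative.
    // If the server sends that script with "Cache-Control: max-age=86400", the
    // browser may answer its update checks from HTTP cache for up to a day
    // (user agents cap the effect at 24 hours), which is the throttling
    // described above.
    if ("serviceWorker" in navigator) {
      navigator.serviceWorker.register("/sw.js").then(function (registration) {
        console.log("service worker registered with scope", registration.scope);
      }, function (error) {
        console.error("service worker registration failed:", error);
      });
    }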

New software release: JPS

JPS (which stands for JavaScript Patching System) is a web app that applies binary patches, currently IPS patches. Usage is simple enough: you provide the reference file and the patch file, and once patching is done you recover the patched file just as you would download a file from a server, except everything happens on your local machine. Moreover, JPS works while offline, thanks to Curtain, which was in fact developed for the needs of JPS.

JPS works on any reasonably recent version of Firefox or Chrome (both of which update automatically anyway), as well as any version of Opera starting with Opera 15. Unfortunately, some of the features used (download of locally-generated files in particular) are not universally supported yet, which means that, despite my efforts, Safari (rdar://problem/23550189, OpenRadar) and Internet Explorer are not supported; as a Safari user myself, I am bothered by this, but I could not find any way around the issue: you will have to wait for a version of Safari that supports the download attribute.
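
For reference, the capability in question can be feature-tested, and used, along these lines (a sketch with names of my choosing, not JPS’s actual code):

    // The download attribute is what lets a web app hand a locally-generated
    // Blob to the user as a file download; Safari and Internet Explorer of
    // the time lack it.
    function supportsDownloadAttribute() {
      return "download" in document.createElement("a");
    }

    // Offer a locally-built Blob as a download, no server round-trip involved.
    function offerDownload(blob, fileName) {
      var anchor = document.createElement("a");
      anchor.href = URL.createObjectURL(blob); // blob: URL to the local data
      anchor.download = fileName;              // download instead of navigating
      document.body.appendChild(anchor);       // Firefox wants it in the DOM
      anchor.click();
      document.body.removeChild(anchor);
      URL.revokeObjectURL(anchor.href);        // release the Blob reference
    }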

Some background…

My motivation for writing JPS came from two events.

Indeed, when I learned of Zelda Starring Zelda I wanted to play it (A+++ would play again, currently playing the second installment), but realized the IPS patcher I previously used no longer ran (it was built for PowerPC), and while I was able to download and use a different patcher, I thought there had to be a better way than each platform using a different program, a program itself susceptible to becoming unsupported in turn. And this joined my thoughts from the time Gatekeeper and Developer ID were announced, when I wondered whether we couldn’t circumvent this Apple restriction using web apps. So I decided I would develop a web app to apply IPS patches.

While most of the difficulties were encountered when developing the Curtain engine, the browser features used by JPS itself, namely client-side file manipulation and download, led to some challenges as well. One fun aspect was taking a format, IPS, which embeds many assumptions, some undocumented, about C-like manipulation APIs (e.g. writing to a mutable FILE*-like object, and performing automatic zero filling when writing past the end of file), and making it work using the functional Blob APIs, based on slicing and concatenation of arrays and immutable Blob objects. There were a few interesting surprises: for instance, early versions of JPS could, on some input files, cause Firefox to crash, taking down JPS and all the other Firefox tabs! Worse, resolving this required a significant rewrite of the patching engine, which led me to develop automated tests before performing this rewrite, to ensure that it would not regress in any way (it didn’t).
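
To illustrate the mismatch, here is what a single “write these bytes at this offset” operation, trivial with a FILE*, becomes with immutable Blobs (a minimal sketch with names of my choosing, not JPS’s actual internals):

    // Apply one IPS-style write to an immutable Blob: slice what precedes the
    // offset, zero-fill any gap past the current end of file (as fseek+fwrite
    // would), splice in the new bytes, and keep whatever follows the write.
    function writeAt(blob, offset, bytes) { // bytes is a Uint8Array
      var pieces = [blob.slice(0, Math.min(offset, blob.size))];
      if (offset > blob.size) {
        pieces.push(new Uint8Array(offset - blob.size)); // zero filling
      }
      pieces.push(bytes);
      var end = offset + bytes.length;
      if (end < blob.size) {
        pieces.push(blob.slice(end)); // preserve the tail
      }
      return new Blob(pieces); // a brand new Blob; the input is untouched
    }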

JPS has been extensively tested prior to this release; I myself tested about a hundred patches, with only one patch not working while running on Firefox (bug report), and it has been in open beta for some time without any other problematic patch having been reported.

The JPS source code is available under a BSD license; the source release contains all the needed code to deploy it with Curtain (which has to be downloaded separately), as well as test vectors for the IPS file format and a test harness to automatically test JPS using these files.

A few more words

While I would have liked to support Safari so that JPS could run out of the box on Mac OS X, I deem this proof of concept of a desktop-like web app to be good enough for at least a subset of desktop use cases; enough so for me to put the Gatekeeper and Developer ID concerns behind me. I can now reveal that, because of these concerns, I did not update to Mac OS X Mountain Lion or any later version until now; yes, up until yesterday I was still running Lion on my main machine.

Now that JPS and Curtain have been released, I can’t wait to see what will be done with this easy (well, OK, easier) way to develop small desktop-like tinkerer tools using the web!

Introducing Curtain

I’m excited to introduce Curtain to you today. Curtain is a packaging and deployment engine for desktop-like web apps; it handles the business of generating the app from source files and deploying it on the server such that it supports offline use.

Curtain can be cloned from BitBucket, and it has a sample app, both under the BSD license. Rather than repeating the Readme found there, I would like to provide some background here.

Some background…

Offline support

I wanted to use Application Cache for a project; as you know, Application Cache is a douchebag, but even that article did not prepare me for how much of a douchebag it is. In particular, you want web apps to be updatable, if only because the first version inevitably has bugs. Remember that, even if the list of files in the manifest does not change, the manifest has to change whenever the app changes, otherwise users won’t get the updated version. So how to update the manifest and app?
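
Before looking at the possible orderings, here is what such a manifest can look like (a minimal, hypothetical example; the version comment exists only so that the manifest bytes change whenever the app does):

    CACHE MANIFEST
    # version 42 -- must be bumped whenever any of the files below changes

    CACHE:
    index.html
    app.js
    style.css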

  • If the app is updated in this manner:

    1. manifest is updated
    2. remainder is updated

    or even if the two are updated at the same time, then you could run into the following scenario:

    1. user does not have the app in cache, and fetches the HTML resource
    2. manifest is updated
    3. remainder is updated
    4. due to a network hiccup on her side, user only now fetches the manifest

    Now the user has the manifest for the updated version, but is really running the previous version of the web app. Even if the list of cached files is still correct, now whenever the user agent checks for an updated manifest it will find it to be bit-for-bit identical, and the user agent will not update the version the user uses, which is out of date, until a second update occurs. This is obviously not acceptable, and if the list of cached files is incorrect for the version it will be even worse.

  • Now imagine the web app is updated in this manner:

    1. remainder is updated
    2. manifest is updated 30 seconds (one network timeout) later

    In this case, the scenario in the previous case cannot occur: if the user fetched the HTML resource prior to the update, the user agent will either succeed before the manifest is updated, or will give up at its network timeout. However, another scenario can now occur:

    1. remainder is updated
    2. user loads the app from the server (either initial install or because he still had a version prior to the one before the update), both app files and manifest
    3. manifest is updated

    In that case, the user has the updated app but the manifest for the previous version. Even if the list of cached files is correct, the versions are inconsistent, which is a problem if the new version turns out to have a showstopping issue (which sometimes only becomes apparent after public deployment, due to the enormous variety of user agents in the wild) and we decide to roll back to the previous version: in that case, whenever the user agent checks for an updated manifest, it will find it hasn’t changed, and the user will keep using the version of the app that has the showstopping issue. When performing the rollback, we could decide to modify the manifest so that it is different from both versions, but this is dangerous: when rolling back you want to deploy exactly what you deployed before, in order to avoid running into further issues. And I don’t need to tell you how problematic having an inconsistent app and manifest would be if the list of resources to cache changed during the update.

So how does Curtain solve this problem?

By updating the manifest twice:

  1. manifest is updated with intermediate contents
  2. remainder is updated
  3. manifest is updated again 30 seconds (one network timeout) later

If the list of resources to cache changes during the update, the manifest contains the union of the files needed by the previous version and the files needed by the updated version; and in all cases, the intermediate manifest contains in a comment two version numbers: the one for the app prior to the update, and the one for the app after the update. That way the manifest is suitable in both cases, and this method of updating avoids all the issues associated with the previous methods.

Of course, that would be tedious and error-prone to handle by hand, so Curtain generates both intermediate and updated manifests from a script.
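
Concretely, an intermediate manifest could look like this (a hypothetical example; the file names, version IDs, and comment format are illustrative):

    CACHE MANIFEST
    # app versions: 4f2a9c3e1b07 (before update), 8d15e6a0c4f2 (after update)

    CACHE:
    app-4f2a9c3e1b07.js
    app-8d15e6a0c4f2.js
    style-77b3d9f2a618.css

Being the union, it correctly describes the needed files whether the user agent ends up pairing it with the old or the new version of the entry point.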

Versioned resources

I enjoy reading Clients from Hell; even though I don’t design web sites for a living I relate strongly to these horror stories. Except for one kind: those where the client complains he should not have to clear the cache/do a hard reload/etc. to see the fully updated site. Sorry, but for those, I side completely and unquestioningly with the client. Even in a development iteration context, it is up to the developer to show he can change the site and have the changes propagate without the user needing to do anything more than a soft reload (which invalidates the initial HTML resource if necessary, but nothing else), because such changes will need to happen in a post-deployment context. And don’t get me started on the number of site redesigns where the previous versions of all assets (icons, previous/next arrows, etc.) are still visible, and the announcement post starts with the caveat that you may have to reload manually in order for the redesign to be fully in effect… and even then, it has to be done again on a second page, because the main page does not have a “next” arrow for instance.

Yes, clearly you want resources, and image assets in particular, to have far-future expirations in order to save on bandwidth. But this means they must also be immutable: they might disappear, but may never, ever change; and if a different resource is needed, then it must have a different URL. Period.

Obviously, changing the resource name by hand, especially if you need to do so for every development iteration, is tedious and error-prone. When I read in web development tutorials, including some Application Cache ones, the suggestion to use, say, script-v2.js and increment version numbers that way, I can’t help but think “Isn’t that the cutest thing? Thinking you can do so flawlessly without ever forgetting to do so whenever a resource changes? Awwww…” because that is a recipe for failure, even if you only change these resources as part of a deploy.

Such inconsistency issues are even worse for offline web apps. Indeed, if your web app cannot work offline, you can just assume that, if it works incorrectly because of an inconsistent set of resources, the user will just reload and eventually get consistent resources. But in the case of an offline web app, once the user is back at her new home, where DSL hasn’t been installed yet (I’m getting tired of the airplane example), she has no opportunity to reload.

Even worse, even if the user checked while she was online that the web app was working correctly (which is asking a lot of her already), it may in fact have been the previous version, reloaded from the cache, that she checked, while an inconsistently updated version was being downloaded behind the scenes; when she relaunches the app at home, she will get the inconsistent version. You can’t afford to be careless with offline web apps.

Curtain resolves this issue by relying on a version control system. On the build machine, all resources must be under a version-controlled work area; Curtain queries the version control system for the ID of the version where the resource was last updated, and generates a resource name by appending this version ID. Note that by doing it this way, Curtain avoids changing the URL of the resource (which would invalidate it in the cache), even if everything else has changed, as long as the resource itself hasn’t changed. Curtain processes your HTML to replace references to the resource by references to the versioned resource name, then uploads the result, along with the resources themselves under their versioned names on the server.
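
Concretely (with hypothetical file names and an illustrative version ID), the transformation amounts to this:

    <!-- in the source work area: -->
    <img src="img/logo.png">

    <!-- in the deployed HTML, after Curtain processing: -->
    <img src="img/logo-4f2a9c3e1b07.png">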

Curtain also assigns a version to the app as a whole; this is, in particular, put in a comment in the manifest (see above): this version is simply the current version of the version control system work area. Curtain itself must be in such a work area, so that if Curtain is updated but the source files are not, the version number still changes.

As part of these tasks, Curtain will manage the Cache-Control headers by synthesizing the necessary .htaccess file, which is especially important when using Application Cache; since it has to deal with .htaccess anyway, Curtain will also directly manage the MIME type of these resources, to avoid relying on the default Apache behavior (based on file extensions).
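
As an illustration, the synthesized file could contain directives along these lines (a hypothetical excerpt; the exact rules Curtain emits may differ):

    # Serve the manifest with its registered MIME type, and make HTTP caches
    # revalidate it on every use, as Application Cache requires.
    AddType text/cache-manifest .appcache
    <Files "app.appcache">
      Header set Cache-Control "no-cache, must-revalidate"
    </Files>

    # Versioned resources are immutable, so they can safely be far-expired.
    <FilesMatch "-[0-9a-f]{12}\.">
      Header set Cache-Control "max-age=31536000"
    </FilesMatch>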

No progressive rendering

I have always found progressive rendering to be unsightly. It was necessary in the first days of the web, what with images taking seconds to download; it is largely necessary on mobile to this day; and it is still desirable on desktop for online apps. But for offline, desktop-like web apps? No way.

Curtain opts out of progressive rendering by downloading all dependent resources through XMLHttpRequest and explicitly loading the content, for instance for image resources by generating a URL for the downloaded Blob and assigning it through code to the src attribute of the img tag; this means Curtain-deployed web apps depend on XHR2 and Blob as a XHR responseType. Curtain will hide the interface until all resources have been loaded and assigned, assuming that the user will retry loading the app if no interface appears after a time; it is safe to assume the user is online at that time, because if he is offline, this means all the files listed in the Application Cache manifest are locally available and so will not fail to load.
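
For an image, the loading code looks something like this (a sketch with names of my choosing, not Curtain’s literal code):

    // Download an image completely through XHR2, then point the img tag at the
    // resulting Blob: nothing is rendered until all the bytes are there.
    function loadImage(img, url, done) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url);
      xhr.responseType = "blob"; // XHR2: obtain the response as a Blob
      xhr.onload = function () {
        img.src = URL.createObjectURL(xhr.response); // replaces the HTTP URL
        done();
      };
      xhr.send();
    }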

If JavaScript is disabled or the browser does not have the necessary support for Curtain, we want to be able to show a message to that effect, and we want to do it in the context of the “usual” interface, so that the user recognizes the web app. So the entirety of the interface is put in a div belonging to a CSS class called curtain. A small bit of JavaScript code before the interface hides this div: if JavaScript is disabled, the interface simply won’t be hidden. Then code after the interface checks everything necessary for the Curtain runtime to perform its job (using Modernizr in particular); if not everything is available, the message is changed and the div is made visible.
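
The arrangement can be sketched as follows (hypothetical markup; only the curtain class name comes from Curtain itself, and window.Blob stands in for the full battery of checks):

    <script>
      /* Before the interface: this only runs when JavaScript is on, so with
         JavaScript off the div below stays visible, message included. */
      document.write('<style>.curtain { display: none; }</style>');
    </script>
    <div class="curtain">
      <!-- the entire interface, including the "unsupported" message -->
    </div>
    <script>
      /* After the interface: check the runtime requirements (Modernizr and
         friends in the real thing). */
      if (!window.Blob) {
        /* adjust the message here, then raise the curtain: */
        document.querySelector(".curtain").style.display = "block";
      }
    </script>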

The HTTP URLs of the images are put in the src attributes of the img tags in the initially downloaded HTML. However, this is only a provision for the above two error cases; in normal usage they will have been replaced by the blob URLs prior to the interface becoming visible.

Language

First, Curtain generates static sites, and does not depend on any server-side programming language or any kind of server processing. Second, while early versions of the build and upload script were written in shell, Curtain is written in Python so as to be as portable as possible (it was that or Perl; I chose Python), though I have not been able to test it on Windows or Linux yet.

Third, Curtain embeds a bit of JavaScript code along with your app, and it expects your app to be written in JavaScript. However, Curtain makes no pretense at bringing JavaScript framework features; you should be able to use it with any JavaScript framework, including Vanilla JS.

Stay tuned…

Stay tuned, because tomorrow I will present to you the sample app for Curtain, and its justification.

On the recent Apple top management adjustments

Michael Tsai summed up my thoughts exactly on the Schiller side of the announcements: I share his reservations about Schiller, but I indeed can’t complain when the various groups that interact with developers, whose lack of coordination I previously listed as being part of the problem, are now under a single top executive (with the exception of APIs, still the responsibility of Craig Federighi, of course).

So I thought I’d also mention the promotion to top leadership of Johny Srouji, a promotion which to me represents the rise of semiconductor engineering inside Apple. Far from shedding this skill (as it might have appeared to some at the time of the Intel transition), Apple doubled down on it: in the domain of system glue (chipsets), peripheral controllers, sensors, etc., but also going as far as designing its own processors, both at the RTL level (with the acquisition of PA Semi) and at the RTL-to-mask translation level (with the acquisition of Intrinsity); something Apple never did (as far as we know, anyway) for 68k, PowerPC, or any other processor they used prior to ARM. With impressive results, particularly now with the iPad Pro.

The Mac App Store and long-term app preservation

I am fortunate enough not to have apps on the Mac App Store, and I have bought few enough apps on it (for reasons I previously laid out) that I initially missed the meltdown, due to the store, of many apps bought there. This is not an outage, in the sense that an outage implies the user is aware on some level of being dependent on an online resource; this is worse. This is not just unacceptable: this is a fundamental violation of the trust that both app developers and customers have placed in Apple, namely that bought, installed and compatible apps would keep working (short of any dramatic action taken for consumer protection so that they would not, such as revoking the certificate of a malicious developer).

Worse, this has implications beyond the Mac App Store per se. As you know, Apple reserves for Mac App Store apps many APIs related, even remotely, to online services: even when there is a non-Mac App Store version of the app available, it cannot make use of iCloud (is there a typo version of “revealing tongue slips”? Because I initially typed ”iCould”…) or Apple Maps. So, in turn, how am I supposed to trust iCloud or Apple Maps, if I am not sure I can run any app that can access them? As if these services did not already have a reputation…

But even more troubling are the implications for long-term usage and preservation of software and its data. The consumer issue of not being able to trust that a purchased app will keep running even when nothing else changes is bad enough (you could set back the system clock, but how realistic is that, even on an unconnected system? You would no longer be able to trust the creation or modification dates of any of your documents, for a start); but the inability to preserve running software on a cultural level is frightening. Even more so for the documents with proprietary formats created by that software. I’ve been following with interest the initiatives of Jason Scott in that area; I am definitely down with the need to preserve this software and data, not just for ourselves, but for future generations. And the Mac App Store (and the iOS App Store, the only difference being that we have not had any fire drill on that side. Yet.) is “not helping”. To put it mildly, because this blog tries to be family friendly.

I initially thought there was no DRM component to this story: certificates, “damaged apps”, that sounded like code signature infrastructure, in other words protection of the consumer against malware, something that the user can disable (ostensibly, at least). But when I tried to convince my Mac to run this app as an unsigned app, I encountered what is extremely likely to be the store DRM: I initially got the “your app was bought on another machine” message, so I tried deleting the receipt, but then I got the dreaded “app damaged” message, at which point I removed the signature. But no dice: in that case, the app does not launch either, with the console printing:

13/11/15 15:36:23,608 com.apple.launchd.peruser.502: ([0x0-0x2cc2cc].com.tapbots.TweetbotMac[9317]) Exited with code: 173
13/11/15 15:36:23,663 storeagent: Unsigned app (/Applications/Tweetbot.app).

Since I removed the MAS receipt, how is storeagent getting involved? Probably in order to decode the app DRM, and as you can see, it refuses to do so because the app is now unsigned. So now we have DRM preventing us from running our legitimately bought software. I have kept a pristine copy of the app in a safe location to make further attempts, but the only way I can see is to create a new root CA, install it on the machine as a trusted root, and redo the signing chain; and even that might not work if the DRM is somehow tied to the signature chain.

I was already wary of buying apps without trials; this event guarantees that I will never buy anything else from the Mac App Store (and I will try to obtain direct licenses for apps already bought there). No direct version of your app? You don’t get my business. I would delete the Mac App Store app if I could. Apple could change my mind by providing verifiable commitments on the ability to disable the signature checks, and on operational service levels, and even then… Furthermore, Apple owes an apology to all the app developers who trusted them with the Mac App Store and who had a long day (and will continue to have long days) of customer support entirely due to Apple’s incompetence.

Apple later on sent emails to developers to explain the issue; I will count that as the aforementioned apology. I don’t mind that they took a few business days to react, as they themselves had to figure out what the problems were from the multiple reports; I do mind that, operationally, they allowed developers and themselves to be caught flat-footed in the first place: why isn’t there anyone at Apple checking that a sample of Mac App Store apps still run on a machine with the time perpetually set to one month in the future? Still, I guess I’m glad we got an answer in the first place. — November 19, 2015

~ Reactions ~

Rainer Brockerhoff, besides presenting some investigations and corrections, took some issue with my investigation methods, and we had an exchange in the comments there. Don’t miss it.

Exercising, Apple TV and the Web

At the time of the initial discovery of its SDK, and again now with its recent release, debates have flared about the presence of a web browser on the “new” (as of 2015) Apple TV, or to be more accurate about the lack thereof; and about the desirability of a web browser on a TV in general. I wish to contribute but one data point to the debate.

For two years now I have been keeping in shape by using an elliptical/cross trainer for about an hour three times a week, among other reasons in order to be in shape for a week-long mountain trek in the summer. This has led me to look for ways to fill these hours with some sort of distraction, and unfortunately (most) movies are not appropriate, given that, when combined with the exercise, they tend to make my heart rate go way too high. I did watch a number of works (humor in particular: Monty Python, Blackadder, Kaamelott, Spaceballs, etc.) but also tried alternatives, such as browsing the web, in order to catch up on, or read from the beginning, some webcomics for instance. I tried three ways to do so:

  • attaching my iPad 2 to the trainer using a GorillaPod case (works very well, very much recommended!) and browsing the web with it,
  • using my iPhone as a mouse with Mobile Mouse Remote 1 (recommended too, if you need this kind of functionality) to control Safari running on my Mac,
  • using my Wii, connected to the same computer monitor, and its remote to control the built-in browser.

In the end, except for one aspect which ended up breaking the deal, the supposedly terrible Wii browser actually provided the best experience. Indeed, in this constrained environment (remember, I have to be holding the handlebars most of the time) the Wii remote was the best way to interact, as I could actually hold it, and press the d-pad to scroll, while still holding the handlebars, which was not the case with either other method; even when I occasionally had to point the remote (and thus let go of the right handlebar) this was surprisingly usable; more so, I felt, than using the iPad.

The deal breaker was that the Wii was just way too slow to load and render pages. Any time I gained from its better interface was lost waiting for each page to appear, and I have returned to mostly watching videos while training. The fact remains: speed aside, the meant-for-TV browser of the Wii was actually the best web browsing experience I could get while using a trainer.

Now imagine using the new Apple TV instead in this situation. It would simply not have the same kind of speed issue, obviously, and the Apple TV remote could potentially provide an even better experience, for instance by eschewing the pointer (as the Apple TV appears to be doing in general), with interactions on the remote’s touch surface performing both the scrolling and the moving of focus from link to link; the + and – buttons would zoom in and out, etc., and all of it could be done without ever letting go of the handlebars.

Is it a common case? No, certainly not. Training, and in particular the person who trains being able to monopolize the device to which the Apple TV is connected, is a very specific use case. But it is a non-trivial, non-contrived use case in which remote-based, big screen web browsing makes perfect sense. So maybe the lack of a browser on the Apple TV is less a case of lack of floppy drive on the iMac, and more a case of lack of copy and paste on the original iPhone. I can only hope it eventually shows up, at least.


  1. Interestingly, I came to that app by first trying out the “lite” version, then upgrading to the paid version; and I initially (re-)tried the lite version because I still had it installed from back when I tried it out as part of the “try before you buy” featured group, which you might remember I mentioned back in the day. So you see, you never know when someone installing and trying out the demo of your app might bear fruit!