Various comments on the iPad pro, among other Apple announcements

As someone who regularly holds forth on how the iPad is (or how it currently falls short of being) the future of computing, I guess I should comment on the announcements Apple made this September, in particular of course the iPad pro. I think it’s great news, of course. But… where to begin? Even more so than with the iPhone, it is software applications, and specifically third-party software, that will make or break the iPad, and the iPad pro most of all, considering that most of its hardware improvements will only really be exploited by third-party apps: it does not appear, for instance, that Apple will provide a built-in drawing app to go with the impressive tablet drawing capabilities of the iPad pro.

And so, what corresponding iOS platform news did we get from Apple this September? Err, none. From a policy standpoint, iOS is still as developer-unfriendly as ever, by not supporting software trials for instance, even though these are a fundamental commercial practice. In fact, this appears to be backfiring on Apple: when it comes to professional iPad software, it has resulted in potential buyers going for the already established brands1, and in Apple having to bring Adobe and Microsoft on stage in the presentation to prove the professional potential of the iPad pro; those two companies are probably the ones Apple wishes to be least dependent upon, and yet here we are.

And what about iOS, iOS 9 specifically? When it was announced at WWDC, my thoughts were, on the timing: “It’s about time!”, and on the feature set: “This is a good start.” But that was in the context of the 10″ iPad, so with the iPad pro announcement I was expecting more multitasking features to be announced along with it. Nope, that was it: what I saw at WWDC and took for catch-up work meant for the then-current iPads was in fact a stealth preview of the software features meant to be sufficient for the iPad pro. Don’t get me wrong, on-screen multitasking on the iPad is awesome (I’ve been playing with it this summer with the developer betas), but the iPad pro will need more than that. Much more, such as, I don’t know, drag and drop between apps? Or a system where one app could ask a service of another app (bringing it on screen by the first app’s side if it wasn’t already present), similar to extensions except it would make better use of the screen real estate afforded by the iPad and iPad pro?

So yes, most definitely count me in with those who think Apple is solving only one part of the problem of the iPad platform with the iPad pro. I’m afraid the iPad has too little direct competition for Apple to take this problem seriously right now, and later it may be too late.

All that having been said, I am very impressed with what they’ve shown of the iPad pro’s capabilities as a graphic tablet, even if I will probably never use these capabilities myself; I think Wacom should indeed be worried. One thing that will matter a lot, but that Apple hasn’t talked about or posted specs on, is the thinness of the so-called air gap between the screen and the drawing surface, which (along with latency, which they did mention in the keynote) makes all the difference in whether the user feels like he is actually drawing on a surface, rather than awkwardly interacting with a machine; Piro of Megatokyo posted about this a while ago. This is something Apple has been good at, at least compared with other handset/tablet makers. At any rate, the iPad, even the iPad pro, has to support everyone’s needs, so Wacom and other graphic tablet makers may still be able to differentiate themselves by providing possibilities that a tablet which must also support finger input and usage patterns other than drawing cannot provide.

Lastly, I am interested in the iPad pro as a comic reader. The 10″ iPad family is generally great for reading manga and U.S. comics, as that screen size is large enough to contain a single page of them (in the case of U.S. comics, pages are actually slightly scaled down, but that does not impair reading); however, it falls short when it needs to show a double page spread, such as the (outstanding) one found in the latest issue of Princeless (if you’re not reading Princeless, you should be. Trust me.) or the no less outstanding ones found in Girl Genius. The iPad pro has the perfect size and aspect ratio for those, being able to show two U.S. comics pages side by side at the same scale at which a 10″ iPad shows a single page. A 10″ iPad is also too small to reasonably display a single page of comics in the French-Belgian tradition (aka “bandes dessinées”), while the iPad pro would be up to the task with only a minor downscale; I’m not about to give up buying my French-Belgian comics on paper any time soon, but it would be a wonderful way to read them for someone overseas, far away from any store that distributes such comics, especially the less well-known ones.2

As for the other announcements… I don’t know what the new Apple TV will be capable of outside the U.S., so most of its appeal is hard for me to gauge. I have little to comment on with regard to its capabilities as an app platform, except that I find the notion of universal iOS/tvOS games (even more so saved game handoff) completely ludicrous: the two have completely different interaction mechanisms, so the very notion that the “same game” could be running on both is absurd.

As for the iPhone 6S/6S+, what is most interesting to me about them is 3D Touch and Live Photos. Both depend heavily on the concept catching on with developers (well, Vine is ready for Live Photos, of course…): there is little point in being able to capture Live Photos if you can’t share them as such on Facebook/Flickr/Dropbox/etc., and it remains to be seen which developers will add 3D Touch actions to their apps, and what for. So will they catch on? We’ll see.

  1. And they, in turn, have the branding power to pull off alternate payment schemes such as subscriptions, allowing them to thrive on the iPad more than developers dependent on race-to-the-bottom iOS App Store revenue do.
  2. What about double page spreads in French-Belgian comics? Those are rare, but they exist, most notably in the œuvre of Cabu (yes, that Cabu). However, I am not sure the 17″ handheld tablet that would be necessary to properly display them would be very desirable.

Maybe Android did have the right idea with activities

One of the features that will land as part of iOS 9 is a new way to display web content within an app by way of a new view controller called SFSafariViewController, which is nicely covered by Federico Viticci in this article on MacStories. I’ve always been of the (apparently minority) opinion that iOS apps ought to open links in the default browser (which currently is necessarily Mobile Safari), but the Safari view controller might change my mind, because it pretty much removes any drawback the in-app UI/WKWebView-based browsers had: with the Safari view controller I will get a familiar and consistent interface (minus the editable address bar, of course), the same cookies as when I usually browse, my visits will be registered in the history, plus added security (and performance, though WKWebView already had that), etc. Come November, maybe I will stop long-tapping any and all links in my Twitter feed (in Twitterrific, of course) in order to bring up the share sheet and open these links in Safari.

Part of my reluctance to use the in-app browsers was that I considered myself grown up enough to switch apps, then switch back to Twitter/Tumblr/whatever when I was done reading the page: I have never considered app switching to be an inconvenience in the post-iOS 4 world, especially with the iOS 7+ multitasking interface. But I understand the motivation behind wanting to make sure the user goes back to whatever he was doing in the app once he is done reading the web content; and no, it is not about the app developers making sure the user does not stray far from the app (okay, it is not just about that): the user himself may fail to remember that he was actually engaged in a non-web-browsing activity, and thus is better served, in my opinion, by a Safari experience contained in-app than by having to switch to Safari. For instance, if he is reading a book, he would appreciate resuming it as soon as he is done reading the web page given as a reference, rather than risk forgetting he was reading a book in the first place and realizing it only the following day when he reopens the book app (after all, there is no reason why only John Siracusa ebooks should have web links).

And most interestingly, in order to deliver on all these features, the Safari view controller will run in its own process (even if, for app switching purposes for instance, it will still belong to the host app). This is completely bespoke technology: to the best of my knowledge there is no framework on iOS for executing part of an app in a specific process as part of a different host app (notwithstanding app extensions, which provide a specific service to the host app, rather than the ability to run a part of the containing app). And this reminded me of Android activities.

For those of you not familiar with the Android app architecture, Android applications are organized around activities, each activity representing a level of full-screen interaction: for example, the screen displaying the scrollable list of tweets, complete with navigation bars at the top and/or bottom. The current activity changes when drilling down or back up the app interface: when tapping a tweet, the timeline activity slides out and the tweet details activity slides in its place, with activities organized in a hierarchy. Android activities are roughly equivalent to view controllers on iOS, with a fundamental difference: installed apps can “vend” activities that other apps can use as part of their own activity hierarchy. For instance, a video player app can start with an activity listing the files found in its dedicated folder, then, once such a file is tapped, start the activity that plays a single video, while also allowing any app that wants to play a video, such as an email client or a file browser, to directly invoke the video-playing activity from within the host app.

I’ve always had reservations about this system: I thought it introduced user confusion as to which app the user “really” is in, which matters for app switching and app state management purposes, for instance. And to an extent I still think so, but it is clear the approach has a lot of merit when it comes to the browser, given how pervasive visiting web links is even as part of otherwise very app-centric interactions. And maybe it would be worth generalizing, creating an activity-like system on iOS; sure, the web browser is pretty much the killer feature for such a system (I am more guarded about its merits when applied to file viewing/editing, as in my video player example), but other reasonable applications could be considered.

So I’m asking you, would you like, as a user, to have activity-like features on iOS? Would you like, as a developer, to be able to provide some of your app’s view controllers to other apps, and/or to be able to invoke view controllers from other apps as part of your app?

“Character”-by-“character” string processing is hard, people

I bet you did not believe me when I wrote in Swift thoughts about how hard it is to properly process strings when treating them as a sequence of Unicode code points, and that as a result text is better thought of as a media flow, and strings better handled through the few primitives I mentioned, which never treat strings as a sequence of any individual entity (be that entity the byte, the UTF-16 “character”, the Unicode code point, or the grapheme cluster). I am exaggerating, of course: some of you probably did believe me, but given how I still see string processing being discussed among software developers, this is true enough.

So go ahead and read the latest post on the Swift blog, about how they changed the String type in Swift 2, and the fact that it is no longer considered a collection (by no longer conforming to the CollectionType protocol). Indeed, a collection where appending an element (a combining acute accent, or “´”) not only does not result in that element being considered part of the collection, but also results in an element previously considered part of it (the letter “e”) no longer being so, is a pretty questionable collection. Oops.
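The offending behavior is easy to reproduce in Python, whose strings are sequences of Unicode code points, one level below Swift’s grapheme clusters; a minimal sketch using the standard unicodedata module:

```python
import unicodedata

s = "cafe"        # 4 code points; the last user-perceived character is "e"
t = s + "\u0301"  # append COMBINING ACUTE ACCENT ("´")

# The appended element did not become a user-perceived character...
print(len(t))                                  # 5 code points
print(unicodedata.normalize("NFC", t))         # ...yet the string now reads "café"
print(len(unicodedata.normalize("NFC", t)))    # 4: the "e" and the accent merged into "é"
```

In other words, appending “´” did not add a member to the “collection”; instead, the previously-last member “e” stopped being one, which is exactly why pretending String is a collection of Characters was questionable.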

But that is not the (most) interesting aspect of that blog post.

Look at the table towards the end, which is supposed to correspond to a string “comprised of the decomposed characters [ c, a, f, e ] and [ ´ ]”, and which I am reproducing here, as an actual HTML table as Tim Berners-Lee intended, for your benefit (and because I am pretty certain they are going to correct it after I post this):

Character            | c (U+0063) | a (U+0061) | f (U+0066) | é (U+00E9)
Unicode Scalar Value | U+0063     | U+0061     | U+0066     | U+0065     | U+0301
UTF-8 Code Unit      | 99         | 97         | 102        | 101        | 204
UTF-16 Code Unit     | 99         | 97         | 102        | 769
The first thing you will notice is the last element of the Character view, the whole row in fact. Why are they described by a Unicode code point each? Indeed, each of these elements is an instance of the Swift Character type, i.e. a grapheme cluster, which can be made up of multiple code points, and this is particularly absurd in the case of the last one, which corresponds to two Unicode code points. True, it would compare equal with a Swift Character containing a single LATIN SMALL LETTER E WITH ACUTE, but that is not what it contains. And yet, this is only the start of the problems…

If we take the third row, its last element is incorrect. Indeed, 204, or 0xCC ($CC for the 68k assembly fans in the audience) is only the first byte of the UTF-8 serialization of U+0301 (COMBINING ACUTE ACCENT) that you see in the previous row (which is correct, amazingly), the second being $81.

And lastly, if the last two columns are two separate Unicode scalar values, how could they possibly be represented by a single UTF-16 code unit? Of course, they can’t: 769 is $0301, our friend the combining acute accent. The “e” is simply gone.

So out of 4 rows, 3 are wrong. *Slow clap.* Here is the correct table:

Character            | c          | a          | f          | é (U+0065 U+0301)
Unicode Scalar Value | U+0063     | U+0061     | U+0066     | U+0065     | U+0301
UTF-8 Code Unit      | 99         | 97         | 102        | 101        | 204  129
UTF-16 Code Unit     | 99         | 97         | 102        | 101        | 769
Note that with the example given, Unicode scalar values match one for one with the UTF-16 code units in the sequence. For a counterexample, the string would have to include Unicode code points beyond the Basic Multilingual Plane — a land populated by scripts no longer in general use (hieroglyphs, Byzantine musical notation, etc.), extra compatibility ideographs, invented languages, and other esoteric entities; that place, by the way, is where emoji were (logically) put in Unicode.
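These values are easy to check for yourself; here is a quick Python sketch (the discussion is about Swift, but the encodings are language-independent) that recomputes the rows of the corrected table, then shows the promised counterexample with a code point beyond the Basic Multilingual Plane, namely TOKYO TOWER (U+1F5FC):

```python
s = "cafe\u0301"  # "café" built from the decomposed characters [c, a, f, e] and [´]

scalars = [f"U+{ord(c):04X}" for c in s]
utf8 = list(s.encode("utf-8"))
be = s.encode("utf-16-be")
utf16 = [int.from_bytes(be[i:i + 2], "big") for i in range(0, len(be), 2)]

print(scalars)  # ['U+0063', 'U+0061', 'U+0066', 'U+0065', 'U+0301']
print(utf8)     # [99, 97, 102, 101, 204, 129] -- U+0301 serializes as $CC $81
print(utf16)    # [99, 97, 102, 101, 769]      -- 769 is $0301

# Beyond the Basic Multilingual Plane, a single scalar value needs *two*
# UTF-16 code units (a surrogate pair), so the one-for-one match breaks:
tower = "\U0001F5FC"  # TOKYO TOWER
be = tower.encode("utf-16-be")
pair = [int.from_bytes(be[i:i + 2], "big") for i in range(0, len(be), 2)]
print([f"${u:04X}" for u in pair])  # ['$D83D', '$DDFC']
```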


If Apple can’t get the “Characters”, UTF-16 code units, and bytes of a seemingly simple string such as “café” straight in a blog post designed to show these very views of that string, what hope could you possibly have of getting “character”-wise text processing right?

Treat text as a media flow, by only using string processing primitives without ever directly caring about the individual constituents of these strings.
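To make the advice concrete, here is a small Python illustration (the principle is language-independent) of how “character”-by-“character” processing goes wrong where whole-string primitives do not:

```python
import unicodedata

decomposed = "cafe\u0301"   # "café" with a combining acute accent
precomposed = "caf\u00e9"   # "café" with the precomposed "é"

# "Character"-by-"character" processing: reversing the code points tears
# the combining accent off its base letter and attaches it elsewhere.
print(decomposed[::-1])     # the accent now lands on the wrong letter

# Code-point-wise equality claims these two "café"s are different text...
print(decomposed == precomposed)  # False

# ...whereas a whole-string primitive (normalization) gets it right.
print(unicodedata.normalize("NFC", decomposed) ==
      unicodedata.normalize("NFC", precomposed))  # True
```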

Thoughts on developer presentation audiences

So after the WWDC 2015 keynote, reading Dr. Drang (via Six Colors) and generally agreeing with his take sparked some reflections. In particular, as a software developer myself, a side reflection about what (if anything) is particular about software developer audiences, so that people like Drake and Jimmy Iovine, in the unlikely case they read this, don’t think of software developers as a mean crowd; and who knows, it could be applicable to other show-biz types in case they present at events like Build or Google I/O.

But to begin with, whose bright idea was it, honestly, to have Apple Music be the “one more thing” of a WWDC keynote? Especially of a keynote that was one of the longest, if not the longest, in recent history (I’d check, but … is currently redirecting me to …). Only one of those (“one more thing” or “WWDC keynote”) would have been fine, but not both. At the end of a long presentation, at a time when the attention of the crowd (which, if it needs reminding, was up very early and spent a lot of time in line, because otherwise you end up in an overflow room) may be waning, you can have a “one more thing” about something outside the theme, for instance a new hardware announcement, on condition that this announcement relieves some pain points (e.g. new hardware that makes a previously impossible combination possible, easing the life of developers who use one and develop for the other) or otherwise has elements that can spark the audience’s specific interest, so that you can be guaranteed some cheers and applause and keep the crowd interested. Apple Music, even if it materializes as a good product, does not have that.

Don’t get me wrong, software developers like music just as much as the next guy. And heck, we’ve seen worse, including at WWDC or iPhone SDK events (Farmville, anyone?). In fact, software developers are not a tough crowd; they will almost always give at least polite applause when cued. I remember the Safari kickoff presentation at WWDC 2010, with an audience therefore presumably dominated by web developers; Safari extensions ended up being introduced, with one of the presenters demonstrating… how they had ported their ad blocking extension to Safari. To an audience at least in part making a living (directly or indirectly) from web advertising. Even then, he got polite applause at the end of his presentation, like everyone else. And outside of very specific, preexisting situations (it was at Macworld, but could just as well have happened at WWDC) software developers will never boo a presenter offstage. Why is that? Well, an important part is that software developers have been in the presenter’s shoes before; not at this scale, most likely, but they know it’s a tough job, either to have a demo that works (hence the applause even for incremental features already seen on other platforms), or worse, if there is no demo, to convey the importance of the software you are talking about without being able to show it. And even if the presenter is mediocre, contrary to a mediocre artist, software developers know that applauding the presenter will not make him stay longer, and his script will end soon enough anyway, so they might as well politely applaud.

Software developers, as tech enthusiasts, are more generally interested in anything that moves the state of the art forward, even if it has little relation to any technology they will actually make use of in their job, on condition they can see what is new or specific about it; or at least, they want to be able to take the announcement apart, as we’re going to see. Plus, they are heavy users of the platforms they are developing for, so anything that makes users’ lives easier makes theirs easier too, and they will react to that.

But software developers are also a wary bunch. All of them have been burned before; it doesn’t matter whether they trusted a company they shouldn’t have, or whether they couldn’t have foreseen anything and were simply betrayed: they all have a past experience of betting on something (a platform, an API, a service, a tool, etc.) and losing their bet. So in presentations they don’t want to be given dreams of an ideal future where the product magically does what we expect of it; rather, they want demos, or at the very least material claims that can be objectively evaluated as soon as anything concrete is provided. Triply so for Internet services. Everything else is just a setup to get to a demo, as far as a software developer is concerned. As a result, software developers also take apart everything that is being said to try and figure out how it works, in particular to foresee any potential limitation; yes, to an extent dissecting everything that way takes the magic out, but remember that for software developers this is a matter of survival. Again, triply so for Internet services. Do I need to remark that the Apple Music presentation, even the demo from Eddy Cue, provided little in the way of these material claims? He did show how the user interacts with the service, but, it being an Internet service, this does not really show how it works. It also means that the crowd may be too busy trying to make sense of what you are saying to react to your quick quips.

Software developers are also extremely good at math and mental arithmetic. It goes with the job. They will double-check everything you throw at them, live, so don’t ever expect to be able to assert claims that don’t literally add up.

As with any recurring event (as opposed to, say, a concert date, where this is less the case), there is also a lot of lore and unsaid things that are nevertheless known to both the regular presenters and the audience. If you’re not a regular presenter, it’s not something you can tap into (so yes, you will be at a disadvantage compared to the regular presenters), but you had better be briefed about it to avoid running into it by accident. I remember a high school incident where I had an exchange with a classmate that the class couldn’t miss: it was about flint blades, and I can’t remember what the root of the problem was, but I was countering that this was no way to build a hatchet that could chop down trees, to which my classmate countered that chopping down trees was not the guys’ aim. I think I let him have the point at the time; it was presumptuous of him to assume this, but at the same time it was presumptuous of me to assume this use case, which was just something I had come up with off the top of my head as a use for a hatchet. Fast forward a few months, still in the same year: the class made a field trip to a place where they studied stone age tooling, and during his presentation the guide explained that by following the methods of the period (preparing and carving the stone, pairing it with a wooden handle and attaching it with then-current methods), he had obtained a tool that worked very well, noting as an example that he had been able to chop down a small tree with it.

After maybe a beat, the whole class erupted in laughter.

My classmates had clearly not forgotten; the poor presenter was all “Was it something I said?” (I was tempted to reach out to him, shake his hand and thank him profusely), and our teacher fortunately came to his rescue by promising to tell him later. There was of course no way he could have been briefed for this, but that is not the case for an event such as WWDC.

And, as a “one more thing”, I think I should close with a mention of the other WWDC keynote audience: those who watch the live stream and react on Twitter instead. This is not exactly a developer audience, but pretty close. However, the reactions tend to be all over the place; in particular you always get the complaint (which ends up trending every time!) that no new hardware (or no new hardware the poster was interested in) was announced, even though this is a silly expectation to have: you can always hope for some, mind you, but given there is always a lot to say about future OS updates at WWDC, especially now with three major Apple platforms, it’s better for Apple to make these announcements at other times. So it’s hard to get a feel for the WWDC live stream audience.

WWDC 2015 Keynote not-quite-live tweeting

(Times are GMT+2)

  • 10:14 PM – 8 Jun 2015: Looking at the previewed Mac OS X improvement, I think it’s too bad @siracusa is not going to be reviewing them (but it’s his call) #WWDC15
  • 10:16 PM – 8 Jun 2015: Speaking of @siracusa, I’d almost prefer for multitasking dividers not to be repositionable. #positioning #OCD #WWDC15
  • 10:19 PM – 8 Jun 2015: Metal on the Mac: “Of course this means war” #OpenGL #WWDC15
  • 10:21 PM – 8 Jun 2015: At long last we have search in third-party apps on iOS! (viz. ) #WWDC15
  • 10:23 PM – 8 Jun 2015: (Around the 37 minute mark): It’s funny, because I’m actually exercising as I watch the keynote stream and take these notes. #WWDC15
  • 10:24 PM – 8 Jun 2015: Siri does still rely on network services, so it can’t all “stay on the device”… #WWDC15
  • 10:27 PM – 8 Jun 2015: (Around the 43 minute mark): the Apple guys have turned into Stanley Yankeeball (minus the Stanley) #WWDC15
  • 10:33 PM – 8 Jun 2015: These improvements to notes might be good, or might turn it into a mess (is it a word processor? For structured text? Something else?) #WWDC15
  • 10:35 PM – 8 Jun 2015: Mapping exits of tube stations is great, not even all of the transit systems’ dedicated apps do so. #WWDC15
  • 10:37 PM – 8 Jun 2015: Since it’s only in select countries at first, the new news app is more than just an aggregator and probably has some editorial. #WWDC15
  • 10:39 PM – 8 Jun 2015: Keyboard gestures for editing are great, but are they like cursor keys (more accurate) or more like mouse movement? #WWDC15
  • 10:40 PM – 8 Jun 2015: Yes! Yes! Yes! Yes! Split screen multitasking on iPad! Amply justifies upgrading to the Air 2. #WWDC15
  • 10:41 PM – 8 Jun 2015: I don’t know how practical multi-touch on multiple apps is, but it sure rocks. #WWDC15
  • 10:42 PM – 8 Jun 2015: New low power mode is the battery equivalent of low memory warnings. #WWDC15
  • 10:43 PM – 8 Jun 2015: Apple game development frameworks still aren’t credible as long as Apple is not dogfooding them. We want Apple-made games! #WWDC15
  • 10:45 PM – 8 Jun 2015: With Home Kit through iCloud, better hope that iCloud is secure… (or that this particular part can be disabled). #WWDC15
  • 10:46 PM – 8 Jun 2015: About Swift: open source is nice, standardization would be nicer. Yes, Objective-C isn’t a standard, but C and C++ are. #WWDC15
  • 10:47 PM – 8 Jun 2015: With iOS9 still supporting the iPad 2, get ready to have to support ARMv7 and the Cortex A9 for some time (it’s not hard, mind you). #WWDC15
  • 10:48 PM – 8 Jun 2015: Can’t really comment on watchOS improvements, since I don’t know much about what it currently does anyway. #WWDC15
  • 10:51 PM – 8 Jun 2015: With native Apple Watch apps, get ready for a “Benchmarking on your wrist” post from @chockenberry as soon as watchOS 2.0 lands. #WWDC15
  • 10:52 PM – 8 Jun 2015: (around the 1:40 mark): wasn’t expecting them to be ready to demo the new watchOS features live so soon after Apple Watch release. #WWDC15
  • 10:54 PM – 8 Jun 2015: (around the 1:41 mark): Kevin Lynch was tethered by the wrist during the Apple Watch demo. Is that punishment for Flash? #WWDC15
  • 10:56 PM – 8 Jun 2015: I was even less expecting them to have a new watch OS beta ready today, 6 weeks after the Apple Watch release. #WWDC15
  • 10:57 PM – 8 Jun 2015: Between Jimmy Iovine and the two women (sorry ladies, I did not write down your names), many new presenters, that’s great. #WWDC15
  • 10:58 PM – 8 Jun 2015: Interesting that they would present Apple Music at WWDC, would appear more fitting for an iPhone or music event. #WWDC15
  • 10:59 PM – 8 Jun 2015: I am more interested in music I can keep, though global radio is interesting. #WWDC15
  • 11:01 PM – 8 Jun 2015: Nothing has really replaced the records stores so far when it comes to music discovery. Will Apple Music do better than Ping? #WWDC15
  • 11:02 PM – 8 Jun 2015: With the news app and Apple Music, Apple is doing more editorial/curation than they ever did. #WWDC15
  • 11:03 PM – 8 Jun 2015: I won’t comment on Apple Pay until it reaches France. #WWDC15
  • 11:04 PM – 8 Jun 2015: Sure, you can ask Siri for the music used in Selma, but she’s no Shazam. #WWDC15
  • 11:05 PM – 8 Jun 2015: After the demo, my feeling of Apple Music is: Netflix for music. Android support is interesting… #WWDC15
  • 11:06 PM – 8 Jun 2015: Again, interesting to have a live performance at WWDC, rather than at an iPhone or music event. #WWDC15
  • 11:09 PM – 8 Jun 2015: And that’s it for the #WWDC15 keynote comments. Now back to notifying of new posts.
  • 8:53 AM – 9 Jun 2015: Some more post-sleep #WWDC15 thoughts before returning to normal:
  • 8:56 AM – 9 Jun 2015: First, there was no homage or reference (that I could spot) in the keynote to @Siracusa and his Mac OS X reviews, I’m disappointed. #WWDC15
  • 9:01 AM – 9 Jun 2015: Second, maybe it’s just me, but I get the impression the keynote is less and less for developer-level features. #WWDC15
  • 9:13 AM – 9 Jun 2015: Third, no free Apple Music tier means people won’t get the impression this is music they can access forever. #WWDC15
  • 9:15 AM – 9 Jun 2015: Fourth and I’ll be done: with Apple global radio, what happens to iTunes Radio? #WWDC15

Thank you, Mr. Siracusa

Today, I learned that John Siracusa had retired from his role of writing the review of each new Mac OS X release for Ars Technica. Well, review is not quite the right word: as I’ve previously written when I had the audacity to review one of his reviews, what are ostensibly articles reviewing Mac OS X are, to my mind, better thought of as book-length essays that aim to popularize the progress made in each release of Mac OS X. They will be missed.

It would be hard for me to overstate the influence that John Siracusa’s “reviews” have had on my understanding of Mac OS X and on my writing; you only have to see the various references to John or his reviews I made over the years on this blog (including this bit…). In fact, the very existence of this blog was inspired in part by John: when I wrote him with some additional information in reaction to his Mac OS X Snow Leopard review, he concluded his answer with:

You should actually massage your whole email into a blog post [of] your own.  I’d definitely tweet a link to it! :)

to which my reaction was:

Blog? Which blog? On the other hand, it’d be a good way to start one

Merely 4 months later, for this reason and others, this blog started (I finally managed to drop the information alluded to in 2012; still waiting for that tweet ;) ).

And I’ll add that his podcasting output may dwarf his blogging in volume, but, besides the fact that I don’t listen to podcasts much, I don’t think the two really compare, mostly because podcasts lack the reference aspect of his Mac OS X masterpieces, due to their inherent limitations (not indexed, hard to link to a specific part, not possible to listen to in every context, etc.). But, ultimately, it was his call; as someone commented, if I remember correctly, on the video of this (the actual video has since gone the way of the dodo): “Dear John, no pressure. Love, the Internet”. Let us not mourn, but rather celebrate, from the Mac OS X developer preview write-ups to the Mac OS X 10.10 Yosemite review, the magnum opus he brought to the world. Thank you, Mr. Siracusa.

April Fools’ 2015

As you probably guessed, the post I made Wednesday was an April Fools’ joke… well, the kind of April Fools’ joke I do here, of course: just because it was for fun does not mean there was no deeper message to that post (now translated to English for your understanding).

In case you missed it, for April the first (besides posting that post) I translated my greatest hits (as listed there) and a few other minor posts into French, replaced all others with a message in French claiming the post in question was being translated, replaced the comicroll with an equivalent one listing French online comics, and translated into French all post titles and all elements of the blog interface: “React”, search, dates, etc., up to the blog title: “Le Programmeur Itinérant”. (It stayed that way a bit longer than the initially planned 1-2 days because of unforeseen technical issues; my apologies for the trouble.) All this to remind you, in case my name did not make it clear enough, that even though I publish in English, my first language is actually French.

The problem of the availability of information, especially technical information, in more than one human language has always interested me, for reasons of inclusiveness among others. It remains a very hard problem (I did get a good laugh at the results of Google Translate back to English when applied to my French posts), and so initiatives such as this one are very welcome (they translated my “A few things iOS developers…” post, for instance, but I can’t find the link at the moment).

Lastly, there have been a few influences that led me to do this for April the first, but I want to thank in particular Stéphane Bortzmeyer, who manages to maintain a very technical blog in French; whenever I needed the French translation of a technical term I could typically just look in his blog to see what he uses (or to confirm there was no point in trying, e.g. for “smartphone”, which has no real French translation). Much respect to him for this.

To arms, citizens!

To arms, I say! I just realized the enormous scandal that is the presence in Unicode of the emoji character TOKYO TOWER (U+1F5FC), which you should be able to see after the colon, if you are equipped with a color set: 🗼. Scandal, I say, as this thing, which we never talk about at home whenever we talk about Tokyo, and for good reason, as it is in truth a pale imitation of our national tower, the Eiffel tower, that the Japanese made at a time when they found success in imitation… Where was I? Oh, yes, so, that thing managed to steal a spot in Unicode even though our Eiffel tower isn’t in there! Scandal, I say!

Worse yet, this was done with the yankees’ complicity, who shamelessly dominate the Unicode consortium; the collusion is obvious when we see they themselves took advantage of it to slot in the Statue of Liberty. And I say, no, this shall not pass! Say no to the US-Japan cultural domination! That is why, from now on, my blog will be in French. Too bad for you if you can’t read it. I even started translating my previous posts, starting with my greatest hits, namely A few things iOS developers ought to know about the ARM architecture, Introduction to NEON on iPhone, Benefits (and drawback) to compiling your iOS app for ARMv7 and PSA: Do not release ARMv7s code until you have tested it. And I have no intent of stopping there.

Join me in the protest to demand that the Eiffel tower be added to Unicode! To arms!


Unconventional iOS app extension idea: internal thumbnail generator

The arrival of extensions (along with similar features) in iOS 8, even if it does not solve all problems with the platform’s inclusiveness, represents a sea change in what is possible for third-party developers on iOS, enabling many previously unviable apps such as Transmit iOS. But even with the ostensibly specific scenarios (document provider extensions, share extensions, etc.) that app extensions are allowed to hook themselves into, I feel we have only barely begun to realize the potential of extensions. Today I would like to present a less expected problem extensions could solve: fail-safe thumbnail generation.

The problem is one we encountered back in the day when developing CineXPlayer. I describe the use case in a radar (rdar://problem/9115604), but the tl;dr version is we wanted to generate thumbnails for the videos the user loaded in the app, and were afraid of crashing at launch as a result of doing this processing (likely resulting in the user leaving a one-star “review”), so we wanted to do so in a separate process to preserve the app, but the sandbox on iOS does not allow it.

But now in iOS 8 there may be a way to use extensions to get the same result. Remember that extensions run in their own process, separate from both the host app process and the containing app process; so the idea would be to embed an action extension for a custom type of content that in practice only our app provides, expose the videos loaded in the app to extensions under that type, and use the ability of action extensions to send content back to the host in order to return the generated thumbnail. If our code crashes while generating the thumbnail, we only lose the extension process, and the app remains fine.

This would not be ideal, of course, as the user would have to perform an explicit action on each and every file (I haven’t checked to see whether there would be sneaky ways to process all files with one extension invocation), but I think it would be worth trying if I were still working on CineXPlayer; and if after deployment Apple eventually wises up to it, well, I would answer them that it’s only up to them to provide better ways to solve this issue.

MPW on Mac OS X

From Steven Troughton-Smith (via both Michael Tsai and John Gruber) comes the news of an MPW compatibility layer project and how to use it to build code targeting Classic Mac OS and even Carbonized code from a Mac OS X host, including Yosemite (10.10). This is quite clever, and awesome news, as doing so was becoming more and more complicated, and in practice required keeping one or more old Macs around.

Back in the days of Mac OS X 10.2-10.4, I toyed with backporting some of my programming projects, originally developed in Carbon with Project Builder, to MacOS 9, and downloaded MPW (since it was free, and CodeWarrior was not) to do so. The Macintosh Programmer’s Workshop was Apple’s own development environment for developing Mac apps, tracing its lineage from the Lisa Programmer’s Workshop, which was originally the only way to develop Mac apps (yes, in 1984 you could not develop Mac software on the Mac itself). If I recall correctly, Apple originally had MPW for sale, before they made it free when it could no longer compete with CodeWarrior. You can still find elements from MPW in the form of a few tools in today’s Xcode — mostly Rez, DeRez, GetFileInfo and SetFile. As a result, I do have some advice when backporting code from Mac OS X to MacOS 9 (and possibly earlier, as Steven demonstrated).

First, you of course have to forget about Objective-C, forget about any modern Carbon (e.g. HIObject, though the Carbon Event Manager is OK), forget about Quartz (hello QuickDraw), and forget about most of Unix, though if I recall correctly the C standard library included with MPW (whose name escapes me at the moment) does have some Unix-like support besides the standard C library, such as open(), read(), write() and close(). Don’t even think about preemptive threads (or at least, ones you would want to use). In fact, depending on how far back you want to go, you may not have support for things you would not even consider niceties, but which were actually nicer than what came before; for instance, before Carbon, a Mac app would call WaitNextEvent() in a loop to sleep until the next event that needed processing, and then the app would have to manually dispatch it to the right target, including switching on the event type, performing hit testing, etc.: no callback-based event handling! But WaitNextEvent() itself did not appear until System 7, if I recall correctly, so if you want to target System 6 and earlier, you have to poll for events while remembering to yield processing time from time to time to drivers, to QuickTime (if you were using it), etc. In the same way, if you want to target anything before MacOS 8 you cannot use Navigation Services and instead have to get acquainted with the Standard File Package… FSRefs are not usable before MacOS 9, as another example.
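To make the contrast concrete, here is a sketch of such a pre-Carbon main loop; this is classic Toolbox C that will not build with a modern toolchain, and the Do…() dispatch functions are hypothetical app code:

```c
#include <Events.h>   /* classic Toolbox header */

void EventLoop(void)
{
    EventRecord event;
    for (;;) {
        /* Sleep (here up to 30 ticks) until an event needs processing;
           a non-zero sleep value also gives background apps some time. */
        if (WaitNextEvent(everyEvent, &event, 30, NULL)) {
            switch (event.what) {        /* manual dispatch on event type */
            case mouseDown:
                DoMouseDown(&event);     /* hit testing: menu bar, drag
                                            region, close box, content… */
                break;
            case keyDown:
            case autoKey:
                DoKeyDown(&event);
                break;
            case updateEvt:
                DoUpdate((WindowPtr)event.message);
                break;
            /* activateEvt, osEvt, etc. omitted */
            }
        }
    }
}
```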

When running in MacOS 9 and earlier, the responsibilities of your code also increase considerably. For instance, you have to be mindful of your memory usage much more than you would have to be in Mac OS X: even when running with virtual memory in MacOS 9 (something many users disabled anyway), your application only has access to a small slice of address space called the application’s memory partition (specified in the 'SIZE' resource, and which the user can change); there is only one address space in the system, partitioned between the running apps. As a result, memory fragmentation becomes a much more pressing concern, requiring in practice the use of movable memory blocks and a number of assorted techniques (moving blocks high, locking them, preallocating master pointers, etc.). Another example is that you must be careful to leave processor time for background apps, even if you are a fullscreen game: otherwise, if iTunes is playing music in the background, for instance, it will keep playing (thanks to a trick known as “interrupt time”)… until the end of the track, and become silent from then on. Oh, and did I mention that (at least before Carbon and the Carbon Event Manager) menu handling runs in a closed event handling loop (speaking of interrupt time) that does not yield any processing time to your other tasks? Fun times.
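The handle discipline this imposes looks roughly like the following sketch (classic Memory Manager calls; kBufferSize and DoSomethingWith() are made up for the example, and none of this builds on a modern system):

```c
Handle h = NewHandle(kBufferSize);   /* relocatable block in the app's partition */
if (h == NULL) {
    /* allocation can easily fail in a small memory partition */
}
MoveHHi(h);   /* move the block high in the heap to limit fragmentation */
HLock(h);     /* pin it: *h stays valid only while the block is locked */
DoSomethingWith(*h, GetHandleSize(h));
HUnlock(h);   /* allow the Memory Manager to relocate it again */
```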

Also, depending again on how far back you want to go, you might have difficulty using the same code in MacOS 9 and Mac OS X, even with Carbon and CarbonLib (the backport of most of the Carbon APIs to MacOS 9 as a library, in order to support the same binary and even the same slice running on both MacOS 9 and Mac OS X). For instance, if you use FSSpec instead of FSRef in order to run on MacOS 8, your app will have issues on Mac OS X with file names longer than were possible on MacOS 9; they are not fatal, but will cause your app to report the file name as something like Thisisaverylongfilena#17678A… not very user-friendly. And the Standard File Package is not supported at all in Carbon, so you will have to split your code at compile time (so that the references to the Standard File Package are not even present when compiling for Carbon) and diverge at runtime so that when running in System 7 the app uses the Standard File Package, and when running in MacOS 8 and later it uses Navigation Services, plus the assorted packaging headaches (e.g. using a solution like FatCarbon to have two slices, one ppc that links to InterfaceLib, the pre-Carbon system library, linking weakly to the Navigation Services symbols, and one ppc that links to CarbonLib and only runs on Mac OS X).
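The resulting split can be sketched as follows; TARGET_API_MAC_CARBON and kUnresolvedCFragSymbolAddress are the real conditionals for this, while the AskUserForFile…() helpers are hypothetical (and, as above, this is classic code that will not build on a modern toolchain):

```c
OSErr AskUserForFile(FSSpec *file)
{
#if TARGET_API_MAC_CARBON
    /* Standard File does not exist in Carbon: always Navigation Services. */
    return AskUserForFileNav(file);
#else
    /* Navigation Services symbols are weakly linked: check that they
       resolved before calling them, else fall back to Standard File. */
    if ((void *)NavServicesAvailable != (void *)kUnresolvedCFragSymbolAddress
        && NavServicesAvailable())
        return AskUserForFileNav(file);   /* MacOS 8 and later */
    else
        return AskUserForFileSF(file);    /* System 7: Standard File Package */
#endif
}
```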

You think I’m done? Of course not, don’t be silly. The runtime environment in MacOS 9 is in general less conducive to development than that of Mac OS X: the lack of memory protection not only means that, when your app crashes, it is safer to just reboot the Mac since it may have corrupted the other applications, but also means you typically do not even know when your code, say, follows a NULL pointer, since that action typically doesn’t fault. Cooperative multitasking also means that a hang in your app hangs the whole Mac (only the pointer still moves), though that can normally be resolved with a good command-alt-escape… after which it’s best to reboot anyway. As for MacsBug, your friendly neighborhood debugger… well, for one, it is disassembly only, no source. But you can handle that, right?

It’s not that bad!

But don’t let these things discourage you from toying with Classic MacOS development! Indeed, doing so is not as bad as you could imagine from the preceding descriptions: none of those things matter when programming trivial, for fun stuff, and even if you program slightly-less-than-trivial stuff, your app will merely require a 128 MB memory partition where it ought to only take 32 MB, which doesn’t matter in this day and age.

And in fact, it is a very interesting exercise, because it allows a better understanding of what makes the Macintosh the Macintosh, by seeing how it was originally programmed. So I encourage you all to try and play with it.

For this, I do have some specific advice about MPW. For one, I remember MrC, the PowerPC compiler, being quite anal-retentive about certain casts, which it just refuses to do implicitly: for instance, the following code will cause an error (not just a warning):

SInt16** sndHand;
sndHand = NewHandle(sampleNb * sizeof(SInt16));

You need to explicitly cast:

SInt16** sndHand;
sndHand = (SInt16**)NewHandle(sampleNb * sizeof(SInt16));

It is less demanding when it comes to simple casts between pointers. Also, even though it makes exactly no difference in PowerPC code, it will check that functions that are supposed to have a pascal attribute (which marks a function as being called with the Pascal calling conventions, something that does make a difference in 68k code), typically callbacks, do have it, and will refuse to compile if this is not the case.
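For instance, a control tracking callback has to be declared with the attribute, along these lines (classic Toolbox, not buildable today; MyScrollAction is of course a made-up name):

```c
/* pascal makes no difference in the generated PowerPC code,
   but MrC refuses the code without it. */
pascal void MyScrollAction(ControlHandle theControl, ControlPartCode partCode)
{
    /* ... react while the user holds the scroll arrow ... */
}

/* Carbon-style installation goes through a UPP: */
ControlActionUPP upp = NewControlActionUPP(MyScrollAction);
```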

If you go as far back as 68k, be aware that, if I remember correctly, int is 16 bits wide in the Mac 68k environment (this is why SInt32 was long up until 64-bit arrived: in __LP64__ mode SInt32 is int), but became 32 bits wide when ppc arrived; so be careful, and in general it’s better not to use int at all.

QuickDraw is, by some aspects, more approachable than Quartz (e.g. no object to keep track of and deallocate at the end), but on the other hand the Carbon transition added some hoops to jump through that make it harder to just get started with it; for instance, something as basic as getting the black pattern, used to ensure your drawing is a flat color, is described in most docs as using the black global variable, but those docs were never updated for Carbon: with Carbon, GetQDGlobalsBlack(&blackPat); must be used to merely get that value. Another aspect which complicates initial understanding is that pre-Carbon you would just directly cast between a WindowPtr, (C)GrafPtr, offscreen GWorldPtr, etc., but when compiling for Carbon you have to use conversion functions for some of those conversions, for instance GetWindowPort() to get the port for a given window, while the others are still done with plain casts, and it is hard to know at a glance which are which.
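In code, the Carbon-era forms of these two points look roughly like this (an illustrative classic Toolbox sketch, not buildable today; window and someRect are assumed to exist):

```c
Pattern blackPat;
GetQDGlobalsBlack(&blackPat);   /* Carbon: no more 'black' global */

SetPortWindowPort(window);      /* Carbon: no more casting WindowPtr to GrafPtr */
PenPat(&blackPat);
PaintRect(&someRect);           /* flat black fill */
```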

When it came to packaging, I think I got an app building for classic MacOS relatively easily with MPW, but when I made it link to CarbonLib I got various issues related to the standard C library, in particular the standard streams (stdin, stdout and stderr), and I think I had to download an updated version of some library or some headers before it would work and I could get a single binary that ran both in MacOS 9 and natively on Mac OS X.

Also, while an empty 'carb' resource with ID 0 does work to mark the application as carbonized and make it run natively on Mac OS X, you are supposed to instead use a 'plst' resource with ID 0 containing what you would put in the Info.plist if the app were in a package. Similarly, it is not safe to use __i386__ to decide between framework includes (#include <Carbon/Carbon.h>) and “flat” includes (#include <Carbon.h>); typically you’d use something like WATEVER_USE_FRAMEWORK_INCLUDES, which you then set in your Makefile depending on the target.
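The include switch is then a small config fragment along these lines (keeping the WATEVER_ placeholder from the text):

```c
#if WATEVER_USE_FRAMEWORK_INCLUDES
#include <Carbon/Carbon.h>   /* Mac OS X, framework-style */
#else
#include <Carbon.h>          /* MPW and friends, flat includes */
#endif
```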

Lastly, don’t make the same mistake I originally did: when an API asks for a Handle, it doesn’t just mean a pointer to pointer to something, it means something that was specifically allocated with NewHandle() (possibly indirectly, e.g. with GetResource() and loaded if necessary), so make sure that is what you give it.
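In other words (a classic Toolbox sketch, not buildable today; sampleNb as in the earlier snippet):

```c
/* WRONG: a pointer to a pointer is not a Handle; the Memory Manager
   knows nothing about it, and HLock()/DisposeHandle() on it will
   corrupt memory. */
SInt16 *buffer = (SInt16 *)NewPtr(sampleNb * sizeof(SInt16));
SInt16 **notAHandle = &buffer;

/* RIGHT: a real, relocatable Handle from NewHandle(). */
SInt16 **sndHand = (SInt16 **)NewHandle(sampleNb * sizeof(SInt16));
```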

I also have a few practical tips for dealing with Macs running ancient system software (be they physical or emulated). Mac OS X removed support for writing to an HFS (as opposed to HFS+) filesystem starting with Mac OS X 10.6, and HFS is the only thing MacOS 8 and earlier can read. However, you can still, for instance, write pre-made HFS disk images to floppy disks with Disk Utility (and any emulator worth its salt will allow you to mount disk images inside the emulated system), so your best bet is to use a pre-made image to load some essential tools, then, if you can, set up a network connection (either real or emulated) and transfer files that way, making sure to encode them in MacBinary before transfer (which I generally prefer to BinHex); unless you know the transfer method is Mac-friendly the whole way, always decode from MacBinary as the last step, directly on the target. Alternately, you can keep a Mac running Leopard around to directly write to HFS floppies, as I do.

Okay, exercise time.

If you are cheap, you could get away with only providing a 68k build and a Mac OS X Intel build (except neither of these can run on Leopard running on PowerPC…). So the exercise is to, on the contrary, successfully build the same code (modulo #ifdefs, etc.) for 68k, CFM-PPC linking to InterfaceLib, CFM-PPC linking to CarbonLib, Mach-o Intel, Mach-o 64-bit PPC, and Mach-o 64-bit Intel (a Cocoa UI will be tolerated for those two) for optimal performance everywhere (ARM being excluded here, obviously). Bonus points for Mach-o PPC (all three variants) and CFM-68k. More bonus points for gathering all or at least most of those in a single obese package.

Second exercise: figure out the APIs which were present in System 1.0 and are supported in 64-bit on Mac OS X. It’s a short list, but I know for sure it is not empty.


Macintosh C Carbon: besides the old Inside Mac books (most of which can still be found here), this is how I learned Carbon programming back in the day.

Gwynne Raskind presents the Mac toolbox for contemporary audiences in two companion articles, reminding you in particular never to neglect error handling: you can’t get away with it when using the toolbox APIs.