The Mac App Store and long-term app preservation

I am fortunate enough not to have apps on the Mac App Store, and I have bought few enough apps there (for reasons I previously explained) that I initially missed the meltdown, caused by the store, of many apps bought from it. This is not an outage, in the sense that an outage implies the user is aware on some level of being dependent on an online resource; this is worse. This is not just unacceptable: it is a fundamental violation of the trust that both app developers and customers have placed in Apple, namely that bought, installed and compatible apps would keep working (short of any dramatic action taken for consumer protection, such as revoking the certificate of a malicious developer).

Worse, this has implications beyond the Mac App Store per se. As you know, Apple reserves many APIs related even remotely to online services for Mac App Store apps: even when a non-Mac App Store version of an app is available, it cannot use iCloud (is there a typo equivalent of “revealing tongue slips”? Because I initially typed “iCould”…) or Apple Maps. So, in turn, how am I supposed to trust iCloud or Apple Maps, if I am not sure I can keep running any app that accesses them? As if these services did not already have a reputation…

But even more troubling are the implications for long-term usage and preservation of software and its data. The consumer issue of not being able to trust that a purchased app will keep running even when nothing else changes is bad enough (you could set back the system clock, but how realistic is that, even on an unconnected system? You would no longer be able to trust the creation or modification dates of any of your documents, for a start); but the inability to preserve running software on a cultural level is frightening. Even more so for the documents with proprietary formats created by that software. I’ve been following with interest the initiatives of Jason Scott in that area, and I am definitely down with the need to preserve this software and data, not just for ourselves, but for future generations. And the Mac App Store (and the iOS App Store, the only difference being that we have not had any fire drill on that side. Yet.) is “not helping”. To put it mildly, because this blog tries to be family friendly.

I initially thought there was no DRM component to this story: certificates, “damaged apps” — that sounded like code signature infrastructure, in other words protection of the consumer against malware, something the user can disable (ostensibly, at least). But when I tried to convince my Mac to run this app as an unsigned app, I encountered what is extremely likely to be the store DRM: I initially got the “your app was bought on another machine” message, so I tried deleting the receipt, but then I got the dreaded “app damaged” message, at which point I removed the signature. But no luck: in that case the app does not launch either, with the console printing:

13/11/15 15:36:23,608 ([0x0-0x2cc2cc].com.tapbots.TweetbotMac[9317]) Exited with code: 173
13/11/15 15:36:23,663 storeagent: Unsigned app (/Applications/

Since I removed the MAS receipt, how is storeagent getting involved? Probably in order to decode the app DRM, and as you can see it refuses to do so now that the app is unsigned. So now we have DRM preventing us from running our legitimately bought software. I have kept a pristine copy of the app in a safe location for further attempts, but the only way forward I can see is to create a new root CA, install it on the machine as a trusted root, and redo the signing chain; and even that might not work if the DRM is somehow tied to the signature chain.
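For the record, the sequence of attempts described above can be summarized as shell commands (a sketch only: the app path is an example, and stripping the signature is permanent, so work on a copy):

```shell
APP="/Applications/"   # example path; use a pristine copy of the app

# 1. Delete the Mac App Store receipt (the "bought on another machine" stage)
rm -r "$APP/Contents/_MASReceipt"

# 2. After the "app damaged" message: strip the code signature entirely
codesign --remove-signature "$APP"

# 3. Inspect what, if anything, remains of the signature
codesign --display --verbose=2 "$APP"
```

Even after all this, as described above, storeagent still steps in at launch time and refuses the now-unsigned app.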

I was already wary of buying apps without trials; this event guarantees that I will never buy anything else from the Mac App Store (and will try to obtain direct licenses for apps already bought there). No direct version of your app? You don’t get my business. I would delete the Mac App Store app if I could. Apple could change my mind by providing verifiable commitments on the ability to disengage the signature checks and on operational service levels, and even then… Furthermore, Apple owes an apology to all the app developers who trusted them with the Mac App Store and who had a long day (and will continue to have long days) of customer support entirely due to Apple’s incompetence.

Apple later on sent emails to developers to explain themselves on the issue; I will count that as the aforementioned apology. I don’t mind that they took a few business days to react, as they themselves had to figure out what the problems were from the multiple reports; I do mind that, operationally, they allowed developers and themselves to be caught flat-footed in the first place: why isn’t there anyone at Apple checking that a sample of Mac App Store apps still run on a machine with the clock perpetually set one month in the future? Still, I guess I’m glad we got an answer at all. — November 19, 2015

~ Reactions ~

Rainer Brockerhoff, besides presenting some investigations and corrections, took some issue with my investigation methods, and we had an exchange in the comments there. Don’t miss it.

Thoughts on developer presentation audiences

So after the WWDC 2015 keynote, reading Dr. Drang (via Six Colors) and generally agreeing with his take sparked some reflections. In particular, as a software developer myself, a side reflection about what (if anything) is particular about software developer audiences, so that people like Drake and Jimmy Iovine, in the unlikely case they read this, don’t think of software developers as a mean crowd; and who knows, it could be applicable to other show-biz types in case they present at events like Build or Google I/O.

But to begin with, whose bright idea was it, honestly, to have Apple Music be the “one more thing” of a WWDC keynote? Especially of a keynote that was one of the longest, if not the longest, in recent history (I’d check, but is currently redirecting me to…). Only one of those (“one more thing” or “WWDC keynote”) would have been fine, but not both. You can have, at the end of a long presentation, at a time when the attention of the crowd (which, if it needs to be reminded, was up very early and spent a lot of time in line, because otherwise you end up in an overflow room) may be waning, a “one more thing” about something outside the theme, for instance a new hardware announcement, on condition that this announcement relieves some pain points (e.g. new hardware that makes a previously impossible combination possible, easing the life of developers who use one and develop for the other) or otherwise has elements that can spark the audience’s specific interest, so that you can be guaranteed some cheers and applause and keep the crowd interested. Apple Music, even if it materializes as a good product, does not have that.

Don’t get me wrong, software developers like music just as much as the next guy. And heck, we’ve seen worse, including at WWDC or iPhone SDK events (Farmville, anyone?). In fact, software developers are not a tough crowd; they will almost always give at least polite applause when cued: I remember the Safari kickoff presentation at WWDC 2010, with an audience therefore presumably dominated by web developers, where Safari extensions ended up being introduced, with one of the presenters showing… how they ported their ad blocking extension to Safari. To an audience at least in part making a living (directly or indirectly) from web advertising. Even then, he got polite applause at the end of his presentation, like everyone else. And outside of very specific, preexisting situations (it was at Macworld, but could just as well have happened at WWDC) software developers will never boo a presenter offstage. Why is that? Well, an important part is that software developers have been in the presenter’s shoes before; not at this scale, most likely, but they know it’s a tough job, either to have a demo that works (hence the applause even for incremental features that were seen on other platforms before), or worse, if there is no demo, to convey the importance of the software you are talking about without being able to show it. And even if the presenter is mediocre, contrary to a mediocre artist, software developers know that applauding the presenter will not make him stay longer, and his script will end soon enough anyway, so they might as well politely applaud.

Software developers, as tech enthusiasts, are more generally interested in anything that moves the state of the art forward, even if it has little relation to any technology they will actually use in their job, on condition they can see what is new or specific about it; or at least, they want to be able to take the announcement apart, as we’re going to see. Plus they are heavy users of the platforms they are developing for, so anything that makes users’ lives easier makes theirs easier too, and they will react to that.

But software developers are also a wary bunch. All of them have been burned before; it doesn’t matter whether they trusted a company they shouldn’t have or whether they were betrayed without any way to foresee it, they all have a past experience of betting on something (a platform, an API, a service, a tool, etc.) and losing their bet. So in presentations they don’t want to be given dreams of an ideal future where the product magically does what they expect of it; rather they want demos, or at the very least material claims that can be objectively evaluated as soon as anything concrete is provided. Triply so for Internet services. Everything else is just a setup to get to a demo, as far as a software developer is concerned. As a result, software developers also take apart everything that is being said to try and figure out how it works, in particular to foresee any potential limitation; yes, to an extent dissecting everything that way does take out the magic, but remember that for software developers this is a matter of survival. Again, triply so for Internet services. Do I need to remark that the Apple Music presentation, even the demo from Eddy Cue, provided little in the way of these material claims? He did show how the user interacts with the service, but, being an Internet service, this does not show how it really works. Also, this means that the crowd may be too busy trying to make sense of what you are saying to react to your quick quips.

Software developers are also extremely good at math and mental arithmetic. It goes with the job. They will double-check everything you throw at them, live, so don’t ever expect to be able to assert claims that don’t literally add up.

As with any recurring event (as opposed to, say, a concert date, where this is less the case), there is also a lot of lore and unsaid things that are nevertheless known to both the regular presenters and the audience. If you’re not a regular presenter, it’s not something you can tap into (so yes, you will be at a disadvantage compared to the regular presenters), but you had better be briefed about them to avoid running into them by accident. I remember a high school incident where I had an exchange with a classmate that the class couldn’t miss: it was about flint stone blades, and I can’t remember what the root of the problem was, but I was countering that this was no way to build a hatchet that could chop down trees, to which my classmate countered that chopping down trees was not those guys’ aim. I think I let him have the point at the time; it was presumptuous of him to assume that, but at the same time it was presumptuous of me to assume this use case, which was just something I came up with off the top of my head as a use for a hatchet. Fast forward a few months, but still in the same school year: the class took a field trip to a place where stone age tooling was studied, and during his presentation the guide explained that, following the methods of the period (preparing and carving the stone, pairing it with a wooden handle and attaching it with then-current techniques), he got a tool that worked very well; as an example, he had been able to chop down a small tree with it.

After maybe a beat, the whole class erupted in laughter.

My classmates had clearly not forgotten; the poor presenter was all “Was it something I said?” (I was tempted to reach out to him, shake his hand and thank him profusely), and our teacher fortunately came to his rescue by promising to tell him later. There was of course no way he could have been briefed for this, but the same is not true for an event such as WWDC.

And, as a “one more thing”, I think I should close with a mention of the other WWDC keynote audience: those who watch the live stream and react on Twitter instead. This is not exactly a developer audience, but pretty close. However, the reactions tend to be all over the place; in particular you always get the complaint (which ends up trending each time!) that no new hardware (or no new hardware the poster was interested in) was announced, even though this is a silly expectation: you can always hope for some, mind you, but given there is always a lot to say about future OS updates at WWDC, especially now with three major Apple platforms, it’s better for Apple to make these announcements at other times. So it’s hard to get a feel for the WWDC live stream audience.

WWDC 2015 Keynote not-quite-live tweeting

(Times are GMT+2)

  • 10:14 PM – 8 Jun 2015: Looking at the previewed Mac OS X improvements, I think it’s too bad @siracusa is not going to be reviewing them (but it’s his call) #WWDC15
  • 10:16 PM – 8 Jun 2015: Speaking of @siracusa , I’d almost prefer for multitasking dividers not to be repositionable. #positioning #OCD #WWDC15
  • 10:19 PM – 8 Jun 2015: Metal on the Mac: “Of course this means war” #OpenGL #WWDC15
  • 10:21 PM – 8 Jun 2015: At long last we have search in third-party apps on iOS! (viz. ) #WWDC15
  • 10:23 PM – 8 Jun 2015: (Around the 37 minute mark): It’s funny, because I’m actually exercising as I watch the keynote stream and take these notes. #WWDC15
  • 10:24 PM – 8 Jun 2015: Siri does still rely on network services, so it can’t all “stay on the device”… #WWDC15
  • 10:27 PM – 8 Jun 2015: (Around the 43 minute mark): the Apple guys have turned into Stanley Yankeeball (minus the Stanley) #WWDC15
  • 10:33 PM – 8 Jun 2015: These improvements to notes might be good, or might turn it into a mess (is it a word processor? For structured text? Something else?) #WWDC15
  • 10:35 PM – 8 Jun 2015: Mapping exits of tube stations is great, not even all of the transit systems’ dedicated apps do so. #WWDC15
  • 10:37 PM – 8 Jun 2015: Since it’s only in select countries at first, the new news app is more than just an aggregator and probably has some editorial. #WWDC15
  • 10:39 PM – 8 Jun 2015: Keyboard gestures for editing are great, but are they like cursor keys (more accurate) or more like mouse movement? #WWDC15
  • 10:40 PM – 8 Jun 2015: Yes! Yes! Yes! Yes! Split screen multitasking on iPad! Amply justifies upgrading to the Air 2. #WWDC15
  • 10:41 PM – 8 Jun 2015: I don’t know how practical multi-touch on multiple apps is, but it sure rocks. #WWDC15
  • 10:42 PM – 8 Jun 2015: New low power mode is the battery equivalent of low memory warnings. #WWDC15
  • 10:43 PM – 8 Jun 2015: Apple game development frameworks still aren’t credible as long as Apple is not dogfooding them. We want Apple-made games! #WWDC15
  • 10:45 PM – 8 Jun 2015: With Home Kit through iCloud, better hope that iCloud is secure… (or that this particular part can be disabled). #WWDC15
  • 10:46 PM – 8 Jun 2015: About Swift: open source is nice, standardization would be nicer. Yes, Objective-C isn’t a standard, but C and C++ are. #WWDC15
  • 10:47 PM – 8 Jun 2015: With iOS9 still supporting the iPad 2, get ready to have to support ARMv7 and the Cortex A9 for some time (it’s not hard, mind you). #WWDC15
  • 10:48 PM – 8 Jun 2015: Can’t really comment on watchOS improvements, since I don’t know much about what it currently does anyway. #WWDC15
  • 10:51 PM – 8 Jun 2015: With native Apple Watch apps, get ready for a “Benchmarking on your wrist” post from @chockenberry as soon as watchOS 2.0 lands. #WWDC15
  • 10:52 PM – 8 Jun 2015: (around the 1:40 mark): wasn’t expecting them to be ready to demo the new watchOS features live so soon after Apple Watch release. #WWDC15
  • 10:54 PM – 8 Jun 2015: (around the 1:41 mark): Kevin Lynch was tethered by the wrist during the Apple Watch demo. Is that punishment for Flash? #WWDC15
  • 10:56 PM – 8 Jun 2015: I was even less expecting them to have a new watch OS beta ready today, 6 weeks after the Apple Watch release. #WWDC15
  • 10:57 PM – 8 Jun 2015: Between Jimmy Iovine and the two women (sorry ladies, I did not write down your names), many new presenters, that’s great. #WWDC15
  • 10:58 PM – 8 Jun 2015: Interesting that they would present Apple Music at WWDC, would appear more fitting for an iPhone or music event. #WWDC15
  • 10:59 PM – 8 Jun 2015: I am more interested in music I can keep, though global radio is interesting. #WWDC15
  • 11:01 PM – 8 Jun 2015: Nothing has really replaced the records stores so far when it comes to music discovery. Will Apple Music do better than Ping? #WWDC15
  • 11:02 PM – 8 Jun 2015: With the news app and Apple Music, Apple is doing more editorial/curation than they ever did. #WWDC15
  • 11:03 PM – 8 Jun 2015: I won’t comment on Apple Pay until it reaches France. #WWDC15
  • 11:04 PM – 8 Jun 2015: Sure, you can ask Siri for the music used in Selma, but she’s no Shazam. #WWDC15
  • 11:05 PM – 8 Jun 2015: After the demo, my feeling of Apple Music is: Netflix for music. Android support is interesting… #WWDC15
  • 11:06 PM – 8 Jun 2015: Again, interesting to have a live performance at WWDC, rather than at an iPhone or music event. #WWDC15
  • 11:09 PM – 8 Jun 2015: And that’s it for the #WWDC15 keynote comments. Now back to notifying of new posts.
  • 8:53 AM – 9 Jun 2015: Some more post-sleep #WWDC15 thoughts before returning to normal:
  • 8:56 AM – 9 Jun 2015: First, there was no homage or reference (that I could spot) in the keynote to @Siracusa and his Mac OS X reviews, I’m disappointed. #WWDC15
  • 9:01 AM – 9 Jun 2015: Second, maybe it’s just me, but I get the impression the keynote is less and less for developer-level features. #WWDC15
  • 9:13 AM – 9 Jun 2015: Third, no free Apple Music tier means people won’t get the impression this is music they can access forever. #WWDC15
  • 9:15 AM – 9 Jun 2015: Fourth and I’ll be done: with Apple global radio, what happens to iTunes Radio? #WWDC15

Thank you, Mr. Siracusa

Today, I learned that John Siracusa has retired from his role of writing the review of each new Mac OS X release for Ars Technica. Well, review is not quite the right word: as I’ve previously written when I had the audacity to review one of his reviews, what are ostensibly articles reviewing Mac OS X are, to my mind, better thought of as book-length essays that aim to popularize the progress made in each release of Mac OS X. They will be missed.

It would be hard for me to overstate the influence that John Siracusa’s “reviews” have had on my understanding of Mac OS X and on my writing; you only have to see the various references to John or his reviews I made over the years on this blog (including this bit…). In fact, the very existence of this blog was inspired in part by John: when I wrote him with some additional information in reaction to his Mac OS X Snow Leopard review, he concluded his answer with:

You should actually massage your whole email into a blog post [of] your own.  I’d definitely tweet a link to it! 🙂

to which my reaction was:

Blog? Which blog? On the other hand, it’d be a good way to start one

Merely 4 months later, for this reason and others, this blog started (I finally managed to drop the information alluded to in 2012; still waiting for that tweet 😉 ).

And I’ll add that his podcasting output may dwarf his blogging in volume, but, besides the fact I don’t listen to podcasts much, I don’t think the two really compare, mostly because podcasts lack the reference aspect of his Mac OS X masterpieces, due to their inherent limitations (not indexed, hard to link to a specific part, not possible to listen to in every context, etc.). But, ultimately, it was his call; as someone, if I remember correctly, commented on the video of this (the actual video has since gone the way of the dodo): “Dear John, no pressure. Love, the Internet”. Let us not mourn, but rather celebrate, from the Mac OS X developer preview write-ups to the Mac OS X 10.10 Yosemite review, the magnum opus he brought to the world. Thank you, Mr. Siracusa.

MPW on Mac OS X

From Steven Troughton-Smith (via both Michael Tsai and John Gruber) comes the news of an MPW compatibility layer project and how to use it to build code targeting Classic Mac OS, and even Carbonized code, from a Mac OS X host, including Yosemite (10.10). This is quite clever, and awesome news, as doing so was becoming more and more complicated, and in practice required keeping one or more old Macs around.

Back in the days of Mac OS X 10.2-10.4, I toyed with backporting some of my programming projects, originally developed in Carbon with Project Builder, to MacOS 9, and downloaded MPW (since it was free, and CodeWarrior was not) to do so. The Macintosh Programmer’s Workshop was Apple’s own development environment for developing Mac apps, tracing its lineage from the Lisa Programmer’s Workshop, which was originally the only way to develop Mac apps (yes, in 1984 you could not develop Mac software on the Mac itself). If I recall correctly, Apple originally had MPW for sale, before they made it free when it could no longer compete with CodeWarrior. You can still find elements from MPW in the form of a few tools in today’s Xcode — mostly Rez, DeRez, GetFileInfo and SetFile. As a result, I do have some advice when backporting code from Mac OS X to MacOS 9 (and possibly earlier, as Steven demonstrated).

First, you of course have to forget about Objective-C, forget about any modern Carbon (e.g. HIObject, though the Carbon Event Manager is OK), forget about Quartz (hello QuickDraw), and forget about most of Unix, though if I recall correctly the C standard library included with MPW (whose name escapes me at the moment) does have some support beyond standard C, such as open(), read(), write() and close(). Don’t even think about preemptive threads (or at least, ones you would want to use). In fact, depending on how far back you want to go, you may not even have support for what you would not consider niceties today, but which were actually nicer than what came before; for instance, before Carbon, a Mac app would call WaitNextEvent() in a loop to sleep until the next event that needed processing, and then the app would have to manually dispatch it to the right target, including switching on the event type, performing hit testing, etc.: no callback-based event handling! But WaitNextEvent() itself did not appear until System 7, if I recall correctly, so if you want to target System 6 and earlier, you have to poll for events while remembering to yield processing time from time to time to drivers, to QuickTime (if you are using it), etc. Similarly, if you want to target anything before MacOS 8 you cannot use Navigation Services and instead have to get acquainted with the Standard File Package… and FSRefs are not usable before MacOS 9, as another example.

When running on MacOS 9 and earlier, the responsibilities of your code also increase considerably. For instance, you have to be mindful of your memory usage much more than you would have to be on Mac OS X, as even when running with virtual memory on MacOS 9 (something many users disabled anyway) your application only has access to a small slice of address space called the application’s memory partition (specified in the 'SIZE' resource, and which the user can change): there is only one address space in the system, partitioned between the running apps; as a result memory fragmentation becomes a much more pressing concern, requiring in practice the use of movable memory blocks and a number of assorted techniques (moving blocks high, locking them, preallocating master pointers, etc.). Another example is that you must be careful to leave processor time for background apps, even if you are a fullscreen game: otherwise, for instance, if iTunes is playing music in the background, it will keep playing (thanks to a trick known as “interrupt time”)… until the end of the track, and become silent from then on. Oh, and did I mention that (at least before Carbon and the Carbon Event Manager) menu handling runs in a closed event handling loop (speaking of interrupt time) that does not yield any processing time to your other tasks? Fun times.

Also, depending again on how far back you want to go, you might have difficulty using the same code on MacOS 9 and Mac OS X, even with Carbon and CarbonLib (the backport of most of the Carbon APIs to MacOS 9 as a library, in order to support the same binary, and even the same slice, running on both MacOS 9 and Mac OS X). For instance, if you use FSSpec instead of FSRef in order to run on MacOS 8, your app will have issues on Mac OS X with file names longer than were possible on MacOS 9; they are not fatal, but they will cause your app to report the file name as something like Thisisaverylongfilena#17678A… not very user-friendly. And the Standard File Package is not supported at all in Carbon, so you will have to split your code at compile time (so that the references to the Standard File Package are not even present when compiling for Carbon) and diverge at runtime, so that when running on System 7 the app uses the Standard File Package, and when running on MacOS 8 and later it uses Navigation Services; plus the assorted packaging headaches (e.g. using a solution like FatCarbon to have two slices: one ppc slice that links to InterfaceLib, the pre-Carbon system library, linking weakly to the Navigation Services symbols, and one ppc slice that links to CarbonLib and only runs on Mac OS X).

You think I’m done? Of course not, don’t be silly. The runtime environment of MacOS 9 is in general less conducive to development than that of Mac OS X: the lack of memory protection not only means that, when your app crashes, it is safer to just reboot the Mac, since the crash may have corrupted the other applications, but also means you typically do not even know when your code, say, follows a NULL pointer, since that action typically does not fault. Cooperative multitasking also means that a hang in your app hangs the whole Mac (only the pointer still moves), though that can normally be resolved with a good command-alt-escape… after which it’s best to reboot anyway. As for MacsBug, your friendly neighborhood debugger… well, for one, it is disassembly only, no source. But you can handle that, right?

It’s not that bad!

But don’t let these things discourage you from toying with Classic MacOS development! Indeed, it is not as bad as you could imagine from the preceding descriptions: none of those things matter when programming trivial, just-for-fun stuff, and even if you program slightly-less-than-trivial stuff, your app will merely require a 128 MB memory partition where it ought to take only 32 MB, which doesn’t matter in this day and age.

And in fact, it is a very interesting exercise, because it allows a better understanding of what makes the Macintosh the Macintosh, by seeing how it was originally programmed. So I encourage you all to try and play with it.

For this, I do have some specific advice about MPW. For one, I remember MrC, the PowerPC compiler, being quite anal-retentive about certain casts, which it just refuses to do implicitly: for instance, the following code will cause an error (not just a warning):

SInt16** sndHand;
sndHand = NewHandle(sampleNb * sizeof(SInt16));

You need to explicitly cast:

SInt16** sndHand;
sndHand = (SInt16**)NewHandle(sampleNb * sizeof(SInt16));

It is less demanding when it comes to simple casts between pointer types. Also, even though it makes exactly no difference in PowerPC code, it will check that functions that are supposed to have a pascal attribute (meant to mark the function as being called using the Pascal calling conventions, which does make a difference in 68k code), typically callbacks, do have it, and will refuse to compile if this is not the case.

If you go as far back as 68k: if I remember correctly, int is 16 bits wide in the Mac 68k environment (this is why SInt32 was long up until 64-bit arrived: in __LP64__ mode SInt32 is int), but it became 32 bits wide when ppc arrived; so be careful, and it’s better not to use int in general.

QuickDraw is, by some aspects, more approachable than Quartz (e.g. no objects to keep track of and deallocate at the end), but on the other hand the Carbon transition added some hoops to jump through that make it harder to just get started with it; for instance, something as basic as getting the black pattern, used to ensure your drawing is a flat color, is described in most docs as using the black global variable, but those docs should have been updated for Carbon: with Carbon, GetQDGlobalsBlack(&blackPat); must be used merely to get that value. Another aspect which complicates initial understanding is that pre-Carbon you would just directly cast between a WindowPtr, (C)GrafPtr, offscreen GWorldPtr, etc., but when compiling for Carbon you have to use conversion functions, for instance GetWindowPort() to get the port for a given window… but only for some of those conversions, the others still being done with casts, and it is hard to know at a glance which are which.

When it came to packaging, I think I got an app building for classic MacOS relatively easily with MPW, but when I made it link to CarbonLib I got various issues related to the standard C library, in particular the standard streams (stdin, stdout and stderr), and I think I had to download an updated version of some library or some headers before it would work and I could get a single binary that ran both in MacOS 9 and natively on Mac OS X.

Also, while an empty 'carb' resource with ID 0 does work to mark the application as carbonized and make it run natively on Mac OS X, you are supposed to instead use a 'plst' resource with ID 0, putting in it what you would put in the Info.plist if the app were in a package. And it is not safe to use __i386__ to decide between framework includes (#include <Carbon/Carbon.h>) and “flat” includes (#include <Carbon.h>); typically you’d use something like WATEVER_USE_FRAMEWORK_INCLUDES, which you then set in your Makefile depending on the target.

Lastly, don’t make the same mistake I originally did: when an API asks for a Handle, it doesn’t just mean a pointer to a pointer to something, it means something that was specifically allocated with NewHandle() (possibly indirectly, e.g. with GetResource(), loaded if necessary), so make sure that is what you give it.

I also have a few practical tips for dealing with Macs running ancient system software (be they physical or emulated). Mac OS X removed support for writing to an HFS (as opposed to HFS+) filesystem starting with Mac OS X 10.6, and HFS is the only thing MacOS 8 and earlier can read. However, you can still, for instance, write pre-made HFS disk images to floppy disks with Disk Utility (and any emulator worth its salt will allow you to mount disk images inside the emulated system), so your best bet is to use a pre-made image to load some essential tools, then, if you can, set up a network connection (either real or emulated) and transfer files that way, making sure to encode them in MacBinary before transfer (which I generally prefer to BinHex); unless you know the transfer method is Mac-friendly the whole way, always decode from MacBinary as the last step, directly on the target. Alternatively, you can keep a Mac running Leopard around to directly write to HFS floppies, as I do.

Okay, exercise time.

If you are cheap, you could get away with only providing a 68k build and a Mac OS X Intel build (except neither of these can run on Leopard running on PowerPC…). So the exercise is, on the contrary, to successfully build the same code (modulo #ifdefs, etc.) for 68k, CFM-PPC linking to InterfaceLib, CFM-PPC linking to CarbonLib, Mach-O Intel, Mach-O 64-bit PPC, and Mach-O 64-bit Intel (a Cocoa UI will be tolerated for those two), for optimal performance everywhere (ARM being excluded here, obviously). Bonus points for Mach-O PPC (all three variants) and CFM-68k. More bonus points for gathering all or at least most of those in a single obese package.
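For the Mach-O part of the exercise at least, combining the builds is the easy bit, since lipo can glue the slices together (file names here are hypothetical, and the 68k and CFM builds are out of lipo’s reach since they are not Mach-O):

```shell
# Combine the Mach-O builds into a single fat binary; the CFM and 68k
# binaries live in PEF containers/resource forks and cannot be folded in.
lipo -create MyApp.i386 MyApp.ppc64 MyApp.x86_64 -output MyApp
lipo -info MyApp   # sanity check: should list the architectures present
```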

Second exercise: figure out the APIs which were present in System 1.0 and are supported in 64-bit on Mac OS X. It’s a short list, but I know for sure it is not empty.


Macintosh C Carbon: besides the old Inside Mac books (most of which can still be found here), this is how I learned Carbon programming back in the day.

Gwynne Raskind presents the Mac toolbox for contemporary audiences in twin articles, reminding you in particular to never neglect error handling; you can’t get away with it when using the toolbox APIs.

Factory: The Industrial Devolution on iPhone: a cruel joke

The “version” of Factory: The Industrial Devolution that can currently be found on the iOS App Store (no link; version 1.0, from Samuel Evans), while it bears the same name and graphics as the classic, beloved Mac game, is actually nothing but a cruel joke played on those who, like me, would be ecstatic at the idea of playing a proper port of Factory on the iPhone, given how ideally suited the gameplay would be to playing on a touchscreen.

That iPhone app is a cruel joke as, besides being very user-unfriendly (“pause? what pause?”), having fundamentally game-altering gameplay changes, no sound, and a number of other limitations, its main characteristic is to simply crash at the end of the very first level (provided I manage to put at least one correctly made product in the truck), with no way to go further. Every time (which makes me remark that the App Store reviewer must have been asleep at the wheel, in more ways than one). On both the iPhone 5S and iPad 2, running iOS 7.1, which represent two extremes of currently available iOS devices, at least at the time this app was released (June 2014).

There are in fact hints that the engine of this app has nothing to do with that of the original game (for instance, look at the objects while they enter a routing module: occlusion is not the same1); maybe it is just a quick and dirty port of this code to the iOS App Store, I can’t tell for sure. But at any rate this gives me sincere hope that Patrick Calahan, the author of Factory on the Mac, is not associated with this mockery of his game. Unfortunately, I do not know of any current way to contact him, so I am sending this out there in the hope that one of my readers does know, or will in turn forward this message so that it is seen by someone who does know, so that Patrick Calahan can be properly notified of the existence of that thing.

In the meantime, avoid that app; it’s not worth the bandwidth necessary for its download.

The person responsible for that iPhone app is being notified, and I am ready to publish any response, at most as long as this post, that he cares to send my way.

  1. Of course I can run the original version to compare with, do you think I keep this machine running solely for MacPaint?

More on Mac OS X being the new Classic

In Mac OS X is the new Classic, I made some bold predictions about Apple’s operating system future, but forgot to attach any sort of deadline to these predictions. Allow me to remedy that omission here by adding this: I believe Apple will announce to developers the operating system transition described in that blog post within the next five years, at WWDC 2018 at the latest. So if WWDC 2018 comes and goes and no such thing has been announced, you are free to point and laugh at me.

On the same subject, I omitted that not only do I believe that Apple will allow Developer ID apps on desktop iOS, but I believe Apple will in fact allow distribution of desktop iOS apps without requiring developers to pay Apple for that privilege (either through a free certificate or no certificate needed at all), though, as with Mac OS X Mountain Lion currently, the default setting will likely not allow running such apps. This is consistent with what I already expressed in my blog post about Developer ID.

Lastly, for those of you wondering, I finally remembered an example of behavior that apps ported to Mac OS X could technically get away with, but that ended up “sticking out” among native apps, including to the user base: drawing directly to the screen outside of full-screen mode. At the start of Mac OS X some games would do so, even in windowed mode, but it got noticed, as it resulted in ugly interactions with the transparent on-screen volume and brightness change overlays, and then very bad interactions with Exposé; so developers were pressured not to do so, and by the time of Mac OS X 10.4 Tiger pretty much no newly released app was drawing directly to the screen. In the same way, Apple would not even need to mandate all aspects of what a good iOS desktop app would be; they could allow some things that are not completely iOS-like so that developers can more easily port their apps, knowing that over time the community will ostracize such exceptions.

How can we know whether we can effect change from the outside?

Marco Arment thinks we can have some influence, however measured, over Apple through our writing (or podcasts or YouTube channels or other media, as the case may be); and I think so too, but I have to wonder to what extent it is true, and as a consequence whether it is even worth keeping in mind.

The first issue is that we don’t know whether a decision was influenced by externally produced elements or not. As Marco wrote, Apple is not a waterfall dictatorship. But it is also a peculiar company, which has consistently eschewed the “common wisdom” thrown at them (by the tech press in particular) for quite some time now, instead doing what they feel is best for them and for the user, and the least we can say is that it seems to have worked okay for them (so far). So by necessity Apple employees have, consciously or not, filters with which they dismiss many of these external opinions. Of course, we Apple developers are quite attuned to these filters, but on the other hand we are both users with slight delusions as to how representative our needs are of the customer base at large, and also a party in a relationship with Apple, and as such likely come off as biased, especially on the subject of the relationship between Apple and third-party developers (iOS App Store policies come to mind, in particular). Paul Graham made me realize this when he wrote in Apple’s mistake:

Actually I suppose Apple has a third misconception: that all the complaints about App Store approvals are not a serious problem. They must hear developers complaining. But partners and suppliers are always complaining. It would be a bad sign if they weren’t; it would mean you were being too easy on them.

How do I convince an Apple employee, whom I cannot hear, that my criticism of something does not (just) come from the inconvenience this something causes me as a developer, but from my belief that this something causes subtle but important inconveniences for the user? I do not know. I mean, we still do not have trials on the iOS App Store after 5 years, for instance, so there has got to be some reason some of our pleas do not work.

Of course, I don’t expect an Apple representative to just go on the record telling us that external opinions influenced this or that decision. Duh. But there are a variety of unofficial channels through which this could be hinted at. For instance, in the particular domain of Radar (Apple’s bug reporting tool), Matt Drance, formerly of DTS, gave a talk at C4 (scroll down to “Drance – How to be a Good Developer”) about your relationship as a developer with Apple, in particular about Radar, giving information that I believe is not available anywhere else; my thanks go to him for doing this talk, and to Philippe Casgrain for transcribing it and publishing his notes. As far as I know, no one who works or used to work at Apple has ever talked publicly about the influence of external opinions the way Matt Drance talked about how radars are seen from the inside.

The second issue is that the tech community in general seems pretty quick to decree every Apple reaction to be the result of press coverage/public outcry/radar duplicates/etc., without any evidence (other than circumstantial) supporting that affirmation. The prime example being, of course, Apple’s eventual decision to release an SDK for the iPhone, which many developers will tell you happened because the request had however many duplicates in Radar, while things couldn’t possibly have been that simple (personally, I believe conversations over private channels between Apple and important software vendors, game developers in particular, played the main role in convincing the relevant Apple executives). Even for the example Marco gives, the replacement of Helvetica Neue Light by Helvetica Neue in current iOS 7 betas, I remain unconvinced: this could just as easily be explained by the usual iteration of the design. The only instances where we can say with some credibility that Apple reacted to external influences are when they reversed their position on some app rejections (the Mark Fiore incident in particular comes to mind), and (along with some help from the FTC) when Apple renounced their additions to Section 3.3.1. As a result it is hard to have a conversation in the tech community about what works when it comes to publishing opinions Apple employees see as valid; and since we are not going to have this conversation with Apple either, this leaves most everyone in the dark.

As a result, I am going to keep writing my opinions here for my audience, but I am not going to pay attention to whether anyone at Apple could actually take them into account, as I do not want to worry over something whose outcome I cannot know.

Mac OS X is the new Classic

Doing predictions about Apple is risky business. Heck, predicting anything in the tech industry is risky business, and people who did would generally like you to forget they did so at some point: as we know, being a pundit means never having to say you’re sorry.

I’m willing to risk some skin on one matter, however, which is that of Apple’s operating system future. In short: Mac OS X is the new Classic, and iOS will move over to non-touchscreen interfaces, the desktop, and x86, while taking on more Mac-like features. This might be unexpected on my part, critical as I have been here of many aspects of the new computing that iOS in particular embodies (but also of the Mac App Store, Gatekeeper, etc.). But while I am critical of these aspects, I am also lucid enough to realize, starting in particular when the iPad shipped, that iOS is a revolution, and that all computers will one day work that way.

The iPad revolution

I used to think that our “old world” user interfaces, while not perfect, could eventually be learned by a majority of the population; I did not believe that only stupid people refused to work with computers (I very well understand unwillingness or inability, which sometimes aren’t completely distinct, to deal with the necessary complexities of today’s computers), but I also believed that our current state in computing was as good as it could be given the constraints, and that eventually computer usage would become part of the basic skills, like literacy and mathematics. In fact, I still believe this could have happened with current computers given enough time.

But Apple has shown a much better way. The Mac and Windows PC of 2010 are pre-alphabetic writing systems; nice pre-alphabetic systems, mind you, that anyone could learn in theory, but in practice only a minority of scribes were literate. By contrast, iOS is the first alphabetic system.

During the iPad introduction, for the roots of the iPad Steve strangely chose to go back to the beginnings of Apple with the Apple II (including a rare reference to Woz) and the first PowerBook; strangely, because what the introduction of the iPad best corresponds to is the original Macintosh. Both are compact computers with completely different graphical interfaces first seen elsewhere (the Lisa/the iPhone), but which suddenly took on a much broader significance, and were as a result derided as being for play and not real work. While it was not necessarily the Mac itself which succeeded afterwards, we all know where the ideas pioneered in the marketplace by the original Macintosh went.

What about the incumbents?

But does this mean that current operating systems will change to become more like iOS? I don’t think so; or, rather, for those which try to do so, I don’t believe it will work. As we have seen with Mac OS X, operating system transitions are as much about transitioning user expectations as they are about the technology itself. A change of this magnitude will require new operating environments that people can see with a new eye, rather than come from modifying an existing OS, where any change is perceived as grating, or may not really work unless most applications are on board.

For instance, in Mac OS X Lion Apple changed built-in apps to automatically save documents being edited, and to adopt a number of new behaviors (for instance, when a document is set to be an attachment of an email, the application editing the document, if any, is asked to save its work to disk so that the document state the user sees in the editing application is actually what is taken as the attachment), and encouraged third-party applications to do the same. This is undeniably progress: think of all the aggregate work that is no longer being lost to power failures or application crashes, across all Mac users running Lion or later.

But many people (including noteworthy people) complained about a part of this change, which is the loss of the “Save As…” command. The very notion of that command, which is that you are currently engaged in modifications to a file that you intend to not apply to that file but instead write elsewhere, is deeply rooted in the assumption that the system does not write file modifications until you explicitly tell it to; and so “Save as…” is foreign, at best, in an autosave world, but at the same time people have gotten used to it, so are they wrong to be missing it? After some adjustment attempts, Apple eventually relented and added back full “Save As…” functionality, but I don’t know if they should have; in particular, I have not checked to see how it interacts with the “attach to an email causes a save” feature.

Speaking of this feature, what do you think happens when someone uses it with, say, TextEdit documents currently being edited and it works as expected, and then uses it on an unsaved Word document (assuming Word has not been updated with that feature)? The last saved version gets attached instead of what the user expects, and he only knows about it when his correspondent replies with an inflammatory email telling that there is nothing new in the document, what have you been doing all this time? As a result, the feature will be poisoned in the mind of the user, who will adopt paranoid and near-superstitious steps to ensure the latest version always gets used in attachments. A feature that works unreliably is worse than no feature at all.

Another such problematic change is auto-quitting applications that are not topmost and have no document open. What is the issue here? After all the application has no visible UI in this case, and when recalled it would behave the same way (typically, by showing a new untitled document) whether it was auto-quit or not. Except this is not actually transparent at all: the application as a result disappears from the command-tab interface, most notably, so when the user wants to switch back to it using command-tab, he can’t.

What about the feature of saving the list of applications running upon shutdown, and reopening them upon restart (which in turn causes them to restore their open windows and state, if they support it)? There is a problem with that: applications can still veto a shutdown, so if half the applications have quit, but one prevents the shutdown at this point, then once you have cleared the situation with the reluctant app and you shut down again, only the list of applications still running at that point is saved and will get restored. So, oops, the feature only partially and inconsistently works, which I argue is worse than not having the feature at all.

Etc, etc. The list goes on. Now as John Siracusa ponders in his Lion review, users are expected to look down on applications (or mechanisms, like command-tab) that ruin the experience for everyone else, but it is not necessarily easy for the average user to make the correct connection, especially when the failure (wrong attachment, incomplete list of applications launched after the reboot) is removed from the cause. But even these cases are still easier to trace than what is coming next.

Copland 2010: Avoided

In Avoiding Copland 2010, John boldly (given, again, the risk inherent in doing so) tried to predict Apple’s operating system future, and referred to the situation of the Mac prior to Mac OS X, Copland being Apple’s failed attempt to provide the Mac with a modern operating system. As he explained, what the “classic” MacOS lacked compared to Mac OS X was:

  1. systemic features: memory protection and preemptive multitasking,
  2. a modern programming framework: Cocoa with its object-oriented programming model, in the form present in Mac OS X,

the combination of the two being more than the sum of the parts. He then went on to try and find the parallels which could be made between the Mac OS X of 2005 and the traditional MacOS of 1995, to figure out what Mac OS X would need in the future in order to stay competitive; but he focused on the second category of improvements, namely what kinds of programming paradigms Apple might need to adopt (such as functional programming, garbage collection, dynamic languages, etc.). I remember reading his article and being skeptical: after all, with memory protection and preemptive multitasking in Mac OS X, applications were now isolated from each other, so from then on, whatever new operating system functionality turned out to be needed to stay relevant (be it a systemic feature or a better programming framework), applications could adopt it individually without having to have the others on board, so no clean break similar to Mac OS X would ever be needed in the future.

Of course, I was wrong. But not because a better programming framework turned out to be necessary (I don’t feel there is much competitive pressure on Apple on this front); rather, I forgot the one big globally shared entity through which applications can still interfere with each other: the file system. So a new clean break is likely needed to provide sandboxing, the systemic feature to solve the situation. With sandboxing, applications… well, I hesitate to say that they would be properly isolated, because I now know better than to proclaim that, but they certainly would be better isolated by an order of magnitude.

And on the sandboxing front, Apple is certainly among the most competitive, if not in front (at least as far as client software is concerned)… thanks to the iPhone, which was sandboxed from day one. Thanks to a hardware opportunity a few years prior, Apple already has in house the production-quality, shipping operating system with a third-party software ecosystem, ready for the challenges of the 10s; compare to when they had to buy such a system from NeXT in the 90s. This time Apple is ready.

Playing in the sandbox

I have not yet shown why sandboxing is so necessary; it certainly is less obvious than for memory protection and preemptive multitasking. Doing so would take an entire post, but I should mention here two advantages: the mysterious file system, and privacy. For the first, it may be hard for us nerds to realize how mysterious the file system, or at least some aspects of it, is to the average user, what with it mixing the operating system, applications, user files, downloads, and the like. Why did this folder for a game (containing, as it turns out, game saves) suddenly appear in the Documents folder? And so forth; I am sure many such interactions that I don’t even think twice about, and no longer even notice, are instead frustrating mysteries for many users. Sandboxing would put an end to that.

As for privacy, you may say that you trust the software installed on your machine. But that’s the thing: you need to trust something entirely before you install it as a Mac app; there is no way to benefit from an application that is useful yet in which you do not have complete trust. Contrast that with the web, which is the platform actually challenging Apple the most in this area: with its same-origin restrictions, the web already has a strong sandboxing model which allows people to use Facebook and a myriad of other services, easily adopting a new one with low commitment and low friction, confident that the service only knows what the user entered into it (granted, the myriad of “like” buttons on the Internet means that Facebook is aware of far more than what users explicitly tell it, but that’s between Facebook and the sites integrating the “like” button; sites can choose not to do so). Apple needs sandboxing in order to stay competitive with the web.

Of course, you have probably noticed that Apple actually tried to retrofit sandboxing onto Mac OS X. When I heard that Lion would feature application sandboxing, I tried to imagine how such a feat could be possible while keeping the Mac file usage model, and thought that they would have to turn the open and save panels into a privileged external application, which upon selection from the user would grant the sandboxed app the right to operate on the file, while keeping API compatibility; no small feat. Which is exactly what they did. Impressive as it may be, I feel this effort to maintain the “all files mixed together” file system while adopting a sandboxed underlying infrastructure will eventually go to waste, as Apple will need to build a new user model to go with the sandboxed infrastructure anyway (for instance, users could rightly wonder why a file “just next to” one they selected is not automatically picked up, say the subtitles file next to a movie file): a new user model where the documents of each app are together by default, and “all files mixed together” is reserved for specific scenarios for which there could be a better UI.

And from a software development standpoint, the retrofitting of sandboxing onto Mac OS X has been less than stellar, with deadlines pushed back multiple times, Apple needing to implement scenarios they had not foreseen but that turned out to be so important for users that apps would not ship without them (like security-scoped bookmarks), notable app developers publicly giving up on sandboxing, and other apps having to become clunkier or lose features in order to adopt sandboxing… All for little benefit since, as far as I can tell, an unsandboxed app can still read the data stored by sandboxed apps.

The Mac OS X transition model

In theory, Apple could have introduced Mac OS X in a less disruptive fashion. Instead of pushing it to consumers when they did, they would have kept Mac OS X 10.1 and 10.2 developer-only, while encouraging developers to ship carbonized applications. Then for 10.3, with a sufficient number of Mac OS X-optimized apps around, they would have introduced it to consumers as simply the evolution of MacOS 9, with minimal cosmetic changes, with carbonized apps and new Cocoa apps running natively and other apps running in a completely transparent Classic environment, and users would have benefited from a more stable operating system without any visible transition.

But in practice, that would of course never have worked. There are numerous reasons why, but foremost is the fact that third-party application developers are not perfectly cooperative. They are ready to listen to and adopt Apple initiatives, but there had better be some sort of immediate payout, because otherwise they are spending time, and thus money, changing their code while adding exactly zero features. So developers would have waited for Apple to ship Mac OS X before carbonizing their apps (otherwise, what’s the benefit in a MacOS 9 environment?), and Apple would have waited for a critical mass of carbonized apps before shipping Mac OS X to consumers. Uh oh.

Instead, by shipping Mac OS X 10.0-10.1 to early adopters, then progressively to less early adopters, Apple provided right from the start an incentive for developers to ship carbonized apps: first some specialized developers would have enough of their audience using Mac OS X for it to be worth it, then more and more developers would find it worthwhile to port, etc. More importantly, early adopters and early applications would actually set up expectations, instead of incumbents setting them as in the case of a “progressive” transition, so that if an app got ported technically, but not in spirit (say, it used a lot of resources), it would stick out among the ported apps and would be pressured to fit the new standards. And playing just as important a role, the Classic ghetto clearly marked which apps would all go down together whenever one of them crashed, marking the boundaries of the negative network effects/broken windows (where a minority of apps could ruin it for everyone else). The Aqua interface was an essential part of this strategy, being no mere eye-candy, but eye-candy that coincided with a major OS environment change and helped mark it for average users.

Drawing directly to the screen, as I added in a clarification, is a good example of behavior that characterized a Mac OS X app that was not ported in spirit, and that these apps were pressured to drop. — September 3, 2013

By contrast, what Apple is currently doing with the “iOS-ification” of Mac OS X is merely adding superficial enhancements from iOS to a system whose behavior is still fundamentally defined by existing users and existing applications, a system which has to stay compatible with existing installs, existing peripherals, existing workflows, existing mechanisms, etc. Mark my words: history will look back at the recent “iOS-ification” of Mac OS X as being as quaint, superficial and meaningless as the Mac-like interface of the Apple IIgs.

Mac OS X as the Classic of iOS

I expect that Apple will soon realize that trying to drive Mac OS X toward having all iOS features is a dead end. And some setback or other, leading to cost-saving measures, or just their obsession with efficiency, will make them suddenly question why the heck they are spending effort maintaining two operating systems in parallel. I do not think Mac OS X and iOS will merge either; this kind of thing only happens in tech journalists’ wet dreams. There will be no Mac OS XI; instead, my prediction is that the Mac will have a new OS, and Mac OS X will run in a Classic mode inside that OS. But Apple already has a new OS, it’s called iOS, hence that new OS would be a “port” of iOS to the desktop, called… “iOS”.

So in practice, this would allow Apple to provide an incentive for applications to adopt modern features like sandboxing and a more modern interface (larger click/touch targets, etc.), by having them run outside the Mac OS X ghetto and inside iOS instead, and thus for instance making their data impossible to read from an app running inside that ghetto; and I am sure other such immediate incentives are possible, given the benefits of sandboxing to users. Currently Apple can get some applications to adopt sandboxing thanks to their clout, embodied in the Mac App Store, but application developers resent it; there have to be more positive means of encouragement. Also, the “iOS-native” area would be, again, defined by the early adopters (here, in fact, the existing mobile iOS apps, see later), so applications not having, for instance, autosaving would quickly stick out and be pressured to adopt it, if it weren’t mandatory in the first place in order to run outside the ghetto.

Meanwhile, the new Classic environment, with Mac OS X running inside, would allow users to keep using “legacy” Mac OS X apps during the transition, exactly the same way the Classic environment of Mac OS X eased the transition from MacOS 9. The current Mac OS X interface (which only has a passing resemblance to Aqua at this point) would be the new Platinum, a ghetto inside the world of the native iOS interface, which would play the role of the new Aqua.

Now given all the speculation about Apple potentially adopting ARM chips for future MacBooks, or Apple potentially using Intel chips in future mobile devices, I think it is important to clarify what I am talking about here. I expect Apple to keep using the most appropriate chip for each device class, which for each existing Apple product line is still ARM chips for tablets and smaller, and Intel chips for the MacBook Air and up, and I don’t expect it to change in the short run (Intel and ARM chips are just now starting to meet in the performance/power point in the no-man’s land between the two device classes).

So I expect iOS, for the purpose of running on laptop and desktop class devices, would be ported to x86. This is not much of a stretch, in fact extremely little of one, because first, iOS already runs on x86 (at the very least, the whole library stack and some built-in apps), and second, most iOS App Store apps already run on x86, both thanks to the iOS Simulator. This unassuming piece of iOS app development tooling, which includes the iOS runtime environment, indeed runs natively on the host Mac, without any processor emulation involved, and iOS apps need to be compiled for x86 to run on it. As many iOS app development processes rely on running the app in the iOS Simulator (for convenience when debugging, better debugging tools, and easier and faster design iteration), most iOS apps are routinely compiled for and work on x86, and would easily run on the x86 iOS port; and Apple, having most of iOS running on x86 already, would have little trouble porting the remainder. Once ported, it could even be installed on Macs shipping today and not require new machines.

What would the interface for iOS on the desktop be? I can’t remember in which Apple product introduction this came up, but the presenter said that touching a vertical desktop screen did not work in practice, and neither would it be practical to read and view a horizontal desktop screen — and that they had tried. So while some are clamoring for iOS to come to the laptop (and desktop) form factor so that we can finally have large touchscreen devices, they are misguided: if it were technically possible, then Apple would already have done it on the Mac, with Mac OS X. Maybe this will happen some day, but it would happen independently of iOS running on desktop class devices.

Instead, as using a touch interface with a mouse is at least tolerable (as opposed to using an interface meant for a mouse on a touchscreen device), iOS on the desktop could be used, at least for its basic functions, with a mouse; but in fact a Magic Trackpad would be recommended (and bundled with all desktops from that point on), as providing something of the best of both worlds: multitouch, and the ability to browse a screen much larger than typical finger movements through the usual mechanisms of acceleration, lifting and landing elsewhere, etc. Of course, since it would be a touch interface separate from the screen instead of a touchscreen, there would need to be a visible pointer of some sort; likely not the arrow shape we’ve all known and loved since 1984, as this arrow is meant for clicking with pixel accuracy, which is antithetical to the iOS interface paradigm. Maybe something as simple as a circle could do the trick.

There remains the most important and thorny question of what these laptops and desktops, now meant to run iOS, would be called; would we still call them Macs? I have no idea.

iOS would have to change, too

But just like Mac OS X was not Rhapsody, iOS, in order to run on the desktop and be an attractive target for current desktop apps and users, would have to adopt some Mac features. For one, the model of designing iOS apps specifically for each screen size clearly wouldn’t scale; there would have to be ways for apps to reliably extend their user interfaces. Most of the infrastructure is there, mind you (even with springs and struts a lot can be done), but there would have to be new mechanisms to better take advantage of space when it is available; think CSS media queries and responsive design, but for native apps.
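To make the analogy concrete, here is a minimal, language-neutral sketch of what "media queries for native apps" could amount to: a breakpoint table mapping the available width to a layout class, the way CSS breakpoints do. The names and threshold values here are invented for illustration; they are not any real iOS or Mac OS X API.

```python
# Hypothetical sketch of breakpoint-driven native layout, in the spirit
# of CSS media queries. All identifiers and values are made up.

BREAKPOINTS = [
    (480.0,  "compact"),   # phone-sized: single column, stacked UI
    (1024.0, "regular"),   # tablet-sized: sidebar plus content
]

def layout_class(width_points):
    """Return the layout class an app should adopt at a given width."""
    for max_width, name in BREAKPOINTS:
        if width_points < max_width:
            return name
    return "expanded"      # desktop-sized: multiple panes side by side

# An app would re-query this whenever its window or screen size changes:
for width in (320.0, 768.0, 1920.0):
    print(width, "->", layout_class(width))
```

The point of such a mechanism is that the app declares how it adapts, and the system picks the right variant as the available space changes, instead of the developer hand-designing one interface per screen size.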

But having apps themselves scale with the screen is only part of the challenge: with all that screen space, we certainly would not use it to run one app at a time, so ways to have multiple iOS apps on screen at the same time would have to be devised. Defining and implementing this new on-screen multitasking model for the 21st century would be an awful lot of work, no doubt about it.

Then iOS would have to adopt some sort of document filing system, of course, as well as Developer ID, because requiring all apps to come from either the iOS App Store or the Mac App Store would be untenable for a desktop machine with many uses, corporate ones in particular.

I believe Apple will in fact even allow unsigned apps, or something functionally equivalent, on desktop iOS, though maybe they will not be allowed to run in the default setting. — September 3, 2013

Then it would need the ability to specify different default apps for, say, the browser or email, as well as more comprehensive systems for apps to communicate with each other (with services, for instance) and better URL handlers (with UI to specify the default in case more than one app claims support). Also an iCloud that doesn’t suck, but that’s a necessity for iOS on mobile anyway.

Most significantly, iOS for the desktop would need to support some Mac OS X APIs, most notably AppKit (though not only it), possibly with some restrictions and dropped methods and objects (kind of like what Carbon was to the Mac Toolbox); it is safe to say, however, that Carbon wouldn’t make the jump.

And Apple would need to add (meaningful) removable media support, external drive support, backup support on such a drive, support for burning discs, wired networking, multiple monitor support, a Terminal app, a development environment, some solution to the problem of communal computing/multiple users, etc., etc.

It’s time

So, easy, right? Just a little bit of work. In all seriousness, it would be a big endeavor, maybe bigger than Mac OS X, and, like Mac OS X, iOS on the desktop would probably require a few iterations to get right. And that’s why Apple, whatever they actually intend to do (because they will not keep maintaining two operating systems forever), should start the process by telling developers as early as possible; Mac OS X, for instance, was first introduced in May 1998 (how time flies…) and wasn’t really ready to replace Mac OS 9 until 2002 or so.

I initially forgot to attach some sort of deadline to my prediction; that has now been fixed: I think it will have been announced by WWDC 2018, five years from now, at the latest. — September 3, 2013

So what do you think will happen? Even if I am completely off base here, I hope I have at least provided some interesting food for thought on the matter.

Good riddance, Google. Don’t let the door hit you on the ass on the way out.

This post is, in fact, not quite like the others. It is a parody of Fake Steve I wrote for parody week, so take it with a big grain of salt…

See here. Basically, the rocket scientists at Google have decided, after having used our WebKit for years in Chrome, that they, uh, suddenly did not need us any more, and forked WebKit like the true leeches they are. Dude, we are the ones who found KHTML and made WebKit what it is; if it weren’t for us, KHTML would only be known to three frigtards in west Elbonia and you would have had no engine to put in your hotrodded race car of a browser, so I guess, thanks for nothing, bastards.

Truth is, good riddance. Those know-it-alls at Google have been a pain in our ass ever since Chrome debuted. Where do I start? Like with V8. Oh God V8… I… uh…

Okay, I can’t do this. I can’t parody Fake Steve. I’ve got nothing on Dear Leader. He was pitch perfect: you would get the feeling the real Steve Jobs was writing just for you in his secret diary, all the while being satiric and outrageous enough that on some level you knew it was fake, yet the persona was so well maintained that you easily suspended disbelief and could not help thinking Steve could have shared these opinions. And he was insightful, oh of course he was: in the middle of some ludicrous story you would feel you were being enlightened about the way the tech industry or the press or tech buyers worked, and it didn’t matter if it was made up, because it was a way to provoke us into thinking about how the sausage factory really worked inside. He was the perfect yin-yang of the old-school professional who has seen it all and knows how things work behind the hype, and of the new-media guy who can drop a bunch of paragraphs on a whim, without a word limit, on any subject he wants to tackle, and who is not afraid to try new things and new ways of storytelling. Favorites? Ah! Apple, the Old Borg, the New Borg, the Linux frigtards, the old dying press, these upstart bloggers, the consumers standing in line, PR flacks, software developers: no one was safe.

I can see him now, looking down on me from wherever he is now, laughing at my pathetic attempt at reviving him, even for a minute. I know he is at peace there, meditating, waiting for his reincarnation, because oh yes, he will be reincarnated some day, in a different form: Fake Steve is Buddhist too, he most certainly did not meet St Peter at the pearly gates, and he has unfinished business in this world; he was not done restoring a sense of childlike sarcastic wonder in our lives. I’m waiting, waiting for the day I will see a blog or webcomic or column (because Fake Steve has a sense of humor and may throw us all for a loop by reincarnating in the old press, or as a Twitter feed or an animation, though not a Flash animation, there are limits), and I will see the telltale signs, the snark, the character play, the insightfulness, and I will think: “Yes, Fake Steve has been reincarnated.”

Meanwhile, Fake Steve, I know you are now in a better place and cannot come back as such, but if you could hear my prayer: Dan… has not been good lately, to put it mildly. So… could you try and inspire him a bit while he is away from the echo chamber? Not for him to write as you, no, just so that when he eventually returns to us, after having spent some time away from it all, he will write good things, no matter what they are. Because we can’t stand seeing him like this.

The Joy of Tech comic number 995: Yes, Virgil, there is a Fake Steve Jobs