How can we know whether we can effect change from the outside?

Marco Arment thinks we can have some influence, however measured, over Apple through our writing (or podcasts or YouTube channels or other media, as the case may be); and I think so too, but I have to wonder to what extent it is true, and consequently whether it is even worth keeping in mind.

The first issue is that we don’t know whether a decision was influenced by externally produced opinions or not. As Marco wrote, Apple is not a waterfall dictatorship. But it is also a peculiar company, which has consistently eschewed the “common wisdom” thrown at them (by the tech press in particular) for quite some time now, instead doing what they feel is best for them and for the user, and the least we can say is that it seems to have worked okay for them (so far). So by necessity Apple employees have, consciously or not, filters with which they dismiss many of these external opinions. Of course, we Apple developers are quite attuned to these filters, but on the other hand we are both users with slight delusions as to how representative our needs are of the customer base at large, and a party in a relationship with Apple, and as such we likely come off as biased, especially on the subject of the relationship between Apple and third-party developers (iOS App Store policies come to mind, in particular). Paul Graham made me realize this when he wrote in Apple’s Mistake:

Actually I suppose Apple has a third misconception: that all the complaints about App Store approvals are not a serious problem. They must hear developers complaining. But partners and suppliers are always complaining. It would be a bad sign if they weren’t; it would mean you were being too easy on them.

How do I convince an Apple employee, whom I cannot hear, that my criticism of something does not (just) come from the inconvenience this something causes me as a developer, but from my belief that it causes subtle but important inconveniences for the user? I do not know. I mean, we still do not have trials on the iOS App Store after 5 years, for instance, so there has got to be some reason why some of our pleas do not work.

Of course, I don’t expect an Apple representative to just go on the record saying that external opinions influenced this or that decision. Duh. But there are a variety of unofficial channels through which this could be hinted at. For instance, in the particular domain of Radar (Apple’s bug reporting tool), Matt Drance, formerly of DTS, gave a talk at C4 (scroll down to “Drance – How to be a Good Developer”) about your relationship with Apple as a developer, and about Radar in particular, giving information that I believe is not available anywhere else; my thanks go to him for giving that talk, and to Philippe Casgrain for transcribing it and publishing his notes. As far as I know, no one who works or used to work at Apple has ever talked publicly about the influence of external opinions the way Matt Drance talked about how radars are seen from the inside.

The second issue is that the tech community in general seems pretty quick to decree that every Apple reaction is the result of press coverage/public outcry/radar duplicates/etc., without any evidence (other than circumstantial) supporting that claim. The prime example is, of course, Apple’s eventual decision to release an SDK for the iPhone, which many developers will tell you happened because the request had however many duplicates in Radar, when things couldn’t possibly have been that simple (personally, I believe conversations over private channels between Apple and important software vendors, game developers in particular, played the main role in convincing the relevant Apple executives). Even for the example Marco gives, the replacement of Helvetica Neue Light by Helvetica Neue in the current iOS 7 betas, I remain unconvinced: this could just as easily be explained by ordinary iteration on the design. The only instances where we can say with some credibility that Apple reacted to external influences are when they reversed their position on some app rejections (the Mark Fiore incident in particular comes to mind), and when (with some help from the FTC) they renounced their additions to Section 3.3.1. As a result, it is hard to have a conversation in the tech community about what works when it comes to publishing opinions Apple employees see as valid; and since we are not going to have this conversation with Apple either, this leaves most everyone in the dark.

As a result, I am going to keep writing my opinions here for my audience, but I am not going to pay attention to whether anyone at Apple could actually take them into account, as I do not want to worry over something whose outcome I cannot know.

Mac OS X is the new Classic

Doing predictions about Apple is risky business. Heck, predicting anything in the tech industry is risky business, and the people who do so would generally like you to forget, at some point, that they did: as we know, being a pundit means never having to say you’re sorry.

I’m willing to risk some skin on one matter, however, which is that of Apple’s operating system future. In short: Mac OS X is the new Classic, and iOS will move over to non-touchscreen interfaces, the desktop, and x86, while taking on more Mac-like features. This might be unexpected on my part, critical as I have been here of many aspects of the new computing that iOS in particular embodies (but also the Mac App Store, Gatekeeper, etc.). But while I am critical of these aspects, I am also lucid enough to have realized, in particular once the iPad shipped, that iOS is a revolution, and that all computers will one day work that way.

The iPad revolution

I used to think that our “old world” user interfaces, while not perfect, could eventually be learned by a majority of the population; I did not believe that only stupid people refused to work with computers (I very well understand the unwillingness or inability, which sometimes aren’t completely distinct, to deal with the necessary complexities of today’s computers), but I also believed that our current state of computing was as good as it could be given the constraints, and that eventually computer usage would become part of the basic skills, like literacy and mathematics. In fact, I still believe this could have happened with current computers given enough time.

But Apple has shown a much better way. The Mac and Windows PC of 2010 are pre-alphabetic writing systems; nice pre-alphabetic systems, mind you, that anyone could learn in theory, but in practice literacy was confined to a minority of scribes. By contrast, iOS is the first alphabetic system.

During the iPad introduction, Steve strangely chose to trace the iPad’s roots back to the beginnings of Apple, with the Apple II (including a rare reference to Woz) and the first PowerBook; strangely, because what the introduction of the iPad best corresponds to is the original Macintosh. Both are compact computers with completely different graphical interfaces first seen elsewhere (the Lisa, the iPhone) that suddenly took on a much broader significance, and both were as a result derided as being for play and not real work. While it was not necessarily the Mac itself that succeeded afterwards, we all know where the ideas pioneered in the marketplace by the original Macintosh went.

What about the incumbents?

But does this mean that current operating systems will change to become more like iOS? I don’t think so; or, rather, for those which try to do so, I don’t believe it will work. As we have seen with Mac OS X, operating system transitions are as much about transitioning user expectations as they are about the technology itself. A change of this magnitude will require new operating environments that people can see with fresh eyes, rather than modifications to an existing OS, where any change is perceived as grating, or may not really work unless most applications are on board.

For instance, in Mac OS X Lion Apple changed the built-in apps to automatically save documents being edited, and to adopt a number of new behaviors (for instance, when a document is attached to an email, the application editing that document, if any, is asked to save its work to disk so that the document state the user sees in the editing application is actually what gets attached), and encouraged third-party applications to do the same. This is undeniably progress: think of the aggregate amount of work that is no longer lost to power failures or application crashes across all Mac users running Lion or later.
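
For the curious, the opt-in on the developer side is tiny; here is a minimal sketch of the AppKit hook involved, with a made-up document class and placeholder serialization:

    import Cocoa

    // Hypothetical document class; overriding autosavesInPlace is the AppKit
    // opt-in for Lion-style autosave (and, with it, Versions).
    class NoteDocument: NSDocument {
        override class var autosavesInPlace: Bool {
            return true
        }

        override func data(ofType typeName: String) throws -> Data {
            // Called for explicit saves as well as autosaves; placeholder serialization.
            return Data()
        }
    }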

But many people (including noteworthy people) complained about one part of this change: the loss of the “Save As…” command. The very notion of that command, namely that you are currently making modifications to a file which you intend not to apply to that file but to write elsewhere, is deeply rooted in the assumption that the system does not write file modifications until you explicitly tell it to; so “Save As…” is foreign, at best, in an autosave world. But at the same time people have gotten used to it, so are they wrong to miss it? After some adjustment attempts, Apple eventually relented and added back full “Save As…” functionality, but I don’t know whether they should have; in particular, I have not checked how it interacts with the “attach to an email causes a save” feature.

Speaking of that feature, what do you think happens when someone uses it with, say, TextEdit documents currently being edited and it works as expected, and then uses it on an unsaved Word document (assuming Word has not been updated to support the feature)? The last saved version gets attached instead of what the user expects, and the user only finds out when his correspondent replies with an inflammatory email saying there is nothing new in the document, and what have you been doing all this time? As a result, the feature becomes poisoned in the mind of the user, who will adopt paranoid and near-superstitious steps to ensure the latest version always gets used in attachments. A feature that works unreliably is worse than no feature at all.

Another such problematic change is auto-quitting applications that are not topmost and have no document open. What is the issue here? After all, the application has no visible UI in this case, and when recalled it would behave the same way (typically, by showing a new untitled document) whether it was auto-quit or not. Except this is not actually transparent at all: most notably, the application disappears from the command-tab switcher, so when the user wants to switch back to it using command-tab, he can’t.
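
For reference, the auto-quit behavior is itself opt-in for developers; a minimal sketch, assuming the app has declared NSSupportsAutomaticTermination in its Info.plist (the reason string is illustrative):

    import Foundation

    // The app declares NSSupportsAutomaticTermination in its Info.plist to allow
    // auto-quit; at runtime it can temporarily opt back out around critical work.
    let reason = "exporting user data"  // illustrative reason string
    ProcessInfo.processInfo.disableAutomaticTermination(reason)
    // ... perform the work the system must not interrupt ...
    ProcessInfo.processInfo.enableAutomaticTermination(reason)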

What about the feature that saves the list of applications running at shutdown and reopens them upon restart (which in turn causes them to restore their open windows and state, if they support it)? There is a problem with that too: applications can still veto a shutdown. So if half the applications have already quit when one blocks the shutdown, then once you have dealt with the reluctant app and shut down again, only the applications still running at that point are saved and restored. So, oops, the feature works only partially and inconsistently, which I argue is worse than not having the feature at all.
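
The restoration half of that feature also depends on each app cooperating; a minimal sketch of the window-level opt-in for a non-document-based app (the controller class, nib name, and identifier are invented):

    import Cocoa

    class MainWindowController: NSWindowController {
        override func windowDidLoad() {
            super.windowDidLoad()
            window?.isRestorable = true
            window?.identifier = NSUserInterfaceItemIdentifier("MainWindow")
            window?.restorationClass = WindowRestorer.self
        }
    }

    // Recreates the window when the app is relaunched as part of a restored session.
    class WindowRestorer: NSObject, NSWindowRestoration {
        static func restoreWindow(withIdentifier identifier: NSUserInterfaceItemIdentifier,
                                  state: NSCoder,
                                  completionHandler: @escaping (NSWindow?, Error?) -> Void) {
            let controller = MainWindowController(windowNibName: "MainWindow")
            completionHandler(controller.window, nil)
        }
    }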

Etc., etc. The list goes on. Now, as John Siracusa ponders in his Lion review, users are expected to look down on applications (or mechanisms, like command-tab) that ruin the experience for everyone else, but it is not necessarily easy for the average user to make the correct connection, especially when the failure (a wrong attachment, an incomplete list of applications launched after the reboot) is removed from its cause. But even these cases are easier to trace than what is coming next.

Copland 2010: Avoided

In Avoiding Copland 2010, John boldly (given, again, the risk inherent in doing so) tried to predict Apple’s operating system future, and referred to the situation of the Mac prior to Mac OS X, Copland being Apple’s failed attempt to give the Mac a modern operating system. As he explained, what the “classic” MacOS lacked compared to Mac OS X was:

  1. systemic features: memory protection and preemptive multitasking,
  2. a modern programming framework: Cocoa with its object-oriented programming model, in the form present in Mac OS X,

the combination of the two being more than the sum of its parts. He then went on to try to draw the parallels that could be made between the Mac OS X of 2005 and the traditional MacOS of 1995, to figure out what Mac OS X would need in the future in order to stay competitive; but he focused on the second category of improvements, namely what kinds of programming paradigms Apple might need to adopt (such as functional programming, garbage collection, dynamic languages, etc.). I remember reading his article and being skeptical: after all, with memory protection and preemptive multitasking in Mac OS X, applications were now isolated from each other, so from then on, whatever new operating system functionality was needed to stay relevant (be it systemic features or a better programming framework), applications could adopt it individually without having to have the others on board, and no clean break similar to Mac OS X would ever be needed again.

Of course, I was wrong. But not because a better programming framework turned out to be necessary (I don’t feel there is much competitive pressure on Apple on this front); rather, I forgot the one big globally shared entity through which applications can still interfere with each other: the file system. So a new clean break is likely needed to provide sandboxing, the systemic feature that would solve the situation. With sandboxing, applications… well, I hesitate to say that they would be properly isolated, because I now know better than to proclaim that, but they would certainly be better isolated by an order of magnitude.

And on the sandboxing front, Apple is certainly among the most competitive, if not in front (at least as far as client software is concerned)… thanks to the iPhone, which was sandboxed from day one. Thanks to a hardware opportunity a few years prior, Apple already has in house a production-quality, shipping operating system with a third-party software ecosystem, ready for the challenges of the ’10s; compare that with the ’90s, when they had to buy such a system from NeXT. This time Apple is ready.

Playing in the sandbox

I have not yet shown why sandboxing is so necessary; it is certainly less obvious than it was for memory protection and preemptive multitasking. Doing so would take an entire post, but I should mention two of its benefits here: taming the mysterious file system, and privacy. For the first, it may be hard for us nerds to realize how mysterious the file system, or at least some aspects of it, is to the average user, what with it mixing the operating system, applications, user files, downloads, and the like. Why did this folder for a game (containing, as it turns out, game saves) suddenly appear in the Documents folder? And so forth; I am sure many such interactions that I no longer even think twice about are instead frustrating mysteries for many users. Sandboxing would put an end to that.

As for privacy, you may say that you trust the software installed on your machine. But that’s the thing: you need to trust something entirely before you install it as a Mac app; there is no way to benefit from an application that is useful yet which you do not entirely trust. Contrast that with the web, which is the platform actually challenging Apple the most in this area: with its same-origin restrictions, the web already has a strong sandboxing model which allows people to use Facebook and a myriad of other services, easily adopting a new one with low commitment and low friction, confident that the service only knows what the user entered into it (granted, the myriad of “like” buttons on the Internet means that Facebook is aware of far more than what users explicitly tell it, but that’s between Facebook and the sites integrating the “like” button; sites can choose not to do so). Apple needs sandboxing in order to stay competitive with the web.

Of course, you have probably noticed that Apple actually tried to retrofit sandboxing onto Mac OS X. When I heard that Lion would feature application sandboxing, I tried to imagine how such a feat could be possible while keeping the Mac file usage model, and figured they would have to turn the open and save panels into a privileged external application which, upon selection by the user, would grant the sandboxed app the right to operate on the file, all while keeping API compatibility; no small feat. Which is exactly what they did. Impressive as it may be, I feel this effort to maintain the “all files mixed together” file system while adopting a sandboxed underlying infrastructure will eventually go to waste, as Apple will need to build a new user model anyway to go with the sandboxed infrastructure (for instance, users could rightly wonder why a file “just next” to one they selected is not automatically picked up, say the subtitles file next to a movie file): a new user model where each app’s documents are together by default, and “all files mixed together” is reserved for specific scenarios for which there could be a better UI.
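
Concretely, the calling code did not have to change; a sketch of what it still looks like, with the understanding that under the sandbox the panel is actually presented by the privileged broker, and the user’s selection is what grants access:

    import Cocoa

    let panel = NSOpenPanel()
    panel.canChooseFiles = true
    panel.allowsMultipleSelection = false
    if panel.runModal() == .OK, let url = panel.url {
        // Because the user picked this file through the privileged panel, the
        // sandboxed app may now read it even though it lives outside its container.
        let contents = try? String(contentsOf: url, encoding: .utf8)
        print(contents ?? "unreadable")
    }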

And from a software development standpoint, the retrofitting of sandboxing onto Mac OS X has been less than stellar, with deadlines pushed back multiple times, Apple needing to implement scenarios they had not foreseen but which turned out to be so important for users that apps would not ship without them, like security-scoped bookmarks, notable app developers publicly giving up on sandboxing, and other apps having to become clunkier or lose features in order to adopt sandboxing… All for little benefit since, as far as I can tell, an unsandboxed app can still read the data stored by sandboxed apps.
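
For the curious, here is a minimal sketch of what adopting security-scoped bookmarks looks like, assuming the app has the corresponding bookmark entitlement (the UserDefaults key is illustrative):

    import Foundation

    // Persist access to a user-chosen file across launches.
    func rememberAccess(to url: URL) throws {
        let bookmark = try url.bookmarkData(options: .withSecurityScope,
                                            includingResourceValuesForKeys: nil,
                                            relativeTo: nil)
        UserDefaults.standard.set(bookmark, forKey: "savedBookmark")
    }

    // Resolve the bookmark later and re-enter its security scope.
    func reopenRememberedFile() throws -> URL? {
        guard let bookmark = UserDefaults.standard.data(forKey: "savedBookmark") else { return nil }
        var stale = false
        let url = try URL(resolvingBookmarkData: bookmark,
                          options: .withSecurityScope,
                          relativeTo: nil,
                          bookmarkDataIsStale: &stale)
        guard url.startAccessingSecurityScopedResource() else { return nil }
        // The caller must balance this with url.stopAccessingSecurityScopedResource().
        return url
    }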

The Mac OS X transition model

In theory, Apple could have introduced Mac OS X in a less disruptive fashion. Instead of pushing it to consumers when they did, they would have kept Mac OS X 10.1 and 10.2 for developers only, while encouraging them to ship carbonized applications. Then, with a sufficient number of Mac OS X-optimized apps around, they would have introduced 10.3 to consumers as simply the evolution of MacOS 9, with minimal cosmetic changes, with carbonized apps and new Cocoa apps running natively and other apps running in a completely transparent Classic environment, and users would have benefited from a more stable operating system without any visible transition.

But in practice, that would of course never have worked. There are numerous reasons why, but foremost is the fact that third-party application developers are not perfectly cooperative. They are ready to listen to and adopt Apple initiatives, but there had better be some sort of immediate payoff, because otherwise they are spending time, and thus money, changing their code while adding exactly zero features. So developers would have waited for Apple to ship Mac OS X before carbonizing their apps (otherwise, what’s the benefit in a MacOS 9 environment?), and Apple would have waited for a critical mass of carbonized apps before shipping Mac OS X to consumers. Uh oh.

Instead, by shipping Mac OS X 10.0-10.1 to early adopters, then progressively to less early adopters, Apple provided an incentive for developers to ship carbonized apps right from the start: first, some specialized developers would have enough of their audience on Mac OS X for the port to be worth it, then more and more developers would find it worthwhile, etc. More importantly, early adopters and early applications would actually set the expectations, instead of incumbents setting them as in a “progressive” transition, so that if an app got ported technically, but not in spirit (say, it used a lot of resources), it would stick out among the ported apps and be pressured to conform to the new standards. And playing just as important a role, the Classic ghetto clearly marked which apps would all go down together whenever one of them crashed, marking the boundaries of the negative network effects/broken windows (where a minority of apps could ruin it for everyone else). The Aqua interface was an essential part of this strategy: no mere eye candy, but eye candy that coincided with a major OS environment change and helped mark it for average users.

Drawing directly to the screen, as I added in a clarification, is a good example of behavior that characterized a Mac OS X app that was not ported in spirit, and that these apps were pressured to drop. — September 3, 2013

By contrast, what Apple is currently doing with the “iOS-ification” of Mac OS X is merely adding superficial enhancements from iOS to a system whose behavior is still fundamentally defined by existing users and existing applications, a system which has to stay compatible with existing installs, existing peripherals, existing workflows, existing mechanisms, etc. Mark my words: history will look back on the recent “iOS-ification” of Mac OS X as something as quaint and superficial and meaningless as the Mac-like interface of the Apple IIgs.

Mac OS X as the Classic of iOS

I expect that Apple will soon realize that trying to drive Mac OS X toward having all of iOS’s features is a dead end. And some setback or other, leading to cost-saving measures, or just their obsession with efficiency, will make them suddenly question why the heck they are spending effort maintaining two operating systems in parallel. I do not think Mac OS X and iOS will merge either; that kind of thing only happens in tech journalists’ wet dreams. There will be no Mac OS XI; instead, my prediction is that the Mac will get a new OS, and Mac OS X will run in a Classic mode inside that OS. But Apple already has a new OS, called iOS, so that new OS would be a “port” of iOS to the desktop, called… “iOS”.

In practice, this would allow Apple to provide an incentive for applications to adopt modern features like sandboxing and a more modern interface (larger click/touch targets, etc.), by having them run outside the Mac OS X ghetto and inside iOS instead, and thus, for instance, making their data impossible to read from an app running inside that ghetto; and I am sure other such immediate incentives are possible, given the benefits of sandboxing to users. Currently Apple can get some applications to adopt sandboxing thanks to its clout, embodied in the Mac App Store, but application developers resent it; there have to be more positive ways of encouragement. Also, the “iOS-native” area would again be defined by the early adopters (here, in fact, the existing mobile iOS apps, see later), so applications without, for instance, autosaving would quickly stick out and be pressured to adopt it, if it weren’t mandatory in the first place in order to run outside the ghetto.

Meanwhile, the new Classic environment, with Mac OS X running inside, would allow users to keep using “legacy” Mac OS X apps during the transition, exactly the same way the Classic environment of Mac OS X eased the transition from MacOS 9. The current Mac OS X interface (which only has a passing resemblance to Aqua at this point) would be the new Platinum, a ghetto inside the world of the native iOS interface, which would play the role of the new Aqua.

Now, given all the speculation about Apple potentially adopting ARM chips for future MacBooks, or potentially using Intel chips in future mobile devices, I think it is important to clarify what I am talking about here. I expect Apple to keep using the most appropriate chip for each device class, which for each existing Apple product line still means ARM chips for tablets and smaller, and Intel chips for the MacBook Air and up, and I don’t expect that to change in the short run (Intel and ARM chips are only now starting to meet, on the performance/power curve, in the no-man’s land between the two device classes).

So I expect that iOS, for the purpose of running on laptop- and desktop-class devices, would be ported to x86. This is not much of a stretch, extremely little of one in fact, because first, iOS already runs on x86 (at the very least the whole library stack and some built-in apps), and second, most iOS App Store apps already run on x86, both thanks to the iOS Simulator. This unassuming piece of the iOS development workflow, which includes the iOS runtime environment, runs natively on the host Mac, without any processor emulation involved, and iOS apps need to be compiled for x86 to run on it. Since many iOS development processes rely on running the app in the iOS Simulator (for convenience when debugging, better debugging tools, and easier and faster design iteration), most iOS apps are routinely compiled for, and work on, x86, and would easily run on an x86 iOS port; and Apple, having most of iOS running on x86 already, would have little trouble porting the remainder. Once ported, it could even be installed on Macs shipping today, without requiring new machines.

What would the interface for iOS on the desktop be? I can’t remember in which Apple product introduction it was mentioned, but the presenter said that touching a vertical desktop screen does not work in practice, that reading and viewing a horizontal desktop screen isn’t practical either, and that they had tried. So while some are clamoring for iOS to come to the laptop (and desktop) form factor so that we can finally have large touchscreen devices, they are misguided: if it were practical, Apple would already have done it on the Mac, with Mac OS X. Maybe this will happen some day, but it would happen independently of iOS running on desktop-class devices.

Instead, since using a touch interface with a mouse is at least tolerable (as opposed to using an interface meant for a mouse on a touchscreen device), iOS on the desktop could be used, at least for its basic functions, with a mouse; but in fact a Magic Trackpad would be recommended (and bundled with all desktops from that point on), as providing something of the best of both worlds: multitouch, plus the ability to browse a screen much larger than typical finger movements using the familiar mechanisms of acceleration, lifting and landing elsewhere, etc. Of course, since it would be a touch interface separate from the screen rather than a touchscreen, there would need to be a visible pointer of some sort; likely not the arrow shape we’ve all known and loved since 1984, as that arrow is meant for clicking with pixel accuracy, which is antithetical to the iOS interface paradigm. Maybe something as simple as a circle could do the trick.

There remains the most important and thorny question: what would these laptops and desktops, now meant to run iOS, be called? Would we still call them Macs? I have no idea.

iOS would have to change, too

But just like Mac OS X was not Rhapsody, iOS, in order to run on the desktop and be an attractive target for current desktop apps and users, would have to adopt some Mac features. For one, the model of designing iOS apps specifically for each screen size clearly wouldn’t scale; there would have to be ways for apps to reliably extend their user interface. Most of the infrastructure is there, mind you: even with springs and struts a lot can be done, but there would have to be new mechanisms to take better advantage of space when it is available; think CSS media queries and responsive design for native apps, as sketched below.
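
To make the comparison concrete, here is a sketch of the gap (view names and the breakpoint are invented): springs and struts let a view track its container, but deciding what the interface should be at a given size, the native equivalent of a CSS media query, is left entirely to the app:

    import UIKit

    // Springs and struts: the content view simply tracks its container's size.
    let container = UIView(frame: CGRect(x: 0, y: 0, width: 1024, height: 768))
    let content = UIView(frame: container.bounds)
    content.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    container.addSubview(content)

    // The missing piece: deciding what the interface should be at a given size,
    // analogous to a CSS media query breakpoint.
    func columnCount(for size: CGSize) -> Int {
        return size.width > 700 ? 2 : 1  // illustrative breakpoint
    }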

But having apps themselves scale with the screen is only part of the challenge: with all that screen space, we certainly would not want to run one app at a time, so ways to have multiple iOS apps on screen at the same time would have to be devised. Defining and implementing this new on-screen multitasking model for the 21st century would be an awful amount of work, no doubt about it.

Then iOS would have to adopt some sort of document filing system, of course, as well as Developer ID, because requiring all apps to come from either the iOS App Store or the Mac App Store would be untenable for a desktop machine with many uses, corporate ones in particular.

I believe Apple will in fact even allow unsigned apps, or something functionally equivalent, on desktop iOS, though maybe they will not be allowed to run in the default setting. — September 3, 2013

It would also need the ability to specify different default apps for, say, browsing or email, as well as more comprehensive systems for apps to communicate with each other, with services for instance, and better URL handlers (with UI to specify the default in case more than one app claims support for a scheme). And an iCloud that doesn’t suck, but that’s a necessity for mobile iOS anyway.
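
As a reminder of the starting point, this is roughly all an iOS app can do today: declare its scheme under CFBundleURLTypes in its Info.plist and handle incoming URLs, as sketched below with a made-up scheme; what a desktop iOS would need on top of this is system UI to arbitrate when several apps claim the same scheme.

    import UIKit

    class AppDelegate: UIResponder, UIApplicationDelegate {
        func application(_ app: UIApplication,
                         open url: URL,
                         options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
            guard url.scheme == "example-notes" else { return false }  // made-up scheme
            // Route to the relevant part of the app based on the URL's host and path.
            return true
        }
    }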

Most significantly, iOS for the desktop would need to support some Mac OS X APIs, possibly with some restrictions and dropped methods/objects (somewhat like Carbon was to the Mac Toolbox), most notably AppKit, but not only AppKit; though it is safe to say Carbon wouldn’t make the jump.

And Apple would need to add (meaningful) removable media support, external drive support, backup support to such a drive, support for burning discs, wired networking, multiple monitors, a Terminal app, a development environment, some solution to the problem of communal computing/multiple users, etc., etc.

It’s time

So, easy, right? Just a little bit of work. In all seriousness, that would be a big endeavor, bigger maybe than Mac OS X, and, like Mac OS X, iOS on the desktop would probably require a few iterations to get right. And that’s why Apple, whatever they actually intend to do (because they will not keep maintaining two operating systems forever), should start the process by telling developers as early as possible; for instance, Mac OS X was first introduced in May 1998 (how time flies…) and wasn’t really ready to replace MacOS 9 until 2002 or so.

I initially forgot to put some sort of deadline with my prediction, this has been repaired: I think it will have been announced by WWDC 2018, in five years, at the latest. — September 3, 2013

So what do you think will happen? Even if I am completely off base here, I hope I have at least provided some interesting food for thought on the matter.

Parody week

Well, I hope you enjoyed parody week last week on this blog. After the parody of the Old New Thing I posted Monday as an April Fools’ joke, I admit I got a little carried away and posted a parody of Hypercritical on Tuesday and a parody of Coding Horror on Wednesday, and concluded with a fake parody of Fake Steve on Thursday.

So of course these posts last week were not entirely serious. But… Take, for instance, Did Apple just cargo cult the iPhone platform?; clearly, I would not use a cargo cult metaphor for the iOS platform, even with precautions, outside of a parodic context: “cargo cult” is a very specific and grave accusation that just does not apply to the iOS platform. But just because it was a not-really-serious, parodic post does not mean it was only for laughs and entertainment: if you are not coming away from that post thinking there was a deeper message to it, then I have not done my job properly (hey, what’s this bold text doing here? Oh God, Jeff is contaminating me). I think good satire makes you laugh, then makes you think, and I hope I was up to that standard.

As always, thank you for reading Wandering Coder.

Good riddance, Google. Don’t let the door hit you on the ass on the way out.

This post is, in fact, not quite like the others. It is a parody of Fake Steve I wrote for parody week, so take it with a big grain of salt…

See here. Basically, the rocket scientists at Google have decided, after having used our WebKit for years in Chrome, that they, uh, suddenly did not need us any more and forked WebKit like the true leeches they are. Dude, we are the ones who found KHTML and made WebKit what it is; if it weren’t for us, KHTML would only be known to three frigtards in west Elbonia and you would have had no engine to put in your hotrodded race car of a browser, so I guess, thanks for nothing, bastards.

Truth is, good riddance. Those know-it-alls at Google have been a pain in our ass ever since Chrome debuted. Where do I start? Like with V8. Oh God V8… I… uh…


Okay, I can’t do this. I can’t parody Fake Steve. I’ve got nothing on Dear Leader. He was pitch perfect: you would get the feeling the real Steve Jobs was writing just for you in his secret diary, all the while being satiric and outrageous enough that at some level you knew it was fake, but at the same time the persona was so well maintained that you easily suspended disbelief and could not help thinking Steve could have shared these opinions. And he was insightful, oh of course he was: in the middle of some ludicrous story you would feel you were being enlightened about the way the tech industry or the press or tech buyers worked, and it didn’t matter if it was made up, because it was a way to provoke us into thinking about how the sausage factory really worked inside. He was the perfect yin-yang of the old-school professional who has seen it all and knows how it works behind the hype, and of the new-media guy who can drop a bunch of paragraphs without a word limit on a whim on any subject he wants to tackle, and is not afraid to try new things and new ways of storytelling. Favorites? Ah! Apple, the Old Borg, the New Borg, the Linux frigtards, the old dying press, these upstart bloggers, the consumers standing in line, PR flacks, software developers, no one was safe.

I can see him now, looking down on me from wherever he is now, laughing at my pathetic attempt at reviving him, even for a minute. I know he is at peace there, meditating, waiting for his reincarnation, because oh yes, he will be reincarnated some day, in a different form: Fake Steve is Buddhist too, he most certainly did not meet St Peter at the pearly gates, and he has unfinished business in this world; he was not done restoring a sense of childlike sarcastic wonder in our lives. I’m waiting, waiting for the day I will see a blog or webcomic or column (because Fake Steve has a sense of humor and may throw us all for a loop by reincarnating in the old press, or a Twitter feed, or an animation, though not a Flash animation, there are limits), and I will see the telltale signs, the snark, the character play, the insightfulness, and I will think: “Yes, Fake Steve has been reincarnated.”

Meanwhile, Fake Steve, I know you are now in a better place and cannot come back as such, but if you could hear my prayer: Dan… has not been good lately, to put it mildly. So… could you try and inspire him a bit while he is away from the echo chamber? Not for him to write as you, no, just so that when he eventually returns to us after having spent some time away from it all, he will write good things, no matter what they are. Because we can’t stand looking at him like this.

The Joy of Tech comic number 995: Yes, Virgil, there is a Fake Steve Jobs

Did Apple just cargo cult the iPhone platform?

This post is, in fact, not quite like the others. It is a parody of Coding Horror I wrote for parody week, so take it with a big grain of salt…

In The iPhone Software Revolution, I proclaimed that the iPhone was the product Apple was born to make:

But a cell phone? It’s a closed ecosystem, by definition, running on a proprietary network. By a status quo of incompetent megacorporations who wouldn’t know user friendliness or good design if it ran up behind them and bit them in the rear end of their expensive, tailored suits. All those things that bugged me about Apple’s computers are utter non-issues in the phone market. Proprietary handset? So is every other handset. Locked in to a single vendor? Everyone signs a multi-year contract. One company controlling your entire experience? That’s how it’s always been done. Nokia, Sony/Ericsson, Microsoft, RIM — these guys clearly had no idea what they were in for when Apple set their sights on the cell phone market — a market that is a nearly perfect match to Apple’s strengths.

Apple was born to make a kick-ass phone. And with the lead they have, I predict they will dominate the market for years to come.

But never mind the fact that a similar reasoning could have been made about the Macintosh when it came out. What bothers me today is the realization that Apple might have handled the opening of the iPhone platform like a cargo cult:

The term “cargo cult” has been used metaphorically to describe an attempt to recreate successful outcomes by replicating circumstances associated with those outcomes, although those circumstances are either unrelated to the causes of outcomes or insufficient to produce them by themselves. In the former case, this is an instance of the post hoc ergo propter hoc fallacy.

cargo cult phone by dret, on Flickr; used under the terms of the Creative Commons CC BY-SA 2.0 license

By which I mean that Apple decided they needed to open the iPhone as a development platform, but I wonder to what extent they then did so by giving it the trappings of a platform more than the reality of one: third parties can sell their apps to run on it, right? So it must be a platform, right? Well… And I don’t mean the APIs are the problem either; it’s more like… everything else:

  • Apple has a very restrictive idea of which use cases third parties are allowed to provide solutions for: everything that does not fit their idea of an app is rejected, or is impossible. For instance, installing third-party keyboards is not possible on the iPhone:

    But sometimes, an Apple product’s feature lands at the wrong side of the line that divides “simple” from “stripped down.” The iPhone keyboard is stripped-down.

    If you don’t like how Android’s stock keyboard behaves, you can dig into Settings and change it. If you still don’t like it, you can install a third-party alternative. And if you think it’s fine as-is, then you won’t be distracted by the options. The customization panel is inside Settings, and the alternatives are over in the Google Play store.

    This? It’s from Andy Ihnatko, in an article in which he explains why he switched from iPhone to Android. Andy. Ihnatko. When Mac users of 25 years start switching away from the iPhone, I’d be very worried if I were in Cupertino.

  • Even for the use cases third parties are allowed to address, they are quite restricted: when Apple added support for multitasking, in iOS 4, they more or less proclaimed they had covered every desirable multitasking scenario, and they have not added any since. It feels a tad preposterous to me that there would have been no need for even a single new multitasking scenario in the two years since.

  • Even when third parties can sell their wares, they do so at the pleasure of the king. Apple seems to consider iPhone developers to be contractors/authors developing solely for Apple purposes. And paid by commission. Without any advance. And without any assurance when they begin developing that their app will be accepted in the end.

  • Apple apps do not play by the same rules other apps do. They are not sandboxed, or not as much. They use private APIs off-limits to other apps. They get a pass on many iOS App Store restrictions. In short, Apple eats People Food, and gives its developers Dog Food:

    Microsoft has known about the Dogfood rule for at least twenty years. It’s been part of their culture for a whole generation now. You don’t eat People Food and give your developers Dog Food. Doing that is simply robbing your long-term platform value for short-term successes. Platforms are all about long-term thinking.

  • In the same spirit, Apple introduced iCloud, gave users the perception that Apple did the hard work and that apps would merely have to opt in, sold it to developers as the best thing since sliced bread, then promptly went and did not use it themselves in combination with, er, the technology they have consistently recommended for persistent storage (while ostensibly supporting this combination), without providing any way to audit synchronization issues either. And now it turns out, and people come to the realization, that iCloud Core Data syncing does not work. Shocker.

  • Apple even tried at some point to prohibit non-C programming languages for iPhone development, with the clear aim of banning a number of alternative development environments, not just Flash. But just like Apple cannot provide all the software to fulfill iPhone users’ needs, Apple cannot provide all the software to fulfill iPhone developers’ needs either. A platform is characterized not just by an ecosystem of apps, but also by an ecosystem of developer tooling and libraries behind the scenes. They ended up relenting on this, but if I were an iPhone developer, I would not be very reassured.

But wait, surely that can’t be. Apple knows all about platforms and the value of platforms, right? VisiCalc, right? But that may have encouraged Apple to provide something that looks like a platform, rather than an actual platform. As for the iPhone not being Apple’s first platform, there is a school of thought that says Steve Jobs did not build platforms, except by accident; according to this line of thought, the Apple II and the Mac became honest-to-God platforms not because of Apple, but in spite of Apple. And now, for the first time, we get to see the kind of platform Apple creates when it actually calls the shots. It looks like a platform, sounds like a platform, tastes like a platform, smells like a platform, walks like a duck… but the jury is still out on whether it is actually a platform.

There is a case to be made for reducing your dependencies. Apple clearly is not going to let anyone hold it back; but as incredibly skilled as the people working at Apple are, is “not being held back” going to be enough to keep up when Android, propelled by being a more complete and inclusive platform, threatens to move past Apple?

Apple can still turn this around. Indeed, the issue lies not so much in these restrictions having been present at first as in so few of them having been lifted since. The key, of course, will be figuring out which ones they need to lift, and how to do so. And this will require Apple to reconsider the motions it goes through to bring cargo, regardless of the cargo those motions have brought so far, and instead focus on improving their limited understanding of what it is that actually makes a platform. In order to really bring in the cargo.

Annoyance-Driven Blogging

This post is, in fact, not quite like the others. It is a parody of Hypercritical I wrote for parody week, so take it with a big grain of salt…

I’ve been reading Hypercritical, John Siracusa’s new blog outside of Ars Technica, and it has been good to read more of John than the glacial pace of his blog over there lately allowed.

But even on his own space, John has been unable to escape some of the trappings of his past. A blog that updates with some frequency naturally lends itself to multi-post reading sessions. But reading a post about the annoyance of having to watch a minute and a half of opening credits before each episode can get tiresome.

To be fair to John, the existence of this kind of post may not be entirely under his control, given his quasi-OCD tendencies. But getting bogged down in these details misses the point.

Yes, we all know and love John Siracusa for his, well, hypercritical tendencies, but these are best consumed as part of a post on a broader subject, like a spice; a post with nothing but that quickly gets to be too much.

This may sound comically selfish, but true innovation comes from embracing your audience’s expectations, not fighting them. Find out what is annoying your readers. Give people what they want and they will beat a path to your door.

We nerds love bickering about technology for its own sake. Indeed, there’s always something to be gained by criticizing the state of the art and goading it into providing more of a good thing. But the most profound leaps are often the result of applying criticism only as strictly needed, in the context of a more constructive post. By all means criticize, but also research, expose, and propose what could be done better and how. Go after those things and you’ll really make people love you. Accentuate the positive. Eliminate the negative.

How does ETC work? A sad story of application compatibility

This post is, in fact, not quite like the others. It is a parody of the Old New Thing I wrote for parody week, so take it with a big grain of salt…

Commenter Contoso asked: “What’s the deal with ETC? Why is it so complicated?”

First, I will note that ETC (which stands for Coordinated Eternal Time) is in fact an international standard, having been adopted by ISO as well as national and industrial standards bodies. The specification is also documented on MSDN, but that’s more for historical reasons than anything else at this point, really. But okay, let’s discuss ETC, seeing as that’s what you want me to do.

ETC is not complicated at all if you follow from the problem to its logical conclusion. The youngest among you might not realize it, but the year 2000 bug was a Big Deal. When it began to be discussed in the public sphere, starting in 1996 or so, most people laughed it off, but we knew, and had always known, that if nothing was done, computers, and in fact all of civilization, would be headed for a disaster of biblical proportions. Real wrath of God type stuff. The dead rising from the grave! Human sacrifice! Dogs and cats living together… mass hysteria!

The problem originated years before that, when some bright software developers could not be bothered to keep track of the whole year and instead only kept track of the last two digits; so, for instance, 1996 would be stored as just 96 in memory, and when read back it was implicitly assumed to have a “19” before it, and so would be restored as “1996” for display, processing, etc. This just happened to work because the years they saw started with “19”, and things would go wrong as soon as years no longer did, starting with 2000.

What happened in this case (or rather, what would have happened had we let it happen; this was run under controlled experimental conditions in our labs) was that, for starters, these programs would print the year in the date as “19100”. You might think that would not be too bad, even though it would have some regulatory and other consequences, and would result in customers blaming us, and not the faulty program.

But that would in fact be if they even got as far as printing the date.

Most of them just fell over and died from some “impossible” situation long before that. Some would take the date given by the API, convert it to text, blindly take the last two digits without checking the first two, and, when comparing with the date in their records to see how old the last save was, end up with a negative age, since as far as the year was concerned they computed 0 – 99, and the program would crash on a logic error. Others tried to behave better by computing the difference between the year returned by our API and 1900, but when they processed their “two-digit” year, which was now “100”, for display, it would take up one more byte than expected and corrupt whatever data was after it, which quickly led to a crash.
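
A toy sketch of those two failure modes, with invented values, for readers who have never had to debug one:

    // Two-digit year arithmetic in the year 2000 (all values invented):
    let lastSavedYear = 99                    // two-digit year stored on disk, meaning 1999
    let currentTwoDigit = 2000 % 100          // a sloppy program keeps only the last two digits: 0
    let age = currentTwoDigit - lastSavedYear // 0 - 99 = -99, the "impossible" negative age

    // And the display bug: prepending "19" to (2000 - 1900) yields a five-character year.
    let printedYear = "19\(2000 - 1900)"      // "19100"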

And that was if you were lucky: some programs would appear to work correctly, but in fact had subtle yet devastating problems, such as computing interest backwards or outputting the wrong ages for people.

We could not ignore the problem: starting around noon UTC on the 31st of December 1999, when the first parts of the world entered 2000, we would have been inundated with support requests for these defective products, never mind that the problem was not with us.

And we could not just block the faulty software: even if we had not already suspected as much, a survey showed every single one of our (important) customers was using at least one program which we knew would exhibit issues come the year 2000, with some customers using hundreds of such programs! And that’s without accounting for software developed internally by the customer; after requesting some samples, we found that most of this software would be affected as well. Most of the problematic software was considered mission-critical, could not just be abandoned, and had to keep working past 1999, come hell or high water.

Couldn’t the programs be fixed and customers get an updated version? Well, for one, in the usual case the company selling the program would have been happy to do so, provided customers paid for the upgrade to the updated version of the software, and customers reacted badly to that scenario.

And that assumes the company that developed the software was still in business.

In any case, the program might have been written in an obsolete programming language like Object Pascal, using the 16-bit APIs, and could no longer be built for lack of a surviving install of the compiler, or even for lack of a machine capable of running the compiler. Some of these programs could not be fixed without fixing the programming language they used or a library they relied on, repeating the problem recursively with those suppliers, who might themselves have gone out of business. Even if the program could technically be rebuilt, maybe its original developer was long gone from the company and no one else could have managed to do it.

But a more common case was that the source code for the program was in fact simply lost to the ages.

Meanwhile, we were of course working on the solution. We came up with an elegant compatibility mechanism by which any application or other program which did not explicitly declare itself to support the years 2000 and beyond would get dates from the API in ETC instead of UTC. ETC was designed so that 1999 is the last year to ever happen. It simply never ends. You should really read the specification if you want the details, but basically, how it works is that in the first half of 1999, one ETC second is worth two UTC seconds, so that half can represent one UTC year; then in the first half of what is left of 1999, which is a quarter year, one ETC second is worth four UTC seconds, so again one UTC year in total; and in the first half of what is left after that, one ETC second is worth eight UTC seconds, etc. So we can fit an arbitrary number of UTC years into what seems to be one year in ETC, and therefore from the point of view of the legacy programs. Clever, huh? Of course, this means the resolution of legacy programs decreases as time goes on, but these programs only had a limited number of seconds they could ever account for in the future anyway, so this makes the best use of the limited resource they have left. Things become a bit more complex once we start dividing 1999 into intervals that are no longer whole numbers of seconds, but the general principle remains.
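
If you want to follow the arithmetic, here is a sketch of the mapping the (entirely fictional) specification describes: UTC year 1999+n is squeezed into the slice [1 - 2^-n, 1 - 2^-(n+1)] of the never-ending ETC year 1999, each successive UTC year getting half of the ETC time that remains:

    import Foundation

    // Fraction of the ETC year 1999 corresponding to a given UTC moment,
    // per the fictional spec above; f is the fraction of the UTC year elapsed.
    func etcFractionOf1999(utcYear: Int, fractionOfYearElapsed f: Double) -> Double {
        let n = Double(utcYear - 1999)
        return (1 - pow(2, -n)) + f * pow(2, -(n + 1))
    }

    print(etcFractionOf1999(utcYear: 2013, fractionOfYearElapsed: 0.5))
    // ≈ 0.99995: mid-2013 lands very late in the evening of the eternal 31st of December.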

Of course, something might seem off in the preceding description, and you might guess that things did not exactly come to be that way. And indeed, when we deployed the solution in our usability labs, we quickly realized people would confuse ETC dates coming from legacy apps with UTC dates, for instance copying an ETC date and pasting it where a UTC date was expected, etc., causing the system to be unusable in practice. That was when we realized the folly of having two calendar systems in use at the same time. Something had to be done.

Oh, there was some resistance, of course. Some countries in particular dragged their feet. But in the end, when faced with the prospect of a digital apocalypse, everyone eventually complied, and by 1998 ETC was universally adopted as the basis for official timekeeping, just in time for it to be deployed. Because remember: application compatibility is paramount.

And besides, aren’t you glad that right now it’s the 31st of December, 23:04:06.09375? Rather than whatever it would be had we kept “years”, which would be something in “2013”, I guess, or something equally ridiculous.

Apple has a low-cost iPhone, but they should have a different one

From time to time, rumors surface about Apple being poised to introduce some sort of cheap iPhone to fill a hole in their lineup, or so-called experts pontificate on why Apple needs to introduce such a device in order not to leave that market entirely to Android. The way this gets discussed gets me wondering how these people could possibly not have noticed that Apple still sells the iPhone 4 and 4S, now as lower-cost iPhones; I don’t know, maybe these don’t count because they have cooties? In reality, while they were indeed introduced some time ago, they stand comparison with “base” Android models as far as I can tell, and buyers do not seem to be snubbing them.

If only the conversation were at least about whether the iPhone 4 and 4S are appropriate base or low-cost models for Apple to sell, as there would be things to say on the matter (but no, it is always framed as if these two did not exist). Indeed, I think this strategy was justified when the iPhone 3G kept being sold alongside the new 3GS, and it might have been tolerable to keep the iPhone 3GS alongside the new iPhone 4, but by now Apple should have long since changed strategy. For a number of reasons, as the iPhone product lineup, or rather hardware platform, matures, it should instead include low-cost models designed for that purpose.

The first reason goes back to the dark days of the Mac in the nineties, when former top-of-the-line Macs would be discounted and sold as base models once another Mac replaced them at the top of the line; as a result, people would be hesitant to buy the (more) affordable Mac, which they knew was not really up to date, and did not want to shell out for the current model, so they ended up just waiting for it to be discounted. It was hard for Apple to have a clear message on which Mac someone in a given market should be buying: so yesterday that model was not appropriate for consumers, but today it is? The heck? Fortunately, Steve Jobs put an end to that when he introduced the Mac product matrix (with its two dimensions, consumer-professional and portable-desktop, and four models: iMac, PowerMac, iBook, and PowerBook).

Which brings me to the second reason, which is that not all technologies make sense to introduce for professionals exclusively, even at first; USB, in fact introduced with the first iMac before it appeared on any PowerMac, is a prime example. Today, we have, for instance, SSDs, at least as an option or in the form of a hybrid drive.

But I think the most important reason is that having dedicated base models would allow Apple to sell devices in which the hardware design flaws of yesterday (visible or invisible) are fixed, instead of having to keep taking them into account for the next X years of software updates. While some new features in iOS releases have not been made available on the older devices they run on, the base OS has to run on them nevertheless, and even with this feature segmentation, performance regressions have been observed. The other side of having specifically developed low-cost iPhone models is that Apple would use these devices to seed current technologies, to be better able to introduce new and interesting things in future versions of iOS (think, say, OpenCL), for instance because third-party developers are more likely to adopt a technology if they know every device sold in the last year supports it; this goes doubly if the technology cannot serve as an optional enhancement, but is instead meant to have apps depend on it.

The example the iPhone should follow, where Apple itself does it right, is the iPad, and in particular the iPad mini. I joke that with the 128GB iPad there are now 48 (count them) iPad SKUs, but that’s in fact OK, as the lineup is very orthogonal: from the consumer’s viewpoint, color and connectivity are akin to build-to-order options on top of the capacity variations of the 3 base models; it must be somewhat challenging to manage supply and resupply, but apparently Apple’s operations people are managing it; and sometimes the one combination you want is out of stock at the store, so you end up getting a slightly different one, but that’s minor in the grand scheme of things. On the other hand, the introduction of the iPad mini was indispensable to diversify the iPad’s presence and make the iPad feel not like a single product, but like a real hardware platform.

The thing done best in that respect with the iPad mini is that internally, it is pretty much the iPad 2’s guts with the 4th-generation iPad’s connectivity: Lightning port, Bluetooth 4.0, and LTE as an option. I/O is one of those areas where it often does not make sense to introduce technologies only at the high end at first, because of network effects: these apply whether the I/O connects devices to each other, in which case you need to clear a penetration threshold before people can use it in practice, or whether the I/O is for accessories, in which case hardware accessory makers are more likely to follow the more widely the technology is seeded.

Now I’m not saying it is going to be easy to do the same for the iPhone. Each iPhone hardware model is clearly very integrated, requiring a lot of investment not only in hardware design, but also in the supply chain and the assembly infrastructure. Designing each model to be sold for only one year would make it harder to amortize these investments, but planning for some hardware reuse during the design process could compensate to an extent, and I think the payoff, a clearer and stronger product lineup, better technology introduction, and lower iOS maintenance costs, would make it worth it.

PSA: Subscribe to your own RSS feed

I like RSS. A lot. It allows me to efficiently follow the activity of a lot of blogs that do not necessarily update predictably (I mean, I’m guilty as charged here). So when things break down, in particular in silent or non-obvious ways, it is all the more grating. To avoid causing this for your readers, please follow the RSS feed of your own blog; it does not have to be anything fancy, you could do it for free with the built-in feature of Firefox, just do it.

Case in point: at some point Steven Frank was talking about the <canvas> tag (in a post since gone in one of his blog reboots). It showed up fine on the website, but for some reason I cannot fathom, the angle brackets did not end up being escaped in the XML feed, and NetNewsWire dutifully interpreted it as an actual canvas. With no closing tag, none of the text after the “tag” showed up in NetNewsWire, so as far as I could tell the blog post just straight up ended there (before the tag) in mysterious fashion. Fortunately I thought that didn’t look like Mr Frank’s usual style and investigated, but I might never have noticed anything was wrong.
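
For what it’s worth, the fix on the publishing side is mechanical: any markup embedded in a feed item’s body has to be escaped (or wrapped in CDATA) before it goes into the XML. Here is a minimal sketch of that escaping step in Swift; the escapeForXML name is mine, and this is of course not how that particular blog engine was actually implemented.

    // Escape the XML-significant characters so that HTML markup in a post
    // body survives as text inside an RSS <description> element, instead
    // of being parsed as an actual element (like a stray <canvas>).
    func escapeForXML(_ text: String) -> String {
        var result = ""
        for character in text {
            switch character {
            case "&":  result += "&amp;"
            case "<":  result += "&lt;"
            case ">":  result += "&gt;"
            case "\"": result += "&quot;"
            default:   result.append(character)
            }
        }
        return result
    }

    // escapeForXML("Drawing with the <canvas> tag")
    // yields "Drawing with the &lt;canvas&gt; tag"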

You might say it was because of a bug at some point in the feed generation, but that is not the point. I mean, this is the web, in permanent beta, of course bugs are going to happen. The point is that apparently the author never noticed or fixed it, so he couldn’t possibly have been following his own feed; had he been doing so, he would have noticed, and would have fixed it in any number of ways.

Another case was when the feed URL for Wil Shipley’s blog changed. He did not announce it anywhere, and I did not notice until I went to his site at work and saw a post I had not seen before (one which had been posted well before that week). Had he been following his own feed, he would have noticed at some point after posting that the post never showed up in his reader, and would have remembered to notify his RSS subscribers in some way.

So kids, don’t be the lame guy: follow your own RSS feed. Otherwise, you’re publishing something you are not testing, and I think we recently talked about how bad that was. The more you know…

Proposal for a standard plain text format for iOS documents

Since the last time we visited the matter of working with documents on iOS, I have read with great interest more write-ups of people describing how they work on the iPad, because of course it is always good to hear about people being able to do more and more things on the iPad, but also because (as John Gruber so astutely noted) Dropbox almost always seems to be involved. I don’t feel that Dropbox solves the external infrastructure problem I raised in my first post on the matter; I consider Dropbox external infrastructure as well, if only because it requires you to be connected to the Internet merely to transfer documents locally on your iPad (and that’s not a knock on Dropbox, mind you, this is entirely the doing of restrictions Apple imposes).

I am going to concede one advantage to the current de facto iOS model of documents in a per-app sandbox plus, next to that, an explicit container for document interchange, which is that it forces apps to actually consider supporting interchange document formats. With the Grand Unified Model, and whatever we call the model Mac OS X has used since Snow Leopard, applications would at first only concern themselves with creating documents to save the state of the user’s work so they can pick it up later, without concern for other applications; and by the time their authors came around to considering standard formats, or at the very least an interchange format stripped of data that are of no interest to another app (e.g. which tool was selected at the time the image was saved) or that amount to implementation details, they would realize that other applications had already managed to slog through their undocumented document format to open it, and as a result they did not feel so pressured to support writing another document format. The outcome is that the onus of information interchange falls only on the readers, which need to keep adding support for anything the application that writes the document feels like adding to the format, in whatever way it feels like doing so.

However, with the de facto model used by iOS, apps may start out the same way, but when they want to claim Dropbox support, they had damn well better write documents there in a standard or documented interchange format, or their claims of Dropbox support become pretty much meaningless. I am not sure the tradeoff is worth it compared to the loss of being able to get at the original document directly as a last resort (in case, for instance, the document exchanged on Dropbox is missing information compared to the document kept in the sandbox), but it is indeed an advantage to consider. An issue, though, is that as things currently stand there is no one to even provide recommendations as to the standard formats to use for exchanging documents on iOS: Dropbox the company is not in a position to do so, and as far as Apple is concerned, document interchange between iOS apps does not exist.

So when I read that the format apps most often use to exchange data over Dropbox is plain text, it saddens me to no end, even if this is better than proprietary formats. Why? Because plain text is a lie. There is no such thing as plain text. Plain text is a myth created by Unix guys to control us. Plain text is a tall tale parents tell their children. Plain text is what you find in the pot at the end of the rainbow. Plain text is involved in Hercules’ labors and Ulysses’ odyssey. Perpetual motion machines run on plain text. The Ultimate Question of Life, the Universe and Everything is written in plain text.

I sense you’re skeptical, so let me explain. Plain text is pretty well defined, right? ASCII, right? Well, let me ask you: what is a tab character supposed to do? Bring you over to the next tab stop, every 4 columns? Except that on the Mac tab stops are considered to occur every 8 columns instead (and even on Unix not everyone agrees). And since we are dealing with so-called plain text, the benefit of being able to align text in a proportional-font context does not apply: if you were to rely on that, then switch to another editor that uses a different font, or switch the editing font in your editor, your carefully aligned document would become all out of whack. Finally, any memory saving brought by the tab character has become insignificant given today’s RAM and storage capacities.
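
To make the ambiguity concrete, here is a small sketch in Swift (the expandTabs helper is mine and purely illustrative) of how the very same line lays out differently depending on whether tab stops fall every 4 or every 8 columns:

    // Expand tabs to spaces, advancing to the next multiple of tabWidth
    // columns, the way a terminal or editor lays them out on screen.
    func expandTabs(_ line: String, tabWidth: Int) -> String {
        var result = ""
        var column = 0
        for character in line {
            if character == "\t" {
                let spaces = tabWidth - (column % tabWidth)
                result += String(repeating: " ", count: spaces)
                column += spaces
            } else {
                result.append(character)
                column += 1
            }
        }
        return result
    }

    print(expandTabs("ab\tcd", tabWidth: 4))  // "ab  cd":     "cd" starts at column 4
    print(expandTabs("ab\tcd", tabWidth: 8))  // "ab      cd": "cd" starts at column 8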

Next are newlines. Turns out, hey, no one agrees here either: you’ve got the carriage return, the line feed, and the two together (in both orders). More subtle is wrapping… What’s this, you say? Editors always word wrap? Except pico/nano, for instance, don’t by default. And Emacs, in fact, does character wrapping: by default it will cut in the middle of a word. It seems inconsequential, but it causes users of non-wrapping editors to complain that others send them documents with overly long lines, while these others complain that the first guys hard-wrap their lines at an arbitrary width, causing for instance unsightly double wrapping when the text is viewed in a window narrower than that arbitrary width.
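
Here is, still in Swift and still purely as an illustration (normalizeLineEndings is my name for it, not anything standard), the kind of chore every reader of so-called plain text ends up doing just to agree on where lines end:

    // Normalize the historical line endings (CR, CR LF, LF CR) down to a
    // single LF each, leaving genuinely blank lines alone.
    func normalizeLineEndings(_ text: String) -> String {
        var result = ""
        var index = text.startIndex
        while index < text.endIndex {
            let character = text[index]
            index = text.index(after: index)
            if character == "\r" || character == "\n" {
                result.append("\n")
                // Swallow the second half of a CR LF or LF CR pair.
                if index < text.endIndex {
                    let next = text[index]
                    if (character == "\r" && next == "\n") || (character == "\n" && next == "\r") {
                        index = text.index(after: index)
                    }
                }
            } else {
                result.append(character)
            }
        }
        return result
    }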

And of course, you saw it coming: character encoding. We left the 7-bit ASCII world eons ago. Everything is Unicode-capable by now, but some idiosyncrasies still remain: for instance, as far as I can tell, out of the box TextEdit in Mac OS X still opens text files as MacRoman by default.

This is, simply, a mess. There is not one, but many plain text formats. So what can we do?

The proposal

Goals, scope and rationale (non-normative)

The most important, defining characteristic of the proposal for Standard Plain Text is that it is meant to store prose (or poetry). Period. People might sometimes happen to use it for, e.g., source code, but these use cases shall not be taken into consideration for the format. If you want to edit makefiles or a tab-separated values file, use a specialized tool. However, we do want to make sure that more specialized humane markup/prose-like formats can be built on top of the proposal; in fact, Markdown and Textile over Standard Plain Text ought to be trivially definable as being, well, Markdown and Textile over Standard Plain Text.

Then, we want to be able to recover the data on any current computer system in case of disaster. This means compatibility with existing operating systems, or at least being able to recover the data using only programs built into these operating systems.

And we want the format to be defined thoroughly enough to limit disagreements and misunderstandings as much as possible, while keeping it simple enough to limit the risk of implementation mistakes.

Requirements

Standard Plain Text files shall use the Unicode character set, and be encoded in UTF-8. Any other character set or encoding is explicitly forbidden.

This seems obvious, until you realize it causes Asian text, among others, to take up 50% more storage than it would using UTF-16, so there is in fact a tradeoff here, and compatibility was favored; I apologize to all our Japanese, Chinese, Korean, Indian, etc. friends.
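
The tradeoff is easy to check; a quick sketch in Swift, counting the encoded sizes of an arbitrary short Japanese greeting (the string itself is just my example):

    // Most CJK characters take 3 bytes in UTF-8 but only 2 in UTF-16,
    // hence the roughly 50% size penalty mentioned above.
    let japanese = "こんにちは、世界"          // eight characters
    let utf8Bytes = japanese.utf8.count        // 24 bytes (3 per character)
    let utf16Bytes = japanese.utf16.count * 2  // 16 bytes (2 per character)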

Standard Plain Text files shall not contain any character in the U+0000 – U+001F range, inclusive (ASCII control characters), except for U+000A (LINE FEED). As a result, tabulation characters are forbidden and the line ending shall be a single LINE FEED. Standard Plain Text files shall not contain any character in the U+007F – U+009F range, inclusive (DELETE and the C1 range). Standard Plain Text files shall not contain any U+FEFF character (ZERO WIDTH NO-BREAK SPACE, aka byte order mark), either at the start or anywhere else. All other code points between U+0020 and U+10FFFF, inclusive, that are allowed in Unicode are allowed, including as-yet unassigned ones.

Standard Plain Text editors shall word wrap, and shall support arbitrarily long stretches of characters and bytes between two consecutive LINE FEEDs. They may support proportional text, but they shall support at least one monospace font.

These requirements shall be enforced at multiple levels in Standard Plain Text editors, at least at the user input stage and when writing to disk: pasting in text containing forbidden characters shall not result in them being written as part of a Standard Plain Text file. Editors may handle tabulation given as input any way they see fit (e.g. inserting N spaces, inserting enough spaces to reach the next column that is a multiple of N, etc.) as long as it does not result in a tab character being written as part of a Standard Plain Text file under any circumstance.
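
As a sketch of what that enforcement could look like (in Swift; nothing here is normative, and the function names are mine), a filter like the following could be run both over pasted input and over the buffer about to be written out:

    // Allow only the code points the proposal permits: reject C0 controls
    // other than LINE FEED, DELETE and the C1 range, and the byte order mark.
    func isAllowedInStandardPlainText(_ scalar: Unicode.Scalar) -> Bool {
        switch scalar.value {
        case 0x0A:        return true   // LINE FEED, the only allowed control
        case 0x00...0x1F: return false  // other C0 controls, including TAB and CR
        case 0x7F...0x9F: return false  // DELETE and the C1 range
        case 0xFEFF:      return false  // byte order mark, anywhere in the file
        default:          return true
        }
    }

    func sanitizeForStandardPlainText(_ input: String) -> String {
        var output = ""
        for scalar in input.unicodeScalars {
            if scalar == "\t" {
                output += "    "  // one possible policy: a pasted tab becomes four spaces
            } else if isAllowedInStandardPlainText(scalar) {
                output.unicodeScalars.append(scalar)
            }
            // Anything else is dropped; a real editor would likely warn the user instead.
        }
        return output
    }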

Standard Plain Text files should have the .txt extension for compatibility. No MacOS 4-char type code is specified. No MIME type is specified for the time being. If a Uniform Type Identifier is desired, net.wanderingcoder.projects.standard-plain-text (conforming to public.utf8-plain-text) can be used as a temporary solution.

Clearly this is the part that still needs work. Dropbox supports file metadata, but I have not fully investigated what it supports, in particular whether there is room for a UTI.

Appendix A (non-normative): recovery methods

On a modern Unix/Linux system: make sure the locale is a UTF-8 variant, then open with the text editor of your preference.

On a Mac OS X system: in the open dialog of TextEdit, make sure the encoding is set to Unicode (UTF-8), then open the file. Since Mac OS X is a modern Unix, the previous method can also be applied.

On a modern Windows system (Windows XP and later): in the open dialog of WordPad, make sure the format to open is set to Text Document (.txt) (not Unicode Text Document (.txt)), then open the file. Append a newline, then delete it, then save the file. In the open dialog of Notepad, make sure the encoding is set to UTF-8, then open the file you just saved.

The reason for this roundabout method is that WordPad does not support UTF-8 (Unicode in its opening options in fact means UTF-16) but does support line feed line endings, while Notepad does not support line feed line endings. Tested on Windows XP.