Parody week

Well, I hope you enjoyed parody week last week on this blog. After the parody of The Old New Thing I posted on Monday as an April Fools’ joke, I admit I got a little carried away, and posted a parody of Hypercritical on Tuesday, a parody of Coding Horror on Wednesday, and concluded with a fake parody of Fake Steve on Thursday.

So of course these posts last week were not entirely serious. But… let’s see, for instance, Did Apple just cargo cult the iPhone platform?; clearly, I would not use a cargo cult metaphor for the iOS platform, even with precautions, outside of a parodic context: “cargo cult” is a very specific and grave accusation that simply does not apply to the iOS platform. But just because this was a not-really-serious, parodic post does not mean it was only for laughs and entertainment: if you did not come away from that post thinking there was a deeper message to it, then I have not done my job properly (hey, what’s this bold text doing here? Oh God, Jeff is contaminating me). I think that good satire makes you laugh, then makes you think, and I hope I was up to that standard.

As always, thank you for reading Wandering Coder.

Good riddance, Google. Don’t let the door hit you on the ass on the way out.

This post is, in fact, not quite like the others. It is a parody of Fake Steve I wrote for parody week, so take it with a big grain of salt…

See here. Basically, the rocket scientists at Google have decided, after having used our WebKit for years in Chrome, that they, uh, suddenly did not need us any more, and forked WebKit like the true leeches they are. Dude, we are the ones who found KHTML and made WebKit what it is; if it weren’t for us, KHTML would only be known to three frigtards in west Elbonia and you would have had no engine to put in your hotrodded race car of a browser, so I guess, thanks for nothing, bastards.

Truth is, good riddance. Those know-it-alls at Google have been a pain in our ass ever since Chrome debuted. Where do I start? Like with V8. Oh God V8… I… uh…

Okay, I can’t do this. I can’t parody Fake Steve. I’ve got nothing on Dear Leader. He was pitch perfect: you would get the feeling the real Steve Jobs was writing just for you in his secret diary, all the while being satiric and outrageous enough that at some level you knew it was fake, but at the same time the persona was so well maintained that you easily suspended disbelief and could not help thinking Steve could have shared these opinions. And he was insightful, oh of course he was: in the middle of some ludicrous story you would feel enlightened about the way the tech industry or the press or tech buyers worked, and it didn’t matter if it was made up, because it was a way to provoke us into thinking about how the sausage factory really worked inside. He was the perfect yin-yang of the old-school professional who has seen it all and knows how it works behind the hype, and of the new-media guy who can drop a bunch of paragraphs without a word limit on a whim on a subject he wants to tackle, and is not afraid to try new things and new ways of storytelling. Favorites? Ah! Apple, the Old Borg, the New Borg, the Linux frigtards, the old dying press, these upstart bloggers, the consumers standing in line, PR flacks, software developers, no one was safe.

I can see him now, looking down on me from wherever he is now, laughing at my pathetic attempt at reviving him, even for a minute. I know he is at peace there, meditating, waiting for his reincarnation, because oh yes, he will be reincarnated some day, in a different form: Fake Steve is Buddhist too, he most certainly did not meet St Peter at the pearly gates, and he has unfinished business in this world; he was not done restoring a sense of childlike sarcastic wonder in our lives. I’m waiting, waiting for the day I will see a blog or webcomic or column, because Fake Steve has a sense of humor and may throw us all for a loop by reincarnating in the old press, or a Twitter feed, or an animation (but not a Flash animation, there are limits), and I will see the telltale signs, the snark, the character play, the insightfulness, and I will think: “Yes, Fake Steve has been reincarnated.”

Meanwhile, Fake Steve, I know you are now in a better place and cannot come back as such, but if you could hear my prayer: Dan… has not been good lately, to put it mildly. So… could you try and inspire him a bit while he is away from the echo chamber? Not for him to write as you, no, just so that when he eventually returns to us, after having spent some time away from it all, he will write good things, no matter what they are. Because we can’t stand looking at him like this.

The Joy of Tech comic number 995: Yes, Virgil, there is a Fake Steve Jobs

Did Apple just cargo cult the iPhone platform?

This post is, in fact, not quite like the others. It is a parody of Coding Horror I wrote for parody week, so take it with a big grain of salt…

In The iPhone Software Revolution, I proclaimed that the iPhone was the product Apple was born to make:

But a cell phone? It’s a closed ecosystem, by definition, running on a proprietary network. By a status quo of incompetent megacorporations who wouldn’t know user friendliness or good design if it ran up behind them and bit them in the rear end of their expensive, tailored suits. All those things that bugged me about Apple’s computers are utter non-issues in the phone market. Proprietary handset? So is every other handset. Locked in to a single vendor? Everyone signs a multi-year contract. One company controlling your entire experience? That’s how it’s always been done. Nokia, Sony/Ericsson, Microsoft, RIM — these guys clearly had no idea what they were in for when Apple set their sights on the cell phone market — a market that is a nearly perfect match to Apple’s strengths.

Apple was born to make a kick-ass phone. And with the lead they have, I predict they will dominate the market for years to come.

But never mind the fact a similar reasoning could have been made of the Macintosh when it came out. What bothers me today is the realization Apple might have handled the opening of the iPhone platform like a cargo cult:

The term “cargo cult” has been used metaphorically to describe an attempt to recreate successful outcomes by replicating circumstances associated with those outcomes, although those circumstances are either unrelated to the causes of outcomes or insufficient to produce them by themselves. In the former case, this is an instance of the post hoc ergo propter hoc fallacy.


cargo cult phone by dret, on Flickr; used under the terms of the Creative Commons CC BY-SA 2.0 license

By which I mean that Apple decided they needed to open the iPhone as a development platform, but I wonder to what extent they then did so by giving it the trappings of a platform more than the reality of one: third parties can sell their apps for running on it, right? So it must be a platform, right? Well… And I don’t mean the APIs are the problem either; it’s more like… everything else:

  • Apple has a very restrictive idea of what kind of use cases third parties are allowed to provide solutions to: everything that does not fit their idea of an app is rejected, or is impossible. For instance, installing third-party keyboards is not possible on iPhone:

    But sometimes, an Apple product’s feature lands at the wrong side of the line that divides “simple” from “stripped down.” The iPhone keyboard is stripped-down.

    If you don’t like how Android’s stock keyboard behaves, you can dig into Settings and change it. If you still don’t like it, you can install a third-party alternative. And if you think it’s fine as-is, then you won’t be distracted by the options. The customization panel is inside Settings, and the alternatives are over in the Google Play store.

    This? It’s from Andy Ihnatko, in an article in which he explains why he switched from iPhone to Android. Andy. Ihnatko. When Mac users of 25 years start switching away from the iPhone, I’d be very worried if I were in Cupertino.

  • Even for those use cases third parties are allowed to provide solutions to, they are quite restricted: when Apple added support for multitasking, in iOS 4, they more or less proclaimed they had covered every desirable multitasking scenario, and have not added any since. It feels a tad preposterous to me that there would have been no need for even a single new multitasking scenario in the two years since.

  • Even when third parties can sell their wares, they do so at the pleasure of the king. Apple seems to consider iPhone developers to be contractors/authors developing solely for Apple purposes. And paid by commission. Without any advance. And without any assurance when they begin developing that their app will be accepted in the end.

  • Apple apps do not play by the same rules other apps do. They are not sandboxed, or not as much. They use private APIs off-limits to other apps. They get a pass on many iOS App Store restrictions. In short, Apple eats People Food, and gives its developers Dog Food:

    Microsoft has known about the Dogfood rule for at least twenty years. It’s been part of their culture for a whole generation now. You don’t eat People Food and give your developers Dog Food. Doing that is simply robbing your long-term platform value for short-term successes. Platforms are all about long-term thinking.

  • In the same spirit, Apple introduced iCloud, gave users the perception that Apple did the hard work and that apps would merely have to opt in, sold it to developers as the best thing since sliced bread, then promptly went and did not use it themselves in combination with, er, the technology they have consistently recommended for persistent storage (while ostensibly supporting this combination), without giving developers the ability to audit synchronization issues either. And now it turns out, and people come to the realization, that iCloud Core Data syncing does not work. Shocker.

  • Apple even tried at some point to prohibit non-C programming languages for iPhone development, with a clear aim to ban a number of alternative development environments, not just Flash. But just like Apple cannot provide all the software to fulfill iPhone user needs, Apple cannot provide all the software to fulfill iPhone developer needs either. A platform is characterized not just by an ecosystem of apps, but also by an ecosystem of developer tooling and libraries behind the scenes. They ended up relenting on this, but if I were an iPhone developer, I would not be very reassured.

But wait, surely, that can’t be. Apple knows all about platforms and the value of platforms, right? VisiCalc, right? But that knowledge may only have encouraged Apple to provide something that looks like a platform, rather than a platform. As for the iPhone not being Apple’s first platform, there is a school of thought that says Steve Jobs did not build platforms, except by accident; so according to this line of thought, the Apple II and the Mac became honest-to-God platforms not because of Apple, but in spite of Apple. And now, for the first time, we get to see the kind of platform Apple creates when it actually calls the shots. It looks like a platform, sounds like a platform, tastes like a platform, smells like a platform, walks like a duck platform… but the jury is still out on whether it actually is a platform.

There is a case to be made for reducing your dependencies. Apple clearly is not going to let anyone hold it back; but as incredibly skilled as the people working at Apple are, is “not being held back” going to be enough to keep up when Android, propelled by being a more complete and inclusive platform, threatens to move past Apple?

Apple can still turn this around. Indeed, the issue lies not so much in these restrictions being present at first as in so few of them having been lifted since. The key, of course, will be in figuring out which ones to lift, and how. And this will require Apple to reconsider the motions it goes through to bring cargo, regardless of the cargo those motions have brought so far, and instead focus on improving its limited understanding of what it is that actually makes a platform. In order to really bring in the cargo.

Annoyance-Driven Blogging

This post is, in fact, not quite like the others. It is a parody of Hypercritical I wrote for parody week, so take it with a big grain of salt…

I’ve been reading Hypercritical, John Siracusa’s new blog outside of Ars Technica, and it has been good to read more of John after the glacial pace at which his blog there had been updating lately.

But even on his own space, John has been unable to escape some of the trappings of his past. A blog that updates with some frequency naturally lends itself to multi-post reading sessions. But reading a post about the annoyance of having to watch a minute and a half of opening credits before each episode can get tiresome.

To be fair to John, the existence of this kind of post may not be entirely under his control, given his quasi-OCD tendencies. But getting bogged down in these details misses the point.

Yes, we all know and love John Siracusa for his, well, hypercritical tendencies, but these are best consumed as part of a post on a broader subject, like a spice; having nothing but that in a post quickly gets to be too much.

This may sound comically selfish, but true innovation comes from embracing your audience expectations, not fighting them. Find out what is annoying your readers. Give people what they want and they will beat a path to your door.

We nerds love bickering about technology for its own sake. Indeed, there’s always something to be gained by criticizing the state of the art and goading it into providing more of a good thing. But the most profound leaps are often the result of applying criticism only as strictly needed, in the context of a more constructive post. By all means criticize, but also research, expose, and propose what could be done better and how. Go after those things and you’ll really make people love you. Accentuate the positive. Eliminate the negative.

How does ETC work? A sad story of application compatibility

This post is, in fact, not quite like the others. It is a parody of the Old New Thing I wrote for parody week, so take it with a big grain of salt…

Commenter Contoso asked: “What’s the deal with ETC? Why is it so complicated?”

First, I will note that ETC (which stands for Coordinated Eternal Time) is in fact an international standard, having been adopted by ISO as well as national and industrial standard bodies. The specification is also documented on MSDN, but that’s more for historical reasons than anything else at this point, really. But okay, let’s discuss ETC, seeing as that’s what you want me to do.

ETC is not complicated at all if you follow from the problem to its logical conclusion. The youngest among you might not realize it, but the year 2000 bug was a Big Deal. When it began to be discussed in the public sphere, starting in 1996 or so, most people laughed it off, but we knew, and always knew, that if nothing was done, computers, and in fact all civilization, would be headed for a disaster of biblical proportions. Real wrath-of-God type stuff. The dead rising from the grave! Human sacrifice! Dogs and cats living together… mass hysteria!

The problem originated years before that, when some bright software developers could not be bothered to keep track of the whole year and instead only kept track of the last two digits; so, for instance, 1996 would be stored as just 96 in memory, and when read back it was implicitly assumed to have “19” before it, and so would be restored as “1996” for display, processing, etc. This just happened to work because all the years these programs saw started with “19”, and things would go wrong as soon as years no longer did, starting with 2000.

What happened (or rather, what would have happened if we had let it happen; this was run under controlled experiment conditions in our labs) in this case was that, for starters, these programs would print the year in the date as “19100”. You might think that would not be too bad, even though it would have some regulatory and other consequences, and would result in customers blaming us, and not the faulty program.

But that would in fact be if they even got as far as printing the date.

Most of them just fell over and died from some “impossible” situation long before that. Some would take the date given by the API, convert it to text, blindly take the last two digits without checking the first two, and, when comparing with the date in their records to see how old the last save was, end up with a negative age, since as far as the year was concerned they computed 0 − 99; the program would then crash on a logic error. Others would try to behave better by computing the difference between the year returned by our API and 1900, but when they tried to process their “two-digit” year, which was now “100”, for display, it would take up one more byte than expected and corrupt whatever data was after it, which quickly led them to a crash.
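The two failure modes described above are easy to sketch; here is a hypothetical reconstruction in Python (no actual affected program worked exactly like this, of course):

```python
def display_year_two_digit(year):
    # Variant 1: keep only the last two digits and glue "19" back on.
    # Works for 1900-1999; in 2000 this displays "1900".
    return "19%02d" % (year % 100)

def display_year_tm_style(year):
    # Variant 2: store year - 1900 (as C's struct tm does) and
    # concatenate; in 2000 the stored value is 100, so this displays
    # "19100" and takes one more character than the program expected.
    return "19%d" % (year - 1900)

def save_age(saved_year, current_year):
    # Comparing two-digit years: a file saved in 1999 and examined in
    # 2000 comes out 0 - 99 = -99 years old, an "impossible" value.
    return (current_year % 100) - (saved_year % 100)
```

The point being that each variant fails differently: one silently shows the wrong year, one corrupts memory, and one crashes on a logic error.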

And that was if you were lucky: some programs would appear to work correctly, but in fact had subtle yet devastating problems, such as computing interest backwards or outputting the wrong ages for people.

We could not ignore the problem: starting about noon UTC on the 31st of December 1999, when the first parts of the world would enter 2000, we would have been inundated with support requests for these defective products, never mind that the problem was not with us.

And we could not just block the faulty software: even if we did not already suspect as much, a survey showed every single one of our (important) customers was using at least one program which we knew would exhibit issues come year 2000, with some customers using hundreds of such programs! And that’s without accounting for software developed internally by the customers; after requesting some samples, we found out most of that software would be affected as well. Most of the problematic software was considered mission-critical, could not just be abandoned, and had to keep working past 1999, come hell or high water.

Couldn’t the programs be fixed and customers get an updated version? Well, for one, in the usual case the company selling the program would be happy to do so, provided customers paid for the upgrade to the updated version of the software, and customers reacted badly to that scenario.

And that assumes the company that developed the software was still in business.

In any case, the program might have been written in an obsolete programming language like Object Pascal, using the 16-bit APIs, and could no longer be built for lack of a surviving install of the compiler, or even for lack of a machine capable of running the compiler. Some of these programs could not be fixed without fixing the programming language they used or a library they relied on, repeating the problem recursively with those suppliers, which might themselves have gone out of business. Even if the program could technically be rebuilt, maybe its original developer was long gone from the company and no one else would have managed to do it.

But a more common case was that the source code for the program was in fact simply lost to the ages.

Meanwhile, we were of course working on the solution. We came up with an elegant compatibility mechanism by which any application or other program which did not explicitly declare itself to support the years 2000 and after would get dates from the API in ETC instead of UTC. ETC was designed so that 1999 is the last year to ever happen. It simply never ends. You should really read the specification if you want the details, but basically how it works is that in the first half of 1999, one ETC second is worth two UTC seconds, so it can represent one UTC year; then in the first half of what is left of 1999, which is a quarter year, one ETC second is worth four UTC seconds, so again in total one UTC year, and in the first half of what is left after that, one ETC second is worth eight UTC seconds, etc. So we can fit an arbitrary number of UTC years into what seems to be one year in ETC, and therefore from the point of view of the legacy programs. Clever, huh? Of course, this means the resolution of legacy programs decreases as time goes on, but these programs only had a limited number of seconds they could ever account for in the future anyway, so it is making best use of the limited resource they have left. Things start becoming a bit more complex when we start dividing 1999 into amounts that are no longer integer amounts of seconds, but the general principle remains.
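The scheme can be sketched in a few lines; this is my own reconstruction in Python from the description above, not the actual specification, and it ignores leap seconds and the sub-second subdivisions mentioned at the end:

```python
from datetime import datetime, timedelta, timezone

ETC_EPOCH = datetime(1999, 1, 1, tzinfo=timezone.utc)
ETC_YEAR = 365 * 24 * 3600  # length of 1999 in seconds

def utc_to_etc(t):
    """Map a UTC datetime from 1999 onward into the never-ending ETC 1999.

    UTC year 1999 + k occupies the slice [1 - 2**-k, 1 - 2**-(k+1))
    of ETC 1999, so each successive UTC year gets half of the remaining
    ETC time, and one ETC second is worth 2**(k+1) UTC seconds.
    """
    k = t.year - 1999
    if k < 0:
        return t  # dates before 1999 need no translation
    year_start = datetime(t.year, 1, 1, tzinfo=timezone.utc)
    year_len = (datetime(t.year + 1, 1, 1, tzinfo=timezone.utc)
                - year_start).total_seconds()
    f = (t - year_start).total_seconds() / year_len  # fraction of UTC year elapsed
    etc_fraction = (1 - 2.0 ** -k) + f * 2.0 ** -(k + 1)
    return ETC_EPOCH + timedelta(seconds=etc_fraction * ETC_YEAR)
```

Under this mapping, 1 January 2000, 00:00 UTC lands exactly halfway through ETC 1999, and every later UTC year crowds into the ever-halving remainder of 31 December, with correspondingly coarser resolution.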

Of course, something might seem off in the preceding description, and you might guess that things did not exactly come to be that way. And indeed, when we deployed the solution in our usability labs, we quickly realized people would confuse ETC dates coming from legacy apps with UTC dates, for instance copying an ETC date and pasting it where a UTC date was expected, etc., causing the system to be unusable in practice. That was when we realized the folly of having two calendar systems in use at the same time. Something had to be done.

Oh, there was some resistance, of course. Some countries in particular dragged their feet. But in the end, when faced with the perspective of a digital apocalypse, everyone complied eventually, and by 1998 ETC was universally adopted as the basis for official timekeeping, just in time for it to be deployed. Because remember: application compatibility is paramount.

And besides, aren’t you glad it’s right now the 31st of December, 23:04:06.09375? Rather than whatever it would be right now had we kept “years”, which would be something in “2013” I guess, or another thing equally ridiculous.

Apple has a low-cost iPhone, but they should have a different one

From time to time rumors surface about Apple being poised to introduce some sort of cheap iPhone to fill a hole in their lineup, or we have so-called experts pontificate on why Apple needs to introduce such a device in order not to leave that market entirely to Android. The way this gets discussed gets me wondering how these people could possibly not have noticed that Apple still sells the iPhone 4 and 4S, now as lower-cost iPhones; I don’t know, maybe these don’t count because they have cooties? In reality, while they were indeed introduced some time ago, they can stand the comparison with “base” Android models as far as I can tell, and buyers do not seem to be snubbing them.

If only the conversation were about whether the iPhone 4 and 4S are appropriate base or low-cost models for Apple to sell, as there would be things to say on the matter (but no, it is always framed as if these two did not exist). Indeed, I think this strategy was justified when the iPhone 3G kept being sold alongside the new 3GS, and it might have been tolerable to keep the iPhone 3GS alongside the new iPhone 4, but by now Apple should have long since changed strategy. For a number of reasons, as the iPhone product lineup, or rather hardware platform, matures, it should instead have low-cost models designed for that purpose.

The first reason goes back to the dark days of the Mac in the nineties, when former top-of-the-line Macs would be discounted and sold as base models once another Mac model replaced them at the top of the line; as a result, people would hesitate to buy the (more) affordable Mac, which they knew was not really up to date, yet did not want to shell out for the current model, so they ended up just waiting for it to be discounted. It was hard for Apple to have a clear message on which Mac someone in a given market should be buying: so yesterday that model was not appropriate for consumers, but today it is? The heck? Fortunately, Steve Jobs put an end to that when he introduced the Mac product matrix (with its two dimensions, consumer-professional and portable-desktop, and four models: iMac, PowerMac, iBook, and PowerBook).

Which brings me to the second reason, which is that not all technologies make sense as being introduced for professionals exclusively, even at first; USB, introduced in fact on the first iMac before it was on any PowerMac, is a prime example. Today, we have for instance SSDs, at least as an option or in the form of a hybrid drive.

But I think the most important reason is that having dedicated base models would allow Apple to sell devices in which the hardware design flaws of yesterday (visible or invisible) are fixed, instead of having to keep taking them into account for the next X years of software updates. While some new features in iOS releases have not been made available on older devices, the base OS has to run on them nevertheless, and even with this feature segmentation, performance regressions have been observed. The other side of having specifically developed low-cost iPhone models is that Apple would use these devices to seed current technologies, the better to introduce new and interesting things in future versions of iOS (think, say, OpenCL), for instance because third-party developers are more likely to adopt a technology if they know every device sold in the last year supports it; this goes doubly if the technology cannot serve as an optional enhancement, but is instead meant to have apps depend on it.

The example the iPhone should follow, where Apple itself does it right, is the iPad, and in particular the iPad mini. I joke that now, with the 128GB iPad, there are 48 (count them) iPad SKUs, but that’s in fact OK, as the lineup is very orthogonal: from the consumer’s viewpoint, color and connectivity are similar to build-to-order options over capacity variations of the three base models; it must be somewhat challenging to manage supply and resupply, but apparently the operations people at Apple are managing it; and sometimes the one combination you want is out of stock at the store, so you end up getting a slightly different one, but that’s minor in the grand scheme of things. On the other hand, the introduction of the iPad mini was indispensable to diversify the iPad presence and make the iPad feel not like just a single product, but a real hardware platform.

The one thing done best in that respect with the iPad mini is that, internally, the iPad mini is pretty much the iPad 2 guts with the 4th-generation iPad connectivity: Lightning port, Bluetooth 4.0, and LTE as an option. I/O is one of those things where it often does not make sense to introduce technologies only at the high end at first, because of network effects: these apply whether the I/O connects devices to each other, in which case you need to clear a penetration threshold before people can use it in practice, or connects accessories, in which case hardware accessory makers are more likely to follow the better the technology is seeded to everyone.

Now, I’m not saying it is going to be easy to do the same for the iPhone. It is clear that each iPhone hardware model is very integrated, requiring for each model a lot of investment not only in hardware design, but also in the supply chain and the assembly infrastructure. Designing each model to be sold for only one year would make it harder to amortize these investments, but planning for some hardware reuse during the design process could compensate to an extent, and I think the payoff, in a clearer and stronger product lineup, better technology introduction, and decreased iOS maintenance costs, would make it worth it.

PSA: Subscribe to your own RSS feed

I like RSS. A lot. It allows me to efficiently follow the activity of a lot of blogs that do not necessarily update predictably (I mean, I’m guilty as charged here). So when things break down, in particular in silent or non-obvious ways, it is all the more grating. To avoid causing this for your readers, please follow the RSS feed of your own blog; it does not have to be much, you could do it for free with the built-in feature of Firefox, just do it.

Case in point: at some point Steven Frank was talking about the <canvas> tag (in a post since gone in one of his blog reboots). It showed up fine on the website, but for some reason I cannot fathom, the angle brackets did not end up escaped in the XML feed, and NetNewsWire dutifully interpreted it as an actual canvas. With no closing tag, which means none of the text after the “tag” showed up in NetNewsWire, so as far as I could tell the blog post just straight up ended there (before the tag) in mysterious fashion. I fortunately thought that didn’t look like Mr Frank’s usual style and investigated, but I might never have noticed anything wrong.

You might say it was because of a bug at some point in the feed generation, but that is not the point. I mean, this is the web, in permanent beta; of course bugs are going to happen. The point is that the author apparently never noticed or fixed it, so he couldn’t possibly have been following his own feed; had he been, he would have noticed, and could have fixed it in any number of ways.
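One such fix is to entity-escape any markup before it goes into the feed; here is a sketch using Python’s standard library (not, of course, whatever blog engine was actually involved):

```python
from xml.sax.saxutils import escape

post_body = 'Drawing with the <canvas> tag is fun.'
item_description = "<description>%s</description>" % escape(post_body)
# escape() turns <, > and & into &lt;, &gt; and &amp;, so a feed
# reader renders the tag as literal text instead of parsing it as
# markup (and swallowing everything up to a close tag that never comes).
```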

Another case is when the feed URL for Wil Shipley’s blog changed. He did not announce it anywhere, and I did not notice it until I went to his site at work and saw a post I had not seen before (and which had been posted earlier than that week). Had he been following his own feed, he would have noticed at some point when posting that the post never showed up in his reader, and would have remembered to notify his RSS subscribers in some way.

So kids, don’t be the lame guy: follow your own RSS feed. Otherwise, you’re publishing something you are not testing, and I think we recently talked about how bad that was. The more you know…

Proposal for a standard plain text format for iOS documents

Since the last time we visited the matter of working with documents on iOS, I have read with great interest more write-ups of people describing how they work on the iPad, of course because it is always good to hear about people being able to do more and more things on the iPad, but also because (as John Gruber so astutely noted) Dropbox almost always seems to be involved. I don’t feel that Dropbox solves the external infrastructure problem I raised in my first post on the matter: I consider Dropbox external infrastructure as well, if only because it requires you to be connected to the Internet merely to transfer documents locally on your iPad (and that’s not a knock on Dropbox, mind you; this is entirely the doing of restrictions Apple imposes).

I am going to concede one advantage to the current de facto iOS model of documents in a per-app sandbox, plus, next to it, an explicit container for document interchange: it forces apps to actually consider supporting interchange document formats. With the Grand Unified Model, and whatever we call the model Mac OS X has used since Snow Leopard, applications would at first only concern themselves with creating documents to save the state of the user’s work, for them to pick up later, without concern for other applications. And by the time the authors of an application came to consider standard formats, or at the very least an interchange format without data that would be of no interest to another app (e.g. which tool was selected at the time the image was saved) or would amount to implementation details, they would realize that other applications had managed to slog through their undocumented document format to open it, and as a result the authors did not feel so pressured to support writing another document format. The outcome is that the onus of information interchange falls only on the readers, which need to keep adding support for anything the application that writes the document feels like adding to the format, in whatever way it feels like doing so.

However, with the de facto model used by iOS, apps may start out the same way, but when they want to claim Dropbox support, they had damn well better write documents there in a standard or documented interchange format, or their claims of Dropbox support become pretty much meaningless. I am not sure the tradeoff is worth it compared to the loss of being able to get at the original document directly as a last resort (in case, for instance, the document exchanged on Dropbox has information missing compared to the document kept in the sandbox), but it is indeed an advantage to consider. An issue, though, is that as things currently stand there is no one to even provide recommendations as to the standard formats to use for exchanging documents on iOS: Dropbox the company is not in a position to do so, and as far as Apple is concerned document interchange between iOS apps does not exist.

So when I read that the format often used in Dropbox to exchange data between apps is plain text, while this is better than proprietary formats, it saddens me to no end. Why? Because plain text is a lie. There is no such thing as plain text. Plain text is a myth created by Unix guys to control us. Plain text is a tall tale parents tell their children. Plain text is what you find in the pot at the end of the rainbow. Plain text is involved in Hercules’ labors and Ulysses’ odyssey. Perpetual motion machines run on plain text. The Ultimate Question of Life, the Universe and Everything is written in plain text.

I sense you’re skeptical, so let me explain. Plain text is pretty well defined, right? ASCII, right? Well, let me ask you: what is a tab character supposed to do? Bring you over to the next tab stop, with stops every 4 columns? Except that on the Mac tab stops are considered to occur every 8 columns instead (and even on Unix not everyone agrees). And since we are dealing with so-called plain text, the benefit of being able to align text in a proportional-font context does not apply: if you were to rely on that, then switch to another editor that uses a different font, or switch the editing font in your editor, your carefully aligned document would become all out of whack. Finally, any memory saving brought by the tab character has become insignificant given today’s RAM and storage capacities.
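To make the tab-stop ambiguity concrete, here is a small sketch (in Python, with sample strings of my own invention) showing the same two tab-aligned lines rendered under each convention:

```python
rows = ["ab\tX", "abcde\tX"]

# With 8-column tab stops (the Mac convention mentioned above),
# both X's land on column 8 and line up:
for row in rows:
    print(row.expandtabs(8))
# ab      X
# abcde   X

# With 4-column tab stops, the first X lands on column 4 and the
# second on column 8, so the alignment is lost:
for row in rows:
    print(row.expandtabs(4))
# ab  X
# abcde   X
```

The bytes on disk are identical in both cases; only the reader's assumption about tab stops changes, which is exactly the problem.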

Next are newlines. Turns out, hey, no one agrees here either: you’ve got carriage return, line feed, and the two together (in both orders). More subtle is wrapping… What’s this, you say? Editors always word wrap? Except that pico/nano, for instance, don’t by default. And Emacs, in fact, does character wrapping: by default it will cut in the middle of a word. It seems inconsequential, but it causes users of non-wrapping editors to complain that others send them documents with overly long lines, while these others complain that the first guys hard-wrap their lines at an arbitrary limit, causing, for instance, unsightly double wrapping when the document is displayed in a window narrower than that arbitrary width.

And of course, you saw it coming, there is character encoding: we left the 7-bit ASCII world eons ago. Everything is Unicode-capable by now, but some idiosyncrasies still remain: for instance, as far as I can tell, out of the box TextEdit in Mac OS X still opens text files as MacRoman by default.
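Here is what that default costs you, sketched in Python (the sample string is mine): perfectly valid UTF-8 bytes, opened as MacRoman, turn into mojibake.

```python
# UTF-8 bytes for accented text, misread as MacRoman:
data = "déjà vu".encode("utf-8")
print(data.decode("mac_roman"))  # d√©j√† vu
```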

This is, simply, a mess. There is not one, but many plain text formats. So what can we do?

The proposal

Goals, scope and rationale (non-normative)

The most important, defining characteristic of the proposal for Standard Plain Text is that it is meant to store prose (or poetry). Period. People might sometimes happen to use it for, e.g., source code, but such use cases shall not be taken into consideration for the format. If you want to edit makefiles or a tab-separated-values file, use a specialized tool. However, we do want to make sure that more specialized humane markup/prose-like formats can be built on top of the proposal; for instance, Markdown and Textile over Standard Plain Text ought to be trivially definable as being, well, Markdown and Textile over Standard Plain Text.

Then, we want to be able to recover the data on any current computer system in case of disaster. This means compatibility with existing operating systems, or at least being able to recover the data using only programs built into these operating systems.

And we want the format to be defined thoroughly enough to limit as much as possible disagreements and misunderstandings, while keeping it simple to limit risks of mistakes in implementations.


Standard Plain Text files shall use the Unicode character set, and be encoded in UTF-8. Any other character set or encoding is explicitly forbidden.

This seems obvious, until you realize it causes Asian text, among others, to take up 50% more storage than it would in UTF-16, so there is in fact a tradeoff here, and compatibility was favored; I apologize to all our Japanese, Chinese, Korean, Indian, etc. friends.

Standard Plain Text files shall not contain any character in the U+0000 – U+001F range, inclusive (ASCII control characters), except for U+000A (LINE FEED). As a result, tabulation characters are forbidden and the line ending shall be a single LINE FEED. Standard Plain Text files shall not contain any character in the U+007F – U+009F range, inclusive (DELETE and C1 range). Standard Plain Text files shall not contain any U+FEFF character (ZERO WIDTH NO-BREAK SPACE, aka byte order mark), either at the start or anywhere else. All other code points between U+0020 and U+10FFFF, inclusive, that are allowed in Unicode are allowed, including as-yet unassigned ones.
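These constraints are simple enough to check mechanically; here is a minimal conformance check, sketched in Python (the function and constant names are mine, not part of the proposal). It rejects non-UTF-8 input, then the C0 controls other than LINE FEED, DELETE and the C1 range, and the byte order mark anywhere:

```python
import re

# Forbidden code points: C0 controls except LF, DELETE + C1, and the BOM.
FORBIDDEN = re.compile("[\x00-\x09\x0b-\x1f\x7f-\x9f\ufeff]")

def is_standard_plain_text(data: bytes) -> bool:
    try:
        text = data.decode("utf-8")  # strict decoding: rejects non-UTF-8 input
    except UnicodeDecodeError:
        return False
    return FORBIDDEN.search(text) is None
```

Note that strict UTF-8 decoding already rejects surrogates and overlong sequences for free, and that unassigned code points pass, as the proposal requires.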

Standard Plain Text editors shall word wrap, and shall support arbitrarily long stretches of characters and bytes between two consecutive LINE FEEDs. They may support proportional text, but they shall support at least one monospace font.

These requirements shall be enforced at multiple levels in Standard Plain Text editors, both at the user input stage and when writing to disk at least: pasting in text containing forbidden characters shall not result in them being written as part of a Standard Plain Text file. Editors may handle tabulation given as input any way they see fit (e.g. inserting N spaces, inserting enough spaces to reach the next multiple of N column, etc.) as long as it does not result in a tab character being written as part of a Standard Plain Text file in any circumstance.
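As a sketch of what enforcement at the input stage could look like, here is one possible sanitizer in Python (the function name and the next-multiple-of-N tab policy are my choices; the spec deliberately leaves tab handling up to the editor):

```python
def sanitize_input(text: str, tab_width: int = 4) -> str:
    # Normalize any line ending convention to a single LINE FEED.
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    # One possible tab policy: spaces up to the next multiple-of-N column
    # (expandtabs restarts its column count at each newline).
    text = text.expandtabs(tab_width)
    # Drop every remaining forbidden character: C0 controls except LF,
    # DELETE and the C1 range, and any byte order mark.
    return "".join(c for c in text
                   if c == "\n" or not ("\x00" <= c <= "\x1f"
                                        or "\x7f" <= c <= "\x9f"
                                        or c == "\ufeff"))
```

Run on every paste and again before writing to disk, this guarantees no tab character or stray control character ever ends up in the file, whatever the user throws at the editor.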

Standard Plain Text files should have the .txt extension for compatibility. No MacOS 4-char type code is specified. No MIME type is specified for the time being. If a Uniform Type Identifier is desired, net.wanderingcoder.projects.standard-plain-text (conforming to public.utf8-plain-text) can be used as a temporary solution.

Clearly this is the part that still needs work. Dropbox supports file metadata, but I have not fully investigated the supported metadata, in particular whether there is a place for a UTI.

Appendix A (non-normative): recovery methods

On a modern Unix/Linux system: make sure the locale is a UTF-8 variant, then open with the text editor of your preference.

On a Mac OS X system: in the open dialog of TextEdit, make sure the encoding is set to Unicode (UTF-8), then open the file. Since Mac OS X is a modern Unix, the previous method can also be applied.

On a modern Windows system (Windows XP and later): in the open dialog of Wordpad, make sure the format to open is set to Text Document (.txt) (not Unicode Text Document (.txt)), then open the file. Append a newline then delete it, then save the file. In the open dialog of Notepad, make sure the encoding is set to UTF-8, then open the latter file.

The reason for this roundabout method is that Wordpad does not support UTF-8 (Unicode in its opening options in fact means UTF-16) but supports linefeed line endings, while Notepad does not support linefeed line endings. Tested on Windows XP.

Creating discoverable user defaults

John C. Welch recently lashed out at Firefox and Chrome for using non-standard means of storing user settings, and conversely praised Safari for using the property list format, the standard on Mac OS X, to do so. A particular point of contention with the Chrome way was the fact that there is no way to change plug-in settings by manipulating the preferences file unless the user has changed at least one such setting in the application beforehand, while this is not the case with Safari.

Or is it? As mentioned in the comments, there is at least one preference in Safari that behaves in a similar way, remaining hidden and absent from the preferences file until explicitly changed. What gives?

In this post I am going to try and explain what is going on here, and provide a way for Mac (and iOS — you never know when that might turn out to be useful) application developers to allow the user defaults that their applications use to be discoverable and changeable in the application property list preferences file.

The first thing to understand is that it is an essential feature of the Mac OS X user defaults system (of which the property list preference files are a part) that preferences files need not contain every single setting the application relies upon: if an application asks the user defaults system for a setting which is absent from the preference plist for this application, the user defaults system will merely answer that there is no value for that setting, and the application can then take an appropriate action. This is essential because it allows, for instance, apps to be seamlessly updated while preserving their settings, even when the update features new settings, but also to be just as seamlessly downgraded, and even switched between versions in other ways. Suppose you were using the Mac App Store version of an app, and you switch to the non-Mac App Store one, which uses Sparkle for updates: using the Mac OS X user defaults system, the non-Mac App Store version will seamlessly pick up the user settings, while when Sparkle asks for its settings (it has a few) in the application preferences, it will find them unset and act as if there had been no previous Sparkle activity for this application; in other words, it will behave as expected. This is much more flexible than, for instance, a script that upgrades the preference file at each app update.

So, OK, the preference file need not contain every single setting the application relies upon before the application is run; what about after the application is run? By default, unfortunately, the Mac OS X user defaults system is not made aware of the decision taken by the application when the latter gets an empty value, so the user defaults system simply leaves the setting unset, and the preference file remains as-is. There is, however, a mechanism by which the application can declare to the user defaults system the values to use when the search for a given setting turns up empty, by registering them in the registration domain; but as it turns out, these default settings do not get written to the preferences file either, they simply get used as the default for each setting.

So in other words, when using the techniques recommended by Apple, no setting will ever get written out to the preferences file until explicitly changed by the user. It is still possible to change a setting without using the application, but this requires knowing the exact name of the setting, its location, and the kind of value it can take, all of which are hard to guess without it being present in the preference file. This will not do, so what can we do?

What I have done so far in my (not widely released) applications is the following (for my fellow Mac developers lucky enough to be maintaining Carbon applications, adapt using CFPreferences as appropriate). To begin with, I always use -[NSUserDefaults objectForKey:] rather than convenience methods like -[NSUserDefaults integerForKey:], at the very least as the first call, so that I can know whether the value is unset or actually set to 0 (which is impossible to tell with -[NSUserDefaults integerForKey:]). Then, if the setting was not found in the user defaults system, I explicitly write the default value to the user defaults system before returning it as the asked-for setting. For convenience I wrap the whole thing in a function, one for each setting, looking more or less like this:

NSString* StuffWeServe(void)
{
   NSString* result = [[NSUserDefaults standardUserDefaults] objectForKey:@"KindOfObjectWeServe"];
   if (result != nil) // elided: checks that result actually is a string
      return result;
   // not found, insert the default
   result = @"burgers";
   [[NSUserDefaults standardUserDefaults] setObject:result forKey:@"KindOfObjectWeServe"];
   return result;
}

It is important to never call NSUserDefaults directly and to always go through the function whenever the setting value is read (writing the setting can be done directly through NSUserDefaults). I only needed to do this for a handful of settings; if you have many settings, it should be possible to implement this in a systematic fashion by subclassing NSUserDefaults and overriding objectForKey:, to avoid writing a bunch of similar functions.

Using this code, after a new setting is requested for the first time it is enough to synchronize the user defaults (which should happen automatically during the application run loop) for it to appear in the preference file, where it can more easily be discovered and changed by end users or, more typically, system administrators.


My apologies for the extended radio silence prior to Patents; it took me a lot of time to properly articulate my reasoning, as it is no simple matter, and in particular it took me time to properly show in which ways software differs from other engineering domains and why this matters for patents. The blog post I ended up writing is admittedly a bit “too long, didn’t read”, but given the often simplistic discourse occurring these days around software patents, I wanted to start back from the basics, so it necessarily got a bit long.

In other news, as promised I updated my existing posts, namely Introduction to NEON on iPhone and A few things iOS developers ought to know about the ARM architecture, to account for ARMv7s. However, an equivalent of Benefits (and drawback) to compiling your iOS app for ARMv7, but for ARMv7s, will have to wait, as I do not have an ARMv7s device I can call mine, so I have no idea how performance improves in practice when targeting ARMv7s. In the meantime I advise measuring the improvements in the specific case of your apps to see whether it is worth it; I cannot provide any general advice yet.