Good riddance, Google. Don’t let the door hit you on the ass on the way out.

This post is, in fact, not quite like the others. It is a parody of Fake Steve I wrote for parody week, so take it with a big grain of salt…

See here. Basically, the rocket scientists at Google have decided, after having used our WebKit for years in Chrome, that they, uh, suddenly did not need us any more and forked WebKit like the true leeches they are. Dude, we are the ones who found KHTML and made WebKit what it is; if it weren’t for us, KHTML would only be known to three frigtards in west Elbonia and you would have had no engine to put in your hotrodded race car of a browser, so I guess, thanks for nothing, bastards.

Truth is, good riddance. Those know-it-alls at Google have been a pain in our ass ever since Chrome debuted. Where do I start? Like with V8. Oh God V8… I… uh…


Okay, I can’t do this. I can’t parody Fake Steve. I’ve got nothing on Dear Leader. He was pitch perfect, like, you would get the feeling the real Steve Jobs was writing just for you in his secret diary, all the while being satiric and outrageous enough so that at some level you knew it was fake, but at the same time the persona was so well maintained that you easily suspended disbelief and you could not help thinking Steve could have shared these opinions. And he was insightful, oh of course he was, like in the middle of some ludicrous story you would feel like you were being enlightened about the way the tech industry or the press or tech buyers worked; it didn’t matter if it was made up, because it was a way to provoke us into thinking about how the sausage factory really worked inside. He was the perfect yin-yang of the old-school professional who has seen it all and who knows how it works behind the hype, and of the new media guy who can drop a bunch of paragraphs without a word limit on a whim, on any subject he wants to tackle, and is not afraid to try new things and new ways of storytelling. Favorites? Ah! Apple, the Old Borg, the New Borg, the Linux frigtards, the old dying press, these upstart bloggers, the consumers standing in line, PR flacks, software developers, no one was safe.

I can see him now, looking down on me from wherever he is now, laughing at my pathetic attempt at reviving him, even for a minute. I know he is at peace there, meditating, waiting for his reincarnation, because oh yes, he will be reincarnated some day, in a different form: Fake Steve is Buddhist too, he most certainly did not meet St Peter at the pearly gates, and he has unfinished business in this world; he was not done restoring a sense of childlike sarcastic wonder in our lives. I’m waiting, waiting for the day I will see a blog or webcomic or column, because Fake Steve has a sense of humor and may throw us all for a loop by reincarnating in the old press, or a Twitter feed, or animation (but not a Flash animation, there are limits), and I will see the telltale signs, the snark, the character play, the insightfulness, and I will think: “Yes, Fake Steve has been reincarnated.”

Meanwhile, Fake Steve, I know you are now in a better place and cannot come back as such, but if you could hear my prayer: Dan… has not been good lately, to put it mildly. So… could you try and inspire him a bit while he is away from the echo chamber? Not for him to write as you, no, just so that when he eventually returns to us after having spent some time away from it all, he will write good things, no matter what they are. Because we can’t stand looking at him like this.

The Joy of Tech comic number 995: Yes, Virgil, there is a Fake Steve Jobs

Did Apple just cargo cult the iPhone platform?

This post is, in fact, not quite like the others. It is a parody of Coding Horror I wrote for parody week, so take it with a big grain of salt…

In The iPhone Software Revolution, I proclaimed that the iPhone was the product Apple was born to make:

But a cell phone? It’s a closed ecosystem, by definition, running on a proprietary network. By a status quo of incompetent megacorporations who wouldn’t know user friendliness or good design if it ran up behind them and bit them in the rear end of their expensive, tailored suits. All those things that bugged me about Apple’s computers are utter non-issues in the phone market. Proprietary handset? So is every other handset. Locked in to a single vendor? Everyone signs a multi-year contract. One company controlling your entire experience? That’s how it’s always been done. Nokia, Sony/Ericsson, Microsoft, RIM — these guys clearly had no idea what they were in for when Apple set their sights on the cell phone market — a market that is a nearly perfect match to Apple’s strengths.

Apple was born to make a kick-ass phone. And with the lead they have, I predict they will dominate the market for years to come.

But never mind the fact that similar reasoning could have been applied to the Macintosh when it came out. What bothers me today is the realization that Apple might have handled the opening of the iPhone platform like a cargo cult:

The term “cargo cult” has been used metaphorically to describe an attempt to recreate successful outcomes by replicating circumstances associated with those outcomes, although those circumstances are either unrelated to the causes of outcomes or insufficient to produce them by themselves. In the former case, this is an instance of the post hoc ergo propter hoc fallacy.


cargo cult phone by dret, on Flickr; used under the terms of the Creative Commons CC BY-SA 2.0 license

By which I mean that Apple decided they needed to open the iPhone as a development platform, but I wonder to what extent they then did so by giving it the trappings of a platform more than the reality of a platform: third parties can sell their apps to run on it, right? So it must be a platform, right? Well… And I don’t mean the APIs are the problem either, it’s more like… everything else:

  • Apple has a very restrictive idea of what kind of use cases third parties are allowed to provide solutions to: everything that does not fit their idea of an app is rejected, or is impossible. For instance, installing third-party keyboards is not possible on iPhone:

    But sometimes, an Apple product’s feature lands at the wrong side of the line that divides “simple” from “stripped down.” The iPhone keyboard is stripped-down.

    If you don’t like how Android’s stock keyboard behaves, you can dig into Settings and change it. If you still don’t like it, you can install a third-party alternative. And if you think it’s fine as-is, then you won’t be distracted by the options. The customization panel is inside Settings, and the alternatives are over in the Google Play store.

    This? It’s from Andy Ihnatko, in an article in which he explains why he switched from iPhone to Android. Andy. Ihnatko. When Mac users of 25 years start switching away from the iPhone, I’d be very worried if I were in Cupertino.

  • Even for those use cases third parties are allowed to provide solutions to, they are quite restricted: when Apple added support for multitasking, in iOS 4, they more or less proclaimed they had covered every desirable multitasking scenario, and have not added any since then. It feels a tad preposterous to me that there would have been no need for even a single new multitasking scenario in the two years since.

  • Even when third parties can sell their wares, they do so at the pleasure of the king. Apple seems to consider iPhone developers to be contractors/authors developing solely for Apple’s purposes. And paid by commission. Without any advance. And without any assurance, when they begin developing, that their app will be accepted in the end.

  • Apple apps do not play by the same rules other apps do. They are not sandboxed, or not as much. They use private APIs off-limits to other apps. They get a pass on many iOS App Store restrictions. In short, Apple eats People Food, and gives its developers Dog Food:

    Microsoft has known about the Dogfood rule for at least twenty years. It’s been part of their culture for a whole generation now. You don’t eat People Food and give your developers Dog Food. Doing that is simply robbing your long-term platform value for short-term successes. Platforms are all about long-term thinking.

  • In the same spirit, Apple introduced iCloud, gave users the perception that Apple did the hard work and that apps would merely have to opt in, sold it to developers as the best thing since sliced bread, then promptly went and did not use it themselves in combination with, er, the technology they have consistently recommended be used for persistent storage (while ostensibly supporting this combination), without giving developers the ability to audit synchronization issues either. And now it turns out, and people come to the realization, that iCloud Core Data syncing does not work. Shocker.

  • Apple even tried at some point to prohibit non-C programming languages for iPhone development, with a clear aim to ban a number of alternative development environments, not just Flash. But just like Apple cannot provide all the software to fulfill iPhone user needs, Apple cannot provide all the software to fulfill iPhone developer needs either. A platform is characterized not just by an ecosystem of apps, but also by an ecosystem of developer tooling and libraries behind the scenes. They ended up relenting on this, but if I were an iPhone developer, I would not be very reassured.

But wait, surely, that can’t be. Apple knows all about platforms and the value of platforms, right? VisiCalc, right? But that knowledge merely encouraged Apple to provide something that looks like a platform, rather than an actual platform. As for the iPhone not being Apple’s first platform, there is a school of thought that says Steve Jobs did not build platforms, except by accident; so according to this line of thought, the Apple II and the Mac became honest-to-God platforms not because of Apple, but in spite of Apple. And now, for the first time, we would get to see the kind of platform Apple creates when it actually calls the shots. It looks like a platform, sounds like a platform, has the taste of a platform, smells like a platform, walks like a duck platform… but the jury is still out on whether it is actually a platform.

There is a case to be made for reducing your dependencies. Apple clearly is not going to let anyone hold it back; but as incredibly skilled as the people working at Apple are, is “not being held back” going to be enough to keep up when Android, propelled by being a more complete and inclusive platform, threatens to move past Apple?

Apple can still turn this around. Indeed, the issue lies not so much in these restrictions having been present at first as in so few of them having been lifted since then. The key, of course, will be in figuring out which ones they need to lift, and how to do so. And this will require Apple to reconsider the motions it goes through to bring cargo, regardless of the cargo these motions have brought so far, and instead focus on improving their limited understanding of what it is that actually makes a platform. In order to really bring in the cargo.

Annoyance-Driven Blogging

This post is, in fact, not quite like the others. It is a parody of Hypercritical I wrote for parody week, so take it with a big grain of salt…

I’ve been reading Hypercritical, John Siracusa’s new blog outside of Ars Technica, and it has been good to read more of John, after the glacial pace at which his blog there had been updating lately.

But even on his own space, John has been unable to escape some of the trappings of his past. A blog that updates with some frequency naturally lends itself to multi-post reading sessions. But in the middle of such a session, a whole post dedicated to the annoyance of having to watch a minute and a half of opening credits before each episode can get tiresome.

To be fair to John, the existence of this kind of post may not be entirely under his control, given his quasi-OCD tendencies. But getting bogged down in these details misses the point.

Yes, we all know and love John Siracusa for his, well, hypercritical tendencies, but these are best consumed as part of a post on a broader subject, like a spice; having nothing but that in a post quickly gets to be too much.

This may sound comically selfish, but true innovation comes from embracing your audience’s expectations, not fighting them. Find out what is annoying your readers. Give people what they want and they will beat a path to your door.

We nerds love bickering about technology for its own sake. Indeed, there’s always something to be gained by criticizing the state of the art and goading its makers into providing more of a good thing. But the most profound leaps are often the result of applying criticism only as strictly needed, in the context of a more constructive post. By all means, criticize, but also research, expose and propose what could be done better and how. Go after those things and you’ll really make people love you. Accentuate the positive. Eliminate the negative.

How does ETC work? A sad story of application compatibility

This post is, in fact, not quite like the others. It is a parody of the Old New Thing I wrote for parody week, so take it with a big grain of salt…

Commenter Contoso asked: “What’s the deal with ETC? Why is it so complicated?”

First, I will note that ETC (which stands for Coordinated Eternal Time) is in fact an international standard, having been adopted by ISO as well as national and industrial standard bodies. The specification is also documented on MSDN, but that’s more for historical reasons than anything else at this point, really. But okay, let’s discuss ETC, seeing as that’s what you want me to do.

ETC is not complicated at all if you follow from the problem to its logical conclusion. The youngest among you might not realize it, but the year 2000 bug was a Big Deal. When it began to be discussed in the public sphere, starting in 1996 or so, most people laughed it off, but we knew, and always knew, that if nothing was done, computers, and all civilization in fact, would be headed for a disaster of biblical proportions. Real wrath of God type stuff. The dead rising from the grave! Human sacrifice! Dogs and cats living together… mass hysteria!

The problem originated years before that, when some bright software developers could not be bothered to keep track of the whole year and instead only kept track of the last two digits; so for instance, 1996 would be stored as just 96 in memory, and when reading it back it was implicitly considered to have had the “19” before it, and so would be restored as “1996” for display, processing, etc. This just happened to work because the years they saw started with “19”, and things would go wrong as soon as years no longer did so, starting with 2000.
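To make that bookkeeping concrete, here is a minimal sketch, in C, of the kind of record being described; the structure and names are made up for illustration, not taken from any actual product.

```c
#include <stdio.h>

/* Hypothetical legacy record: only the last two digits of the year are kept. */
struct savefile_header {
    unsigned char year;   /* e.g. 96, implicitly meaning 1996 */
    unsigned char month;
    unsigned char day;
};

/* On the way in, the century is simply thrown away... */
static void store_date(struct savefile_header *h, int year, int month, int day)
{
    h->year  = (unsigned char)(year % 100);   /* 1996 -> 96 */
    h->month = (unsigned char)month;
    h->day   = (unsigned char)day;
}

/* ...and on the way out, "19" is blindly glued back on. */
static void print_date(const struct savefile_header *h)
{
    printf("19%02d-%02d-%02d\n", h->year, h->month, h->day);
}

int main(void)
{
    struct savefile_header h;
    store_date(&h, 1996, 7, 14);
    print_date(&h);   /* prints 1996-07-14: correct, but only by accident */
    return 0;
}
```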

What happened (or rather, would have happened had we let it happen; this was run under controlled experiment conditions in our labs) in this case was that, for starters, these programs would print the year in the date as “19100”. You might think that would not be too bad, even though that would have some regulatory and other consequences, and would result in customers blaming us, and not the faulty program.

But that would in fact be if they even got as far as printing the date.

Most of them just fell over and died from some “impossible” situation long before that: some would take the date given by the API, convert it to text, blindly take the last two digits without checking the first two, and, when comparing with the date in their records to see how old the last save was, would end up with a negative age, since they did 0 – 99 as far as the year was concerned, and the program would crash on a logic error; others would try to behave better by computing the difference between the year returned by our API and 1900, but when they tried to process their “two-digit” year, which was now “100”, for display, it would take up one more byte than expected and end up corrupting whatever data was after it, which quickly led to a crash.
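As an illustration only, here is a small C sketch of the failure modes described in the last few paragraphs, using an integer count of years since 1900 (the convention of C’s struct tm) as a stand-in for “the date given by the API”; the variable names and buffer handling are hypothetical.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Pretend the API just handed us 1 January 2000, as years since 1900. */
    int tm_year = 100;

    /* Failure 1: "19" hard-coded in front of the years-since-1900 value. */
    printf("Today is 19%d\n", tm_year);            /* prints "Today is 19100" */

    /* Failure 2: two-digit comparison yields an "impossible" negative age. */
    int saved_year = 99;                           /* the file was saved in 1999 */
    int current_two_digits = tm_year % 100;        /* 100 % 100 == 0 */
    int age = current_two_digits - saved_year;     /* 0 - 99 == -99 */
    printf("Last save is %d years old\n", age);    /* cue the logic-error crash */

    /* Failure 3: the "two-digit" year is now three characters long. */
    char year_text[16];
    sprintf(year_text, "%d", tm_year);             /* "100" */
    printf("\"%s\" needs %zu bytes, but the record format only reserves 2\n",
           year_text, strlen(year_text) + 1);      /* writing it corrupts the next field */
    return 0;
}
```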

And that was if you were lucky: some programs would appear to work correctly, but in fact have subtle yet devastating problems, such as computing interest backwards or outputting the wrong ages for people.

We could not ignore the problem: starting about noon, 31st of December 1999 UTC, when the first parts of the world would start being in 2000, we would have been inundated with support requests for these defective products, never mind that the problem was not with us.

And we could not just block the faulty software: even if we did not already suspect that was the case, a survey showed every single one of our (important) customers was using at least one program which we knew would exhibit issues come year 2000, with some customers using hundreds of such programs! And that was without accounting for software developed internally by the customers; after requesting some samples, we found out most of this software would be affected as well. Most of the problematic software was considered mission-critical, could not just be abandoned, and had to keep working past 1999, come hell or high water.

Couldn’t the programs be fixed and customers get updated versions? Well, for one, in the usual case the company selling the program would be happy to do so, provided customers paid for the upgrade to the updated version of the software, and customers reacted badly to that scenario.

And that assumes the company that developed the software was still in business.

In any case, the program might have been written in an obsolete programming language like Object Pascal, using the 16-bit APIs, and could no longer be built for lack of a surviving install of the compiler, or even for lack of a machine capable of running the compiler. Some of these programs could not be fixed without fixing the programming language they used or a library they relied on, repeating the problem recursively with those suppliers, which might themselves have gone out of business. Even if the program could technically be rebuilt, maybe its original developer was long gone from the company and no one else could have managed to do it.

But a more common case was that the source code for the program was in fact simply lost to the ages.

Meanwhile, we were of course working on the solution. We came up with an elegant compatibility mechanism by which any application or other program which did not explicitly declare itself to support the years 2000 and after would get dates from the API in ETC instead of UTC. ETC was designed so that 1999 is the last year to ever happen. It simply never ends. You should really read the specification if you want the details, but basically how it works is that in the first half of 1999, one ETC second is worth two UTC seconds, so that half can represent one UTC year; then in the first half of what is left of 1999, which is a quarter year, one ETC second is worth four UTC seconds, so again one UTC year in total; and in the first half of what is left after that, one ETC second is worth eight UTC seconds, etc. So we can fit an arbitrary number of UTC years into what seems to be one year in ETC, and therefore from the point of view of the legacy programs. Clever, huh? Of course, this means the resolution of legacy programs decreases as time goes on, but these programs only had a limited number of seconds they could ever account for in the future anyway, so it is making the best use of the limited resource they have left. Things start becoming a bit more complex when we start dividing 1999 into spans that are no longer whole numbers of seconds, but the general principle remains.
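For the curious, here is a rough sketch, in C, of the compression scheme just described, under a few simplifying assumptions of mine: a flat 365-day year, times measured in seconds since the start of 1999, and no leap seconds or leap days; the function name is invented for the example.

```c
#include <math.h>
#include <stdio.h>

#define SECONDS_PER_YEAR (365.0 * 24 * 60 * 60)   /* simplification: flat 365-day year */

/*
 * Convert a UTC instant, given as seconds elapsed since the start of 1999,
 * into ETC, returned as seconds elapsed into the (eternal) year 1999.
 *
 * UTC year 1999+n occupies the ETC slice [1 - 2^-n, 1 - 2^-(n+1)) of 1999,
 * so during that year one ETC second is worth 2^(n+1) UTC seconds.
 */
static double utc_to_etc(double utc_seconds)
{
    double n    = floor(utc_seconds / SECONDS_PER_YEAR);  /* whole UTC years since 1999 */
    double frac = utc_seconds / SECONDS_PER_YEAR - n;     /* position within that UTC year */

    double etc_fraction = (1.0 - pow(2.0, -n)) + frac * pow(2.0, -(n + 1.0));
    return etc_fraction * SECONDS_PER_YEAR;
}

int main(void)
{
    /* Mid-2000 in UTC (1.5 years after the start of 1999) lands five eighths
       of the way through the eternal 1999, i.e. halfway through the quarter
       of the year reserved for 2000. */
    double etc = utc_to_etc(1.5 * SECONDS_PER_YEAR);
    printf("%.6f of the ETC year\n", etc / SECONDS_PER_YEAR);   /* 0.625000 */
    return 0;
}
```

One consequence of the halving is visible directly in the formula: a UTC instant n whole years past the start of 1999 always lands at fraction 1 - 2^-n of the way through the eternal year, so legacy programs keep losing resolution but never run out of 1999.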

Of course, something might seem off in the preceding description, and you might guess that things did not exactly come to be that way. And indeed, when we deployed the solution in our usability labs, we quickly realized people would confuse ETC dates coming from legacy apps with UTC dates, for instance copying an ETC date and pasting it where a UTC date was expected, etc., causing the system to be unusable in practice. That was when we realized the folly of having two calendar systems in use at the same time. Something had to be done.

Oh, there was some resistance, of course. Some countries in particular dragged their feet. But in the end, when faced with the prospect of a digital apocalypse, everyone complied eventually, and by 1998 ETC was universally adopted as the basis for official timekeeping, just in time for it to be deployed. Because remember: application compatibility is paramount.

And besides, aren’t you glad it’s right now the 31st of December, 23:04:06.09375? Rather than whatever it would be right now had we kept “years”, which would be something in “2013” I guess, or some other equally ridiculous thing.