Kinect User Interaction Design

I don’t own a Kinect or an Xbox 360 (and I don’t intend to buy either), but I’ve recently read very interesting stuff about the user interaction design issues it has raised, even for “just” game menus.

First, Penny Arcade’s Tycho (as it happens) reports his initial impressions (fourth paragraph): it’s all over the place, and no two games behave the same. (As an aside, it’s not every day you see Penny Arcade writing about usability.)

Second, Ars Technica has a very interesting article about how Harmonix apparently got it right with their game Dance Central. How they did so will not be a surprise to any of us Mac/iPhone nerds: they prototyped, and iterated, and prototyped, and iterated, and prototyped, and iterated, and…

Every console or computer game, or at least every game genre, has for obvious reasons its own user interaction rules during play itself; menus, on the other hand, are generally expected to behave consistently from game to game, even across platforms, and game menus even share some conventions with desktop software. But the Kinect is a fundamentally new user input mechanism, for which there is practically no reference (when your best reference is Minority Report, you know you have a lot of work ahead of you). It’s not every day, or even every year, but on average about every decade that such a fundamentally new user input mechanism for electronic devices comes up; in all, I count only seven: “buttons, sliders and dials”, the keyboard, the mouse, the gamepad, the touchscreen (which only started realizing its potential with multitouch), the Wiimote, and the Kinect. I’m not counting steering wheels, joysticks and light guns, as these are used on electronic devices only to simulate their “real” counterparts; nor do I count remotes, which are just “buttons, sliders and dials” operating at a distance.

So with such a new, unexplored user interaction continent, you’ve got to wonder why Microsoft didn’t do the job they should have done as the platform owner: namely, doing the research Harmonix had to do themselves, sharing the results with the developers of Kinect games, and applying the lessons to their own titles to set the example. If there’s one thing they should have taken from Apple’s and Nintendo’s playbook, it’s this. Maybe I shouldn’t be surprised, given that Microsoft’s own applications are the worst violators of what few user interface guidelines Windows has; but then again, isn’t the Xbox from a completely different part of Microsoft?

At any rate, it will be very interesting to see how this develops.

Raising the Level of Discourse

There has been a worrying trend of late regarding criticism of Apple, and how Apple responds to it.

You’re no doubt aware that Apple has tried to increase and improve its communication in recent years, especially in response to criticism: besides, of course, the open letters from Steve Jobs, we have for instance Phil Schiller’s outreach to the community, or the iPhone 4 antenna press conference. This is in itself a good thing; in the worst case you at least know their viewpoint, which furthers the debate, whereas in previous years you would know absolutely nothing.

The problem, however, is that Apple is often responding to the wrong criticism.

For instance, a lot of time in the iPhone 4 antenna press conference was spent repeating that all phones have their signal reception affected when held, that the iPhone 4 is no worse, and that Apple does extensive testing, both in their labs and in the field, including in low-coverage areas. This was to counter the allegations of the part of the press that was screaming bloody murder at Apple over the “obviously defective” antenna and accusing Apple of only having used the device in areas of good coverage, while having few, if any, hard facts to back these allegations up. But anyone with a bit of sense already suspected or knew that the matter was more subtle, and was following AnandTech’s excellent coverage and testing instead. That is the criticism Apple should have responded to; OK, sure, spend the first few minutes addressing the basics and the dumb criticism, but then move on to the specifics of the matter: namely, why they went ahead with such a new and innovative (and thus risky) external, structural antenna design when it could make the problem worse by allowing actual electrical contact (and maybe, additionally, why they didn’t mitigate the risk by adding a layer of insulating coating over the stainless steel). They could have answered in a number of ways. But they did not address that question. And none of the journalists who had the opportunity to ask it did so.

The point is not to put Steve Jobs or Apple on trial, but to keep them honest. And yet, in this press conference everyone remained at a low level of discourse. I expect there will always be a significant proportion of the tech press that will happily remain at this level, unable to climb higher simply because they try to cover everything under the Sun without the means to cover this immensity with any real depth; I just wish this proportion didn’t seem to be the majority (so we should all encourage outlets that provide quality coverage!). But I do not forgive Apple for not even trying to rise to a higher level.

Something similar happened at the WWDC keynote, where Steve Jobs took some time to defend the iOS App Store review process (see “The Low Point of the Keynote”), which is okay in itself: the review system has attracted criticism, but there are points to be made in its favor. However, he tried to defend it by saying that 95% of rejections were for three reasons: the app crashes, it uses private API calls, or it doesn’t work as advertised. But it’s meaningless to lump all rejected apps together, and it’s a fallacy to conclude that, because an overwhelming majority of this somewhat artificial group fails to meet basic requirements (apps most likely coded by novice, careless, or dishonest developers), the other reasons for rejection are only a small problem: even if those other reasons account for just 5% of, say, ten thousand rejections, that still leaves five hundred apps, quite possibly including all the controversial cases. Yet that was clearly the implication Steve seemed to make; at best, he was addressing the wrong criticism. Honestly, given the stuff that makes it to the iOS App Store, I’m surprised these three reasons don’t account for 99% of rejections. Here again, Apple addressed a straw man; to be fair, some of the press puts a lot of straw men in front of Apple, but that’s no reason for Apple to take the bait.

So in the end, “interaction” between Apple and that press ends up looking like this:

Apple: Does not!
Press: Does too!
– Does not!
– Does too!
– Does not!
– Does too!

And even then, that’s still better than the common case where Apple doesn’t comment, leaving that kind of press feeling “slighted”, and feeling that “the people” are not being “heard”:

Press: Does too! Does too! Does too! Does toooooooo!

This kind of thing is all the more disheartening as Apple sometimes gets it right. For instance, in his Flash open letter, Steve Jobs addressed not only the obvious but also the less obvious criticism, pointing out that even if Apple were to support Flash, current Flash content wouldn’t work well anyway given the lack of a pointer; and if publishers would have to rewrite their content for touch regardless, why not rewrite it in HTML5? Not to mention that Flash itself was designed with the PC in mind, not the touchscreen. And let’s not forget the original open letter, Thoughts on Music, which addressed many points and issues that most people (and most of the press) had not even thought about.

Pretty often, criticism of Apple originates from a legitimate concern, but by the time it gets through the tech press hype machine, it has become a horribly distorted version of the original; and I’m afraid people at Apple end up feeling they are being taken to task for entirely unjustified reasons, based on this distorted criticism (while the original criticism is lost in the noise), leading them to believe nobody understands them.

Such misunderstandings are not uncommon, even without the distortion machine. For instance, in the Ninjawords saga, Phil Schiller answered accusations that Apple required Ninjawords to be censored by saying that the developer censored it on their own after an initial rejection, and that Apple did not ask them to do so. Which was technically true; but as one of the developers pointed out, by carrying other, uncensored dictionaries in the iOS App Store at the 4+ rating while requiring Ninjawords to have a 17+ rating, Apple was in effect putting a lot of pressure on Ninjawords to censor itself, as “who wants to be the only illicit dictionary on the [iOS] App Store?” So while Apple saw A: that they were enforcing the rating on Ninjawords (and while there may be a problem of ratings being inconsistently enforced, from their viewpoint that is none of the developer’s business anyway), the Ninjawords developers saw B: that they were, in effect, being made to censor the application.

A misunderstanding (or several) also probably explains Steve Jobs’ “Some people lie” comment at the D8 conference: either he was referring to cases that no one (or few people) disputed in the first place, or to developers who went to the press claiming an equivalent of B, while Apple viewed the situation as an equivalent of A.

Even if Apple feels it is wrongly criticized for, say, doing X, if only they answered “Of course we do X; it seems obvious to us why, but here are three good reasons for us to do so”, it would silence the press that made the dumb criticism, while allowing more serious outlets to follow up with “Yes, but then why do you do X even when Y?” Through communication, the real issues that are lost in the noise can be revealed.

To improve this state of affairs, I see only one solution.

We need a Penny Arcade of Apple and the tech industry in general.

You have probably heard of Penny Arcade, and know it to be a funny, topical, popular webcomic skewering the video game industry, the video game press, and “gamers” themselves; it is what you’d find in a newspaper covering video games, if such a thing existed, playing a role equivalent to that of editorial cartoons in newspapers (except, you know, actually good and funny). What you may not know is the influence it has on the video game industry in general. In the foreword to their second collection, J. Allard (at the time in charge of the Xbox at Microsoft) describes Penny Arcade as accomplishing much more than the typical webcomic: for him, Tycho and Gabe do nothing less than keep the video game industry honest. Quoting Allard: “PA doesn’t buy it, and they don’t sell it. They tell it like it is. Whether it’s in their strips, their rants, commentary in their books, a direct flame-war, or a well-timed onomatopoeia in an elevator at E3, you can count on their presence if you’re doing something in this industry.” He goes on to mention their other endeavors, like the PAX expo and the Child’s Play charity, as things they have been able to accomplish that the “industry” overlooked.

Now, J. Allard may simply have been looking for the only good thing he could say, from his viewpoint, about two guys who basically lampoon his work and everything he holds dear, and come up with this; after all, you don’t (typically) say bad things about the authors of a book you’re asked to write the foreword for. But there are hints and elements elsewhere (though not expressed as well as in that foreword) that confirm this influence is real.

It’s certainly counter-intuitive to propose caricature as the solution to bad communication, a hyping press, and distorted criticism: these seem caricatural enough already! And that’s precisely why caricaturists are needed to point it out. Remember, good satire makes you laugh, then makes you think. Moreover, good satire, having the appearance of just funny pictures and words, spreads subversively, including inside the targeted institution (be it Microsoft, Apple, the tech press, etc.), delivering its message from within: while these institutions may dismiss criticism coming from the outside as attacks, good satire is read by the rank and file, and spreads from there. Note that I don’t think this satire needs to be a webcomic; it could take other forms, though I think a webcomic has the most inherent advantages.

Now, before you go ahead and fire up your email client to tell me that Z is totally the Penny Arcade of Apple and the tech industry and shame on me for not knowing about it, let me preempt most of you by saying that, yes, I know about the Joy of Tech, Mosspuppet, Fake Steve Jobs, PC Weenies, Daring Fireball, the Macalope, Crazy Apple Rumors, the Onion, and the Oatmeal, and I do not think this equivalent is to be found among them. And while Penny Arcade itself sometimes covers tech topics (in particular, read this, starting from the third paragraph, and tell me how it couldn’t perfectly be read as being about the Mac App Store, incidentally announced a few days beforehand), it does not do so with enough regularity to play more than a minor role in the tech industry; and I don’t think that will change much.

What should the Penny Arcade of Apple and the tech industry be? It needs to be good, obviously, and it needs to satirize technology because its author(s) love technology. It needs to be topical. It needs to be published regularly enough that people keep coming back to it. As you may have gathered, it needs to cover not just Apple but also its competitors: I rag on Apple because I care, but its competitors have their own issues (similar or different), and it’s only fair that all of them be targeted; also, while there is a lot of material to be made about Apple, I don’t think it would be enough to keep, say, a three-days-a-week webcomic running viably. As I said, besides the video game industry, Penny Arcade also satirizes the corresponding press and users, so its equivalent would need to target the tech press and tech fans as well. It needs to be popular, but that will happen if it’s actually good. It needs to be subversive, ostensibly just funny while having actual substance.

With all the webcomics and blogs created (and often quickly abandoned) all the time, why doesn’t such a thing already exist? I guess because it’s hard. It’s already a miracle that Penny Arcade exists; there are very few like it, and none as good. It’s hard in part because it needs to be topical, which precludes keeping a buffer of strips. It’s hard simply to be good at this kind of thing. And it’s hard to love technology while relentlessly skewering technology, the people responsible for it, the press covering it, and your fellow tech fans.

In my opinion the closest thing we currently have to a Penny Arcade of Apple and the tech industry is Fake Steve Jobs. Dear Leader mercilessly takes on all the companies in this space, as well as the old dying press, these upstart bloggers, the fanboys, the frigtards, and the clueless. He’s funny, insightful, and oh so good. The main problem is that his subversive power is limited, for two reasons. First, the caricature aspect is too salient, by the very nature of the character: before you read the first word of a post, you know it’s going to be a satire of some sort. Second, it’s easier to pass off pictures as harmless humor (especially to someone looking over your shoulder) than prose, which again limits the subversive aspect; this in turn makes it hard for the message to penetrate Apple and the others from the inside. Hence, Fake Steve is the closest we have, but he is not, in my opinion, the Penny Arcade of Apple and the tech industry.

Lastly, I want to give a shout-out to the Satiritron, from the fine folks behind Mosspuppet, which could end up fitting the bill. It launched while I was writing this post, and it’s too early to tell how it will do over the long haul, but it’s off to a good start so far, and I wish it the best of luck and many faithful visitors!

It’s stunning what you learn about your iPhone while on holiday

I should go on holiday more often. You find out interesting things about your iPhone when you take it outside the urban environment it was mostly designed for. I’m not ready to talk about the tests I mentioned earlier, but here are a few other observations I can share…

First, the iPhone, or at the very least the iPhone 3GS (the model I own), seems to have trouble getting a GPS fix in cloudy weather. If obtaining a location regardless of weather really matters, a dedicated GPS device is still required.

Second, if you’re not going to have access to a power outlet for, say, one week, switch your device to airplane mode; it’s just as efficient as turning it off, and more practical. Let me explain. I spent a week in remote parts of the Alps, trekking from mountain refuge to mountain refuge; even the refuges that have some electricity from a generator are unlikely to have power outlets. Furthermore, in that environment you often get no signal at all, or a very weak one, so a lot of battery is wasted searching for a signal or maintaining a weak connection. So last year, I simply turned off my iPhone to conserve battery for the whole week. It was not very practical, however, as I had to wait for it to boot each time I wanted to use it, which wasn’t very fast… So this year, I put it in airplane mode instead of turning it off. This turns off the radios and leaves the device consuming apparently very little power; it was ready much faster whenever I wanted to check whether there was any signal, test my application, or show off something, and it still had juice at the end of the week.

Lastly, and this is more for developers: even if your app requires location, you may not always get true north when there is no data connection, so be sure to always handle the lack of a true heading and fall back to the magnetic heading. As you know, a compass does not point exactly to the North Pole (in this context also referred to as geographic north), but to the magnetic north pole, a point currently located in the Arctic Ocean north of Canada (in fact it is even more complicated than that, but let’s leave it at that). The iPhone 3GS and iPhone 4, both featuring a magnetometer, cannot give you true north using only that sensor, any more than a compass can; however, if the device also has your location, it can apply the correction between the magnetic north and true north directions at that point and give you true north; this is well documented in the API documentation. However, while your location is necessary, it is not sufficient! I have observed that without a data connection, I only get magnetic north, even if I just got a GPS fix; the device probably needs to query a remote database for the magnetic correction at a given point. So if your application uses heading in any way, never assume you can get true north, even when you know you have location; always handle the lack of true north and fall back to magnetic north, and mountain trekkers everywhere will thank you.
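To illustrate, here is a minimal sketch of that fallback using Core Location (written in Swift, which postdates this post; the class name and the logging are mine, but the documented CLHeading behavior is the point: trueHeading is negative when the true-north correction is unavailable):

```swift
import CoreLocation

final class HeadingSource: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        // A location fix is needed for Core Location to compute the
        // true-north correction at all (authorization handling omitted).
        manager.startUpdatingLocation()
        manager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateHeading newHeading: CLHeading) {
        // trueHeading is negative whenever the true-north correction is
        // unavailable (for instance, with no data connection), so always
        // fall back to the magnetic heading in that case.
        let heading = newHeading.trueHeading >= 0
            ? newHeading.trueHeading
            : newHeading.magneticHeading
        print("heading: \(heading)°")
    }
}
```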

The last “I’m a Mac” ad

Mac: “Hello, I’m a Mac”
PC: “And I’m a PC. And I feel like a new computer!”
Mac: “Oh?”
PC: “Yes, Windows 7 has rejuvenated me. No more problems, it changed everything – you should try it. Out with the old, in with the new!”
Mac: “Really? So, you mean, no more BIOS, registry or activation?”
PC: “Yes— NO! What are you talking about? These have nothing to do with it! Why shouldn’t I start by showing a screen with only white text on a black background, full of useless information?”
Mac: “I—”
PC: “Or why shouldn’t I check all the time that I’m not using a pirated Windows, allowing the user to sleep easy in this knowledge?”
Mac: “Actu—”
PC: “Besides, these things are part of me; I couldn’t live without them. Doesn’t that happen to you?”
Mac: “Well, no. If something is a problem, I get rid of it.”
PC: “I mean, you’re almost as old as I am! There’s bound to be some old cruft you can’t get rid of…”
Mac: “…no.”
(iPad enters from the left. She’s a young woman who seems to be 18 or 19. She is, of course, beautiful. She crosses the screen in front of our heroes, paying no attention whatsoever to them. She dances slightly as she’s walking, and she’s humming a tune to herself)
iPad: “I’m iPad, hmm hmm hmmm, hmm hmm hmmm…”
Mac (not looking so smug anymore): “I suddenly feel much older…”
PC (looking in the direction of iPad, who has left the screen): “Why, I feel much younger!”
(Cut to iPad. The actual device, I mean)

A theory on the significance of the Apple A4

Before I begin, a clarification: I do not own an iPad. Besides living in France (where you still can’t even pre-order one at the time of this writing), I also currently have no need for this particular device; however, I am very interested in the computing platform the iPad is inaugurating.

One of the perks of my current workplace is that many of my colleagues, while working on software, have a semiconductor background, NXP being a semiconductor company. So when Apple introduced the iPad, many of us were intrigued by the A4 “processor” they said was powering the device. We thought it very unlikely that they could have created a whole new, competitive processor core implementing the ARM architecture (similarly to e.g. XScale, which implements the ARM architecture but wasn’t created by ARM) in only the year and a half since the acquisition of PA Semi, so we figured Apple had probably “just” licensed a processor core from ARM for the A4.

The first analyses seem to indicate that not only is this the case, but the A4 even features “just” a single Cortex A8 core, like the iPhone 3GS for instance, rather than something fancier but still plausible like one or two Cortex A9 cores. Likewise, the graphics processor seems to be a PowerVR SGX, as in the iPhone 3GS. It’s a higher-clocked Cortex A8, and the whole is probably on a smaller process node, but it’s a Cortex A8 nonetheless; apparently nothing they couldn’t have obtained from the SoC portfolio of e.g. Samsung (which seems to be fabbing the A4). So what is Apple doing with the A4? They are certainly not designing a SoC just for the sake of doing it.

Let me disclaim that I have no inside information, just a hunch; this is entirely speculation. It may be a sound, consistent theory that would explain everything, and still be wrong because the actual explanation is something completely different.

While many relate SoCs such as the Apple A4¹ to recent developments from Intel and AMD that put a graphics processor on the same chip as the processor (sometimes not even on the same die), and call SoCs “processors”, a SoC is a system. But instead of being a system built by putting together chips from different vendors on a board, a System on a Chip is “built” by laying out components from different vendors on the same silicon die; this allows smaller designs, sometimes lower costs, and lower power consumption than a comparable multi-chip solution. Using a SoC is pretty much a necessity on a device as constrained as a phone, and even if the iPad is less constrained, it is still a big win there.

This sounds like a tautology, but by designing their own SoC, Apple is designing their own system. The off-the-shelf SoCs, and even the ones customized for Apple found in other iPhone OS devices (which we know are customized, if only because they are Apple-branded), may have been OK for the iPhone and iPod touch, but these SoCs were initially designed with more traditional handsets in mind; the iPhone OS interface, with its smooth, continuous scrolling, use of animations, transparency, etc. (all of which are characteristic of the “new computing” the iPhone OS embodies), probably taxes these SoCs in ways that were not foreseen with Symbian and Windows Mobile interfaces. The graphics processor can do all these effects, but the intensity with which they are used likely reveals bottlenecks (probably in data bandwidth) in the architecture of these SoCs; notice that the processor core matters very little here. Now consider that the iPad needs to move more than five times as many pixels as an iPhone, and you may start to understand the problem. There are probably other areas of the system (e.g. power saving) that could be properly designed only with a view of the whole system, with a whole software stack above the hardware. By designing the A4, Apple is more directly making the hardware decisions that matter, for instance how the memory is shared: not in amount (I’m sure that’s configurable already) but e.g. in bandwidth. While the processor core matters too, it was probably not the main liability here.
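To put rough numbers on that claim, here is a quick back-of-the-envelope check, using the published display resolutions of the iPhone 3GS (480×320) and the original iPad (1024×768); the variable names are mine:

```swift
// Display pixel counts behind the “more than five times as many pixels” figure.
let iPhone3GSPixels = 480 * 320    // 153,600 pixels
let iPadPixels = 1024 * 768        // 786,432 pixels
print(Double(iPadPixels) / Double(iPhone3GSPixels))  // ≈ 5.12
```

At a given frame rate, memory traffic for full-screen animation scales at least linearly with the pixel count, which is why bandwidth, more than the processor core, is the plausible bottleneck.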

Remember what Mansfield says in the iPad intro video: that the A4 was designed by the hardware team together with the software team, giving performance that could not be achieved any other way? That fits this theory. It is related to the end-to-end argument, which basically states that adding features at a low level has to be done in light of the whole system, otherwise the feature will be of limited usefulness; a consequence is that a low-level component, so far designed for a given kind of system, may have deficiencies when used in a new system, and these deficiencies can only be revealed in the context of that new system. Given how they use the hardware, iPhone OS devices end up being different enough systems that it makes sense to design a more specific SoC for them, and to keep everyone else out of the design loop. To top it off, it allows Apple to keep more details secret from Samsung, which is also a potential competitor.

To give you an analogous situation, read this. Basically, on the original Macintosh, memory access regularly alternated between the processor and the display system, as there was no dedicated video memory; not only that, but at the end of each scan line, there was no display access during the interval when the screen beam goes back to the start of the next line, so they took advantage of it to fetch an audio sample instead. A brilliant design. Now imagine that instead of using a 68000 and a bunch of PALs for the other logic, the Mac team had been forced to use a single chip containing the whole system except for memory and some I/O, and that this chip had been designed more with computers like the IBM PC in mind, and thus actually optimised for text interfaces and PC speaker beeps. Would they have been able to build the Macintosh with such a chip? Even if they could have gotten the supplier of this imaginary chip to fix bottlenecks and add features, it would still have been an extra step in the design loop, so they might eventually have had to develop such a chip themselves; if not at first, then for, say, the Mac II. Now, while there are direct parallels, such as both devices sharing video memory with system memory, I don’t think the design challenges are similar in detail; but the situations are similar in a broad sense.

Note that this is valid for systems that are still maturing (and the portable smart device category is certainly in flux right now); for mature systems, the differences between platforms are smaller and the technology is more universally mastered, so that it is more efficient to outsource system-level hardware to a few common suppliers; this is the case for desktop computing nowadays. On mobile devices, however, in-house SoC design is probably going to be a competitive advantage for the foreseeable future, just as it was with personal computers in the ’80s.


  1. The Apple A4 is actually a package: there are three dies inside the ceramic package; however, two of these are RAM, and the third is the A4 SoC proper.