Goodbye, NXP Software

For the last four years, starting before this blog even began, I have been working as a contract programmer for NXP Software. Or rather I had been, as the assignment has now ended, effective January 1st, 2012. It was a difficult decision to make, and I will miss, among other things, the excellent office ambience, but I felt it was time for me to try other things, to see what’s out there, so to speak. After all, am I not the wandering coder?

I’ll always be thankful for everything I learned, and for the opportunities that were offered to me while working there. Working at NXP Software was my first real job, and I couldn’t have asked for a better place to start: people there were understanding in the beginning, as I clumsily transitioned into being a full-blown professional. I am also particularly thankful (among many other things) for the opportunity to go to WWDC 2010, where I learned a ton and met people from the Apple community (not to mention visiting San Francisco and the Bay Area, if only for a spell).

There are countless memories I’ll forever keep of the place, but the moment I’m most proud of would be the release of CineXplayer, and in particular its being covered on Macworld. Proud because it’s Macworld (and Dan Moren), of course, but also because of something unassumingly mentioned in the article. You see, on the CineXplayer project I was responsible for all engine development work (others handled the UI development), including a few things at the boundary such as video display and subtitle rendering. We did of course start out from an existing player engine, and we got AVI/XviD support from ongoing development on that player (though we got a few finger cuts from that, as we pretty much ended up doing the QA testing of the feature…). Interestingly, though, when we started out this player engine had no support for scrubbing. None at all. It only supported asynchronous jumping, which couldn’t readily be used for scrubbing. And I thought: “This will not do,” and set out to implement scrubbing; some time later, it was done, and we shipped with it.

And so I am particularly proud of scrubbing in CineXplayer and its mention in Dan Moren’s article, not because it was particularly noticed but, on the contrary, because of how modest a mention it got: it means the feature did its job without being noticed. Indeed, rather than seek fifteen pixels of fame, programmers should take pride in doing things that Just Work™.

As I said, I wanted a change of scenery, and that is why I am still employed by SII but have started a new assignment at Cassidian, working on professional mobile radio systems (think the kind of private mobile network used by public safety agencies like police and firefighters). Don’t worry, I am certainly not done developing for iOS or dispensing iOS knowledge and opinions here, as I will keep doing iOS work at home; I can’t promise anything will come out of it on the iOS App Store, but you’ll certainly be seeing blog posts about it.

And I know some people in NXP Software read this blog, so I say farewell to all my peeps at NXP Software, and don’t worry, I’ll drop by from time to time so you’ll be seeing me again, most likely…

iOS lacks a document filing system

Since the beginning of 2010 when the iPad was released, there has been no end of debates over whether it is suitable for creating content, or whether it is primarily a “content consumption” (ugh) device (as if the choices were thus limited…). I am resolutely of the opinion that the iPad is an easel that very much supports serious creative endeavors given the right environment.

I unfortunately had (as you may have noticed) to qualify that last statement. Besides a few colleagues at work, two examples of iPad-using people that I base this statement on are the Macalope and Harry McCracken. And these examples have something in common: in all three cases, once the work is done, the documents are sent, handled, stored, etc. by a corporate server, a publishing CMS, or some other similar infrastructure. Here the iPad only needs to do a good job of storing the document for the time necessary to complete it; once done and sent, the document can even be removed from the device.

Let us contrast that with another situation. My father is a high school teacher; for the last 25+ years he has been working with computers, preparing teaching notes, transparencies to project, diagrams, tests and their answers, grade average calculation documents, etc. on his Macs (and before that on an Apple ][e). He shares some of these with his colleagues (and they with him) and sometimes prints on school printers, so he is not working in complete isolation, but he cannot rely on a supporting infrastructure and has to ensure and organize storage of these teaching materials himself. He will often need to update them when it’s time to teach the same subject a year later: because the test needs to change so that it’s not the exact same as last year’s, because the curriculum is changing this year, because the actual class experience of using them the previous year led him to think of ways to make the explanation clearer, because this year he’s teaching a class with a different option so they have fewer hours of his course (but the same curriculum…), etc. Can you imagine him using solely an iPad, or even solely an imaginary iOS 5 notebook, to do all this? I can’t. Let us enumerate the reasons:

  • Sure, one can manage documents in, say, Pages. But can one manage hundreds of them? Even with search this is at best a chore, and it’s easy to feel lost as there is no spatial organization; and search could return irrelevant results and/or not find the intended document because of e.g. synonyms.
  • If one remembers a document, but not the app which was used to create it, it’s hard to find it again, as the system-wide search in iOS cannot search in third-party apps (at least it couldn’t when this feature was released in iPhone OS 3.0, and I am not aware of this having changed), so one has to search each and every app where this document could have been made.
  • In some cases, for a project for instance, it is necessary to group documents created by different apps: sometimes there is no single app that can manage all the different media for a single project. On iOS these documents can only exist segregated into their own apps with no way to logically group them.
  • If there is a screwup, as far as I am aware it is not possible to restore a single document from backup; in fact it does not even seem possible to restore a single app from backup, only to do a full device restore, which may not be practical as it likely means losing work done elsewhere.

iOS needs a document filing system, badly.

The worst thing is, with the exception of file transfer in iTunes (which pretty much only shifts the issue to the computer, with some more overhead), the situation is exactly the same as it was in iPhone OS 2.0, when third-party apps first became possible. iCloud solves exactly none of these problems: it is great for simplifying work across your different devices, but it brings nothing to the single-device case. This has nothing to do with the hardware limitations of any iOS device; it is entirely the doing of the iOS software. In fact, while this is acceptable for the iPhone, I feel this gap already limits the potential of the iPad unnecessarily. And regardless of how you think it will happen (my take, which I will elaborate in a later post: Mac OS X is the new Classic), it is clear Apple has Big Plans for iOS, but it is hard to take iOS seriously for any device used for work when Apple hasn’t even shipped a first version of a document filing system, which is quite a design task and will require multiple iterations to get right for most people.

Now you may be wondering: does it really matter whether working on iOS depends on a corporate, publishing, design studio, etc. infrastructure? Most people working on computers already work in the context of such an infrastructure. I think that yes, it does matter. Even if we admit that people working outside such an infrastructure are the exception rather than the rule, there are many of them, enough to prop up a competing platform (potentially the Mac) that would cater to their needs. Plus, such an infrastructure (e.g. in small businesses) may sometimes be unreliable, so it is a good idea to have a fallback. Moreover, it’s not really a good idea for Apple to make iOS dependent on such an infrastructure, as then Apple will not be able to control aspects of the experience it likely cares about, and will not be able to define, for instance, the modern notion of how to encapsulate user creations (I can imagine Apple getting past the concept of documents themselves and introducing something new), or how document typing information is represented. Whereas if iOS devices had a document filing system worthy of the name, but could also still be used in such an infrastructure as they can today, then Apple could define the rules and external infrastructure would follow its lead. Currently, iOS devices are more akin to terminals when it comes to working on them; not quite VT-100s or Chromebooks, but you get the idea.

When I see the absence of a user-visible traditional file system in iOS being lauded as some sort of brilliant new move, I scratch my head. It is a bold move, for sure, and not having something does represent accomplished work in the sense that it is a design decision, but honestly not having this feature is the easy part; creating a worthwhile replacement is the hard part, one that Apple has not shown even an interest in tackling. Moreover, the absence of a user-visible filesystem is nothing new. Indeed, back in the ’80s when computer GUIs were developed, two philosophies emerged for dealing with documents: a document-centric approach, where documents are at the center and applications are but tools, each used for a specific task on these documents, and an application-centric approach, where applications are the focus and documents only make sense within their context. The Apple Lisa, for instance, was document-centric: users would tear off a sheet from a stationery pad to create a document, which could then be operated on by tools. By contrast, the Macintosh (and everything it then inspired) was mostly application-centric. In this context, iOS is simply purely application-centric. Precedents for such systems exist, and include game consoles with memory cards, for instance.

And was it really necessary to forgo the filesystem in its entirety in the first place? Admittedly, it has become more and more complicated over the years, with documents being diluted by an ever increasing number of non-document files visible to the user, especially after the Internet and the Web came to be. And, okay, even the original Macintosh Finder did show applications and system files along with user documents, and thus was not really a document filing system. However, was it really necessary to throw out the baby with the bathwater? It would have been feasible for iOS to feature a clean filesystem view with most everything invisible and various enhancements (like virtual folders and virtual filenames) so that it would only show documents (in fact, I think the Mac OS X Finder in 2001 should have shown only the inside of the home folder, with applications launched from a Launchpad-like mechanism, but I guess a few things like the need to support Classic prevented that anyway). But maybe filesystems as users know them had truly become fatally tainted, and maybe it was indeed necessary to take a clean break from the past; in the end it doesn’t really matter either way. However, it is not a good thing to forgo something and put up no successor for so long.

In the end, I am afraid Apple is not taking this aspect of the computing experience seriously, and is neglecting it. They ought to take it seriously, because it will matter; in fact, I think it will matter a lot.

I explored a related aspect of document management in a followup — February 21, 2012

~ Reactions ~

Jesper (who, unbeknownst to me, had already touched on some of these points, such as the specific notion of a document filing system) expands on the matter, also theorizing about why the iOS group makes iOS the way it is.

Unfortunately my knowledge of Magyar is exactly zero (and Google Translate is a bit hit and miss), but I’m sure Benke Zsolt is saying very interesting things.

I am honored that Lukas Mathis would link to me, but if I am mentioning it as a reaction it is because of the slightly overstated, but pretty good, comparison he added.

A word about SOPA

The tech media is abuzz with news of a bill called “SOPA”, and so I learned that the people of the United States of America, represented by their senators and representatives, are considering new legislation aimed at combating digital piracy. It is not my place to criticize the decisions of the sovereign people of the USA over their domestic affairs. However, I urge the people of the USA and their representatives to seriously consider the impact of the proposed legislation on their international commitments before making their decision.

For one, while filtering DNS entries at ISPs in the USA might seem to have only a local impact, it would in fact seriously undermine the very infrastructure of the Internet, which is recognized to be a global infrastructure not belonging to any nation in particular.

Then, the broad and rather loose criteria for classifying a site under the proposed legislation mean that rights holders in the USA would be given enforcement powers much greater than they have had in the past. Moreover, some rights holders have used existing tools, such as DMCA takedowns, to target and block sites that were not engaged in intellectual property infringement, but rather in activities like parody, which is protected free speech. Finally, add to this the lack of any due process, and innovative sites from outside the USA would be exposed to a real risk of being blocked following a complaint by a USA-based competitor, or of being unable to collect money from USA citizens, with little recourse if this were to happen; this could be considered an impediment to free trade by the WTO.

People of the USA, I thank you for your attention and wish to send you my most friendly salutations.

GCC is dead, long live the young LLVM

(Before I get flamed, I’m talking of course of GCC in the context of the toolchains provided by Apple for Mac and iOS development; the GCC project is still going strong, of course.)

You have no doubt noticed that GCC disappeared from the Mac OS X developer tools install starting with Lion; if you do gcc --version, you’ll see LLVM-GCC has been given the task of handling compilation duties for build systems that directly reference gcc. And now with the release of the iOS 5 SDK, GCC has been removed for iOS development too, leaving only LLVM-based compilers there as well.

Overall I’m going to say it’s a good thing: LLVM, especially with the Clang front end, has already accomplished a lot, and still has so much potential ahead of it; and while GCC was not a liability, I guess this very customized fork was a bit high-maintenance. Still, after 20 years of faithful service for Cocoa development at NeXT and then Apple, it seems a bit cavalier for GCC to be expelled with mere months between the explicit announcement and its actual removal. Ah well.

But while I have no worries about LLVM when doing desktop development (that is, when targeting x86 and x86-64), LLVM targeting iOS (and thus ARM) is young. Very young. LLVM was only deemed production quality when targeting ARM in the summer of 2010, merely a year and change ago. Since then I have heard of (and seen acknowledged by Chris Lattner) a fatal issue (since fixed) with LLVM for ARM, and it seems another has cropped up in Xcode 4.2 (hat tip to @chockenberry). So I think the decision to remove GCC as an option for iOS development was slightly premature on Apple’s part: a compiler is supposed to be something you can trust, as it has the potential to introduce bugs anywhere in your code; it has to be more reliable and trustworthy than the libraries, or even the kernel, as Peter Hosey quipped.

Now don’t get me wrong, I have no problem with using Clang or LLVM-GCC for iOS development; in fact at work we switched to Clang on a trial basis about a year ago (I guess it’s no longer a trial, certainly not after the iOS 5 SDK), and we’ve had no issues ourselves nor looked back since. Indeed, for all its relative lack of maturity and the incidents I mentioned, LLVM has one redeeming quality, and it’s overwhelming: Apple itself uses LLVM to compile iOS. Cocoa libraries, built-in apps, Apple’s own App Store apps, etc.: millions upon millions of lines of code ensure that if a bug crops up in LLVM, Apple will see it before you do… provided, that is, that you don’t do things Apple doesn’t do. For instance, Apple stopped targeting ARMv6 devices starting with iOS 4.3 in March 2011, and it is no coincidence that the two incidents I mentioned were confined to ARMv6 and did not affect ARMv7 compilation.

So I recommend a period of regency, where we allow LLVM to rule, but carefully oversee it, and in particular prevent it from doing anything it wouldn’t do at Apple, so that we remain squarely in the use cases where Apple shields us from trouble. This means:

  • forgoing ARMv6 development from now on. In this day and age it’s not outlandish to have new projects be ARMv7-only, so do so. If you need to maintain an existing app that has ARMv6 compatibility, then develop and build it for release with Xcode 4.1 and GCC, or better yet, on a Snow Leopard machine with Xcode 3.2.6 (or, if you don’t mind Snow Leopard Server, it seems to be possible to use a virtual machine to do so).
  • avoiding unaligned accesses, especially for floating-point variables (see the sketch after this list). It is always a good idea anyway, but doubly so now; doing otherwise is just asking for trouble.
  • ensuring your code is correct. That sounds like obvious advice, but I have seen cases of incorrect code that would run OK with GCC but was broken by LLVM’s optimizations.
  • I’d even be wary of advanced C++ features; as anyone who has spent enough time in the iOS debugger can attest from the call stacks featuring C++ functions from the system, Apple uses quite a bit of C++ in the implementation of some frameworks, like Core Animation. However, C++ is so vast that I’m not sure they exercise every nook and cranny of the C++98 specification, so be careful.
  • avoiding anything else you can think of that affects code generation and is unusual enough that Apple likely does not use it internally.
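
To make the unaligned-access point concrete, here is a minimal C sketch; the helper names and the scenario (pulling a float out of a packed byte buffer) are mine, purely for illustration, and not taken from the incidents above.

    #include <stdint.h>
    #include <string.h>

    /* Risky: the cast promises the compiler a naturally aligned float
       (and also breaks the strict aliasing rules). GCC's output may have
       happened to work, but LLVM is free to emit a load that requires
       word alignment, which can trap or return garbage on ARM whenever
       offset is not a multiple of 4. */
    static float read_float_risky(const uint8_t *buf, size_t offset)
    {
        return *(const float *)(buf + offset);
    }

    /* Safe: memcpy makes no alignment assumption; the compiler lowers it
       to whatever byte-wise or unaligned-capable loads the target allows. */
    static float read_float_safe(const uint8_t *buf, size_t offset)
    {
        float value;
        memcpy(&value, buf + offset, sizeof value);
        return value;
    }

Note that the risky version also illustrates the “ensure your code is correct” point: code that is incorrect per the C standard yet happened to work with one compiler, only to be broken by another compiler’s optimizations.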

Now there’s no need to be paranoid either; for instance to the best of my knowledge Apple compiles most of its code for Thumb, but some is in ARM mode, so you shouldn’t have any problem coming from using one or the other.

With this regency in place until LLVM matures, there should be no problems ahead, only success with your iOS development (as far as compiling is concerned, of course…).

“translation layers”, externally sold content, and unsandboxed apps

So Apple ended up relenting on most of the requirements introduced at the same time as subscriptions. Apple does still require, however, that apps not sell digital content in the app itself through means other than in-app purchase, nor link to a place where this is done. I would say this is a reasonable way to provide an incentive for these products to be offered as in-app purchases, were it not, first, for the fact that the agency model used for ebooks in particular (though I’m sure other kinds of digital goods are affected) does not allow for 30% of the price to go to Apple, even if the in-app price were 43% higher than the out-of-app price (43% being the markup needed to net the same revenue after Apple’s 30% cut, since 1 ÷ 0.7 ≈ 1.43), and second, for the fact that some catalogs (Amazon’s Kindle catalog, obviously, but it must be a pain for other actors too) cannot even be made to fit in Apple’s in-app purchase database.

John Gruber thinks this is not Apple’s problem, but at some point Apple has to exist in reality. Besides, I don’t think Apple is entitled, over the whole lifetime of an app, to 30% of any purchase where the buying intent originated in the app. Regardless of whether you think that’s fair, competitors will eventually catch up in this area and offer better terms to publishers, making it untenable for Apple to keep this requirement. But it’s not fair either for Apple to shoulder for free the cost of screening, listing, hosting, etc. these “free” client apps that in fact enable a lot of business. Maybe apps could be required to ensure that the first $10 of purchases made in the app can be paid only with tokens bought through in-app purchase (thus avoiding the issue of exposing all SKUs to Apple); only then could they take users’ money directly.

But what this edict has done anyway (besides making the Kobo, Kindle, etc. apps quite inscrutable by forcing them to remove links to their respective stores) is hurt Apple’s credibility with respect to developer announcements. Last year Apple prohibited Flash “translation layers”, and that prohibition had already been in force (to the extent that it could be enforced, anyway) for a few months when they relented on it. This year they dictated these rules for apps selling digital content, rejecting new apps for breaking them before the rules were even made known, with existing apps given until the end of June to comply, only for Apple to significantly relax the rules at the beginning of June (and extend the deadline to the end of July). This means that in both cases developers were actually better off doing nothing and waiting to see what Apple would actually end up enforcing. I was about to wonder how many Mac developers were scrambling to implement sandboxing, supposed to be mandatory in the Mac App Store by November, but it turns out Apple may have, at the very least, jumped the gun here too, as they just extended that deadline to March. In the future, Apple may claim that they warned developers of such things in advance, but the truth is most of the stuff they warned about did not come to pass the way they warned it would, so why should developers heed these “warnings”?

Steve

I wasn’t sure I should write something, at first. Oh, sure, I could have written about the fact that I didn’t dress specially Thursday morning or bring anything to an Apple Store, as I thought that for Steve I should either do something in the most excellent taste or nothing at all, and I couldn’t think of the former (and so I kicked myself Saturday when I went to the Opera Apple Store to buy a Lion USB key, saw them, and thought “Of course! An apple with a bite taken out of it… dummy!”). Or I could have written about the fact that he was taken from his families at way too early an age. Or about the fact that, except for this one (and variants of this one, though one would have been enough), I was appalled by the editorial cartoons about the event (“iDead”? Seriously?). Or about a few obituaries I read or heard where the author mixed some criticism in with the praise (which by itself I don’t mind, honestly, he was kind of a jerk), but in a way that suggested the good could have been kept without the flaws, while, for instance, in an industry where having different companies responsible for different aspects of the user experience of a single device is considered standard practice, being a control freak is essential to ensure the quality of user experience that has made Apple a success. Or about how his presence in the keynotes during his last leave of absence (whereas he had stepped back from presentation duties during the previous one), and his resignation merely six weeks ago, both take on a whole new meaning today.

But at the end of the day, what would I have brought, given the outpouring of tributes and other content about Steve Jobs, many from people more qualified and better writers than I am? Not much. However, I read a piece where the author acknowledges the impact Steve Jobs had on his life, and I thought I should, too, pay my dues and render unto Steve that which is Steve’s, if only to help with the cathartic process. I hope it will contribute something for his family, his family at Apple, his family at Disney/Pixar, and the whole tech and media industries in this time of grief.

I was quite literally raised with Apple computers; from an Apple ][e to the latest Macs, there has always been Apple (and only Apple) hardware in the house, for which I cannot thank my father enough. As a consequence, while I had no idea who Steve Jobs was at the time, he was already having a huge impact on me. Not because I think he designed these computers all by himself, but because, by demanding seemingly impossibly high standards from the people who designed them with him (or, in the case of later Macs, by having left enough of a mark at Apple that the effect was almost the same), he ensured a quality of user experience way beyond that of any competitor, which allowed my young self to do things he wouldn’t have been able to do otherwise, and taught him to expect, nay, demand similar excellence from his computing devices.

Then I started learning about him when he returned to Apple in 1997, from a press cautiously optimistic that the “prodigal son” could get Apple out of trouble, and then watched how spectacularly he did so. I indirectly learned from him (in particular through folklore.org) that it takes a great deal of effort to make something look simple, that there is no such thing as good enough, merely good enough to ship this once (because, on the other hand, real artists ship), and that the job of the software developer is to be in service of the user experience, not to make stuff that is only of interest to other software developers and stays in a closed circuit.

Imagining my life had Steve Jobs not made what he made is almost too ludicrous to contemplate. Assuming I would even have chosen a career in programming, I would be developing mediocre software on systems about as usable as a mid-nineties Macintosh, if that, with very little of the elegance (come on: setting aside any quibble about who copied whom, do you think Windows or any other operating system would be where it is today were it not for the Mac to, at the very least, compete with it and push it to do one better in the usability department?). And the worst thing is that I would have been content with it and considered it as good as it gets, and so would almost all of my peers.

It’s thus safe to say that as far as my influences go, Steve Jobs is second only to my closest family members. By envisioning the future, then making it happen through leadership, talent and just plain chutzpah (for good or ill, it doesn’t seem to be possible to make people believe in your predictions of what the future will be made of, other than by actually taking charge and realizing it), he showed us what computers (and portable music players, and mobile phones, etc.) could be rather than what most people thought they could be before he showed us. And by teaching a legion of users, multiple generations of developers, and everyone at Apple to never settle for great but always strive for the best, he has ensured the continuation of this ethic for a few decades, at least (this is, incidentally, the reason why I am not too worried about the future of Apple, Inc.).

Thank you Steve. Thank you for everything. See you at the crossroads.

Benefits (and drawback) to compiling your iOS app for ARMv7

In “A few things iOS developers ought to know about the ARM architecture”, I talked about ARMv6 and ARMv7, the two ARM architecture versions that iOS supports, but I didn’t touch on an important point: why you would want to compile for one or the other, or even both (thanks to Jasconius at Stack Overflow for asking that question).

The first thing you need to know is that you never need to compile for ARMv7: after all, apps last updated at the time of the iPhone 3G (and thus compiled for ARMv6) still run on the iPad 2 (provided they didn’t use private APIs…).

Scratch that, you may have to compile for ARMv7 in some circumstances: I have heard reports that if your app requires iOS 5, then Xcode won’t let you build the app ARMv6 only. – May 22, 2012

So you could keep compiling your app for ARMv6, but is that what you should do? It depends on your situation.

If your app is an iPad-only app, or if it requires a device feature (like video recording or a magnetometer) that no ARMv6 device ever had, then do not hesitate and compile only for ARMv7. There are only benefits and no drawbacks to doing so (just make sure to add armv7 to the Required Device Capabilities (UIRequiredDeviceCapabilities) key in the project’s Info.plist, otherwise you will get a validation error from iTunes Connect when uploading the binary, such as: “iPhone/iPod Touch: application executable is missing a required architecture. At least one of the following architecture(s) must be present: armv6”).

If you still want your app to run on ARMv6 devices, however, you can’t go ARMv7-only, so your only choices are to compile only for ARMv6, or for both ARMv6 and ARMv7, which generates a fat binary that will still run on ARMv6 devices while taking advantage of the new instructions on ARMv7 devices1. Doing the latter will almost double the executable binary size compared to the former; executable binary size is typically dwarfed by the art assets and other resources in your application package, so it typically doesn’t matter, but make sure to check this increase. In exchange, you will get the following:

  • the ability to use NEON (note that you will not automatically get NEON-optimized code from the compiler; you must explicitly write that code, as in the sketch after this list)
  • Thumb that doesn’t suck: if you follow my advice and disable Thumb for ARMv6 but enable it for ARMv7, this means your code on ARMv7 will be smaller than on ARMv6, helping with RAM and instruction cache usage
  • slightly more efficient compiler-generated code (ARMv7 brings a few new instructions besides NEON).
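
For illustration, here is a minimal sketch of what explicitly writing NEON code looks like, using the compiler intrinsics from arm_neon.h; the helper is hypothetical, assumes the buffer length is a multiple of 4, and would only be compiled into the ARMv7 slice (for instance behind an __ARM_NEON__ guard).

    #include <arm_neon.h>

    /* Add two float buffers four lanes at a time. Assumes n % 4 == 0. */
    static void add_floats_neon(const float *a, const float *b,
                                float *out, int n)
    {
        int i;
        for (i = 0; i < n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);      /* load 4 floats from a */
            float32x4_t vb = vld1q_f32(b + i);      /* load 4 floats from b */
            vst1q_f32(out + i, vaddq_f32(va, vb));  /* store the 4 sums     */
        }
    }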

Given the tradeoff, even if you don’t take advantage of NEON it’s almost always a good idea to compile for both ARMv6 and ARMv7 rather than just ARMv6, but again make sure to check the size increase of the application package isn’t a problem.

Now I think it is important to mention what compiling for ARMv7 will not bring you.

  • It will not make your code run more efficiently on ARMv6 devices, since those will still be running the ARMv6 compiled code; this means it will only improve your code on devices where your app already runs faster. That being said, you could take advantage of these improvements to, say, enable more effects on ARMv7 devices.
  • It will not improve performance of the Apple frameworks and libraries: those are already optimized for the device they are running on, even if your code is compiled only for ARMv6.
  • There are a few cases where ARMv7 devices run code less efficiently than ARMv6 ones (double-precision floating-point code comes to mind); this will happen on these devices even if you only compile for ARMv6, so adding (or replacing by) an ARMv7 slice will not help or hurt this in any way.
  • If you have third-party dependencies with libraries that provide only an ARMv6 slice (you can check with otool -vf <library name>), the code of this dependency won’t become more efficient if you compile for ARMv7 (if they do provide an ARMv7 slice, compiling for ARMv7 will allow you to use it, likely making it more efficient).

So to sum it up: you should likely compile for both ARMv6 and ARMv7, which will improve your code somewhat (or significantly if you take advantage of NEON), but only when running on ARMv7 devices, while increasing your application download to a likely small extent; unless, that is, you only target ARMv7 devices, in which case you can drop compiling for ARMv6 and eliminate that drawback.


  1. Apple would very much like you to optimize for ARMv7 while keeping ARMv6 compatibility: at the time of this writing, the default “Standard” architecture setting in Xcode compiles for both ARMv6 and ARMv7.

China declined to join an earlier coalition, Russia reveals

The saga of France’s liquidation sale continues (read our previous report). Diplomatic correspondence released yesterday by Russia in response to China’s communiqué reveals that China was asked to join an earlier coalition to acquire South Africa’s nuclear arsenal (an acquisition China mentioned in its communiqué as evidence of a conspiracy), but China declined.

This would seem to undermine China’s claim of an international conspiracy directed against it; at the very least it strengthens the earlier coalition’s claim that its only purpose was to figuratively bury these nuclear weapons. It should be noted that the high-profile countries Russia and the USA are members of both coalitions.

China then answered with an update to its communiqué (no anchor, scroll down to “UPDATE August 4, 2011 – 12:25pm PT”) stating that the aim of this reveal was to “divert attention by pushing a false ‘gotcha!’ while failing to address the substance of the issues we raised.” The substance being, according to China, that both coalitions’ aim was to prevent China from getting access to these weapons for itself, weapons it could have used to deter attacks, and that China joining the coalition wouldn’t have changed this.

Things didn’t stop there, as Russia then answered back (don’t you love statements spread across multiple tweets?) that this showed China wasn’t interested in partnering with the international community to help reduce the global nuclear threat.

For many geopolitical observers, the situation makes a lot more sense now. At the time the France sale was closed and the bids were made public, some wondered why China wasn’t in the winning consortium and had instead made a competing bid with Japan. China and Japan are relative newcomers to the nuclear club, and while China’s status as the world’s manufacturer pretty much guarantees it will never be directly targeted, its relative lack of nuclear weapons is, according to analysts, the reason it has less influence than its size and GDP would suggest. Meanwhile, China is subjected to a number of proxy attacks, so analysts surmise that increasing its nuclear arsenal would be a way for China to deter such attacks on its weaker allies.

So the conclusion reached by these observers is that, instead of joining alliances that China perceived as designed to keep the weapons out of its reach, China went all or nothing. But the old boys’ nuclear club still has means China doesn’t, so China lost in both cases, and now it is taking the battle to the public relations arena.

Geopolitical analyst Florian Müller in particular was quoted pointing out that, given the recent expansion of its influence, it was to be expected for China to be targeted by proxy, and that other countries were likely just going about their normal course of business rather than engaging in any organized campaign.

To yours truly, it seems that while the rules of nuclear deterrence may be unfair, it is pointless to call out the other players for playing by these rules, and it makes China look like a sore loser. But the worst part may be that the Chinese officials seemingly believe their own, seemingly self-contradicting rhetoric (if they are so much in favor of a global reduction of nuclear armaments, why wouldn’t they contribute to coalitions designed to take some out of circulation?), which means the conflict could get even more bitter in the future.

France goes down, its nuclear weapons, and China

So France is going belly up. Kaput. Thankfully not after a civil war; the strife is mostly political, though a few people unfortunately died in some of the riots. But after regions like Corsica, Brittany and Provence unilaterally declared independence, after Paris declared a real Commune in defiance of the government, and after Versailles, the usual fallback, no longer seemed safe either, it became clear there was no way out but the eventual dissolution of the old, proud French Republic; much like the USSR dissolved in 1991, but without an equivalent of Russia to pick up the main pieces, the Paris Commune being seen as too unstable.

Among the numerous geopolitical problems this raised, one stood out. Among its armed forces, the French Republic had under its control several nuclear warheads, the missiles to carry them, and a fleet of submarines to launch them. Legitimately terrified that these weapons could fall into the hands of a rogue state or terrorist group, the international community sustained the French government long enough for it to organize a liquidation sale of its nuclear armament and other strategic assets. But Russia certainly wasn’t going to let the USA buy them, and neither was the USA willing to see Russia get them. Realizing that keeping these weapons out of the wrong hands was more important than either party taking control of them itself, Russia, the USA, and a few other countries like India, the United Kingdom, etc. formed a coalition and jointly bid for, and won, the dangerous arsenal.

Though they agreed on a few principles before forming this alliance, getting control of the arsenal was considered the urgent matter, and at the time the sale was closed the coalition had not yet agreed on what to do with the weapons. But most geopolitical observers and analysts agreed that the coalition would end up keeping the weapons around just in case, though inactive and offline, and that was if they were not simply dismantled outright; after all, for them to be used would require the joint agreement of all parties, an agreement extremely unlikely to ever be reached.

But China suddenly started publicly complaining that the members of the coalition were engaged in a conspiracy against it, citing military interventions by some of the coalition members in foreign countries, various international disagreements, and now this France liquidation sale (China did not take part in the coalition; together with one ally it made a separate bid for these weapons but was eventually outbid by the coalition). Observers, however, were skeptical: these events did not seem connected in any way except for being mentioned together in that communiqué; plus, as if the joint ownership didn’t already ensure at least immobilization by bureaucracy, the coalition includes one partner of China: Brazil. And the fact that the coalition spent quite a bit of money to acquire this arsenal, more than some initial estimates, probably only reflects the importance of keeping it out of the wrong hands, given the unstable international landscape, what with rogue states, terrorist groups, less-than-trustworthy states gaining importance, etc. Not to mention that China itself bid rather high in that auction.

In the end it is suspected that, while Chinese officials may believe this conspiracy theory themselves, these complaints made in public view were actually intended to fire up nationalism in the country, or even better, in the whole of East Asia.

The saga, unsurprisingly, didn’t stop there: Russia answered, read all about it in the followup. – August 5, 2011

In support of the Lodsys patent lawsuit defendants

If you’re the kind of person who reads this blog, then you probably already know from other sources that an organization called Lodsys is suing seven “indie” iOS (and beyond!) developers for patent infringement, after it started threatening them (and a few others) with such a suit about three weeks ago.

Independently of the number of reactions this warrants, I want to show my support, and I want you to show your support, for the developers who have been thus targeted. Apparently, in the USA, even defending yourself to find out whether a claim is valid doesn’t just cost an arm and a leg; the sheer cost of the litigation can put such developers completely out of business. So it must be pretty depressing when you work your ass off to ship a product, a real product with everything it entails (engine programming, user interface programming, design, art assets, testing, bug fixing, support, etc.), only to receive demands for part of your revenue just because someone claims to have come up with a secondary part of your app first, this someone being potentially anyone with half of a quarter of a third of a case and richer than you, since you’d be out of business by the time the claim is found to be invalid. It must be doubly depressing when the alleged infringement comes from your use of a standard part of the platform, one that you should (and in the case of iOS in-app purchase, have to) use as a good platform citizen.

I have known about iconfactory.com and enjoyed their work for fifteen1, er, twelve years now; I use Twitterrific, I have bought Craig Hockenberry’s iPhone dev book, I follow him on Twitter, and I have met him once. I know that the Iconfactory is an upstanding citizen of the Mac and iOS ecosystem and doesn’t deserve this. I am not familiar with the other defendants, but I am sure they do not deserve to be thus targeted, either.

So, to Craig, Gedeon, Talos, Corey, Dave, David, Kate and all the other Iconfactory guys and gals; to the fine folks of Combay, Inc.; to the no less fine folks of Illusion Labs AB; to Michael; to Richard; to the guys behind Quickoffice, Inc.; to the people of Wulven Games; I say this: keep the faith, guys. Do not let this get you down, keep doing great work, and know there are people who appreciate you for it and support you. I’m supporting you whatever you decide to do; if you decide to settle, that’s okay, maybe you don’t have a choice, you have my support; if you decide to fight, you have my support; and if you want to set up a legal defense fund to be able to defend yourselves, know that there are people who are ready to pitch in; I know I am.

And in the meantime, before the patent system in the USA gets the overhaul it so richly deserves (I seriously wonder how any remotely innovative product2 can possibly come out of the little guys in the USA, given such incentives), maybe we can get the major technology companies to stop selling their products in that infamous East Texas district (as well as the other overly patent-friendly districts), such that the district becomes a technological blight where nothing more advanced than a corded phone is available. I don’t think it could or would prevent patent lawsuits over tech products from being filed there, but at least it would place the court there in a very uncomfortable position vis-à-vis the district population.


  1. Let’s just say my memories of my time online in the nineties are a bit fuzzy; a recent browse through digital archives made me realize I in fact only discovered iconfactory.com in 1999.

  2. The initial version of that post just read “innovation” instead of “remotely innovative product”; I felt I needed to clarify my meaning