Developer ID might not seem restrictive, but it is

I need to talk about Gatekeeper and Developer ID.

In short, I am very uncomfortable with this previewed security feature of Mountain Lion. Apple is trying to reassure us that users will only be safer and that developers will still be able to do business as usual, but the Mac ecosystem is not limited to these two parties, and this framing ignores pretty much everyone else: for these people Gatekeeper is going to be a problem. Enough of one to make me consider switching.

I don’t mean to say it’s all bad, as Apple is set to allow more at the same time as it allows less. Indeed, with Developer ID Apple is clearly undertaking better support of apps from outside the Mac App Store, if only because it will have to maintain this system going forward, and I can only hope this support will improve in other areas (such as distribution: disk images are long past cutting edge). But while Apple gives with one hand, it takes away with the other: Mountain Lion will by default (at least as of the current betas, though that seems unlikely to change) reject unsigned apps and apps signed with certificates other than Mac App Store and Developer ID ones. Of course, most people will never change that default, so you will have trouble getting them to run your code unless you get at least a Developer ID from Apple; while better than requiring you to go through the Mac App Store, this requirement is quite restrictive too.
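For the curious, the Mountain Lion betas expose this assessment machinery through the spctl command-line tool; here is a sketch of how one might query it (the app path is hypothetical, and the exact flags and verdicts may change before release):

```shell
# Ask the system policy whether this app would be allowed to launch
# (app path hypothetical; behavior as of the Mountain Lion betas)
spctl --assess --verbose /Applications/Example.app

# Under the default policy, an unsigned app, or one signed with a
# certificate that is neither a Mac App Store nor a Developer ID one,
# should be reported as rejected
```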

The problem is not that, with Developer ID, apps will now be exposed to being blacklisted by Apple; honestly, speaking as a developer, I personally do not mind this bit of accountability. There may be issues with this power now entrusted to Apple, such as the possibility of authorities (through executive branch bullying, or with a proper court order) asking Apple to neutralize an app perceived as illegal, but if this ever happens I believe the first incidents will cause this power to be properly restricted by law.

No, the problem, as I wrote to Wil Shipley in an email after his proposal, is that many people who are important to the Mac platform are going to be inconvenienced by this, as getting a Developer ID requires a Mac Developer Program membership.

  • Sysadmins and IT people, to begin with, often need to deploy scripts. Either those don’t need to be signed, in which case they become the new malware vector; or they do (Apple could define an xattr that would store the signature for a script), in which case any company deploying Macs needs to enter the Mac Developer Program and manage a Developer ID that IT must be able to access day to day (that is, not just for releases, as in a software company) and that could therefore leak, just so that the company can service its own Macs internally.

  • Then we have people using open-source tools that Apple doesn’t provide, such as lynx, ffmpeg, httrack, or Mercurial, who likely get them from projects like MacPorts; maybe there will be an exception for executables built on the same machine, but how would that be enforced?

  • Student developers have historically been very important to the Mac platform, if only because many current Mac developers started out as such. If entering the Mac Developer Program is required to distribute Mac apps in the future, it’s a threshold that many will not clear, and as a result they will not get precious feedback from other people using their code; or worse, they will not choose Mac development as a career, as they might have if they had been encouraged to do so by people using their software (for instance, Jeff Vogel wasn’t planning on making Mac games as a career, but he quit grad school when Exile started becoming popular). At $99 per year, it seems silly to consider the cost of the Mac Developer Program an obstacle, especially compared to the cost of a Mac, but you have to consider that the Mac likely benefitted from a student discount and was possibly entirely paid for by the family; not so for the Mac Developer Program. Regardless, any extra expense will, rationally or not, deter a significant portion of the people who would otherwise have tried it, even if it would have paid for itself eventually.

  • Many users will tinker with their apps for perfectly legitimate reasons, for instance to localize an app and submit the localization to the author, or, in the case of games, to create alternate scenarios or complete mods. It’s something I am particularly sensitive to, as for a long time I have both enjoyed other people’s mods and conversely tinkered myself and shared with others: I have created mods, documented formats to help others create mods, extracted data from game files, given tips, tricks, and development feedback on other people’s in-progress mods; I was even at some point in charge of approving mods for a game’s official mods repository, and I created tools to help develop mods (more on that later). The user modding tradition is very strong in the Ambrosia Software games community, going back to Maelstrom nearly 20 years ago, and that’s merely the one community I am most familiar with. However, tinkering in such ways typically breaks the app signature. An app with an invalid signature will currently run on Lion (I know if only because my Dock currently has an invalid signature), but that will likely change with Mountain Lion, as otherwise Gatekeeper would be pointless (an important attack to protect against is a legitimate app that has been modified to insert a malicious payload and then redistributed). So we will have to rely on developers excluding from the signature seal the files users might want to tinker with… except that the developer will then need to make sure the app cannot be compromised if the files outside the seal are, and I’m pretty sure that is impossible for nibs, for instance. App developers will thus not be able to simply leave the nibs out of the seal so that users may localize them; they will need to roll out systems like the one Wil Shipley developed for localizations, completely outside the realm of Apple-provided tooling.

  • Power users and budding developers will often create small programs whose sole purpose is to help users of a main program (typically a game, but not always), for instance by interpreting some of its files or performing useful calculations; they typically develop such a tool for themselves and share it for free with the community of the main program. It’s something I have done myself, again for Ambrosia games, and it’s a very instructive experience: you start with an already determined problem, file format, etc., so you don’t have to invent everything from scratch, which often intimidates budding developers. However, if registering in the Mac Developer Program is required to distribute these tools, then power users will keep them to themselves; they won’t benefit from the feedback, and other users won’t benefit from the tools.
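To make the seal-breaking problem from the tinkering examples above concrete, here is a sketch using the codesign tool (the app name and resource path are hypothetical):

```shell
# Verify the signature of a signed app bundle (name hypothetical)
codesign --verify --verbose /Applications/Example.app

# Tinker with a resource covered by the seal, e.g. append a byte
# to a localized resource file (path hypothetical)
printf 'x' >> "/Applications/Example.app/Contents/Resources/fr.lproj/Localizable.strings"

# Verification should now fail with a sealed-resource error,
# which is exactly what happens when a user localizes or mods an app
codesign --verify --verbose /Applications/Example.app
```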

(Note that Gatekeeper is currently tied to the quarantine system, so in that configuration some of the problems I mentioned do not yet apply; but let’s be realistic: that won’t remain the case forever, if only so that Apple can have the option of neutralizing rogue apps even after they have been launched once.)
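The quarantine tie-in is visible from the command line today; a sketch, with a hypothetical download (the extended attribute name is the real one; behavior as of Lion and the Mountain Lion betas):

```shell
# Files downloaded by browsers are tagged with a quarantine extended
# attribute; Gatekeeper assesses an app only while that tag is present
xattr -l ~/Downloads/Example.app
# look for com.apple.quarantine in the output

# Once the attribute is stripped (or after the first approved launch),
# the assessment no longer happens, which is why a quarantine-based
# Gatekeeper cannot neutralize an app that has already run
xattr -dr com.apple.quarantine ~/Downloads/Example.app
```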

In fact, a common theme here is that of future developers. Focusing solely on users and app developers ignores the fact that Mac application developers don’t become so overnight; they typically start by experimenting in their spare time, an important intermediate step before going fully professional. It is possible to become an app developer without this step, but then the developer won’t have had the practice he could have gotten by experimenting before going pro. Or worse, he will have experimented on Windows, Linux, or the web, and learned exactly the wrong lessons for making Mac applications—if he decides he wants to target the Mac at all in the end.

Because of my history, I care a lot about this matter, especially the last two examples I gave, and so I swore that if Apple were to require code to be signed by an authority that ultimately derives from Apple in order to run on the Mac, such that one would have to pay Apple for the privilege of distributing one’s own Mac software (as would be the case with Developer ID), then I would switch away from the Mac. But here Apple threw me a curveball: this is indeed the case by default, but users can choose to allow everything. Should that matter, when the default is all most people will ever know? Argh! I don’t know what to think.

In fact, at the same time I worry about the security of the whole system and wish for it to be properly secure: I know that any system that allows unsigned code to run is subject to the dancing bunnies problem; maybe the two goals are in fact irreconcilable, and it is reality itself I have a problem with. I don’t know. Maybe Apple could allow some unsigned apps to run by default, on condition that they carry practically all the sandboxing restrictions to limit their impact. The only thing is, in order to do anything interesting, these apps would at least have to have access to files given to them, and even that, combined with some social engineering, would be enough for malware to do harm, since users likely won’t treat these unsigned apps differently from regular desktop apps, which they consider “safe”. Maybe the only viable way to distribute tinkerer apps is as web apps (I hear work is going on to allow those to access user files); I don’t like that very much (JavaScript is not very good at parsing arbitrary files, for example), but users do tend to treat web apps with more caution than they do desktop apps (at least as far as giving them files goes, I hope), and any alternate “hyper sandboxed” system that might be introduced would have to compensate for the 15+ year head start the web has in setting user expectations.

In the same way, the very cost of the Mac Developer Program that is a problematic threshold for many is also the speed bump that will make it economically unviable for a malware distributor who just had his certificate revoked to get a new one again and again.

This is why, paradoxically, I wish for iOS to take over the desktop: by then iOS will likely have gained the ability to run unsigned apps, and users, having had their expectations set by years of being able to use only (relatively) safe iOS App Store apps, will see these unsigned apps differently than they do apps from the store.

Anyway, nothing that has been presented about Mountain Lion so far is final, and important details could change before release, so it’s no use getting too worked up based on the information we know today. But I am worried, very worried.

iOS lacks a document transfer system, as well

This is a follow-up of sorts to iOS lacks a document filing system, though it stands very well on its own.

I’ve never bought into the Free Software movement premise that, if only we had, for all the software we used, the freedom to get its source code, the freedom to modify it as much as we wanted, and the freedom to redistribute the modified version, then all would be good: we would be safe from investing in some system and then getting stuck, without any recourse, with a vendor that does not provide the updates or bug fixes we want. This premise may be true, but in practice, even when not taken to these extremes, it encourages software that is user-hostile and governed by the worst kind of meritocracies, so I am deeply skeptical the tradeoff is worth it for user-facing software.

However, I care a lot about a related matter, which is the transferability of documents (rather than source code). I consider it a fundamental freedom of the computer user that he be able to take the data he created out of the application he created it with, so that he may use it with another application. This involves a number of things: it’s better if the format is as simple as possible for its purpose, better if it is documented, better if the format has a clear steward for future evolutions, better yet if that steward is a standards body or consortium1. But the most important capability the user needs is the ability to take the raw file out of the application that created it. It is necessary, as none of the other capabilities make any sense without it, and even in the worst case (an undocumented, overcomplicated format) it is often sufficient given enough developer effort, especially if the aim is to extract an important part of the data (e.g. the text without the formatting). This is the freedom zero of user data. Plus, if the user has this last-resort ability, the app will tend to make sure exporting is at least not too complicated, so as to improve the experience.

But on iOS, the user may not have even that. An app is entirely free to never allow export of user data (and thanks to the sandbox, other apps are not even allowed to get at its internal storage as a last resort). Or it could allow export only of, say, the flattened image, while the working document with layers remains held captive by the creating app. On the Mac, on the other hand, not only can the user get at the underlying storage, but if an app wants to allow its documents to be moved in space (to another machine), it necessarily has to save them in files and therefore allow them to be moved to another universe (another app). Meanwhile, on iOS, iCloud document storage actually makes the situation worse, because now an app can allow its documents to be moved in local space (to another of the user’s devices) without exposing itself to having them moved to outer space (a device belonging to someone else) or to another universe.

The sandbox is bad enough already for document transferability; to compensate, what Apple should have done from the moment the iPad was available is provide a real system for an app to offer one of its documents for sharing; the system would then handle making an email out of it, offering it to another app, putting it on iCloud, etc. Apple should then have strongly recommended this system be used in place of any ad hoc document sharing (e.g. the app manually creating an email with the document attached). You might say this is precisely the kind of generic cover-all solution Apple is trying to get rid of, but I never said it would have to be a user-visible system. Rather, there would be specific ways for the user to initiate the various transfers; then, in order to get the document out of the app, the system would invoke a callback on the app for it to provide the document’s file path, without indicating what the system was about to do with the document. And the kicker: iCloud would rely exclusively on this callback for documents to be provided to it, with no other way for the app to take advantage of iCloud document storage. So to have the “iCloud” feature, an app would have to (truthfully) implement this callback, and would therefore have no choice but to also allow its documents to be shared or transferred.

Ownership of your creations is one advantage native apps have over web apps (where the situation is so appalling it’s not even funny), and one Apple could play up, but Apple hasn’t done a good job in this area so far. I hope this will change, because it will only matter more and more going forward.


  1. The two (Free Software and Free Data) are not entirely unrelated, though they are not as related as open source advocates would like you to think: source code is often poor documentation for a file format; conversely, some of the best formats by these criteria, such as the MPEG-4 container format or PostScript, have come from the closed-source world.

Goodbye, NXP Software

For the last four years, starting before this blog even began, I have been working as a contract programmer for NXP Software. Or rather had been, as the mission has now ended, effective the 1st of January 2012. It was a difficult decision to make, and I will miss, among other things, the excellent office ambience, but I felt it was time for me to try other things, to see what’s out there, so to speak. After all, am I not the wandering coder?

I’ll always be thankful for everything I learned and for the opportunities offered to me while working there. Working at NXP Software was my first real job, and I couldn’t have asked for a better place to start, as people there were understanding in the beginning while I clumsily transitioned to being a full-blown professional. I am also particularly thankful (among many other things) for the opportunity to go to WWDC 2010, where I learned a ton and met people from the Apple community (not to mention visiting San Francisco and the Bay Area, if only for a spell).

There are countless memories I’ll forever keep of the place, but the moment I’m most proud of would be the release of CineXplayer, and in particular its being covered on Macworld. Proud because it’s Macworld (and Dan Moren), of course, but also because of something unassumingly mentioned in the article. You see, in the CineXplayer project I was responsible for all engine development work (others handled the UI development), including a few things at the boundary such as video display and subtitle rendering. We did of course start from an existing player engine, and we got AVI/XviD support from ongoing development on that player (though we got a few finger cuts from that, as we pretty much ended up doing the QA testing of the feature…). Interestingly, when we started out, this player engine had no support for scrubbing. None at all. It only supported asynchronous jumping, which couldn’t readily be used for scrubbing. And I thought: “This will not do.” So I set out to implement scrubbing; some time later, it was done, and we shipped with it.

And so I am particularly proud of scrubbing in CineXplayer and its mention in Dan Moren’s article, not because it was particularly noticed but, on the contrary, because of the modest mention it got: this means it did its job without being noticed. Indeed, rather than seek fifteen pixels of fame, programmers should take pride in doing things that Just Work™.

As I said, I wanted a change of scenery, and that is why I am still employed by SII but have started a new mission at Cassidian, working on professional mobile radio systems (think the kind of private mobile network used by public safety agencies such as police and firefighters). Don’t worry, I am certainly not done developing for iOS or dispensing iOS knowledge and opinions here, as I will keep doing iOS work at home; I can’t promise anything will come of it on the iOS App Store, but you’ll certainly be seeing blog posts about it.

And I know some people in NXP Software read this blog, so I say farewell to all my peeps at NXP Software, and don’t worry, I’ll drop by from time to time so you’ll be seeing me again, most likely…

iOS lacks a document filing system

Since the beginning of 2010, when the iPad was released, there has been no end of debate over whether it is suitable for creating content or is primarily a “content consumption” (ugh) device (as if those were the only two choices…). I am resolutely of the opinion that the iPad is an easel that very much supports serious creative endeavors, given the right environment.

I unfortunately had (as you may have noticed) to qualify that last statement. Besides a few colleagues at work, two examples of iPad-using people I base this statement on are the Macalope and Harry McCracken. And these examples have something in common: in every case, once the work is done, the documents are sent, handled, stored, etc. by a corporate server, a publishing CMS, or some other similar infrastructure. Here the iPad only needs to do a good job of storing the document for the time necessary to complete it; once done and sent, the document can even be removed from the device.

Let us contrast that with another situation. My father is a high school teacher; for the last 25+ years he has been working with computers, preparing teaching notes, transparencies to project, diagrams, tests and their answers, documents for calculating student grade averages, etc. on his Macs (and before that on an Apple ][e). He shares some of these with his colleagues (and they with him) and sometimes prints on school printers, so he is not working in complete isolation, but he cannot rely on a supporting infrastructure and has to ensure and organize storage of these teaching materials himself. He will often need to update them when it’s time to teach the same subject one year later: because the test needs to change so that it’s not the exact same as last year’s, because the curriculum is changing this year, because the actual experience of using them in class the previous year led him to think of ways to make an explanation clearer, because this year he’s teaching a class with a different option so they have fewer hours of his course (but the same curriculum…), etc. Can you imagine him using solely an iPad, or even solely an imaginary iOS 5 notebook, to do all this? I can’t. Let us enumerate the reasons:

  • Sure, one can manage documents in, say, Pages. But can one manage hundreds of them? Even with search this is at best a chore, and it’s easy to feel lost as there is no spatial organization; and search could return irrelevant results and/or miss the intended document because of synonyms, for example.
  • If one remembers a document but not the app used to create it, it’s hard to find again, as the system-wide search in iOS cannot search inside third-party apps (at least it couldn’t when this feature was released in iPhone OS 3.0, and I am not aware of this having changed), so one has to search each and every app in which the document could have been made.
  • In some cases, for a project for instance, it is necessary to group documents created by different apps: sometimes no single app can manage all the different media for a single project. On iOS these documents can only exist segregated into their own apps, with no way to logically group them.
  • If there is a screwup, as far as I am aware it is not possible to restore a single document from backup; in fact it does not even seem possible to restore a single app from backup, only to do a full device restore, which may not be practical as it likely means losing work done elsewhere.

iOS needs a document filing system, badly.

The worst thing is, with the exception of file transfer in iTunes (which pretty much just shifts the issue to the computer, with some added overhead), the situation is exactly the same as it was in iPhone OS 2.0, when third-party apps first became possible. iCloud solves exactly none of these problems: it is great for simplifying work across your different devices, but it brings nothing to the single-device case. This has nothing to do with the hardware limitations of any iOS device; it is entirely the doing of the iOS software. In fact, while this is acceptable for the iPhone, I feel this gap already limits the potential of the iPad unnecessarily; and regardless of how you think it will happen (my take, which I will elaborate in a later post: Mac OS X is the new Classic), it is clear Apple has Big Plans for iOS, but it is hard to take iOS seriously for any device used for work when Apple hasn’t even shipped a first version of a document filing system, which is quite a design task and will require multiple iterations to get right for most people.

Now you may be wondering: does it really matter for work on iOS to depend on a corporate, publishing, design studio, etc. infrastructure? Most people working on computers already work in the context of such an infrastructure. I think that yes, it does matter. Even if we admit that people working outside such an infrastructure are the exception rather than the rule, there are many of them, enough to prop up a competing platform (potentially the Mac) that would cater to their needs. Plus, such an infrastructure (in small businesses, for example) may be unreliable, so it is a good idea to have a fallback. Moreover, it’s not really a good idea for Apple to make iOS dependent on such an infrastructure, as Apple would then not be able to control aspects of the experience it likely cares about, and would not be able to define, for instance, the modern notion of how to encapsulate user creations (I can imagine Apple getting past the concept of documents themselves and introducing something new), or how document typing information is represented. Whereas if iOS devices had a document filing system worthy of the name, but could also still be used in such an infrastructure as they can today, then Apple could define the rules and external infrastructure would follow its lead. Currently, iOS devices are more akin to terminals when it comes to working on them; not quite VT-100s or Chromebooks, but you get the idea.

When I see the absence of a user-visible traditional file system in iOS being lauded as some sort of brilliant new move, I scratch my head. It is a bold move, for sure, and not having something does represent accomplished work in the sense that it is a design decision, but honestly, not having this feature is the easy part; creating a worthwhile replacement is the hard part, one that Apple has not shown even an interest in tackling. Moreover, the absence of a user-visible filesystem is nothing new. Back in the 80’s, when computer GUIs were developed, two philosophies emerged for dealing with documents: a document-centric approach, where documents are at the center and applications are but tools, each of which can be used for a specific task on these documents; and an application-centric approach, where applications are the focus and documents only make sense within their context. The Apple Lisa, for instance, was document-centric: users would tear a sheet from a stationery pad to create a document, which could then be operated on by tools. By contrast, the Macintosh (and everything it then inspired) was mostly application-centric. In this context, iOS is merely purely application-centric. Precedents for such systems exist, including, for instance, game consoles with memory cards.

And was it really necessary to forego the filesystem in its entirety in the first place? Admittedly, it has become more and more complicated over the years, with documents being diluted among an ever-increasing number of non-document files visible to the user, especially after the Internet and the Web came to be. And, okay, even the original Macintosh Finder represented applications and system files along with user documents, and thus was not really a document filing system. But was it really necessary to throw out the baby with the bathwater? It would have been feasible for iOS to feature a clean filesystem view with most everything invisible and various enhancements (like virtual folders and virtual filenames) so that it would show only documents (in fact, I think the Mac OS X Finder in 2001 should have shown only the inside of the home folder, with applications launched from a Launchpad-like mechanism, but I guess a few things like the need to support Classic prevented that anyway). But maybe filesystems as users know them had truly become fatally tainted, and maybe it was indeed necessary to take a clean break from the past; in the end it doesn’t really matter either way. What matters is that it is not a good thing to forego something and put up no successor for so long.

In the end, I am afraid Apple is not taking this aspect of the computing experience seriously, and is neglecting it. They ought to take it seriously, because it will matter; I think it will matter a lot, in fact.

I explored a related aspect of document management in a followup — February 21, 2012

~ Reactions ~

Jesper (who, unbeknownst to me, had already touched some of these points, such as the specific notion of a document filing system) expands on the matter, also theorizing why the iOS group makes iOS be that way.

Unfortunately my knowledge of Magyar is exactly zero (and Google Translate is a bit hit and miss), but I’m sure Benke Zsolt is saying very interesting things.

I am honored that Lukas Mathis would link to me, but if I am mentioning it as a reaction it is because of the slightly overstated, but pretty good comparison he added.

A word about SOPA

The tech media is abuzz with news of a bill called “SOPA”, and so I learned that the people of the United States of America, represented by their senators and representatives, are considering new legislation aimed at combating digital piracy. It is not my place to criticize the decisions of the sovereign people of the USA over their domestic affairs. However, I urge the people of the USA and their representatives to seriously consider the impact of the proposed legislation on their international commitments before making their decision.

For one, while filtering DNS entries at ISPs in the USA might seem to have only a local impact, it would in fact seriously undermine the very infrastructure of the Internet, which is recognized as a global infrastructure not belonging to any nation in particular.

Then, the broad and not very strict criteria for classifying a site under the proposed legislation mean that rights holders in the USA would be given enforcement powers much greater than they have had in the past. Moreover, some rights holders have in the past used existing tools, such as DMCA takedowns, to target and block sites that were not engaged in intellectual property infringement, but rather in activities like parody, which is protected free speech. Finally, add to this the lack of any due process, and innovative sites from outside the USA would be exposed to a serious risk of being blocked on a complaint from a USA-based competitor, or of being unable to collect money from USA citizens, with little recourse if this were to happen; this could be considered an impediment to free trade by the WTO.

People of the USA, I thank you for your attention and wish to send you my most friendly salutations.

GCC is dead, long live the young LLVM

(Before I get flamed, I’m talking of course of GCC in the context of the toolchains provided by Apple for Mac and iOS development; the GCC project is still going strong, of course.)

You have no doubt noticed that GCC disappeared from the Mac OS X developer tools install starting with Lion; if you run gcc --version, you’ll see that LLVM-GCC has been given the task of handling compilation for build systems that directly reference gcc. And now, with the release of the iOS 5 SDK, GCC has been removed for iOS development too, leaving only LLVM-based compilers there as well.

Overall I’m going to say it’s a good thing: LLVM, especially with the Clang front end, has already accomplished a lot and has so much potential ahead of it; and while GCC was not a liability, I guess this very customized fork was a bit high-maintenance. Still, after 20 years of faithful service for Cocoa development at NeXT and then Apple, it seems a bit cavalier for GCC to be expelled in the mere months between the explicit announcement and its actual removal. Ah well.

But while I have no worry about LLVM when doing desktop development (that is, when targeting x86 and x86-64), LLVM targeting iOS (and thus ARM) is young. Very young. LLVM was only deemed production quality when targeting ARM in the summer of 2010, merely a year and change ago. Since then I have heard of (and seen Chris Lattner acknowledge) a fatal issue (since fixed) with LLVM for ARM, and it seems another has cropped up in Xcode 4.2 (hat tip to @chockenberry). So I think Apple’s decision to remove GCC as an option for iOS development was slightly premature: a compiler is supposed to be something you can trust, as it has the potential to introduce bugs anywhere in your code; it has to be more reliable and trustworthy than the libraries, or even the kernel, as Peter Hosey quipped.

Now don’t get me wrong, I have no problem with using Clang or LLVM-GCC for iOS development; in fact, at work we switched to Clang on a trial basis about a year ago (I guess it’s no longer a trial, certainly not after the iOS 5 SDK), and we’ve had no issues ourselves, nor looked back since. Indeed, despite its relative lack of maturity and the incidents I mentioned, LLVM has one overwhelming redeeming quality: Apple itself uses LLVM to compile iOS. Cocoa libraries, built-in apps, Apple iOS App Store apps, etc.: millions upon millions of lines of code ensure that if a bug crops up in LLVM, Apple will see it before you do… provided, that is, that you don’t do things Apple doesn’t do. For instance, Apple stopped targeting ARMv6 devices with iOS 4.3 in March 2011, and it is no coincidence that the two incidents I mentioned were confined to ARMv6 and did not affect ARMv7 compilation.

So I recommend a period of regency, where we allow LLVM to rule, but carefully oversee it, and in particular prevent it from doing anything it wouldn’t do at Apple, so that we remain squarely in the use cases where Apple shields us from trouble. This means:

  • forgoing ARMv6 development from now on. In this day and age it’s not outlandish for new projects to be ARMv7-only, so do so. If you need to maintain an existing app with ARMv6 compatibility, then develop and build it for release with Xcode 4.1 and GCC, or better yet, on a Snow Leopard machine with Xcode 3.2.6 (or, if you don’t mind Snow Leopard Server, it seems possible to use a virtual machine to do so).
  • avoiding unaligned accesses, especially for floating-point variables. It is always a good idea anyway, but doubly so now; doing otherwise is just asking for trouble.
  • ensuring your code is correct. That sounds like evident advice, but I’ve seen in some cases incorrect code which would run OK with GCC, but was broken by LLVM’s optimizations.
  • I’d even be wary of advanced C++ features. As anyone who has spent enough time in the iOS debugger can attest from the call stacks featuring C++ functions from the system, Apple uses quite a bit of C++ in the implementation of some frameworks, like Core Animation; however, C++ is so vast that I’m not sure they use every nook and cranny of the C++98 specification, so be careful.
  • avoiding anything else you can think of that affects code generation and is unusual enough that Apple likely does not use it internally.

Now there’s no need to be paranoid either; for instance to the best of my knowledge Apple compiles most of its code for Thumb, but some is in ARM mode, so you shouldn’t have any problem coming from using one or the other.

With this regency in place until LLVM matures, there should be no problems ahead, only success with your iOS development (as far as compiling is concerned, of course…).

“Translation layers”, externally sold content, and unsandboxed apps

So Apple ended up relenting on most of the requirements introduced at the same time as subscriptions. However, Apple still requires that apps not sell digital content in the app itself through means other than in-app purchases, or link to a place where this is done. I would say this is a reasonable way to provide an incentive for these products to be offered as in-app purchases, were it not, first, for the fact that the agency model used for ebooks in particular (though I’m sure other kinds of digital goods are affected) does not allow for 30% of the price to go to Apple, even if the in-app price is 43% higher than the price outside the app, and second, for the fact that some catalogs (Amazon’s Kindle one, obviously, but it must be a pain for other actors too) cannot even be made to fit in Apple’s in-app database.

John Gruber thinks this is not Apple’s problem, but Apple has to exist in reality at some point. Besides, I don’t think Apple is entitled, over the whole lifetime of an app, to 30% of any purchase where the buying intent originated in the app. Regardless of whether you think that’s fair, competitors will eventually catch up in this area and offer better conditions to publishers, making it untenable for Apple to keep this requirement. But it’s not fair either for Apple to shoulder for free the cost of screening, listing, hosting, etc. these “free” clients that in fact enable a lot of business. Maybe apps could be required to ensure the first $10 of purchases made in the app can be paid only with tokens bought through in-app purchase (thus avoiding the issue of exposing all SKUs to Apple); only then could they take users’ money directly.

But what this edict has done anyway—besides making the Kobo, Kindle, etc. apps quite inscrutable by forcing them to remove links to their respective stores—is hurt Apple’s credibility with respect to developer announcements. Last year they prohibited Flash “translation layers”, and this prohibition had already been in force (to the extent that it could be enforced, anyway) for a few months when they relented on it. This year they dictated these rules for apps selling digital content, rejecting new apps for breaking them before the rules were even known, with existing apps having until the end of June to comply, only for Apple to significantly relax the rules at the beginning of June (and leave until the end of July to comply). In both cases, developers were actually better off doing nothing and waiting to see what Apple would actually end up enforcing. I was about to wonder how many Mac developers were scrambling to implement sandboxing, supposed to be mandatory in the Mac App Store by November, but it turns out Apple may have jumped the gun here too, at the very least: they just extended the deadline to March. In the future, Apple may claim that they warned developers of such things in advance, but the truth is most of the stuff they warned about did not come to pass in the way they warned it would, so why should developers heed these “warnings”?

Steve

I wasn’t sure I should write something, at first. Oh, sure, I could have written about the fact that I didn’t dress specially Thursday morning or didn’t bring anything to an Apple Store, as I thought that for Steve I should either do something in the most excellent taste or nothing, and I couldn’t think of the former (and so I kicked myself Saturday when I went to the Opera Apple Store to buy a Lion USB key, saw them, and thought “Of course! An apple with a bite taken out of it… dummy!”). Or I could have written about the fact that he was taken from his families at way too early an age. Or about the fact that, except for this one (and variants of this one, though one would have been enough), I was appalled by the editorial cartoons about the event (“iDead”? Seriously?). Or about a few obituaries I read or heard where the author put some criticism along with the praise (which by itself I don’t mind, honestly, he was kind of a jerk), but put it in a way that suggested the good could be kept without the flaws; while, for instance, in an industry where having different companies responsible for aspects of the user experience of a single device is considered standard practice, being a control freak is essential to ensure the quality of user experience that has made Apple a success. Or about how his presence in the keynotes during his last leave of absence (while he stepped back from presentation duties during the previous one), and his resignation merely six weeks ago, both take on a whole new meaning today.

But at the end of the day, what would I have brought, given the outpouring of tributes and other content about Steve Jobs, many from people more qualified and better writers than I am? Not much. However, I read a piece where the author acknowledges the impact Steve Jobs had on his life, and I thought I should, too, pay my dues and render unto Steve that which is Steve’s, if only to help with the cathartic process. I hope it will contribute something for his family, his family at Apple, his family at Disney/Pixar, and the whole tech and media industries in this time of grief.

I was quite literally raised with Apple computers; from an Apple ][e to the latest Macs, there has always been Apple (and only Apple) hardware in the house, for which I cannot thank my father enough. As a consequence, while I had no idea who Steve Jobs was at the time, he was already having a huge impact on me. Not because I think he designed these computers all by himself, but because, by demanding seemingly impossibly high standards from those who designed them with him, or in the case of later Macs, by having made enough of a mark at Apple that the effect was (almost) the same, he ensured a quality of user experience way beyond that of any competitor, which allowed my young self to do things he wouldn’t have been able to do otherwise, and taught him to expect, nay, demand similar excellence from his computing devices.

Then I started learning about him when he returned to Apple in 1997, from a press cautiously optimistic that the “prodigal son” could get Apple out of trouble, then how he spectacularly did so. I indirectly learned from him (in particular through folklore.org) that it requires a great deal of effort to make something look simple, that there is never good enough, merely good enough to ship this once (because on the other hand, real artists ship) and that the job of the software developer is to be in service of the user experience, not to make stuff that is only of interest to other software developers and remain in a closed circuit.

Imagining my life had Steve Jobs not made what he made is almost too ludicrous to contemplate. Assuming I would even have chosen a career in programming, I would be developing mediocre software on systems about as usable as a mid-nineties Macintosh, if that, with very little of its elegance (come on: setting aside any quibble about who copied whom, do you think Windows or any other operating system would be where it is today were it not for the Mac to, at the very least, compete with it and push it to do one better in the usability department?). And the worst thing is that I would have been content with it and considered it as good as it gets, and it would have been the same for almost all of my peers.

It’s thus safe to say that as far as my influences go, Steve Jobs is second only to my closest family members. By envisioning the future, then making it happen through leadership, talent and just plain chutzpah (for good or ill, it doesn’t seem to be possible to make people believe in your predictions of what the future will be made of, other than by actually taking charge and realizing it), he showed us what computers (and portable music players, and mobile phones, etc.) could be rather than what most people thought they could be before he showed us. And by teaching a legion of users, multiple generations of developers, and everyone at Apple to never settle for great but always strive for the best, he has ensured the continuation of this ethic for a few decades, at least (this is, incidentally, the reason why I am not too worried about the future of Apple, Inc.).

Thank you Steve. Thank you for everything. See you at the crossroads.

Benefits (and drawback) to compiling your iOS app for ARMv7

In “A few things iOS developers ought to know about the ARM architecture”, I talked about ARMv6 and ARMv7, the two ARM architecture versions that iOS supports, but I didn’t touch on an important point: why you would want to compile for one or the other, or even both (thanks to Jasconius at Stack Overflow for asking that question).

The first thing you need to know is that you never need to compile for ARMv7: after all, apps last updated at the time of the iPhone 3G (and thus compiled for ARMv6) still run on the iPad 2 (provided they didn’t use private APIs…).

Scratch that, you may have to compile for ARMv7 in some circumstances: I have heard reports that if your app requires iOS 5, then Xcode won’t let you build the app ARMv6 only. – May 22, 2012

So you could keep compiling your app for ARMv6, but is that what you should do? It depends on your situation.

If your app is an iPad-only app, or if it requires a device feature (like video recording or magnetometer) that no ARMv6 device ever had, then do not hesitate and compile only for ARMv7. There are only benefits and no drawback to doing so (just make sure to add armv7 in the Required Device Capabilities (UIRequiredDeviceCapabilities) key in the project’s Info.plist, otherwise you will get a validation error from iTunes Connect when uploading the binary, such as: “iPhone/iPod Touch: application executable is missing a required architecture. At least one of the following architecture(s) must be present: armv6”).
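For reference, the Required Device Capabilities entry in question would look like this in the Info.plist XML (this sketch assumes armv7 is the only capability your app needs to declare):

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
	<string>armv7</string>
</array>
```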

If you still want your app to run on ARMv6 devices, however, you can’t go ARMv7-only, so your only choices are to compile only for ARMv6, or for both ARMv6 and ARMv7, which generates a fat binary that will still run on ARMv6 devices while taking advantage of the new instructions on ARMv7 devices1. Doing the latter will almost double the executable binary size compared to the former; executable binary size is typically dwarfed by the art assets and other resources in your application package, so this typically doesn’t matter, but make sure to check the increase. In exchange, you will get the following:

  • ability to use NEON (note that you will not automatically get NEON-optimized code from the compiler, you must explicitly write that code)
  • Thumb that doesn’t suck: if you follow my advice and disable Thumb for ARMv6 but enable it for ARMv7, this means your code on ARMv7 will be smaller than on ARMv6, helping with RAM and instruction cache usage
  • slightly more efficient compiler-generated code (ARMv7 brings a few new instructions besides NEON).
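As an aside, the per-architecture Thumb advice can be expressed with Xcode’s conditional build settings; a sketch in .xcconfig form (GCC_THUMB_SUPPORT is the relevant setting name in Xcode, and I believe it governs the LLVM compilers as well; adjust to taste):

```
// Compile for Thumb by default: on ARMv7, Thumb-2 has full
// floating-point support, so it is essentially free code size savings.
GCC_THUMB_SUPPORT = YES
// …but not on ARMv6, where Thumb code cannot use the FPU.
GCC_THUMB_SUPPORT[arch=armv6] = NO
```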

Given the tradeoff, even if you don’t take advantage of NEON it’s almost always a good idea to compile for both ARMv6 and ARMv7 rather than just ARMv6, but again make sure to check the size increase of the application package isn’t a problem.

Now I think it is important to mention what compiling for ARMv7 will not bring you.

  • It will not make your code run more efficiently on ARMv6 devices, since those will still be running the ARMv6 compiled code; this means it will only improve your code on devices where your app already runs faster. That being said, you could take advantage of these improvements to, say, enable more effects on ARMv7 devices.
  • It will not improve performance of the Apple frameworks and libraries: those are already optimized for the device they are running on, even if your code is compiled only for ARMv6.
  • There are a few cases where ARMv7 devices run code less efficiently than ARMv6 ones (double-precision floating-point code comes to mind); this will happen on these devices even if you only compile for ARMv6, so adding (or replacing by) an ARMv7 slice will not help or hurt this in any way.
  • If you have third-party dependencies with libraries that provide only an ARMv6 slice (you can check with otool -vf <library name>), the code of this dependency won’t become more efficient if you compile for ARMv7 (if they do provide an ARMv7 slice, compiling for ARMv7 will allow you to use it, likely making it more efficient).

So to sum it up: you should likely compile for both ARMv6 and ARMv7, which will improve your code somewhat (or significantly, if you take advantage of NEON) but only when running on ARMv7 devices, while increasing your application download to a likely small extent; unless, that is, you only target ARMv7 devices, in which case you can drop compiling for ARMv6 and eliminate that drawback.


  1. Apple would very much like you to optimize for ARMv7 while keeping ARMv6 compatibility: at the time of this writing, the default “Standard” architecture setting in Xcode compiles for both ARMv6 and ARMv7.

China declined to join an earlier coalition, Russia reveals

The saga of France’s liquidation sale continues (read our previous report). Diplomatic correspondence released yesterday by Russia in response to China’s communiqué reveals that China was asked to join an earlier coalition to acquire South Africa’s nuclear arsenal (an acquisition China mentioned in its communiqué as evidence of a conspiracy), but China declined.

This would seem to undermine China’s argument of an international conspiracy directed against it; at the very least, it strengthens the earlier coalition’s claim that its only purpose was to figuratively bury these nuclear weapons. It should be noted that the high-profile countries Russia and the USA are members of both coalitions.

China then answered with an update to their communiqué (no anchor, scroll down to “UPDATE August 4, 2011 – 12:25pm PT”) stating the aim of this reveal was to “divert attention by pushing a false ‘gotcha!’ while failing to address the substance of the issues we raised.” The substance being, according to China, that both coalitions’ aim was to prevent China from getting access to these weapons for itself, weapons it would have been able to use to dissuade against attacks, and that China joining the coalition wouldn’t have changed this.

Things didn’t stop here, as Russia then answered back (don’t you love statements spread across multiple tweets?) that it showed China wasn’t interested in partnering with the international community to help reduce the global nuclear threat.

For many geopolitical observers, the situation makes a lot more sense now. At the time the France sale was closed and the bids were made public, some wondered why China wasn’t in the winning consortium and had instead made a competing bid with Japan. China and Japan are relative newcomers to the nuclear club, and while China’s status as the world’s manufacturer pretty much guarantees it will never be directly targeted, its relative lack of nuclear weapons is the reason, according to analysts, it has less influence than its size and GDP would suggest. Meanwhile, China is subjected to a number of proxy attacks, so analysts surmise increasing its nuclear arsenal would be a way for China to dissuade such attacks on its weaker allies.

So the conclusion reached by these observers is that, instead of joining alliances it perceived as designed to keep the weapons out of its reach, China played all or nothing. But the old boys’ nuclear club still has means China doesn’t have, and China lost on both counts; now China is taking the battle to the public relations scene.

Geopolitical analyst Florian Müller in particular was quoted pointing out that, given the recent expansion of its influence, it was to be expected for China to be targeted by proxy, and that other countries were likely following their normal course of action rather than engaging in any organized campaign.

So to yours truly, it seems that while the rules of nuclear dissuasion may be unfair, it is pointless to call out the other players for playing by those rules, and it makes China look like a sore loser. But the worst part may be that the Chinese officials seemingly believe their own, seemingly self-contradicting rhetoric (if they are so much in favor of a global reduction of nuclear armaments, why wouldn’t they contribute to coalitions designed to take some out of circulation?), which would mean the conflict could get even more bitter in the future.