Apple to phase out usage of Imagination Technologies GPU in iOS devices

Big news dropped recently: via Daring Fireball, we learn that Apple has notified Imagination Technologies that it will no longer be using their products in new iPhone, iPad or iPod Touch designs within a 15- to 24-month timeframe.

For some time now, the GPU has been the biggest driver, and the biggest bottleneck, of iOS performance; if not since the beginning, then at least starting with the iPad and Retina devices, and all the more so once iPads went Retina themselves. iOS SoCs have long been characterized as bandwidth monsters (relative to other mobile devices), with most of that bandwidth feeding the GPU so it can drive the screen's pixels. It is the GPU that is mostly responsible for scrolling smoothness, for how many layers you can have on screen before performance takes a dive, for the performance of games, etc. Improvements in CPU performance, by comparison, improve the iOS experience much less (mostly in the browser). If you've been curious enough to look at die shots of iPhone SoCs, for instance here for the iPhone 7, you know the GPU can take as much area as the multiple CPU cores combined, and on iPads a truly outrageous amount of silicon is devoted to the GPU alone. And you are more than aware of Apple's reliance on graphical effects in the iOS interface (not just partial transparency, but now translucency, blurs, etc.), all of which are generated by the GPU. So the GPU in iPhones and iPads is of strategic importance.

If you need a refresher, Apple has been using PowerVR GPUs from Imagination ever since the original iPhone. More than that, though, it is the only outside technology (and a significant one, at that) that is, and has always been, an explicit dependency for iOS apps. Readers of this blog don't need to be reminded of Apple's insistence on owning every single aspect of the iOS platform (if you missed the previous episodes, most of it is in my iPhone shenanigans category) so as not to let anyone (Microsoft, Adobe, whoever) get leverage over them; graphics technology has been the notable exception, being more than mere software. For instance, while Apple uses OpenGL ES, and now Metal, to abstract away the GPU, a number of PowerVR-specific extensions have always been available, and Apple encouraged their use. Even if Apple has recently tried to wean developers off these extensions, and stopped advertising the GPUs to developers as PowerVR products (starting with the A7/iPhone 5S, if I recall correctly), iDevices still use Imagination products, and PVRTC (PowerVR Texture Compression) textures are still a common sight in the bundles of iOS games and other apps, for instance.
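As an aside, detecting these PowerVR-specific capabilities at runtime comes down to inspecting the GL extension string. Here is a minimal sketch of that logic; the extension name (GL_IMG_texture_compression_pvrtc) is the real one, but the sample extension string is made up for illustration, since in a live GL context the list would come from glGetString(GL_EXTENSIONS).

```python
# Sketch: probing for the PowerVR texture-compression extension at runtime.
# In a real app the extension list comes from glGetString(GL_EXTENSIONS);
# here we use a made-up sample string so the logic is self-contained.

def supports_pvrtc(extensions_string: str) -> bool:
    """Return True if GL_IMG_texture_compression_pvrtc is listed."""
    return "GL_IMG_texture_compression_pvrtc" in extensions_string.split()

# Sample extension string, as a PowerVR-based iOS device might report it:
sample = ("GL_OES_depth_texture GL_IMG_texture_compression_pvrtc "
          "GL_EXT_texture_filter_anisotropic")

# Pick a texture format accordingly (names here are illustrative):
texture_format = "pvrtc" if supports_pvrtc(sample) else "uncompressed_rgba"
```

A game bundle that ships only PVRTC textures is implicitly betting that this check always succeeds, which is exactly the kind of assumption a GPU transition would break.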

So the first challenge here is the dependency on these extensions. I don't see Apple getting developers to make such a transition this quickly, especially as the first devices without Imagination tech are going to be available 12 months before the deadline (the iOS product lines have become too complex to perform the hardware transition all at once), which would leave developers 3 to 12 months to transition… So most likely, Apple is going to have to keep supporting those extensions, and this is going to expose them to intellectual property issues (patents or otherwise). Beyond the extensions developers explicitly use, there are all the performance characteristics and tradeoffs specific to PowerVR that iOS games have unwittingly come to depend on (e.g. whether to use complex geometry or compensate with shaders, how best to obtain some effects, etc.), which Apple would have to reproduce, or at least not regress on, in a new GPU.

And even if they were starting from a blank slate when it comes to third-party software, Apple has many technological challenges to overcome. Much like audio and video codecs, graphics processing technology is patented to the hilt; but unlike audio/video codecs, there is no FRAND licensing, no patent pool, no single licensing counter for GPU tech. Instead, existing GPU companies live in an uneasy truce, given that they are all exposed to each other's patents. And mobile GPUs are a particular breed within this universe, with techniques adapted to living within such constraints, like Tile-Based Deferred Rendering (present in all PowerVR GPUs). Apple has managed to build its own CPUs with great success, so I have little doubt they will manage to develop their own GPUs, especially given their expertise in SoC design. But I also see patent royalty payments in Apple's future.

So what does this mean for iOS developers? For now, nothing. There is nothing to justify scrambling to remove any PowerVR dependency at this point, and it's pointless to second-guess the performance characteristics of these future Apple GPUs. Best to wait for Apple to come forward. But there is some transition ahead, because at least some long-held assumptions about how iPhone graphics work are going to be challenged when the new Apple GPUs eventually appear. If anything, I'm surprised such a glaring external dependency in the iOS platform managed to remain for so long, and it will be interesting to see how this plays out and how Apple manages any necessary transition.

See also: Ryan Smith’s take at AnandTech, a reference.


Patents and their application to software have been in the news lately. Lodsys and other entities that seem to have been created from whole cloth for that sole purpose are suing various software companies for patent infringement; Android is under attack (directly or indirectly) for patent infringement by established operating system players Apple, Microsoft and Oracle (as owner of Sun); and web video is standardized, but the codec is left unspecified, as the W3C will only standardize freely-licensable technologies while any remotely modern video compression technique is patented (even the ostensibly patent-free WebM codec is heading towards having a patent pool formed around it).

Many in the software industry consider it obvious not only that reform is needed, but that software patents should be banned entirely, given their unacceptable effects; however, I haven't seen much justification of why they should be banned, as the article/blog post/editorial defending this position often treats it as obvious. Well, it is certainly obvious to the author as a practitioner of software, and obvious to me as the same, but it isn't to others, and I wouldn't want engineers of other trades to see software developers as prima donnas who think they should be exempted from the obligations related to patents for no reason other than that they are inconvenient. So here I am going to explain why I consider that software patents actually discourage innovation, and in fact discourage activity of any kind, in the software industry.

Why the current situation is untenable

Let's start with the basics. A patent is an exclusive right over an invention, granted to its inventor in exchange for registering the invention with a public office (which includes a fee). Of course, he can share that right by licensing the patent to others, or he can sell the patent altogether. Anyone else using the invention (and that includes an end user) is said to infringe the patent and is in the wrong, even if he came up with it independently. That seems quite outlandish, but it's a tradeoff that we as a society have made: we penalize parallel inventors acting in good faith in order to better protect the original inventor (e.g. to keep copyists from getting away with their copying by pretending they were unaware of the original invention). Of course, if the parallel inventor is found not to have been aware of the original patent, he is penalized less than if he had been, but he is penalized nonetheless. The aim is to give practitioners in a given domain an incentive to keep abreast of the state of the art in various ways, including by reading the patents published by the patent office in their domain. In fields where the conditions are right, I hear it works pretty well.

And it is here we see the first issue with software patents: the notorious incompetence of the USPTO (United States Patent and Trademark Office)1, which has been very lax and inconsistent when it comes to software patents and has granted a number of dubious ones; and I hear it's not much better in the other countries that grant software patents (European countries, thankfully, mostly do not grant patents on software). One of the criteria for deciding whether an invention can be patented is whether it is obvious to a practitioner aware of the state of the art, and to reasonably competent software developers the patents at the center of some lawsuits are downright obvious. The result is that staying current with the software patents being granted would be such a waste of time that it would sink most software companies faster than any patent suit.

Now, it is entirely possible that the USPTO is overworked by a flood of patent filings which it is doing its best to evaluate given its means, and that the bogus patents that end up being granted are rare exceptions. I personally believe the ones we've seen so far are but the tip of the iceberg (most are probably resting alongside more valid patents in the patent portfolios of big companies), but even if we accept that they are an exception, it doesn't matter, because of a compounding issue with software patents: litigation is very expensive. To be more specific, the U.S. patent litigation system seems calibrated for traditional brick-and-mortar companies producing physical goods at industrial scale; calibrated in the sense of how much scrutiny is given to the patents and the potential infringement, the number of technicalities that have to be dealt with before the court gets to the core of the matter, how long the various stages of litigation last, etc. Remember that in the meantime, the lawyers and patent attorneys gotta get paid. What are expensive but sustainable litigation costs for those companies simply put most software companies, which operate at a smaller scale, out of business.

Worse yet, even getting to the point where the patent and the infringement are looked at seriously is too expensive for most companies. As a result, attackers only need to have the beginning of a case to start threatening software developers with a patent infringement lawsuit if they don’t take a license; it doesn’t matter if the attacker’s case is weak and likely to lose in court eventually, as these attackers know that the companies they’re threatening do not have the means to fight to get to that point. And there is no provision for the loser to have to pay for the legal fees of the winner. So the choice for these companies is either to back off and pay up, or spend at least an arm and a leg that they will never recover defending themselves. This is extortion, plain and simple.

So even if bogus patents are the exception, it takes only a few of them ending up in the wild, used as bludgeons to waylay software companies pretty much at will, for the impact to be disproportionate to the number of bogus patents. Especially when you consider that the assailants cannot be targeted back, since they do not produce products.

But at the very least, these issues appear fixable. The patent litigation system could be scaled back (possibly only for software patents), and, who knows, the USPTO could change and do a proper job of evaluating software patents, especially if disincentives were put in place (like a higher patent submission fee) to curb the number of submissions and allow the USPTO to do a better job. One could even postulate a world where software developers "get with the program", follow patent activity, and avoid patented techniques (or license them, as appropriate) such that software development is no longer a minefield. But I am convinced this will not work, especially the latter, and that software (with possible exceptions) should not be patentable, for reasons I am going to explain.

Why software patents themselves are not sustainable

The first reason is that contrary to, say, mechanical engineers, or biologists, or even chip designers, the software development community is dispersed, heterogeneous, and loosely connected, if at all. An employee in IT writing network management scripts is a software practitioner; an iOS application developer is a software practitioner; a web front-end developer writing HTML and JavaScript is a software practitioner; a Java programmer writing line of business applications internal to the company is a software practitioner; an embedded programmer writing the control program for a washing machine is a software practitioner; a video game designer scripting a dialog tree is a software practitioner; a Linux kernel programmer is a software practitioner; an embedded programmer writing critical avionics software is a software practitioner; an expert writing weather simulation algorithms is a software practitioner; a security researcher writing cryptographic algorithms is a software practitioner. More significantly, every company past a certain size, regardless of its field, will employ software practitioners, if only in IT, and probably to write internal software related to its field. Software development is not limited to companies in one or a few fields, software practitioners are employed by companies from all industry and non-industry sectors. So I don’t see software developers ever getting into a coherent enough “community” for patents to work as intended.

The second reason, which compounds the first, is that software patents cannot be reliably indexed, contrary to, say, the chemical patents used in the pharmaceutical industry2. If an engineer working in pharmacology wants to know whether the molecule he intends to work on is already patented, there are databases that, based on the formal description of the molecule, make it possible to find any and all patents covering that molecule, or, if the search turns up nothing, to conclude with reasonably high confidence that the molecule is not patented yet. No such thing exists (and likely no such thing can exist) for software patents, where there is at best keyword search; this is less accurate, and in particular cannot give confidence that an algorithm we want to clear is not patented, as a keyword search may miss patents that would apply. It appears the only way to ensure a piece of software does not infringe patents is to read all software patents (every single one!) as they are issued, to see whether one of them covers the piece of software we want to clear; given that every company that produces software would need to do so, and remembering the compounding factor that this includes every company past a certain size, this raises some scalability challenges, to put it lightly.

This is itself compounded by the fact that you do not need many resources, or much time, to develop and validate a software invention. To figure out whether a drug is worth patenting (to say nothing of producing it in the first place), you need a lab, in which you run experiments that take time and money: the biological materials, the qualified technicians tending to the experiments, etc. An experiment may not work, in which case you start over; one success has to bear the cost of probably an order of magnitude more failures. To figure out whether a mechanical invention is worth patenting, you need to build it, and spend a lot of materials (parts of the machine itself when it breaks catastrophically, or materials the machine is supposed to process, like wood or plastic granules) iterating on the invention until it runs; and even then it may not pan out in the end. But validating a software invention only requires running it on a computer that can be had for $500, eating a handful of kilojoules (kilojoules! Not kWh; kilojoules, or put another way, kilowatt-seconds) of electrical power, and no worker time at all except waiting for the outcome, since everything in running software is automated. With current hardware and compilers, the outcome of whether a software invention works or not can be had in mere seconds, so there is little cost to a failed invention. As a result, developing a software invention comparable in complexity to one described in a non-software patent has a much, much lower barrier to entry and requires multiple orders of magnitude fewer resources; everyone can be a software inventor. Now, there is still the patent filing fee, but still: in software you've got inventions that are easier to come up with, so many more of them will be filed, while they impact many more companies… Hmm…
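To put a rough number on that "handful of kilojoules" claim, here is a back-of-the-envelope computation; the 100 W machine draw and 30-second compile-and-run cycle are assumed figures for illustration, not measurements.

```python
# Back-of-the-envelope: energy cost of one compile-and-run validation cycle.
# Assumed figures (not measurements): a ~100 W machine, a ~30 s cycle.
power_watts = 100         # assumed average machine draw
runtime_seconds = 30      # assumed compile + run time

energy_joules = power_watts * runtime_seconds   # watts x seconds = joules
energy_kwh = energy_joules / 3.6e6              # 1 kWh = 3.6 million joules

# 3000 J = 3 kJ: a handful of kilojoules, well under a thousandth of a kWh.
```

Even at several orders of magnitude above these assumptions, the energy bill of a failed experiment stays negligible, which is the point: failure is nearly free in software.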

Of course, don't get me wrong: I do not mean that software development is easy or cheap, because software development is about creating products, not inventions per se. Developing a product involves a lot more than the inventions it contains (user interface design, getting code by different people to run together, figuring out how the product should behave and what users want from it, and many other things, to say nothing of non-programming work like art assets, etc.), and developing all that takes a lot of time and resources.

Now let us add the possibility of a company obtaining a software patent so universal and unavoidable that it grants the company a monopoly on a whole class of software. This has historically happened in other domains, perhaps most famously with Xerox, which long held a monopoly on copying machines by holding the patent on the only viable technique for making them at the time. But granting Xerox a monopoly on the only viable copying technique did not impact other markets, as the invention was unavoidable for making copying machines and… well, maybe integrated fax/printer/copier gizmos, which are not much more than the sum of their parts, but that was it. A software invention, on the other hand, is always a building block for more complex software, so an algorithmic patent can have an unpredictable reach. Take the hash table, for instance. It is a container that can quickly (in a formally defined sense) determine whether it already contains an object with a given name, and where, while still allowing a new object to be added quickly; something computer memories by themselves cannot do. Its performance advantages do not merely make programs that use it faster; they allow many programs, which would otherwise be unfeasibly slow, to exist at all. The hash table enables a staggering amount of software: for instance, using a hash table you can figure out, in reasonable time, the list of different answers given in a free-form field of a survey, and for each such answer the average age of the respondents who gave it (as an example). Most often the various uses of hash tables are even further removed from user functionality, but they are no less useful, each one providing its services to another software component, which itself provides services to another, and so on, in order to provide the desired user functionality.
Thanks to the universal and infinitely composable nature of software, there is no telling where else, in the immensity of software, a software invention could be useful.
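The survey example above is exactly the kind of job a hash table makes trivial. Here is a sketch using Python's built-in dict (which is a hash table), with made-up survey data:

```python
# The survey example from the text: use a hash table (Python's dict) to
# group free-form answers and compute the average respondent age per answer.
# The survey data below is made up for illustration.
from collections import defaultdict

responses = [
    ("cycling", 34), ("reading", 51), ("cycling", 28),
    ("gardening", 62), ("reading", 45), ("cycling", 40),
]

# One pass over the data; each lookup and insert is fast thanks to hashing.
totals = defaultdict(lambda: [0, 0])   # answer -> [sum of ages, count]
for answer, age in responses:
    totals[answer][0] += age
    totals[answer][1] += 1

averages = {answer: s / n for answer, (s, n) in totals.items()}
```

Without hashing, each of the N responses would require scanning the list of answers seen so far, and on large surveys that quadratic behavior is exactly the "unfeasibly slow" case the text describes.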

Back when it was invented, the hash table was hardly obvious. Had it been patented, everyone would have had to find alternative ways to accomplish more or less the same purpose (given its universal usefulness), such as trees; but those would themselves have become patented, until there was no solution left, as there are only so many ways to accomplish that goal (given that in software development you cannot endlessly vary materials, chemical formulas, or environmental conditions). At that point software development would have become frozen into an oligopoly of patent-holding companies, which would have taken advantage of being the only ones able to develop software to file yet more patents and indefinitely maintain that advantage.

Even today, software development is still very young compared to other engineering fields, even compared to what they were around the start of the nineteenth century when patent systems were introduced. And its fundamentals, such as the hardware it runs on and that hardware's capabilities, change all the time, so there is always a need to reinvent some of its building blocks; patenting techniques currently being developed therefore risks having an enormous impact on future software.

But what if algorithmic inventions that are not complex by software standards were denied patent protection, and only complex (by software standards) algorithms were eligible, to compensate for the relative cheapness of developing an algorithmic invention of complexity comparable to a non-algorithmic one, and to avoid the issue of simple inventions with too broad a reach? The issue is that, with rare exceptions, complex software does not constitute an invention bigger than the sum of its individual inventions. Indeed, complex software is developed to serve user needs, which are not one big technical problem but rather a collection of technical problems the software has to solve; the complex software is more than the sum of its parts only to the extent that these parts work together to solve a more broadly defined, non-technical problem (that is, the user needs). But this complex software is not a more complex invention solving a new technical problem that its individual inventions do not already solve, so patenting it would be pointless.

Exceptions (if they are possible)

This does leave open the possibility of some algorithmic techniques for which I would support making an exception and allowing them patent protection while denying it to algorithms in general, contingent on a caveat I will get into afterwards.

First among these are audio and video compression techniques: while they come down to algorithms in the end, they operate on real-world data (music, live-action footage, voice, etc.) and have been shown to be effective at compressing this real-world data, so they have more than just mathematical properties. More importantly, these techniques compress data by discarding information that will not be noticed as missing by the consumer of the media once it is decompressed, and this has to be determined by experimentation, trial and error, large user trials, etc., which take resources comparable to those of a non-algorithmic invention. As a result, the economics of developing these techniques are not at all similar to those of software, and the application of these techniques is bounded to some, and not all, software; so it is worth considering keeping patent protection for them.

Other techniques which are, in my opinion, worth patenting even though they are mostly implemented in software are some encryption/security systems. I am not necessarily talking about encryption building blocks like AES or SHA, but rather about setups such as PGP. These setups have provable properties as a whole, so they are more than just the sum of their parts; furthermore, as with all security software, validating that such techniques work cannot be done by merely running the code3, but only by proving (a non-trivial job) that they are secure, again bringing the economics more in line with those of non-algorithm patents. Therefore, having these techniques in the patent system should be beneficial.

So it could be worthwhile to try to carve out an exception allowing patents for these techniques and others sharing the same patent-system-friendly characteristics, but if this is attempted, extreme care will have to be taken in specifying such an exception. Indeed, even in the U.S.A. algorithm patents are formally banned, but accumulated litigation produced court decisions that progressively eroded this ban: first allowing algorithms on condition they were intimately connected to some physical process, then easing that qualification more and more until it became meaningless. Software patents must still pretend to be about something other than software or algorithms, typically being titled some variation of "method and apparatus", but in practice the ban on algorithm patents is well and truly gone, having been loopholed to death. So it is a safe bet that any exception granted to an otherwise general ban on software patents, should such a ban happen in the future, would be subject to numerous attempts to exploit it for loopholes allowing software in general to be patented again, especially given the strong pressure from big software companies to keep software patents valid.

So if there is any doubt as to the validity and solidity of a proposed exception to a general ban on software patents, then it is better to avoid general software patents coming back through a back door, and therefore better to forego the exception. Sometimes we can’t have nice things.

Other Proposals

Nilay Patel argues that software patents should be allowed, officially even. He mentions mechanical systems, and a tap patent in particular, arguing that since the system can be entirely modeled using physical equations (fluid mechanics in particular), the entire invention comes down to math in the end, just like software; so why should software patents be singled out and banned? But the key difference here, to take again the example of the tap patent he mentions, is that the math acting as an external constraint, the fluid mechanics, is an immutable constant of nature. With algorithm patents, on the other hand, all the algorithms involved are the work of man; even if a given patent has to work against externally constraining algorithms, due to legacy constraints for instance, those were the work of man too. In fact, granting a patent because an invention is remarkable for the legacy constraints it has to work with and how it solves them would indirectly encourage the development and diffusion of such constraining legacy! We certainly don't want the patent system encouraging that.

The EFF proposes, among other things, allowing independent invention as a valid defense against software patent infringement liability. If this were allowed, we might as well save the costs and abolish software patents altogether: a patent system relies on independent infringement still counting as infringement, to prevent abuses from rendering the whole system meaningless, and I do not see software being any different in that regard.

I cannot remember where, but I have heard the idea, especially with regard to media compression patents, of allowing software implementations to use patented algorithmic inventions without infringing, so that software publishers would not have to get a license, while hardware implementations would still require one. One issue is that "hardware" implementations are sometimes in fact DSPs running code that actually implements the codec algorithms, so under this scheme the invention could be argued to be implemented in software; OEMs would then just have to switch to such a design if they hadn't already, qualify the implementation as software, and not pay for any license, so it would be equivalent to abolishing algorithm patents entirely.

  1. I do not comment on the internal affairs of foreign countries in this blog, but I have to make an exception in the case of the software patent situation in the U.S.A., which is so appalling that it ought to be considered a trade impediment.

  2. I learned that indexability was a very useful property that, in contrast to software patents, some patent domains did have, and the specific example of the pharmaceutical industry as such a domain, from an article on the web which I unfortunately cannot find at the moment; a search on the web did not allow me to find it but turned up other references for this fact.

  3. It’s like a lock: you do not determine that a lock you built is fit for fulfilling its purpose by checking that it closes and that using the key opens it; you determine it by making sure there is no other way to open it.

China declined to join an earlier coalition, Russia reveals

The saga of France’s liquidation sale continues (read our previous report). Diplomatic correspondence released yesterday by Russia in response to China’s communiqué reveals that China was asked to join an earlier coalition to acquire South Africa’s nuclear arsenal (an acquisition China mentioned in its communiqué as evidence of a conspiracy), but China declined.

This would seem to undermine China's argument of an international conspiracy directed against it; at the very least it strengthens the earlier coalition's claim that its only purpose was to figuratively bury these nuclear weapons. It should be noted that the high-profile countries Russia and the USA are members of both coalitions.

China then answered with an update to its communiqué (no anchor, scroll down to "UPDATE August 4, 2011 – 12:25pm PT") stating that the aim of this reveal was to "divert attention by pushing a false 'gotcha!' while failing to address the substance of the issues we raised." The substance being, according to China, that both coalitions' aim was to keep China from getting access to these weapons for itself, weapons it would have been able to use to deter attacks, and that China joining the coalition wouldn't have changed this.

Things didn't stop there, as Russia then answered back (don't you love statements spread across multiple tweets?) that this showed China wasn't interested in partnering with the international community to help reduce the global nuclear threat.

For many geopolitical observers, the situation makes a lot more sense now. When the France sale closed and the bids were made public, some wondered why China wasn't in the winning consortium and had instead made a competing bid with Japan. China and Japan are relative newcomers to the nuclear club, and while China's status as the world's manufacturer pretty much guarantees it will never be directly targeted, its relative lack of nuclear weapons is, according to analysts, the reason it has less influence than its size and GDP would suggest. Meanwhile, China is subjected to a number of proxy attacks, so analysts surmise that increasing its nuclear arsenal would be a way for China to deter such attacks against its weaker allies.

So the conclusion reached by these observers is that, instead of joining alliances it perceived as designed to keep the weapons out of its reach, China played all or nothing. But the old boys' nuclear club still has means China doesn't have, and China lost on both counts; now it is taking the battle to the public relations arena.

Geopolitical analyst Florian Müller in particular was quoted as pointing out that, given the recent expansion of its influence, China could expect to be targeted by proxy, and that the other countries were likely following their normal course rather than engaging in any organized campaign.

So to yours truly, it seems that while the rules of nuclear deterrence may be unfair, calling out the other players for playing by those rules is pointless, and it makes China look like a sore loser. But the worst part may be that Chinese officials seemingly believe their own, seemingly self-contradictory rhetoric (if they are so much in favor of a global reduction of nuclear armaments, why wouldn't they contribute to coalitions designed to take some out of circulation?), which means the conflict could become even more bitter in the future.

France goes down, its nuclear weapons, and China

So France is going belly up. Kaput. Thankfully not after a civil war; the strife has been mostly political, though a few people unfortunately died in some of the riots. But after regions like Corsica, Brittany and Provence unilaterally declared independence, after Paris declared a real Commune in defiance of the government, and with Versailles, the usual fallback, not seeming safe either, it became clear there was no way out but the eventual dissolution of the old, proud French Republic; much like the USSR dissolved in 1991, but without an equivalent of Russia to pick up the main pieces, the Paris Commune being seen as too unstable.

Among the numerous geopolitical problems this raised, one stood out. Among its armed forces, the French Republic had under its control several nuclear warheads, the missiles to carry them, and a fleet of submarines to launch them. Legitimately terrified that these weapons could fall into the hands of a rogue state or terrorist group, the international community sustained the French government long enough for it to organize a liquidation sale of its nuclear armament and other strategic assets. But Russia certainly wasn’t going to let the USA buy them, and neither were the USA willing to see Russia get them. Realizing that making sure these weapons didn’t fall into the wrong hands was more important than either party taking control of them itself, Russia, the USA, and a few other countries such as India and the United Kingdom formed a coalition, jointly bid for the dangerous arsenal, and won.

Though they agreed on a few principles before forming this alliance, getting control of the arsenal was considered the urgent matter, and at the time the sale was closed the coalition had not yet agreed on what to do with those weapons. Most geopolitical observers and analysts agreed, however, that the coalition would end up keeping the weapons around just in case, though inactive and offline, if it was not going to dismantle them outright; after all, using them would require the joint agreement of all parties, an agreement that was extremely unlikely to ever be reached.

But China suddenly started publicly complaining that the members of the coalition were engaged in a conspiracy against it, citing military interventions by some of the coalition members in foreign countries, various international disagreements, and now this France liquidation sale (China did not take part in the coalition; with one ally, it made a separate bid for these weapons but was eventually outbid). Observers, however, were skeptical: these events did not seem connected in any way except for being mentioned together in that communiqué; plus, as if joint ownership didn’t already ensure at least immobilization by bureaucracy, the coalition includes one partner of China: Brazil. And the fact that the coalition spent quite a bit of money to acquire this arsenal, more than some initial estimates, probably only reflected the importance of keeping it out of the wrong hands, given the unstable international landscape, what with rogue states, terrorist groups, less than trustworthy states gaining importance, etc. Not to mention that China itself bid rather high in that auction.

In the end it is suspected that, while Chinese officials may believe this conspiracy theory themselves, these complaints made in public view were actually intended to fire up nationalism in the country, or even better, in the whole of East Asia.

The saga, unsurprisingly, didn’t stop there: Russia answered, read all about it in the followup. – August 5, 2011

In support of the Lodsys patent lawsuit defendants

If you’re the kind of person who reads this blog, then you probably already know from other sources that an organization called Lodsys is suing seven “indie” iOS (and beyond!) developers for patent infringement after it started threatening them (and a few others) to do so about three weeks ago.

Independently of the number of reactions this warrants, I want to show my support, and I want you to show your support, to the developers who have been thus targeted. Apparently, in the USA, even defending yourself to find out whether a claim is valid doesn’t just cost an arm and a leg: the sheer cost of the litigation can put such developers completely out of business. So it must be pretty depressing when you work your ass off to ship a product, a real product with everything that entails (engine programming, user interface programming, design, art assets, testing, bug fixing, support, etc.), only to receive demands for part of your revenue just because someone claims to have come up with a secondary part of your app first; this someone being potentially anyone with half of a quarter of a third of a case and deeper pockets than yours, since you’d be out of business by the time the claim is found to be invalid. It must be doubly depressing when the alleged infringement comes from your use of a standard part of the platform, one that you should (and in fact, in the case of iOS in-app purchase, have to) use as a good platform citizen.

I have known about and enjoyed the Iconfactory’s work for fifteen (well, twelve1) years now; I use Twitterrific, I have bought Craig Hockenberry’s iPhone dev book, I follow him on Twitter and have met him once. I know that the Iconfactory is an upstanding citizen of the Mac and iOS ecosystem and doesn’t deserve this. I am not familiar with the other defendants, but I am sure they do not deserve to be thus targeted, either.

So, to Craig, Gedeon, Talos, Corey, Dave, David, Kate and all the other Iconfactory guys and gals; to the fine folks of Combay, Inc; to the no less fine folks of Illusion Labs AB; to Michael; to Richard; to the guys behind Quickoffice, Inc.; to the people of Wulven Games; I say this: keep faith, guys. Do not let this get you down, keep doing great work, and know there are people who appreciate you for it and support you. I’m supporting you whatever you decide to do; if you decide to settle, that’s okay, maybe you don’t have a choice, you have my support; if you decide to fight, you have my support; and if you want to set up a legal defense fund to be able to defend yourselves, know that there are people who are ready to pitch in, and I know I am.

And in the meantime, before the patent system in the USA gets the overhaul it so richly deserves (I seriously wonder how any remotely innovative product2 can possibly come out of the little guys in the USA, given such incentives), maybe we can get the major technology companies to stop selling their products in that infamous East Texas district (as well as in the other overly patent-friendly districts), so that the district becomes a technological blight where nothing more advanced than a corded phone is available. I don’t think it could or would prevent patent lawsuits over tech products from being filed there, but at least it would place the court there in a very uncomfortable position vis-à-vis the district population.

  1. Let’s just say my memories of my time online in the nineties are a bit fuzzy; it’s a recent browse through digital archives that made me realize I in fact only discovered the Iconfactory in 1999

  2. The initial version of that post just read “innovation” instead of “remotely innovative product”; I felt I needed to clarify my meaning