A theory on the significance of the Apple A4

Before I begin, a clarification: I do not own an iPad. Besides living in France (where you still can’t even pre-order one at the time of this writing), I also currently have no need for this particular device; however, I am very interested in the computing platform the iPad is inaugurating.

One of the perks of my current workplace is that many of my colleagues, while working on software, have a semiconductor background, NXP being a semiconductor company. So when Apple introduced the iPad, many of us were intrigued by the A4 “processor” they said was powering the device. We thought it was very unlikely they could have created a whole new, competitive processor core implementing the ARM architecture (as XScale does, for instance: it implements the ARM architecture but wasn’t created by ARM) in only the year and a half since the acquisition of PA Semi, so we figured Apple had probably “just” licensed a processor core from ARM for the A4.

The first analyses seem to indicate that not only is this the case, but the A4 even features “just” a single Cortex A8 core like, for instance, the iPhone 3GS, not something fancier but still plausible like one or two Cortex A9 cores. Likewise, the graphics processor seems to be a PowerVR SGX like in the iPhone 3GS. It’s a higher-clocked Cortex A8, and the whole chip is probably built on a smaller process node, but it’s a Cortex A8 nonetheless; apparently nothing they couldn’t have obtained from the SoC portfolio of, e.g., Samsung (which seems to be fabbing the A4). So what is Apple doing with the A4? They certainly are not designing a SoC just for the sake of doing it.

Let me state upfront that I have no inside information, just a hunch; this is entirely speculation. It may be a sound, consistent theory that would explain everything, and still be wrong because the actual explanation is something completely different.

While many relate SoCs such as the Apple A4 [1] to recent developments from Intel and AMD which put a graphics processor on the same chip as the processor (sometimes not even on the same die), and call SoCs “processors”, a SoC is a system. But instead of being a system built by putting together chips from different vendors on a board, a System on a Chip is “built” by laying out components from different vendors on the same silicon die; this allows smaller designs, sometimes lower costs, and lower power consumption than a comparable multi-chip solution. Using a SoC is pretty much a necessity in a device as constrained as a phone, and even if the iPad is less constrained, a SoC is still a big win there.

This sounds like a tautology, but by designing their own SoC, Apple is designing their own system. The off-the-shelf SoCs, and even the ones customized for Apple found in other iPhone OS devices (which we know are customized if only because they are Apple-branded), may have been OK for the iPhone and iPod Touch, but these SoCs were initially designed with more traditional handsets in mind; the iPhone OS interface, with its smooth, continuous scrolling, use of animations, transparency, etc. (all of which are characteristic of the “new computing” the iPhone OS embodies), probably taxes these SoCs in ways that were not foreseen for Symbian and Windows Mobile interfaces. The graphics processor can do all these effects, but the intensity with which they are used likely reveals bottlenecks (probably data bandwidths) in the architecture of these SoCs; notice that the processor core matters very little here. Now consider that the iPad needs to move more than five times as many pixels as an iPhone, and you may start to understand the problem. There are probably other areas of the system (e.g. power saving) that can be properly designed only with a view of the whole system, with a whole software stack above the hardware. By designing the A4, Apple is more directly making the hardware decisions that will matter, for instance how the memory is shared: not in amount (I’m sure that’s configurable already) but, e.g., in bandwidth. While the processor core matters too, it was probably not the main liability here.
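
To put a number on the pixel claim, here is a quick back-of-the-envelope calculation in C. The display resolutions are the published ones; the 60 frames per second full-redraw rate and the 4 bytes per pixel are my own illustrative assumptions, chosen only to show how quickly the required memory bandwidth grows.

    #include <stdio.h>

    int main(void) {
        /* Published display resolutions. */
        const long iphone_px = 480 * 320;   /* iPhone 3GS: 153,600 pixels */
        const long ipad_px   = 1024 * 768;  /* iPad:       786,432 pixels */

        /* Illustrative assumptions: 32-bit pixels, full redraw at 60 Hz. */
        const long bytes_per_px = 4;
        const long fps          = 60;

        printf("pixel ratio: %.2f\n", (double)ipad_px / iphone_px);
        printf("iPhone worst-case fill rate: %.1f MB/s\n",
               iphone_px * bytes_per_px * fps / 1e6);
        printf("iPad worst-case fill rate:   %.1f MB/s\n",
               ipad_px * bytes_per_px * fps / 1e6);
        return 0;
    }

That is a 5.12x pixel ratio, and on the order of 190 MB/s just to repaint the iPad screen under these assumptions, before the processor, the compositor, and everything else get their share of the memory bus.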

Remember what Bob Mansfield says in the iPad intro video: that the A4 was designed by the hardware team together with the software team, giving performance that could not be achieved any other way? That fits this theory. It is related to the end-to-end argument, which basically states that adding features at a low level has to be done in light of the whole system, otherwise the feature will be of limited usefulness; a consequence is that a low-level component, so far designed for a given system, may have deficiencies when used in a new system, and these deficiencies can only be revealed in the context of that new system. Given how they use the hardware, iPhone OS devices end up being different enough systems that it makes sense to design a more specific SoC for them, and to keep everyone else out of the design loop. To top it off, this allows Apple to keep more details secret from Samsung, which is also a potential competitor.

To give you an analogous situation, consider the original Macintosh: memory was accessed in regular alternation between the processor and the display system, as there was no dedicated video memory; not only that, but at the end of each scan line, the display needed no access during the interval when the screen beam goes back to the start of the next line, so the designers took advantage of this slot to fetch an audio sample instead. A brilliant design. Now imagine that instead of using a 68000 and a bunch of PALs for the other logic, the Mac team had had to use a single chip containing the whole system except for memory and some I/O, and that this chip had been designed with computers like the IBM PC in mind, and so actually optimised for text interfaces and PC speaker beeps. Would they have been able to build the Macintosh with such a chip? Even if they could have gotten the supplier of such an imaginary chip to fix bottlenecks and add features, this would still have been an extra step in the design loop, so they might eventually have had to develop such a chip themselves; if not at first, then for, say, the Mac II. Now, while there are direct parallels, such as both devices having video memory shared with system memory, I don’t think the design challenges are similar in detail; but the situations are similar in a broad sense.
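
To make that sharing scheme concrete, here is a toy C sketch of how one frame was scheduled. The 512-pixel, 342-line display geometry is that of the original Macintosh; the function names and the software framing are mine, since the real interleaving was done in hardware by the PALs.

    #include <stdint.h>

    #define LINES_PER_FRAME 342  /* visible scan lines on the original Mac */
    #define WORDS_PER_LINE   32  /* 512 one-bit pixels = 32 16-bit fetches */

    /* Hypothetical callbacks standing in for what the hardware did. */
    extern uint16_t ram_fetch_video(int line, int word);
    extern uint16_t ram_fetch_audio(int line);
    extern void     shift_out_pixels(uint16_t word);
    extern void     feed_audio_dac(uint16_t sample);

    /* One frame, as the hardware interleaved it: during the active part
       of each line the display system takes every other memory cycle
       (the 68000 gets the remaining ones); during horizontal blanking,
       when no pixels are needed, that same slot is reused to fetch one
       audio sample per line. */
    void scan_one_frame(void) {
        for (int line = 0; line < LINES_PER_FRAME; line++) {
            for (int word = 0; word < WORDS_PER_LINE; word++)
                shift_out_pixels(ram_fetch_video(line, word));
            /* Horizontal blanking: the beam flies back, fetch audio. */
            feed_audio_dac(ram_fetch_audio(line));
        }
        /* Audio fetches continued during the vertical blanking lines too,
           which is how the Mac sustained its ~22 kHz sample rate. */
    }

The point of the exercise: the machine’s audio and video capabilities fall directly out of how the memory cycles are budgeted, which is exactly the kind of decision an off-the-shelf chip would have made for you.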

Note that this is valid for systems that are still maturing (and the portable smart device category is certainly in flux right now); for mature systems the platforms differ less and the technology is more universally mastered, such that it is more efficient for system-level hardware to be outsourced to a few common suppliers; this is the case for desktop computing nowadays. On mobile devices, however, in-house SoC design is probably going to be a competitive advantage for the foreseeable future, just as it was with personal computers in the 80s.


  1. The Apple A4 is actually a package: there are three dies inside it; however, two of these are the RAM, and the third is the A4 SoC proper.

Rallying against Section 3.3.1 of the new iPhone Developer Agreement

I wasn’t planning on starting my blog that way. However, circumstances mandate it.

Basically, the terms added to section 3.3.1 of Apple’s new iPhone Developer Agreement are not just unacceptable, hubristic (they presume to dictate which tools we use for the job), and anti-competitive; they are also completely impossible to enforce, and so utterly ambiguous that no one in his right mind should agree to them.

Read the additions again. They are obviously meant to prohibit “translation layers” such as Flash and other similar technologies. But as written, they could mean anything. For instance, suppose you’re using (or a dependency you’re using uses) a build system that dynamically generates headers depending on configuration and/or platform capabilities. It translates specification files to C code! It’s prohibited! Okay, you may say that it isn’t actually “code” which is generated, just defines and similar configuration. What if you use Lex and Yacc (or substitutes), which take a specification (e.g. a grammar in the case of Yacc) and do generate actual C functions? It’s prohibited! And what if you use various tools and scripts to generate variations of C code as part of your build process, because the C preprocessor is not powerful enough for your needs? To say nothing of having (some of) your code be written in Pascal, Fortran, etc.
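
To make the Yacc case concrete, here is a minimal, purely illustrative specification file of my own; running yacc (or bison) on it, then the C compiler, produces an actual C function, yyparse(), that no human wrote.

    /* sum.y: a toy Yacc grammar. `yacc sum.y && cc y.tab.c` translates
       this specification into y.tab.c, a generated C file defining the
       actual C function yyparse(). */
    %{
    #include <stdio.h>
    #include <ctype.h>
    int yylex(void);
    void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
    %}
    %token NUMBER
    %%
    line : expr '\n'        { printf("= %d\n", $1); }
         ;
    expr : expr '+' NUMBER  { $$ = $1 + $3; }
         | NUMBER           { $$ = $1; }
         ;
    %%
    /* A toy lexer: single digits and literal characters. */
    int yylex(void) {
        int c = getchar();
        if (c == EOF) return 0;
        if (isdigit(c)) { yylval = c - '0'; return NUMBER; }
        return c;
    }
    int main(void) { return yyparse(); }

Feed it “1+2+3” followed by a newline and it prints “= 6”, courtesy of generated C code that, read literally, the new section 3.3.1 does not allow.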

It’s worse if you consider that the language of the agreement could be interpreted to mean that libraries which abstract even partially the “Documented APIs”, even if you use them from C/C++/Objective-C, could be prohibited. It’s in limbo, like usage of an on-device interpreter with “sealed” (i.e. not downloadable, not user-changeable) scripts, which wasn’t clearly allowed or prohibited under the old terms (the new terms now clearly forbid it), so few people did it for fear of testing the legal waters. Furthermore, someone could come up with a cross-platform meta-framework with a C++ API (very possible: Bada, for instance, has C++ APIs), and given the intent behind this change it could be something Apple would want to block as well; I’m loath to use the “Apple may do this in the future, we must block them now” argument, but I’m not actually doing so: blocking it is something they could already attempt under an interpretation of the current language of the agreement (who’s to say the meta-framework isn’t a “compatibility layer”?).

It’s not just a matter of how the terms are written. Apple is basically trying to mandate how our programs are developed and maintained. What if you have special needs and develop a custom language that compiles down to C (which is then compiled the usual way)? That doesn’t seem very kosher under these terms. What tools we use is our business, not Apple’s; what matters is the output. It’s also dangerously close to mandating what kind of infrastructure software (as opposed to user-facing functionality) is allowed to run on their hardware.

And this provision is also completely impossible to enforce. If you have a “translation layer” that works entirely by generating C and Objective-C code which is then processed by the SDK tools [1], how could anyone or anything tell from the output? False negatives and even false positives are going to happen.
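
A contrived example of my own to illustrate the point: one of the two functions below is written the way a developer might write it by hand, the other the way a mechanical code generator might emit it, and an optimizing compiler will typically produce identical machine code for both. Nothing in the submitted binary can tell which one was “originally written” in C.

    /* Version A: as a developer might write it by hand. */
    int clamp_a(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    /* Version B: as a hypothetical translation layer might emit it. */
    int clamp_b(int value, int lo, int hi) {
        int result;
        result = value;
        if (result < lo) { result = lo; }
        if (result > hi) { result = hi; }
        return result;
    }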

You might think you can avoid agreeing to the new terms, keep using the current SDK, and ignore the new APIs and functionality… except one must accept the new agreement to be able to access the provisioning portal past April 22nd; some time after that, the provisioning profiles will expire, and development on a device will be impossible.

So what can we do? I don’t think it makes much sense to boycott development for the platform, or all devices running that platform altogether, because Apple would not realise the loss until it is way too late. So we should let them know these agreement terms are a problem. There is no Radar to dupe, as this is not an issue with a product; instead, you should contact them to let them know why you won’t, and possibly can’t, agree to the terms, and ask them to clarify edge cases until these agreement terms become meaningless. For instance, they forgot (I can’t see any other explanation) to list assembler as one of the mandatory languages, so if you use, or plan to use, assembler as part of your project, contact! The same goes for shader language, so if you use, or plan to use, OpenGL ES 2.0, contact! Oh, and Objective-C++ too! Contact! Games often use a middleware engine, so if you use one, or plan to use one, contact! I’m sure some projects out there use some Pascal, Fortran, Ada or Lisp (why not?); if that’s your case, contact! Using a tool or a fancy build system that generates C code? Contact! Unsure about something? Contact! You can even use your imagination to come up with an edge case they haven’t anticipated. But no spam, please: we’re not throwing a temper tantrum, we’re raising a legitimate concern about the validity of these new terms.

In closing, I will say that it’s not just a matter of Apple making the rules and us playing by them or not at all. Here Apple has reached the point of hubris, and they must be brought back down to Earth. If they are afraid that iPhone development is becoming popular more because of the installed base than because of the Cocoa Touch framework, and that developers are going to sidestep Cocoa Touch (eventually costing Apple its network effect), then the answer is to make and keep Cocoa Touch awesome, which is currently the case; the answer is not to mandate its use.


  1. They may even sidestep the Xcode IDE: Xcode does not do anything magical, it just invokes the compiler, resource compiler, code-signing utility, etc.

Start

Hello, traveler.

You have reached the start of Wandering Coder. Yep, this is the very first post. You can now start reading chronologically from here, if you’re so inclined.