Simple File Cache: improve the performance of FileReader in the browser

When was the last time you obtained a 10x (ten times, i.e. a 900% improvement) performance gain with a single improvement?

Not recently, I bet. Most optimizations work incrementally, eking out 3% here, 2% there, and only achieve an observable effect by iterating many such optimization steps. Even algorithmic improvements, such as replacing an O(n²) algorithm with an O(n·log n) one, typically get you on the order of a 3 or 4 times performance improvement, at least on the data sizes in typical use at the time the improvement is made. So let me tell you how I improved the performance of JPS, my web app for applying IPS patches, tenfold.

Once upon a time…

Soon after the initial public version of JPS, I started working on support for another format that (among other processing) requires the CRC32 of the whole file to be computed, which is best done in blocks of, say, 1024 bytes rather than reading from the file byte by byte. Given my prior experience, I dreaded the performance penalty of having to (re)visit every single byte of the file, but it turned out to perform surprisingly well. Why couldn’t I get the same performance when processing IPS files?
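
For reference, block-based reading with FileReader looks roughly like this (a minimal sketch; updateCRC32 stands in for an actual table-driven CRC32 routine and is not shown):

    // Read a file in 1024-byte blocks with FileReader, folding each block
    // into a running CRC32; updateCRC32 is a placeholder for the actual
    // CRC32 routine.
    function checksumFile(file, callback) {
        var BLOCK_SIZE = 1024;
        var offset = 0;
        var crc = 0xFFFFFFFF;
        var reader = new FileReader();
        reader.onload = function () {
            crc = updateCRC32(crc, new Uint8Array(reader.result));
            offset += BLOCK_SIZE;
            if (offset < file.size) {
                reader.readAsArrayBuffer(file.slice(offset, offset + BLOCK_SIZE));
            } else {
                callback((crc ^ 0xFFFFFFFF) >>> 0);
            }
        };
        reader.readAsArrayBuffer(file.slice(offset, offset + BLOCK_SIZE));
    }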

So as a proof of concept I started developing a layer that would read from the file in blocks of 4096 bytes, then serve read requests from the loaded data whenever possible, entirely in JavaScript. In other words, a cache. Writing a cache is something you always end up learning in any Computer Science curriculum, and you always wonder why, given that it seems so simple and obvious it need not be taught, and simultaneously is something the platform will provide anyway (especially as modern caches tend to be very complex beasts, what with replacement policies, cache invalidation, and so forth). And Mac OS X, on which I develop, aggressively caches filesystem reads at every level already. Writing my own cache for file reads seemed too obvious to be something worth doing.

Photo of the Mont Blanc, lit by the light of sunset

From now on, my longer posts will have random photos from my various trips inserted to serve as breathers. This is the Mont Blanc, lit by the sunset.

As a way to test this anyway, I wrote the dumbest file cache you could possibly imagine: there is only one cache bucket, and it can only be loaded from whole block-aligned ranges in the file, with the result that a number of requests, e.g. those that cross block-aligned boundaries, or those that read from the remainder of the file that can’t form a whole block, have to sidestep the cache and be served from the file separately. Furthermore, JavaScript Blobs are supposed to be immutable, so I did not need to worry about invalidating my cache when the underlying storage changed. Even then, this was not a trivial thing: the asynchronous nature of the browser file reading API meant the cache had to provide an asynchronous API itself and maintain a “todo list” of read operations being processed.
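
To give an idea, the core logic amounted to something like the following sketch (illustrative names, not the actual simple-file-cache API; in particular it glosses over the scheduling of callbacks, which turned out to be the hard part, as described further down):

    var BLOCK_SIZE = 4096;
    var cacheStart = -1;   // file offset of the cached block, -1 if none loaded
    var cacheData = null;  // Uint8Array holding the cached block

    function readFromFile(file, start, length, callback) {
        var reader = new FileReader();
        reader.onload = function () { callback(new Uint8Array(reader.result)); };
        reader.readAsArrayBuffer(file.slice(start, start + length));
    }

    function cachedRead(file, start, length, callback) {
        var blockStart = start - (start % BLOCK_SIZE);
        if (start + length > blockStart + BLOCK_SIZE    // crosses a block boundary
            || blockStart + BLOCK_SIZE > file.size) {   // no whole block to load
            // Sidestep the cache: serve the request from the file directly.
            readFromFile(file, start, length, callback);
        } else if (blockStart === cacheStart) {
            // Cache hit: serve from the block already in memory.
            callback(cacheData.subarray(start - blockStart,
                                        start - blockStart + length));
        } else {
            // Cache miss: load the whole enclosing block, then serve from it.
            readFromFile(file, blockStart, BLOCK_SIZE, function (data) {
                cacheStart = blockStart;
                cacheData = data;
                callback(data.subarray(start - blockStart,
                                       start - blockStart + length));
            });
        }
    }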

And now I turn on the cache and measure the performance improvement… and files that used to take Chrome (my reference browser for JPS development) 50 seconds to process now take 5 seconds! And the 10x factor is consistent, applying across various source files, often turning the processing time into “too short to measure”, and across various platforms: the same files which took around 200 seconds on Chrome for Android now take 20 (and the behavior of desktop Chrome on Windows was the same as on Mac OS X). Similar improvements could be observed with desktop Firefox, with processing times going from 20 seconds to 2 seconds.

Wow.

I reported these findings on the Chromium discussion forums (Chrome being the worst offender), because surely that meant something was wrong with Chrome somewhere. However, not much came out of it, so I decided to productize the cache so as to deploy these performance improvements in production.

From proof of concept to production-worthy code

The proof of concept assumed that, for every read operation except the first, it could just append a new read request from the client to its todo list, and once control bubbled back up to the cache code, the request could be served there if it was in cache. That worked in most cases, at least enough to get performance measurements; but in some cases, a new request would be logged from code that was not called from a callback of our cache, so it would never bubble back up to our code and never be served, and the pump would run dry.

Photo of a young Ibex

A young ibex.

Easy enough, I thought: I will get rid of the todo list and instead always defer processing by calling setTimeout(…, 0).

That worked.

But it was slow. Even slower than without the cache.

Turns out, the overhead of calling setTimeout(…, 0) and getting called back by it was killing this solution. What to do, what to do, what to do? Back to the drawing board, I came up with the solution: reinstate the todo list, and use it, but only if we can tell for sure that we are within code that is being called by cache code (which entails keeping track of that information). If we are not within code that is being called by cache code, only then use setTimeout(…, 0). That managed both to work in all cases and to perform well.
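
In code, the dispatch logic looks roughly like this (again an illustrative sketch: processRequest stands for the code that actually serves a request, from the cache or from the file):

    var todoList = [];
    var inCacheCallback = false;

    // Called whenever a new read request comes in.
    function scheduleRequest(request) {
        if (inCacheCallback) {
            // We are within code called by the cache: the request will be
            // served when control bubbles back up to pumpTodoList().
            todoList.push(request);
        } else {
            // Otherwise we have no choice but to defer; this is the slow path.
            setTimeout(function () { processRequest(request); }, 0);
        }
    }

    // Wraps every client callback invocation made by the cache.
    function invokeClientCallback(callback, data) {
        inCacheCallback = true;
        try {
            callback(data); // may log new requests via scheduleRequest()
        } finally {
            inCacheCallback = false;
        }
        pumpTodoList();
    }

    function pumpTodoList() {
        var request;
        while ((request = todoList.shift()) !== undefined) {
            processRequest(request); // serve from the cache or from the file
        }
    }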

And then I also had to support aborting requests, add a number of unit tests, fix a few bugs… and then it was done.

Photo of the Grandes Jorasses

The Grandes Jorasses.

What have we learned?

  • Don’t diss CS or the CS curriculum. You never know when what you learn there might turn out to be useful.
  • Sometimes the obvious solution is the right one.
  • The source of slowness isn’t reading files per se, but rather the shocking overhead of calling a Web API and getting called back by it (whether it be FileReader or setTimeout(…, 0)), which by my estimates is around 2 ms for each such operation with Chrome on a modern desktop machine. This is crazy. Other browsers (with the exception of Internet Explorer/Edge, which I have not been able to test) fare better, but still have enough overhead that you have to wonder what is going on in there. (You can get an estimate for your own browser with the sketch below.)
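
If you want that estimate, a micro-benchmark along these lines gives an idea of the per-operation cost (a quick sketch; numbers will of course vary by browser and machine):

    // Estimate the round-trip cost of one FileReader call and its callback
    // by timing many minimal 1-byte reads from an in-memory Blob.
    var ITERATIONS = 500;
    var blob = new Blob(["x"]);
    var count = 0;
    var startTime = performance.now();
    (function readOnce() {
        var reader = new FileReader();
        reader.onload = function () {
            if (++count < ITERATIONS) {
                readOnce();
            } else {
                var elapsed = performance.now() - startTime;
                console.log((elapsed / ITERATIONS).toFixed(2) + " ms per round trip");
            }
        };
        reader.readAsArrayBuffer(blob);
    })();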

Get the code

I set up a specific project for the cache code: you can get the code on BitBucket, and I also published it on NPM as simple-file-cache. It is free to use and modify (under the terms of the BSD license); if you find it useful, however, I ask that you consider donating to the ACLU and the UNHCR.


P.S.: While I’ve got your attention, I’m happy to report that JPS will soon support Safari, as this browser is finally about to get support for the download attribute and for downloading blobs, as part of Safari 10.1, which is meant to arrive with Mac OS X 10.12.4. Running on a stock install of Mac OS X will be a huge milestone for JPS, and for the viability of web apps in general as a way to circumvent Developer ID and Gatekeeper.

In-app purchases are in need of reform

The common wisdom about Apple, especially when it comes to explaining the unusual and apparently limiting ways they introduce features, is that to better serve the user they introduce features that solve the user’s need in a specific way for each task, instead of providing a generic, unrestricted feature that may not provide an optimal user experience.

At least, that’s how I have seen it expressed, e.g. in one Jesper post:

I am more out of my depth here, but just applying the output to what we know of the process, I think the iOS group sees files as something you are under pressure to manage. In particular, it sees files for everything as a generic solution, and by applying Apple philosophy, it thinks that most of the problems that can be solved using files and applications are instead better solved in a task-specific way for each task.

(a post which you may remember from our exchanges on the lack of a document filing system on iOS)

This applies very well to iOS multitasking as well: instead of just allowing apps to run unconditionally in the background, Apple provided ways to fulfill (practically) each user need in a specific way, and granted background execution privileges commensurate with the need: frozen with no background execution in the general case, background execution for a limited time in the “complete a task” case (e.g. to complete an upload), background execution only as long as audio is played for the “play audio while doing something else” case, etc. Apple has since expanded this list with new specific privileges, which shows a willingness to revisit initial restrictions.

So I have to wonder why Apple is not applying this principle to in-app purchases. Currently, it is a generic feature that does not provide an optimal user experience for a variety of user needs:

  • digital content purchases (ebooks, comics, etc.)
  • apps that are downloaded for free with limited features for trial purposes, with a one-time fee to buy the app and get the full functionality (known to old-timers like me as the shareware model)
  • games with a base scenario supplemented by substantial expansions (think StarCraft/StarCraft Brood War)
  • games with more discrete, non-recurring downloadable content (extra weapons, extra maps, etc.)
  • apps with extra functionality obtainable through in-app purchase
  • coin-operated games, or games with consumables (ammunition, smurfberries, boosters, gems, etc…)

Yes, the purchase experience per se is optimized for each user need, by virtue of each app managing that experience entirely; where this breaks down is in the other places where in-app purchases have an impact, such as the top grossing list. In particular, information in the iOS App Store about the presence of in-app purchases, and how many/how expensive they are, is a completely generic solution to many specific problems, and one which is not very transparent, to say the least.

This results in warped incentives for app developers, which you probably know about already since Apple has gotten into hot water in the press over them, especially the matter of children buying smurfberries amounting to hundreds of dollars or more (which they were able to do while under the timer, initiated by the initial purchase, during which the Apple ID password is not prompted for). Apple has fixed the most egregious issues, for instance by having separate timers for the initial iOS App Store download and for in-app purchases, but the fundamental incentive of appearing as an ordinary game, then tempting the user with “boosters” to get him out of a bind, or even possibly getting him addicted to these boosters, remains1.

Apple has more recently improved the situation, by changing the language when obtaining free apps (which now reads “Get” rather than “Free”), including those with in-app purchases, and by featuring games that you Pay Once and Play, i.e. games without in-app purchases. While this is a step in the right direction, it is far from sufficient, as it excludes games like Monument Valley that feature a single, consistent expansion, and everyone (Apple included, since they featured Monument Valley in the WWDC intro video) wants to encourage apps like Monument Valley.

What can be done?

So what can be done? I think the most important thing is not to prohibit anything outright, because there may always be a legitimate use for a particular in-app purchase pattern. For instance, long ago, way before there even was an iPhone, I remember reading an article bemoaning that arcade games (back, you know, when arcade games mattered) were ported to consoles without any adaptation; that is, where the arcade version would prompt for a quarter after a game over, the port would simply allow unlimited continues, which sometimes made it absurdly easier. And the article imagined potential solutions, one of which was a system by which the player on his home console would actually pay 25¢ whenever he continued that way, money that would somehow be wired to the game publisher, which sounded completely outlandish at the time. Not so outlandish now, eh?

But whatever is allowed, what matters is that the user is properly informed when he installs the app.

So the solution I propose is to keep in-app purchases as the common infrastructure behind the scenes, but for the iOS App Store to present each app in a specific way for each use case:

  • First, of course, apps (free or paid) without any in-app purchase, featured as they are currently.

  • Then, apps that you can try before buying. Those would be listed among paid apps, with a price tag that is the unlock price, but with a mention that you can try them out for free; and those would have two buttons rather than “Get”: something like “Try for free” and “Buy outright”, so that you could save yourself the trouble of going through the in-app purchase process if you know the app already and know you need it.

  • Then we would have apps, typically games, with a discrete and limited number of “tiers”. They would be listed among paid apps with a price tag which is that of the first tier; and on the page for the app, the tiers would be shown in a clear way (instead of the current meaningless ranking of in-app purchases), e.g. as a series of “expansion” elements which visually combine, each with its name and price, as in:

    /-------------------\--------------\
    | StarCraft          \ Brood War    \
    | $20                / +$10         /
    \-------------------/--------------/
    

    or even:

    /-------------------\------------------\-------------------------\------------\----------------------\
    | World of Warcraft  \ Burning Crusade  \ Wrath of the Lich King  \ Cataclysm  \ Hey why not Narnia?  \
    | $30                / +$15             / +$15                    / +$15       / +$15                 /
    \-------------------/------------------/-------------------------/------------/----------------------/
    

    (Maybe with a shape that less suggests an arrow, but you get the drift)

  • Then apps that have unlockable features in a more complicated structure, but with no “ammunition” in-app purchases (what Apple refers to as “Consumable” in-app purchases). Those would just have their initial price, then a ranking of these in-app purchases in a way close to what is done currently, but a “maximum cost”, which is the price of obtaining all of them, would also be shown as an indication.

  • Then apps with content in-app purchases, such as Comixology before it removed them. For those, there would be no such “maximum cost”, because no one is going to buy the whole catalog.

  • And lastly, apps that do have “ammunition” in-app purchases. These would be listed with a special price tag mentioning no specific cost, and the page for the app would have the button say, not “Free”, not “Get”, but “Install coin-operated machine” or some such wording that makes it clear you would be inviting onto your device a box that belongs to the app developer and has a slot that takes money and sends it directly there, because that is what these apps are. Such a decision wouldn’t be popular with many app developers, but Apple has shown itself willing to take decisions that don’t sit well with developers when they sincerely think they are acting for the benefit of the consumer; the continued absence of paid upgrades is one example.

  • And we would also have apps using recurring subscriptions, about which I don’t have much of an opinion so far.

Building on these distinctions, more changes would be possible; for instance, there could be separate top grossing lists, one for each category, which would keep legitimate hits in the first categories from being drowned out by the eternally grossing coin-operated machines of the App Store.

There you have it. At any rate, even if there could be completely different ways to go about it, this is certainly an area of the iOS App Store that could use some improvement (Mr. Schiller, if you’re listening…), having barely changed for so long without any application of Apple’s apparent philosophy of “let’s replace this confusing, generic solution by a number of specific solutions designed for each task”.


  1. In fact, given the similarities with gambling, I can’t exclude the possibility that these booster-laden games will eventually be regulated as such.

Re: Nintendo and iOS games

While there would be plenty to say about the technical (say, heterogeneous multiprocessing in the iPhone 7) or tech-related (say, Apple’s transition away from the 3.5 mm audio jack) announcements from the latest Apple event, I want to focus today on one that has been (relatively) less talked about. Which is Nintendo’s commitment to smartphone games, materialized (and how!) by Shigeru Miyamoto’s appearance at the event to present a Super Mario game for the iPhone.

Miyamoto-san’s appearance is not that big a deal per se; or at least that is what my head says (he has appeared as a guest in non-Nintendo productions before), because, look, for someone like me who grew up on Apple hardware and the NES, seeing Miyamoto-san and Apple together in some official fashion is like some sort of childhood dream come true. Nintendo being willing to show what is arguably their most iconic character starring in a game on Apple hardware is, however, a big deal whether you look at it from the viewpoint of your adult self or your 10-year-old self.

One thing I was particularly interested in was the angle, namely, how they would justify it being on a handset by taking advantage of something they couldn’t do on their own hardware (a discussion you may remember from my old post, from when everyone in the Apple community seemed to have an opinion on what Nintendo should be doing); it turns out, it’s one-handed operation and quick start, quick stop interactions. I never saw it coming, but it makes every bit of sense: I triple dog dare you to play anything on any Nintendo handheld one-handed (well, maybe WarioWare Twisted, which you may remember from my earlier post), and while the DS (and later devices) goes to sleep when closed and can be resumed quickly, most of the time this is not really conducive to such gameplay; which is fine: this is one of the reasons I still play on my DS during my commute, where I have an uninterrupted 30-minute stretch in which I’d rather play something “meaty”. However, I indeed never take it out while waiting for the bus.

And on that matter, while the aim is not to make a score sheet of what I got right or wrong back then, I have to admit that I was wrong that Apple would not bend the rules for Nintendo: Apple introduced an interesting feature on the iOS App Store specifically for Super Mario Run. Indeed, even though the game is not out yet, there is already a page for it on the iOS App Store, where instead of the “Get” button you have a “Notify” button. I have no doubt this is going to be extended to other developers in the future, but for now it’s exclusive to Super Mario Run.

I’ll also note that this kind of smaller-scale project fits well with Miyamoto-san’s role at Nintendo, where a few years ago he changed position to focus on more experimental projects rather than heading the blockbuster game releases.

While Nintendo’s commitment to smartphone games was previously questioned, even with the DeNA partnership and then Pokémon Go (which many considered not to be “real” Nintendo games, an assessment I do not share, but what do I know?), now with Shigeru Miyamoto’s appearance and Super Mario Run there is no doubt about Nintendo’s commitment; it will be hard for them to turn back on that. And who knows, maybe at some point Nintendo will make that WarioWare for iOS based on Apple nostalgia games I expected back then.

Do not retroactively change the pistol emoji

Apple: don’t retroactively change the pistol emoji. Just don’t. The costs far outweigh the benefits, and even if you’re successful, it will come back in another way, so we will be back to square one anyway (except we will not have recovered the costs).

When I first heard of the change, I was already skeptical, and after pondering it some more, I have reason to think the benefits are not worth the costs.

To begin with, by doing it this way Apple makes the change retroactive. Any piece of text (email, text message, blog post, article, photo caption, or of course tweet) with a pistol emoji has now had its meaning retroactively changed when viewed on the latest iOS 10 beta. This change does not just affect newly received messages: any text in which the pistol emoji was used in the last few years is affected by this change.

Besides personal usage, this will represent an issue for researchers studying past texts. Tweets get archived, you know (even if these efforts still can’t be accessed). Will researchers who study these archives have to use special software to render the pistol emoji as a revolver in texts from before iOS 10, as a water pistol in texts from 2017 on, and as something else in texts from the intermediate period, to signal the ambiguity?

Even accounting just for immediate message interchange, the drawbacks of the semantic change during the transition period may kill the idea. Jeremy Burge mentioned one, but problems also exist the other way round: people sending the pistol emoji and the recipient interpreting it as merely a water pistol, thus not taking them seriously.

It has been noticed that (at least up until recently) Microsoft has been using a futuristic/toy gun glyph to represent the pistol emoji without causing the same kind of reactions. This is worth noting as an interesting piece of context; however, I don’t feel it constitutes a precedent, as the Microsoft glyph still represents a lethal weapon, if a fantastic one, so I see it more as a stylistic variant (of which there are many of this caliber between emoji typefaces, be it for this glyph or others) of the same semantic base. And Microsoft has limited impact in this domain anyway.

Besides, this sets a dangerous precedent, because if Apple can unilaterally force everyone to change the meaning of one Unicode character in this way, what’s to stop them from doing it again? Even with the best intentions in the world, circumventing the Unicode consortium in this way is probably too much power to give to one particular vendor, be it Apple or any other.

But then (in case that was not reason enough) another reason came to mind, and I started performing research, which very quickly bore fruit.

If you’ve ever read comics in the French-Belgian tradition (and even a few others), you are undoubtedly familiar with the graphical symbols used to represent swearing. And I have no doubt that they will all someday be representable as part of text; most of them are in the emoji repertoire already, and it’s only a matter of when, not if, the few remaining ones will be standardized. And guess what I quickly found in the handful of such comics I have on hand?

Excerpt of a comic page, with a character using symbol swearing in one panel

(from a Les Tuniques Bleues book, “Mariage à Fort Bow”, page 24)

This is not rare; of course you’re not going to find any in, say, Astérix, but in anything thematically appropriate it’s going to be found. And you can’t retroactively change that. You just can’t.

Having the pistol emoji as such in Unicode for the purposes of symbol swearing will also be useful for “typing” it so that it can be rendered as such by a lettering typeface. For instance, Blambot (if you’ve been reading a webcomic in the last few years, and it’s not hand-lettered, then it’s most likely using a Blambot typeface) has a typeface for symbol swearing, called Potty Mouth BB, and yes, it does contain a pistol as part of its repertoire. Currently the font “cheats” and uses ordinary letters to allow you to type these symbols, much like the Symbol font of old, but at some point a Unicode update to support them will inevitably happen. And if by that point the original PISTOL emoji’s meaning has been successfully watered down, the Unicode consortium will have no choice but to add a new REAL PISTOL emoji or some such to support the actual pistol in these typefaces. And the gun that you thought you had chased away will have come back through the window.

So removing the gun emoji from the iOS keyboard would be fine. But don’t change it. Unless you want to go against every gun representation in the world, in which case good luck to you.

Looking back on WWDC 2016

Now that the most important Apple release of WWDC has been dealt with, we can cover everything else. I haven’t followed as closely as in previous years (hence no keynote reactions on Twitter), but here is what stands out to me.

The Apple App Stores policy announcements

As seen at Daring Fireball for instance, Apple briefed the press on many current and coming improvements to the Apple App Stores (iOS, Mac, tvOS, watchOS). This actually happened ahead of WWDC, but is part of the package. There are a lot of good things, such as the first acceptance that Apple isn’t entitled, over the whole lifetime of an app, to 30% of any purchase where the buying intent originated from the app: the 85/15 split (instead of 70/30) for subscriptions after the first year. However, none of this solves the lack of free trials: if only subscription apps can have free trials, then thanks, but no thanks. I want to both try before I buy and avoid renting my software, and I don’t think subscriptions make sense for every app anyway, so improvements and clarifications (e.g. an indication of whether the app is “pay once and play”, “shareware”, or “coin-op machine”) for apps using non-recurring payment options would be welcome (more on that in a later post). Also, while those improvements apply to the Mac App Store as well, that store will need more specific improvements to regain credibility. I don’t have much of an opinion on the new search ad system.

The new Apple File System (APFS for short)

Apple announced a new filesystem, and to say that it has, over the years, accumulated a lot of pent-up expectations to fulfill would be the understatement of the year. I can’t speak for everyone, but each year N after the loss of ZFS my reaction was: “Well, they did not announce anything this year; it’s likely because they only started in year N-1 and can’t announce it yet, because they can’t develop such a piece of software in a yearly release cycle, so there is no use complaining about it, as it could already be started, and will show up in year N+1.” Repeat every year. So while I can scarcely believe the news that development of APFS only started in 2014, at the same time I’m not really surprised by it.

I haven’t been able to try it out, unfortunately, but from published information these are the highlights. The comparisons are to ZFS, because ZFS is the reference the Mac community studied extensively back when Apple was working on a ZFS port in the open.

What we’ll get from APFS that we hoped to have with ZFS:

  • A modern, copy-on-write filesystem. By itself, this doesn’t do much, but this is the indispensable basis for everything else:
  • Snapshots, or if you prefer, read-only clones of the filesystem as a whole. Probably the most important feature; it alone would justify the investment in a new filesystem to replace HFS+.

    While the obvious use case is backups, particularly with Time Machine, it is not necessarily in the way you think. Currently, when Time Machine backs up a volume, it has to contend with it being in use, and potentially being modified, while it is being backed up; if it were required to freeze a volume while backing it up, you wouldn’t be able to use it during that time and, as a result, you would back up much less often, and that would defeat most of the purpose of Time Machine. So Time Machine has no choice but to read a volume while it is being modified, and as a result may not capture a consistent view of the filesystem! Indeed, if two files are modified at the same time, but one was read by Time Machine before the modification and the other after, the saved filesystem on the backup will have one file without the modification and the other with it, which was not the state of the filesystem you intended to back up at any point in time. In fact, this may mean the data is lost if you have to reload from that backup, in case neither half can work with the other as a result.

    Instead, with APFS the backup application will be able to create a snapshot, which is a constant-time operation (i.e. one that does not depend on how much data the volume contains) and results in no additional space being taken, at least initially, then copy from that snapshot, while the filesystem is in use and being modified, and be confident that it is capturing a consistent view of the filesystem, regardless of where the data is being saved (it could be to an HFS+ drive!). Once the copy is over, the snapshot can be discarded to make sure no additional space is used beyond that needed by the live data. Of course, by using multiple snapshots, this will also allow the backup application to more efficiently determine what changed since last time; and with APFS on the backup drive as well, the backup application will be able to save space on the backup drive, in particular by not taking up space for redundancies the source APFS drive already knows about. But snapshots on the APFS source drive will mean that, after 10 years, Time Machine will finally be safe: this is a correctness improvement, not merely a performance one (faster backups and/or backups taking less space).

  • Real protection in the face of crashes and power loss events. HFS+ had some of that with its journal, but it only protected metadata and came with a number of costs. APFS will make sure its writes and other filesystem updates are “crash-safe”.
  • I/O prioritization. A filesystem does not exist merely as a layout of the data on disk, but also as a kernel module that has in-memory state (mostly cache) that processes filesystem requests, and the two are generally tied. I/O prioritization, some level of it at least, will allow some more urgent requests (to load data for an interactive action for instance) to “jump the queue” ahead of background actions (e.g. reads by a backup utility), all the while keeping the filesystem view consistent (e.g. a read after a write to the same file has to see the file as modified, so it can’t just naively jump over the corresponding write).
  • Multithreading. In the same vein of improvements to the tied filesystem kernel module, this will allow the filesystem to better serve different processes or threads that read and write independent parts of the filesystem, especially if multiple cores are involved. HFS+, having been designed in the era of single-processor, single-threaded machines, requires centralized, bottleneck locks and is inefficient for multithreaded use cases.
  • File and directory hierarchy clones. Contrary to snapshots, clones are writable and are copied to another place in the directory hierarchy (while snapshots are filesystem-wide and exist in a namespace above the filesystem root). The direct usefulness is less clear, but they could be massively useful as infrastructure used by specialized apps, notably version control (both for work areas and repositories).
  • Logical volume management. Apple calls this “space sharing”, but it really amounts to the possibility of making “super folders” by making them their own filesystem in the same partition, allowing such a super folder to have different backup behavior, for instance.
  • Sparse files. Might as well have that, too.

What APFS will provide beyond ZFS, btrfs, etc. features:

  • Encryption as a first class feature. Full disk and per-file encryption will be integrated in the filesystem and provided by a common encryption codebase, not as layers above or below the filesystem and with two separate implementations. This also means files that are encrypted per-file will be able to be cloned, snapshotted, etc. without distinction from their unencrypted brethren.
  • Scalability down to the watch. ZFS never scaled down very well, in particular when it comes to small RAM amounts.

What we hoped to have with ZFS, but won’t get from APFS:

  • Crazy ZFS-like scalability. For instance, APFS has 64-bit nodes, not 128-bit. This is probably not unreasonable on Apple’s part.
  • RAID integration as part of the filesystem. APFS can work atop a software or hardware RAID in traditional RAID configurations (RAID-0, RAID-1, RAID-10, RAID-5, etc.), but always as a separate layer. APFS does not provide anything like RAID-Z or any other solution to the RAID-5 write hole. That is worth a mention, though I have no idea whether this is a need Apple should fulfill.
  • Deduplication. This is more generally useful to save space than clones or sparse files, but is also probably only really useful for enterprise storage arrays.

What is unclear at this point, either from the current state or because Apple may or may not add it by the time it ships:

  • Whether APFS will checksum data, and thus guarantee end-to-end data integrity. Currently it seems it doesn’t, but it checksums metadata, and has extensible data structures such that the code could trivially be extended to checksum all data while remaining backwards compatible. I don’t know why Apple does not have that turned on, but I beg them to do so, given the ever-increasing amounts of data we store on disks and SSDs and their decreasing reliability (e.g. I have heard of TLC flash being used in Apple devices); we need to know when data goes bad rather than blindly using it, which is the first step towards trying to improve storage reliability.
  • Whether APFS is completely transaction-based and always consistent on-disk. Copy-on-write filesystems generally are, but being copy-on-write is not sufficient by itself, and the existence of a fsck_apfs suggests that APFS isn’t always consistent on-disk, because otherwise it would not need a FileSystem Consistency checK. Apple claims writes and other filesystem updates will be “crash-safe”, but the guarantees may be lower than a fully transactional FS.
  • Whether APFS containers will be able to be extended after the fact with an additional partition (typically from another disk), possibly even while the volumes in them are mounted. APFS support for JBOD, and the fact that APFS lazily initializes its data structures (saving initialization time when formatting large disks), suggest it, and it would be undeniably useful, but it is still unknown at this time.
  • Whether APFS will be composition-preserving when it comes to file names. It will certainly be insensitive to composition differences in file names, like HFS+; however, HFS+ goes one step further and normalizes the composition of file names, which ends up making the returned file name byte string different from what was provided at file creation, which itself subtly trips up some software like version control (via Eric Sink), and which is probably the specific behavior that led Linux creator Linus Torvalds to proclaim that HFS+ was “complete and utter crap”; see also this (the latter via the Accidental Tech Podcast guys, who had the same Unicode thoughts as I did). Won’t you make Linus happy now by at least preserving composition, Apple? This is your opportunity!
  • Whether APFS uses B+trees. I know, this is an implementation detail, but it’d be neat if Apple could claim to have continuously been using B-/+trees of either kind for their storage for the last 30 years and counting.

For a more in-depth look at what we know so far about APFS, the best source by all accounts is Adam Leventhal’s series of posts.

Apple Filing Protocol deprecation

Along with APFS, Apple announced that APFS volumes would not be able to be served over AFP, only SMB (Windows file sharing), and that AFP was thus deprecated. This raises the question of whether SMB is at parity with AFP: last I checked (but it was some time ago), AFP was still superior when it came to:

  • metadata and
  • searching

But I have no doubt that, whatever feature gap is left between SMB and AFP (if there is even one left), Apple will make sure it is closed before APFS ships, just like Apple made sure Bonjour had feature parity with AppleTalk before stopping support for AppleTalk.

Playgrounds on iOS

I’m of two minds about this one. I’ve always found Swift playgrounds to be a great idea. To give you an idea why, back in the day when the only computer in the house was an Apple ][e, I did not yet know how to code, but I knew enough syntax that my father had set up a program that would, in a loop, plot the result of an expression over a two-axis system, and I would only have to change the line containing the expression, with the input variable conveniently being x, and the output, y; e.g. to plot the result of squaring x, I would only have to enter1:

60 y = x*x

run the program, and away I went. It was an interesting lesson when, due to my limited understanding of expressions, specifically that they are not equations, I once wrote:

60 2y = x+4

Which resulted in the same thing as I had previously plotted, because this command actually modified line 602 (beyond the end of the loop)… good times.

Anyway, Swift playgrounds, which automatically plot the outcome of expressions run multiple times in a loop for instance, and even more so on iPad where you have the draggable loop templates and other control structure templates, provide the necessary infrastructure out of the box, and learners will be able to experiment and visualize what they are doing on their own.

These playgrounds will be able to be shared, but when I hear some people compare this to the possibilities of Hypercard stacks, I don’t buy it. There is nothing for a user to do with these playgrounds; the graphic aspect is only a visualization (and why does it need to be so elaborate? This is basically Logo; you don’t need to make it look like a Monument Valley that isn’t even minimalistic); even if the user can enter simple commands, it always has to start back from the beginning when you change the code (which is not a bad thing, mind you, but shows that even the command area isn’t an interactive interface). You can’t interact with these creations. Sharing these is like sharing elaborate Rube Goldberg constructions created in The Incredible Machine: it’s fun, and it’s not entirely closed since the recipient can try to improve on it, but other than watching it play there is nothing for the recipient to do without understanding the working of the machine first.

Contrast that with Hypercard, in which not only did you set up an actual interface, but what you coded were handlers for actions coming from the interface, not a non-interactive automaton. This also means that it was much less of a jump to go from there to an actual app, especially one using Cocoa: such an app is fundamentally just a bunch of handlers attached to a user interface. It’s a much bigger jump when all you’re familiar with is playgrounds or even command-line programs, because it’s far from obvious how to go from there to something interactive. Seriously, I’m completely done with teaching programming by starting with command-line apps. It needs to die. What I’d like to see Apple try on the iPad is something inspired by the old Currency Converter tutorial (unfortunately gone now), where you’d create a simple but functional app that anyone could interact with.

Stricter Gatekeeper

…speaking of sharing your programming creations. I’m hardly surprised. This shows web apps are definitely the future of tinkerer apps.


  1. In Apple II BASIC, you’d enter a line number followed by a statement, and that would replace the line with that number in the saved program by the one you just entered. Code editors have improved a bit since then.

RIP, QuickTime for Windows

As you may have heard, Apple will no longer provide fixes for QuickTime for Windows, not even for two disclosed security vulnerabilities (this post is a sort of PSA as well: if for some reason you have QuickTime for Windows, uninstall it now). I wonder why anyone refers to QuickTime for Windows as being deprecated, as deprecated technologies don’t receive updates or fixes except for critical issues: the correct term for the no-fixes-at-all situation is unsupported; for all intents and purposes, QuickTime for Windows is dead. And while this has been coming for some time, that doesn’t make the news any less sad; so today, let us remember QuickTime for Windows.

While I think it existed earlier in some form, the real beginning for QuickTime for Windows was with QuickTime 3.0, which had feature parity with the MacOS version — imagine that! I know little about how it fared at that time, since my usage of Windows machines was limited; I only know that a number of game developers adopted it, eager for an acceptable media playback solution (e.g. for cutscenes): a number of games had you install QuickTime for Windows (bundled on the game CD) in order to run. Also, QuickTime for Windows came with an implementation of a subset of the Mac toolbox (though with some differences, e.g. file name length), which helped with the port of some Mac games to Windows.

Some of you might not really have known that time, so you will have to take my word for the fact that, before YouTube in 2005-2006, there was no universal standard for distributing video online; but QuickTime with its browser plugin was the closest thing we had. So people were posting videos in QuickTime format (e.g. this Apple switch ad campaign parody); this did not support Linux or Unix, and Windows users were a bit reluctant to install QuickTime, but it was miles better than any alternative such as Windows Media which, when it was supported at all on the Mac, was always incredibly crappy.

QuickTime also served, back then, as the basis for media playback in iTunes for Windows, which itself was the indispensable tool that allowed anyone (not just Mac owners) to own an iPod, then later an iPhone. For those purposes and many others, QuickTime for Windows carried the burden of making sure many Apple initiatives were at least viewable outside of Macs, playing no small part in keeping Apple relevant all these years. QuickTime for Windows was the symbol of Apple’s leadership in multimedia, and everything it enabled legitimized the Mac and Apple, even for die-hard Windows users, in a way that is impossible to overstate.

For instance, back when I worked at NXP Software, QuickTime Player was the standard test for determining whether a movie file was correctly formatted (among other reasons because we were working with 3GPP media files, whose format, like that of MPEG4 media files, was derived from the QuickTime movie format): if a file generated by our media recorder had an issue with QuickTime Player, which was necessarily on Windows (we did not use Macs, at least not before we developed iPhone apps), then there was a bug in our media recorder. This made for a fun investigation when I tried to understand a bug that turned out to actually be in QuickTime!

As far as users go, the average user now has a number of alternatives, starting with VLC, but there are a number of people working on Windows in media and media-related industries who will miss having a reference media player on their machine (iTunes is just not the same thing). However, software developers who were still building against the QuickTime SDK and relying on QuickTime being installed on Windows should have seen it coming for some time: the writing has been on the wall for QuickTime for Windows since QuickTime X in 2009, when there was no corresponding update on the Windows side, which stayed on QuickTime 7. I have not used Windows machines for media work for some time, and I missed the event when iTunes for Windows became independent of QuickTime, so this nevertheless caught me personally a bit by surprise.

So long, QuickTime for Windows. We’ll miss you.

The Stela comics app

Stela is a new comics app for smartphones (iOS-only at the time of this writing), but it works nothing like, say, Comic Chameleon (which presents existing webcomics with phone-adapted navigation) or Comixology (which presents comics you’d find in stores as digital products, with phone-adapted navigation when not running on a tablet). Rather, once you use it, it becomes clear Stela’s purpose is to publish comics that embrace the 5-centimeter (that’s about 2 inches, for the metrically-challenged) width of today’s smartphone screens1.

These are comics that are native to that world: the panels are only as wide as the screen (nary a vertical gutter in sight) and can only extend vertically, but they can do so as much as desired because they are read by vertical scrolling. A panel may not necessarily fit on a screen (at least on an iPhone 5/5S/SE; I haven’t checked on the larger models)! An iPhone 5 screenful is a common size, but most of these comics have widely varying panel sizes, and at any rate have conversations, for instance, that extend over multiple screenfuls: they don’t follow a pattern of identically-sized pages. The result is a very fluid flow and a reading experience that is meant to be fast.

The essence of most iPhone apps since the beginning, as best seen for instance with Twitter clients, is of a (potentially long) scrolling list of items (our friend the UITableView), with more or less drilldown or navigation between these lists. Stela is the comics embodiment of that2, and it’s very addictive.

The comics are updated chapter by chapter (which makes for checkpoints as well); the economic model is that the first chapter of each story is free, and you can get a subscription (using Apple’s in-app subscription system) to read past that. It is a single subscription covering the whole app, not per-series subscriptions, so it works a bit like an anthology series. Comics are always loaded from the network, which bothers me a little: there is no way to preload while on WiFi to avoid eating into your phone data allotment, and no way to read at all if you are off the network. iPod Touches exist, you know.

The comics themselves are of good quality, and I enjoyed the series I read, though many are still developing their story (eagerly waiting for the next chapter of Crystal Fighters for instance) and it’s a bit early to tell how they will turn out.

Either way, whether you’re from my usual audience of iOS app developers, and/or involved in comics, or neither, check it out, you’re bound to find some interesting lessons in this experiment in comics and app design.

~ Reactions ~

Over at Fleen, Gary Tyrrell cautions that, since it’s subscription-based, your access to the content will only last as long as you keep paying for it (I specifically allowed him to quote from this post as much as he wanted). It’s absolutely worth noting; maybe I’ve just become blasé to such things.


  1. The app works natively on iPad, but the comics are just scaled up, which makes for funnily huge lettering.
  2. For instance, images are loaded dynamically and present a spinner if you scroll too fast before they have had time to load, as is traditional in iPhone apps: prioritize the flow, even if that means betraying some implementation realities.

Application Cache was fired for his douchebaggery

To all of you who enquired about the whereabouts of Application Cache, I regret that I have to inform you that he is no longer with our company. This was not an easy decision to take, but we believe it was the right one.

While it has been no secret for some time that Application Cache was a douchebag, this was not necessarily apparent at first. Application Cache promised so much, and we believed him because he could prove his claims to a large extent. However, his way of working was so much at odds with the way other web components work (especially long-time pillar of web infrastructure HTTP cache) that his core value proposition was harder to exploit than it should have been (with many unfortunate pitfalls, as Jake Archibald documented); and worse, his more advanced promises, while working in basic scenarios, had some ancillary troubles which unexpectedly turned out to be intractable no matter how hard we tried, and so these promises never came to fruition.

Because he was useful despite the issues, we tried to work with him on these, with many counseling sessions with HR; however, Application Cache was adamant that this was his fundamental mode of operation and he could not work any other way, and that others would have to adapt to him. This, of course, was not remotely acceptable, but we could not find any way to make him change either, so little progress was made. There was some, as we did manage to make him more transparent; some claimed that made him no longer a douchebag, but in truth he remained one.

Still, we believed that it could be worth keeping him just for his core value proposition of allowing web apps to be used while offline. But as time went on, it became clear that even that was not going to be worth the bother, again as a consequence of his fundamentally different way of working. Things came to a head when we tried to solve race conditions resulting from the possibility that a user loads the initial HTML page before the web app is updated, and its dependencies (including the manifest) after the web app is updated: the manifest has to be updated at the same URL (it acts as a fixed entry point of sorts for users who already have the web app in Application Cache), so we could not rely on the HTML pointing to a new manifest URL so that the update of the entry point would atomically result in the update of the web app. Even with the provision that the manifest be redownloaded after the entry point, and checked against the manifest downloaded before in the case of an app already in Application Cache (so as to try to have the manifest always loaded after the entry point, at least conceptually), we were stuck.

Some solutions were found, though they were limited to ideal situations; there was no solution available for the case of a serving infrastructure, such as content distribution networks, with only “eventually consistent” or other weak guarantees, and there was no solution either if even minimal use of FALLBACK: was required. Moreover, even in ideal situations those solutions place a lot of burden on the web developer, too much considering that offline web apps ought to work correctly in the face of these race conditions by default, or at least with minimal care. In the end, Application Cache was let go a few months ago.

If you were relying on the services provided by Application Cache, don’t worry. While there will be no future evolution (in particular, don’t expect bugs to get fixed), a new guy was hired to perform the tasks of Application Cache exactly as the latter did them. This new guy, Service Worker, will also provide a new service allowing web apps to work offline, this time in harmony with the other web components: for instance, out of the box he makes it possible to throttle checks for updated versions simply by setting a cache control header on the service worker script (the period being capped at one day); something which was exceedingly hard, if not impossible, with Application Cache due to his bad interactions with HTTP cache. He was already available in Chrome, and with the recently released Firefox 44, two independent, non-experimental implementations have now shipped, so you should take the time to make his acquaintance.
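
Making his acquaintance is quick; registration looks roughly like this (a minimal sketch; sw.js, an illustrative name, would contain your install and fetch handlers):

    // Register a service worker for offline support. Checks for an updated
    // sw.js honor its Cache-Control header, capped at 24 hours, which is
    // how update checks can be throttled.
    if ("serviceWorker" in navigator) {
        navigator.serviceWorker.register("/sw.js").then(function (registration) {
            console.log("Service worker registered with scope", registration.scope);
        }, function (error) {
            console.log("Service worker registration failed:", error);
        });
    }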

New software release: JPS

JPS (which stands for JavaScript Patching System) is a web app that applies binary patches, currently IPS patches. Usage is simple enough: you provide the reference file and the patch file, and once patching is done you recover the patched file just as you would download a file from a server, except everything happens on your local machine. Moreover, JPS works while offline, thanks to Curtain, which was in fact developed for the needs of JPS.

JPS works on any reasonably recent version of Firefox or Chrome (both of which update automatically anyway), as well as any version of Opera starting with Opera 15. Unfortunately, some of the features used (download of locally-generated files in particular) are not universally supported yet, which means that, regardless of my efforts, Safari (rdar://problem/23550189, OpenRadar) and Internet Explorer are not supported; as a Safari user myself, this bothers me, but I could not find any way around this issue, you will have to wait for a version of Safari that supports the download attribute.
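
For the curious, here is roughly what the download of a locally-generated file looks like (a simplified sketch; the function name is mine, not JPS’s actual code):

    // Offer a locally-generated Blob as a download, through a temporary
    // object URL and a programmatic click on an <a download> element (the
    // very feature Safari and Internet Explorer lacked at the time).
    function offerDownload(blob, filename) {
        var url = URL.createObjectURL(blob);
        var anchor = document.createElement("a");
        anchor.href = url;
        anchor.download = filename;
        document.body.appendChild(anchor);
        anchor.click();
        document.body.removeChild(anchor);
        URL.revokeObjectURL(url);
    }

    // e.g. offerDownload(patchedBlob, "patched.bin");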

Some background…

My motivation for writing JPS came from two events:

Indeed, when I learned of Zelda Starring Zelda I wanted to play it (A+++ would play again; currently playing the second installment), but realized the IPS patcher I had previously used no longer ran (it was built for PowerPC), and while I was able to download and use a different patcher, I thought there had to be a better way than each platform using a different program, a program also susceptible to becoming unsupported in turn. And this joined my thoughts from the time when Gatekeeper and Developer ID were announced, when I wondered whether we couldn’t circumvent this Apple restriction using web apps. So I decided I would develop a web app to apply IPS patches.

While most of the difficulties were encountered when developing the Curtain engine, the browser features used by JPS itself, namely client-side file manipulation and download, led to some challenges as well. One fun aspect was taking a format, IPS, which embeds many assumptions, some undocumented, about C-like manipulation APIs (e.g. writing to a mutable FILE*-like object, and performing automatic zero filling when writing past the end of the file), and making it work using the functional Blob APIs, based on slicing and concatenating arrays and immutable Blob objects. There were a few interesting surprises; for instance, early versions of JPS could, on some input files, cause Firefox to crash, taking down JPS and all the other Firefox tabs! Worse, resolving this required a significant rewrite of the patching engine, which led me to develop automated tests before performing this rewrite, to ensure that the rewrite would not regress in any way (it didn’t).
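
To give an idea of the impedance mismatch, here is roughly how a C-like “write at offset, with zero filling past the end of file” can be expressed on top of immutable Blobs (a simplified sketch, not the actual JPS patching engine):

    // Emulate "write bytes at offset, zero-filling any gap past the current
    // end of file" on top of immutable Blobs, by slicing around the
    // destination range and concatenating a new Blob.
    function writeAt(blob, offset, bytes) {
        var parts = [blob.slice(0, Math.min(offset, blob.size))];
        if (offset > blob.size) {
            // Writing past the end: zero-fill the gap, as C APIs do
            // implicitly (a freshly allocated Uint8Array is all zeroes).
            parts.push(new Uint8Array(offset - blob.size));
        }
        parts.push(bytes);
        if (offset + bytes.length < blob.size) {
            parts.push(blob.slice(offset + bytes.length));
        }
        return new Blob(parts);
    }

    // e.g. patched = writeAt(patched, record.offset, record.data);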

JPS was extensively tested prior to this release; I myself tested about a hundred patches, with only one patch not working when running on Firefox (bug report), and it has been in open beta for some time without any other problematic patch having been reported.

The JPS source code is available under a BSD license; the source release contains all the needed code to deploy it with Curtain (which has to be downloaded separately), as well as test vectors for the IPS file format and a test harness to automatically test JPS using these files.

A few more words

While I would have liked to support Safari so that JPS could run out of the box on Mac OS X, I deem this proof of concept of a desktop-like web app good enough for at least a subset of desktop use cases; enough so for me to put the Gatekeeper and Developer ID concerns behind me. I can now reveal that, because of these concerns, I did not update to Mac OS X Mountain Lion or any later version until today; yes, up until yesterday I was still running Lion on my main machine.

Now that JPS and Curtain have been released, I can’t wait to see what will be done with this easy (well, OK, easier) way to develop small desktop-like tinkerer tools using the web!

Introducing Curtain

I’m excited to introduce Curtain to you today. Curtain is a packaging and deployment engine for desktop-like web apps; Curtain handles the business of generating the app from source files and deploying it on the server such that it supports offline use.

Curtain can be cloned from BitBucket, and it has a sample app, both under the BSD license. Rather than repeating the Readme found there, I would like here to provide some background.

Some background…

Offline support

I wanted to use Application Cache for a project; as you know, Application Cache is a douchebag, but even that article did not prepare me for how much of a douchebag it is. In particular, you want web apps to be able to be updated, if only because the first version inevitably has bugs. Remember that, even if the list of files in the manifest does not change, the manifest has to change whenever the app changes, otherwise users won’t get the updated version. So how do you update the manifest and the app?

  • If the app is updated in this manner:

    1. manifest is updated
    2. remainder is updated

    or even if the two are updated at the same time, then you could run into the following scenario:

    1. user does not have the app in cache, and fetches the HTML resource
    2. manifest is updated
    3. remainder is updated
    4. due to a network hiccup on her side, user only now fetches the manifest

    Now the user has the manifest for the updated version, but is really running the previous version of the web app. Even if the list of cached files is still correct, whenever the user agent now checks for an updated manifest it will find it bit-for-bit identical, and it will keep the user on the out-of-date version until a second update occurs. This is obviously not acceptable, and if the list of cached files is incorrect for that version, it will be even worse.

  • Now imagine the web app is updated in this manner:

    1. remainder is updated
    2. manifest is updated 30 seconds (one network timeout) later

    In this case, the scenario in the previous case cannot occur: if the user fetched the HTML resource prior to the update, the user agent will either succeed before the manifest is updated, or will give up at its network timeout. However, another scenario can now occur:

    1. remainder is updated
    2. user loads the app from the server (either as an initial install, or because he still had a version prior to the one before the update), both app files and manifest
    3. manifest is updated

    In that case, the user has the updated app but the manifest for the previous version. Even if the list of cached files is correct, the versions are inconsistent, which becomes a problem if the new version turns out to have a showstopper (which sometimes only becomes apparent after public deployment, given the enormous variety of user agents in the wild) and we decide to roll back to the previous version: whenever the user agent then checks for an updated manifest, it will find it unchanged, and the user will keep using the version with the showstopper. When performing the rollback, we could modify the manifest so that it differs from both versions, but this is dangerous: when rolling back you want to deploy exactly what you deployed before, in order to avoid running into further issues. And I don't need to tell you how problematic an inconsistent app and manifest would be if the list of resources to cache changed during the update.

So how does Curtain solve this problem?

By updating the manifest twice:

  1. manifest is updated with intermediate contents
  2. remainder is updated
  3. manifest is updated again 30 seconds (one network timeout) later

If the list of resources to cache changes during the update, the intermediate manifest contains the union of the files needed by the previous version and the files needed by the updated version; and in all cases, the intermediate manifest contains, in a comment, two version numbers: the one for the app prior to the update, and the one for the app after the update. That way the manifest is suitable in both cases, and this method of updating avoids all the issues associated with the previous methods.

Of course, that would be tedious and error-prone to handle by hand, so Curtain generates both intermediate and updated manifests from a script.
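For illustration, an intermediate manifest generated this way could look like the following; the file names and version IDs are hypothetical, and the exact comment format is not necessarily the one Curtain emits:

    CACHE MANIFEST
    # previous app version: 3f2a1bc90d17
    # updated app version: 8d94e07c55aa
    main-3f2a1bc90d17.js
    main-8d94e07c55aa.js
    style-29cc0b81f4d6.css

Since the style sheet did not change between the two versions, it keeps a single entry, while the script, which did change, is listed under both its old and new versioned names (more on versioned resource names below).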

Versioned resources

I enjoy reading Clients from Hell; even though I don't design web sites for a living, I relate strongly to these horror stories. Except for one kind: those where the client complains that he should not have to clear the cache, do a hard reload, etc. to see the fully updated site. Sorry, but on those, I side completely and unquestioningly with the client. Even in a development iteration context, it is up to the developer to show that he can change the site and have the changes propagate without the user needing to do anything more than a soft reload (which invalidates the initial HTML resource if necessary, but nothing else), because such changes will need to happen in a post-deployment context anyway. And don't get me started on the number of site redesigns where the previous versions of all assets (icons, previous/next arrows, etc.) are still visible, and the announcement post starts with the caveat that you may have to reload manually for the redesign to fully take effect… and even then, it has to be done again on a second page, because the main page does not have a "next" arrow, for instance.

Yes, clearly you want resources, and image assets in particular, to be served with far-future expiration headers in order to save bandwidth. But this means they must also be immutable: they might disappear, but they may never, ever change; and if a different resource is needed, then it must have a different URL. Period.

Obviously, changing the resource name by hand, especially if you need to do so for every development iteration, is tedious and error-prone. When I read in web development tutorials, including some Application Cache ones, the suggestion to use, say, script-v2.js and increment version numbers that way, I can't help but think "Isn't that the cutest thing? Thinking you can do so flawlessly, without ever forgetting to do so whenever a resource changes? Awwww…", because that is a recipe for failure, even if you only change these resources as part of a deploy.

Such inconsistency issues are even worse for offline web apps. Indeed, if your web app cannot work offline, you can just assume that, if it works incorrectly because of an inconsistent set of resources, the user will reload and eventually get a consistent set. But in the case of an offline web app, once the user is back at her new home, for which DSL hasn't been installed yet (I'm getting tired of the airplane example), she has no opportunity to reload.

Even worse: even if the user checked, while she was online, that the web app was working correctly (which is asking a lot of her already), it may in fact have been the previous version, reloaded from the cache, that she checked, while an inconsistently updated version was being downloaded in the background; when she then relaunches the app at home, she will get the inconsistent version. You can't afford to be careless with offline web apps.

Curtain resolves this issue by relying on a version control system. On the build machine, all resources must be in a version-controlled work area, and Curtain will query the version control system for the ID of the version in which the resource was last updated, and generate a resource name by appending this version ID. Note that, done this way, Curtain avoids changing the URL of a resource (which would invalidate it in the cache) when everything else has changed but the resource itself hasn't. Curtain will process your HTML to replace references to each resource with references to its versioned name, upload the result, and upload the resources themselves so that they bear the versioned name on the server.
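As an illustration (with a hypothetical file name, version ID, and separator), a reference such as

    <img src="logo.png" alt="logo">

would end up in the deployed HTML as

    <img src="logo-2c70ff14e2b8.png" alt="logo">

with logo.png itself uploaded under the name logo-2c70ff14e2b8.png.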

Curtain will also assign a version to the app as a whole, which is in particular what goes in the manifest comment (see above): this version is simply the current version of the version control work area. Curtain itself must be in such a work area, so that if Curtain is updated but the source files are not, the app version still changes.

As part of these tasks, Curtain will manage the Cache-Control headers by synthesizing the necessary .htaccess file, which is especially important when using Application Cache; and since it has to deal with .htaccess anyway, Curtain will also directly manage the MIME types of these resources, to avoid relying on Apache's default behavior (based on file extensions).
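Such a generated .htaccess could contain directives along these lines; this is a hypothetical excerpt, assuming mod_headers is enabled, and not necessarily what Curtain actually emits:

    # Versioned resources are immutable: let them be cached for a year.
    <Files "logo-2c70ff14e2b8.png">
        Header set Cache-Control "max-age=31536000"
    </Files>
    # The manifest must be rechecked at every update check, and must be
    # served with the MIME type Application Cache expects.
    <Files "app.appcache">
        Header set Cache-Control "no-cache"
        ForceType text/cache-manifest
    </Files>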

No progressive rendering

I have always found progressive rendering to be unsightly. It was necessary in the first days of the web, what with images taking seconds to download; it is largely necessary on mobile to this day, and it is still desirable on desktop for online apps. But for offline, desktop-like web apps? No way.

Curtain opts out of progressive rendering by downloading all dependent resources through XMLHttpRequest and explicitly loading their content: for instance, for image resources, by generating a URL for the downloaded Blob and assigning it through code to the src attribute of the img tag. This means Curtain-deployed web apps depend on XHR2 and on Blob as an XHR responseType. Curtain will hide the interface until all resources have been loaded and assigned, assuming that the user will retry loading the app if no interface appears after a while; it is safe to assume the user is online at that time, because if he were offline, all the files listed in the Application Cache manifest would necessarily be locally available, and so they could not fail to load.
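Concretely, loading one image this way looks something like this minimal sketch (the element ID and file name are hypothetical):

    // Download the image in full as a Blob through XHR2, then hand it
    // to the img element; nothing is displayed until the whole
    // resource is available.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "logo-2c70ff14e2b8.png", true);
    xhr.responseType = "blob";
    xhr.onload = function () {
        var url = (window.URL || window.webkitURL).createObjectURL(xhr.response);
        document.getElementById("logo").src = url;
    };
    xhr.send();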

If JavaScript is disabled or the browser does not have the necessary support for Curtain, we want to be able to show a message to that effect, and we want to do it in the context of the "usual" interface, so that the user recognizes the web app. So the entirety of the interface is put in a div belonging to a CSS class called curtain. A small bit of JavaScript code placed before the interface hides this div: if JavaScript is disabled, this code never runs and the interface simply won't be hidden. Then code placed after the interface checks everything necessary for the Curtain runtime to perform its job (using Modernizr in particular); if not everything is available, the message is changed and the div is made visible.
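One plausible way to structure this, as a sketch of the idea rather than Curtain's actual markup:

    <script>
        // Runs before the interface is parsed: hide it with an injected
        // style rule. With JavaScript disabled, this never runs and the
        // fallback message below stays visible.
        document.write('<style>.curtain { display: none; }</style>');
    </script>
    <div class="curtain">
        <p>This application requires JavaScript and a recent browser.</p>
        <!-- ...the rest of the interface... -->
    </div>
    <script>
        // Runs after the interface: check for what the runtime needs
        // (sketched here; Modernizr covers this more thoroughly).
        if (!(window.XMLHttpRequest && window.Blob &&
              (window.URL || window.webkitURL))) {
            var div = document.getElementsByClassName('curtain')[0];
            div.firstElementChild.textContent =
                'Sorry, your browser cannot run this application.';
            div.style.display = 'block';
        }
    </script>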

The HTTP URLs of the images are put in the src attributes of the img tags in the initially downloaded HTML. However, this is only a provision for the two error cases above; in normal usage, they will have been replaced by blob URLs before the interface becomes visible.

Language

First, Curtain generates static sites, and does not depend on any server-side programming language or any kind of server processing. Second, while early versions of the build-and-upload script were written as a shell script, Curtain is written in Python so as to be as portable as possible (it was that or Perl; I chose Python), though I have not yet been able to test it on Windows or Linux.

Third, Curtain embeds a bit of JavaScript code along with your app, and it expects your app to be written in JavaScript. However, Curtain makes no pretense of providing JavaScript framework features; you should be able to use it with any JavaScript framework, including Vanilla JS.

Stay tuned…

Stay tuned, because tomorrow I will present to you the sample app for Curtain, and its justification.