Thank you, Mr. Siracusa

Today, I learned that John Siracusa has retired from his role of writing the review of each new Mac OS X release for Ars Technica. Well, review is not quite the right word: as I’ve previously written when I had the audacity to review one of his reviews, what are ostensibly articles reviewing Mac OS X are, to my mind, better thought of as book-length essays that aim to popularize the progress made in each release of Mac OS X. They will be missed.

It would be hard for me to overstate the influence that John Siracusa’s “reviews” have had on my understanding of Mac OS X and on my writing; you only have to see the various references to John or his reviews I made over the years on this blog (including this bit…). In fact, the very existence of this blog was inspired in part by John: when I wrote him with some additional information in reaction to his Mac OS X Snow Leopard review, he concluded his answer with:

You should actually massage your whole email into a blog post [of] your own.  I’d definitely tweet a link to it! :)

to which my reaction was:

Blog? Which blog? On the other hand, it’d be a good way to start one
Hmm

Merely 4 months later, for this reason and others, this blog started (I finally managed to drop the information alluded to in 2012; still waiting for that tweet ;) ).

And I’ll add that his podcasting output may dwarf his blogging in volume, but, besides the fact that I don’t listen to podcasts much, I don’t think they really compare, mostly because podcasts lack the reference aspect of his Mac OS X masterpieces due to the inherent limitations of podcasts (not indexed, hard to link to a specific part, not possible to listen to in every context, etc.). But, ultimately, it was his call; as someone, if I remember correctly, commented on the video of this (the actual video has since gone the way of the dodo): “Dear John, no pressure. Love, the Internet”. Let us not mourn, but rather celebrate, from the Mac OS X developer preview write-ups to the Mac OS X 10.10 Yosemite review, the magnum opus he brought to the world. Thank you, Mr. Siracusa.

April Fools’ 2015

As you probably guessed, the post I made Wednesday was an April Fools’ joke… well, the kind of April Fools’ joke I do here, of course: just because it was for fun does not mean there was no deeper message to that post (now translated to English for your understanding).

In case you missed it, for April the first (besides posting that post) I translated to French my greatest hits (as listed there) and a few other minor posts, replaced all others with a message in French claiming the post in question was being translated, replaced the comicroll with an equivalent one listing French online comics, and translated to French all post titles and all elements of the blog interface: “React”, search, dates, etc. up to the blog title: “Le Programmeur Itinérant” (it stayed that way a bit longer than the initially planned 1-2 days because of unforeseen technical issues, my apologies for the trouble). Thus reminding you, in case my name did not make it clear enough, that even though I publish in English my first language is actually French.

The problem of availability of information, especially technical information, in more than one human language has always interested me, for reasons of inclusiveness among others. It remains a very hard problem (I did get a good laugh at the results of Google Translate back to English when applied to my French posts), and so initiatives such as this are very welcome (they translated my “A few things iOS developers…” for instance, but I can’t find the link at the moment).

Lastly, there have been a few influences that led me to do this for April the first, but I want to thank in particular Stéphane Bortzmeyer, who manages to maintain a very technical blog in French; whenever I needed the French translation of a technical term I could typically just look in his blog to see what he uses (or to confirm there was no point in trying, e.g. for “smartphone”, which has no real French translation). Much respect to him for this.

To arms, citizens!

To arms, I say! I just realized the enormous scandal that is the presence in Unicode of the emoji character TOKYO TOWER (U+1F5FC), which, if you are equipped with a color set, you should be able to see after the colon: 🗼. Scandal, I say, as this thing, which we never talk about at home whenever we talk about Tokyo, and for good reason, as it is in truth a pale imitation of our national tower, the Eiffel tower, that the Japanese made at a time when they found success in imitation… Where was I? Oh, yes, so, that thing managed to steal a spot in Unicode even though our Eiffel tower isn’t in there! Scandal, I say!

Worse yet, this was done with the complicity of the yankees, who shamelessly dominate the Unicode consortium; the collusion is obvious when we see they themselves took advantage of it to slot in the Statue of Liberty. And I say, no, this shall not pass! Say no to the US-Japan cultural domination! That is why, from now on, my blog will be in French. Too bad for you if you can’t read it. I even started translating my previous posts, starting with my greatest hits, namely A few things iOS developers ought to know about the ARM architecture, Introduction to NEON on iPhone, Benefits (and drawback) to compiling your iOS app for ARMv7 and PSA: Do not release ARMv7s code until you have tested it. And I have no intent of stopping there.

Join me in the protest to demand that the Eiffel tower be added to Unicode! To arms!

(disclaimer)

Unconventional iOS app extension idea: internal thumbnail generator

The arrival of extensions (along with similar features) in iOS 8, even if it does not solve all problems with the platform’s inclusiveness, represents a sea change in what is possible for third-party developers on iOS, enabling many previously unviable apps such as Transmit iOS. But, even with the ostensibly specific scenarios (document provider extensions, share extensions, etc.) that app extensions are allowed to hook into, I feel we have only barely begun to realize the potential of extensions. Today I would like to present a less expected problem extensions could solve: fail-safe thumbnail generation.

The problem is one we encountered back in the day when developing CineXPlayer. I describe the use case in a radar (rdar://problem/9115604), but the tl;dr version is we wanted to generate thumbnails for the videos the user loaded in the app, and were afraid of crashing at launch as a result of doing this processing (likely resulting in the user leaving a one-star “review”), so we wanted to do so in a separate process to preserve the app, but the sandbox on iOS does not allow it.

But now in iOS 8 there may be a way to use extensions to get the same result. Remember that extensions run in their own process, separate from both the host app process and the containing app process; so the idea would be to embed an action extension for a custom type of content that in practice only our app provides, provide the videos loaded in the app to extensions under that type, and use the ability of action extensions to send content back to the host to return the generated thumbnail; if our code crashes while generating the thumbnail, we only lose the extension process, and the app remains fine.
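To make the idea more concrete, here is a rough, untested sketch of what the extension side could look like (ThumbnailActionViewController and generateThumbnail() are hypothetical names of my invention, and error handling is elided):

import UIKit

// Hypothetical principal class of the action extension: all the crash-prone
// decoding work happens here, in the extension process, not in the host app.
class ThumbnailActionViewController: UIViewController {
    func generateThumbnail() -> UIImage {
        // stand-in for the actual, potentially crashing, video decoding work
        return UIImage();
    }

    override func viewDidLoad() {
        super.viewDidLoad();

        // ... fetch the video from extensionContext!.inputItems here ...
        let thumbnail = generateThumbnail();

        // Send the result back to the host app (our own app, in this scenario).
        let item = NSExtensionItem();
        item.attachments = [NSItemProvider(item: UIImagePNGRepresentation(thumbnail),
                                           typeIdentifier: "public.png")];
        extensionContext!.completeRequestReturningItems([item], completionHandler: nil);
    }
}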

This would not be ideal, of course, as the user would have to perform an explicit action on each and every file (I haven’t checked to see whether there would be sneaky ways to process all files with one extension invocation), but I think it would be worth trying if I were still working on CineXPlayer; and if after deployment Apple eventually wises up to it, well, I would answer them that it’s only up to them to provide better ways to solve this issue.

MPW on Mac OS X

From Steven Troughton-Smith (via both Michael Tsai and John Gruber) comes the news of an MPW compatibility layer project and how to use it to build code targeting Classic Mac OS and even Carbonized code from a Mac OS X host, including Yosemite (10.10). This is quite clever, and awesome news, as doing so was becoming more and more complicated, and in practice required keeping one or more old Macs around.

Back in the days of Mac OS X 10.2-10.4, I toyed with backporting some of my programming projects, originally developed in Carbon with Project Builder, to MacOS 9, and downloaded MPW (since it was free, and CodeWarrior was not) to do so. The Macintosh Programmer’s Workshop was Apple’s own development environment for developing Mac apps, tracing its lineage from the Lisa Programmer’s Workshop, which was originally the only way to develop Mac apps (yes, in 1984 you could not develop Mac software on the Mac itself). If I recall correctly, Apple originally had MPW for sale, before they made it free when it could no longer compete with CodeWarrior. You can still find elements from MPW in the form of a few tools in today’s Xcode — mostly Rez, DeRez, GetFileInfo and SetFile. As a result, I do have some advice when backporting code from Mac OS X to MacOS 9 (and possibly earlier, as Steven demonstrated).

First, you of course have to forget about Objective-C, forget about any modern Carbon (e.g. HIObject, though the Carbon Event Manager is OK), forget about Quartz (hello QuickDraw), and forget about most of Unix, though if I recall correctly the C standard library included with MPW (whose name escapes me at the moment) does have some support besides the standard C library, such as open(), read(), write() and close(). Don’t even think about preemptive threads (or at least, ones you would want to use). In fact, depending on how far back you want to go, you may not have support for things you would not even consider niceties, but which were actually nicer than what came before; for instance, before Carbon, a Mac app would call WaitNextEvent() in a loop to sleep until the next event that needed processing, and then the app would have to manually dispatch it to the right target, including switching on the event type, performing hit testing, etc.: no callback-based event handling! But WaitNextEvent() itself did not appear until System 7, if I recall correctly, so if you want to target System 6 and earlier, you have to poll for events while remembering to yield processing time from time to time to drivers, to QuickTime (if you were using it), etc. In the same way, if you want to target anything before MacOS 8 you cannot use Navigation Services and instead have to get acquainted with the Standard File Package… FSRefs are not usable before MacOS 9, as another example.
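To give an idea, here is a minimal sketch of such a pre-Carbon event loop (DoMenuCommand() and DoUpdate() being hypothetical handlers you would write yourself):

Boolean gDone = false;

while (!gDone)
{
    EventRecord event;

    if (WaitNextEvent(everyEvent, &event, 30 /* sleep ticks */, NULL))
    {
        switch (event.what)  /* manual dispatch on the event type */
        {
        case mouseDown:
            {
                WindowPtr window;
                short part = FindWindow(event.where, &window);  /* hit testing */

                if (part == inMenuBar)
                    DoMenuCommand(MenuSelect(event.where));
                else if (part == inDrag)
                    DragWindow(window, event.where, &qd.screenBits.bounds);
                /* inContent, inGoAway, etc. omitted */
            }
            break;
        case keyDown:
            if (event.modifiers & cmdKey)  /* command key: menu shortcut */
                DoMenuCommand(MenuKey((char)(event.message & charCodeMask)));
            break;
        case updateEvt:
            DoUpdate((WindowPtr)event.message);  /* redraw the given window */
            break;
        }
    }
}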

When running in MacOS 9 and earlier, the responsibilities of your code also considerably increase. For instance, you have to be mindful of your memory usage much more than you would have to in Mac OS X, as even when running with virtual memory in MacOS 9 (something many users disabled anyway) your application only has access to a small slice of address space called the memory partition of the application (specified in the 'SIZE' resource, and which the user can change): there is only one address space in the system, which is partitioned between the running apps; as a result memory fragmentation becomes a much more pressing concern, requiring in practice the use of movable memory blocks and a number of assorted techniques (moving blocks high, locking them, preallocating master pointers, etc.). Another example is that you must be careful to leave processor time for background apps, even if you are a fullscreen game: otherwise, for instance if iTunes is playing music in the background, it will keep playing (thanks to a trick known as “interrupt time”)… until the end of the track, and become silent from then on. Oh, and did I mention that (at least before Carbon and the Carbon Event Manager) menu handling runs in a closed event handling loop (speaking of interrupt time) that does not yield any processing time to your other tasks? Fun times.
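Concretely, careful code under the classic Memory Manager looks something like this sketch (DoSomethingWith() being hypothetical):

Handle h;

MaxApplZone();   /* grow the heap zone to its 'SIZE' limit up front */
MoreMasters();   /* preallocate master pointers so they don't fragment the heap */

h = NewHandle(32L * 1024L);  /* movable block: the Memory Manager may relocate
                                it at almost any call to compact the heap */
if (h == NULL)
    ExitToShell();           /* always check: the partition is small */

MoveHHi(h);      /* move the block high in the heap... */
HLock(h);        /* ...and pin it while we hold a dereferenced pointer */
DoSomethingWith(*h);  /* *h is only stable while the handle is locked */
HUnlock(h);      /* unpin as soon as possible so compaction can work again */
DisposeHandle(h);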

Also, depending again on how far back you want to go, you might have difficulty using the same code in MacOS 9 and Mac OS X, even with Carbon and CarbonLib (the backport of most of the Carbon APIs to MacOS 9 as a library, in order to support the same binary and even the same slice running on both MacOS 9 and Mac OS X). For instance, if you use FSSpec instead of FSRef in order to run on MacOS 8, your app will have issues on Mac OS X with file names longer than were possible on MacOS 9; these issues are not fatal, but will cause your app to report the file name as something like Thisisaverylongfilena#17678A… not very user-friendly. And the Standard File Package is not supported at all in Carbon, so you will have to split your code at compile time (so that the references to the Standard File Package are not even present when compiling for Carbon) and diverge at runtime so that when running in System 7 the app uses the Standard File Package, and when running in MacOS 8 and later it uses Navigation Services, plus the assorted packaging headaches (e.g. using a solution like FatCarbon to have two slices, one ppc that links to InterfaceLib, the pre-Carbon system library, linking weakly to the Navigation Services symbols, and one ppc that links to CarbonLib and only runs on Mac OS X).
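The runtime part of that divergence would look something like this sketch (UseNavServices() and UseStandardFile() being hypothetical wrappers around the two APIs):

#if TARGET_API_MAC_CARBON
    /* the Standard File Package does not exist in Carbon at all */
    UseNavServices();
#else
    /* classic build, weak-linked to Navigation Services: check at runtime */
    if ((Ptr)NavServicesAvailable != (Ptr)kUnresolvedCFragSymbolAddress
        && NavServicesAvailable())
        UseNavServices();    /* MacOS 8 and later */
    else
        UseStandardFile();   /* System 7: StandardGetFile() and friends */
#endif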

You think I’m done? Of course not, don’t be silly. The runtime environment in MacOS 9 is in general less conducive to development than that of Mac OS X: the lack of memory protection not only means that, when your app crashes, it is safer to just reboot the Mac since it may have corrupted the other applications, but also means you typically do not even know when your code, say, follows a NULL pointer, since that action typically doesn’t fault. Cooperative multitasking also means that a hang from your app hangs the whole Mac (only the pointer is still moving), though that can normally be solved by a good command-alt-escape… after which it’s best to reboot anyway. As for MacsBug, your friendly neighborhood debugger… well, for one, it is disassembly only, no source. But you can handle that, right?

It’s not that bad!

But don’t let these things discourage you from toying with Classic MacOS development! Indeed, doing so is not as bad as you might imagine from the preceding descriptions: none of those things matter when programming trivial, for-fun stuff, and even if you program slightly-less-than-trivial stuff, your app will merely require a 128 MB memory partition where it ought to only take 32 MB, which doesn’t matter in this day and age.

And in fact, it is a very interesting exercise because it allows a better understanding of what makes the Macintosh the Macintosh, by seeing how it was originally programmed. So I encourage you all to try and play with it.

For this, I do have some specific advice about MPW. For one, I remember MrC, the PowerPC compiler, being quite anal-retentive for certain casts, which it just refuses to do implicitly: for instance, the following code will cause an error (not just a warning):

SInt16** sndHand;
sndHand = NewHandle(sampleNb * sizeof(SInt16));

You need to explicitly cast:

SInt16** sndHand;
sndHand = (SInt16**)NewHandle(sampleNb * sizeof(SInt16));

It is less demanding when it comes to simple casts between pointers. Also, even though it makes exactly no difference in PowerPC code, it will check that functions that are supposed to have a pascal attribute (which marks the function as being called using the Pascal calling conventions, something that only makes a difference in 68k code), typically callbacks, do have it, and will refuse to compile if this is not the case.
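For instance, a dialog filter callback has to be declared along these lines for MrC to accept it where a ModalFilterProcPtr is expected:

/* the pascal keyword must match the declaration of the callback type, even
   though it only changes the calling conventions in 68k code */
pascal Boolean MyDialogFilter(DialogPtr theDialog, EventRecord *theEvent,
                              short *itemHit)
{
    return false;  /* we handled nothing: let the Dialog Manager proceed */
}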

If you go as far back as 68k: if I remember correctly, int is 16 bits wide in the Mac 68k environment (this is why SInt32 was long up until 64-bit arrived: in __LP64__ mode SInt32 is int), but it became 32 bits wide when ppc arrived; so be careful, and in general it’s better not to use int at all.

QuickDraw is, by some aspects, more approachable than Quartz (e.g. no objects to keep track of and deallocate at the end), but on the other hand the Carbon transition added some hoops to jump through that make it harder to just get started with it; for instance something as basic as getting the black pattern, used to ensure your drawing is a flat color, is described in most docs as using the black global variable, but those docs should have been changed for Carbon: with Carbon, GetQDGlobalsBlack(&blackPat); must be used to merely get that value. Another aspect which complicates initial understanding is that pre-Carbon you would just directly cast between a WindowPtr, (C)GrafPtr, offscreen GWorldPtr, etc., but when compiling for Carbon you have to use conversion functions, for instance GetWindowPort() to get the port for a given window… but only for some of those conversions, the others just being done with casts, and it is hard to know at a glance which are which.
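Here is a sketch of the contrast (window being a WindowPtr you already have):

/* pre-Carbon: QuickDraw globals are directly accessible, casts are free */
SetPort((GrafPtr)window);
PenPat(&qd.black);

/* Carbon: accessor functions all around */
{
    Pattern blackPat;

    GetQDGlobalsBlack(&blackPat);   /* merely getting the black pattern */
    SetPortWindowPort(window);      /* instead of casting the WindowPtr */
    PenPat(&blackPat);
}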

When it came to packaging, I think I got an app building for classic MacOS relatively easily with MPW, but when I made it link to CarbonLib I got various issues related to the standard C library, in particular the standard streams (stdin, stdout and stderr), and I think I had to download an updated version of some library or some headers before it would work and I could get a single binary that ran both in MacOS 9 and natively on Mac OS X.

Also, while an empty 'carb' resource with ID 0 does work to mark the application as being carbonized and make it run natively on Mac OS X, you are supposed to instead use a 'plst' resource with ID 0 and put in there what you would put in the Info.plist if the app were in a package. Also, it is not safe to use __i386__ to know whether to use framework includes (#include <Carbon/Carbon.h>) or “flat” includes (#include <Carbon.h>); typically you’d use something like WATEVER_USE_FRAMEWORK_INCLUDES, which you then set in your Makefile depending on the target.
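In other words, you end up with something like this in a common header:

#if WATEVER_USE_FRAMEWORK_INCLUDES  /* set in the Makefile for Mac OS X targets */
#include <Carbon/Carbon.h>
#else
#include <Carbon.h>                 /* "flat" include for classic targets */
#endif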

Lastly, don’t make the same mistake I originally did: when an API asks for a Handle, it doesn’t just mean a pointer to pointer to something, it means something that was specifically allocated with NewHandle() (possibly indirectly, e.g. with GetResource() and loaded if necessary), so make sure that is what you give it.
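In other words (the first fragment being the mistake):

/* WRONG: a pointer to pointer is not a Handle */
SInt16 *buffer = (SInt16 *)NewPtr(sampleNb * sizeof(SInt16));
SInt16 **notAHandle = &buffer;
/* passing (Handle)notAHandle to, say, SetHandleSize() will corrupt the heap */

/* RIGHT: allocated by the Memory Manager, which tracks the master pointer */
SInt16 **sndHand = (SInt16 **)NewHandle(sampleNb * sizeof(SInt16));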

I also have a few practical tips for dealing with Macs running ancient system software (be they physical or emulated). Mac OS X removed support for writing to an HFS (as opposed to HFS+) filesystem starting with Mac OS X 10.6, and HFS is the only thing MacOS 8 and earlier can read. However, you can still for instance write pre-made HFS disk images to floppy disks with Disk Utility (and any emulator worth its salt will allow you to mount disk images inside the emulated system), so your best bet is to use a pre-made image to load some essential tools, then if you can, set up a network connection (either real or emulated) and transfer files that way, making sure to encode them in MacBinary before transfer (which I generally prefer to BinHex); unless you know the transfer method is Mac-friendly the whole way, always decode from MacBinary as the last step, directly on the target. Alternatively, you can keep a Mac running Leopard around to directly write to HFS floppies, as I do.

Okay, exercise time.

If you are cheap, you could get away with only providing a 68k build and a Mac OS X Intel build (except neither of these can run on Leopard running on PowerPC…). So the exercise is to, on the contrary, successfully build the same code (modulo #ifdefs, etc.) for 68k, CFM-PPC linking to InterfaceLib, CFM-PPC linking to CarbonLib, Mach-O Intel, Mach-O 64-bit PPC, and Mach-O 64-bit Intel (a Cocoa UI will be tolerated for those two) for optimal performance everywhere (ARM being excluded here, obviously). Bonus points for Mach-O PPC (all three variants) and CFM-68k. More bonus points for gathering all or at least most of those in a single obese package.

Second exercise: figure out the APIs which were present in System 1.0 and are supported in 64-bit on Mac OS X. It’s a short list, but I know for sure it is not empty.

References

Macintosh C Carbon: besides the old Inside Mac books (most of which can still be found here), this is how I learned Carbon programming back in the day.

Gwynne Raskind presents the Mac Toolbox for contemporary audiences in twin articles, reminding you in particular to never neglect error handling: you can’t get away with ignoring it when using the Toolbox APIs.

Introducing Intrepid Programmer (et Programmeur Intrépide)

I decided to open a new blog for subjects that would not really belong to Wandering Coder, at intrepidprogrammer.com; there is also a French version of that blog available, at the surprisingly named programmeurintrepide.fr. There are two posts available as of this writing, and I just posted on whether or not to buy Charlie Hebdo tomorrow.

See you there if you want, and stay here for Apple nerdery as usual.

Je Suis Charlie

Today, in France, freedom of expression was attacked. For through their odious act today, the perpetrators did not merely target Charlie Hebdo. They targeted Siné Hebdo. They targeted Le Canard Enchaîné. They targeted Le Monde. They targeted TF1. They targeted RTL. They targeted Maitre Eolas. They targeted Cyprien. They targeted the press, the radio, the television, the blogs and all of the Internet. They targeted the whole of the media, the freedom of expression of us all and, through it, our Republic.

And we will not back down.

This evening, I am on the place de la République, because as a Frenchman, as a blogger, as a person, I benefit from freedom of expression and I will stand by it. And even if I did not, in fact, buy or read Charlie Hebdo, today #JeSuisCharlie, and this site is blacked out.

And we also will not take any shortcut. Today our enemy is not Islam, or Muslims, or Arabs, or any religious, ethnic, national or other group. Today our enemy is terrorism.

We will not “remember” Charlie Hebdo, because rest assured that Charlie Hebdo will live. But we will remember Cabu, Charb, Wolinski, Bernard Maris, and the others I am less familiar with (sorry) or who were not mentioned on the radio. Today my deepest condolences go to their families.


Our Republic is looking extra significant tonight.

Apple is all grown up. It needs to act like it

Look. I have already argued about Apple reaching hubris. I have previously written about what seriously looked like power abuses, then chronicled in the past how their credibility may be eroding (while adding a jab at how I thought they were stretching the truth). And rest assured there are many other events I did not cover in this ongoing iPhone shenanigans category. But here I really have to wonder whether Apple is currently engaged in a one-upmanship match with itself in that regard.

The latest events, of which the Transmit iOS feature expulsion is but the most visible, have made me think and eventually reach the conclusion that the iOS (and Mac, to an extent) platform is not governed in a way suitable for a platform of this importance, to put it lightly; even less so coming from the richest company on Earth. Apple has clearly outgrown their capability to manage the platform in a fair and coherent way (if it ever was managed that way), at least given their current structures, yet they act as if everything was fine; the last structural change in this domain was the publication of the App Store Review Guidelines in 2010, and even then, those were supposed to be, you know, guidelines. Not rules or laws. And yet guidelines like those are used as normative references to justify rejections and similar feature removal requests. This is not sustainable.

Back at the time of the Briefs saga, I was of the opinion that the problem was not so much the repeated rejection decisions as the developer being repeatedly led to believe that with this change or maybe that change Briefs could be accepted, only for his hopes to be squashed each time. Look, I get that the Apple developer evangelists at DTS (Developer Technical Support) honestly thought Briefs would be a worthwhile addition to the iOS platform and genuinely were interested in this app seeing the light of day on the iOS App Store; but at the end of the day, it was Rob Rhyne’s time and effort and livelihood that was on the line, not theirs, so yes, the fault for the whole Briefs debacle lies with them, not the iOS App Review Team. Today, in the same way, I wonder whether the fault really lies with the iOS App Review Team. Okay, okay, before you go ahead and drown me under the encrypted core dumps reported from users of your iOS and Mac apps, hear me out. To begin with, no matter how desirable (including for some people at Apple) a rejected app is, if the higher-ups at Apple were to start issuing executive orders overriding App Review Team decisions, or if pressure was put on reviewers by other Apple employees to accept an app, it would undermine the work of the App Review Team at a fundamental level, their authority would become a joke, and anyone worthwhile working there would quit, leaving the rest to handle the reviews. I don’t think that is what anyone wants. So yes, this means that regardless of the great work on new APIs from the OS software teams, regardless of the interest from Apple leaders in having on iOS, say, more apps for teaching computing, regardless of the desire of the iOS App Store editorial staff or Apple ad teams to feature innovative apps, regardless of the willingness of DTS to help quirky apps come to life, regardless of all this, if the App Review Team isn’t on board, none of that will be of use. So its power needs to be limited, certainly, but not in just any random way.

“It is a timeless experience that any man with power is brought to abuse it […] So that power cannot be abused, the dispositions of things must be such that power stops power.” De l’Esprit des Lois, Livre XI, chapitre IV. I think it may be time for Apple to apply the rantings[fr] of an obscure magistrate from the Bordeaux area (link and expression courtesy Maitre Eolas[fr]). To begin with, all app review decisions must refer to normative texts published before the app was submitted (no retroactive application). I can already hear reviewers (even though I know they will never say it out loud) complain that they cannot possibly predict every situation in advance and need the flexibility to come up with new rules on demand, to which I answer: shut up, shut up, shut up. To me, that line of thinking (which just oozes out of the Introduction to the [iOS] App Store Review Guidelines) sounds suspiciously like a George III or Louis XIV. Even if you, the reviewer, think an app should not be part of the App Store, if this app has to be accepted for lack of a rule prohibiting it, then so be it; if the rule makers (who are of course different from those applying these rules) are interested, they will come up with a new rule, at which point it can be enforced on new apps. Speaking of which, secondly, enforcing rules on app updates should not be done the same way as on new apps: blocking an app update must be balanced against the drawbacks, namely leaving a buggy, out-of-date version available on the iOS App Store; this goes especially if that previous version was already violating the rule in question (but on the other hand, they do need to be able to enforce the rules in some way even if previous app versions violating these rules were accepted, as otherwise rules would quickly become unenforceable). Third, the detailed reasoning of the rejection (with references to the relevant rules) will have to be provided to the developer, and app review decisions can only be overturned by a proper appeals process attacking this reasoning. Fourth, the person arguing against an app must be different from the person making the decision, and the app developer must be able to provide counterarguments. Fifth, for those violations that directly concern users, there has to be a way for these users to complain about such a violation so as to avoid inconsistent and unfair application of the rules (Ninjawords, anyone?). Etc. In short, at least a proper judicial process based on Rules and a proper process to come up with these rules.

All of this might seem outlandish for a company to implement: to the best of my knowledge, this has never been put in place by a private entity before. But as long as Apple keeps claiming the authority to dictate large aspects of which features apps should and should not provide to users, I see no way they can sustainably avoid such a separation of powers, given how large and incoherent in this regard Apple has become; and this holds even if they were to allow iOS apps outside the iOS App Store and give up on Mac App Store-exclusive APIs (e.g. iCloud), as the iOS App Store and Mac App Store have become too important for them to avoid such a rationalization. Look, I’m not asking for anything like full due process or enforced independence of judges. No one’s liberty (or, God forbid, life) is at stake here. But the livelihood of developers and the credibility of the iOS platform certainly are. Apple has too many responsibilities and has grown too much to keep acting like an inconsequential teenager. Apple is all grown up, and it needs to act like it.

And on a lighter note about Swift…

One more thought on the matter of Swift, which wasn’t suitable for my previous post (and is too long for Twitter):

2014 – Chris Lattner invents Swift. Swift is an admittedly relatively concise, automatically reference-counted, but otherwise class-based, statically typed, single-dispatch, object-oriented language with single implementation inheritance and multiple interface inheritance. Apple loudly heralds Swift’s novelty.

(With apologies to James Iry, and to William Ting, who beat me to it except he mischaracterized Swift as being garbage collected.)

Swift Thoughts

Here are my thoughts on Swift, the new application programming language Apple announced at WWDC 2014, based on my reading of The Swift Programming Language (iTunes link, iOS Developer Library version), with a few experiments (you can get my code if you want to reproduce them), all run on the release version of Xcode 6, to clarify behavior that was unclear from the book description. My thoughts are entirely based on the language semantics and the consequences they impose on any implementation, and will hopefully remain valid whatever the implementation; they are not based on any aspect specific to the current implementation (such as how, say, protocols and passing objects supporting multiple protocols are implemented, though that would be interesting too). These thoughts do not come in any particular order: this post is something of an NSSet of my impressions.

First:

on the book itself, I have to mention numerous widows, that is, the first line of a paragraph, or even sometimes a section header, appearing at the end of a page with the remainder of the paragraph on the next page (e.g.: “Use the for-in loop with an array to iterate over its items” at the end of a page about more traditional for loops, “variadic parameters”, etc.). If they’re going to publish it on the iBookstore, they ought to watch for that kind of stuff (and yes, even though the layout is not static, since the text can reflow when for instance the text size is changed, there are ways to guard against this happening).

The meta-problem with Swift:

the Apple developer community had all of about three months (from WWDC 2014 to the language GM) to give feedback on Swift. And while I do believe that Swift has been refined internally for much longer than that, I cannot help but notice the number of fundamental changes in Swift from June to August 2014 (documented forever in the document revision history), with for instance Array changing to have full value semantics, or the changes to the String (and Character) type. This is not so much the biggest problem with Swift as something that compounds the other issues found in Swift: if a design issue in Swift only became clear from feedback from the larger Apple developer community, and the feedback came too late or there was no time to fix it in the three (northern hemisphere summer) months, then too bad, it is now part of the language. I think there could have been better ways to handle this.

I might have to temper that a bit, though: even though Apple is allowing and encouraging you to use Swift in shipping apps, it appears that they are reserving the possibility to break source compatibility (something I admit I did not realize at first; hat tip to, who else, John Siracusa). But I wonder whether Apple will be able to actually exercise that possibility in the future: even in the pessimistic case where Swift only becomes modestly popular at first, there will be significant pushback against such an incompatible change happening — even if Apple provides conversion tools. We’ll see.

The (only) very bad idea:

block comment markers that supposedly nest, so that they can also serve to disable code. For, you see, what is inside the block comment markers is in all likelihood not going to be parsed as code (and this is, in fact, the behavior as I write this post, as of the Xcode 6 release), therefore nested comment markers are simply searched for, resulting in the following not working:

/*
println("The basic operators are +-*/%");
*/

The only alternative is to parse text after the block comment start marker as code or at least as tokens… in which case guess what would happen in the following case:

/*
And here I’d like to thank my parents for introducing me to
computers at an early age ":-)
*/

Nested block comments do not work. They cannot be made to work (for those who care, I filed this as rdar://problem/18138958/, visible on Open Radar; it was closed with status “Behaves correctly”). That is why the inside of an #if 0 / #endif pair in C must still be composed of valid preprocessing tokens. “Commenting out” code is a worthy technique, but it should never have been given that name. Instead, in Swift disable code by using #if false / #endif, which is supported but oddly enough only documented in Using Swift and Cocoa with Objective-C.
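So to disable the earlier example, write:

#if false
println("The basic operators are +-*/%");
#endif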

I don’t like:

the fact that many elements from C have not been challenged. Since programmers coming from C will have many of their habits challenged and will have to unlearn what they have learned anyway, why keep anything from C without justification? For instance, Swift has break; to exit from looping constructs AND to exit from a switch block (even though a switch in Swift is so much more than a switch in C as to be almost a different thing), which forces us to label the looping construct just in order to use a switch as the condition system to exit the loop:

var n=27;

topWhile: while (true)
{
    switch (n)
    {
    case 1:
        break topWhile;
        
    case let foo where foo%2 == 0:
        n = n/2;
        
    default:
        n = n*3 + 1;
    }
}

println(n);

If exiting from a switch had been given a different keyword, uselessly labeling the loop in this case would have been avoided.

I like:

Avoiding the most egregious C flaws. In my opinion, C has a number of flaws that its designers should have avoided even given the stated goals and purposes C was originally meant for. There are many further flaws in C, but many of those make sense as tradeoffs given what the designers of C were aiming for (e.g. programmers were expected to keep track of everything); the following flaws, on the other hand, don’t. Those are: the dependency import model, which is simply a textual include (precluding many optimizations to compilation time and harming diagnostics); the lack of a (mandatory) keyword to introduce variable declarations (such as let, var, etc. in Swift), which hurts compilation time (given that the compiler has to figure out which tokens are valid types before it can determine whether a statement is a variable declaration or an expression); and aliasing rules which are both too restrictive for the compiler (two arrays of the same type may always alias each other, preventing many optimizations; no one uses restrict in practice, and even fewer people could tell you the precise semantics of that keyword) and too restrictive for the developer (he is not supposed to write to a pointer to UInt32 and read from the same pointer as pointing to float). A further flaw becomes glaring if we further consider C as a language for implementing only bit-twiddling and real-time sub-components called from a different higher-level language: the lack of any mechanism for tracking the scope (initialization, copies, deletion) of heap-bound variables; those are simply handled in C as byte array blocks which get interpreted as the intended structure type by cast, and this is what prevents pointers to Objective-C objects from being stored in C structures in ARC mode, for instance. This is one thing that C++ got right and why Objective-C++ can be a worthwhile way to integrate bit-twiddling and real-time code with Objective-C code. Swift, thankfully, avoids all of these flaws, and many others.

I don’t like:

the method call binding model. Right after watching the keynote, in reaction to the proclamation that Swift uses the same runtime as Objective-C, I remarked this had to mean that the messaging semantics had to be the same; I meant it to rule out, even then, the possibility of Swift being even more dynamic than Objective-C. Little did I know that not only are Swift method calls not more dynamic than Objective-C method calls, but they in fact don’t use objc_msgSend() at all by default! Look, objc_msgSend() (and friends) is the whole point of the Objective-C runtime. Period. Everything else is bookkeeping in support of objc_msgSend(). Swift can call into objc_msgSend() when calling Objective-C methods and Swift methods marked @objc. But using this to proclaim that Swift “uses the same runtime as Objective-C” amounts to claiming that Python uses the same runtime as Objective-C because of the Python-Cocoa bridge and NSObject-derived Python objects. Apple is trying to convince us of the Objective-C-minus-the-C-part lineage of Swift, but the truth is that Swift has very little to do with that, and much more to do, semantically, with C++. This would never have happened had Avie Tevanian still been working at Apple.

My theory as for why Swift works that way is as follows. On the one hand, the people in charge probably think that vtables are dynamic enough, and on the other hand, they may have decided that way first in order to enable Swift to be used in (almost — Swift looks unsuitable for code running at interrupt time) all the places C can be used, including in very demanding, real-time environments such as low-latency audio, drivers, and all the dependencies of these two cases (though for these cases any allocation will have to be avoided, which means not bringing any object or any non-trivial or non-built-in structure in scope); and second in order to allow more optimization opportunities. Indeed, the whole principle of the Smalltalk model that ObjC inherited is that method calls are never bound to the implementation until exactly at the last possible time: right as the method is called, and almost all of the source information is still available at that point for the runtime to decide the binding, in particular the full name of the method in ASCII and parameter metadata (allowing such feats as forwarding, packaging the call in an invocation object, but also method swizzling, isa swizzling, etc.). Meanwhile, with LLVM and Clang Apple has an impressive compilation infrastructure that can realize potentially very useful optimizations, particularly across procedure calls (propagating constants, suppressing useless parameters, hoisting invariants out of loops, etc.). But these interprocedural optimizations cannot occur across Objective-C method calls: the compiler cannot make any assumption about the binding between the call site and the implementation (even when it ends up at run time that the same implementation is always called), which is necessary before the compiler can perform any optimization across the call site.

The problem here may be not so much the cost of objc_msgSend() itself (which can indeed often be reduced for a limited number of hot call sites by careful application of IMP caching) as the diffuse cost of the unexploited optimization opportunities across every single ObjC method call, especially if most or all subroutine calls end up being Objective-C method calls. And the combination of the two has likely prevented Objective-C from being significantly used for the implementation of complex infrastructural code where some dynamism is required (and some resistance to reverse-engineering may be welcome…), such as HTML rendering engines, database engines, game engines, media playback and processing engines, etc., where C++ reigns unchallenged. With Swift, Apple has a language that can reasonably be used for the whole infrastructural part of any application, down to the most real-time and performance-sensitive tasks you could reasonably want to perform on a general-purpose computer or server, not just (as is currently mostly the case with Objective-C) for the MVC organization at the top, with anything below model objects not necessarily being written in the same language as the high-level MVC code.

One way Apple could have had both Smalltalk-style dynamism and optimization across method calls (including the cost itself of binding) would have been to use a virtual machine and use incremental, dynamic optimization techniques, such as those developed for JavaScript in Safari, but Apple decided against it; probably for better integration with existing C code and the Cocoa frameworks, but also maybe because of the reputation of virtual machines for inferior performance. In Smalltalk, precisely, the virtual machine was allowed to inline and in general apply optimizations to (a < b) ifTrue: [foo] ifFalse: [toto] (yes, flow control in Smalltalk was implemented in terms of messages to an object); in Objective-C, the compiler cannot do the equivalent, and such an optimization cannot happen at runtime either given that the program is already frozen as machine code. It is also worth mentioning that the virtual machine approach, while allowing a combination of late binding and whole program optimizations, would not have enabled Swift to both have Smalltalk messaging semantics and be suitable for real-time code: the Smalltalk and Objective-C messaging model is basically lazy binding, and laziness is fundamentally incompatible with real-time.

I like:

the transaction-like aspect of tying variables (typically constant ones) to control flow constructs. Very few variables actually do need to vary; most of them are actually either calculation intermediates, or fixtures which are computed once and then keep the same value for as long as they are valid. And when such a fixture is necessary in a scope, it is for a reason almost always tied to the control flow construct that introduces the scope itself: dereferencing a pointer depends on a prior if statement, for instance. In the same way, I like the system (and the switch-case variable tying system that results) that allows tying a dependent data structure to enum values, though making that (at least syntactically) an extension of an enumerated type feels odd to me; I rather consider such a thing a tagged union. In fact, I think they should have gone further, and allowed tying a new variable to the current value of the loop induction variable in case of a break, rather than allow access to the loop induction variable outside the loop by declaring it before the loop.

I don’t like:

the kitchen-sink-like aspect, which also reminds me a bit too much of C++. This may be the flip side of the previous point, but nevertheless: do we need an exceedingly versatile, “unified” function declaration syntax? Not to mention we are never clearly told in the book which functions are considered to have the same identifier and will collide if used in the same program; this is not an implementation detail: code will break if two functions which did not collide start doing so with a newer version of the Swift compiler. By contrast, Objective-C, even with the recent additions such as number, array and dictionary literals, is a simple language, defining only what it needs to define.

I don’t like:

the pretense at being a script-like language when actually compiling down to native code. Since Swift compiles down to native code, this means it inherits the linking model of languages that compile to native code, but in order to claim “approachable scripting language” brownie points, Swift makes top level code the entry point of the process… that is, as long as you write that code in a file called “main.swift” (top level code is otherwise forbidden). Sure, “you don’t need a main function”, but if (unless you are working in a playground) you need to name the file containing the main code “main.swift”, what has been gained is unclear to me.

I have reservations on:

the optional semicolon. I was afraid it would be of the form “semicolons are inserted at the points where leaving them out would end up causing an error”, but it is more subtle than that, avoiding the most obvious pitfalls thanks to a few other rules. Indeed, Swift governs where whitespace can go around operators more strictly than C and other mainstream languages do: in principle (there are exceptions), whitespace is not allowed after prefix and before postfix operators, and infix operators can either have whitespace on both sides, or whitespace on neither side; no mix is allowed. As a result, this code:

infix operator *~* {}
func *~* (left: Int, right:Int) -> Int
{
    return left*right;
}

postfix operator *~* {}
postfix func *~* (val: Int) -> Int
{
    return val+42;
}

var bar = 4, foo = 2;
var toto = 0;

toto = bar*~*
foo++;

foo

will result in the following execution: with no whitespace before it and a line break after it, *~* is parsed as the postfix operator applied to bar, so toto ends up as 46, and foo++ is a separate statement.

But add one space before the operator, and what happens? Now *~* has whitespace on both sides, so it is parsed as the infix operator between bar and foo++, and toto ends up as 8 (with foo incremented in both cases).

So the outcome here is unambiguous thanks to these operator and whitespace rules: the worst has been avoided. That being said, I remain very skeptical of the optional semicolon feature; to my mind it’s just not necessary, while bringing the risk of subtle pitfalls (of which I admit I have not found any so far). Also, I admit my objection is in part because it encourages (in particular with the simplified closure-as-last-function-parameter syntax) the “Egyptian” braces style, which I simply do not like.

I have big reservations on:

custom operator definition. Swift does not just have operator overloading, where one can declare a function that will be called when one uses an operator such as * with at least one variable of a type of one’s creation, say so that mat1 * mat2 actually performs matrix multiplication; Swift also allows one to define custom operators using unused combinations of operator symbols, such as *~*. And I don’t really see the point. Indeed, operator overloading in the first place only really makes sense when one needs to perform calculations on types that are algebraic in nature: matrices, polynomials, complex or Hamiltonian numbers, etc., where it allows the code to be naturally and concisely expressed as mathematical expressions, rather than having to use a function call for every single product or addition; outside of this situation, the potential for confusion and abuse is just too great for operator overloading to make sense. So custom operators would only really make sense in situations where one operates within an algebraic system but with operations that cannot be assimilated to addition and multiplication; while I am certain such situations exist (I can’t think of any off the top of my head), these strike me as extremely specialized tasks that could be implemented in a specialized language, where they would be better served anyway. So the benefit of custom operators is very limited, while the potential cost in abuse and other drawbacks (such as the compiler reporting an unknown operator rather than a syntax error when it meets a nonsensical combination of operators due to a typo) is much greater; hence my big reservations about the custom operators feature of Swift.

I like:

the relatively strict typing (including for widening integer types) and the accompanying type inference. I think C’s typing is too loose for today’s programming tasks, so I welcome the discipline found in Swift (especially with regard to optional types). It does make it necessary to introduce quite a bit of infrastructure, such as generics and tagged unions (mistakenly labeled as enumerations with associated values), but those make the programmer’s intentions clearer. And Swift allows looser typing when it matters: with class instances and the AnyObject type, such as when doing UI work, where Swift does keep a strength of Objective-C.

I have reservations on:

string interpolation. It’s quite clever, and as far as I can tell syntactically unambiguous (whether a closing paren terminates the expression or not can be determined simply by counting parens); however, I wonder whether such a major feature is warranted when its usefulness is limited to debugging purposes: for any other purpose the string will need to be localized, which as far as I can tell precludes the use of this feature.
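To illustrate what I mean (a quick sketch; the NSLocalizedString key is of course made up):

import Foundation

let done = 3, total = 7;

// fine for debugging output:
println("Processed \(done) of \(total) items");

// but any user-visible text must be localized, which forces you back to
// format strings anyway:
let format = NSLocalizedString("Processed %d of %d items", comment: "progress");
let message = NSString(format: format, done, total);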

I am very intrigued about:

the full power of switch. I have a feeling it may be going a bit too far in completeness, but the whole principle of having richer matching, where the first criterion that applies wins in case two overlap, will allow much more natural expression of complex requirements: those that classify a situation according to one criterion per case, but where later criteria must not be applied if an earlier one already does.

I have reservations on:

tuple variables and individual element access (other than through decomposition). If you need a tuple enough that you need to keep it in a variable, then you should define a structure instead; the same goes for individual element access. Tuple constants might be useful; other than that, tuple types should only be used transitorily as function returns and parameters (in case you want to directly use a returned tuple as a parameter for that function), and should have to be composed and decomposed (including when accessing them inside a function that has a tuple parameter) for any other use.

I have reservations on:

tuple type conversions. This is one place where Swift actually does use duck typing, but with subtle rules that can trip you up; let us see what happens when we put this code:

func tupleuser(toto : (min: Int, max: Int)) -> Int
{
    return toto.max - toto.min;
}

func tupleprovider(a :Int, b: Int) -> (max: Int, min: Int)
{
    return (a - b/2 + b, a - b/2);
}

func filter(item: (Int, Int)) -> (Int, Int)
{
    return item;
}

func filter2(item: (min: Int, max: Int)) -> (min: Int, max: Int)
{
    return item;
}


tupleuser(filter2(tupleprovider(100, 9)));

// I tried to use a generic function instead of "filter2", but
// the compiler complained with "Cannot convert the expression's type
// '(max: Int, min: Int)' to type ’T’", it seems that when the
// parameter type and the expected return type disagree, the Swift
// compiler would rather not infer at all.

in a playground:

[Playground screenshot: the code above, with, in the playground margin, 105 and 96 inverted between tupleprovider and filter2, and the final result being 9.]

But then let us change the intermediate function:

[Playground screenshot: the same code as above, except filter2 has been replaced by filter in the last line; as a result, 105 and 96 are no longer inverted between tupleprovider and filter, and the final result is -9.]

Uh?! That’s right: when a tuple value gets passed between two tuple types (here, from function result to function parameter) where at least one of the tuple types has unnamed fields, then tuple fields keep their position. However, when both tuple types have named fields, then tuple fields are matched by name (the names of course have to match) and can change position! Something to keep in mind, at the very least.

I like:

closures, class extensions. Of course they have to be in.

I have reservations on:

all the possible syntax simplifications for anonymous closures. In particular, the possibility of putting a closure passed as the last parameter to a function outside that function’s parentheses is a bit misleading as to whether that code is part of the caller of that function or not, so programmers may make the mistake of putting a return in the closure expecting to exit from the caller function, while this will only exit from the closure.

I have reservations on:

structure and enumeration methods. Structure methods already import a superfluous feature from C++, but enumeration methods just take the cake. What reasonable purpose could this serve? Is it so hard to write TypeDoStuff(value) rather than value.doStuff()? Because remember, inheritance is only for classes, so there is no purpose for non-class methods other than enabling the method invocation syntax.

I have big reservations on:

the Character type. I am resolutely of the opinion (informed by having seen way too many permutations of issues that appear when leaving the comfortable world of ASCII) that ordinary programmers should never concern themselves with the elementary constituents of a string. Never. When manipulating sound, do you ever consider it a sequence of phonemes or notes that can be manipulated individually? Of course not: you consider it a continuous flow; even when it needs to be processed as blocks or samples, you apply the same processing (maybe with time-dependent inputs, but the same processing nonetheless) to all of them. So the same way, strings and text should be processed as a media flow. Python has the right idea: there is no character type, merely very short strings when one does character-like processing, though I think Python does not go far enough. The only string primitives ordinary programmers should ever need are:

  • defining literal ASCII strings (typically for dictionary keys and debugging)
  • reading and writing strings from byte arrays with a specified encoding
  • printing the value of variables to a string, possibly under the control of a format and locale
  • attempting to interpret the contents of a string as an integer or floating-point number, possibly under the control of a format and locale
  • concatenating strings
  • hashing strings (with an implementation of hashing that takes into account the fact strings that only vary in character composition are considered equal and so must have equal hashes)
  • searching within a string with appropriate options (regular expression or not, case sensitive or not, anchored or not, etc.) and getting the first match (which may compare equal while not being the exact same Unicode sequence as the searched string), the part before that match, and the part after that match, or nothing if the search turned up empty.
  • comparing strings for equality and sorting with appropriate options (similar to that of searching, plus specific options such as numeric sort, that is "1" < "2" < "100")
  • and for very specific purposes, a few text transformations: mostly convert to lowercase, convert to uppercase, and capitalize words.

That’s it. Every other operation ordinary programmers perform can be expressed as a combination of those (and provided as convenience functions): search and replace is simply searching, then either returning the input string if the search turned up empty, or concatenating the part before the match, the replacement, and the result of a recursive search and replace on the part after the match; parsing is merely finding the next token (from a list) in the string, or advancing until the regular expression can no longer advance (e.g. stopping once the input is no longer a digit) and then further parsing or interpreting the separated parts; finding out whether a file has the file extension “avi” in a case-insensitive way? Do a case-insensitive, anchored, reverse, locale-independent search for ".avi" in the file name string. Etc.
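For the record, that last example can already be expressed with the Foundation options (a sketch, using the Xcode 6-era Swift syntax):

import Foundation

let fileName = "Holiday.AVI";
let options: NSStringCompareOptions =
    .CaseInsensitiveSearch | .AnchoredSearch | .BackwardsSearch;
let isAVI = fileName.rangeOfString(".avi", options: options,
                                   range: nil, locale: nil) != nil;
// isAVI is true: the anchored backwards search only matches at the very end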

None of those purposes necessitate breaking up a string into its constituent Unicode code points, or its constituent grapheme clusters, or its constituent UTF-8 bytes, or its constituent whatevers. Where better access is needed is for very specific purposes such as text editing, typesetting, and rendering, implemented by specialists in specialized libraries that ordinary programmers use through an API, and those specialists will need access down to the individual Unicode code points, with Swift’s Character type being in all likelihood useless for them. So I think Swift should do away with the Character type; yes, this means you would not be able to use the example of “reversing” a string (whatever that means when you have, say, Hangul syllables) to demonstrate how to do string processing in the language, but to be honest this is the only real purpose I can think of for which the Character type is “useful”.

I don’t like:

the assumption across the book that we are necessarily writing a Mac OS X/iOS app in Xcode. For instance, runtime errors (integer overflow, array subscript out of bounds, etc.) are described as causing the app to exit. Does this mean Swift cannot be used for command-line tools or XPC services, for instance? I suppose that is not the case, or Swift would be unnecessarily limited, so Swift ought to be described in more general terms (in terms of processes, OS interaction, etc.).

I have reservations on:

the Int and UInt types having different widths depending on whether the code is running in a 32-bit or 64-bit environment. Except for item counts, array offsets, or other quantities that need to, or benefit from, scaling with memory size and potential count magnitudes (hash values come to mind), it is better for integer types to be predictable and have a fixed width. The result of indiscriminately using Int and UInt will be behavior that is unnecessarily different between the same code running in a 32-bit environment and in a 64-bit environment.
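A minimal illustration of what I mean; if I’m not mistaken, the first line compiles on a 64-bit target but is rejected at compile time on a 32-bit one, where Int is only 32 bits wide:

let count: Int = 3_000_000_000;    // fine on 64-bit; overflows Int on 32-bit
let sample: Int64 = 3_000_000_000; // predictable: 64 bits everywhere
let offset: Int32 = 30_000;        // predictable: 32 bits everywhere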

I don’t like:

a lot of ambiguities in the language description. For instance, do the range operators ... and ..< return values of an actual type, which I could manipulate if I wanted to, or are they an optional part of the for and case statement syntax, only valid there? And what about this note on capturing, which says “Swift determines what should be captured by reference and what should be copied by value”? This makes no sense: whether variables are captured by reference or by value is part of the language semantics, not an implementation detail. What it should say is that variables are captured by reference, but when possible the implementation will optimize away the reference and the closure will directly keep the value around (the same way the book does describe that Strings are value types and thus are copied in principle, but the compiler will optimize away the copy whenever possible).
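A minimal sketch of why capture semantics cannot be left as an implementation detail (makeCounter is made up for the illustration, nothing from the book):

func makeCounter() -> () -> Int
{
    var count = 0;
    func increment() -> Int
    {
        count += 1;
        return count;
    }
    return increment;  // count must be captured by reference to survive this return
}

let counter = makeCounter();
counter();  // 1
counter();  // 2: the same count is shared between calls, not copied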

I don’t understand:

how lazy stored properties are useful. Either the initializer for a lazy stored property may depend on instance stored properties, in which case I’d love to know under which conditions (if I had to guess, I’d say only let stored properties could be used as parameters of this initializer, which would in turn justify the usefulness of let stored properties), or it can’t, in which case why pay for the expensive object once per instance, as the instances are all just going to create the same one, and the expensive object could just be a global.
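Here is a sketch of the case I am wondering about; whether and when this is allowed is exactly my question (DataImporter and filename are made up for the illustration):

class DataImporter
{
    init(name: String) { /* expensive setup elided */ }
}

class DataManager
{
    let filename: String;
    // May this initializer read self.filename? Under which conditions?
    lazy var importer: DataImporter = DataImporter(name: self.filename);

    init(filename: String)
    {
        self.filename = filename;
    }
}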

I don’t understand:

why so many words are expended to specify the remainder operator behavior, while leaving unanswered the behavior of the integer division operator in the same cases. Look, in any reasonable language, the two expressions a/b and a%b are integers satisfying the following equations:

1: a = (a/b) × b + a%b
2: (a%b) × (a%b) < b × b

with the only remaining ambiguity being the sign of a%b; as a corollary, the values of a, b and a%b necessarily determine the value of a/b in a reasonable language. Fortunately, Swift is a reasonable language, so when dwelling on the behavior of a%b (answer: it is either 0 or has the same sign as a) the book should specify the tied behavior of a/b along with it. Speaking of which: Swift allows using the remainder operator on floating-point numbers, but how do I get the corresponding Euclidean division of these same floating-point numbers? I guess I could do trunc(a/b), but I’m sure there are subtleties I haven’t accounted for.
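For the record, here is how the two operators tie together in a playground; the floating-point part is my guess, per the above:

import Darwin

let a = -7, b = 2;
a / b                  // -3: integer division truncates toward zero...
a % b                  // -1: ...which is why the remainder has the sign of a
(a / b) * b + a % b    // -7: equation 1 holds

let x = 8.5, y = 2.5;
x % y                  // 1.0
trunc(x / y)           // 3.0: my guessed equivalent of the division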

I don’t like:

the lack of any information on a threading model. Hello? It’s 2014. All available Mac and iOS devices are multi-core, and have been for at least the past year. And except for spawning multiple processes from a single app (which, as far as I know, is still not possible on iOS anyway), threads and thread-based infrastructure, such as Grand Central Dispatch, are the only way to exploit the parallelism of our current multi-core hardware. So while not all apps necessarily need to be explicitly threaded, this is an important enough feature that I find it very odd that there is no description or documentation of threading in Swift. And yes, I know you can spawn threads using the Objective-C APIs and then try to run Swift code inside those threads; that’s not the point. The point is: as soon as I share any object between two threads running Swift code, what happens? Which synchronization primitives are available, and what happens if an object is used by two threads without synchronization: is there a possibility of undefined behavior (so far there is none in Swift), or is a fault the worst that could happen? Is it even supported to just use Swift code in two different threads, without sharing any object? This is not documented. I’m not asking for much; even an official admission that there is no currently defined threading model, that they are working on one, and that Swift should only be used on the main thread for now would be enough, and would allow us to plan for the future (while allowing maintainers to reject contributor suggestions that would end up causing Swift code to be used in an unsafe way). But we don’t even get that, as far as I can tell.

I like:

the support for named parameters. Yes, Swift has named parameters, in the sense that you can omit any externally named parameter that has a default value, in whichever combination you like; it’s not just the last N parameters that can be omitted, as in C++, as long as these optional parameters have different external names. The only other (minor) restriction is that the parameters that are given must be provided in order. On that subject, it is important to note that two functions or methods can differ merely in their optional parameters and yet not collide, but doing so will force invocations to specify some optional parameters in order to disambiguate between the two (and therefore make these parameters no longer optional in practice), otherwise a compilation error will occur, as seen in this code:

func joinString(a: String, andString b: String = " ",
                andString c: String = ".") -> String
{
    return a + b + c;
}

func joinString(var a: String, andString b: String = " ",
                numTimes i: Int = 1) -> String
{
    for _ in 0..<i
    {
        a = a + b;
    }
    
    return a;
}


joinString("toto", andString: "s", numTimes:3);

which normally executes as follows:

[Playground screenshot: the code above, with the final result being “totosss”]

But what if we remove numTimes:? Then the call matches both overloads equally well, and the compiler reports the ambiguity as an error.

So make sure that the function name, combined with the external names of the mandatory parameters, is enough to give the function a unique signature.

On a related note:

external parameter names are part of the function type: if you assign a function that has external parameter names (with default values or not) to a variable, the inferred type of the variable includes those external names; as a result, when the function is invoked through the variable, the external parameter names have to be provided, as can be seen in this code:

func modifyint(var base: Int, byScalingBy shift: Int) -> Int
{
    for _ in 0..<shift
    {
        base *= 10;
    }
    
    return base;
}

var combinerfunc = modifyint;

combinerfunc(3, 5)

which will result in an error, as seen here:

You need to add the external parameter name even for this kind of invocation:

[Playground screenshot: the same code, except the external parameter name has been added in the last line, as recommended; the result in the playground margin is 300,000]

In practice this means functions and closures that are to be called through a variable should not have externally named parameters.

I have reservations on:

seemingly simple statements that cause non-obvious activity. For instance, how does stuff.structtype.field = foo; work? Let us see with this code:

struct Simpler
{
    var a: Int;
    var b: Int;
}

var watcher = 0;

class Complex
{
    var prop : Simpler = Simpler(a: 0, b: 0)
    {
        willSet(newSimpler)
        {
            watcher++;  // fires before each assignment to prop
        }
    }
}

let frobz = Complex();

frobz.prop.b = 4;
frobz.prop.a = 6;

watcher;  // its value at this point is shown in the playground margin

println("\(frobz.prop.a), \(frobz.prop.b)");

Which executes as follows:

[Playground screenshot: the code above, with the result of watcher on the line before last being 2]

So yes, a stuff.structtype.field = foo statement, while it looks like a simple assignment, actually causes a read-modify-write of the whole structure in the class; this is reasonable behavior, as otherwise the property observers would not be able to do their job.

I don’t like:

some language features are not documented before the “language reference” part (honestly, who is going to spontaneously read that section from start to finish?), such as dynamicType; this is all the more puzzling as overriding class methods (which is very much described as part of class features in the “language guide”) is useless without dynamicType, as the sketch below illustrates.
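A minimal sketch of that claim (Shape and Circle are made up for the illustration): without going through dynamicType, describe would always call Shape’s version of the class method, and overriding it would be pointless.

class Shape
{
    class func defaultName() -> String { return "shape"; }

    func describe() -> String
    {
        // dynamicType dispatches to the class method of the runtime type:
        return "a " + self.dynamicType.defaultName();
    }
}

class Circle : Shape
{
    override class func defaultName() -> String { return "circle"; }
}

Circle().describe();  // "a circle", thanks to dynamicType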

On a related note:

dynamicType cannot be called on self until self is satisfactorily initialized (at least when I tried doing so), as if dynamicType were an ordinary method, even though it is not an ordinary method: after all, dynamicType only gives you access to the type and its type methods, which do not rely on any instance state, so why would the state of this particular instance matter? This makes dynamicType and overridable class methods that much less useful for controlling early instance initialization behavior.

I have reservations on:

subscripting on programmer-defined classes and structures. Basically, the questions I have for supporting custom operators are the same I have for support of subscripting: I just don’t see the need in a general-purpose language.

On a related note:

the correct subscript method among the different ones a class can support is chosen according to the (inferred, if necessary) type of the subscript expression. This sounds like C++’s strictly type-based (data shape) overloading, and it is, but it is acceptable in this instance.
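For illustration (Container and its subscripts are made up for the purpose):

struct Container
{
    subscript(index: Int) -> String { return "element number \(index)"; }
    subscript(key: String) -> String { return "element named \(key)"; }
}

let box = Container();
box[0]       // picks the Int subscript
box["zero"]  // picks the String subscript, purely from the type of the expression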

I have reservations on:

computed property setters. Modifying a computed property modifies, by definition, at least one stored property, but there is no language feature to document that interdependency, and this absence is going to be felt (just as the lack of any way to mark designated initializers in Objective-C was felt until recently).
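In this sketch (Temperature is made up for the illustration), nothing in the language records that celsius is backed by kelvin; the reader has to infer it from the getter and setter bodies:

struct Temperature
{
    var kelvin: Double = 273.15;  // the stored property actually being modified

    var celsius: Double  // nothing marks the dependency on kelvin
    {
        get { return kelvin - 273.15; }
        set { kelvin = newValue + 273.15; }
    }
}

var t = Temperature();
t.celsius = 100.0;  // actually writes t.kelvin
t.kelvin            // 373.15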

I have reservations on:

allowing a closure to be run for setting the default value of a property. Is it really a good idea?
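For reference, the feature in question looks like this (Board is made up for the illustration):

class Board
{
    // the closure runs during each instance's initialization;
    // note the trailing parentheses, which invoke it
    let squares: [Int] = {
        var result = [Int]();
        for _ in 0..<64
        {
            result.append(0);
        }
        return result;
    }();
}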

I like:

the good case examples for the code samples in the book. Each time, it is clear why the code construct just introduced is the appropriate way to treat the practical problem at hand.

I don’t like:

the lack of a narrative, or at least of a progression, in the book. Where is the rationale for some of the less obvious features? Where is the equivalent of Object-Oriented Programming with Objective-C (formerly the first half of “Object-Oriented Programming and the Objective-C Programming Language”)? This matters: we can’t just give developers a bunch of tools and expect them to figure out which tool serves which purpose, or at least they won’t do so in a consistent way. Providing a rationale for its features is part of a programming language as well.

I like:

the declaration syntax. While, compared to C, we no longer have the principle that declaration mimics usage, I think it’s worth it, on the other hand, to get rid of this:

char* foo, bar, **baz;

which in C declares foo as a pointer to char and baz as a pointer to pointer to char, but bar as a char, not a pointer to char… In fact, in Swift, when you combine the type declaration syntax (colon then type name after the variable/parameter name), the function declaration syntax, top-level code being the entry point, and nested functions, you get at times a very Pascalian feel… In 2014, Apple languages have come full circle from 1984 (for the younguns among you, Pascal was the first high-level programming language Apple supported for Mac development, and it remained the dominant language for Mac application development until the arrival of PowerPC in 1993).

I don’t like:

the lack of any portability information. I guess it’s a bit early for any kind of cross-platform availability; right now Apple is concentrating on making the language run and shine on Apple platforms, I get that. But I’d like some kind of information in that area, even just a rough intent (and the steps they are taking towards it, e.g. working towards standardization? Or making sure Swift support is part of the open-source LLVM releases, maybe?), so that I can know whether I can invest in Swift and eventually leverage this work on another platform, as I can today with, say, C(++). Sorry, but I’m not going to encode my thoughts (at least not for many of my projects) in a format if I do not know whether that format will stay locked to Apple platforms or not. On a related note, some information on which source changes will maintain ABI compatibility and which will not would be appreciated. But this information is not provided. I know that Apple does not guarantee any binary compatibility at this time, but even if it is not implemented yet, they have some idea of what will be binary compatible and what will not, and knowing this would inform my API design, for instance.

I like:

the few cases where implicit conversion is provided (that is, where it makes sense). For instance, you might have noticed that, if foo is an optional Int (that is, Int?), you never need to write foo = Some(4);, but simply foo = 4;. This is appreciated when you may or may not perform a given action at the end of a function, but if you do, a value is necessarily needed, for instance an error code: in that case, you track the need to eventually perform this action with an optional of the value’s type, and since this optional variable gets set in plenty of spots, any simplification is appreciated.
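A minimal sketch of that pattern (the names are made up for the illustration):

var errorCode: Int?;  // nothing to report yet

// ... in some failure path:
errorCode = 4;        // no need to write Some(4): the Int is wrapped implicitly

// ... at the end of the function:
if let code = errorCode
{
    println("finishing with error \(code)");
}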

My pessimistic conclusion

Swift seems to go counter to all historical programming language trends: it is statically typed when most language work seems to trend towards more loosely typed semantics and even duck typing, it compiles down to machine code and has a design optimized for that purpose when most new languages these days run in virtual machines, and it goes for total safety when most new languages have abandoned it. I wonder whether Swift won’t eventually end up on the wrong side of history.

My optimistic conclusion

Swift, with its type safety, safe semantics, and the possibility to bind variables as part of control-flow constructs (if let, etc.), promises to capture programmer intent better than any language I know of, which ought to ease maintenance and merge operations; this should also help observability, at least in principle (I haven’t investigated Swift’s support for DTrace), and might eventually lead to an old dream of mine: formally defined semantics for the language, which would allow writing proofs (that the compiler could verify) that, for instance, the code I just wrote cannot possibly crash.

Post-scriptum:

let me put in a few words of comment on the current state of the toolchain: it still has some way to go in terms of maturity and stability. Most of the time, when you make a mistake, the error message from the compiler is inscrutable, and I managed to crash the background compilation process of the playground on multiple occasions while researching this post. Nevertheless, as you can see in the illustrations, the playground concept has been very useful for experimenting with the language, much faster and more enjoyable than, say, an interactive interpreter interface (as in Python, for instance), so it wasn’t a bad experience overall.