Swift Thoughts

Here are my thoughts on Swift, the new application programming language Apple announced at WWDC 2014, based on my reading of The Swift Programming Language (iTunes link, iOS Developer Library version), with a few experiments (you can get my code if you want to reproduce them), all run on the release version of Xcode 6, to clarify behavior that was unclear from the book description. My thoughts are based entirely on the language semantics and the consequences those semantics impose on any implementation, and will hopefully remain valid whatever the implementation; they are not based on any aspect specific to the current implementation (such as how, say, protocols, or passing objects that conform to multiple protocols, are implemented, though that would be interesting too). These thoughts do not come in any particular order: this post is something of an NSSet of my impressions.

First:

on the book itself, I have to mention the numerous widows, that is, the first line of a paragraph, or sometimes even a section header, appearing at the end of a page with the remainder of the paragraph on the next page (e.g.: “Use the for-in loop with an array to iterate over its items” at the end of a page about more traditional for loops, “variadic parameters”, etc.). If they’re going to publish it on the iBookstore, they ought to watch for that kind of thing (and yes, even though the layout is not static, as the text can reflow when, for instance, the text size is changed, there are ways to guard against this happening).

The meta-problem with Swift:

the Apple developer community had all of about three months (from WWDC 2014 to the language GM) to give feedback on Swift. And while I do believe that Swift had been refined internally for much longer than that, I cannot help but notice the number of fundamental changes in Swift from June to August 2014 (documented forever in the document revision history), for instance Array changing to have full value semantics, or the changes to the String (and Character) types. This is not so much the biggest problem with Swift in itself as a factor that compounds the other issues found in Swift: if a design issue in Swift only became clear from feedback from the larger Apple developer community, and that feedback came too late or there was no time to fix it in the three (northern hemisphere summer) months, then too bad: it is now part of the language. I think there could have been better ways to handle this.

I might have to temper that a bit, though: even though Apple is allowing and encouraging you to use Swift in shipping apps, it appears that they are reserving the possibility of breaking source compatibility (something I admit I did not realize at first; hat tip to, who else, John Siracusa). But I wonder whether Apple will actually be able to exercise that possibility in the future: even in the pessimistic case where Swift only becomes modestly popular at first, there will be significant pushback against such an incompatible change happening — even if Apple provides conversion tools. We’ll see.

The (only) very bad idea:

block comment markers that supposedly nest, so that they can also serve to disable code. You see, what is inside the block comment markers is in all likelihood not going to be parsed as code (and this is, in fact, the behavior as I write this post, as of the Xcode 6 release); instead, nested comment markers are simply searched for, with the result that the following does not work:

/*
println("The basic operators are +-*/%");
*/

The only alternative is to parse the text after the block comment start marker as code, or at least as tokens… in which case guess what would happen in the following case:

/*
And here I’d like to thank my parents for introducing me to
computers at an early age ":-)
*/

Nested block comments do not work. They cannot be made to work (for those who care, I filed this as rdar://problem/18138958/, visible on Open Radar; it was closed with status “Behaves correctly”). That is why the inside of an #if 0 / #endif pair in C must still consist of valid preprocessing tokens. “Commenting out” code is a worthy technique, but it should never have been given that name. Instead, in Swift, disable code by using #if false / #endif, which is supported but, oddly enough, only documented in Using Swift and Cocoa with Objective-C.
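To make that concrete, here is a minimal sketch of the supported approach (the inactive region still has to lex as valid tokens, but it is never compiled, so the */ in the string is harmless):

```swift
let operators = "+-*/%"   // keep the operators in a constant instead

#if false
// Everything in this inactive region is skipped by the compiler; it only
// needs to lex as valid tokens, so the */ inside the string causes no trouble.
println("The basic operators are \(operators)")
#endif
```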

I don’t like:

the fact that many elements from C have not been challenged. Since programmers coming from C will have many of their habits challenged and will have to unlearn what they have learned anyway, why keep anything from C without justification? For instance, Swift uses break; both to exit from looping constructs AND to exit from a switch block (even though a switch in Swift is so much more than a switch in C as to be almost a different construct), which forces us to label the looping construct just to use a switch as the condition system for exiting the loop:

var n = 27;

topWhile: while (true)
{
    switch (n)
    {
    case 1:
        break topWhile;
        
    case let foo where foo%2 == 0:
        n = n/2;
        
    default:
        n = n*3 + 1;
    }
}

println(n);

If exiting from a switch had been given a different keyword, uselessly labeling the loop in this case would have been avoided.

I like:

Avoiding the most egregious C flaws. In my opinion, C has a number of flaws that its designers should have avoided even given the stated goals and purposes C was originally meant for. There are many further flaws in C, but most of those make sense as tradeoffs given what the designers of C were aiming for (e.g. programmers were expected to keep track of everything); the following flaws, on the other hand, don’t. They are: the dependency import model, which is simply a textual include (precluding many compilation-time optimizations and harming diagnostics); the lack of a mandatory keyword to introduce variable declarations (such as let and var in Swift), which hurts compilation time (the compiler has to figure out which tokens are valid types before it can determine whether a statement is a variable declaration or an expression); and aliasing rules which are both too lax for the compiler (two arrays of the same element type may always alias each other, preventing many optimizations; no one uses restrict in practice, and even fewer people could tell you the precise semantics of that keyword) and too restrictive for the developer (you are not supposed to write through a pointer to UInt32 and read from the same pointer as if it pointed to float). A further flaw becomes glaring if we consider C as a language for implementing only bit-twiddling and real-time sub-components called from a different, higher-level language: the lack of any mechanism for tracking the scope (initialization, copies, deletion) of heap-bound variables, which are simply handled in C as byte-array blocks that get interpreted as the intended structure type by casting; this is what prevents pointers to Objective-C objects from being stored in C structures in ARC mode, for instance. This is one thing that C++ got right, and why Objective-C++ can be a worthwhile way to integrate bit-twiddling and real-time code with Objective-C code. Swift, thankfully, avoids all of these flaws, and many others.

I don’t like:

the method call binding model. Right after watching the keynote, in reaction to the proclamation that Swift uses the same runtime as Objective-C, I remarked that this had to mean the messaging semantics were the same; I meant that to rule out the possibility of Swift being even more dynamic than Objective-C. Little did I know that not only are Swift method calls not more dynamic than Objective-C method calls, but in fact they don’t use objc_msgSend() at all by default! Look, objc_msgSend() (and friends) is the whole point of the Objective-C runtime. Period. Everything else is bookkeeping in support of objc_msgSend(). Swift can call into objc_msgSend() when calling Objective-C methods and Swift methods marked @objc, but using this to proclaim that Swift “uses the same runtime as Objective-C” amounts to saying that Python uses the same runtime as Objective-C because of the Python-Cocoa bridge and NSObject-derived Python objects. Apple is trying to convince us of the Objective-C-minus-the-C-part lineage of Swift, but the truth is that Swift has very little to do with that, and much more to do, semantically, with C++. This would never have happened had Avie Tevanian still been working at Apple.

My theory as to why Swift works that way is as follows. The people in charge probably think that vtables are dynamic enough; and they may have decided on that model first to enable Swift to be used in almost all the places C can be used (Swift still looks unsuitable for code running at interrupt time), including very demanding, real-time environments such as low-latency audio, drivers, and all the dependencies of these two cases (though for these cases any allocation will have to be avoided, which means not bringing any object or any non-trivial or non-built-in structure into scope), and second to allow more optimization opportunities. Indeed, the whole principle of the Smalltalk model that Objective-C inherited is that method calls are never bound to the implementation until the last possible moment: right as the method is called. Almost all of the source information is still available at that point for the runtime to decide the binding, in particular the full name of the method in ASCII and parameter metadata (allowing such feats as forwarding, packaging the call in an invocation object, but also method swizzling, isa swizzling, etc.). Meanwhile, with LLVM and Clang, Apple has an impressive compilation infrastructure that can perform potentially very useful optimizations, particularly across procedure calls (propagating constants, suppressing useless parameters, hoisting invariants out of loops, etc.). But these interprocedural optimizations cannot occur across Objective-C method calls: the compiler cannot make any assumption about the binding between the call site and the implementation (even when at run time the same implementation always ends up being called), and such an assumption is necessary before the compiler can perform any optimization across the call site.

The problem here may be not so much the cost of objc_msgSend() itself (which can indeed often be reduced for a limited number of hot call sites by careful application of IMP caching) as the diffuse cost of the unexploited optimization opportunities across every single Objective-C method call, especially if most or all subroutine calls end up being Objective-C method calls. The combination of the two has likely prevented Objective-C from being significantly used for the implementation of complex infrastructural code where some dynamism is required (and some resistance to reverse-engineering may be welcome…), such as HTML rendering engines, database engines, game engines, media playback and processing engines, etc., where C++ reigns unchallenged. With Swift, Apple has a language that can reasonably be used for the whole infrastructural part of an application, down to the most real-time and performance-sensitive tasks you could reasonably want to perform on a general-purpose computer or server, not just (as is currently mostly the case with Objective-C) for the MVC organization at the top, with anything below model objects not necessarily written in the same language as the high-level MVC code.

One way Apple could have had both Smalltalk-style dynamism and optimization across method calls (including optimizing the cost of binding itself) would have been to use a virtual machine with incremental, dynamic optimization techniques, such as those developed for JavaScript in Safari; but Apple decided against it, probably for better integration with existing C code and the Cocoa frameworks, and maybe also because of the reputation of virtual machines for inferior performance. In Smalltalk, precisely, the virtual machine was allowed to inline and in general apply optimizations to (a < b) ifTrue: [foo] ifFalse: [toto] (yes, flow control in Smalltalk was implemented in terms of messages to an object); in Objective-C, the compiler cannot do the equivalent, and such an optimization cannot happen at runtime either, given that the program is already frozen as machine code. It is also worth mentioning that the virtual machine approach, while allowing a combination of late binding and whole-program optimization, would not have enabled Swift to both have Smalltalk messaging semantics and be suitable for real-time code: the Smalltalk and Objective-C messaging model is basically lazy binding, and laziness is fundamentally incompatible with real-time.

I like:

the transaction-like aspect of tying variables (typically constant ones) to control flow constructs. Very few variables actually need to vary: most are either calculation intermediates or fixtures that are computed once and then keep the same value for as long as they are valid. And when such a fixture is necessary in a scope, it is almost always for a reason tied to the control flow construct that introduces the scope itself: dereferencing a pointer depends on a prior if statement, for instance. In the same way, I like the system (and the switch-case variable tying that results) that allows tying a dependent data structure to enum values, though making that (at least syntactically) an extension of an enumerated type feels odd to me; I would rather consider such a thing a tagged union. In fact, I think they should have gone further, and allowed tying a new variable to the current value of the loop induction variable in case of a break, rather than allowing access to the loop induction variable outside the loop by declaring it before the loop.
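As a sketch of what I mean (the Shape enum and area function are my own illustration, not from the book): the payload constant only exists in the scope of the case that proved it was there, which is exactly the tagged-union reading.

```swift
// A tagged union in all but name: each case carries its own payload.
enum Shape {
    case circle(Double)                // radius
    case rectangle(Double, Double)     // width, height
}

func area(_ shape: Shape) -> Double {
    switch shape {
    case .circle(let radius):
        // radius is tied to this case: it only exists where the match proved it
        return 3.141592653589793 * radius * radius
    case .rectangle(let width, let height):
        return width * height
    }
}
```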

I don’t like:

the kitchen-sink aspect, which also reminds me a bit too much of C++. This may be the flip side of the previous point, but nevertheless: do we need an exceedingly versatile, “unified” function declaration syntax? Not to mention we are never clearly told in the book which functions are considered to have the same identifier and will collide if used in the same program; this is not an implementation detail: code will break if two functions that did not collide start doing so with a newer version of the Swift compiler. By contrast, Objective-C, even with recent additions such as number, array, and dictionary literals, is a simple language, defining only what it needs to define.

I don’t like:

the pretense at being a script-like language while actually compiling down to native code. Since Swift compiles down to native code, it inherits the linking model of languages that compile to native code; but in order to claim “approachable scripting language” brownie points, Swift makes top-level code the entry point of the process… as long as you write that code in a file called “main.swift” (top-level code is otherwise forbidden). Sure, “you don’t need a main function”, but if (unless you are working in a playground) you need to name the file containing the main code “main.swift”, what has been gained is unclear to me.

I have reservations on:

the optional semicolon. I was afraid it would be of the form “semicolons are inserted at the points where leaving them out would end up causing an error”, but it is more subtle than that, avoiding the most obvious pitfalls thanks to a few other rules. Indeed, Swift governs where whitespace can go around operators more strictly than C and other mainstream languages do: in principle (there are exceptions), whitespace is not allowed after prefix operators or before postfix operators, and infix operators can either have whitespace on both sides or on neither side; no mix is allowed. As a result, this code:

infix operator *~* {}
func *~* (left: Int, right:Int) -> Int
{
    return left*right;
}

postfix operator *~* {}
postfix func *~* (val: Int) -> Int
{
    return val+42;
}

var bar = 4, foo = 2;
var toto = 0;

toto = bar*~*
foo++;

foo

will result in this execution:

But add one space before the operator, and what happens?

So the outcome here is unambiguous thanks to these operator whitespace rules; the worst has been avoided. That being said, I remain very skeptical of the optional semicolon feature: to my mind it is just not necessary, while bringing the risk of subtle pitfalls (of which, I admit, I have not found any so far). Also, I admit my objection is partly because it encourages (in particular with the simplified closure-as-last-function-parameter syntax) the “Egyptian” braces style, which I simply do not like.

I have big reservations on:

custom operator definition. Swift does not just have operator overloading, where one can declare a function that will be called when an operator such as * is used with at least one variable of a type of one’s creation, say so that mat1 * mat2 actually performs matrix multiplication; Swift also allows one to define custom operators using unused combinations of operator symbols, such as *~*. And I don’t really see the point. Operator overloading in the first place only really makes sense when one needs to perform calculations on types that are algebraic in nature: matrices, polynomials, complex numbers, quaternions, etc., where it allows the code to be naturally and concisely expressed as mathematical expressions, rather than having to use a function call for every single product or addition; outside of this situation, the potential for confusion and abuse is just too great for operator overloading to make sense. So custom operators would only really make sense when one operates within an algebraic system with operations that cannot be assimilated to addition and multiplication; while I am certain such situations exist (I can’t think of any off the top of my head), they strike me as extremely specialized tasks that could be implemented in a specialized language, where they would be better served anyway. The benefit of custom operators is thus very limited, while the potential cost in abuse and other drawbacks (such as the compiler reporting an unknown operator, rather than a syntax error, when it meets a nonsensical combination of operators due to a typo) is much greater, so I have big reservations about this feature.

I like:

the relatively strict typing (including for widening integer types) and the accompanying type inference. I think C’s typing is too loose for today’s programming tasks, so I welcome the discipline found in Swift (especially with regard to optional types). It does make it necessary to introduce quite a bit of infrastructure, such as generics and tagged unions (mislabeled as enumerations with associated values), but those make the programmer’s intentions clearer. And Swift allows looser typing where it matters: with class instances and the AnyObject type, such as when doing UI work, where Swift keeps a strength of Objective-C.
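A small sketch of that discipline (the names are mine): widening must be spelled out, and failable conversions surface as optionals the type system makes you handle.

```swift
let small: Int32 = 1000
let wide: Int64 = Int64(small) * 10   // widening is explicit; mixing Int32 and Int64 would not compile

let parsed: Int? = Int("42")          // failable conversion: the result is an optional
if let n = parsed {
    // n is a plain, non-optional Int only inside this scope
    _ = n + 1
}
```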

I have reservations on:

string interpolation. It’s quite clever, and as far as I can tell syntactically unambiguous (whether a closing paren terminates the interpolated expression can be determined simply by counting parens); however, I wonder whether such a major feature is warranted when its usefulness is limited to debugging purposes: for any other purpose the string will need to be localized, which as far as I can tell precludes the use of this feature.
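For illustration, the debugging-style use where interpolation does shine (the contents are my own example):

```swift
let failures = 3
let total = 7
// Any expression may appear inside \( ); nesting is resolved by counting parens.
let diagnostic = "\(failures) of \(total) checks failed (\(failures * 100 / total)%)"
```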

I am very intrigued about:

the full power of switch. I have a feeling it may go a bit too far in completeness, but the whole principle of richer matching, with the first criterion that applies winning when two overlap, allows much more natural expression of complex requirements: classifying a situation according to one criterion per case, where later criteria must not be applied if an earlier one already does.
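A sketch of that first-match-wins behavior with deliberately overlapping criteria (classify is my own example):

```swift
func classify(_ n: Int) -> String {
    switch n {
    // The cases overlap: a multiple of 15 also satisfies the next two
    // criteria, but only the first case that applies is taken.
    case let x where x % 15 == 0: return "fizzbuzz"
    case let x where x % 3 == 0:  return "fizz"
    case let x where x % 5 == 0:  return "buzz"
    default:                      return "plain"
    }
}
```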

I have reservations on:

tuple variables and individual element access (other than through decomposition). If you need a tuple badly enough that you keep it in a variable, then you should define a structure instead; the same goes for individual element access. Tuple constants might be useful; other than that, tuple types should only be used transitorily as function returns and parameters (in case you want to directly use a returned tuple as a parameter to that function), and should have to be composed and decomposed (including when accessed inside a function that has a tuple parameter) for any other use.
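To illustrate the transitory use I have in mind (minmax is my own example): return the tuple, decompose it immediately, and never store it.

```swift
// Precondition: values is non-empty.
func minmax(_ values: [Int]) -> (min: Int, max: Int) {
    var lo = values[0], hi = values[0]
    for v in values {
        if v < lo { lo = v }
        if v > hi { hi = v }
    }
    return (lo, hi)
}

// Decompose at the call site; anything longer-lived should be a struct.
let (lo, hi) = minmax([3, 1, 4, 1, 5])
```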

I have reservations on:

tuple type conversions. This is one place where Swift actually does use duck typing, but with subtle rules that can trip you up. Let us see what happens when we put this code:

func tupleuser(toto : (min: Int, max: Int)) -> Int
{
    return toto.max - toto.min;
}

func tupleprovider(a: Int, b: Int) -> (max: Int, min: Int)
{
    return (a - b/2 + b, a - b/2);
}

func filter(item: (Int, Int)) -> (Int, Int)
{
    return item;
}

func filter2(item: (min: Int, max: Int)) -> (min: Int, max: Int)
{
    return item;
}


tupleuser(filter2(tupleprovider(100, 9)));

// I tried to use a generic function instead of "filter2", but
// the compiler complained with "Cannot convert the expression's type
// '(max: Int, min: Int)' to type ’T’", it seems that when the
// parameter type and the expected return type disagree, the Swift
// compiler would rather not infer at all.

in a playground:

[Screenshot: the code above in a playground; in the playground margin, 105 and 96 are inverted between tupleprovider and filter2, and the final result is 9.]

But then let us change the intermediate function:

[Screenshot: the same code in a playground, except filter2 has been replaced by filter in the last line; as a result, 105 and 96 are no longer inverted between tupleprovider and filter, and the final result is -9.]

Huh?! That’s right: when a tuple value gets passed between two tuple types (here, from function result to function parameter) where at least one of the tuple types has unnamed fields, the tuple fields keep their positions. However, when both tuple types have named fields, the tuple fields are matched by name (the names, of course, have to match) and can change position! Something to keep in mind, at the very least.

I like:

closures, class extensions. Of course they have to be in.

I have reservations on:

all the possible syntax simplifications for anonymous closures. In particular, the possibility of putting a closure passed as the last parameter to a function outside that function’s parentheses is a bit misleading as to whether that code is part of the caller of that function, so programmers may make the mistake of putting a return in the closure expecting to exit from the caller function, when this will only exit from the closure.
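A sketch of the pitfall (applyTwice and caller are my own names): the return inside the trailing closure exits only the closure.

```swift
func applyTwice(_ x: Int, _ transform: (Int) -> Int) -> Int {
    return transform(transform(x))
}

func caller() -> Int {
    let result = applyTwice(3) { n in
        return n + 1    // exits the closure only, NOT caller()
    }
    return result       // caller() continues normally and returns 5
}
```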

I have reservations on:

structure and enumeration methods. Structure methods already take a superfluous feature from C++, but enumeration methods take the cake. What reasonable purpose could this serve? Is it so hard to write TypeDoStuff(value) rather than value.doStuff()? Because remember, inheritance is only for classes, so there is no purpose to non-class methods other than the use of the method invocation syntax.

I have big reservations on:

the Character type. I am resolutely of the opinion (informed by having seen way too many permutations of issues that appear when leaving the comfortable world of ASCII) that ordinary programmers should never concern themselves with the elementary constituents of a string. Never. When manipulating sound, do you ever consider it a sequence of phonemes or notes that can be manipulated individually? Of course not: you consider it a continuous flow; even when it needs to be processed as blocks or samples, you apply the same processing (maybe with time-dependent inputs, but the same processing nonetheless) to all of them. So the same way, strings and text should be processed as a media flow. Python has the right idea: there is no character type, merely very short strings when one does character-like processing, though I think Python does not go far enough. The only string primitives ordinary programmers should ever need are:

  • defining literal ASCII strings (typically for dictionary keys and debugging)
  • reading and writing strings from byte arrays with a specified encoding
  • printing the value of variables to a string, possibly under the control of a format and locale
  • attempting to interpret the contents of a string as an integer or floating-point number, possibly under the control of a format and locale
  • concatenating strings
  • hashing strings (with an implementation of hashing that takes into account the fact strings that only vary in character composition are considered equal and so must have equal hashes)
  • searching within a string with appropriate options (regular expression or not, case sensitive or not, anchored or not, etc.) and getting the first match (which may compare equal while not being the exact same Unicode sequence as the searched string), the part before that match, and the part after that match, or nothing if the search turned up empty.
  • comparing strings for equality and sorting with appropriate options (similar to that of searching, plus specific options such as numeric sort, that is "1" < "2" < "100")
  • and for very specific purposes, a few text transformations: mostly convert to lowercase, convert to uppercase, and capitalize words.

That’s it. Every other operation ordinary programmers perform can be expressed as a combination of those (and provided as convenience functions): search and replace is simply searching, then either returning the input string if the search turned up empty, or concatenating the part before the match, the replacement, and the result of a recursive search and replace on the part after the match; parsing is merely finding the next token (from a list) in the string, or advancing until the regular expression can no longer advance (e.g. stopping once the input is no longer a digit) and then further parsing or interpreting the separated parts; finding out whether a file has file extension “avi” in a case-insensitive way? Do a case-insensitive, anchored, reverse, locale-independent search for ".avi" in the file name string. Etc.
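As a sketch of that claim, here is search-and-replace written purely in terms of searching and concatenation (the function is mine; range(of:) stands in for whatever searching primitive the string library provides):

```swift
import Foundation

// Search-and-replace expressed as: search, then either return the input
// unchanged, or concatenate prefix + replacement + (recursive result on the suffix).
func replacing(_ s: String, _ target: String, with replacement: String) -> String {
    guard let match = s.range(of: target) else { return s }  // search turned up empty
    let before = String(s[s.startIndex..<match.lowerBound])
    let after = String(s[match.upperBound..<s.endIndex])
    return before + replacement + replacing(after, target, with: replacement)
}
```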

None of those purposes necessitates breaking up a string into its constituent Unicode code points, or its constituent grapheme clusters, or its constituent UTF-8 bytes, or its constituent whatevers. Better access is needed only for very specific purposes such as text editing, typesetting, and rendering, implemented by specialists in specialized libraries that ordinary programmers use through an API; and those specialists will need access down to the individual Unicode code points, with the Character Swift type in all likelihood useless to them. So I think Swift should do away with the Character type; yes, this means you would not be able to use the example of “reversing” a string (whatever that means when you have, say, Hangul syllables) to demonstrate how to do string processing in the language, but to be honest this is the only real purpose I can think of for which the Character type is “useful”.

I don’t like:

the assumption throughout the book that we are necessarily writing a Mac OS X/iOS app in Xcode. For instance, runtime errors (integer overflow, array subscript out of bounds, etc.) are described as causing the app to exit. Does this mean Swift cannot be used for command-line tools or XPC services, for instance? I suppose that is not the case, or Swift would be unnecessarily limited, so Swift ought to be described in more general terms (in terms of processes, OS interaction, etc.).

I have reservations on:

the Int and UInt types having different widths depending on whether the code is running in a 32-bit or 64-bit environment. Except for item counts, array offsets, and other types that need to or benefit from scaling with memory size and potential count magnitudes (hash values come to mind), it is better for integer types to be predictable and have a fixed width. The result of indiscriminately using Int and UInt will be behavior that is unnecessarily different between the same code running in a 32-bit environment and in a 64-bit environment.
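The difference is easy to observe; a sketch (MemoryLayout is how current Swift spells a sizeof query, so the exact spelling may vary between Swift versions):

```swift
// Int matches the machine word, so the same code can behave differently
// between 32-bit and 64-bit environments; fixed-width types cannot.
let wordBytes = MemoryLayout<Int>.size     // 8 in a 64-bit environment, 4 in a 32-bit one
let fixedBytes = MemoryLayout<Int32>.size  // always 4, everywhere
```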

I don’t like:

a lot of ambiguities in the language description. For instance, do the range operators ... and ..< return values of an actual type which I could manipulate if I wanted to, or are they an optional part of the for and case statement syntax, only valid there? And why this note about capturing which says that “Swift determines what should be captured by reference and what should be copied by value”? This makes no sense: whether variables are captured by reference or by value is part of the language semantics, not an implementation detail. What it should say is that variables are captured by reference, but that when possible the implementation will optimize away the reference and the closure will directly keep the value around (in the same way that the book does describe Strings as value types which are copied in principle, with the compiler optimizing away the copy whenever possible).

I don’t understand:

how lazy stored properties are useful. Either the initializer for a lazy stored property may depend on instance stored properties, in which case I’d love to know under which conditions (if I had to guess, I’d say only let stored properties could be used as parameters of this initializer, which would in turn justify the usefulness of let stored properties), or it can’t, in which case why pay for the expensive object once per instance, when each instance is just going to create the same one, so the expensive object could just be a global.
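For what it’s worth, a quick experiment (DataManager is my own example) suggests the first branch: the initializer of a lazy stored property can, as far as I can tell, reference self and thus other stored properties, since it only runs on first access, after initialization is complete.

```swift
class DataManager {
    let filename: String    // fixed at initialization

    // Not evaluated until first access; by then self is fully initialized,
    // so the initializer expression may depend on other stored properties.
    lazy var contents: String = "loaded from \(self.filename)"

    init(filename: String) {
        self.filename = filename
    }
}
```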

I don’t understand:

why so many words are expended to specify the behavior of the remainder operator, while leaving unanswered the behavior of the integer division operator in the same cases. Look, in any reasonable language, the two expressions a/b and a%b are integers satisfying the following equations:

1: a = (a/b) × b + a%b
2: (a%b) × (a%b) < b × b

with the only remaining ambiguity being the sign of a%b; as a corollary, the values of a, b, and a%b necessarily determine the value of a/b in a reasonable language. Fortunately, Swift is a reasonable language, so when dwelling on the behavior of a%b (answer: it is either 0 or has the same sign as a) the book should specify the tied behavior of a/b along with it. Speaking of which: Swift allows using the remainder operator on floating-point numbers, but how do I get the corresponding Euclidean division of those same floating-point numbers? I guess I could do trunc(a/b), but I’m sure there are subtleties I haven’t accounted for.
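The tied behavior is easy to check for integers (equation 1 in action):

```swift
let a = -7, b = 2
let q = a / b              // -3: integer division truncates toward zero
let r = a % b              // -1: zero or the same sign as a
let equationHolds = (a == q * b + r)   // equation 1, true for any b != 0
```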

I don’t like:

the lack of any information on a threading model. Hello? It’s 2014. All available Mac and iOS devices are multi-core, and have been for at least the past year. And except for spawning multiple processes from a single app (which as far as I know is still not possible on iOS anyway), threads and thread-based infrastructure, such as Grand Central Dispatch, are the only way to exploit the parallelism of current multi-core hardware. So while not all apps necessarily need to be explicitly threaded, this is an important enough feature that I find it very odd that there is no description or documentation of threading in Swift. And yes, I know you can spawn threads using the Objective-C APIs and then try to run Swift code inside those threads; that’s not the point. The point is: as soon as I share any object between two threads running Swift code, what happens? Which synchronization primitives are available, and what happens if an object is used by two threads without synchronization: is there a possibility of undefined behavior (so far there is none in Swift), or is a fault the worst that could happen? Is it even supported to use Swift code in two different threads without sharing any object? This is not documented. I’m not asking for much: even an official admission that there is no currently defined threading model, that they are working on one, and that Swift should only be used on the main thread for now would be enough, and would allow us to plan for the future (while allowing us to reject contributor suggestions that would end up causing Swift code to be used in an unsafe way). But we don’t even get that, as far as I can tell.

I like:

the support for named parameters. Yes, Swift has named parameters, in the sense that you can omit any externally named parameter that has a default value in whichever way you like; it’s not just the last N parameters that can be omitted as in C++, just as long as these optional parameters have different external names. The only other (minor) restriction is that the parameters that are given must be provided in order. On that subject, it is important to note that two functions or methods can differ merely in the optional parameters part and yet not collide, but doing so will force invocations to specify some optional parameters in order to disambiguate between the two (and therefore make these parameters no longer optional in practice), otherwise a compilation error will occur, as seen in this code:

func joinString(a: String, andString b: String = " ",
                andString c: String = ".") -> String
{
    return a + b + c;
}

func joinString(var a: String, andString b: String = " ",
                numTimes i: Int = 1) -> String
{
    for _ in 0..<i
    {
        a = a + b;
    }
    
    return a;
}


joinString("toto", andString: "s", numTimes:3);

which normally executes as follows:

The code above in a playground, with the final result being totosss

But what if we remove numTimes:? The call then matches both overloads, and the compiler reports an ambiguity error.

So make sure that the function name, combined with the external names of the mandatory parameters, is enough to provide the function with a unique signature.

On a related note:

external parameter names are part of the function type, such that if you assign a function with external parameter names (with default values or not) to a variable, the inferred type of the variable includes the external names; as a result, when the function is invoked through the variable, the external parameter names have to be provided, as can be seen in this code:

func modifyint(var base: Int, byScalingBy shift: Int) -> Int
{
    for _ in 0..<shift
    {
        base *= 10;
    }
    
    return base;
}

var combinerfunc = modifyint;

combinerfunc(3, 5)

which will result in an error, as seen here:

You need to add the external parameter name even for this kind of invocation:

Same code as above, except the external parameter name has been added as recommended in the last line, and the result in the playground margin is 300,000

In practice this means functions and closures that are to be called through a variable should not have externally named parameters.

I have reservations on:

seemingly simple statements that cause non-obvious activity. For instance, how does stuff.structtype.field = foo; work? Let us see with this code:

struct Simpler
{
    var a: Int;
    var b: Int;
}

var watcher = 0;

class Complex
{
    var prop : Simpler = Simpler(a: 0, b: 0)
    {
        willSet(newSimpler)
        {
            watcher++;
        }
    }
}

let frobz = Complex();

frobz.prop.b = 4;
frobz.prop.a = 6;

watcher;

println("\(frobz.prop.a), \(frobz.prop.b)");

Which executes as follows:

The code above in a playground, with the result of watcher in the line before last being 2

So yes, a stuff.structtype.field = foo statement, while it looks like a simple assignment, actually causes a read-modify-write of the structure in the class; this is actually a reasonable behavior, otherwise the property observers would not be able to do their job.

I don’t like:

some language features are not documented before the “language reference” part (honestly, who is going to spontaneously read that section from start to finish?), such as dynamicType; this is all the more puzzling as overriding class methods (which is very much described as part of class features in the “language guide”) is useless without dynamicType.

On a related note:

dynamicType cannot be called on self until self is satisfactorily initialized (at least when I tried doing so), as if dynamicType were an ordinary method, even though it is not: after all, dynamicType only gives you access to the type and its type methods, which do not rely on any instance, so why would the state of this particular instance matter? This makes dynamicType and overridable class methods that much less useful for controlling early instance initialization behavior.

I have reservations on:

subscripting on programmer-defined classes and structures. Basically, the questions I have for supporting custom operators are the same ones I have for supporting subscripting: I just don’t see the need in a general-purpose language.

On a related note:

the correct subscript method among the different ones a class can support is chosen according to the (inferred, if necessary) type of the subscript. This sounds like C++’s strictly type (data shape) based overloading, and it is, but it is acceptable in this instance.
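A minimal sketch of what that type-based selection looks like (the Translator type and its subscripts are made up for illustration):

```swift
// Two subscripts that differ only in the type of their index; the
// compiler selects one based on the (possibly inferred) subscript type.
struct Translator
{
    subscript(i: Int) -> String
    {
        return "number \(i)";
    }
    
    subscript(s: String) -> String
    {
        return "key " + s;
    }
}

let t = Translator();
t[3]        // uses the Int subscript
t["three"]  // uses the String subscript
```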

I have reservations on:

computed property setters. Modifying a computed property modifies, by definition, at least one stored property, but there is no language feature to document the interdependency, and this absence is going to be felt (just as the lack of any way to mark designated initializers in Objective-C was felt until recently).
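For instance, in this hypothetical Temperature type, nothing in the declaration of fahrenheit records that setting it actually rewrites celsius:

```swift
struct Temperature
{
    var celsius: Double = 0.0;
    
    // Setting fahrenheit silently modifies the stored property celsius,
    // but no language feature documents that interdependency.
    var fahrenheit: Double
    {
        get { return celsius * 9.0 / 5.0 + 32.0; }
        set { celsius = (newValue - 32.0) * 5.0 / 9.0; }
    }
}

var temp = Temperature();
temp.fahrenheit = 212.0;
temp.celsius  // 100.0, modified behind the scenes
```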

I have reservations on:

allowing a closure to be run to set the default value of a property. Is it really a good idea?
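To make the question concrete, here is a hypothetical example of the feature: the closure after the equals sign runs once, when the instance is initialized (note the trailing parentheses that invoke it):

```swift
class Lookup
{
    // The default value of this property is computed by running a
    // closure; the () at the end invokes it during initialization.
    let squares: [Int] =
    {
        var result = [Int]();
        for i in 0..<4
        {
            result.append(i * i);
        }
        return result;
    }()
}

Lookup().squares  // [0, 1, 4, 9]
```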

I like:

the good case examples for the code samples in the book. Each time, it is clear why the code construct just introduced is the appropriate way to solve the practical problem at hand.

I don’t like:

the lack of a narrative, or at least of a progression, in the book. Where is the rationale for some of the less obvious features? Where is the equivalent of Object-Oriented Programming with Objective-C (formerly the first half of “Object-Oriented Programming and the Objective-C Programming Language”)? This matters: we can’t just give developers a bunch of tools and expect them to figure out which tool is for which purpose, or at least not in a consistent way. Providing a rationale for the features is part of a programming language as well.

I like:

the declaration syntax. While compared to C we no longer have the principle that declaration mimics usage, I think it’s worth it, on the other hand, to get rid of this:

char* foo, bar, **baz;

which in C declares foo as a pointer to char, baz as a pointer to pointer to char, but bar as a char, not a pointer to char… In fact, in Swift when you combine the type declaration syntax (colon then type name after the variable/parameter name), function declaration syntax, top-level code being the entry point, and nested functions, you get at times a very Pascalian feel… In 2014, Apple languages have gone full circle from 1984 (for the younguns among you, Pascal was the first high-level programming language Apple supported for Mac development, and it remained the dominant language for Mac application development until the arrival of PowerPC in 1994).
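A small sketch of that Pascalian feel: name-colon-type declarations and a nested helper function declared inside its caller (the names are made up, and the underscores suppress external parameter names so the calls read plainly):

```swift
// Nested functions and name-colon-type declarations, reminiscent of Pascal.
func outer(_ x: Int) -> Int
{
    func double(_ y: Int) -> Int
    {
        return y * 2;
    }
    
    return double(x);
}

outer(21)  // 42
```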

I don’t like:

the lack of any portability information. I guess it’s a bit early for any kind of cross-platform availability; right now Apple is concentrating on making the language run and shine on Apple platforms, and I get that. But I’d like some kind of information, even just a rough intent (and the steps they are taking towards it: working towards standardization? Making sure Swift support is part of the open-source LLVM releases, maybe?) in that area, so that I can know whether I can invest in Swift and eventually leverage this work on another platform, as I can today with, say, C(++). Sorry, but I’m not going to encode my thoughts (at least not for many of my projects) in a format if I do not know whether this format will stay locked to Apple platforms. On a related note, some information on which source changes will maintain ABI compatibility and which will not would be appreciated, but this information is not provided either. I know that Apple does not guarantee any binary compatibility at this time, but even if it is not implemented yet they have some idea of what will be binary compatible and what will not, and knowing this would inform my API design, for instance.

I like:

the few cases where implicit conversion is provided (that is, where it makes sense). For instance, you might have noticed that, if foo is an optional Int (that is, Int?), you never need to write foo = Some(4);, but simply foo = 4;. This is appreciated when you may or may not do a given action at the end of the function, but if you do, a value is necessarily provided, for instance an error code: in that case, you track the need to do this action eventually with an optional of the value’s type, and you have plenty of spots where this optional variable is set, so any simplification is appreciated.
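In code, that pattern looks something like this (errorCode being a made-up name):

```swift
// An Int? variable accepts a plain Int assignment: the compiler wraps
// the value in the optional for us, no explicit Some(4) required.
var errorCode: Int? = nil;

// ... later in the function, an error condition is detected:
errorCode = 4;
```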

My pessimistic conclusion

Swift seems to go counter to all historical programming language trends: it is statically typed when most language work seems to trend towards more loosely typed semantics and even duck typing; it compiles down to machine code, with a design optimized for that purpose, when most new languages these days run in virtual machines; and it goes for total safety when most new languages have abandoned it. I wonder if Swift won’t end up on the wrong side of history eventually.

My optimistic conclusion

Swift, with its type safety, safe semantics and the ability to bind variables as part of control-flow constructs (if let, etc.), promises to capture programmer intent better than any language that I know of, which ought to ease maintenance and merge operations; this should also help observability, at least in principle (I haven’t investigated Swift’s support for DTrace), and might eventually lead to an old dream of mine: formally defined semantics for the language, which would allow writing proofs (that the compiler could verify) that, for instance, the code I just wrote could not possibly crash.

Post-scriptum:

let me put a few words of comment on the current state of the toolchain: it still has a way to go in terms of maturity and stability. Most of the time, when you make a mistake, the error message from the compiler is inscrutable, and I managed to crash the background compilation process of the playground on multiple occasions while researching this post. Nevertheless, as you can see in the illustrations, the playground concept has been very useful for experimenting with the language, much faster and more enjoyable than with, say, an interactive interpreter interface (as in Python for instance), so it wasn’t a bad experience overall.