
A fast look at Swift, Apple’s new programming language

For better or worse, Apple's new language lets you do things your way.

John Timmer

If anyone outside Apple saw Swift coming, they certainly weren't making any public predictions. In the middle of a keynote filled with the sorts of announcements you'd expect (even if the details were a surprise), Apple this week announced that it has created a modern replacement for Objective-C, the programming language the company has used since shortly after Steve Jobs founded NeXT.

Swift wasn't a "sometime before the year's out"-style announcement, either. The same day, a 550-page language guide appeared in the iBooks store. Developers were also given access to Xcode 6 betas, which allow application development using the new language. Whatever changes were needed to get the entire Cocoa toolkit to play nice with Swift are apparently already done.

While we haven't yet produced any Swift code, we have read the entire language guide and looked at the code samples Apple provided. What follows is our first take on the language itself, along with some ideas about what Apple hopes to accomplish.

Why were we using Objective-C?

When NeXT began, object-oriented programming hadn't been widely adopted, and few languages available even implemented it. At the time, then, Objective-C probably seemed like a good choice, one that could incorporate legacy C code and programming habits while adding a layer of object orientation on top.

But as it turned out, NeXT was the only major organization to adopt the language. This had some positive aspects, as the company was able to build its entire development environment around the strengths of Objective-C. In turn, anyone who bought into developing in the language ended up using NeXT's approach. For instance, many "language features" of Objective-C aren't actually language features at all; they are implemented by NeXT's base class, NSObject. And some of the design patterns in Cocoa, like the existence of delegates, rely on the language introspection features of Objective-C, which can be used to safely determine whether an object will respond to a specific message.

The downside of narrow Objective-C adoption was that it forced the language into a niche. When Apple inherited Objective-C, it immediately set about giving developers an alternative in the form of the Carbon libraries, since these enabled a more traditional approach to Mac development.

Things changed with the runaway popularity of the iPhone SDK, which only allowed development in Objective-C. Suddenly, a lot of developers used Objective-C, and many of them already had extensive experience in other programming languages. This was great for Apple, but it caused a bit of strain. Not every developer was entirely happy with Objective-C as a language, and Apple then compounded this problem by announcing that the future of Mac development was Cocoa, the Objective-C frameworks.

What's wrong with Objective-C?

Objective-C has served Apple incredibly well. By controlling the runtime and writing its own compiler, the company has been able to stave off some of the language limitations it inherited from NeXT and add new features, like properties, a garbage collector, and the garbage collector's replacement, Automatic Reference Counting.

But some things really couldn't be changed. Because it was basically C with a few extensions, Objective-C was limited to using C's method of keeping track of complex objects: pointers, which are essentially the memory address occupied by the first byte of an object. Everything, from an instance of NSString to the most complex table view, was passed around and messaged using its pointer.

For the most part, this didn't pose problems. It was generally possible to write complex applications without ever being reminded that everything you were doing involved pointers. But it was also possible to screw up and try to access the wrong address in memory, causing a program to crash or opening a security hole. The same holds true for a variety of other features of C; developers either had to do careful bounds and length checking or their code could wander off into random places in memory.

Beyond such pedestrian problems, Objective-C simply began showing its age. Over time, other languages adopted some great features that were difficult to graft back onto a language like C. One example is what's termed a "generic." In C, if you want to do the same math with integers and floating point values, you have to write a separate function for each—and other functions for unsigned long integers, double-precision floating points, etc. With generics, you can write a single function that handles everything the compiler recognizes as a number.
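Swift's answer, which we'll come back to below, looks something like this. (This is our own sketch based on Apple's guide; the function name is ours.)

func smaller<T: Comparable>(a: T, b: T) -> T {
    // One generic function stands in for a whole family of C functions
    // like min_int(), min_uint(), min_double(), and so on.
    return a < b ? a : b
}

smaller(2, 7)        // works for Int
smaller(2.5, 7.1)    // and for Double, with no additional code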

Apple clearly could add some significant features to the Objective-C syntax—closures are one example—but it's not clear that it could have added everything it wanted. And the very nature of C meant that the language would always be inherently unsafe, with stability and security open to compromise by a single sloppy coder. Something had to change.

But why not take the easy route and adopt another existing language? Because of the close relationship between Objective-C and the Cocoa frameworks, Objective-C enabled the sorts of design patterns that made the frameworks effective. Most of the existing, mainstream alternatives didn't provide such a neat fit for the existing Cocoa frameworks. Hence, Swift.

The scene from WWDC this week. Credit: Megan Geuss

OK, why now?

Several reasons. Since it migrated development to LLVM, Apple has been in control of its own runtime and toolchain. This makes implementing changes (and fully understanding their consequences) a much easier thing to do. Apple has also gained some experience with language development already, adding things like properties, Automatic Reference Counting, and closures to Objective-C.

Making those changes also gave Apple a sense of how developers might respond in the future. When Apple added garbage collection and told its developers "this is a great option," pickup was mixed. When Apple added Automatic Reference Counting and said "this is the future," developers got on board much more quickly. (Apple's decision to kill 64-bit Carbon undoubtedly helped developers read the tea leaves in that case as well.)

All those earlier changes helped pave the way for some of the features of Swift. Closures are present; so is Automatic Reference Counting. Variables are handled in a manner quite similar to Objective-C's properties. And, because Apple's in control of everything, the same runtime can support both Swift and Objective-C, allowing legacy code to be mixed in with the new language. The change isn't anywhere close to being as disruptive as it would have been five years ago.

We'll probably never know how much of this was planned out long in advance, but you can definitely read the last five years of Apple's history as a combination of gaining experience, putting pieces in place, and finding out just how much disruption its developer community would tolerate.

Do you read me?

If you're going to write a new programming language, you face a tradeoff between ease of writing and readability. This is easiest to demonstrate with an example, so we'll borrow one from the Swift developer documentation. If we write out the following code, even someone who doesn't program much can probably get a sense of what it does.

if thisIsTrue {
    that = that + 50
} else {
    that = that + 20
}

Most people, on the other hand, would probably be bewildered if they saw the following:

that = that + (thisIsTrue ? 50 : 20)

The two pieces of code do exactly the same thing. The second is much quicker to type; the first is much easier to understand. Designers of programming languages have to make a whole series of such decisions about how to balance this tension. Those decisions have wide-ranging consequences, since readability affects things like how quickly newbies can understand code samples or how readily someone can step into a project that is filled with poorly commented code.

Unless you have something against the bracket characters, Objective-C has tended to err on the side of readability. If you had a function that pulled out the text between two bracketing strings, in most programming languages it would look generically like this:

myString.getTextBetweenBrackets(leftBracket, rightBracket);

But in actual use, the variables wouldn't be so conveniently named. Unless you remembered the function, you wouldn't necessarily know what order the two brackets came in. Objective-C got rid of the ambiguity at the cost of some added typing. The equivalent code would look like this:

[myString getTextBetweenLeftBracket:leftBracket andRightBracket:rightBracket];

The value of spelling out which argument did what increased as the number of arguments went up and the self-explanatory nature of the variable names went down. Swift looks like a bit of a hybrid, but it actually preserves this approach:

myString.getBracketedText(leftBracket: leftBracket, rightBracket: rightBracket)

In this important way, readability is maintained, and everybody who hated that aspect of Objective-C will probably continue to hate Swift. (Or perhaps we should say the readability can be maintained. Swift will also let you skip putting the method signature inside the parentheses. Apple recommends that you include this extra text, but we expect many people won't bother.)

In lots of other ways, Swift is quite a bit harder to read. In key places, the presence of a single character—an exclamation point or question mark—buried in a chain of function calls will completely change the behavior of the line of code. You can do much more with function definitions, like set default values and give the arguments internal and external names, but doing so means the function definition has to be read carefully in order to understand what it does.
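To make that concrete, here's a small sketch based on the guide (the names are ours). The question mark and exclamation point change how missing values are handled, and a function argument can have an external name, an internal name, and a default value:

let ages = ["Alice": 30]
let maybeAge = ages["Bob"]     // an optional Int; nil here, since "Bob" is missing
// let crash = ages["Bob"]!    // ! force-unwraps: this line would halt the program
let safeAge = ages["Alice"]!   // fine, because the key exists

// "separator" is the external name callers use; "sep" is the internal one.
func combine(a: String, b: String, separator sep: String = ", ") -> String {
    return a + sep + b
}

combine("eggs", "milk")                      // "eggs, milk"
combine("eggs", "milk", separator: " & ")    // "eggs & milk"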

To give another example from the language guide, let's look at how you define an enumeration of the planets:

enum Planet: Int {
    case Mercury = 1, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune
}

You only have to give a value to the first item. If the rest are left blank, the compiler assumes you want each one incremented by 1 and numbers them automatically. That's great, and it saves programmer time. But if you don't happen to remember that Saturn is the sixth planet from the Sun—or happen to be working with a list that's less memorable than planets—you'll find yourself wasting time as you try to figure out what number you should expect.

Where does this fall on our personal scale of readability vs. ease of typing? Until we use the language and read more code samples, it's hard to say. Right now, a lot of Swift code seems like it could be difficult to follow, but we expect that most of it will become second nature with use. Most programmers will likely be happy to adopt the relatively compressed syntax of Swift, but the language's flexibility may mean that people will end up unintentionally producing perfectly viable code that other Swift users will struggle to read.

Language basics: lots of options

As noted above, developers have the option of leaving out the method signature information from function calls. In fact, lots of things in Swift are optional. Semicolons to define the end of a line of code? If you'd like. Otherwise, the compiler will figure out when it has reached the end of a coherent statement and act accordingly. Parentheses around logical evaluations, like if statements? If they help readability. The compiler doesn't need to see them, though.
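Both styles compile to exactly the same thing; a trivial example:

let score = 85
if score > 60 { println("pass") }        // no parentheses, no semicolons

let strict = 85;
if (strict > 60) { println("pass"); }    // C-style punctuation is also accepted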

So many things are optional that it seems like you can, at your discretion, write Swift code that would require a careful look to distinguish it from other programming languages. Indeed, you can write Swift code that leaves users of those languages confused about what's going on.

Swift looks a bit like Java in that strings are now a fundamental part of the language, rather than simply an array of characters. Strings and numbers are also treated a bit more like objects, as they have properties associated with them. If you want to know the maximum value that can be safely stored in a UInt8 (8-bit unsigned integer), you call UInt8.max. Try to push a value past that limit and you'll get an error rather than silent corruption, unless you explicitly opt in to rolling over like an odometer using the dedicated overflow operators. In a nice touch, you can use scientific notation to assign values to floating point variables.
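A few of those behaviors, sketched from the guide:

let biggest = UInt8.max       // 255, a property on the type itself
// let tooBig: UInt8 = 300    // rejected: 300 won't fit in a UInt8
let rolled = UInt8.max &+ 1   // 0: &+ is the explicit "odometer" overflow operator
let avogadro = 6.022e23       // scientific notation produces a Double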

You can declare variables with an explicit type, or you can simply assign a value and let Swift figure out which type you actually want (i.e., if you assign something a value of 5.7, Swift will make the variable a double-precision floating point). Swift also encourages you to use constants when you've got a value that's not likely to change; instead of "var," those are declared using "let."
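For example:

var price = 5.7          // inferred as Double
var shares: Int = 100    // or spell the type out
let symbol = "AAPL"      // a constant; reassigning it is a compile error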

The usual logical, bitwise, and numerical operators are there—+ acts to concatenate strings, as well. You can also define your own operators, and Apple has two new ones for comparing objects. The logical == operator, when used to test two objects, now determines if they are equivalent. So, in the case of an array, it will check if they contain the same objects. To find out if they point to the same place in memory, you now use ===. There are also range operators; 0...50 will get you every integer between the two, including each end point, which is helpful for writing a simple loop.
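A quick sketch of the two comparisons and a range (Tracker is just a stand-in class of our own):

class Tracker {}
let a = Tracker()
let b = a
a === b                    // true: both point at the same instance
a === Tracker()            // false: a distinct instance
[1, 2, 3] == [1, 2, 3]     // true: equivalent contents

for i in 0...3 {
    println(i)             // prints 0, 1, 2, 3; both end points are included
}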

The usual collection of for/do/while loops exists, as do if/else statements. The big change is with switch/case statements. In most languages, you have to explicitly keep multiple case statements from executing, a common source of errors. With Swift, you have to explicitly tell the compiler if you want to fall through to the next case statement. You can also label different loops and logical operations and then use a "break myLabel" to make it clear which part of a set of nested operations you're breaking out of.
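Here's what that looks like in practice:

switch 2 {
case 1:
    println("one")
case 2:
    println("two")
    fallthrough              // without this, execution would stop right here
case 3:
    println("two or three")  // runs as well, thanks to the fallthrough
default:
    println("something else")
}

outer: for i in 1...3 {
    for j in 1...3 {
        if i * j == 6 { break outer }   // the label exits both loops at once
    }
}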

The language also includes arrays and dictionaries that can work with any of the numerical and string values, though you can't, say, mix strings and integers in the same array. These provide functions like .insert and .count, allowing you to access and manipulate the values they store.
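For instance, using the method names given in the guide:

var primes = [2, 3, 5, 11]
primes.insert(7, atIndex: 3)    // [2, 3, 5, 7, 11]
primes.count                    // 5
// primes.append("13")          // rejected: this array only holds integers

var moons = ["Earth": 1, "Mars": 2]
moons["Jupiter"] = 67           // dictionaries grow by simple assignment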

Structs, enumerations, and objects

C doesn't provide the sorts of object classes that Objective-C enables, but it does allow structs, collections of related data. Structs are back in Swift, where they can also hold methods; they're similar to classes but lack inheritance, deinitializers, and reference counting, and they're always passed around by copying. Classes behave much like classes do in Objective-C and get passed as a reference.
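The difference is easy to see in a sketch:

struct PointStruct { var x = 0 }
class PointClass  { var x = 0 }

var s1 = PointStruct()
var s2 = s1             // structs copy on assignment...
s2.x = 10               // ...so s1.x is still 0

let c1 = PointClass()
let c2 = c1             // classes share a single instance...
c2.x = 10               // ...so c1.x is now 10 as well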

The big difference is the role of enumerations, which Apple calls "first-class types." They can have initializers and aren't limited to associating values with integers. To use the example above, you can give Planet.Mars a value of four or a value of "fourth planet from the Sun." In fact, you can set Mars to a collection of four numbers representing its mass, radius, orbital period, and orbital distance. Those values can then be retrieved elsewhere in your code.
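A sketch of what that might look like (the enumeration and its rough, Earth-relative values are our own):

enum PlanetFact {
    case Ordinal(Int)                              // "fourth planet from the Sun"
    case Profile(Double, Double, Double, Double)   // mass, radius, orbital period, distance
}

let mars = PlanetFact.Profile(0.107, 0.53, 1.88, 1.52)

switch mars {
case .Ordinal(let n):
    println("Planet number \(n)")
case .Profile(let mass, let radius, _, _):
    println("Mass \(mass), radius \(radius)")
}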

The blurring of lines extends to basic variables. Arrays and dictionaries apparently respond to most of the same methods available from the NSArray and NSDictionary classes. The similarity among all the major items (classes, structs, etc.) helps get rid of the sometimes-jarring contrast between the portions of Cocoa that were object-oriented and the parts that relied more on a C-like approach.

Apple has always encouraged developers to access the variables held by an object using "setter" and "getter" methods, and Swift formalizes that habit, making them part of the normal class definition. Every property can now have "will change" and "did change" observers to help keep dependent values in sync when a variable changes. Initialization methods have also been sorted out, so there's a specific order in which subclasses call their superclass's initializer and set up their own contents.
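A sketch of the observers in action (the Thermostat class is our own invention):

class Thermostat {
    var celsius: Double = 20.0 {
        willSet { println("About to change to \(newValue)") }   // "will change"
        didSet { fahrenheit = celsius * 9 / 5 + 32 }            // "did change"
    }
    var fahrenheit = 68.0
}

let t = Thermostat()
t.celsius = 100.0    // prints the willSet message; t.fahrenheit becomes 212.0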

A feature from Objective-C that's returning is the protocol, in which a class can declare itself as guaranteeing to provide a certain amount of functionality. So, a "webPage" protocol might guarantee it will provide access to a URL, page text, and so on. This lets you produce a number of classes that provide webPage functionality and swap the precise implementation into where it's needed.
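Based on the guide, the webPage example might look something like this (all the names are ours):

protocol WebPage {
    var url: String { get }
    var pageText: String { get }
}

struct CachedPage: WebPage {
    let url: String
    let pageText: String                                     // stored up front
}

struct LivePage: WebPage {
    let url: String
    var pageText: String { return "fetched from \(url)" }   // computed on demand
}

// Code written against the protocol accepts either implementation.
func render(page: WebPage) { println(page.pageText) }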

Categories are also back, except renamed and on steroids. Categories were great for adding functionality to existing classes. So, for example, you could add the bracketed text function mentioned above as a category to NSString, and every single NSString you create in your app would respond to it. Categories are now renamed as extensions and can add new initializers and new variables to the class they're modifying. They can do things like modify basic language features, such as floating point numbers.
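Apple's guide illustrates that last point by extending Double itself, so that plain numeric literals can describe distances:

extension Double {
    var km: Double { return self * 1_000.0 }
    var m: Double { return self }
    var cm: Double { return self / 100.0 }
}

let marathon = 42.km + 195.m    // 42,195.0 meters; even literals get the new properties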

There's also a special type of value called a "tuple," which acts as a wrapper for multiple values. While a function can only return one item, that item can be a tuple that wraps a combination of multiple variables of different types. Once you have access to a tuple, you can set variables (or constants) to each of its values and start using them.
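For example (our own sketch, which assumes a non-empty array):

// A function can only return one item, but that item can be a tuple.
func minMax(values: [Int]) -> (min: Int, max: Int) {
    var lo = values[0]
    var hi = values[0]
    for v in values {
        if v < lo { lo = v }
        if v > hi { hi = v }
    }
    return (lo, hi)
}

let (low, high) = minMax([3, 9, 1])    // unpacks into low = 1, high = 9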

Closures, generics, and operator overloading

Apple introduced blocks, small chunks of code that can be passed around within an application, several OS iterations ago. They're great for things like dialogs, which typically have to execute some code when the user dismisses them. You can keep the code for the block where the dialog was handled (where it logically belongs), but hand it to the dialog, which only executes it once a button is pressed. It's both a convenience and a way to improve the readability of code. So, not surprisingly, blocks are back in Swift, this time under the guise of their formal name, closures. 
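A sketch of the dialog pattern, with a stand-in function of our own:

// The closure is written where the dialog is created, but it only
// runs when the dialog is dismissed.
func showDialog(message: String, onDismiss: () -> ()) {
    println(message)
    // ... later, after the user taps a button ...
    onDismiss()
}

showDialog("Delete this file?") {
    println("Dismissed; clean up here.")   // trailing-closure syntax
}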

Apple's Swift guide is free in iBooks. Credit: Apple

It's possible to hand blocks of code around in C as well. Functions end up with an address in memory, and you can make a pointer to that function. Swift offers something similar, allowing you to set variables that can hold any function that matches a specific signature. It's a pretty logical extension from closures, and again abstracts away the risks of working with an actual memory address.
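For instance:

func double(n: Int) -> Int { return n * 2 }
func square(n: Int) -> Int { return n * n }

var transform: Int -> Int = double   // holds any function with this signature
transform(4)                         // 8
transform = square
transform(4)                         // 16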

As we mentioned above, Swift adds generics, functions that can work with a variety of variable types. You can create a generic function that sums the contents of an array, for instance, regardless of what kinds of numbers are stored in the array. You can also restrict the types a generic works with by requiring its inputs to adopt specific protocols. For example, all of Swift's basic variable types implement the Equatable protocol, which allows them to respond to the == operator.
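Something along the lines of the guide's findIndex example shows the pattern; because T is constrained to Equatable, the == inside is guaranteed to work:

func findIndex<T: Equatable>(array: [T], valueToFind: T) -> Int? {
    for (index, value) in enumerate(array) {
        if value == valueToFind {
            return index
        }
    }
    return nil
}

findIndex(["cat", "dog", "llama"], "llama")    // Optional(2)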

Obviously, it could be useful for custom classes to implement Equatable as well, which means they have to be able to respond to ==. To make this possible, Swift uses a feature called operator overloading, which allows a programmer to define how the basic operators behave for custom classes. So, if you implement a custom class to represent hard drives, you can overload == to check whether the manufacturer, size, spindle speed, and so on are all identical. You could also overload + to simply produce a new drive with the combined capacity of the two.
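Sketched here as a struct for brevity (the type and its properties are our own):

struct HardDrive: Equatable {
    let manufacturer: String
    let capacityGB: Int
    let spindleRPM: Int
}

// Operator overloads are defined as global functions.
func == (lhs: HardDrive, rhs: HardDrive) -> Bool {
    return lhs.manufacturer == rhs.manufacturer &&
        lhs.capacityGB == rhs.capacityGB &&
        lhs.spindleRPM == rhs.spindleRPM
}

func + (lhs: HardDrive, rhs: HardDrive) -> HardDrive {
    // A new drive with the combined capacity of the two.
    return HardDrive(manufacturer: lhs.manufacturer,
        capacityGB: lhs.capacityGB + rhs.capacityGB,
        spindleRPM: lhs.spindleRPM)
}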

What's missing?

I'm sure <your language of choice> contains dozens of features that Swift doesn't, ensuring that Apple's latest effort will be crippled from the start. But the thing that leaps out is a complete lack of error catching. Yes, the compiler is now incredibly smart about spotting common errors, and the language features several intelligent ways to avoid doing dumb things with nil objects.

But no matter how clever the language and compiler are, people (like, say, me) are going to find ways to screw up. And we like to screw up gracefully, rather than see the program come crashing to a halt.

Of course, the current release is a beta, and Apple is almost certain to have designed the core language with extensibility in mind—it's unlikely to have forgotten its experience with adding features to Objective-C. There's still the chance that error catching could appear as early as the public beta if the developer response demands it (and the same goes for other features that you feel Swift is missing). Hopefully, Apple won't try to convince everyone that nobody really needs it until it's introduced as the greatest thing since sliced bread in OS X Weed.

A screenshot of some Swift action. Credit: Apple

Is it any good?

Swift isn't a radical departure in many ways. Apple likes certain design patterns, and it constructed Objective-C and Cocoa to encourage them. Swift does the same thing, going further toward formalizing some of the patterns that have been adopted in a somewhat haphazard way (like properties). Most of the features Swift adds already exist in other programming languages, and these will be familiar to many developers. The features that have been added are generally good ones, while the things that have been taken away (like pointer math) were generally best avoided anyway.

In that sense, Swift is a nice, largely incremental change from Objective-C. All the significant changes are in the basic syntax. Use semicolons and parentheses—or don't, it doesn't matter. Include the method signature in the function call—but only if you feel like it. In these and many other cases, Swift lets you choose a syntax and style you're comfortable with, in many cases allowing you to minimize typing if you choose to.

Most of the new features have been used in other languages, the syntax changes get rid of a lot of Objective-C's distinctiveness, and you're often able to write equivalent code using very different syntax. All of this lets Swift look familiar to a lot of people coming from other languages. That sort of rapport has become more important as Apple attracts developers who'd never even touched C before. These people will still have to learn to work with the design patterns of Apple's frameworks, but at least they won't be facing a language that's intimidatingly foreign at the same time.

In general, these things seem like positives. If Apple had chosen a single style, chances are good that a number of its choices wouldn't have been ones we'd favor. With the flexibility, we'll still be able to work close to the way we'd want.

Close, but not exactly. There are a couple of specific syntax features I'm personally not a fan of and a number of cases where a single character can make a radical difference to the meaning of a line of code. Combined, the syntax changes could make managing large projects and multiple developers harder than it has been with Objective-C.

What's Apple up to?

For starters, it's doing the obvious. Swift makes a lot of common errors harder and a number of bad practices impossible. If you choose, you can write Swift code pretty tersely, which should make things easier for developers, and the language adds some nice new features that should make those developers more productive. All of those are good things.

More generally, though, Apple is probably mildly annoyed with people like me. I spent time getting good at using autorelease pools, my apps didn't leak memory, and I didn't see the point in learning the vagaries of the syntax required to make sure Automatic Reference Counting didn't end up with circular references that couldn't be reclaimed. I wasn't a huge fan of the dot notation for accessing properties, so I only used it when it couldn't be avoided. In short, I was a dinosaur in waiting.

People like me are why the runtime and compiler teams can't have nice things. If everybody's using the same features, it's easier to get rid of legacy support and optimize the hell out of everything that's left. A smaller memory footprint and better performance mean lower component costs and better battery life, which are very good things for the company.

Apple promised better performance with Swift, and you can see some places where it might extract a bit. Constants are a big part of Swift, which makes sense. If you make a stock-tracking app, the price may change every second, but the stock's name and symbol change so rarely that it's just as easy to make a whole new object when this happens. Declare the name and symbol constants, and you can skip all the code required to change them in a thread-safe manner. Presumably, the compiler can manage some optimizations around the use of constants as well.

Unlike the dinosaurs, we can see Chicxulub coming. Two or three years from now, when Apple announces that the future is Swift and it's ready to drop Objective-C, we won't be at all surprised. And I won't be at all upset, because I'll have spent the intervening few years making sure I know how to use the new language.

Listing image: Brendan A. Ryan

John Timmer Senior Science Editor
John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.