Like most programming languages, Swift includes a full complement of built-in data types that store numbers, characters, strings, and Boolean values.

Swift provides built-in numeric data types that represent either integer or floating-point values.

The following table summarizes the available Swift numeric data types:

Type      Min value        Max value
Int8      -128             127
Int16     -32768           32767
Int32     -2.1 x 10^9      2.1 x 10^9
Int64     -9.2 x 10^18     9.2 x 10^18
UInt8     0                255
UInt16    0                65535
UInt32    0                4.3 x 10^9
UInt64    0                1.8 x 10^19
Double    -1.8 x 10^308    1.8 x 10^308
Float     -3.4 x 10^38     3.4 x 10^38

Like many fully compiled languages, Swift is a strongly typed language, and requires explicit type conversions (or casts) when assigning the value from one variable type to a variable of a different type.

Many new Swift programmers find that Swift is even stricter than languages they've used before. In many programming languages, the compiler will implicitly convert between data types during an assignment so long as the value on the right of the equals sign could not overflow the variable being assigned to on the left.

In other words, many languages would accept assigning an Int8 value directly to an Int16 variable, since an Int8 is known to always fit into an Int16 without a numeric overflow.

However, the equivalent assignment in Swift results in a compile-time error.

The compiler rejects the assignment with an error along the lines of: cannot assign value of type 'Int8' to type 'Int16'.

In Swift, it's always the programmer's responsibility to ensure that assignments have the same data type on the left and right of the assignment operator (that is, the equals sign). When the types differ, the value must be explicitly converted to the destination type.
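As a minimal sketch (the variable names below are illustrative), explicit conversion is what makes such an assignment compile:

```swift
let smallNumber: Int8 = 100
var largerNumber: Int16 = 0

// Assigning smallNumber directly would not compile:
// largerNumber = smallNumber

// Converting explicitly to Int16 satisfies the compiler:
largerNumber = Int16(smallNumber)
print(largerNumber)  // prints 100
```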

Now, let's see how to use various numeric variable types by following these steps:

  1. Launch Xcode as before, and create a new playground named Topic B Using Numeric Types.playground.
  2. Add the following code to the playground to create three Int variables, using binary, base10, and base16 literal notation, respectively:
    var base2 = 0b101010
    var base10 = 42
    var hex = 0x2A
  3. Now add the following three corresponding lines to print the data type and value for each of the variables you just created:
    print("Printing \(type(of: base2)): \(base2)")
    print("Printing \(type(of: base10)): \(base10)")
    print("Printing \(type(of: hex)): \(hex)")

    Examining the output, note that the three variables all have the same data type (Int) and same value (42 in base 10).

  4. Add the following lines of code to create two more variables, and to print the types and values for each:
    var scientific = 4.2E+7
    let double = 4.99993288828
    print("Printing \(type(of: scientific)): \(scientific)")
    print("Printing \(type(of: double)): \(double)")

    Note that both variables were created as Double types—even though the value of the first is actually a whole number. Swift's inference system doesn't always look at the actual value; in this case, the presence of scientific notation in the literal caused Swift to assume the value should be a Double.

  5. Now add the following lines to cast and round the variable named double to an Int:
    var castToInt = Int(double)
    var roundToInt = Int(double.rounded())
    print("Printing \(type(of: castToInt)): \(castToInt)")
    print("Printing \(type(of: roundToInt)): \(roundToInt)")

    As you probably expected, castToInt discarded the fractional part of the original double value. For the roundToInt variable, we called the rounded() method on the variable double, and then converted that result. Since 4.999... was rounded up to 5 before being converted, the Int contains the rounded value.

  6. Finally, add the following lines to create a very large unsigned integer and then print its type and value:
    var bigUnsignedNumber:UInt64 = 18_000_000_000_000_000_000
    print("Printing \(type(of: bigUnsignedNumber)): \(bigUnsignedNumber)")

    This code works as expected—printing an integer with 20 digits (the underscores in the literal make it easier to count the digits).

    Note that in this case, we specified UInt64 should be the data type for this variable. Had we not made the type explicit, Swift's type inference rules would have assigned the smaller Int data type to the variable, and this literal would have overflowed it—producing a compile-time error.

Again, keep in mind that the inference engine examines the format of a literal at least as much as the numeric value being assigned. You should rely on the inference engine by default, but keep in mind you may sometimes need to be explicit when you know more about how a variable will be used than Swift can infer.
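To illustrate this point (a small sketch; the names are not from the steps above), the same numeric value can infer different types depending on how the literal is written, and an explicit annotation always takes priority:

```swift
let inferred = 4.2E+7            // scientific notation, so Swift infers Double
let annotated: Int = 42_000_000  // explicit annotation yields an Int

print(type(of: inferred))   // Double
print(type(of: annotated))  // Int
```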

The Character data type in Swift is an extended grapheme cluster.

What does that mean?

An extended grapheme cluster is an ordered sequence of one or more Unicode scalars (that is, values) that, when taken together, produce a human-readable character.

Most important to understand is that, unlike ASCII or ANSI character representations many programmers have worked with before, a Character in Swift may be made of more than one Unicode value.

In Swift 4, the underlying complexities of Unicode, scalar values, and extended grapheme clusters are largely managed for you, but as you begin to work natively with Unicode characters and strings, bear in mind that the Swift Character/String architecture was developed from the ground up around Unicode character representation—not ANSI/ASCII as many other languages were.
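A short sketch of this idea: the character é can be written as a single precomposed Unicode scalar or as two scalars (e plus a combining accent), and Swift treats both as the same single Character:

```swift
let single: Character = "\u{E9}"          // é as one precomposed scalar
let combined: Character = "\u{65}\u{301}" // e + combining acute accent (two scalars)

print(single == combined)                     // true: canonically equivalent
print(String(combined).unicodeScalars.count)  // 2: one Character, two scalars
```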

Strings in Swift are very similar to strings in other programming languages. As string handling is so central to any application development project, we'll dedicate an entire subsequent lesson to Swift's powerful string handling capabilities. In this section, we'll discuss the basics for declaring and using a string.

Fundamentally, strings are collections of Character values, supporting the familiar assignment operator (=), substrings, concatenation, and C-inspired escape sequences.
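A brief sketch of these basics (the names here are illustrative):

```swift
var greeting = "Hello"            // assignment
greeting += ", playground"        // concatenation
let line = greeting + "!\n"       // C-inspired escape sequence (newline)
let firstWord = line.prefix(5)    // substring: "Hello"
print(firstWord)                  // prints Hello
```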

Now that you've learned about the various data types available with Swift, let's put this knowledge into practice by using various types together, and also using the Apple Foundation framework.

Use an Xcode playground to practice various data types. You'll be using numeric data types, formatting them as strings, and using string interpolation to print string values from various data types.

  1. Launch Xcode as before, and create a new playground named Data Type Summary.playground.
  2. Add the following code to the playground to create an immutable Double with an initial value:
    let dVal = 4.9876
  3. Next, create a Boolean mutable variable with an initial value of true, and another variable set to the Double variable after rounding to a whole number:
    var iValRounded = true
    var iVal = Int(dVal.rounded())
  4. Next, we're going to use a class from Foundation to create a string representation of the Double value, rounded to two digits. If you're not familiar with NumberFormatter, don't worry. This is just one of the many utility classes Apple provides in its expansive SDK for macOS and iOS:
    var formatDigits = 2
    let nf = NumberFormatter()
    nf.numberStyle = .decimal
    nf.maximumFractionDigits = formatDigits
    var formattedDouble = nf.string(from: NSNumber(value: dVal)) ?? "#Err"

    Because NumberFormatter's string(from:) method returns an optional, we need either to check it (with if let) or, as here, provide a default value ("#Err") via the nil-coalescing operator (??) in case the method returns nil.

  5. Now add the following line to print a statement about the values we've created:
    print("The original number was \(formattedDouble) (rounded to \(formatDigits) decimal places), while the value \(iValRounded ? "rounded" : "unrounded") to Integer is \(iVal).")

    The output of this code is as follows:

    The original number was 4.99 (rounded to 2 decimal places), while the value rounded to Integer is 5.
  6. Finally, add the following lines to change the rounding strategy, and print a sentence about the result of the new string conversions:
    formatDigits = 0
    nf.maximumFractionDigits = formatDigits
    formattedDouble = nf.string(from: NSNumber(value: dVal)) ?? "#Err"
    iValRounded = false
    iVal = Int(dVal)
    print("The original number was \(formattedDouble) (rounded to \(formatDigits) decimal places), while the value \(iValRounded ? "rounded" : "unrounded") to Integer is \(iVal).")

    The output of this second sentence is as follows:

    The original number was 5 (rounded to 0 decimal places), while the value unrounded to Integer is 4.