Numeric Types

C# has the following predefined numeric types:

| C# type | System type | Suffix | Size | Range |
|---------|-------------|--------|------|-------|
| Integral—signed | | | | |
| sbyte | SByte | | 8 bits | −2⁷ to 2⁷−1 |
| short | Int16 | | 16 bits | −2¹⁵ to 2¹⁵−1 |
| int | Int32 | | 32 bits | −2³¹ to 2³¹−1 |
| long | Int64 | L | 64 bits | −2⁶³ to 2⁶³−1 |
| Integral—unsigned | | | | |
| byte | Byte | | 8 bits | 0 to 2⁸−1 |
| ushort | UInt16 | | 16 bits | 0 to 2¹⁶−1 |
| uint | UInt32 | U | 32 bits | 0 to 2³²−1 |
| ulong | UInt64 | UL | 64 bits | 0 to 2⁶⁴−1 |
| Real | | | | |
| float | Single | F | 32 bits | ± (~10⁻⁴⁵ to 10³⁸) |
| double | Double | D | 64 bits | ± (~10⁻³²⁴ to 10³⁰⁸) |
| decimal | Decimal | M | 128 bits | ± (~10⁻²⁸ to 10²⁸) |

Of the integral types, int and long are first-class citizens and are favored by both C# and the runtime. The other integral types are typically used for interoperability or when space efficiency is paramount.

Of the real number types, float and double are called floating-point types and are typically used for scientific calculations. The decimal type is typically used for financial calculations, where base-10-accurate arithmetic and high precision are required. (Technically, decimal is a floating-point type too, although it’s not generally referred to as such.)

Integral literals can use decimal or hexadecimal notation; hexadecimal is denoted with the 0x prefix (for example, 0x7f is equivalent to 127). Real literals may use decimal or exponential notation, such as 1E06.
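
For example (the variable names here are purely illustrative):

int     dec = 127;     // Decimal notation
int     hex = 0x7f;    // Hexadecimal notation (also 127)
double  exp = 1E06;    // Exponential notation (1,000,000)
float   f   = 4.5F;    // F suffix denotes a float literal
decimal m   = 1.23M;   // M suffix denotes a decimal literal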

The arithmetic operators (+, −, *, /, %) are defined for all numeric types except the 8- and 16-bit integral types. The % operator evaluates the remainder after division.
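
To illustrate (a quick sketch; note that dividing two integers truncates toward zero):

Console.WriteLine (7 / 2);     // 3   (integral division)
Console.WriteLine (7 % 2);     // 1   (remainder)
Console.WriteLine (7.0 / 2);   // 3.5 (real division)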

The increment and decrement operators (++, −−) increment or decrement numeric types by 1. The operator can either precede or follow the variable, depending on whether you want the variable to be updated before or after the expression is evaluated. For example:

int x = 0;
Console.WriteLine (x++);   // Outputs 0; x is now 1
Console.WriteLine (++x);   // Outputs 2; x is now 2
Console.WriteLine (--x);   // Outputs 1; x is now 1

C# supports the following bitwise operations:

| Operator | Meaning | Sample expression | Result |
|----------|---------|-------------------|--------|
| ~ | Complement | ~0xfU | 0xfffffff0U |
| & | And | 0xf0 & 0x33 | 0x30 |
| \| | Or | 0xf0 \| 0x33 | 0xf3 |
| ^ | Exclusive Or | 0xff00 ^ 0x0ff0 | 0xf0f0 |
| << | Shift left | 0x20 << 2 | 0x80 |
| >> | Shift right | 0x20 >> 1 | 0x10 |
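
These operators are commonly combined to set, test, and clear individual bits. A minimal sketch (the flag value 0x4 is chosen arbitrarily for illustration):

int flags = 0;
flags |= 0x4;                      // Set bit 2
bool isSet = (flags & 0x4) != 0;   // Test bit 2: true
flags &= ~0x4;                     // Clear bit 2
Console.WriteLine (isSet);         // True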

The 8- and 16-bit integral types are byte, sbyte, short, and ushort. These types lack their own arithmetic operators, so C# implicitly converts them to larger types as required. This can cause a compilation error when trying to assign the result back to a small integral type:

short x = 1, y = 1;
short z = x + y;          // Compile-time error

In this case, x and y are implicitly converted to int so that the addition can be performed. This means the result is also an int, which cannot be implicitly cast back to a short (because it could cause loss of data). To make this compile, we must add an explicit cast:

short z = (short) (x + y);   // OK

Unlike integral types, floating-point types have values that certain operations treat specially. These special values are NaN (Not a Number), +∞, −∞, and −0. The float and double types have constants for NaN, +∞, and −∞ (as well as other values, including MaxValue, MinValue, and Epsilon). For example:

Console.Write (double.NegativeInfinity);   // -Infinity
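
The other constants can be written out in the same way; for instance (a quick sketch; the exact textual output of MaxValue and Epsilon depends on the runtime's formatting):

Console.WriteLine (double.NaN);               // NaN
Console.WriteLine (double.PositiveInfinity);  // Infinity
Console.WriteLine (double.MaxValue);          // Largest finite double
Console.WriteLine (double.Epsilon);           // Smallest positive double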

Dividing a nonzero number by zero results in an infinite value:

Console.WriteLine ( 1.0 /  0.0);   //  Infinity
Console.WriteLine (-1.0 /  0.0);   // -Infinity
Console.WriteLine ( 1.0 / -0.0);   // -Infinity
Console.WriteLine (-1.0 / -0.0);   //  Infinity

Dividing zero by zero, or subtracting infinity from infinity, results in a NaN:

Console.Write ( 0.0 / 0.0);                 //  NaN
Console.Write ((1.0 / 0.0) - (1.0 / 0.0));  //  NaN

When using ==, a NaN value is never equal to another value, even another NaN value. To test whether a value is NaN, you must use the float.IsNaN or double.IsNaN method:

Console.WriteLine (0.0 / 0.0 == double.NaN);    // False
Console.WriteLine (double.IsNaN (0.0 / 0.0));   // True

When using object.Equals, however, two NaN values are equal:

bool isTrue = object.Equals (0.0/0.0, double.NaN);

double is useful for scientific computations (such as computing spatial coordinates). decimal is useful for financial computations and values that are “man-made” rather than the result of real-world measurements. Here’s a summary of the differences:

| Feature | double | decimal |
|---------|--------|---------|
| Internal representation | Base 2 | Base 10 |
| Precision | 15–16 significant figures | 28–29 significant figures |
| Range | ±(~10⁻³²⁴ to ~10³⁰⁸) | ±(~10⁻²⁸ to ~10²⁸) |
| Special values | +0, −0, +∞, −∞, and NaN | None |
| Speed | Native to processor | Nonnative to processor (about 10 times slower than double) |

float and double internally represent numbers in base-2. For this reason, most literals with a fractional component (which are in base-10) will not be represented precisely:

float tenth = 0.1f;                     // Not quite 0.1
float one   = 1f;
Console.WriteLine (one - tenth * 10f);  // -1.490116E-08

This is why float and double are bad for financial calculations. In contrast, decimal works in base-10 and can precisely represent fractional numbers such as 0.1 (whose base-10 representation is nonrecurring).
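
By way of contrast, here is the same calculation performed with decimal (a minimal sketch mirroring the float example above); because 0.1 is exactly representable in base 10, the result is exactly zero:

decimal tenth = 0.1M;                    // Exactly 0.1
decimal one   = 1M;
Console.WriteLine (one - tenth * 10M);   // 0.0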