In computing, signedness is a property of data types representing numbers in computer programs. A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers).
Because a signed type must also represent negative numbers, it cannot represent as large a positive range as an unsigned type of the same size (in bits): roughly half of its possible values are non-positive, whereas the corresponding unsigned type can dedicate all of its possible values to non-negative numbers.
For example, a two's complement signed 16-bit integer can hold the values −32768 to 32767 inclusive, while an unsigned 16-bit integer can hold the values 0 to 65535. In this sign representation, the leftmost bit (most significant bit) denotes whether the value is negative (0 for zero or positive, 1 for negative).
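As a rough illustration, the following C sketch prints the 16-bit limits from <stdint.h> and shows how the same bit pattern with the most significant bit set is read differently by signed and unsigned types. Converting an out-of-range value to int16_t is implementation-defined, so the signed reinterpretation assumes a typical two's-complement platform.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Limits of 16-bit signed and unsigned integers. */
    printf("int16_t : %" PRId16 " .. %" PRId16 "\n", INT16_MIN, INT16_MAX);
    printf("uint16_t: 0 .. %" PRIu16 "\n", UINT16_MAX);

    /* The bit pattern 0x8000 (most significant bit set) reads as 32768
       when unsigned, and as -32768 when signed on a two's-complement
       machine (the conversion below is implementation-defined). */
    uint16_t bits = 0x8000;
    printf("0x8000 as unsigned: %" PRIu16 "\n", bits);
    printf("0x8000 as signed  : %" PRId16 "\n", (int16_t)bits);
    return 0;
}
```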
In programming languages
For most architectures, there is no signed–unsigned type distinction in the machine language. Nevertheless, arithmetic instructions usually set different CPU flags, such as the carry flag for unsigned arithmetic and the overflow flag for signed arithmetic. Those flags can be taken into account by subsequent branch or arithmetic instructions.
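These flags are not directly accessible from C, but their practical consequence can be sketched: unsigned arithmetic wraps around modulo 2^N, while signed overflow is undefined behaviour and is instead checked with compiler support. A minimal sketch, assuming the GCC/Clang __builtin_add_overflow extension:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int u = UINT_MAX;
    /* Unsigned arithmetic wraps around: UINT_MAX + 1 == 0
       (this mirrors the carry flag being set in hardware). */
    printf("UINT_MAX + 1 = %u\n", u + 1u);

    int s = INT_MAX, result;
    /* Signed overflow is undefined behaviour in C, so it is detected
       here with a compiler builtin rather than computed directly
       (this mirrors the overflow flag being set in hardware). */
    if (__builtin_add_overflow(s, 1, &result))
        printf("INT_MAX + 1 overflows\n");
    return 0;
}
```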
The C programming language, along with its derivatives, implements signedness for all integer data types, as well as for char. For integers, the unsigned modifier defines the type to be unsigned. The default integer signedness outside bit-fields is signed, but it can be set explicitly with the signed modifier. By contrast, the C standard declares signed char, unsigned char, and char to be three distinct types, but specifies that all three must have the same size and alignment. Further, char must have the same numeric range as either signed char or unsigned char, but the choice of which depends on the platform. Integer literals can be made unsigned with the U suffix.
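A short sketch of these declarations; the printed range of plain char depends on whether the platform treats it as signed or unsigned:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    signed int   a = -1;   /* explicit signed modifier                 */
    int          b = -1;   /* signed by default outside bit-fields     */
    unsigned int c = 1U;   /* unsigned type; literal with the U suffix */
    printf("a=%d b=%d c=%u\n", a, b, c);

    /* char, signed char and unsigned char are three distinct types of
       the same size; whether plain char is signed or unsigned is
       platform-dependent, as its limits show. */
    printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
    return 0;
}
```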
Compilers often issue a warning when comparisons are made between signed and unsigned numbers or when one is cast to the other. These are potentially dangerous operations as the ranges of the signed and unsigned types are different.
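A typical example of why such warnings matter: when a signed value is compared with an unsigned value of the same rank, the signed operand is converted to unsigned, so a negative number becomes a very large positive one. A minimal sketch:

```c
#include <stdio.h>

int main(void) {
    int      s = -1;
    unsigned u = 1U;

    /* -1 is converted to unsigned (UINT_MAX) before the comparison,
       so the "obvious" result is reversed; compilers typically warn here. */
    if (s < u)
        printf("-1 < 1u evaluates as true\n");
    else
        printf("-1 < 1u evaluates as false\n");  /* this branch is taken */
    return 0;
}
```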
| Bits | Minimum value | Maximum value |
|---|---|---|
| 8 (signed) | −128 | 127 |
| 16 (signed) | −32768 | 32767 |
| 32 (signed) | −2147483648 | 2147483647 |
| 64 (signed) | −9223372036854775808 | 9223372036854775807 |
| 128 (signed) | −170141183460469231731687303715884105728 | 170141183460469231731687303715884105727 |
| 8 (unsigned) | 0 | 255 |
| 16 (unsigned) | 0 | 65535 |
| 32 (unsigned) | 0 | 4294967295 |
| 64 (unsigned) | 0 | 18446744073709551615 |
| 128 (unsigned) | 0 | 340282366920938463463374607431768211455 |
See also
- Sign bit
- Signed number representations
- Sign (mathematics)
- Binary Angular Measurement System, an example of semantics where signedness does not matter
External links
edit- "Numeric Type Overview". MySQL 5.0 Reference Manual. mysql.com. 2011. Retrieved 6 January 2012.
- "Understand integer conversion rules", CERT C Coding Standard, Computer emergency response team, retrieved December 31, 2015