data types Flashcards
character set
the mapping of a collection of characters to specific bit sequences or codes
all data types are held in the computer as…
binary
character
a letter, number or special character typically represented in ASCII
max and min values represented by n bits in binary
min = 0
max = 2^n - 1
(so total characters represented 2^n)
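A quick check of this in Python (the helper name `unsigned_range` is just for illustration):

```python
# For an unsigned n-bit value the smallest code is 0 and the largest
# is 2**n - 1, giving 2**n distinct values in total.
def unsigned_range(n: int) -> tuple[int, int]:
    """Return (min, max) representable with n unsigned bits."""
    return 0, 2**n - 1

print(unsigned_range(8))   # 8 bits: 0 to 255, i.e. 256 values
```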
why is hex used rather than binary
less likely to make mistakes
easier to remember
simpler
define bit
fundamental unit of information in the form of either a single 0 or 1.
increasing order of numbers 2^10, 2^20….2^80
kibi (Ki), mebi (Mi), gibi (Gi), tebi (Ti), pebi (Pi), exbi (Ei), zebi (Zi), yobi (Yi)
increasing order of numbers 10^3, 10^6…10^24
kilo (k), mega (M), giga (G), tera (T), peta (P), exa (E), zetta (Z), yotta (Y)
ASCII - American Standard Code for Information Interchange
historically the standard code for representing characters on a keyboard. The first 32 codes are non-printing control characters. 7 bits give 128 characters; it was later extended to 8 bits (backwards compatible, by simply adding a leading 0), allowing 128 more characters for symbols
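Python's built-in `ord()` and `chr()` expose these code values directly; for ASCII characters they match the 7-bit codes described above:

```python
# ord() gives a character's code; chr() does the reverse.
print(ord('A'))    # 65
print(chr(65))     # 'A'
# Codes 0-31 are the non-printing control characters,
# e.g. line feed:
print(ord('\n'))   # 10
```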
UNICODE
by the 1980s there were several incompatible coding systems for different languages, which caused difficulty as multilingual data was increasingly exchanged. Unicode with UTF-16 provided 65,536 characters covering many scripts, e.g. Latin, Greek, Arabic. The first 128 codes are the same as ASCII, so it is backwards compatible. UTF-32 then extended the range further, e.g. for additional Chinese and Japanese characters.
But each character now takes 4 bytes rather than 2, increasing file sizes and transmission times.
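The size difference is easy to see in Python. The little-endian codec names (`utf-16-le`, `utf-32-le`) are used here only so Python doesn't prepend a byte-order mark to each result:

```python
# Each UTF-32 code unit is a fixed 4 bytes; UTF-16 uses 2 bytes
# for characters in the Basic Multilingual Plane.
for ch in ['A', 'é', '中']:
    print(ch,
          len(ch.encode('utf-16-le')),   # 2 bytes each here
          len(ch.encode('utf-32-le')))   # 4 bytes each
```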
overflow
when there is a carry out of the most significant bit, requiring an extra bit - i.e. the result of a calculation has exceeded the largest number the register's word size can hold
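A sketch of overflow in an 8-bit register, using masking to mimic the register discarding the extra bit (Python ints themselves never overflow):

```python
# An 8-bit register holds 0-255; adding past that carries out of the MSB.
a, b = 200, 100
full = a + b               # 300 - needs 9 bits
carry = (full >> 8) & 1    # the bit that overflowed out of the register
stored = full & 0xFF       # what an 8-bit register actually keeps
print(full, carry, stored) # 300 1 44
```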
why is sign and magnitude bad for arithmetic
you can't just add the binary digits (the sign bit must be handled separately), which makes the arithmetic circuits difficult to implement in hardware
range using two’s complement
-2^(n-1) … 2^(n-1) - 1
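The same formula as a small Python helper (the name `twos_complement_range` is just for illustration):

```python
# n-bit two's complement: one bit pattern is the sign/weight -2^(n-1),
# so the range is asymmetric.
def twos_complement_range(n: int) -> tuple[int, int]:
    return -(2**(n - 1)), 2**(n - 1) - 1

print(twos_complement_range(8))   # (-128, 127)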
fixed point accuracy
less accurate: some fractions can't be represented exactly at all, and truncating to a fixed number of fractional places causes rounding errors.
range is limited by the fractional part: you trade off larger, less accurate numbers against smaller, more accurate ones
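A minimal sketch of the truncation problem, assuming a format with 4 fractional bits (so only multiples of 1/16 are representable):

```python
# Store 0.1 in a fixed-point format with 4 fractional bits.
FRAC_BITS = 4
raw = int(0.1 * 2**FRAC_BITS)   # truncates: 0.1 * 16 = 1.6 -> 1
approx = raw / 2**FRAC_BITS     # value actually stored
print(raw, approx)              # 1 0.0625 - a large rounding error
```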
floating point acc
can represent much larger numbers, and with greater accuracy, for a given number of bits.
mantissa determines precision and exponent determines range.
normalisation
process of moving the binary point to provide max level of precision for a given number of bits
normalised binary mantissa
lies between half and one (0.5 ≤ m < 1), or between minus one and minus half for negative numbers
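Python's `math.frexp` decomposes a float into exactly this kind of normalised form, with a mantissa whose magnitude lies between half and one:

```python
import math

# frexp returns (m, e) with 0.5 <= |m| < 1 and x == m * 2**e.
print(math.frexp(6.0))     # (0.75, 3): 6 = 0.75 * 2**3
print(math.frexp(-0.625))  # (-0.625, 0)
```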
logical shift instructions
all the bits move left or right
why is logical shift useful
for examining LSB/MSB: carry bit can then be tested and a conditional branch executed
logical shift right explain..
LSB shifted to carry bit and zero moves into MSB to occupy vacated space
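A sketch of this in Python, treating the value as 8 bits wide (Python ints are unbounded, so the "carry" has to be captured by hand before shifting):

```python
# Logical shift right on an 8-bit value: 0 enters at the MSB,
# the old LSB plays the role of the carry bit.
value = 0b10010011
carry = value & 1                # LSB about to be shifted out
shifted = value >> 1
print(f'{shifted:08b}', carry)   # 01001001 1
```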
arithmetic shift instructions
all bits move left or right except sign bit, which is taken into account and remains the same
arithmetic shift left explain..
the shift bypasses the sign bit: the MSB stays the same, all other bits shift left, and the old second bit from the left moves into the carry bit
effect of arithmetic shift left
multiplying by 2
effect of arithmetic shift right
dividing by 2
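Python's `>>` on a negative integer already behaves like an arithmetic shift right (the sign is preserved), which makes both effects easy to check:

```python
# Arithmetic shift right divides by 2 (preserving the sign);
# shift left multiplies by 2.
print(-8 >> 1)   # -4
print(12 >> 1)   # 6
print(6 << 1)    # 12
```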
how can you multiply two binary numbers
arithmetic shifts and addition
multiply 9 x 5 using arithmetic shifts?
9x1 ADD 9x4 via 2 left shifts
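The worked example above, written out as shifts in Python:

```python
# 9 * 5 = 9*4 + 9*1: two left shifts give 9*4, then add 9*1.
n = 9
result = (n << 2) + n
print(result)   # 45
```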
circular shifts/rotate shifts are useful for
shifts in multiple bytes
circular shift right explain…
LSB value moved to carry bit
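A sketch of a plain 8-bit rotate right in Python (real CPUs also copy the outgoing LSB into the carry flag, as the card says; the helper name `ror8` is my own):

```python
# Circular shift right on 8 bits: the LSB wraps round to the MSB
# instead of being lost.
def ror8(x: int) -> int:
    lsb = x & 1
    return (x >> 1) | (lsb << 7)

print(f'{ror8(0b00000011):08b}')   # 10000001
```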
define bitwise operations
similar to boolean logic operations but they work on individual bits in a byte rather than whole codes or characters
define masks
bit patterns that have been defined by a programmer, allowing specific bits in a piece of data to be altered or tested
masks: how to reset/turn off a bit to 0
can't use OR; AND the value with a mask that has 0 in the bit to reset and 1 everywhere else
masks: how to turn on a specific bit
OR the value with a mask that has 1 in the bit to set and 0 everywhere else
masks: how to check a specific bit
AND the value with a mask that has 1 in the bit to check and 0 everywhere else; all other bits become 0, so the result is non-zero only if that bit was set
masks: how to toggle bits- turn bits on and off at the same time
XOR the value with a mask that has 1 in the bits to toggle and 0 in the bits to leave unchanged
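All four mask operations on one 8-bit example (bit 0 is the LSB):

```python
value = 0b01010101

cleared = value & 0b11111011   # AND: force bit 2 to 0
set_    = value | 0b00001000   # OR:  force bit 3 to 1
tested  = value & 0b00000100   # AND: non-zero iff bit 2 was 1
toggled = value ^ 0b00000011   # XOR: flip bits 0 and 1, leave the rest

print(f'{cleared:08b} {set_:08b} {tested:08b} {toggled:08b}')
# 01010001 01011101 00000100 01010110
```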