Component 12 - Key Definitions Flashcards
Primitive data types, integer, real/floating point, character, string and Boolean.
Data types have a significant effect on how a program is compiled: different data types require different amounts of memory, different methods of processing and so on. Choosing an incorrect data type, e.g. an integer when you need a real, can cause significant logic errors in a program.
Integer = a whole number
Real = a number with fractional parts
Character = a single letter, number or symbol
String = zero or more characters
Boolean = true or false
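As a quick illustration (not from the handout), here is how these primitive types might look as Python values:

```python
# Illustrative only: the primitive types as Python values.
whole  = 42      # integer
price  = 3.75    # real / floating point
grade  = "A"     # character (Python treats this as a string of length 1)
name   = "Ada"   # string
passed = True    # Boolean

print(type(whole), type(price), type(grade), type(name), type(passed))
```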
Represent positive integers in binary
See the notes handout in the lesson folder for methods.
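For a quick self-check alongside the handout, a short Python sketch of the repeated-division-by-2 method (the function name is just for illustration):

```python
# Repeated division by 2: denary to binary.
def to_binary(n):
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # each remainder is the next bit, right to left
        n //= 2
    return bits

print(to_binary(13))   # 1101
print(bin(13))         # Python's built-in: 0b1101
```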
Use of sign and magnitude and two’s complement to represent negative numbers in binary.
See the notes handout in the lesson folder for methods.
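A small Python sketch for checking answers, assuming an 8-bit width (the helper names are just for illustration):

```python
# Two's complement: invert the bits, then add 1 (8-bit width assumed here).
def twos_complement_8bit(n):
    # n is a positive integer; the result is the 8-bit pattern for -n
    return format((~n + 1) & 0xFF, "08b")

# Sign and magnitude: left-most bit is the sign, remaining 7 bits are the magnitude.
def sign_magnitude_8bit(n):
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), "07b")

print(twos_complement_8bit(5))    # 11111011  (-5)
print(sign_magnitude_8bit(-5))    # 10000101  (-5)
```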
Addition and subtraction of binary integers.
See the notes handout in the lesson folder for methods.
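A Python sketch (8-bit width assumed) for checking addition and subtraction answers; subtraction is done by adding the two's complement:

```python
# 8-bit binary addition and subtraction (subtraction = add the two's complement).
def add_8bit(a, b):
    return format((a + b) & 0xFF, "08b")   # keep 8 bits; any carry out is discarded

def sub_8bit(a, b):
    return add_8bit(a, (~b + 1) & 0xFF)    # a + (two's complement of b)

print(add_8bit(0b01010101, 0b00001111))    # 01100100  (85 + 15 = 100)
print(sub_8bit(0b00001111, 0b00000101))    # 00001010  (15 - 5 = 10)
```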
Represent positive integers in hexadecimal
See the notes handout in the lesson folder for methods.
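A short Python sketch of the repeated-division-by-16 method, useful for checking answers:

```python
# Repeated division by 16: denary to hexadecimal.
HEX_DIGITS = "0123456789ABCDEF"

def to_hex(n):
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        digits = HEX_DIGITS[n % 16] + digits
        n //= 16
    return digits

print(to_hex(255))   # FF
print(hex(255))      # Python's built-in: 0xff
```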
Convert positive integers between binary, hexadecimal and denary.
See the notes handout in the lesson folder for methods.
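A quick way to check conversions using Python's built-in base handling:

```python
# The same value in denary, binary and hexadecimal.
value = 0b10111010             # binary literal (denary 186)
print(value)                   # 186
print(format(value, "08b"))    # 10111010
print(format(value, "02X"))    # BA

# Going the other way: parse a string in a given base with int().
print(int("10111010", 2))      # 186
print(int("BA", 16))           # 186
```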
Representation and normalisation of floating point numbers in binary.
See the notes handout in the lesson folder for methods.
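A Python sketch for checking answers, assuming a format with a two's complement mantissa (binary point immediately after the sign bit) and a two's complement exponent; the format in the handout may differ:

```python
# Value of a floating point number stored as mantissa + exponent, both in
# two's complement, with the mantissa's binary point after its sign bit.
def fp_value(mantissa_bits, exponent_bits):
    sign = -1 if mantissa_bits[0] == "1" else 0
    mantissa = sign + sum(int(b) * 2 ** -(i + 1)
                          for i, b in enumerate(mantissa_bits[1:]))
    exponent = int(exponent_bits, 2)
    if exponent_bits[0] == "1":                 # negative exponent
        exponent -= 2 ** len(exponent_bits)
    return mantissa * 2 ** exponent

# Normalised: 0.110 | 0011  ->  0.75 * 2^3 = 6
print(fp_value("0110", "0011"))   # 6.0
```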
Floating point arithmetic, positive and negative numbers, addition and subtraction.
See the notes handout in the lesson folder for methods.
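A quick denary check of a floating point addition (the paper method in the handout works on the bit patterns directly; this only verifies the result, and assumes the same mantissa/exponent format as the sketch above):

```python
# Checking a floating point addition in denary.
# 0.101 | 0010  ->  0.625 * 2^2 = 2.5
# 0.110 | 0001  ->  0.75  * 2^1 = 1.5
total = 0.625 * 2 ** 2 + 0.75 * 2 ** 1
print(total)                   # 4.0
# 4.0 re-normalised in binary: 0.100 with exponent 0011 (0.5 * 2^3).
```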
Bitwise manipulation and masks: shifts, combining with AND, OR, and XOR
Shifting allows us to perform integer multiplication and division by powers of two quickly. Logical shifts ignore the sign bit and simply insert zeros as necessary; arithmetic shifts preserve the sign bit.
Masking is a method of applying a Boolean operation, usually to a register, in order to test or change the state of individual bits rather than the entire value stored. AND masks are used to check whether a bit is on or off, OR masks are used to set a bit, and XOR masks are often used as a simple form of encoding/encryption.
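A short Python sketch showing shifts and all three masks on an 8-bit value (the values and key are just examples):

```python
# Shifts and masks on an 8-bit value.
value = 0b00010110                          # 22

# Shifts: multiply / divide by 2 (logical shifts insert zeros).
print(format((value << 1) & 0xFF, "08b"))   # 00101100  (22 * 2 = 44)
print(format(value >> 1, "08b"))            # 00001011  (22 // 2 = 11)

# AND mask: test whether bit 4 is on.
print(bool(value & 0b00010000))             # True

# OR mask: set bit 0.
print(format(value | 0b00000001, "08b"))    # 00010111

# XOR mask: applying the same key twice restores the original (simple encoding).
key = 0b10101010
encoded = value ^ key
print(format(encoded ^ key, "08b"))         # 00010110
```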
How character sets (ASCII and UNICODE) are used to represent text
Text in a computer is represented as numerical data. There are two main standards, ASCII and Unicode, and both do the same thing: assign a unique numeric value to each character that the computer is to represent or store.
ASCII is now effectively part of the Unicode standard (the first 128 Unicode code points match ASCII). ASCII is traditionally a 7-bit standard, although characters are often stored using 8 bits each. The more bits per character a standard uses, the more symbols it can represent, but there is a storage implication: each character takes up more space.
ASCII is limited to 128 characters, whereas Unicode (depending on which encoding is used, commonly 16 or more bits per character) can represent every printable character in use around the world. Even languages that are "extinct" are available in Unicode.
These standards are essential, as without them, computers and devices from different manufacturers could not easily communicate.
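A quick Python illustration of character codes (ord() gives the code point, chr() gives the character; the first 128 Unicode code points are the same as ASCII):

```python
# Character codes in Python.
print(ord("A"))            # 65   (same value in ASCII and Unicode)
print(ord("a"))            # 97
print(chr(72), chr(105))   # H i
print(ord("€"))            # 8364 (outside 7-bit ASCII, fine in Unicode)
```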