Computational Thinking Flashcards
Computational Thinking
A way of approaching problems that is particularly useful in computing, but can also be used to tackle other kinds of problems
Several techniques – abstraction, decomposition, algorithms, pattern matching
Involves expressing problems and solutions in a way that a computer could also execute
Pattern Matching
Identifying patterns in data, or similarities between different problems, can save time, lead to more elegant solutions, or reveal solutions where there seemed to be none
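As an illustration (a hypothetical sketch, not from the notes): spotting a pattern can replace a slow solution with an elegant one. Summing 1..n one number at a time takes n steps; noticing the pairing pattern 1+n, 2+(n-1), ... gives a closed formula.

```python
# Slow but obvious: add the numbers one at a time.
def sum_loop(n: int) -> int:
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Pattern-matched: pairs like 1+n and 2+(n-1) all sum to n+1,
# and there are n/2 such pairs, giving n*(n+1)/2.
def sum_pattern(n: int) -> int:
    return n * (n + 1) // 2

print(sum_loop(100), sum_pattern(100))  # 5050 5050
```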
Abstraction
Process of finding similarities or common aspects between problems and identifying differences and details that do not matter for the task at hand
Ignoring or hiding some details to capture commonality between different instances
A child would call both a Toyota and a Ferrari a car, but would not confuse either with an airplane
Computers are so complex that it is impossible for the average person to understand all the details of how they work
It is useful to know some details – only the ones relevant to the problem are needed, which simplifies the task
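A minimal sketch of the car example above (the class and attribute names are my own, for illustration): abstraction keeps only the details that matter for the task and hides everything model-specific.

```python
# Hypothetical sketch: at this level of detail, a Toyota and a
# Ferrari are both just "a car"; engine internals are hidden.
class Car:
    def __init__(self, make: str, top_speed_kmh: int):
        self.make = make
        self.top_speed_kmh = top_speed_kmh

    def describe(self) -> str:
        # Only the aspects relevant to our task are exposed.
        return f"{self.make} is a car with top speed {self.top_speed_kmh} km/h"

toyota = Car("Toyota", 180)
ferrari = Car("Ferrari", 340)
print(toyota.describe())
print(ferrari.describe())
```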
Decomposition
Breaking down larger tasks into a set of smaller tasks
Normally applied to programs or systems to make them simpler to solve or maintain
Separate parts can then be understood, solved, developed, and evaluated separately
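A small sketch of decomposition (the task and function names are invented for illustration): "report the average score" broken into three smaller tasks that can be understood, solved, and tested separately.

```python
# Each sub-task is its own small function.
def parse_scores(raw: str) -> list[int]:
    """Turn a comma-separated string into a list of integers."""
    return [int(x) for x in raw.split(",")]

def average(scores: list[int]) -> float:
    """Compute the mean of a non-empty list of scores."""
    return sum(scores) / len(scores)

def format_report(avg: float) -> str:
    """Render the result for the user."""
    return f"Average score: {avg:.1f}"

# The original larger task is just the composition of the parts.
def report(raw: str) -> str:
    return format_report(average(parse_scores(raw)))

print(report("70,80,90"))  # Average score: 80.0
```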
Algorithms
Sequence of steps required to solve a problem
We use algorithms every day for almost everything we do, e.g. following a recipe
Algorithms for computers must be unambiguous, have a finite number of steps, and have a clear flow of control
Can be expressed through verbal description, flowcharts, pseudocode, programming languages, or mathematics
Sometimes more than one algorithm exists for a problem; we must decide which is most efficient
Algorithmic time complexity is a measure of how long an algorithm would take to complete given an input of size n
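A sketch of two algorithms for the same problem (searching a sorted list) with different time complexity – linear search takes about n steps, binary search about log2(n). The step counters are added here purely for illustration.

```python
def linear_search(items, target):
    """Check each item in turn: O(n) steps in the worst case."""
    steps = 0
    for i, item in enumerate(items):
        steps += 1
        if item == target:
            return i, steps
    return -1, steps

def binary_search(items, target):
    """Halve the search range each step: O(log n). Items must be sorted."""
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1000))
print(linear_search(data, 999))  # (999, 1000) – one step per item
print(binary_search(data, 999))  # (999, 10) – ~log2(1000) steps
```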
Heuristics
Some simple-looking yet important tasks/problems are considered intractable
No known algorithm can solve the problem in a reasonable time frame
A heuristic algorithm is not guaranteed to return the exact answer, but one that is good enough (a rule of thumb)
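A sketch of a heuristic (example chosen by me, not from the notes): the knapsack problem has no known fast exact algorithm, but the greedy rule of thumb "take the most valuable item per kg first" runs quickly and returns a good-enough answer – here it picks value 160, while the true optimum is 220 (silver + bronze).

```python
# Greedy heuristic for the knapsack problem: fast, but not
# guaranteed to be optimal.
def greedy_knapsack(items, capacity):
    """items: list of (name, weight, value). Fill by best value/weight."""
    total_value, remaining = 0, capacity
    chosen = []
    for name, weight, value in sorted(items, key=lambda it: it[2] / it[1], reverse=True):
        if weight <= remaining:
            chosen.append(name)
            remaining -= weight
            total_value += value
    return chosen, total_value

items = [("gold", 10, 60), ("silver", 20, 100), ("bronze", 30, 120)]
print(greedy_knapsack(items, 50))  # (['gold', 'silver'], 160)
# The exact optimum for capacity 50 is silver + bronze = 220,
# which the rule of thumb misses – sufficient, not exact.
```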
Continuous and Discrete Data
In real world, most data is continuous (analog)
Computers have finite storage space and are not well suited for large quantities of continuous data
There are infinitely many values between 0 and 1; a computer cannot store them all
It is therefore necessary to store continuous data in a discrete form, which is done by sampling
Detecting errors in discrete signals is easier than in continuous signals
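A sketch of sampling (the function names and the 8 Hz rate are my own choices for illustration): a continuous sine wave is measured at fixed intervals, and each measurement is mapped to one of 256 discrete levels, turning analog data into something a computer can store.

```python
import math

def sample_wave(duration_s: float, rate_hz: int) -> list[float]:
    """Take rate_hz evenly spaced samples per second of sin(2*pi*t)."""
    n = int(duration_s * rate_hz)
    return [math.sin(2 * math.pi * i / rate_hz) for i in range(n)]

def quantise(samples: list[float], levels: int = 256) -> list[int]:
    """Map each sample in [-1, 1] to one of `levels` discrete integers."""
    return [round((s + 1) / 2 * (levels - 1)) for s in samples]

# 8 samples per second for one second: the continuous wave becomes
# a short list of whole numbers.
print(quantise(sample_wave(duration_s=1.0, rate_hz=8)))
```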
1 bit
Single binary digit
1 byte
8 bits
1 kilobyte
1024 bytes
1 megabyte
1024 kilobytes
1 gigabyte
1024 megabytes
1 terabyte
1024 gigabytes
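The unit ladder above (each step is a factor of 1024) can be sketched as a small conversion helper; the function name is invented for illustration.

```python
# Each unit is 1024 of the previous one, per the table above.
UNITS = ["bytes", "kilobytes", "megabytes", "gigabytes", "terabytes"]

def to_bytes(amount: float, unit: str) -> int:
    """Convert an amount in the given unit down to bytes."""
    return int(amount * 1024 ** UNITS.index(unit))

print(to_bytes(1, "kilobytes"))  # 1024
print(to_bytes(2, "megabytes"))  # 2097152
```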
Bits and bytes
Basic unit of information in computing is a 1 or 0, called a bit – stores one piece of information
8 bits (a byte) are used to represent a single character of text (e.g. 'a')
One byte can store one of a total of 256 different characters (0000 0000 – 1111 1111)
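A quick sketch of the byte-per-character idea: the character 'a' has numeric value 97, which fits in the 0–255 range of one byte and can be written as 8 binary digits.

```python
char = "a"
code = ord(char)            # numeric value of the character
bits = format(code, "08b")  # the same value as 8 binary digits (one byte)
print(code, bits)           # 97 01100001
assert 0 <= code <= 255     # one byte holds one of 256 values
```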
Why use hexadecimal
Represent binary values in fewer digits
Only two hex digits are needed to represent one byte (00 – FF)
Computers do not use hex numbers internally; programmers use them as a shorthand for binary
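A sketch of the shorthand: each pair of hex digits stands for exactly one byte, so 8 binary digits collapse into 2 hex digits.

```python
value = 0b11111111                 # one byte with all bits set
print(format(value, "02X"))        # FF – two hex digits per byte
print(format(0b10100101, "02X"))   # A5 – each nibble (4 bits) is one digit
print(int("FF", 16))               # 255 – and back from hex to decimal
```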
Unicode
A universal, standard character set that supports not only English but all human languages
Can also represent emoji
Unicode and ASCII provide a standard way for computers to represent characters that is readable by other computers and by people
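A sketch of Unicode in practice: every character, whether English, accented, Chinese, or an emoji, gets a numeric code point, which can then be encoded as bytes (here using UTF-8).

```python
# Each character's code point, and its UTF-8 byte encoding.
for ch in ["a", "é", "你", "🙂"]:
    print(ch, hex(ord(ch)), ch.encode("utf-8"))
# ASCII characters like 'a' keep their old values (0x61 = 97),
# while emoji sit at much higher code points.
```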