Section Ten: Computational thinking Flashcards
Chapter 47 – Thinking abstractly
Computational thinking
Computational thinking refers to the ideas and thinking skills used to design solutions to problems, or to create systems, so that a computer or computational agent can help to solve them.
Chapter 47 – Thinking abstractly
Abstraction
The representation of essential features without including unnecessary details. Abstraction is used to reduce the complexity of systems for users by hiding how things actually work, to apply algorithms in different contexts and to produce suitable user interfaces.
Chapter 47 – Thinking abstractly
Data abstraction
Data abstraction hides the details of how data are actually represented. For example, when you use integers or real numbers in a program, you are not interested in how these numbers are actually represented in the computer.
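A minimal sketch in Python (the class and method names are illustrative, not from the source): users of the stack push and pop items without knowing that a Python list holds them internally.

    class Stack:
        # The underlying list is a hidden implementation detail.
        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            # Last in, first out; callers never touch the list directly.
            return self._items.pop()

    s = Stack()
    s.push(3)
    s.push(7)
    print(s.pop())   # 7 - the representation could change without affecting callers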
Chapter 48 – Thinking ahead
Computational problems
Input is the information relevant to the problem, which could for example be passed as parameters to a subroutine.
Output is the solution to the problem, which could be passed back from a subroutine. A clear statement of exactly what the inputs and outputs of a problem are is a necessary first step in constructing a solution.
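A minimal Python sketch of this input/output view (the function and values are invented for illustration): the inputs arrive as parameters and the output is passed back as the return value.

    def total_cost(prices, vat_rate):
        # Inputs: a list of prices and a VAT rate; output: the total to pay.
        return sum(prices) * (1 + vat_rate)

    print(total_cost([2.50, 4.00], 0.20))   # 7.8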
Chapter 48 – Thinking ahead
Advantages of specifying preconditions
- Specifying preconditions as part of the documentation of a subroutine ensures that the user knows what checks, if any, must be carried out before calling the subroutine (a sketch follows this list).
- If there are no preconditions, then the user can be confident that necessary checks will be carried out in the subroutine itself, thus saving unnecessary coding. The shorter the program, the easier it will be to debug and maintain.
- Clear documentation of inputs, outputs and preconditions helps to make the subroutine reusable. This means that it can be put into a library of subroutines and called from any program with access to that library.
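A minimal sketch of such documentation, assuming Python docstrings are used (the subroutine itself is illustrative): the stated precondition tells the caller what must be true before the call.

    def binary_search(items, target):
        """Return the index of target in items, or -1 if it is absent.
        Precondition: items is sorted in ascending order."""
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1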
Chapter 48 – Thinking ahead
Nature and benefits of caching
Caching is another aspect of thinking ahead, this time done automatically by the operating system rather than the programmer. Caching is the temporary storage of program instructions or data that have been used once and may be needed again shortly. The last few instructions of a program may be stored in cache memory for quick retrieval.
Web caching, i.e. the storing of HTML pages and images that have recently been viewed, is another example of caching. It gives fast access to pages that have been looked at recently (and may be returned to) and saves having to download the same pages again, which would use up bandwidth unnecessarily.
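The same think-ahead idea can be applied within a program. A minimal sketch using Python's standard functools.lru_cache, which stores each result the first time it is computed in case it is needed again shortly:

    from functools import lru_cache

    @lru_cache(maxsize=None)     # results are kept after first computation
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(100))   # fast: each value is computed once, then served from the cache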
Chapter 49 – Thinking procedurally
Procedural abstraction
Procedural abstraction means using a procedure to carry out a sequence of steps for achieving some task such as calculating a student’s grade from her marks in three exam papers, buying groceries online or drawing a house on a computer screen.
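A minimal Python sketch of the first task (the grade boundaries are invented for illustration): the caller asks for a grade by name and the procedure hides the individual steps.

    def calculate_grade(paper1, paper2, paper3):
        # Hypothetical boundaries: 210+ of 300 marks is an A, 180+ a B.
        total = paper1 + paper2 + paper3
        if total >= 210:
            return "A"
        elif total >= 180:
            return "B"
        else:
            return "C"

    print(calculate_grade(72, 68, 75))   # A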
Chapter 49 – Thinking procedurally
Problem decomposition
Most computational problems beyond the trivial need to be broken down into sub-problems before they can be solved. Think of any system which starts off by presenting the user with a menu of choices. Each choice will result in a different, self-contained module.
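A minimal sketch of such a menu in Python (the module names are hypothetical): each choice hands control to its own self-contained subroutine.

    def add_record():
        print("adding a record ...")

    def delete_record():
        print("deleting a record ...")

    choice = input("1: Add record  2: Delete record > ")
    if choice == "1":
        add_record()
    elif choice == "2":
        delete_record()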
Chapter 49 – Thinking procedurally
Top-down design
Top-down design is the technique of breaking down a problem into the major tasks to be performed; each of these tasks is then further broken down into separate subtasks, and so on until each subtask is sufficiently simple to be written as a self-contained module or subroutine. Remember that some programs contain tens of thousands, or even millions, of lines of code, and a strategy for design is absolutely essential. Even for small programs, top-down design is a very useful method of breaking down the problem into small, manageable tasks.
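A minimal Python sketch of the result of this process (the task and subroutine names are invented): the major task is written in terms of subtasks, each of which can be refined separately.

    def get_marks():
        ...   # subtask: to be refined further

    def calculate_totals():
        ...   # subtask: to be refined further

    def print_report():
        ...   # subtask: to be refined further

    def produce_report():
        # The major task, expressed as a sequence of subtasks.
        get_marks()
        calculate_totals()
        print_report()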
Chapter 49 – Thinking procedurally
Advantages of problem decomposition
As well as making the task of writing the program easier, breaking a large problem down in this way makes it very much simpler to test and maintain. When a change has to be made, if each module is self-contained and well documented with inputs, outputs and preconditions specified, it should be relatively easy to find the modules which need to be changed, knowing that this will not affect the rest of the program.
Chapter 49 – Thinking procedurally
Hierarchy charts
A hierarchy chart is a tool for representing the structure of a program, showing how the modules relate to each other to form the complete solution. The chart is depicted as an upside-down tree structure, with modules being broken down further into smaller modules until each module is only a few lines of code (never more than a page).
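A hypothetical example, sketched here in text form (a real hierarchy chart would be drawn as boxes and connecting lines):

    Produce payroll
    ├── Get employee data
    ├── Calculate pay
    │   ├── Calculate gross pay
    │   └── Calculate deductions
    └── Print payslips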
Chapter 50 – Thinking logically, thinking concurrently
The structured approach
The structured programming approach aims to improve the clarity and maintainability of programs.
Using structured programming techniques, only three basic programming structures are used:
- sequence – one statement following another
- selection – if … then … else … endif and switch/case … endswitch statements
- iteration – while … endwhile, do … until and for … next loops
Languages such as Python and Pascal are block-structured languages which make it easy to write programs using just these three control structures. They may allow you to break out of a loop, but this is not recommended in structured programming. Each block should have a single entry point and a single exit point.
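All three structures in a short Python sketch (the marks and pass threshold are invented):

    total = 0                        # sequence: one statement after another
    for mark in [54, 67, 81]:        # iteration: a for loop
        total = total + mark
    if total >= 150:                 # selection: if ... else
        print("Pass")
    else:
        print("Fail")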
Chapter 50 – Thinking logically, thinking concurrently
Tools for designing algorithms
Flow diagrams and pseudocode are two tools commonly used for designing algorithms. Pseudocode corresponds more closely to the control structures of a programming language and is generally more useful for designing algorithms of any complexity.
Chapter 50 – Thinking logically, thinking concurrently
Thinking concurrently
Concurrent computing and parallel computing are related ideas, and the distinction between them is debatable; the two terms are often taken to mean the same thing. As an example of concurrent activity, a house may have a burglar alarm system which continually monitors the front door, back door, windows, and rooms upstairs and downstairs.
Generally, though, concurrent computing is defined as being related to but distinct from parallel computing. Parallel computing requires multiple processors (or processor cores), each executing different instructions simultaneously, with the goal of speeding up computations. It is impossible on a single processor core.
Concurrent processing, on the other hand, takes place when several processes are running, with each in turn being given a slice of processor time. This gives the appearance that several tasks are being performed simultaneously, even though only one processor is being used.
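A minimal sketch of concurrency using Python's standard threading module (the task names are invented). Note that standard CPython interleaves these threads on one interpreter, so this illustrates concurrency rather than true parallelism:

    import threading

    def task(name):
        for i in range(3):
            print(name, "step", i)

    t1 = threading.Thread(target=task, args=("A",))
    t2 = threading.Thread(target=task, args=("B",))
    t1.start()
    t2.start()
    t1.join()     # wait for both tasks to finish
    t2.join()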
Chapter 50 – Thinking logically, thinking concurrently
Benefits and trade-offs of concurrent processing
Concurrent processing has benefits in many situations.
- Increased program throughput – the number of tasks completed in a given time is increased
- Time that would be wasted by the processor waiting for the user to input data or look at output is used on another task
- The drawback is that if a large number of users are all trying to run programs, and some of these involve a lot of computation, these programs will take longer to complete