2.2.2 Computational Methods Flashcards
Features that make a problem solvable by computational methods
-Not all problems can be solved using computers.
The first stage of problem solving is
identifying whether or not a problem can be solved using computational methods.
A problem that can be solved using an algorithm is computable.
Problems can only be called computable if they can be solved within a finite, realistic amount of time.
Problems that can be solved computationally typically consist of inputs, outputs and calculations.
Although some problems are computable, it may be impractical to solve them because of the amount of resources or the length of time they require to complete.
Problem recognition
Once a problem has been determined to be computable, the next stage is to clearly
identify what the problem is. Stakeholders state what they require from the finished
product and this information is used to clearly define the problem and the system
requirements.
e.g.
● Analysing strengths and weaknesses with the current way this problem is being
solved
● Considering types of data involved including inputs, outputs, stored data and
amount of data
Problem decomposition
Once a problem has been clearly defined, it is repeatedly broken down into smaller subproblems. This continues until each subproblem can be represented as a self-contained subroutine.
problem decomposition advantages
-Reduces the complexity of the problem by splitting it into smaller sections which are easier to understand.
-By identifying subproblems, programmers may find that certain sections of the program can be implemented using pre-coded modules or libraries, saving time that would otherwise be spent on coding and testing.
-Makes the project easier to manage, as different software development teams can be assigned separate sections of the code according to their specialisms.
-Enables multiple parts of the project to be developed in parallel, making it possible to deliver projects faster.
Without problem decomposition, testing can only be carried out once the entire application has been produced, making it harder to pinpoint errors.
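As a rough sketch of how decomposition looks in code (the payroll scenario and every name here are invented for illustration), each subproblem becomes a self-contained subroutine that can be written and tested on its own:

```python
# Hypothetical decomposition of a "calculate net pay" problem:
# each subproblem becomes a self-contained subroutine.

def gross_pay(hours, rate):
    """Calculation subproblem: pay before deductions."""
    return hours * rate

def tax_due(gross, tax_rate=0.2):
    """Calculation subproblem: a simplified flat-rate tax."""
    return gross * tax_rate

def net_pay(hours, rate):
    """Top-level problem, solved by combining the subproblems."""
    gross = gross_pay(hours, rate)
    return gross - tax_due(gross)

print(net_pay(37.5, 12.0))  # 360.0
```

Because each subroutine is self-contained, gross_pay and tax_due can be tested (or swapped for a library routine) without touching the rest of the program.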
Use of divide and conquer
Divide and conquer is a problem-solving technique used widely across computer science.
This strategy can be broken down into three parts: divide, conquer and merge. ‘Divide’ involves splitting the problem into smaller subproblems, typically halving its size with each step.
Each individual subproblem is
solved in the ‘Conquer’ stage, often recursively.
The solutions to the subproblems are then
recombined during the ‘Merge’ stage to form the final solution to the problem.
-One common use of divide and conquer is binary search; a minimal sketch follows below.
-Divide and conquer is also applied to problem-solving in quick sort and merge sort.
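Since binary search is named above as a common use, here is a minimal recursive sketch in Python (the function and variable names are placeholders): each call halves the search range (divide), the base cases handle the smallest subproblem (conquer), and the result is passed back up the call chain (merge).

```python
def binary_search(items, target, low, high):
    """Recursive binary search over a sorted list.
    Returns the index of target, or -1 if absent."""
    if low > high:           # empty range: target not present
        return -1
    mid = (low + high) // 2
    if items[mid] == target:
        return mid           # conquer: smallest subproblem solved
    elif items[mid] < target:
        # divide: discard the lower half, recurse on the upper half
        return binary_search(items, target, mid + 1, high)
    else:
        # divide: discard the upper half, recurse on the lower half
        return binary_search(items, target, low, mid - 1)

values = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(values, 16, 0, len(values) - 1))  # prints 4
```

Because half of the remaining list is discarded at every step, the search takes logarithmic rather than linear time.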
advantage of using divide and conquer
-The size of the problem is halved with each iteration, which greatly simplifies very complex problems. As the size of a problem grows, the time taken to solve it grows far more slowly (logarithmically, in the case of binary search).
disadvantage of divide and conquer
-As divide and conquer mostly makes use of recursion, it faces the same problems that all recursive functions face: a stack overflow will cause the program to crash, and large recursive programs are very difficult to trace.
Abstraction
● Excessive details are removed to simplify a problem
● Problems may be reduced to form problems that have already been solved
● This allows pre-programmed modules and libraries to be used
● Levels of abstraction allow a complex project to be divided into simpler parts
● Levels can be assigned to different teams and details about other layers hidden
● Makes projects more manageable.
● Abstraction by generalisation is used to group together sections with similar functionality
● Means segments can be coded together, saving time
● Abstraction is used to represent real-world entities with computational elements (see the sketch below)
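As a small illustrative sketch (the class and its attributes are invented for this example), abstraction represents a real-world entity, a bank account, using only the details the program needs; everything else about a real account is removed:

```python
# A bank account abstracted to the attributes and operations the
# program actually needs; all other real-world detail is removed.
class BankAccount:
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = BankAccount("Ada")
account.deposit(100.0)
account.withdraw(40.0)
print(account.balance)  # 60.0
```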
Problem-solving strategies
-Backtracking
-Data mining
-Heuristics
-Performance modelling
-Pipelining
-Visualisation
Data Mining
● Technique used to identify patterns or outliers in large sets of data collected from a variety of sources, termed big data
● Spots trends or correlations between data which are not immediately obvious
● Insights from data mining can aid predictions about the future
● This makes data mining a useful tool in assisting business and marketing decisions
● Data mining has also been used to reveal insights about people’s shopping habits and preferences based on their personal information
Data mining disadvantage
However, as data mining often involves the handling of personal data, it is crucial that the data is processed in accordance with current data protection legislation. Since 2018, all data held and processed by organisations within the EU must follow the rules set by the GDPR.
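A toy sketch of the underlying idea (the basket data below is invented and tiny, whereas real data mining runs over big data): counting which items appear together in transactions can surface correlations that are not obvious from the raw records.

```python
from itertools import combinations
from collections import Counter

# Invented shopping-basket data for illustration only.
baskets = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pairs hint at correlations, e.g. bread with milk.
print(pair_counts.most_common(2))
```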
Backtracking
● Problem-solving technique implemented using algorithms, often recursively
● Methodically builds a solution based on visited paths found to be correct
● If a path is found to be invalid, the algorithm backtracks to the previous stage (see the sketch below)
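A minimal backtracking sketch (the grid and function name are invented): the search extends a path one cell at a time, and whenever a move leads off the grid, into a wall, or to a dead end, that path is abandoned and the algorithm backtracks to the previous stage.

```python
# Find a path from the top-left to the bottom-right of a grid,
# backtracking from any move that cannot lead to the goal.
GRID = [  # 0 = open cell, 1 = wall
    [0, 0, 1],
    [1, 0, 0],
    [1, 1, 0],
]

def find_path(row, col, visited):
    if row < 0 or col < 0 or row >= len(GRID) or col >= len(GRID[0]):
        return None                      # off the grid: invalid path
    if GRID[row][col] == 1 or (row, col) in visited:
        return None                      # wall or already visited: backtrack
    path = [(row, col)]
    if (row, col) == (len(GRID) - 1, len(GRID[0]) - 1):
        return path                      # reached the goal
    visited.add((row, col))
    for dr, dc in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
        rest = find_path(row + dr, col + dc, visited)
        if rest is not None:             # extend a path found to be correct
            return path + rest
    visited.discard((row, col))          # dead end: undo and backtrack
    return None

print(find_path(0, 0, set()))  # [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
```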
Heuristics
● Non-optimal, ‘rule-of-thumb’ approach to problem-solving
● Used to find an approximate solution when the standard solution takes too long to find
● Solution found through using heuristics is not perfectly accurate or complete
● Used to provide estimations for intractable problems, shortest path-finding
problems, machine learning and language recognition
Heuristics are used to provide estimated solutions for intractable problems such as the renowned Travelling Salesman Problem, to guide pathfinding in the A* algorithm, and are also used in machine learning and language recognition.
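As a minimal sketch of a heuristic (the city coordinates are invented), the nearest-neighbour rule for the Travelling Salesman Problem always visits the closest unvisited city next. It runs quickly but gives an approximate tour, not a guaranteed shortest one, which is exactly the heuristic trade-off described above.

```python
import math

# Invented city coordinates for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4)}

def nearest_neighbour_tour(start):
    """Heuristic TSP tour: always travel to the closest unvisited city.
    Fast, but the result is approximate, not guaranteed optimal."""
    tour = [start]
    unvisited = set(cities) - {start}
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(here, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbour_tour("A"))  # ['A', 'C', 'D', 'B']
```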
Performance modelling
● Performance modelling provides mathematical methods for testing various loads on different systems, removing the need for true performance testing
● Provides cheaper, less time-consuming or safer method of testing applications
● Useful for safety-critical computer systems, where a trial run is not feasible
The results of performance modelling can help companies judge the capabilities of a system, how it will cope in different environments and assess whether it is safe to implement.
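As one hedged illustration of such a mathematical model (this particular queueing model is chosen for the example, not prescribed by the syllabus, and the rates are invented), a single server can be modelled as an M/M/1 queue: with arrival rate λ and service rate μ, utilisation is λ/μ and the mean time a request spends in the system is 1/(μ − λ).

```python
# Toy performance model of a single server (M/M/1 queue).
# The rates below are invented; a real model would use measured figures.
arrival_rate = 80.0   # requests per second arriving (lambda)
service_rate = 100.0  # requests per second the server can handle (mu)

utilisation = arrival_rate / service_rate            # fraction of time busy
mean_response = 1.0 / (service_rate - arrival_rate)  # seconds per request

print(f"utilisation: {utilisation:.0%}")                # 80%
print(f"mean response: {mean_response * 1000:.0f} ms")  # 50 ms
```

A model like this lets a company ask "what happens at 95 requests per second?" on paper, without risking a trial run on the live system.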
Pipelining
● Process in which modules are divided into individual tasks, with different tasks
being developed in parallel
● Enables faster project delivery
● Output of one process typically becomes the input of another, resembling a
production line
● Commonly used in RISC processors: different stages of the Fetch-Decode-Execute cycle are performed simultaneously on different instructions (a software sketch of the production-line idea follows below)
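A small sketch of pipelining in software (the stage names are invented): each stage’s output becomes the next stage’s input, like a production line. Python generators let the stages process items as they flow through, rather than each stage waiting for the whole batch.

```python
# Each stage consumes the previous stage's output, like a production line.
def read_numbers(raw_lines):
    for line in raw_lines:           # stage 1: produce raw values
        yield int(line)

def square(numbers):
    for n in numbers:                # stage 2: transform each value
        yield n * n

def keep_even(numbers):
    for n in numbers:                # stage 3: filter the stream
        if n % 2 == 0:
            yield n

# Output of one process becomes the input of another.
pipeline = keep_even(square(read_numbers(["1", "2", "3", "4"])))
print(list(pipeline))  # [4, 16]
```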