Computational Methods Flashcards
What is problem decomposition?
- the breaking down of a problem into smaller parts that are easier to solve
- the smaller parts can sometimes be solved recursively (a routine calls itself on ever smaller sub-problems until a base case is reached), as in the sketch below
- different parts of the problem can be assigned to different programmers, making use of their individual skills and lightening the load on each
- can be a hierarchical approach (tree flow) or take into account parallel processes
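A minimal Python sketch (not from the original cards; the example is chosen for illustration): merge sort decomposes the sorting problem into two smaller sorts, solves each recursively, then recombines the results.

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 items is already sorted
    if len(items) <= 1:
        return items
    # Decompose: split the problem into two smaller sub-problems
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve each part recursively
    right = merge_sort(items[mid:])
    return merge(left, right)         # recombine the sub-solutions

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```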
What is the problem with decomposition?
- assumes that the whole solution to the problem is knowable in advance
- less useful in modern applications where requirements often change while the solution is being developed
What is a data flow diagram?
- a diagram produced by breaking a problem down into processes, data stores and data flows
- the major components and activities in a system are laid out before any effort is expended on the finer details
What is structured programming?
- sequence, selection and iteration
- another common method of decomposition
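A short illustrative sketch (the scenario and names are made up) showing all three constructs in one routine:

```python
def count_passes(marks, pass_mark=40):
    passes = 0                 # sequence: statements run one after another
    for mark in marks:         # iteration: repeat for every item
        if mark >= pass_mark:  # selection: choose whether to run the next line
            passes += 1
    return passes

print(count_passes([35, 62, 40, 28, 90]))  # 3
```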
1) What is one way to test if a problem is computable?
2) what is computability?
3) What are some of the limiting factors to the problems we can solve with computers?
1) Test it against the capabilities of a Turing machine
2) whether or not a problem can be solved using an algorithm. Some problems have no computable solution; computed results can be combined with human insight to reach a solution
3) speed they run at and the memory they can access
What is a messy problem?
- not all problems can be neatly described
- this may be because: the underlying issues are not understood, the data is insufficient or erroneous, or the underlying issues are very complex
What is abstraction?
Representing a real-world problem as a simplified model that a computer can process. Usually requires decomposition as well
What is backtracking?
- trial and error
- trying out a sequence of actions and assessing how far they succeed, until it becomes apparent that the attempt can go no further
- partial solutions are built up incrementally; if they fail, they are abandoned and the search resumes from an earlier point (see the sketch below)
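A hedged sketch of the idea, using a made-up subset-sum example: partial solutions are extended one number at a time and abandoned as soon as they cannot succeed.

```python
def subset_sum(numbers, target, partial=None):
    """Find a subset of `numbers` that adds up to `target`, by backtracking."""
    if partial is None:
        partial = []
    total = sum(partial)
    if total == target:
        return partial               # success: a complete solution
    if total > target or not numbers:
        return None                  # dead end: abandon this partial solution
    head, rest = numbers[0], numbers[1:]
    # Try extending the partial solution with the next number...
    found = subset_sum(rest, target, partial + [head])
    if found is not None:
        return found
    # ...and if that fails, backtrack and try leaving it out instead
    return subset_sum(rest, target, partial)

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # [3, 8, 4]
```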
What is problem recognition?
- determine exactly what a problem is
- by using computational and intuitive methods it may be possible to come up with a solution
What is data mining?
- examines large data sets looking for patterns and relationships
- databases are designed to store and process data in predefined ways; when they get large enough, unexpected relationships might be uncovered
- incorporates: cluster analysis, pattern matching, anomaly detection and regression analysis (a small anomaly-detection sketch follows below)
- useful for business modelling and disease prediction
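A purely illustrative sketch of one listed technique, anomaly detection: values far from the mean of a data set are flagged. Real data mining works on far larger data sets, usually with specialist libraries; the sales figures here are invented.

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    mu, sigma = mean(values), stdev(values)
    # Flag values more than `threshold` standard deviations from the mean
    return [v for v in values if abs(v - mu) > threshold * sigma]

daily_sales = [102, 98, 101, 97, 99, 250, 100, 103]
print(find_anomalies(daily_sales))  # [250]
```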
What is performance modelling?
- another use of models
- real-life objects and systems, as well as computer software, can be modelled in order to predict how they will behave when in use
- performance depends on the complexity of the algorithms used
- simulations and timings can be used to predict performance (see the sketch below)
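A rough sketch of one way to model performance (an assumption, not a method named on the card): time a function at increasing input sizes to predict how it will scale in use.

```python
import timeit

def linear_search(items, target):
    for item in items:
        if item == target:
            return True
    return False

# Time 100 unsuccessful searches at each input size to see how the cost grows
for n in (1_000, 10_000, 100_000):
    data = list(range(n))
    t = timeit.timeit(lambda: linear_search(data, -1), number=100)
    print(f"n={n}: {t:.4f} s for 100 searches")
```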
What is pipelining?
- output of one process can be fed into another
- useful in RISC processors
- complex jobs can be divided up into separate pipelines so that parallel processing can occur
- this parallels real-life situations such as assembly-line processing
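A minimal sketch of a software pipeline using Python generators, where the output of each stage feeds the next (the stages and data are invented):

```python
def read_values(raw):
    for line in raw:
        yield line.strip()        # stage 1: clean up the raw input

def to_numbers(lines):
    for line in lines:
        yield float(line)         # stage 2: convert to numbers

def scale(numbers, factor):
    for n in numbers:
        yield n * factor          # stage 3: transform

raw_input = ["1.5\n", "2.0\n", "3.25\n"]
pipeline = scale(to_numbers(read_values(raw_input)), factor=10)
print(list(pipeline))  # [15.0, 20.0, 32.5]
```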
What is visualisation?
- problems and data can be better understood when translated into a visual model
- computers have facilitated many new and intensive ways to visualise situations; these can reveal unexpected trends that traditional methods could not show
What is thinking abstractly?
- Abstraction is a representation of reality
- requires recognising what is important in a problem and what isn't; we only include the information that is required
- we then devise a means to code it effectively
- variables are an abstraction; they represent real world values in a calculation
What are levels of abstraction?
- useful to construct an abstraction to represent a large problem and create lower-level abstractions to deal with component parts
- details in each layer can be hidden from the others. Frees up solution process to concentrate on just one issue at a time
- uses layers: divides up the functionality of a big system into separate areas of interest, e.g. a car designer might be interested in the properties of a new fuel, but that issue is treated separately from the design of the dashboard
- specialisation leads to reliability and cost benefits
What is a Heuristic approach?
- Approach to problem solving that makes use of experience
- not guaranteed to produce the best solution but it generally will produce a “good enough” result
- referred to as “rule of thumb”
- important to realise when “good enough is good enough” and when it isn’t.
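A hedged sketch of a heuristic (the delivery-route scenario is invented): the nearest-neighbour rule of thumb always visits the closest unvisited point next. It is fast and usually good enough, but it is not guaranteed to give the shortest possible route.

```python
def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def nearest_neighbour_route(start, points):
    route, remaining, current = [start], list(points), start
    while remaining:
        # Rule of thumb: go to the closest unvisited point next
        nearest = min(remaining, key=lambda p: distance(current, p))
        remaining.remove(nearest)
        route.append(nearest)
        current = nearest
    return route

print(nearest_neighbour_route((0, 0), [(5, 5), (1, 1), (2, 6)]))
# [(0, 0), (1, 1), (2, 6), (5, 5)]
```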
What is the first stage of problem solving?
Understanding the problem:
- what are the knowns and unknowns
- what data do we have, what data do we need, and what data do we not need
- is it solvable, can it be decomposed.
- can we represent it abstractly using diagrams or variables
What is the second stage of problem solving?
Devise a plan:
- have you seen this problem or a similar one before
- decompose the problem
- make a to-do list
- look for patterns
- think outside the box (but stick to the brief)
- is there an equation that can help
- try solving a similar one if you are stuck
What is the third stage of problem solving?
Carry out the plan:
- Do this carefully, checking as you go
- are you sure each stage is correct
- don't be afraid to start again
What is the fourth and final stage of problem solving?
Look back over solution:
- can it be improved
- you might have overlooked the bigger picture
- have you learned something that could be applied to future problems
1) What is caching? (RAM sense)
2) what is prefetching?
1) data might be stored in RAM ‘in case’ it is needed again before process shuts down
- if it is required, it does not need to be read again from disk, which gives a faster response time with less latency
2) an instruction is requested from memory by the CPU before it is required to speed up instruction throughput
- there are algorithms that can predict likely future instructions needed so they are ready in the cache when they are in fact needed
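A minimal sketch of caching data in RAM (the file-reading function is a made-up placeholder for any slow read): a repeat request is served from memory instead of going back to disk.

```python
cache = {}

def load_from_disk(filename):
    with open(filename) as f:      # slow: reads from secondary storage
        return f.read()

def get_data(filename):
    if filename not in cache:                        # cache miss:
        cache[filename] = load_from_disk(filename)   # read once, keep 'in case'
    return cache[filename]                           # cache hit: served from RAM
```

Python's built-in functools.lru_cache decorator provides a ready-made version of the same idea.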
Positives and negatives of RAM caching?
- reduces load on web servers because data required by an app can be anticipated, thereby reducing the number of separate access actions
- can be very complicated to implement effectively
- if wrong data is cached then it can be difficult to re-establish the correct sequence of data items or instructions
What is another advantage of the divide and conquer approach?
- separate modules, and other items such as data stores, can be reused in future projects
- this applies for things such as libraries.
- Windows uses DLLs (Dynamic Link Libraries), which are called at runtime to provide certain functionality, e.g. you don't need to write the code for a message box; you just call a DLL to produce it (see the sketch below)
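A hedged, Windows-only sketch using Python's ctypes module: the message box comes from a function in user32.dll, called at runtime, so no message-box code has to be written. MessageBoxW is a real Windows API call; the text shown is just an example.

```python
import ctypes

user32 = ctypes.windll.user32   # load the Dynamic Link Library at runtime
user32.MessageBoxW(None, "Hello from a DLL", "Demo", 0)
```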