System Lifecycle Flashcards
Steps of System Life Cycle
Analysis
Design
Development & Testing
Implementation
Documentation
Evaluation
Analysis
Investigation of how the current system works and what is required for the new system
(what needs to be improved)
Methods of Analysis
Observation
Interviews
Questionnaires
Examination of existing documents
Observation adv
First-hand, real-life data
Interviews adv
In-depth details about the system
Follow-up questions can be asked
Interviews dis
Expensive
Time-consuming
Not anonymous, so interviewees may feel pressured
Observation dis
Can be time-consuming
May not reveal all issues
People may not work normally when they know they are being watched
Questionnaire adv
Quick and simple
Less time-consuming
Can be done online and automatically marked
Questionnaire dis
People often don’t take the time to answer a questionnaire carefully
Information is limited to the questions
May suffer from low response rates
Existing Documents adv
Easy way to find comprehensive info about the system
Finds all inputs and outputs
Can reveal unknown issues
Existing documents dis
Difficult to understand for someone who is not in the organization
Could be outdated
Doesn’t always show all processes and procedures in a system
Stuff the analysis identifies
inputs and outputs
problems
how data is processed
What needs to be specified for the new system?
(analysis)
purpose
data
how data is processed
user requirements
what software/hardware is needed (with justification)
What should the design show?
file/data structures
input/output formats
validation rules
Validation routines (see the sketch below)
Range check (value must fall within a range, e.g. 0-17)
Character check (only certain characters allowed, e.g. F or M)
Length check (entry must have the right number of characters)
Type check (text or numeric etc.)
Format check (must follow a specific pattern, e.g. __/__/__)
Presence check (field must not be left empty)
Check digit (an extra digit calculated from the others to make sure the number is correct)
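A minimal sketch (not part of the original cards) of what these routines could look like in Python, assuming an age field with range 0-17, an F/M field, and a DD/MM/YY date; the check-digit rule is an illustration of the idea, not a real standard:

    import re

    def range_check(value, low=0, high=17):
        # Range check: the value must fall between the lower and upper limits
        return low <= value <= high

    def character_check(value, allowed=("F", "M")):
        # Character check: only certain characters are accepted
        return value in allowed

    def length_check(value, length=8):
        # Length check: the entry must contain the right number of characters
        return len(value) == length

    def type_check(value):
        # Type check: the entry must be numeric
        return str(value).isdigit()

    def format_check(value):
        # Format check: must match the pattern __/__/__ (e.g. a DD/MM/YY date)
        return re.fullmatch(r"\d{2}/\d{2}/\d{2}", value) is not None

    def presence_check(value):
        # Presence check: the field must not be left empty
        return value != ""

    def check_digit_ok(number):
        # Check digit: the last digit is recalculated from the others
        # (illustrative rule only: sum of the other digits mod 10)
        *body, check = [int(d) for d in str(number)]
        return sum(body) % 10 == check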
Test strategy definition
A set of guidelines on how testing will be carried out.
Test each module: verify each component works
Test each function: ensure each function produces the correct results
Test the whole system: test performance and integration
Test design
Specifies, for each test:
test data
expected outcomes
actual outcomes
remedial action
Test plan
describes all tests to be carried out
Stuff that should be tested:
data file structures
input/output format
validation routines
Types of data used for testing (example below)
normal - data the system is expected to accept
extreme/boundary - values at the upper and lower limits of what is accepted
abnormal - data that should be rejected
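A small made-up example (reusing the 0-17 age range check from the sketch above) of the three kinds of test data; each assert states the expected outcome, and a failing assert would be an actual outcome needing remedial action:

    def range_check(value, low=0, high=17):
        # Accept only ages from 0 to 17 inclusive
        return low <= value <= high

    # normal data - values the system is expected to accept
    assert range_check(10) is True

    # extreme/boundary data - the lower and upper limits themselves
    assert range_check(0) is True
    assert range_check(17) is True

    # abnormal data - values that should be rejected
    assert range_check(25) is False
    assert range_check(-3) is False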
Methods of implementation
Direct changeover
Parallel Running (old and new systems run together until the new one is effective and everyone is confident using it)
Pilot Running (a small group tests the new system first)
Phased implementation (the new system is introduced gradually, part by part)
Direct changeover adv/dis
fast implementation
cost effective
high risk of failure
no fallback if the new system fails
users cannot be trained on the new system before it goes live
Phased implementation dis
takes longer
compatibility issues
can be confusing with the old and new systems in use at the same time
phased implementation adv
reduced risk
easier to manage
more time to adjust
Pilot Running adv
low risk
allows for fine-tuning
staff have time to train
few errors
Pilot Running dis
slower implementation
expensive to keep two systems running
systems have to be synchronised
Parallel running adv
lower risk
easy system comparison
if there are problems, the old system can still be used
parallel running dis
expensive to run two systems at the same time
the two systems have to be kept synchronised
Stuff needed in technical documentation
(what a future technician needs to understand how the system works;
also needed for updating the system and fixing problems)
purpose of system
limitations
programs used
program language
program flowcharts
system flowcharts
hardware and software requirements
file structures
input/output format
test runs
validation routines
Stuff needed in user documentation
(describes the system's features and how to use them)
purpose
limitations
hardware/software requirements
how to run/install
how to deal with errors
troubleshooting guide
FAQs
glossary
test runs
input output format
Stuff that will be evaluated
Whether it meets the original requirements
Limitations and issues to improve (including user feedback)
How efficient the solution is
Ease of use
Suitability (whether the solution is appropriate for the task or excessive)