Test 1 Flashcards

Lectures 1 - 7

1
Q

What is Software Engineering?

A

“the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software”

2
Q

Correctness

A
  • lack of bugs and defects
    ▪ measured in terms of defect rate (# bugs per line of code)
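A quick worked sketch of this metric (the numbers are hypothetical, and the per-KLOC scaling is a common convention, not from the card):

def defect_rate(num_bugs, lines_of_code):
    # Defects per line of code, as defined on this card.
    return num_bugs / lines_of_code

print(defect_rate(12, 4000))         # 0.003 bugs per line of code
print(defect_rate(12, 4000) * 1000)  # 3.0 bugs per KLOC (a common reporting unit)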
3
Q

Reliability

A
  • does not fail or crash often
    ▪ measured in terms of failure rate (#failures per hour)
4
Q

Capability

A
  • does all that is required
    ▪ measured in terms of requirements coverage (% of required operations implemented)
5
Q

Maintainability

A
  • is easy to change and adapt to new requirements
    ▪ measured in terms of change logs (time and effort required to add a new feature) and impact analysis (#lines affected by a new feature)
6
Q

Performance

A
  • is fast and small enough
    ▪ measured in terms of speed and space usage (seconds of CPU time, MB of memory, etc.)
7
Q

Why do we care about bugs?

A

Bugs are cheaper and easier to find early in development, and harder and more expensive to fix once the software is released.

8
Q

Usability

A
  • is sufficiently convenient for the intended users
    ▪ measured in terms of user satisfaction (% of users happy with interface and ease of use)
9
Q

Installability

A
  • is convenient and fast to install
    ▪ measured in terms of user satisfaction (#install problems reported per installation)

10
Q

Documentation

A
  • is well documented
    ▪ measured in terms of user satisfaction (% of users happy with documentation)
11
Q

Availability

A
  • is easy to access and available when needed
    ▪ measured in terms of user satisfaction (% of users reporting access problems)
12
Q

Three General Principles of QA

A
  1. Know what you are doing
  2. Know what you should be doing
  3. Know how to measure the difference
13
Q

Name the 9 Causes of Software Errors

A
  1. Faulty definition of requirements
  2. Client-developer communication failures
  3. Deliberate deviations from software requirements
  4. Logical design errors
  5. Coding errors
  6. Non-compliance with documentation and coding instructions
  7. Shortcomings of the testing process
  8. Procedural errors
  9. Documentation errors
14
Q

Name the 4 Fundamental Process Activities

A

Specification
Development
Validation
Evolution

15
Q

6 Steps of the Waterfall Method

A

(1) Requirements Analysis and Definition
(2) System and Software Design
(3) Implementation and Unit Testing
(4) Integration and System Testing
(5) Operation and Maintenance
(6) Retirement and Decommissioning

16
Q

Name 2 Drawbacks to Waterfall Method

A

Early Freezing and Inflexible Partitioning

17
Q

Most common reason the Waterfall method fails

A

Most failures are due to inadequate requirements understanding

18
Q

What are the 6 steps to the prototype method

A

(1) Requirements Gathering and Analysis
(2) Quick Design
(3) Build Prototype
(4) Customer Evaluation
(5) Design Refinement
(6) Full-Scale Development

19
Q

Drawbacks to the Prototype method (3)

A

Wasted Work
Inadequate or Incomplete Prototypes
When to Stop Iterating

20
Q

What are the steps of the spiral model?

A

Each layer has:
▪ Determine Objectives
▪ Assess and Reduce Risks
▪ Develop and Validate
▪ Review and Plan

21
Q

Drawbacks to the spiral method?

A

Heavyweight Process
Not Really a Development Model
Depends on Risk Analysis
Not for Novices

22
Q

The Iterative Development Model is a ______ ________ Method

A

Subset Development

23
Q

What is the iterative development model process?

A

Domain Analysis
Software Architecture

ITERATION:
▪ Risk Assessment
▪ Develop Test Suite OR Prototype the Highest-Risk Item
▪ Integrate with Previous

Release

24
Q

Drawbacks to the iterative process.

A

Needs Small Team
Needs Early Architecture

25
What is agile programming?
a set of values and principles that guide and shape development.
26
List 6 Agile Methods.
▪ Extreme Programming (XP)
▪ Scrum
▪ Kanban
▪ Crystal Agile Framework
▪ Dynamic System Development Method (DSDM)
▪ Feature-Driven Development (FDD)
27
Extreme Programming has what 3 things?
▪ Values, ▪ Principles and ▪ Practices
28
Values of XP
Communication, Simplicity, Feedback, Courage, Respect…
29
Principles of XP
▪ Humanity - software is developed by people
▪ Economics - software costs money
▪ Mutual Benefit - software activities should benefit everybody
▪ Self-Similarity - try reusing solutions across projects
▪ Improvement - perfect software doesn’t exist
▪ Diversity - different skills benefit a software team
▪ Quality - can not be sacrificed
30
Attacking Risks Before They Arise (7)
(1) Schedule Slips
(2) Project Cancellation
(3) System Defect Rate Too High, or Degrades with Maintenance
(4) Business Misunderstood
(5) Business Changes
(6) Featuritis (or False Feature Risk)
(7) Staff Turnover
31
XP in Practice - Planning Practices
▪ Stories
▪ Cycles
▪ Slack
▪ The Planning Game - Business vs. Technical Constraints
▪ Plan for Small Releases
32
XP in Practice – Programming Practices
▪ Pair Programming
▪ Test First/Test Driven Programming
▪ Incremental Design
▪ Coding Standards
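As a small illustration of the test-first practice, a minimal Python sketch (the add function and its test are hypothetical, not from the lecture); the test is written before the code that makes it pass:

import unittest

# Step 1: write the test first (it fails until add() below exists and is correct).
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2: write the simplest code that makes the test pass.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()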
33
XP in Practice - Integration Practices
▪ Continuous Integration
▪ Ten-Minute Build
34
XP in Practice - General Practices
▪ Sit Together
▪ Whole Team
▪ Informative Workspace
35
Scrum is ...
“a lightweight framework that helps people, teams and organizations generate value through adaptive solutions for complex problems.”
36
Foundation of Scrum
▪ Empiricism: knowledge is acquired through experience
▪ Lean thinking: focus on the core essentials and reduce waste whenever possible
▪ Transparency: both the process and work artifacts must be visible
37
Scrum Values
Commitment, Focus, Openness, Respect, Courage
38
What method uses sprints, and what are they?
The Scrum process is composed of a sequence of sprints
▪ In Scrum a sprint is short – usually <1 month long (1-4 weeks)
39
Members of a scrum team and brief overview.
Developers - “committed to creating any aspect of a usable Increment (in) each Sprint.”
Product Owner - “accountable for maximizing the value of the product resulting from the work of the Scrum Team.”
Scrum Master - “accountable for the Scrum Team’s effectiveness. They do this by enabling the Scrum Team to improve its practices, within the Scrum framework.”
40
The key Scrum metric is:
Velocity: the number of story points done in a given sprint.
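A minimal sketch of how velocity might be tracked (the story-point numbers are hypothetical):

# Story points completed ("done") in each of the last few sprints (hypothetical).
completed_points_per_sprint = [21, 18, 24]

# Velocity of the latest sprint, and the running average often used for planning.
latest_velocity = completed_points_per_sprint[-1]
average_velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

print(latest_velocity)   # 24
print(average_velocity)  # 21.0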
41
Kanban
“Kanban is all about visualizing your work, limiting work in progress, and maximizing efficiency (or flow). Kanban teams focus on reducing the time it takes to take a project (or user story) from start to finish. They do this by using a kanban board and continuously improving their flow of work.”
42
The key Kanban metrics are:
▪ Lead time
▪ Cycle time
▪ Cumulative Flow Diagram (CFD)
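A minimal sketch distinguishing lead time from cycle time (the timestamps are hypothetical, and the exact start points vary by team):

from datetime import datetime

# Hypothetical timestamps for one Kanban card.
created    = datetime(2024, 3, 1, 9, 0)   # card added to the backlog
work_start = datetime(2024, 3, 4, 10, 0)  # card pulled into "In Progress"
done       = datetime(2024, 3, 6, 16, 0)  # card reached "Done"

lead_time = done - created      # customer-visible: request to delivered
cycle_time = done - work_start  # team-visible: work started to delivered

print(lead_time)   # 5 days, 7:00:00
print(cycle_time)  # 2 days, 6:00:00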
43
Kanban Overview
▪ A card is used to represent each work item (task) in a project
▪ The cards are organized on a Kanban board
▪ Multiple columns each represent stages of workflow
▪ The stages of workflow represented in the board will vary across projects
▪ Done rules are used to move cards to the next step
44
The Kanban method subscribes to the principle of
continuous release and collective ownership
45
What is Testing?
Testing is the process of executing software in a controlled manner, to answer the question: “does the software behave as specified?”
46
Verification vs. Validation
Verification - “Are we doing the job right?”
Validation - “Are we doing the right job?”
47
Verification vs. Validation: where is testing more useful?
Verification
48
Debugging vs. Testing
Debugging: the process of analyzing and locating bugs when the software does not behave as expected.
Testing: the process of methodically searching for and exposing bugs (not just fixing those that happen to show up) – much more comprehensive.
49
What is Systematic Testing
Systematic testing is an explicit discipline or procedure (a system) for:
▪ choosing and creating test cases
▪ executing the tests and documenting the results
▪ evaluating the results, possibly automatically
▪ deciding when we are done (enough testing)
50
Levels of Specifications
1. Functional specifications (or requirements) give a precise description of the required behaviour of the system – what the software should do, not how it should do it – may also describe constraints on how this can be achieved
2. Design specifications describe the architecture of the design to implement the functional specification – the components of the software and how they are to relate to one another
3. Detailed design specifications describe how each component of the architecture is to be implemented – down to the individual code units
51
Levels of Testing
3. Unit testing addresses the verification that individual components meet their detailed design specification
2. Integration testing verifies that the groups of units corresponding to architectural elements of the design specification can be integrated to work as a whole
1. System testing verifies that the integrated total product has the functionality specified in the functional specification
52
Tests as Goals
It is important that the tests be designed without knowledge of the software implementation
53
Using Tests
If a problem is encountered, then either:
a) the tests are revised and applied again, if the tests are wrong, or
b) the software is fixed and the tests are applied again, if the software is wrong
54
Test Evolution
Testing does not end when the software is accepted by the customer
▪ Tests must be repeated, modified and extended to ensure that no existing functionality has been broken.
55
What are the 3 kinds of testing
▪ the development of code (unit testing),
▪ the integration of the units into subsystems (integration testing) and
▪ the acceptance of the first version of the software system (system testing)
56
What are the 2 test methods
▪ Black box methods
  – cannot see the software code (it may not exist yet!)
  – can only base their tests on the requirements or specifications
▪ White box (aka glass box) methods
  – can see the software’s code
  – can base their tests on the software’s actual architecture or code
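A small sketch contrasting the two methods on a hypothetical absolute-value function (the function and the targeted branches are assumptions for illustration):

import unittest

def my_abs(x):
    if x < 0:
        return -x
    return x

class BlackBoxTests(unittest.TestCase):
    # Derived from the specification only: "returns the absolute value".
    def test_spec_examples(self):
        self.assertEqual(my_abs(-5), 5)
        self.assertEqual(my_abs(7), 7)

class WhiteBoxTests(unittest.TestCase):
    # Derived from the code: exercise both branches of the if statement.
    def test_negative_branch(self):
        self.assertEqual(my_abs(-1), 1)
    def test_nonnegative_branch(self):
        self.assertEqual(my_abs(0), 0)

if __name__ == "__main__":
    unittest.main()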
57
Regression Tests
Once a system is stable and in production, we build and maintain a set of regression tests to ensure that when a change is made the existing behaviour has not been broken
58
Failure Tests
As failures are discovered and fixed, we also maintain a set of failure tests to ensure that we have really fixed the observed failures, and to make sure that we don’t cause them again
59
What are the 4 stages of Test Design
Typical test design stages are:
▪ test strategy
▪ test planning
▪ test case design
▪ test procedure
60
Big Bang Testing Strategies
Test the entire software once it is complete
61
Incremental Testing Strategies
Test the software in phases (unit testing, integration testing, system testing)
▪ This testing strategy is what we use in Agile Development
▪ Incremental testing can occur bottom-up (using drivers) or top-down (using stubs)
▪ In general bottom-up is easier to perform but means the whole program behavior is observed at a later stage of development
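A minimal sketch of a driver (bottom-up) and a stub (top-down); the module and function names are hypothetical:

# Low-level unit that already exists (bottom-up testing starts here).
def sum_totals(orders):
    return sum(o["total"] for o in orders)

# Driver: throwaway harness that calls the low-level unit so it can be tested
# before the higher-level modules exist (bottom-up).
def sum_totals_driver():
    assert sum_totals([{"total": 4.0}, {"total": 6.0}]) == 10.0

# Stub: canned stand-in for a lower-level module that does not exist yet,
# so the high-level module can be tested first (top-down).
def fetch_orders_stub(customer_id):
    return [{"total": 4.0}, {"total": 6.0}]

def order_report(customer_id, fetch_orders=fetch_orders_stub):
    return "total = {:.2f}".format(sum_totals(fetch_orders(customer_id)))

sum_totals_driver()
print(order_report(42))  # total = 10.00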
62
Big Bang vs. Incremental
Big bang testing only works with a very small and simple program
▪ In general, incremental testing has several advantages:
  ▪ Error identification ✓ – easier to identify more errors
  ▪ Error correction ✓ – simpler and requires less resources
63
Test Plans (4)
▪ the items to be tested
▪ the level they will be tested at
▪ the order they will be tested in
▪ the test environment
64
Test Design (4)
Test Strategy, Test Plans, Test Case Design, Test Procedures
65
to be a systematic test method, we must have
▪ a system (rule) for creating tests
▪ a measure of completeness
66
▪ Need an easy, systematic way to create test cases (to know for sure what to test)
▪ Need an easy, systematic way to run tests (to know how to test)
▪ Need an easy, systematic way to decide when we’re done (to know when we have enough tests)