Final Exam Flashcards

1
Q

What is a process?

A

A collection of work activities, actions and tasks that are performed when some work product is to be created

2
Q

What does a methodology address?

A

Introducing new people to the process, substituting people, delineating responsibilities, demonstrating visible progress

3
Q

How do you evaluate a methodology?

A

How rapidly you can substitute or train people, how great an effect it has on the sales process, how much freedom it gives to people on the project, how fast it allows people to respond to changing situations, how well it protects the organization - legally or from other damages

4
Q

What are the elements of a methodology?

A

Teams, Roles, Skills, Techniques, Activities, Process, Work Products, Milestones, Standards, Quality

5
Q

What are some characteristics of plan driven methodologies?

A
  • Focus on repeatability and predictability
  • Defined, standardized, and incrementally improving processes
  • Thorough documentation
  • Software architecture defined up-front
  • Detailed plans, workflows, roles, responsibilities, and work product descriptions
  • Process group containing resources for specialists: process monitoring, controlling, and educating
  • Ongoing risk management
  • Focus on verification and validation
6
Q

What are the three plan-driven methodologies?

A
  • Personal software process
  • Team Software Process
  • Rational Unified Process
7
Q

What is PSP (personal software process)?

A

Individual process methodology. Structured framework of forms, guidelines, and procedures intended to guide an engineer in using a defined, measured, planned, and quality controlled process

  • Goal is to assess individual development skills to improve performance
  • Evolutionary improvement approach.
  • “early defect detection is much less expensive than later defect removal”
  • each level has detailed scripts, checklists, and templates to guide the engineer through the required steps
8
Q

What are some PSP artifacts?

A

Scripts (orderly structure of steps for each phase of development and review), Forms (data collection for defect recording, time recording, and project planning), checklists (design, coding, etc)

9
Q

What are the advantages of PSP?

A

Improved size & time estimation, improved productivity, reduced testing time, improved quality

10
Q

What are the disadvantages of PSP?

A

Pushback on forms & detailed data recording; requires discipline and the opportunity to work on TSP teams

11
Q

What is the Team Software Process (TSP)?

A

Team planning, building, and control. Project divided into overlapping, iterative development cycles. Each cycle is a mini waterfall consisting of cycle launch, strategy, planning, requirements, design, implementation, test, and postmortem

12
Q

Advantages of TSP?

A

Scripted (consistent) process activities, teams take ownership of their process and plans, process improvement focus, visible tracking.

13
Q

Disadvantages of TSP?

A

Similar to PSP (artifact centric, high ceremony); doesn't scale well for small teams/short projects

14
Q

What are the phases of the RUP model?

A

Inception phase (what to do, business case, scope. Initial project plan, define risks),

Elaboration phase (analyze the problem domain and define a technically feasible architecture. Mitigate highest risks, make detailed project plan with prioritized risks),

Construction phase (Develop, integrate and test. Provide user documentation), and

Transition phase (distribute the product to the customers and maintain it).

15
Q

What are the supporting workflows to RUP?

A
  • Project Management (management of competing objectives, risks to the project and successful delivery of a product)
  • Configuration and Change Management (Management of parallel development, development done at multiple sites, change requests)
  • Environment (Provide tools to a software project and adaptation of RUP to a specific project)
16
Q

What is the basic process of estimations?

A
  1. Estimate the size of the product
  2. Estimate the effort (man-months)
  3. Estimate the schedule
17
Q

What are some estimation techniques?

A

Top down, bottom up, analogy, expert judgement, priced to win, parametric or algorithmic method

18
Q

Describe (inc. pros and cons) of top-down estimations

A
  • Based on overall characteristics of a project.
    + Easy to calculate
    + Effective early on
  • Some models are questionable or may not fit
  • Less accurate because it doesn’t look at the details
19
Q

Describe (inc. pros and cons) of bottom-up estimation

A
  • Create a WBS to identify individual tasks to be done
  • Add from the bottom-up
    + Works well if activities are well understood
  • Specific activities are not always known
  • More time consuming
20
Q

Describe (inc. pros and cons) of expert judgement

A

Use somebody who has recent experience on a similar project. You get a guesstimate, accuracy depends on their ‘real’ expertise, comparable application(s) must be accurately chosen

21
Q

Describe (inc. pros and cons) of estimation by analogy

A
Use past project (must be sufficiently similar, comparable attributes)
+ Based on actual historical data
- Difficult matching project types
- Prior data may have been mis-measured
- How to measure differences
22
Q

Describe algorithmic measures

A

Lines of Code (LOC), function points, feature points or object points. LOC and function points are most common; the majority of projects use none of the above

23
Q

What are the pros and cons of code-based estimations?

A
+ Commonly understood metric
+ Permits specific comparisons
+ Actuals easily measured
- Difficult to estimate early in cycle
- Counts vary by language
- Many costs not considered (ex. reqs)
- Programmers may be rewarded based on this (#defects/#LOC)
- Code generators produce excess code
24
Q

What are some estimation issues with LOC?

A
  • How do you know how many in advance?
  • What about different languages?
  • What about programmer style?
  • Most algo approaches are more effective after requirements (or have to be after)
25
Q

Describe wideband delphi, including pros and cons

A

Group consensus approach
+ Easy, inexpensive, utilizes expertise of several people
+ Does not require historical data
- Difficult to repeat
- May fail to reach consensus, reach wrong one, or may have same bias

26
Q

Describe function points

A

Software size measured by number & complexity of functions it performs, more methodical than LOC counts

27
Q

Describe code reuse and estimation?

A

If code is more than 50% modified, it's "new"
Reused code takes 30% of the effort of new
Modified is 60% of new
Integration effort with reused code is almost as expensive as with new code
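The reuse weights above can be sketched as a small calculation (the function name and the handling of the 50% cutoff are illustrative assumptions, not part of the card):

```python
def equivalent_new_loc(new, reused, modified, modified_fraction):
    """Convert a mix of new/reused/modified LOC into 'equivalent new' LOC.

    Per the card's rules of thumb: code modified more than 50% counts
    as new; reused code takes 30% of new-code effort; modified code 60%.
    """
    if modified_fraction > 0.5:  # heavily modified code is treated as new
        new += modified
        modified = 0
    return new + 0.3 * reused + 0.6 * modified
```

For example, 1000 LOC each of new, reused, and lightly modified code comes to 1900 equivalent-new LOC.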

28
Q

Describe COCOMO

A

Input - LOC
Output - Person-Months
Biggest weakness is that it requires input of a product size estimate in LOC
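As a sketch of how the LOC input drives the output, here are basic COCOMO's organic-mode equations with Boehm's published coefficients (the simplest form of the model, not the full COCOMO II):

```python
def basic_cocomo_organic(kloc):
    """Basic COCOMO, organic mode: effort from size in KLOC."""
    effort = 2.4 * kloc ** 1.05      # person-months
    schedule = 2.5 * effort ** 0.38  # development time in calendar months
    return effort, schedule
```

For a 10 KLOC product this gives roughly 27 person-months spread over roughly 9 months.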

29
Q

What are some issues with over estimations?

A
  • Project will not be funded
  • Parkinson’s Law: Work expands to take the time allowed
  • Danger of feature and scope creep
  • Beware of double-padding: Team member + manager
30
Q

What are some issues with under estimations?

A
  • Quality issues (short-changing key phases like testing)
  • Inability to meet deadlines
  • Morale and other team motivation issues (DEATH MARCH)
31
Q

Describe estimation for agile development

A
  • Each user scenario is considered separately
  • Each task is estimated separately
  • Total scenario estimate is computed as the sum of the estimates for each task
  • The effort estimates for all scenarios in the increment are summed to get an increment estimate
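The sum-of-tasks arithmetic described on this card might look like the following (the data and names are hypothetical):

```python
def increment_estimate(scenarios):
    """scenarios maps a scenario name to its list of per-task estimates.

    Each scenario's estimate is the sum of its task estimates; the
    increment estimate is the sum over all scenarios.
    """
    scenario_totals = {name: sum(tasks) for name, tasks in scenarios.items()}
    return scenario_totals, sum(scenario_totals.values())
```

E.g., a "login" scenario with tasks estimated at 3 and 5 hours and a "checkout" scenario with tasks at 8, 2, and 1 hours yields an increment estimate of 19 hours.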
32
Q

What are some estimation guidelines?

A
  • Estimate iteratively! Process of gradual refinement, make your best estimates at each planning stage, refine estimates and adjust plans iteratively, plans and decisions can be refined in response. Balance: too many revisions vs too few.
33
Q

Estimation presentation techniques

A
  • Plus or minus quantifiers (e.g., 6 months +/- 1 month)
  • Ranges: 6-8 months
  • Risk quantification
    • +/- with added information
    • +1 month if new tools not working as expected
  • Cases: Best/Planned/Current/Worst case
  • Coarse Dates: Q3 '02
  • Confidence Factor (April 1 - 10% probability, July 1 - 50%, etc.)
34
Q

What are some “other” estimation factors?

A
  • Account for resource experience or skill, allow for “non-project” time & common tasks, use commercial estimation tools
35
Q

What is software quality?

A

Meets the explicit and implicit requirements - the needs.

36
Q

What is the purpose of software testing?

A

Assess and evaluate the quality of work performed at each stage of the software development process. The goal is to ensure software performs as intended and to improve software quality, reliability, and maintainability

Testing is a measure of quality; it does not deliver quality

37
Q

Goals of quality assurance (QA) activities?

A
  • Few, if any, defects remain in the system when it's delivered
  • Remaining defects will cause minimal disruptions or damages
38
Q

Verification

A

Are we building the product right? Software should conform to its design

39
Q

Validation

A

Are we building the right product? Validate the implementation. Software should do what the user really requires.

Validation and Verification: Build the right product and build it right.

40
Q

QA technique classification:

A
  • Defect prevention (remove human error sources, block defects from being injected into software artifacts)
  • Defect reduction
    + Detect defects (Inspection, Testing)
    + Remove defects (Debugging, Rework requirements, design, code, etc)
  • Defect containment
    + Fault tolerance
    + Fault containment
41
Q

What is the sweet spot of testing?

A

The point where the number of defects found intersects the cost of testing.

42
Q

What is measure?

A

Provides a quantitative indication of the size of some product or process attribute

43
Q

What is measurement?

A

The act of obtaining a measure

44
Q

What is a software metric?

A

Any type of measurement which relates to a software system, process, or related documentation. LOC in a program, number of person-days required to develop a component

  • Allow the software and the software process to be quantified
  • Measures of the software process or product
45
Q

What are the metric categories?

A

Product - Assesses the quality of the design and construction of the software product being built

Process & Project - Quantitative measures that enable software engineers to gain insight into the efficiency of the software process and the projects conducted using the process framework

46
Q

What are some process metrics?

A

Private process metrics: defect rates by individual or module, known only to the individual or team concerned

Public process metrics enable organizations to make strategic changes to improve software process

Metrics should not be used to evaluate the performance of individuals

  • Statistical software process improvement helps an organization discover where they are strong and where they are weak
47
Q

What are some classes of product metrics?

A

Should be a predictor of product quality
Classes:
- Dynamic metrics which are collected by measurements made of a program in execution
- Static metrics collected by measurements made of the system representations
- Dynamic metrics help assess efficiency and reliability; static metrics help assess complexity, understandability, and maintainability

48
Q

What should every project measure?

A

Inputs, Outputs, and the results

49
Q

What are project metrics used for?

A

Avoid development schedule delays, mitigate potential risks, and assess product quality on an ongoing basis

50
Q

What are some direct measures?

A

Process: Cost and effort
Product: LOC, execution speed, memory size, defects reported over some time period

51
Q

What are some indirect measures?

A

Examines the quality of the software product itself (functionality, complexity, efficiency, reliability, maintainability)

52
Q

What are size-oriented metrics?

A

Derived by normalizing (dividing) any direct measure (such as defects or human effort) associated with the product or project by LOC

Size-oriented metrics are widely used, but their validity and applicability are a matter of some debate
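A minimal sketch of the normalization described above (the metric names and inputs are illustrative):

```python
def size_oriented_metrics(loc, defects, effort_pm, cost):
    """Normalize direct measures by product size (LOC or KLOC)."""
    kloc = loc / 1000
    return {
        "defects_per_kloc": defects / kloc,        # quality, normalized by size
        "loc_per_person_month": loc / effort_pm,   # productivity
        "cost_per_loc": cost / loc,                # cost, normalized by size
    }
```

E.g., 25 defects in a 10,000-LOC product gives 2.5 defects per KLOC.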

53
Q

What are function-oriented metrics?

A
  • They are computed from direct measures of the information domain of a business software application and assessment of its complexity.
  • Once computed, function points are used like LOC to normalize measures of software productivity, quality, and other attributes
  • Relationship of LOC and function points depends on the language used to implement the software
54
Q

What are some metric assumptions?

A

A software property can be measured

The relationship exists between what we can measure and what we want to know

This relationship has been formalized and validated

It may be difficult to relate what can be measured to desirable quality attributes

55
Q

What are some internal and external attributes metrics?

A

Maintainability: Number of procedure parameters, Cyclomatic complexity, length of user manual

Reliability: Cyclomatic complexity, LOC, number of error messages

Portability: Number of procedure parameters, LOC

Usability: Number of error messages, Length of user manual

56
Q

What are three types of automatic data collection?

A
  • Static product analysis
  • Dynamic product analysis
  • Process data collection
57
Q

What are software product metrics?

A

Fan in/Fan out (Fan-in is a measure of the number of functions that call function X. Fan-out is the number of functions which are called by function X. A high value for fan-in means that X is tightly coupled to the rest of the design and changes to X will have extensive knock-on effects. A high value for fan-out suggests that the overall complexity of X may be high because of the complexity of the control logic needed to coordinate the called components)

  • Length of Code (Large => More complex + error prone)
  • Cyclomatic Complexity (Measure of control complexity of a program. Can be related to program understandability)
  • Length of identifier (the longer, the more likely they are to be meaningful)
  • Depth of conditional nesting (Measure of the depth of nesting of if-statements in a program. Deeply nested if statements are hard to understand and are potentially error-prone)
  • Fog Index (Measures the average length of words and sentences in documents. The higher the value of the fog index, the more difficult the document may be to understand)
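A crude sketch of the cyclomatic-complexity metric from the list above, using V(G) = decision points + 1. Real tools build the control-flow graph rather than matching keywords, so this is only an approximation:

```python
import re

def cyclomatic_complexity(source):
    """Approximate V(G) as (number of branching keywords) + 1."""
    decisions = re.findall(r"\b(?:if|elif|for|while|and|or|except)\b", source)
    return len(decisions) + 1
```

Straight-line code scores 1; each branch or boolean connective adds one, e.g. `if a and b:` scores 3.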
58
Q

What are some object oriented metrics?

A
  • Depth of inheritance tree
  • Method fan-in/fan-out
  • Weighted methods per class
  • Number of overriding operations
59
Q

What are some of the top ten metrics (usage)?

A
  • Number of defects found after a release
  • Number of changes or change requests
  • User or customer satisfaction
  • Number of defects found during development
  • Documentation completeness/ accuracy
  • Time to identify/ correct defects
  • Defect distribution by type/class
  • Error by major function/ feature
  • Test coverage by specifications
  • Test coverage of code
60
Q

What are the software change strategies?

A
  • Software maintenance: Changes are made in response to changed requirements but the fundamental software structure is stable
  • Architecture transformation: The architecture of the system is modified generally from a centralized architecture to a distributed architecture
  • Software re-engineering: No new functionality is added to the system but it is restructured and reorganized to facilitate future changes
61
Q

What are the Lehman’s Laws?

A

Continuing Change: A program that is used in a real-world environment must change or become progressively less useful in that environment

Increasing complexity: As an evolving program changes, its structure tends to become more complex. Extra resources must be devoted to preserving and simplifying the structure

Large program evolution: Program evolution is a self-regulating process. System attributes such as size, time between releases, and the number of reported errors are approximately invariant for each system release

Organizational stability: Over a program’s lifetime, its rate of development is approximately constant and independent of the resources devoted to a system development

Conservation of familiarity: Over the lifetime of a system, the incremental change in each release is approximately constant

62
Q

What is software maintenance?

A
  • Modifying a program after it has been put into use.
  • Maintenance does not normally involve major changes to the system’s architecture
  • Changes are implemented by modifying existing components and adding new components to the system
63
Q

Maintenance is inevitable!

A
  • System requirements are likely to change while the system is being developed because the environment is changing => delivered system won't meet its requirements
  • Systems are tightly coupled with their environment => when a system is installed in an environment it changes that environment and therefore changes the system requirements
  • Systems MUST be maintained if they are to remain useful in an environment
64
Q

What are the types of maintenance?

A
  • Maintenance to repair software faults (changing a system to correct deficiencies in the way it meets its requirements - corrective maintenance)
  • Maintenance to adapt software to a different operating environment (adaptive maintenance)
  • Maintenance to add or to modify the system’s functionality (e.g., satisfy new requirements - Perfective Maintenance)
65
Q

Describe the maintenance costs!

A
  • Usually greater than development costs (2x to 100x depending on application)
  • Affected by both technical and non-technical factors
  • Increases as software is maintained. Maintenance corrupts the software structure, making further maintenance more difficult
  • Ageing software can have high support costs (old languages, compilers, etc)
66
Q

What are some maintenance cost factors?

A
  • Team stability (maintenance costs are reduced if the same staff are involved with them for some time)
  • Contractual responsibility (developers of a system may have no contractual responsibility for maintenance so there is no incentive to design for future change)
  • Staff skills (maintenance staff are often inexperienced and have limited domain knowledge)
  • Program age and structure (as programs age, their structure degrades and they become harder to understand and change)
67
Q

What is maintenance prediction?

A

Concerned with assessing which parts of the system may cause problems and have high maintenance costs.

  • Change acceptance depends on the maintainability of the components affected by the change
  • Implementing change degrades the system and reduces its maintainability
  • Maintenance costs depend on the number of changes and costs of change depends on maintainability
68
Q

Factors influencing relationship between system and its environment

A
  • Number and complexity of system interfaces
  • Number of inherently volatile system requirements
  • Business processes where the system is used
69
Q

What does complexity depend on (metrics)?

A
  • Complexity of control structures
  • Complexity of data structures
  • Procedure and module size
70
Q

What are some process metrics to assess maintainability?

A
  • The number of requests for corrective maintenance
  • Average time required for impact analysis
  • Average time taken to implement a change request
  • Number of outstanding change requests
    => IF ANY OR ALL OF THE ABOVE is increasing, this may indicate a decline in maintainability
71
Q

What are some defect methodologies?

A
  • Defect Counts
  • Defect Density Prediction
  • Defect Pooling
  • Defect Seeding
  • Defect Modeling
72
Q

Defects Fixed vs. Open Sweet Spot

A

Defects are being fixed faster than being found

73
Q

What is defect count?

A
  • Ratio of new defects to defects solved
  • Statistics on effort per defect

74
Q

What is defect density prediction?

A
  • Compare to historical data (similar projects, similar or same team)
  • Use pooling, seeding, and modeling
75
Q

What is defect pooling?

A

Prediction technique that separates defects found into two pools (e.g., by which team or when found - the split is arbitrary)
DefectsUnique = DefectsA + DefectsB - DefectsAB
- The number of total defects can then be approximated by DefectsTotal = (DefectsA * DefectsB) / DefectsAB
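The two pooling formulas on this card, as a small calculation:

```python
def defect_pool_estimate(defects_a, defects_b, defects_ab):
    """A and B are independently found defect pools; defects_ab is the overlap.

    Returns (unique defects found so far, estimated total defects).
    """
    unique = defects_a + defects_b - defects_ab
    total = (defects_a * defects_b) / defects_ab
    return unique, total
```

E.g., if pool A found 20 defects, pool B found 30, and 10 were found by both, the estimate is 60 total defects, of which 40 unique defects have been found so far.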

76
Q

What is defect seeding?

A

Defects are intentionally inserted into the product, and the proportion found is measured for prediction

IndigenousDefectsTotal = (SeededDefectsPlanted / SeededDefectsFound) * IndigenousDefectsFound
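The seeding formula as a calculation (a sketch; the variable names follow the card):

```python
def seeded_defect_estimate(seeded_planted, seeded_found, indigenous_found):
    """Scale indigenous defects found by the detection rate on seeded defects.

    If testing caught seeded_found of seeded_planted seeded defects, assume
    it caught the same fraction of the indigenous (real) defects.
    """
    return (seeded_planted / seeded_found) * indigenous_found
```

E.g., if 40 of 50 seeded defects were found alongside 80 indigenous defects, the estimate is 100 indigenous defects in total.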

77
Q

What is defect modeling?

A

Use previous data to build predictive model

  • The speed at which you find defects should be consistent with historical data
  • Don’t assume no defects found means you are ready for release (testing methodology may be flawed, testing and reporting systems may be biased)
78
Q

What are the CMM Levels?

A
  • Initial (essentially uncontrolled)
    No focus, and project success primarily depends on individuals and their heroics
  • Repeatable (Project management procedures defined and used)
    Focus is on project management. The basic project management processes are established to track cost, schedules, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
  • Defined (Process management procedures and strategies defined and used)
    Focus is on the engineering process.
    The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
  • Managed (quality management strategies defined and used)
    The focus is on product and process quality;
    Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
  • Optimizing (Process improvement strategies defined and used)
    Focus is on continuous process improvement. This is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.
79
Q

Describe the essence of CMM

A

CMM is the Capability Maturity Model and describes the principles and practices underlying software process maturity. Intended to help software organizations improve the maturity of their software processes in terms of an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes

The ultimate goal is to improve software development and maintenance in the area of cost, schedule, and quality

80
Q

What does CMM not do?

A

Not a silver bullet; does not address expertise in particular application domains, advocate specific software technologies, or suggest how to select, hire, motivate, and retain competent people.