Final Flashcards
Basis of Scheduling
- Predictive (Waterfall) Scheduling: Activities, broken down from Work Packages, are noted in their necessary order on a planned schedule, with changes needing a change request once the schedule is approved and baselined.
- Adaptive (Agile) Scheduling: The Product Owner owns the Product Backlog and Product Roadmap, showing at least the order of when features are to be delivered.
- Kanban or Pull Systems: The team “pulls” each piece of work when they are ready, respecting “Work in Progress” (WIP) limits to keep multi-tasking to a minimum.
- Siegel’s advice: Plan around tasks, their durations, and their interdependencies (versus fixed dates), then derive the anticipated completion date.
Activity Network Scheduling Method
- Define the tasks.
a. This refines our work breakdown structure.
b. Rolling Wave Planning = near-term work packages are defined in more detail than long-term work packages, which are defined at a later date. The plan must be revisited to define the long-term work packages.
c. Siegel’s advice: the lowest WBS level should consist of work that can be completed in no longer than one or two months.
- Identify the interdependencies between the tasks.
a. This assists with sequencing the tasks and finding the critical path.
b. Precedence Diagramming Method = graphical representation of project work flow where arrows link work packages together.
- Estimate the duration of each task, in a statistical fashion.
a. Analogous (top-down): Uses historical information (an analogy) to estimate (e.g., time, budget, difficulty). Faster but lower accuracy.
b. Parametric: Using a parameter to estimate, like $55 a meter or $100 an hour. Medium effort, medium accuracy.
c. Bottom-up: Adding together the smallest pieces to get an overall estimate (i.e. cost of each work package combined for the project budget). High effort, high accuracy.
d. 3-point: An average of three estimates: Optimistic, Nominal (Most Likely), and Pessimistic. Triangular Distribution: (O + M + P) / 3. PERT (Program Evaluation and Review Technique) or Beta Distribution: (O + (4 x M) + P) / 6. Siegel’s advice: weight both deviation and probability toward the pessimistic. (See the sketch after this list.)
e. Wideband Delphi (“Planning poker” in Agile): the people doing the work estimate the effort (or cost). The high and low estimators discuss their reasons, then everyone re-estimates until a consensus is reached. Useful in complex situations.
- Only the initial task in each chain is fixed to an actual calendar date; all other dates are derived from the task interdependencies and the statistically expressed task durations.
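A minimal sketch of the two three-point formulas from item (d); the example task durations are invented.

```python
# Three-point estimate formulas from item (d); the O/M/P values are made up.

def triangular(o: float, m: float, p: float) -> float:
    """Triangular distribution: simple average of the three points."""
    return (o + m + p) / 3

def pert(o: float, m: float, p: float) -> float:
    """PERT / Beta distribution: weights the most-likely estimate 4x."""
    return (o + 4 * m + p) / 6

# Example: optimistic 4 days, most likely 6 days, pessimistic 14 days.
o, m, p = 4, 6, 14
print(f"Triangular: {triangular(o, m, p):.1f} days")  # 8.0 days
print(f"PERT:       {pert(o, m, p):.1f} days")        # 7.0 days
```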
Precedence Diagramming Method
- Finish to Start
- The next activity cannot start until the previous activity has finished
- Start to Start
- The next activity cannot start until the previous activity has started
- Finish to Finish
- The next activity cannot finish until the previous activity has finished
- Start to Finish
- The next activity cannot finish until the previous activity has started
Dependency Types
- Mandatory: one work package MUST be completed prior to start of next work package.
- E.g., purchasing paint prior to painting walls.
- Discretionary: a preferred rather than required ordering; work packages can occur in tandem.
- E.g., painting walls and laying carpet at the same time.
- External: work package linked to non-project activities.
- E.g., supplier stocking paint prior to your purchase.
- Internal: work package linked to project activities.
- E.g., who is responsible for purchasing the paint and who is responsible for painting the walls.
Leads and Lags
- Lead time is the amount of time you can bring an item forward. You are leading it forward.
you are here <–LeadTime– Item is here
- Lag time is the amount of time you can delay an item. It is lagging behind.
you are here Item is here –LagTime–>
Critical Path Method
- Critical Path = the longest sequence of dependent tasks through the network; it determines the shortest possible time in which the project can be completed.
- Divide project into tasks
- Estimate each task
- Create network diagram
- Create initial Gantt chart
- Perform resource leveling
- Compress schedule (if necessary)
- Critical Chain = a critical-path variant that also accounts for the resources required to execute project tasks and their availability.
Resource Leveling
- When shared resources are over-allocated, or a resource has been assigned to two or more activities during the same period.
Resource Smoothing
- Adjusts the activities within their free and total Float (the Critical Path is not changed when doing this).
Network Diagram Node Layout
- Top left: Early Start (ES) | Top middle: Duration | Top right: Early Finish (EF)
- Middle: Activity name
- Bottom left: Late Start (LS) | Bottom middle: Float/Slack | Bottom right: Late Finish (LF)
Early Finish = Early Start + Duration – 1
Late Start = Late Finish – Duration + 1
Float/Slack = Late Start – Early Start
Schedule Compression Techniques
- Schedule Fast Tracking is when activities on the Critical Path are done in parallel (overlapped) to shorten the project duration. Adds risk and may lead to rework.
- Schedule Crashing is approving overtime, adding resources, or paying to expedite delivery of activities on the critical path. Adds cost.
Reserve Analysis
- AKA Slack Time, Contingency Reserve, Time Reserves, Buffer.
- A percentage or a set determined time allowance.
- Added because of Risk Factors.
Critical Path Example: Build a New Concrete Driveway
- Step 1: Divide project into tasks
Task# | TaskName | Predecessors | Responsible
1 | Excavation | | Team
2 | Build Forms | 1 | Team
3 | Place Rebar | 1 | Subcontractor
4 | Pour concrete | 2, 3 | Subcontractor
5 | Set and cure concrete | 4 | Team
6 | Strip forms | 5 | Team
- Step 2: Estimate tasks
Task# | Task Name | Resource | Duration | Cost/Use | Subtotal | Total
1 | Excavation | Mini-excavator | 4 days | $450/day | $1,800 |
1 | Excavation - Team | Labor | 12 hours | $20/hour | $240 | $2,040
- Step 3: Create Network Diagram
- Add durations for each activity in center top square.
- Perform forward pass: enter early start and early finish dates.
- Early Finish = Early Start + Duration – 1
- Perform backward pass: enter late start and late finish dates.
- Late Start = Late Finish – Duration + 1
- Enter Float or Slack (= Late Start – Early Start)
- Shortest project duration is early finish of the final task (here, 17 days)
Network edges (task durations in days):
Excavation (4) -> Build Forms(2)
Excavation (4) -> Place Rebar (6)
Build Forms(2) -> Pour Concrete(1)
Place Rebar (6) -> Pour Concrete(1)
Pour Concrete(1) -> Set and Cure (5)
Set and Cure (5) -> Strip Forms(1)
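A minimal sketch of the forward and backward passes over the network above, using these flashcards' one-based day convention (EF = ES + D − 1, LS = LF − D + 1). Task names, durations, and predecessors come from the example; everything else is illustrative.

```python
# Forward/backward pass over the driveway network from Steps 1-3.
durations = {"Excavation": 4, "Build Forms": 2, "Place Rebar": 6,
             "Pour Concrete": 1, "Set and Cure": 5, "Strip Forms": 1}
preds = {"Excavation": [], "Build Forms": ["Excavation"],
         "Place Rebar": ["Excavation"],
         "Pour Concrete": ["Build Forms", "Place Rebar"],
         "Set and Cure": ["Pour Concrete"], "Strip Forms": ["Set and Cure"]}

order = list(durations)                  # insertion order is already topological
es, ef = {}, {}
for t in order:                          # forward pass
    es[t] = max((ef[p] + 1 for p in preds[t]), default=1)
    ef[t] = es[t] + durations[t] - 1

succs = {t: [s for s in order if t in preds[s]] for t in order}
ls, lf = {}, {}
project_end = max(ef.values())           # shortest project duration: 17 days
for t in reversed(order):                # backward pass
    lf[t] = min((ls[s] - 1 for s in succs[t]), default=project_end)
    ls[t] = lf[t] - durations[t] + 1

for t in order:
    slack = ls[t] - es[t]                # Float = LS - ES; 0 means critical
    mark = "  <- critical" if slack == 0 else ""
    print(f"{t:13} ES={es[t]:2} EF={ef[t]:2} LS={ls[t]:2} LF={lf[t]:2} Float={slack}{mark}")
```

Running this reproduces the 17-day duration and shows the critical path (Excavation, Place Rebar, Pour Concrete, Set and Cure, Strip Forms); only Build Forms has float (4 days).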
- Step 4: Create Initial Gantt Chart
- Step 5: Perform Resource Leveling
- Step 6: Schedule Compression (if necessary)
Cost Management Plan
- Defines how the project costs will be estimated, budgeted, managed, monitored, and controlled.
- Value Engineering = finding a more economical way of doing work to achieve the project goal / scope.
Inputs | Tools and Techniques | Outputs
Project Charter | Expert Judgement | Cost Management Plan
Project Management Plan | Data Analysis | -
Enterprise Environmental Factors | Meetings | -
Organizational Process Assets | - | -
Cost Definitions
Cost Type | Definition
Fixed | Costs that stay the same throughout the life of the project (e.g., physical assets).
Variable | Costs that vary over the course of a project (e.g., hourly labor).
Direct | Expenses billed directly to the project (e.g., materials).
Indirect | Costs shared and allocated among several projects (e.g., electricity).
Sunk | Costs already invested into the project that cannot be recovered.
Cost Estimation
- Management Reserve = money set aside to deal with problems (unexpected activities related to in-scope work) as they arise on the project.
- Contingency Reserve = money set aside to deal with planned risks, should they occur.
- Definitive Estimates: -5% to +10%
- Budget Estimates: -10% to +25%
- Rough Order of Magnitude Estimates: -25% to +75%
Budget Roll-up (from the diagram)
- Each Work Package (WP) carries a Cost Estimate (CE).
- Sum of work package cost estimates + Contingency Reserve (CR) = Cost Baseline (CB).
- Cost Baseline (CB) + Management Reserve (MR) = Project Budget (PB).
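A quick arithmetic sketch of the roll-up; all dollar figures are invented.

```python
# Invented work-package estimates; the roll-up structure follows the diagram above.
wp_estimates = [40_000, 25_000, 35_000]   # WP1..WP3 cost estimates (CE)
contingency_reserve = 10_000              # CR: for identified (planned) risks
management_reserve = 12_000               # MR: for unexpected problems

cost_baseline = sum(wp_estimates) + contingency_reserve   # CB = 110,000
project_budget = cost_baseline + management_reserve       # PB = 122,000
print(cost_baseline, project_budget)
```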
Earned Value Analysis
- Budget at Completion (BAC): total planned budget.
- Planned Value (PV) = BAC x (Planned % Completed) - shows the work that should have been completed by that time.
- Earned Value (EV) = BAC x (Actual % Completed) - what we have actually completed (earned) at a given point in time.
- Actual Cost (AC): what was actually spent at that point in time.
- Estimate at Completion (EAC) = AC + (BAC – EV) (remaining work at the budgeted rate), or EAC = BAC / CPI (current cost performance continues).
- To Complete Performance Index (TCPI) = (BAC – EV) / (BAC – AC)
Variance Analysis
- Variance at Completion (VAC) = BAC – EAC
- Cost Variance (CV) = EV – AC
- “+” is good, “–” is bad.
- Cost Performance Index (CPI) = EV / AC
- > 1 is good (under budget), < 1 is bad (over budget).
- Schedule Variance (SV) = EV – PV
- “+” is good, “–” is bad.
- Schedule Performance Index (SPI) = EV / PV
- > 1 is good (ahead of schedule), < 1 is bad (behind schedule).
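A minimal sketch of the earned-value and variance formulas above; the project numbers are invented.

```python
# Invented example: a $200k project, 40% of work planned by now,
# 30% actually done, $75k actually spent.
BAC = 200_000
PV = BAC * 0.40                      # planned value: 80,000
EV = BAC * 0.30                      # earned value:  60,000
AC = 75_000                          # actual cost

CV, SV = EV - AC, EV - PV            # -15,000 (over budget), -20,000 (behind)
CPI, SPI = EV / AC, EV / PV          # 0.80, 0.75
EAC = BAC / CPI                      # 250,000 (assumes current CPI continues)
VAC = BAC - EAC                      # -50,000
TCPI = (BAC - EV) / (BAC - AC)       # 1.12: efficiency needed to finish on budget
print(CV, SV, round(CPI, 2), round(SPI, 2), EAC, VAC, round(TCPI, 2))
```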
S Curve
(Image: see the Week 8 slides.)
Decision Making Tools
- Benefit-cost ratio (BCR): the amount of money a project will make vs. how much it will cost. A larger ratio indicates a better investment.
- Net present value (NPV): the present value of a project's returns minus all of the costs associated with it. People calculate this number to see if a project is worth doing.
- Opportunity cost: the money you forego because you chose not to do a project.
- Internal rate of return (IRR): the rate of return the project will generate, i.e., how much money a project is making the company.
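A minimal NPV sketch under an assumed 8% discount rate; the cash flows are invented.

```python
# NPV: discount each year's cash flow back to present value (year 0 = now).
def npv(rate: float, cash_flows: list[float]) -> float:
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-100_000, 40_000, 40_000, 40_000]   # initial cost, then 3 years of returns
print(f"NPV: {npv(0.08, flows):,.0f}")        # ~3,083: positive, so worth doing
```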
Measurement Process Steps (10)
- Decide what to measure
- What measurements (data) do we need to make a certain decision?
- Determine how accurately we need to measure
- Do we need to know the weight to the nearest pound, or to some other degree of accuracy?
- Determine how to achieve that accuracy of measurement
- What tools and methods have the desired accuracy?
- Determine how much data must be collected
- Do we have to measure an item just once, or 10 times, or 1,000,000 times?
- Determine how, where, and when to collect the data
- Under what conditions do we have to measure this item? What tools do we need to do so? What are the operational states of our system when it is valid to collect these data?
- Understand the range of validity of the data
- Are the measurements valid only at certain times? Under certain conditions?
- Validate and calibrate the data
- Is our scale accurate? Is our tool accurate? How do we know?
- Analyze the data
- What do the data indicate? Are there alternate explanations for what the data appear to indicate?
- Draw conclusions from the data
- Where are we on solid ground? Where are we making judgments? What is the level of uncertainty? What risks could result from drawing the wrong conclusions?
- Document the entire process, so that it could be reconstructed
- Engineering is an iterative process; therefore we need to capture the process, tools, and data that we used to make this measurement for future comparisons.
What to measure?
- Technical performance measures
- Represent our degrees of design freedom
- Operational performance measures
- Represent the goodness of the system interpreted by the customer
- Management measures
- Represent our progress on the project
Key Performance Indicators (KPIs)
- Metrics used to evaluate the effectiveness of your project at achieving the defined objectives.
- Prioritize to focus on tracking essential and actionable metrics.
- Leading indicators – predict future performance, help take early corrective action
- Lagging indicators – reflect past performance, help assess success of completed activities
- Set targets and thresholds.
- Assess progress and readjust.
Types of KPIs (Schedule)
Indicator | Definition | Formula
Cycle time | Time to complete a task | End time – Start time
On-time completion % | Whether or not a task is completed by its deadline | (# on-time tasks / total # of tasks) x 100
Time Spent | Amount of time spent on tasks | Sum of hours spent by all team members
# Schedule Adjustments | How many times the team adjusted the completion date | Count of schedule adjustments
FTE Days vs. Calendar Days | How much time the team is spending on the project | FTE days / calendar days
Planned Hours vs. Time Spent | Estimated vs. actual time spent on tasks | Planned hours – Actual hours spent
Resource Capacity | # of individuals working on the project multiplied by % effort available for the project | # resources x available time (%)
Types of KPIs (Cost)
Indicator | Definition | Formula
Budget Variance | How much the actual budget varies from the projected budget | Actual budget – Projected budget
Budget Creation Cycle Time | Time needed to formulate an organization's budget | (End date – Start date) of budget creation
Line Items in Budget | Line items help managers track expenditures | Count of individual expenditure items
# Budget Iterations | # of budget versions produced before final approval | Count of budget iterations
Planned Value | Value of what is left to complete on the project | Total budget x % work remaining
Cost Performance Index | Compares the budgeted cost of work completed so far to the actual amount spent | Earned Value / Actual Cost
Types of KPIs (Quality)
Indicator | Definition | Formula
Customer Satisfaction | Measures whether the customer is satisfied with project outcomes | Survey score
Stakeholder Satisfaction | Assesses the satisfaction of all stakeholders | Survey score or feedback score
Net Promoter Score | How likely customers are to recommend your services | % Promoters – % Detractors
# Errors | How often tasks need to be redone | Count of errors or rework
Customer Complaints | Complaints received from the customer during the project | Count of customer complaints
Employee Churn Rate | Team members who leave the company during the project | (# departing employees / total employees) x 100
Types of KPIs (Effectiveness)
Indicator | Definition | Formula
Average Cost per Hour | Measures the cost of labor | Total labor costs / Total hours worked
Resource Profitability | Assesses the effectiveness of resource utilization | Revenue generated / Cost of resources
# Milestones Completed On-Time w/ Sign-off | Tracks milestones to measure the effectiveness of project execution | Count of on-time milestones with sign-off
# of Returns | Return rate on items | Count of returned items / Total items
Training/Research Needed for Project | Amount of training/research required before the project can start | Total training/research hours
# Cancelled Projects | A high number of cancellations can indicate issues | Count of cancelled projects
# Change Requests | Frequent change requests can affect budgets, resources, and timelines | Count of change requests
Risk Management Effectiveness | How effectively the team identifies, assesses, and mitigates risks | (# mitigated risks / total identified risks) x 100
Measurement Caveats
- Test programs should
- confirm that – under a defined set of conditions – the system works.
- identify the conditions where the system fails.
- recognize measurements involve error or uncertainty (noise).
- separate the signal (meaningful change) from the noise (random fluctuation).
- No data have meaning apart from their context.
- Two data points do not make a trend.
- Don’t rely on tables of numbers – graphs are useful.
- Understand common/routine vs. special/exception variation.
Statistics Example
You are a physician. A patient has come to you and asked to be tested for a particular, rare disease that ~1 person in 1000 has in the U.S. You agree and the results come back positive. The test has a false positive rate of 5%, or gives a positive result when it is actually negative 5% of the time. You call the patient and the patient asks you, “What is the probability that I have the disease, given that the test came back with a positive result?”
What do you tell her?
Need to calculate the posterior probability that she actually has the disease, given a positive test result. This is a classic example of applying Bayes’ theorem.
Bayes’ theorem allows us to update our beliefs (probabilities) based on new evidence. In this case, the new evidence is the positive test result.
Bayes’ theorem is:
P(Disease | Positive Test) = (P(Positive Test | Disease) x P(Disease)) / P(Positive Test)
Where:
- P(Disease|PositiveTest ) is the probability that the patient has the disease, given the positive test result (this is what we want to calculate).
- P(PositiveTest|Disease) is the probability of a positive test result given that the patient has the disease (i.e., the sensitivity or true positive rate of the test). Since we don’t have this value, we will assume it to be 100% for simplicity (i.e., the test is perfect when the disease is present).
- P(Disease) is the prior probability of the disease (the prevalence), which is 1/1000 = 0.001
- P(PositiveTest) is the total probability of getting a positive test result, which includes both true positives and false positives.
We can calculate P(PositiveTest) using the law of total probability:
P(PositiveTest)=P(PositiveTest|Disease) x P(Disease)+P(PositiveTest|NoDisease) x P(NoDisease)
Where:
* P(PositiveTest|NoDisease) is the false positive rate of the test, which is 5% or 0.05.
* P(NoDisease)=1−P(Disease)=0.999.
- Step 1: Calculate P(PositiveTest)
- Using the assumptions and values provided:
- P(PositiveTest) = (1) x (0.001) + (0.05) x (0.999)
- P(PositiveTest) = 0.001 + 0.04995 = 0.05095
- Step 2: Apply Bayes’ theorem
- Now, we can substitute into Bayes’ theorem:
- P(Disease|PositiveTest) = ((1) x (0.001)) / 0.05095
- P(Disease|PositiveTest) = 0.001 / 0.05095 ≈ 0.0196
- Final Answer: The probability that the patient actually has the disease, given that the test came back positive, is approximately 1.96%.
- Explanation: Even though the test is quite specific (only a 5% false positive rate), the disease is rare, so the probability of actually having the disease after a positive test result remains relatively low. This result highlights the importance of considering both the prevalence of the disease and the characteristics of the test when interpreting medical test results.
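The same calculation as a short sketch; the prevalence and false positive rate come from the problem statement, and sensitivity is assumed to be 100% as in the walkthrough.

```python
# The physician example above, computed directly.
prevalence = 0.001        # P(Disease)
sensitivity = 1.0         # P(Positive | Disease), assumed perfect
false_pos = 0.05          # P(Positive | No Disease)

p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(f"P(Disease | Positive) = {posterior:.4f}")  # 0.0196, i.e. ~1.96%
```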
Bayes theorem
Prior probability represents information of our actual prior state of knowledge (e.g. likelihood of having a disease before testing, say based on disease prevalence in a population)
Posterior (post-test probability) represents the revised estimate incorporating the test results
P(A|B) = (P(A) x P(B|A)) / P(B)
P(B) = P(B|A) x P(A) + P(B|A^c) x P(A^c)
Example: pathogen testing
P(Path|Pos) = (P(Path) x P(Pos|Path)) / (P(Pos|Path) x P(Path) + P(Pos|NoPath) x P(NoPath))
P(Path) = unknown (prior / pre-test probability)
P(NoPath) = 1 – P(Path)
P(Pos|Path) = sensitivity of the test (true positive rate)
P(Pos|NoPath) = false positive rate of the test (= 1 – specificity)
Bayes theorem Example 1
1000 patients, assume a pre-test probability of infection = 90%
- Test sensitivity = 70%
- Test specificity = 98%
TP = Sensitivity x Infected patients
FN = Infected patients – TP
TN = Specificity x Non-infected patients
FP = Non-infected patients – TN
PPV = TP / (TP + FP)
NPV = TN / (TN + FN)
90% Pre-Test Prob | Test Positive | Test Negative | Totals
Infected | 630 True Pos | 270 False Neg | 900
Not infected | 2 False Pos | 98 True Neg | 100
Totals | 632 Total Pos | 368 Total Neg | 1000
- Positive predictive value (PPV) = True Positives / Total Positives = 630/632 = 99.7%
- Negative predictive value (NPV) = True Negatives / Total Negatives = 98/368 = 26.6%
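A sketch that reproduces the Example 1 table from the prevalence, sensitivity, and specificity; the same function handles Example 2.

```python
# Build the 2x2 table and predictive values from prevalence, sensitivity,
# and specificity, following the TP/FN/TN/FP formulas above.
def predictive_values(n: int, pretest: float, sens: float, spec: float):
    infected = n * pretest
    healthy = n - infected
    tp, fn = sens * infected, (1 - sens) * infected
    tn, fp = spec * healthy, (1 - spec) * healthy
    ppv = tp / (tp + fp)   # P(infected | test positive)
    npv = tn / (tn + fn)   # P(not infected | test negative)
    return tp, fn, tn, fp, ppv, npv

tp, fn, tn, fp, ppv, npv = predictive_values(1000, 0.90, 0.70, 0.98)
print(tp, fn, tn, fp)                    # 630.0 270.0 98.0 2.0
print(f"PPV={ppv:.1%} NPV={npv:.1%}")    # PPV=99.7% NPV=26.6%
```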
Bayes theorem Example 2
1000 patients, assume a pre-test probability of infection = 5%
- Test sensitivity = 70%
- Test specificity = 98%
5% Pre-Test Prob | Test Positive | Test Negative | Totals
Infected | 35 True Pos | 15 False Neg | 50
Not infected | 19 False Pos | 931 True Neg | 950
Totals | 54 Total Pos | 946 Total Neg | 1000
- Positive predictive value (PPV) = True Positives / Total Positives = 35/54 = 65%
- Negative predictive value (NPV) = True Negatives / Total Negatives = 931/946 = 98.4%
Wheeler/Kazeef Signal Tests
If any of the following are true, the data contain a signal:
1. There is one point outside the upper or lower natural process limits.
2. Two out of any three consecutive points are in either of the 2 sigma to 3 sigma zones.
3. Four out of any five consecutive points are outside the 1 sigma zones.
4. Any seven consecutive points are on one side of the average.
5. Any moving-range value is above the moving-range limit.
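A rough sketch of the first four tests. The same-side condition of tests 2 and 3 is simplified to a magnitude check, and the data, mean, and sigma are invented.

```python
# Simplified checks for signal tests 1-4 above. `mean` and `sigma` would
# come from the process control limits; the sample data are invented.
def signals(data, mean, sigma):
    upper, lower = mean + 3 * sigma, mean - 3 * sigma
    flags = []
    if any(x > upper or x < lower for x in data):                 # test 1
        flags.append("1: point outside natural process limits")
    for i in range(len(data) - 2):                                # test 2
        if sum(abs(x - mean) > 2 * sigma for x in data[i:i + 3]) >= 2:
            flags.append(f"2: 2-of-3 beyond 2 sigma at index {i}")
            break
    for i in range(len(data) - 4):                                # test 3
        if sum(abs(x - mean) > sigma for x in data[i:i + 5]) >= 4:
            flags.append(f"3: 4-of-5 beyond 1 sigma at index {i}")
            break
    for i in range(len(data) - 6):                                # test 4
        window = data[i:i + 7]
        if all(x > mean for x in window) or all(x < mean for x in window):
            flags.append(f"4: 7 consecutive points on one side at index {i}")
            break
    return flags

print(signals([10, 11, 10, 12, 11, 12, 11, 12, 30], mean=10, sigma=3))
```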
Common mistakes in handling quantifiable data
- What you are measuring may not be exactly what you think you are measuring
- Your measurement method may omit a portion of the item you seek to quantify, or include a bit of some other item.
- You underestimate the amount of error and noise in the data
- Every measurement includes errors, both systematic and random (“noise”). You must separate these errors from the item you wish to characterize (“signal”). If you do not, you are likely to mistake the change in the noise (especially random variations) for a change in the signal of interest.
- Failing to understand that the answer from the measurement might just be wrong
- The answer provided by a measurement or a test might just be wrong; there are false positives (the test says “yes” when the correct answer is “no”) and false negatives (the test says “no” when the correct answer is “yes”).
- Missing that the answer depends on a conditional probability
- You base your analysis and eventual decision on too few parameters, failing to notice the essential interconnection between parameters. This may come about through discarding some parameters, or failing to measure them.
- Assuming independence between measurements ignoring sequential effects
- The expected outcome for 1000 people who each bet once in a casino is very different from that for a single person who bets 1000 times, because in the latter the initial conditions change with each instance.
- Using ineffective or weak statistics as a basis for evaluation
- The most common statistic in use is comparing a current measurement to a single prior measurement, and inferring a trend (and a cause for that trend) from that change.
- Using data outside its range of applicability
- We often collect real data, and then we extrapolate that result to different conditions. This works sometimes. For example, based on an experiment where you cooled water from 70°F to 60°F to 50°F to 40°F, what would you predict about continuing to cool the water to 30°F? (The water freezes at 32°F, so the extrapolation fails.)
- Not collecting repeated data on a meaningful time frame
- There is no point in making measurements at a rate significantly faster than the underlying phenomena can actually change. If you did so, almost all of the change that you detected would just be noise, not an actual change in the signal of interest.
- Poor selection of data
- Selecting data that are not truly representative of your operating conditions.
- Changing the data or the measurement approach during collection
- If you change the way you collect the data mid-stream, there may be no valid way to compare data collected by the first method with data collected by the second method. And, of course, doing things that allow the data actually to change (e.g., the temperature at which we collect samples is allowed to vary) invalidates comparison too.
- The limit of the utility of examples (the problem of induction)
- In the real world, no number of observations of a positive phenomenon constitutes a proof that that phenomenon is always true. Yet a single negative observation constitutes a proof that it is not always true.
- Attribution bias
- We attribute our successes to skills and knowledge, rather than to random chance. Conversely, we attribute failures to random events, rather than lack of skill or knowledge.
- Path dependence
- We “fall in love” with the path we used to arrive at an answer, and refuse to adjust that answer even when there is contradictory evidence.
- The fallacy of the silent evidence
- We see what appears to be a compelling set of evidence in favor of a proposition, but we fail to notice that all of the contradictory evidence has been omitted from our sample.
- Round-trip error
- The tendency to confuse the condition “no evidence of flaws in our system” with the condition “there is evidence that there are no flaws in our system.” For example, this latter condition would correspond to a patient truly being “cancer-free,” a very different condition than that of merely no longer presenting any visible indications of having cancer.
- The narrative fallacy
- Correlation does not prove causation. The tendency to create stories for everything, even when not justified, is called the narrative fallacy. Just because a story is appealing does nothing to establish its correctness; an appealing story may be completely wrong.
- Failing to recognize the existence and significance of outliers: the problem of scale
- Many engineered systems experience sampling or input rates that are orders of magnitude beyond normal human experience; people have no useful intuitions about such large sample sets. Many statistical techniques call for discarding outliers; but we must focus on the potential of outliers and, through design strategies, try to prevent them from disrupting our system.
- The tendency to believe what you want; the tendency to ignore evidence, and to explain away evidence that tells the “wrong story”
- Humans have a strong tendency to interpret all new evidence in a way that supports their selected explanation. They are also quick to discount and eliminate evidence that appears to contradict their selected explanation.
What is risk?
- Risk: The likelihood of an event occurring multiplied by its impact
- Risk=Likelihood x Impact
- Likelihood: The probability of an event occurring, considering any control measures in place
- Impact: An estimate of the harm that could be caused by an event
Risk is a function of:
- Severity of possible harm (Impact)
- Probability of the occurrence of that harm (Likelihood)
Steps in Risk Management
- Identify the Potential Risks and Opportunities
- Hold a brainstorming session
- Document ideas in a Risk Register and an Opportunities Register
- Assess the Risks
- Identify the symptoms
- Select the item to be measured and the measurement method
- Rank the Risks
- Score each risk for both likelihood and impact
- Risks to design exist now (represent uncertainty)
- Risks to project exist in the future (represent prediction)
- Create mitigation and exploitation plans
- For each mitigation step, define: (i) what you will do; (ii) why it will help; (iii) how you will measure its effect; (iv) what the resulting risk matrix position will be; (v) cost and duration estimates.
- Potential responses to risks:
- Accept the risk and its consequences.
- Mitigate the risk by lessening the likelihood or impact or both. This involves spending money via people, resources, facilities.
- Share the risk with another party, such as customers or subcontractors. This involves a formal contract modification.
- Transfer the risk to another party, such as customers or subcontractors. This involves a formal contract modification or through insurance.
- Create triggers and timing requirements for mitigation plans
- Create a method to aggregate all risk assessments into a periodic overall project impact prediction
- Create and use some sort of periodic “management rhythm” to make periodic decisions based on assessments
- Respond to Risks
- When risks actually occur, transition from risks to issues and perform root-cause analysis
- 5 whys
- Fishbone diagram
Risk identification tools
- Documentation review
- Facilitation techniques
- Brainstorming
- Delphi
- Expert interview
- Root cause analysis
- SWOT analysis: Strengths, weaknesses, opportunities, threats
- Checklist
- Assumption analysis: every assumption and constraint is a risk
- Diagramming: fish bone, flow chart, etc.
Risk Register
- Captures the risks and rankings that emerge from the process of risk assessment
- Table Columns:
- RiskID
- Risk Description
- Probability
- Impact
- Score
- Response
- Owner
- Comments
- Failure Mode and Effects Analysis (FMEA) – bottom-up quantitative technique to identify failures in a design, process, system or existing product, equipment, or service.
- Analyzes components, interactions, and effects on system as a whole.
- Failure modes = ways functional elements may fail
- Effects analysis = consequences of those failures and any existing prevention or control measures
Failure Mode Effects Analysis (FMEA)
- Function or process
- What can go right, what is supposed to happen
- Potential failure modes
- What are the ways that prevent the process or function from achieving its objective
- Potential effect(s) of failure
- What is the consequence of the failure
- Severity (S)
- How significant is the impact
- Potential cause(s) of failure
- How can the failure occur
- Occurrence rating (L)
- What is the likelihood of occurrence
- Current Design Controls (prevent)
- What controls are in place so that the failure is prevented
- Current Design Controls (detect)
- What controls are in place so that the failure is detected
- Detection Rating (D)
- How effective are the measures
- RPN (Risk Priority Number)
- RPN = L x S x D
- Recommended action(s)
- What can be done
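A minimal sketch of the RPN ranking; the failure modes and 1-10 ratings are invented.

```python
# Rank invented failure modes by Risk Priority Number (RPN = L x S x D).
failure_modes = [
    # (failure mode, Likelihood, Severity, Detection)
    ("Seal leaks under pressure", 4, 8, 3),
    ("Sensor drifts out of calibration", 6, 5, 7),
    ("Connector corrodes", 2, 6, 4),
]
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, l, s, d in ranked:
    print(f"RPN={l * s * d:4d}  {name}")   # highest RPN gets attention first
```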
Risk Matrix
Likelihood \ Impact | Negligible | Minor | Moderate | Significant | Severe
Very Likely | LM | M | MH | H | H
Likely | L | LM | M | MH | H
Possible | L | LM | M | MH | MH
Unlikely | L | LM | LM | M | MH
Very Unlikely | L | L | LM | M | M
(L = Low, LM = Low-Medium, M = Medium, MH = Medium-High, H = High)
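The same matrix as a lookup table, a minimal sketch; the rating labels mirror the table above.

```python
# Risk matrix lookup; rows run likelihood low -> high, columns impact low -> high.
LIKELIHOOD = ["Very Unlikely", "Unlikely", "Possible", "Likely", "Very Likely"]
IMPACT = ["Negligible", "Minor", "Moderate", "Significant", "Severe"]
MATRIX = [
    ["L", "L", "LM", "M", "M"],
    ["L", "LM", "LM", "M", "MH"],
    ["L", "LM", "M", "MH", "MH"],
    ["L", "LM", "M", "MH", "H"],
    ["LM", "M", "MH", "H", "H"],
]

def rating(likelihood: str, impact: str) -> str:
    return MATRIX[LIKELIHOOD.index(likelihood)][IMPACT.index(impact)]

print(rating("Likely", "Significant"))   # MH
```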
Special Risk Types
- Risks that we did not yet identify (not on the risk matrix)
- Low-likelihood, high-impact risks
Fault Tree Analysis
- Used for reliability and safety analysis to establish causalities of risk scenarios
- Basic Event (Circle)
- a basic initiating fault or failure event
- External Event (House)
- an event that is normally expected to occur (or not to occur)
- Undeveloped Event (Diamond)
- a basic event that needs further detail or resolution
- Conditioning Event (Oval)
- a specific condition or restriction that applies to a gate
- Transfer (Triangle)
- indicates continuation or transfer to a sub-tree
- AND Gate (typical)
- output event occurs if ALL input events occur
- OR Gate (typical)
- output event occurs if at least one of the input events occurs
Example (text rendering of a fault tree):
- New contract with Labor Union AND Significantly higher labor cost rate -> AND gate
- (AND-gate output) OR Setback in the R&D for the new aluminum alloy for the engine block OR Devaluation of the US$ compared with the Euro -> OR gate
- OR-gate output -> Top event: Total cost of developing a new engine exceeds $3M
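A sketch of the example tree's top-event probability, assuming the basic events are independent; the probabilities are invented.

```python
# Top-event probability for the fault tree above (independent basic events).
p_new_contract = 0.50      # New contract with labor union
p_higher_rate = 0.40       # Significantly higher labor cost rate
p_rd_setback = 0.10        # Setback in R&D for new aluminum alloy
p_devaluation = 0.20       # Devaluation of US$ vs. the Euro

p_and = p_new_contract * p_higher_rate                 # AND gate: 0.20
# OR gate over independent events: 1 - product of (1 - p_i)
p_top = 1 - (1 - p_and) * (1 - p_rd_setback) * (1 - p_devaluation)
print(f"P(cost exceeds $3M) = {p_top:.3f}")            # 0.424
```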
Sensitivity Analysis
- Answers the question “what makes a difference in this decision problem?”
- Associates variation (uncertainty) in the output to different sources of variation in the inputs.
- Example: Tornado diagram
Tornado Diagram
- Determine lower bound, upper bound and best estimate of each uncertain input parameter
- 10%, 90% and 50% quantiles of parameter’s probability distribution
- For each uncertain parameter, calculate model output for lower and upper bounds, while taking best estimate for all other uncertain parameters
- Draw a horizontal bar for each uncertain parameter between value for lower bound and value for upper bound
- Order uncertain parameters by impact
- Put parameters with large output “spread” (width of bar) at top
- Draw a vertical line at the position of the expected value
- calculated using the best estimate for each uncertain parameter
Profit = (SellingPrice - VariableCost) × MarketSize × MarketShare - FixedCosts
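A minimal sketch of the tornado-diagram procedure applied to the profit model above; all parameter bounds (the 10%/50%/90% quantiles) are invented.

```python
# One-at-a-time sensitivity over the profit model; bars ordered by spread.
def profit(price, var_cost, size, share, fixed):
    return (price - var_cost) * size * share - fixed

# name: (low, best, high) -- invented 10%/50%/90% quantiles
params = {
    "SellingPrice": (45.0, 50.0, 55.0),
    "VariableCost": (25.0, 30.0, 35.0),
    "MarketSize":   (80_000, 100_000, 120_000),
    "MarketShare":  (0.05, 0.10, 0.15),
    "FixedCosts":   (60_000, 75_000, 90_000),
}
best = {k: v[1] for k, v in params.items()}

def run(overrides):
    v = {**best, **overrides}
    return profit(v["SellingPrice"], v["VariableCost"],
                  v["MarketSize"], v["MarketShare"], v["FixedCosts"])

bars = sorted(((abs(run({k: hi}) - run({k: lo})), k, run({k: lo}), run({k: hi}))
               for k, (lo, _, hi) in params.items()), reverse=True)
print(f"Expected value at best estimates: {run({}):,.0f}")
for spread, name, low, high in bars:          # widest bar on top
    print(f"{name:13s} [{low:>10,.0f} .. {high:>10,.0f}]  spread={spread:,.0f}")
```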
Traditional vs. Agile Methodology
Traditional
- Initiating
- Risk: Incorrect requirements
- Planning
- Executing
- Risk: Resources and workload
- Monitoring and Control
- Risk: Requirements change
- Closing
Agile
- Product / Feature backlog
- Sprint Planning
- Sprint (daily stand up meetings)
- Working Functionality (Sprint Demo)
- Back to Sprint Planning
Boehm’s top 10 software engineering risks
Risk | Mitigation
Personnel shortfalls | Staff with top talent
Unrealistic schedules / budgets | Detailed cost-schedule estimation
Developing wrong functions | User surveys, prototyping
Developing wrong user interface | Scenarios, prototyping
Gold plating | Requirements scrubbing
Changing requirements | High change threshold
Externally-supplied part shortfalls | Benchmarking, inspections
Externally-performed task shortfalls | Reference checking, pre-audit
Real-time performance shortfalls | Simulation, modeling
Straining technical capabilities | Technical analysis
Generic risks
- Schedules set before requirements defined
- Excessive schedule pressure
- Major requirements change after sign-off
- Inadequate project management skills
- Inadequate pretest defect removal procedures
- Inadequate office space, environment
- Inadequate support for reuse, design
- Inadequate organizational and specialist support
- Too much emphasis on partial solution
- Technologies that are too new (immature)
Technical and Engineering Risks
Some include:
- The invention required to create the desired system.
- The lack of maturity of some of the selected technologies and products; there is often a gap between what a product is advertised as capable of achieving, and what it actually can achieve.
- The scale and complexity of the project, including the problem of managing the dynamic behavior of the system.
- All of the uncertainty and errors that can arise from quantitative measures and analyses.
Non-technical Risks
Some include:
- Poorly defined or changing goals and requirements.
- Tensions between stakeholders: the buying customer, the eventual users, the paying customer, the regulators, and other stakeholders may want slightly different things, and trying to satisfy all of them may over‐constrain your solution.
- Tensions within your development team: people may have different ideas about the appropriate design; if you have subcontractors, they may have different business aspirations from you and your company; there will always be common issues (e.g., people not getting along, conflicting personal goals).
Why Monitor?
- Goal: prevent adverse surprises
- Process, on a monthly basis:
- Update the schedule (new predicted completion date)
- Continue the rolling wave planning process
- Replumb the schedule (shortens the schedule)
- Update the cost estimate (new predicted cost at completion)
- Conduct a variance analysis:
- What is different in the current estimated schedule than in your previous predictions, back to the baseline schedule … and why?
- Are the positive differences sustainable?
- Are the negative differences recoverable?
- Expressed in units of absolute dollars and percentages
Monitoring ITTOs
- Inputs:
- work performance information (status on deliverables, project forecasts, status of change requests), project management plan, project documents
- Tools & Techniques:
- expert judgement, data analysis (alternatives, cost-benefit, earned value, root cause, trend, variance), decision making, meetings
- Outputs:
- work performance reports, change requests, project management plan updates, project document updates
Work Performance Report Contents
- Set time and day of week reports are due
- Report actual work accomplished
- Record historical data and re-estimate remaining work (in progress work only)
- Report start and finish dates
- Record days of duration accomplished and remaining
- Report resource effort (hours/day) spent and remaining (in progress work only)
- Report percent complete
0-100 estimation method
A task is either 0% complete or 100% complete
- Advantages: simple and quick
- Disadvantages: no credit until a task is complete
- Forces teams to schedule shorter (~1 month) task durations
Rolling Wave
Iterative planning technique where near-term work is planned in detail, and further-out work is planned at a higher level.
Progressive Elaboration = adding more detail to plans as more information becomes available. When we learn something that impacts our plans, we update them.
Schedule Variance
SV =
Earned value (value of work accomplished to date)
–
Planned value (value of work planned to have accomplished by this date)
or SV = EV – PV
Schedule Performance Index (SPI) = EV / PV
Cost Variance
Earned value (value of work accomplished to date)
–
Actual cost (actual cost of work performed)
or CV = EV – AC
Cost Performance Index (CPI) = EV / AC