Final Flashcards

1
Q

Basis of Scheduling

A
  • Predictive (Waterfall) Scheduling: Activities, broken down from Work Packages, are noted in their necessary order on a planned schedule, with changes needing a change request once the schedule is approved and baselined.
  • Adaptive (Agile) Scheduling: The Product Owner owns the Product Backlog and Product Roadmap, showing at least the order of when features are to be delivered.
  • Kanban or Pull Systems: The team “pulls” each piece of work when they are ready, respecting “Work in Progress” (WIP) limits to keep multi-tasking to a minimum.
  • Siegel’s advice: Plan around tasks, their durations, and their interdependencies (versus fixed dates) then derive the anticipated completion date.
2
Q

Activity Network Scheduling Method

A
  1. Define the tasks.
    a. This refines our work breakdown structure.
    b. Rolling Wave Planning = near-term work packages are defined in detail, while long-term work packages are left at a higher level and elaborated later. The plan must be revisited to define the long-term work packages.
    c. Siegel’s advice: the lowest WBS level should consist of work that can be completed in no longer than one or two months.
  2. Identify the interdependencies between the tasks.
    a. This assists with sequencing the tasks and finding the critical path.
    b. Precedence Diagramming Method = graphical representation of project work flow where arrows link work packages together
  3. Estimate the duration of each task, in a statistical fashion.
    a. Analogous (top-down): Uses historical information (an analogy) to estimate (e.g., time, budget, difficulty). Faster but lower accuracy.
    b. Parametric: Using a parameter to estimate, like $55 a meter or $100 an hour. Medium effort, medium accuracy.
    c. Bottom-up: Adding together the smallest pieces to get an overall estimate (i.e. cost of each work package combined for the project budget). High effort, high accuracy.
    d. 3-point: An average of three estimates: Optimistic, Nominal (Most Likely), and Pessimistic. Triangular Distribution: (O + M + P) / 3. PERT (Program Evaluation and Review Technique) or Beta Distribution: (O + (4 x M) + P) / 6. Siegel’s advice: Weight both deviation and probability toward the pessimistic.
    e. Wideband Delphi (“Planning poker” in Agile): the people doing the work estimate the effort (or cost). The high and low estimates discuss their reasons, then re-estimate until a consensus is reached. Useful in complex situations
  4. Only the initial task in each chain is fixed to an actual calendar date; all other dates are derived from the task interdependencies and the statistically expressed task durations.
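The two 3-point formulas above can be sketched as small functions; the sample durations are hypothetical.

```python
# Three-point duration estimates (sketch of the formulas above).
def triangular(o, m, p):
    """Triangular distribution: simple average of the three estimates."""
    return (o + m + p) / 3

def pert(o, m, p):
    """PERT / Beta distribution: weights the most-likely estimate by 4."""
    return (o + 4 * m + p) / 6

# Example: optimistic 4 days, most likely 6 days, pessimistic 14 days.
print(triangular(4, 6, 14))  # 8.0
print(pert(4, 6, 14))        # 7.0
```

Note how PERT pulls the estimate toward the most-likely value, while the triangular average gives the pessimistic tail more weight.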
3
Q

Precedence Diagramming Method

A
  • Finish to Start
    • The next activity cannot start until the previous activity has finished
  • Start to Start
    • The next activity cannot start until the previous activity has started
  • Finish to Finish
    • The next activity cannot finish until the previous activity has finished
  • Start to Finish
    • The next activity cannot finish until the previous activity has started
4
Q

Dependency Types

A
  • Mandatory: one work package MUST be completed prior to start of next work package.
    • E.g., purchasing paint prior to painting walls.
  • Discretionary: sequencing is preferred rather than required; work packages can occur in tandem.
    • E.g., painting walls and laying carpet at the same time.
  • External: work package linked to non-project activities.
    • E.g., supplier stocking paint prior to your purchase.
  • Internal: work package linked to project activities.
    • E.g., who is responsible for purchasing the paint and who is responsible for painting the walls.
5
Q

Leads and Lags

A
  • Lead time is the amount of time a successor activity can be brought forward (started before its predecessor finishes). You are leading it forward.

you are here <–LeadTime– Item is here

  • Lag time is the amount of time a successor activity is delayed after its predecessor. It is lagging behind.

you are here Item is here –LagTime–>

6
Q

Critical Path Method

A
  • Critical Path = the longest sequence of dependent tasks through the network; it determines the shortest possible time in which the project can be completed (tasks on it have zero float).
    • Divide project into tasks
    • Estimate each task
    • Create network diagram
    • Create initial Gantt chart
    • Perform resource leveling
    • Compress schedule (if necessary)
  • Critical Chain = the critical path adjusted for the availability of the resources required to execute the project tasks.
7
Q

Resource Leveling

A
  • A technique that adjusts activity start and finish dates when shared resources are over-allocated, or when a resource has been assigned to two or more activities during the same period.
8
Q

Resource Smoothing

A
  • Adjusts the activities within their free and total Float (the Critical Path is not changed when doing this).

(Node box layout)
top left: Early Start
top middle: Duration
top right: Early Finish
middle: task name
bottom left: Late Start
bottom middle: Float/Slack
bottom right: Late Finish

Early Finish = Early Start + Duration – 1
Late Start = Late Finish – Duration + 1
Float/Slack = Late Start – Early Start
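A quick numeric check of the three node formulas above, assuming the inclusive day-numbering convention (day 1 is the first working day); the sample values are hypothetical.

```python
# The node formulas above, using inclusive day numbering.
def early_finish(early_start, duration):
    return early_start + duration - 1

def late_start(late_finish, duration):
    return late_finish - duration + 1

def total_float(ls, es):
    return ls - es

# Example: a 5-day task with Early Start = day 3 and Late Finish = day 10.
ef = early_finish(3, 5)             # 7
ls = late_start(10, 5)              # 6
print(ef, ls, total_float(ls, 3))   # 7 6 3
```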

9
Q

Schedule Compression Techniques

A
  • Schedule Fast Tracking is when activities on the Critical Path are done in parallel (overlapped) to shorten the project duration. May add risk, since overlapping work can lead to rework.
  • Schedule Crashing is approving overtime, adding resources, or paying to expedite delivery of activities on the critical path. Adds cost and may add additional risk.
10
Q

Reserve Analysis

A
  • AKA Slack Time, Contingency Reserve, Time Reserves, Buffer.
  • A percentage or a set determined time allowance.
  • Added because of Risk Factors.
11
Q

Critical Path: Build a new concrete driveway ***

A
  • Step 1: Divide project into tasks

Task# | TaskName | Predecessors | Responsible
1 | Excavation | | Team
2 | Build Forms | 1 | Team
3 | Place Rebar | 1 | Subcontractor
4 | Pour concrete | 2, 3 | Subcontractor
5 | Set and cure concrete | 4 | Team
6 | Strip forms | 5 | Team

  • Step 2: Estimate tasks

Task# | Task Name | Resource | Duration | Cost/Use | Subtotal | Total
1 | Excavation | Mini-excavator | 4 days | $450/day | $1,800 |
1 | Excavation - Team | Labor | 12 hours | $20/hour | $240 | $2,040

  • Step 3: Create Network Diagram
    • Add durations for each activity in center top square.
    • Perform forward pass -> enter early start and early finish dates.
    • Early Finish = Early Start + Duration – 1
    • Perform backward pass -> enter late start and late finish dates.
    • Late Start = Late Finish – Duration + 1
    • Enter Float or Slack (= Late Start – Early Start)
    • Shortest project duration is early finish of the final task (here, 17 days)

(duration)
Excavation (4) -> Build Forms(2)
Excavation (4) -> Place Rebar (6)
Build Forms(2) -> Pour Concrete(1)
Place Rebar (6) -> Pour Concrete(1)
Pour Concrete(1) -> Set and Cure (5)
Set and Cure (5) -> Strip Forms(1)

  • Step 4: Create Initial Gantt Chart
  • Step 5: Perform Resource Leveling
  • Step 6: Schedule Compression (if necessary)
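The forward and backward passes over the driveway network above can be sketched in a few lines; this is an illustrative implementation using the inclusive day-numbering convention (EF = ES + duration - 1).

```python
# Forward/backward pass over the driveway network (durations in days).
durations = {"Excavation": 4, "Build Forms": 2, "Place Rebar": 6,
             "Pour Concrete": 1, "Set and Cure": 5, "Strip Forms": 1}
preds = {"Excavation": [], "Build Forms": ["Excavation"],
         "Place Rebar": ["Excavation"],
         "Pour Concrete": ["Build Forms", "Place Rebar"],
         "Set and Cure": ["Pour Concrete"], "Strip Forms": ["Set and Cure"]}
order = ["Excavation", "Build Forms", "Place Rebar",
         "Pour Concrete", "Set and Cure", "Strip Forms"]

es, ef = {}, {}
for t in order:  # forward pass: earliest start is day after latest predecessor
    es[t] = max((ef[p] + 1 for p in preds[t]), default=1)
    ef[t] = es[t] + durations[t] - 1

project_end = max(ef.values())  # 17 days (early finish of the final task)

succs = {t: [s for s in order if t in preds[s]] for t in order}
lf, ls = {}, {}
for t in reversed(order):  # backward pass
    lf[t] = min((ls[s] - 1 for s in succs[t]), default=project_end)
    ls[t] = lf[t] - durations[t] + 1

critical = [t for t in order if ls[t] == es[t]]  # zero float
print(project_end)  # 17
print(critical)     # Excavation, Place Rebar, Pour Concrete, Set and Cure, Strip Forms
```

Build Forms carries 4 days of float (it can slip without delaying the pour), which is why it is the one task not on the critical path.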
12
Q

Cost Management Plan

A
  • Defines how the project costs will be estimated, budgeted, managed, monitored, and controlled.
  • Value Engineering = finding a more economic way of doing work to achieve the project goal / scope.

Inputs | Tools and Techniques | Outputs
Project Charter | Expert Judgement | Cost Management Plan
Project Management Plan | Data Analysis | -
Enterprise Environmental Factors | Meetings | -
Organizational Process Assets | - | -

13
Q

Cost Definitions

A

Cost Type | Definition
Fixed | Costs that stay the same throughout the life of the project (e.g., physical assets).
Variable | Costs that vary on a project (e.g., hourly labor).
Direct | Expenses billed directly to the project (e.g., materials).
Indirect | Costs shared and allocated among several projects (e.g., electricity).
Sunk | Costs already invested into the project that cannot be recovered.
14
Q

Cost Estimation

A
  • Management Reserve = money set aside to deal with problems (unexpected activities related to in-scope work) as they arise on the project.
  • Contingency Reserve = money set aside to deal with planned risks, should they occur.
  • Definitive Estimates: -5% to +10%
  • Budget Estimates: -10% to +25%
  • Rough Order of Magnitude Estimates: -25% to +75%

Image

Work Package (WP)
Cost Estimate (CE)
Contingency Reserve (CR)
Management Reserve (MR)
Cost Baseline (CB)
Project Budget (PB)

WP cost estimates (WP1, WP2, WP3, ...) roll up into the Cost Estimate (CE); CE + CR = Cost Baseline (CB); CB + MR = Project Budget (PB).
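The budget build-up can be sketched with hypothetical figures: work-package estimates plus the contingency reserve form the cost baseline, and adding the management reserve gives the total project budget.

```python
# Budget build-up sketch (all amounts hypothetical).
work_packages = {"WP1": 10_000, "WP2": 25_000, "WP3": 15_000}
contingency_reserve = 5_000   # for identified (planned) risks
management_reserve = 3_000    # for unexpected problems on in-scope work

cost_baseline = sum(work_packages.values()) + contingency_reserve  # 55,000
project_budget = cost_baseline + management_reserve                # 58,000
print(cost_baseline, project_budget)  # 55000 58000
```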
15
Q

Earned Value Analysis

A
  • Budget at Completion (BAC): total planned budget.
  • Planned Value (PV) = BAC x (Planned % Completed) - shows the work that should have been completed by that time.
  • Earned Value (EV) = BAC x (Actual % Completed) - what we have actually completed (earned) at a given point in time.
  • Actual Cost (AC): what was actually spent at that point in time.
  • Estimate at Completion (EAC) = AC + (BAC – EV) (remaining work proceeds at the planned rate), or EAC = BAC / CPI (current cost performance continues).
  • To Complete Performance Index (TCPI) = (BAC – EV) / (BAC – AC)
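The forecasting formulas above can be checked with hypothetical numbers: a $100k project that is 40% complete (planned 50%) with $45k spent.

```python
# Earned-value forecasting sketch (hypothetical figures).
BAC = 100_000
PV = BAC * 0.50   # 50,000 planned value
EV = BAC * 0.40   # 40,000 earned value
AC = 45_000       # actual cost

CPI = EV / AC                   # ~0.889: over budget
EAC_typical = AC + (BAC - EV)   # remaining work at the planned rate
EAC_cpi = BAC / CPI             # current cost performance continues
TCPI = (BAC - EV) / (BAC - AC)  # efficiency needed to still hit BAC
print(round(EAC_typical), round(EAC_cpi), round(TCPI, 2))  # 105000 112500 1.09
```

The two EAC variants diverge whenever CPI differs from 1; the CPI-based forecast is the more pessimistic one here.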
16
Q

Variance Analysis

A
  • Variance at Completion (VAC) = BAC – EAC
  • Cost Variance (CV) = EV – AC
    • ”+” is good, “–” is bad.
  • Cost Performance Index (CPI) = EV / AC
    • > 1 is good (under budget), <1 is bad (over budget).
  • Schedule Variance (SV) = EV – PV
    • ”+” is good, “–” is bad.
  • Schedule Performance Index (SPI) = EV / PV
    • > 1 is good (ahead of schedule), <1 is bad (behind schedule).
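The variance formulas can be sketched with hypothetical figures (EV $40k, PV $50k, AC $45k); negative variances and indices below 1 signal trouble.

```python
# Variance analysis sketch (hypothetical figures).
EV, PV, AC = 40_000, 50_000, 45_000
CV = EV - AC    # -5,000: negative, over budget
SV = EV - PV    # -10,000: negative, behind schedule
CPI = EV / AC   # < 1: over budget
SPI = EV / PV   # < 1: behind schedule
print(CV, SV, round(CPI, 2), SPI)  # -5000 -10000 0.89 0.8
```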
17
Q

S Curve

A

Image: review Week 8 slides. The S curve plots cumulative cost (e.g., planned value, earned value, actual cost) against time; a slow start, steep middle, and flattening finish give the “S” shape.

18
Q

Decision Making Tools

A
  • Benefit-cost ratio (BCR): the amount of money a project will make versus how much it will cost; a ratio greater than 1 indicates a good investment.
  • Net present value (NPV): the present value of a project’s benefits minus all of the costs associated with it; a positive NPV suggests the project is worth doing.
  • Opportunity cost: the money you forego because you chose not to do a project.
  • Internal rate of return (IRR): the discount rate at which a project’s NPV equals zero; a measure of how much money the project is making the company.
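NPV and BCR can be sketched with hypothetical cash flows and a 10% discount rate.

```python
# NPV / BCR sketch (hypothetical cash flows).
def npv(rate, cash_flows):
    """cash_flows[0] is the initial (usually negative) outlay at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-1000, 400, 400, 400, 400]      # invest 1000, earn 400/yr for 4 yrs
project_npv = npv(0.10, flows)           # positive, so worth doing
bcr = npv(0.10, [0] + flows[1:]) / 1000  # PV of benefits / cost
print(round(project_npv), round(bcr, 2))  # 268 1.27
```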
19
Q

Measurement Process Steps (10)

A
  • Decide what to measure
    • What measurements (data) do we need to make a certain decision?
  • Determine how accurately we need to measure
    • Do we need to know the weight to the nearest pound, or to some other degree of accuracy?
  • Determine how to achieve that accuracy of measurement
    • What tools and methods have the desired accuracy?
  • Determine how much data must be collected
    • Do we have to measure an item just once, or 10 times, or 1,000,000 times?
  • Determine how, where, and when to collect the data
    • Under what conditions do we have to measure this item? What tools do we need to do so? What are the operational states of our system when it is valid to collect these data?
  • Understand the range of validity of the data
    • Are the measurements valid only at certain times? Under certain conditions?
  • Validate and calibrate the data
    • Is our scale accurate? Is our tool accurate? How do we know?
  • Analyze the data
    • What do the data indicate? Are there alternate explanations for what the data appear to indicate?
  • Draw conclusions from the data
    • Where are we on solid ground? Where are we making judgments? What is the level of uncertainty? What risks could result from drawing the wrong conclusions?
  • Document the entire process, so that it could be reconstructed
    • Engineering is an iterative process; therefore we need to capture the process, tools, and data that we used to make this measurement for future comparisons.
20
Q

What to measure?

A
  • Technical performance measures
    • Represent our degrees of design freedom
  • Operational performance measures
    • Represent the goodness of the system as interpreted by the customer
  • Management measures
    • Represent our progress on the project
21
Q

Key Performance Indicators (KPIs)

A
  • Metrics used to evaluate the effectiveness of your project at achieving the defined objectives.
  • Prioritize to focus on tracking essential and actionable metrics.
    • Leading indicators – predict future performance, help take early corrective action
    • Lagging indicators – reflect past performance, help assess success of completed activities
  • Set targets and thresholds.
  • Assess progress and readjust.
22
Q

Types of KPIs (Schedule)

A
Indicator | Definition | Formula
Cycle time | Time to complete a task | End time – Start time
On-time completion % | Whether or not a task is completed by its deadline | (# on-time tasks / total # of tasks) x 100
Time spent | Amount of time spent on tasks | Sum of hours spent by all team members
# schedule adjustments | How many times the team adjusted the completion date | Count of schedule adjustments
FTE days vs. calendar days | How much time the team is spending on the project | FTE days / calendar days
Planned hours vs. time spent | Estimated vs. actual time spent on tasks | Planned hours – Actual hours spent
Resource capacity | # of individuals working on the project multiplied by % effort available for the project | # resources x available time (%)
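A few of the formulas above, computed with hypothetical data:

```python
# Schedule-KPI sketch (hypothetical data).
on_time_tasks, total_tasks = 18, 24
on_time_pct = on_time_tasks / total_tasks * 100    # 75.0%

num_resources, availability = 4, 0.50              # 4 people at 50% effort
resource_capacity = num_resources * availability   # 2.0 FTEs

planned_hours, actual_hours = 120, 140
hours_variance = planned_hours - actual_hours      # -20: over estimate
print(on_time_pct, resource_capacity, hours_variance)  # 75.0 2.0 -20
```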
23
Q

Types of KPIs (Cost)

A
Indicator | Definition | Formula
Budget variance | How much the actual budget varies from the projected budget | Actual budget – Projected budget
Budget creation cycle time | Time needed to formulate an org’s budget | (End date – Start date) of budget creation
Line items in budget | Line items help managers track expenditures | Count of individual expenditure items
# budget iterations | # of budget versions produced before final approval | Count of budget versions
Planned value | Value of what is left to complete on the project | Total budget x % work remaining
Cost Performance Index | Compares budgeted cost of work completed so far to the actual amount spent | Earned value / Actual costs
24
Q

Types of KPIs (Quality)

A
Indicator | Definition | Formula
Customer satisfaction | Measures whether the customer is satisfied with project outcomes | Survey score
Stakeholder satisfaction | Assesses satisfaction of all stakeholders | Survey or feedback score
Net Promoter Score | How likely customers are to recommend your services | % Promoters – % Detractors
# errors | How often tasks need to be redone | Count of errors or rework
Customer complaints | Complaints received from the customer during the project | Count of customer complaints
Employee churn rate | Team members who leave the company during the project | (# departing employees / total employees) x 100
25
Q

Types of KPIs (Effectiveness)

A
Indicator | Definition | Formula
Average cost per hour | Measures the cost of labor | Total labor costs / Total hours worked
Resource profitability | Assesses the effectiveness of resource utilization | Revenue generated / Cost of resources
# milestones completed on time w/ sign-off | Tracks milestones to measure effectiveness of project execution | Count of on-time milestones with sign-off
# of returns | Return rate on items | Count of returned items / Total items
Training/research needed for project | Measures the amount of training/research required before the project can start | Total training/research hours
# cancelled projects | A high number of cancellations can indicate issues | Count of cancelled projects
# change requests | Frequent change requests can affect budgets, resources, and timelines | Count of change requests
Risk management effectiveness | How effectively the team identifies, assesses, and mitigates risks | (# mitigated risks / total identified risks) x 100
26
Q

Measurement Caveats

A
  • Test programs should
    • confirm that – under a defined set of conditions – the system works.
    • identify the conditions where the system fails.
    • recognize measurements involve error or uncertainty (noise).
    • separate the signal (meaningful change) from the noise (random fluctuation).
  • No data have meaning apart from their context.
  • Two data points do not make a trend.
  • Don’t rely on tables of numbers – graphs are useful.
  • Understand common/routine vs. special/exception variation.
27
Q

Statistics Example

You are a physician. A patient has come to you and asked to be tested for a particular, rare disease that ~1 person in 1000 has in the U.S. You agree and the results come back positive. The test has a false positive rate of 5%, or gives a positive result when it is actually negative 5% of the time. You call the patient and the patient asks you, “What is the probability that I have the disease, given that the test came back with a positive result?”
What do you tell her?

A

We need to calculate the posterior probability that she actually has the disease, given a positive test result. This is a classic application of Bayes’ theorem.

Bayes’ theorem allows us to update our beliefs (probabilities) based on new evidence. In this case, the new evidence is the positive test result.

Bayes’ theorem is:
P(Disease | Positive Test) = (P(Positive Test | Disease) x P(Disease)) / P(Positive Test)

Where:

  • P(Disease|PositiveTest ) is the probability that the patient has the disease, given the positive test result (this is what we want to calculate).
  • P(PositiveTest|Disease) is the probability of a positive test result given that the patient has the disease (i.e., the sensitivity or true positive rate of the test). Since we don’t have this value, we will assume it to be 100% for simplicity (i.e., the test is perfect when the disease is present).
  • P(Disease) is the prior probability of the disease (the prevalence), which is 1/1000 = 0.001
  • P(PositiveTest) is the total probability of getting a positive test result, which includes both true positives and false positives.

We can calculate P(PositiveTest) using the law of total probability:

P(PositiveTest)=P(PositiveTest|Disease) x P(Disease)+P(PositiveTest|NoDisease) x P(NoDisease)

Where:
* P(PositiveTest|NoDisease) is the false positive rate of the test, which is 5% or 0.05.
* P(NoDisease)=1−P(Disease)=0.999.

  • Step 1: Calculate P(PositiveTest)
    Using the assumptions and values provided:
    • P(PositiveTest)=(1) x (0.001)+(0.05) x (0.999)
  • P(PositiveTest)=0.001+0.04995=0.05095
  • Step 2: Apply Bayes’ theorem
    Now, we can substitute into Bayes’ theorem:
    • P(Disease|PositiveTest)= (1)x (0.001) / 0.05095
    • P(Disease|PositiveTest)= 0.001 / 0.05095 ≈ 0.0196
  • Final Answer:
    The probability that the patient actually has the disease, given that the test came back positive, is approximately 1.96%.
  • Explanation:
    Even though the test is quite specific (only a 5% false positive rate), the disease is rare, so the probability of actually having the disease after a positive test result remains relatively low. This result highlights the importance of considering both the prevalence of the disease and the characteristics of the test when interpreting medical test results.
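The arithmetic above can be verified directly, using the same assumptions (prevalence 1/1000, sensitivity assumed to be 100%, false positive rate 5%):

```python
# Reproducing the worked example above.
p_disease = 0.001
sensitivity = 1.0             # assumed, since the true positive rate was not given
false_positive_rate = 0.05

p_positive = sensitivity * p_disease + false_positive_rate * (1 - p_disease)
posterior = sensitivity * p_disease / p_positive
print(round(p_positive, 5), round(posterior, 4))  # 0.05095 0.0196
```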
28
Q

Bayes theorem

A

Prior probability represents information of our actual prior state of knowledge (e.g. likelihood of having a disease before testing, say based on disease prevalence in a population)

Posterior (post-test probability) represents the revised estimate incorporating the test results

P(A|B) = (P(A) x P(B|A)) / P(B)
P(B) = P(B|A) x P(A) + P(B|A^c)xP(A^c)

Example: pathogen testing
P(Path|Pos) = (P(Path) x P(Pos|Path)) / (P(Pos|Path)xP(Path) + P(Pos|NoPath)xP(NoPath))

P(Path) = unknown (the prior / pre-test probability)
P(NoPath) = 1 - P(Path)
P(Pos|Path) = sensitivity of the test (true positive rate)
P(Pos|NoPath) = false positive rate of the test (1 - specificity)
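The predictive values used in the pathogen-testing examples that follow can be sketched as a helper that takes prevalence, sensitivity, and specificity:

```python
# Predictive values from sensitivity, specificity, and pre-test probability.
def predictive_values(prevalence, sensitivity, specificity, n=1000):
    infected = prevalence * n
    healthy = n - infected
    tp = sensitivity * infected         # true positives
    fn = infected - tp                  # false negatives
    tn = specificity * healthy          # true negatives
    fp = healthy - tn                   # false positives
    ppv = tp / (tp + fp)                # positive predictive value
    npv = tn / (tn + fn)                # negative predictive value
    return ppv, npv

# 90% pre-test probability, sensitivity 70%, specificity 98%:
ppv, npv = predictive_values(0.90, 0.70, 0.98)
print(round(ppv, 3), round(npv, 3))  # 0.997 0.266
```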

29
Q

Bayes theorem Example 1
1000 patients, assume a pre-test probability of infection = 90%

Test sensitivity = 70%
Test specificity = 98%

A

1000 patients, assume a pre-test probability of infection = 90%

  • Test sensitivity = 70%
  • Test specificity = 98%

TP = Sensitivity x Infected patients
FN = Infected patients – TP
TN = Specificity x Non-infected patients
FP = Non-infected patients – TN

PPV = TP / (TP + FP)
NPV = TN / (TN + FN)

(Table)

90% Pre-Test Prob | Test Positive | Test Negative | Totals
Infected | 630 True Pos | 270 False Neg | 900
Not infected | 2 False Pos | 98 True Neg | 100
Totals | 632 Total Pos | 368 Total Neg | 1000

  • Positive predictive value (PPV) = True Positives / Total Positives = 630/632 = 99.7%
  • Negative predictive value (NPV) = True Negatives / Total Negatives = 98/368 = 26.6%
30
Q

Bayes theorem Example 2
1000 patients, assume a pre-test probability of infection = 5%

Test sensitivity = 70%
Test specificity = 98%

A

1000 patients, assume a pre-test probability of infection = 5%

  • Test sensitivity = 70%
  • Test specificity = 98%

(Table)

5% Pre-Test Prob | Test Positive | Test Negative | Totals
Infected | 35 True Pos | 15 False Neg | 50
Not infected | 19 False Pos | 931 True Neg | 950
Totals | 54 Total Pos | 946 Total Neg | 1000

  • Positive predictive value (PPV) = True Positives / Total Positives = 35/54 ≈ 65%
  • Negative predictive value (NPV) = True Negatives / Total Negatives = 931/946 ≈ 98.4%
31
Q

Wheeler/Kazeef Signal Tests

A

A signal (special-cause variation) is present if any of the following is true:
1. There is one point outside the upper or lower natural process limits.
2. Two out of any three consecutive points are in either of the 2-sigma to 3-sigma zones (on the same side of the average).
3. Four out of any five consecutive points are outside the 1-sigma zones (on the same side of the average).
4. Any seven consecutive points are on one side of the average.
5. Any measurement of the item called the moving range is above the item called the moving range limit.
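Tests 1 and 4 above are easy to sketch in code; the data series and limits here are hypothetical, and real limits would come from the process-behavior chart.

```python
# Sketch of signal tests 1 and 4 (hypothetical data and limits).
def outside_limits(points, lower, upper):
    """Test 1: any point beyond the natural process limits."""
    return any(p < lower or p > upper for p in points)

def run_of_seven(points, average):
    """Test 4: seven consecutive points on one side of the average."""
    run, side = 0, 0
    for p in points:
        s = (p > average) - (p < average)   # +1 above, -1 below, 0 on it
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= 7:
            return True
    return False

data = [10.2, 10.4, 10.1, 10.5, 10.3, 10.6, 10.2, 10.4]
print(outside_limits(data, 9.0, 11.0))  # False: all within limits
print(run_of_seven(data, 10.0))         # True: all eight points are above 10.0
```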

32
Q

Common mistakes in handling quantifiable data

A
  • What you are measuring may not be exactly what you think you are measuring
    • Your measurement method may omit a portion of the item you seek to quantify, or include a bit of some other item.
  • You underestimate the amount of error and noise in the data
    • Every measurement includes errors, both systematic and random (“noise”). You must separate these errors from the item you wish to characterize (“signal”). If you do not, you are likely to mistake the change in the noise (especially random variations) for a change in the signal of interest.
  • Failing to understand that the answer from the measurement might just be wrong
    • The answer provided by a measurement or a test might just be wrong; there are false positives (the test says “yes” when the correct answer is “no”) and false negatives (the test says “no” when the correct answer is “yes”).
  • Missing that the answer depends on a conditional probability
    • You base your analysis and eventual decision on too few parameters, failing to notice the essential interconnection between parameters. This may come about through discarding some parameters, or failing to measure them.
  • Assuming independence between measurements, ignoring sequential effects
    • The expected outcome for 1000 people who each bet once in a casino is very different from that for a single person who bets 1000 times, because in the latter the initial conditions change with each instance.
  • Using ineffective or weak statistics as a basis for evaluation
    • The most common statistic in use is comparing a current measurement to a single prior measurement, and inferring a trend (and a cause for that trend) from that change.
  • Using data outside its range of applicability
    • We often collect real data, and then we extrapolate that result to different conditions. This works sometimes. For example, based on an experiment where you cooled water from 70°F to 60°F to 50°F to 40°F, what would you predict about continuing to cool the water to 30°F?
  • Not collecting repeated data on a meaningful time frame
    • There is no point in making measurements at a rate significantly faster than the underlying phenomena can actually change. If you did so, almost all of the change that you detected would just be noise, not an actual change in the signal of interest.
  • Poor selection of data
    • Selecting data that are not truly representative of your operating conditions.
  • Changing the data or the measurement approach during collection
    • If you change the way you collect the data mid-stream, there may be no valid way to compare data collected by the first method with data collected by the second method. And, of course, doing things that allow the data themselves to change (e.g., the temperature at which we collect samples is allowed to vary) invalidates comparison too.
  • The limit of the utility of examples (the problem of induction)
    • In the real world, no number of observations of a positive phenomenon constitutes a proof that that phenomenon is always true. Yet a single negative observation constitutes a proof that it is not always true.
  • Attribution bias
    • We attribute our successes to skills and knowledge, rather than to random chance. Conversely, we attribute failures to random events, rather than lack of skill or knowledge.
  • Path dependence
    • We “fall in love” with the path we used to arrive at an answer, and refuse to adjust that answer even when there is contradictory evidence.
  • The fallacy of the silent evidence
    • We see what appears to be a compelling set of evidence in favor of a proposition, but we fail to notice that all of the contradictory evidence has been omitted from our sample.
  • Round-trip error
    • The tendency to confuse the condition “no evidence of flaws in our system” with the condition “there is evidence that there are no flaws in our system.” For example, this latter condition would correspond to a patient truly being “cancer-free,” a very different condition than that of merely no longer presenting any visible indications of having cancer.
  • The narrative fallacy
    • Correlation does not prove causation. The tendency to create stories for everything, even when not justified, is called the narrative fallacy. Just because a story is appealing does nothing to establish its correctness; an appealing story may be completely wrong.
  • Failing to recognize the existence and significance of outliers: the problem of scale
    • Many engineered systems experience sampling or input rates that are orders of magnitude beyond normal human experience; people have no useful intuitions about such large sample sets. Many statistical techniques call for discarding outliers; but we must focus on the potential of outliers and, through design strategies, try to prevent them from disrupting our system.
  • The tendency to believe what you want; the tendency to ignore evidence, and to explain away evidence that tells the “wrong story”
    • Humans have a strong tendency to interpret all new evidence in a way that supports their selected explanation. They are also quick to discount and eliminate evidence that appears to contradict their selected explanation.
33
Q

What is risk?

A
  • Risk: The likelihood of an event occurring multiplied by its impact
    • Risk=Likelihood x Impact
  • Likelihood: The probability of an event occurring, considering any control measures in place
  • Impact: An estimate of the harm that could be caused by an event

Risk is a function of:

Severity of the possible harm (Impact)
Probability of the occurrence of that harm (Likelihood)

34
Q

Steps in Risk Management

A
  • Identify the Potential Risks and Opportunities
    • Hold a brainstorming session
    • Document ideas in a Risk Register and an Opportunities Register
  • Assess the Risks
    • Identify the symptoms
    • Select the item to be measured and the measurement method
  • Rank the Risks
    • Score each risk for both likelihood and impact
      • Risks to design exist now (represent uncertainty)
      • Risks to project exist in the future (represent prediction)
  • Create mitigation and exploitation plans
    • For each mitigation step, define: (i) what you will do; (ii) why it will help; (iii) how you will measure its effect; (iv) the resulting risk matrix position; (v) cost and duration estimates.
  • Potential responses to risks:
    • Accept the risk and its consequences.
    • Mitigate the risk by lessening the likelihood or impact or both. This typically involves spending money on people, resources, or facilities.
    • Share the risk with another party, such as customers or subcontractors. This involves a formal contract modification.
    • Transfer the risk to another party, such as customers or subcontractors. This involves a formal contract modification or through insurance.
  • Create triggers and timing requirements for mitigation plans
  • Create a method to aggregate all risk assessments into a periodic overall project impact prediction
  • Create and use some sort of periodic “management rhythm” to make periodic decisions based on assessment
  • Respond to Risks
    • When risks actually occur, transition from risks to issues and perform root-cause analysis
      • 5 whys
      • Fishbone diagram
35
Q

Risk identification tools

A
  • Documentation review
  • Facilitation techniques
    • Brainstorming
    • Delphi
    • Expert interview
    • Root cause analysis
  • SWOT analysis: Strengths, weaknesses, opportunities, threats
  • Checklist
  • Assumption analysis: every assumption and constraint is a risk
  • Diagramming: fish bone, flow chart, etc.
36
Q

Risk Register

A
  • Captures the risks and rankings that emerge from the process of risk assessment
  • Table Columns:
    • RiskID
    • Risk Description
    • Probability
    • Impact
    • Score
    • Response
    • Owner
    • Comments
  • Failure Mode and Effects Analysis (FMEA) – a bottom-up, semi-quantitative technique to identify failures in a design, process, system or existing product, equipment, or service.
    • Analyzes components, interactions, and effects on system as a whole.
    • Failure modes = ways functional elements may fail
    • Effects analysis = consequences of those failures and any existing prevention or control measures
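The Score column is typically just Probability × Impact; a minimal sketch in Python (the RiskIDs, descriptions, and numbers below are invented examples, not from the source):

```python
# Hypothetical risk register: Score = Probability x Impact,
# sorted so the highest-priority risks surface first.
risks = [
    {"RiskID": "R1", "Description": "Key supplier slips", "Probability": 0.4, "Impact": 5},
    {"RiskID": "R2", "Description": "Requirements churn",  "Probability": 0.7, "Impact": 3},
    {"RiskID": "R3", "Description": "Staff turnover",      "Probability": 0.2, "Impact": 4},
]

for r in risks:
    r["Score"] = r["Probability"] * r["Impact"]  # simple P x I score

ranked = sorted(risks, key=lambda r: r["Score"], reverse=True)
for r in ranked:
    print(f'{r["RiskID"]}: {r["Score"]:.2f}')
```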
37
Q

Failure Mode Effects Analysis (FMEA)

A
  1. Function or process
    • What can go right, what is supposed to happen
  2. Potential failure modes
    • What are the ways the process or function could fail to achieve its objective
  3. Potential effect(s) of failure
    • What is the consequence of the failure
  4. Severity (S)
    • How significant is the impact
  5. Potential cause(s) of failure
    • How can the failure occur
  6. Occurrence rating (L)
    • What is the likelihood of occurrence
  7. Current design controls (prevent)
    • What controls are in place to prevent the failure
  8. Current design controls (detect)
    • What controls are in place to detect the failure
  9. Detection Rating (D)
    • How effective are the detection measures
  10. Risk Priority Number (RPN)
    • L x S x D
  11. Recommended action(s)
    • What can be done
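Steps 4, 6, 9, and 10 combine into the Risk Priority Number; a small sketch, assuming a 1–10 rating scale and invented failure modes:

```python
# Minimal FMEA sketch: RPN = Likelihood (occurrence) x Severity x Detection.
failure_modes = [
    # (failure mode, L, S, D) -- ratings on a 1-10 scale, 10 = worst
    ("Seal leaks under load",    4, 8, 6),
    ("Sensor reads out of spec", 7, 5, 3),
    ("Connector corrodes",       2, 9, 8),
]

rpns = {name: l * s * d for name, l, s, d in failure_modes}

# Recommended actions (step 11) target the highest RPN first
worst = max(rpns, key=rpns.get)
print(worst, rpns[worst])  # Seal leaks under load 192
```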
38
Q

Risk Matrix

A

Table (rows = likelihood, columns = impact; cell = risk rating)

                Negligible  Minor  Moderate  Significant  Severe
Very Likely        LM        M       MH          H          H
Likely             L         LM      M           MH         H
Possible           L         LM      M           MH         MH
Unlikely           L         LM      LM          M          MH
Very Unlikely      L         L       LM          M          M
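The matrix above can be treated as a simple lookup table; a sketch in Python using the same ratings:

```python
# Risk matrix lookup: row = likelihood, column = impact,
# cell = rating (L, LM, M, MH, H), per the card above.
LIKELIHOOD = ["Very Likely", "Likely", "Possible", "Unlikely", "Very Unlikely"]
IMPACT = ["Negligible", "Minor", "Moderate", "Significant", "Severe"]
MATRIX = [
    ["LM", "M",  "MH", "H",  "H"],   # Very Likely
    ["L",  "LM", "M",  "MH", "H"],   # Likely
    ["L",  "LM", "M",  "MH", "MH"],  # Possible
    ["L",  "LM", "LM", "M",  "MH"],  # Unlikely
    ["L",  "L",  "LM", "M",  "M"],   # Very Unlikely
]

def rate(likelihood: str, impact: str) -> str:
    return MATRIX[LIKELIHOOD.index(likelihood)][IMPACT.index(impact)]

print(rate("Possible", "Severe"))   # MH
print(rate("Very Likely", "Minor")) # M
```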

39
Q

Special Risk Types

A

  • Risks we have not yet identified (and therefore not on the risk matrix)
  • Low-likelihood, high-impact risks

40
Q

Fault Tree Analysis

A
  • Used for reliability and safety analysis to establish causalities of risk scenarios
  • Basic Event (Circle)
    • a basic initiating fault or failure event
  • External Event (House)
    • an event that is normally expected either to occur or not to occur
  • Undeveloped Event (Diamond)
    • a basic event that needs further detail or resolution
  • Conditioning Event (Oval)
    • a specific condition or restriction that applies to a gate
  • Transfer (Triangle)
    • indicates continuation or transfer to a sub-tree
  • AND Gate (typical)
    • output event occurs if ALL input events occur
  • OR Gate (typical)
    • output event occurs if at least one of the input events occurs

Example (reading from basic events up to the top event):

  • AND[1] inputs: “New contract with Labor Union” AND “Significantly higher labor cost rate”
  • OR[1] inputs: AND[1], “Setback in the R&D for the new aluminum alloy for the engine block”, “Devaluation of the US$ compared with the Euro”
  • OR[1] output (top event): “Total cost of developing a new engine exceeds $3M”
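The example tree above can be evaluated with boolean gate functions; a sketch with hypothetical scenario inputs:

```python
# Fault tree gates: AND fires if ALL inputs occur, OR if at least one does.
def AND(*inputs): return all(inputs)
def OR(*inputs):  return any(inputs)

# Hypothetical scenario: union contract signed AND labor rates jump;
# no R&D setback; no dollar devaluation.
new_union_contract = True
higher_labor_rate  = True
rnd_setback        = False
dollar_devaluation = False

labor_cost_overrun = AND(new_union_contract, higher_labor_rate)
top_event = OR(labor_cost_overrun, rnd_setback, dollar_devaluation)

print(top_event)  # True: total engine-development cost exceeds $3M
```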

41
Q

Sensitivity Analysis

A
  • Answers the question “what makes a difference in this decision problem?”
  • Associates variation (uncertainty) in the output to different sources of variation in the inputs.
  • Example: Tornado diagram
42
Q

Tornado Diagram

A
  • Determine lower bound, upper bound and best estimate of each uncertain input parameter
    • 10%, 90% and 50% quantiles of parameter’s probability distribution
  • For each uncertain parameter, calculate model output for lower and upper bounds, while taking best estimate for all other uncertain parameters
  • Draw a horizontal bar for each uncertain parameter between value for lower bound and value for upper bound
  • Order uncertain parameters by impact
    • Put parameters with large output “spread” (width of bar) at top
  • Draw a vertical line at the position of the expected value
    • calculated using the best estimate for each uncertain parameter

Profit = (SellingPrice - VariableCost) × MarketSize × MarketShare - FixedCosts
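The tornado-bar computation for the profit model above can be sketched as follows (the parameter bounds and best estimates are invented for illustration):

```python
# Tornado diagram math: for each uncertain input, evaluate the model at its
# low and high bounds while holding all other inputs at their best estimates.
def profit(p):
    return ((p["SellingPrice"] - p["VariableCost"])
            * p["MarketSize"] * p["MarketShare"] - p["FixedCosts"])

# (low, best, high) per uncertain parameter -- e.g. 10% / 50% / 90% quantiles
bounds = {
    "SellingPrice": (45, 50, 55),
    "VariableCost": (25, 30, 35),
    "MarketShare":  (0.08, 0.10, 0.12),
}
best = {"SellingPrice": 50, "VariableCost": 30,
        "MarketSize": 1_000_000, "MarketShare": 0.10, "FixedCosts": 800_000}

bars = {}
for name, (lo, _, hi) in bounds.items():
    outs = []
    for v in (lo, hi):
        trial = dict(best)
        trial[name] = v              # vary one input, hold the rest at best
        outs.append(profit(trial))
    bars[name] = (min(outs), max(outs))

# Order by spread: widest bar goes at the top of the tornado
for name, (lo, hi) in sorted(bars.items(), key=lambda kv: kv[1][1] - kv[1][0], reverse=True):
    print(f"{name}: {lo:,.0f} .. {hi:,.0f}")
```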

43
Q

Traditional Vs Agile Methodology

A

Traditional

  • Initiating
    • Risk: Incorrect requirements
  • Planning
  • Executing
    • Risk: Resources and workload
  • Monitoring and Control
    • Risk: Requirements change
  • Closing

Agile

  • Product / Feature backlog
  • Sprint Planning
  • Sprint (daily stand up meetings)
  • Working Functionality (Sprint Demo)
  • Back to Sprint Planning
44
Q

Boehm’s top 10 software engineering risks

A
  1. Personnel shortfalls
  2. Unrealistic schedules / budgets
  3. Developing wrong functions
  4. Developing wrong user interface
  5. Goldplating
  6. Changing requirements
  7. External-supplied part shortfalls
  8. External-performed tasks shortfalls
  9. Real-time performance shortfalls
  10. Straining technical capabilities

Corresponding risk-management techniques (one per risk above):

  1. Staff with top talent
  2. Detailed cost-schedule
  3. User surveys, prototyping
  4. Scenarios, prototyping
  5. Requirements scrubbing
  6. High change threshold
  7. Benchmarking, inspections
  8. Reference checking, pre-audit
  9. Simulation, modeling
  10. Technical analysis
45
Q

Generic risks

A
  • Schedules set before requirements defined
  • Excessive schedule pressure
  • Major requirements change after sign-off
  • Inadequate project management skills
  • Inadequate pretest defect removal procedures
  • Inadequate office space, environment
  • Inadequate support for reuse, design
  • Inadequate organizational and specialist support
  • Too much emphasis on partial solution
  • Too new technologies
46
Q

Technical and Engineering Risks

A

Some include:

  • The invention required to create the desired system.
  • The lack of maturity of some of the selected technologies and products; there is often a gap between what a product is advertised as capable of achieving, and what it actually can achieve.
  • The scale and complexity of the project, including the problem of managing the dynamic behavior of the system.
  • All of the uncertainty and errors that can arise from quantitative measures and analyses.
47
Q

Non-technical Risks

A

Some include:

  • Poorly defined or changing goals and requirements.
  • Tensions between stakeholders: the buying customer, the eventual users, the paying customer, the regulators, and other stakeholders may want slightly different things, and trying to satisfy all of them may over‐constrain your solution.
  • Tensions within your development team: people may have different ideas about the appropriate design; if you have subcontractors, they may have different business aspirations from you and your company; there will always be common issues (e.g., people not getting along, conflicting personal goals).
48
Q

Why Monitor?

A
  • Goal: prevent adverse surprises
  • Process, on a monthly basis:
    • Update the schedule (new predicted completion date)
    • Continue the rolling wave planning process
    • Replumb the schedule (which shortens the schedule)
    • Update the cost estimate (new predicted cost at completion)
    • Conduct a variance analysis:
      • What is different in the current estimated schedule from your previous predictions, back to the baseline schedule … and why?
      • Are the positive differences sustainable?
      • Are the negative differences recoverable?
      • Expressed in units of absolute dollars and percentages
49
Q

Monitoring ITTOs

A
  • Inputs:
    • work performance information (status on deliverables, project forecasts, status of change requests), project management plan, project documents
  • Tools & Techniques:
    • expert judgement, data analysis (alternatives, cost-benefit, earned value, root cause, trend, variance), decision making, meetings
  • Outputs:
    • work performance reports, change requests, project management plan updates, project document updates
50
Q

Work Performance Report Contents

A
  • Set time and day of week reports are due
  • Report actual work accomplished
  • Record historical data and re-estimate remaining work (in progress work only)
  • Report start and finish dates
  • Record days of duration accomplished and remaining
  • Report resource effort (hours/day) spent and remaining (in progress work only)
  • Report percent complete
51
Q

0-100 estimation method

A

A task is either 0% complete or 100% complete

  • Advantages: simple and quick
  • Disadvantages: no credit until a task is complete
    • Forces teams to schedule shorter (~1 month) task durations
52
Q

Rolling Wave

A

Iterative planning technique where near-term work is planned in detail, and farther-out work is planned at a higher level.

Progressive Elaboration = adding more detail to plans as more information becomes available. When we learn something that impacts our plans, we update them.

53
Q

Schedule Variance

A

SV = Earned Value (value of work accomplished to date)
   – Planned Value (value of work planned to have been accomplished by this date)

or SV = EV – PV

Schedule Performance Index (SPI) = EV / PV

54
Q

Cost Variance

A

CV = Earned Value (value of work accomplished to date)
   – Actual Cost (actual cost of work performed)

or CV = EV – AC

Cost Performance Index (CPI) = EV / AC

55
Q

Cost Performance Indicators

A

Budgeted Cost of Work Scheduled (BCWS) = Planned Value (PV)

Budgeted Cost of Work Performed (BCWP) = Earned Value (EV)

Actual Cost of Work Performed (ACWP) = Actual Cost (AC)

56
Q

Variance Analysis

A
  • Schedule Variance (SV) = EV – PV
    • ’+’ is good, – is bad.
  • Schedule Performance Index (SPI) = EV / PV
    • > 1 is good (ahead of schedule), <1 is bad (behind schedule).
  • Cost Variance (CV) = EV – AC
    • ’+’ is good, – is bad.
  • Cost Performance Index (CPI) = EV / AC
    • > 1 is good (under budget), <1 is bad (over budget).
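The four indicators above can be computed directly; a sketch with an invented status snapshot:

```python
# Earned-value variance math from the card above (values in dollars).
PV = 100_000  # Planned Value: budgeted cost of work scheduled to date
EV = 90_000   # Earned Value: budgeted cost of work actually performed
AC = 95_000   # Actual Cost: actual cost of work performed

SV = EV - PV   # negative -> behind schedule
SPI = EV / PV  # < 1 -> behind schedule
CV = EV - AC   # negative -> over budget
CPI = EV / AC  # < 1 -> over budget

print(SV, SPI)            # -10000 0.9
print(CV, round(CPI, 3))  # -5000 0.947
```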
57
Q

Mistakes to avoid with variation

A
  • Mistake 1:
    • To react to an outcome as if it came from a special cause, when actually it came from common causes of variation.
  • Mistake 2:
    • To treat an outcome as if it came from common causes of variation, when actually it came from a special cause (Deming, 1994, p. 174)
58
Q

Why Measure Variances?

A
  • Catch deviations from the curve early
  • Dampen oscillations
  • Allow early corrective action
  • Determine weekly schedule variance
  • Determine weekly effort (person hours/day) variance
59
Q

Sales and Profit

A
  • Inception-to-date sales = total dollar value of work performed under contract since the award
  • Projected sales under contract = estimated sales at the completion of the contract
  • Potential sales = dollar value of sales x probability of occurrence
  • Inception-to-date profit = total dollar value earned beyond cost since the award
  • Projected profit upon completion = estimated profit at the completion of the contract
  • Potential profit = profit earned on potential sales
  • Cash Flow = dollars out and dollars in by month or quarter
  • Day-Sales Receivables = average number of days it takes a company to collect payment on a sale
60
Q

Agile

A
  • Is related to the rolling wave planning and scheduling method
    • Uses iterations (“time boxes”) to develop a workable product that satisfies the customer and other key stakeholders.
    • Stakeholders and customers review progress and re-evaluate priorities to ensure alignment with customer needs and company goals.
    • Adjustments are made and a different iterative cycle begins that subsumes the work of the previous iterations and adds new capabilities to the evolving product.
61
Q

Agile Manifesto (2001)

A

Four values:

  1. Individuals and interactions over processes and tools
    • Projects are done by people, not tools
    • Problems occur because of people, and are therefore solved by people, not processes
  2. Working software over comprehensive documentation
    • Focus on delivering value versus paperwork
  3. Customer collaboration over contract negotiation
    • Be flexible and accommodating instead of fixed and uncooperative
    • Manage change, don’t suppress change
    • Shared definition of “done”
  4. Responding to change over following a plan
    • Spend effort and energy responding to changes
62
Q

12 Agile Principles

A
  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
63
Q

Agile Terms

A
  • Product owner = designated person that represents the customer on the project
  • Agile project manager / Scrum master = manages the agile project
  • Product Backlog = project requirements from the stakeholders
  • Sprint = a short iteration where the project team works to complete the work in the sprint backlog (1-4 weeks typical)
  • Sprint Planning Meeting = meeting done by the agile team to determine what features will be done in the next sprint
  • Sprint Backlog = work the team selects to get done in the next sprint
  • Daily Stand Up Meeting = quick meeting each day to discuss project statuses, led by the Scrum Master, usually 15 minutes
  • Sprint Review = an inspection done at the end of the sprint by the customer
  • Retrospective = meeting done to determine what went wrong during the sprint and what went right; lessons learned
  • Partial Completed Product
  • Release
64
Q

Agile Process

A
  • Product Vision
  • Product Backlog
    -> Sprint Planning
  • Sprint Backlog
  • Sprint Cycle (Daily Standup, etc.)
    -> Sprint Review (retrospective, etc.)
  • Product / Code
    -> Sprint Planning
  • Product / Software Release
65
Q

Agile Triangle

A

Project leaders and customers use the agile triangle to assess project results.

  • Executives tend to look at investment and risk profiles.

Triangle

Top: Value (revenue generating, cost reducing, risk reduction, learning, etc.)
Left: Quality (software debt: technical, quality, config management, design, and platform experience)
Right: Constraints (the “Iron Triangle”: schedule, cost, and scope)

66
Q

Popular Agile Methods

A

Scrum
Extreme Programming
Agile Modeling
Rapid Product Development (RPD)
Crystal Clear
RUP (Rational Unified Process)
Dynamic Systems Development Method (DSDM)
Lean Development
Kanban

67
Q

Traditional Vs Agile

A

Traditional

  • Design Up front
  • Fixed Scope
  • Deliverables
  • Freeze Design as early as possible
  • Low uncertainty
  • Avoid Change
  • Low Customer interaction
  • Conventional Project teams

Agile

  • Continuous design
  • flexible
  • features/requirements
  • freeze design as late as possible
  • high uncertainty
  • embrace change
  • high customer interaction
  • self-organized project teams
68
Q

Who would you like to manage you?

A
  • A collaborator who spends a lot of time getting your input, and blends that with what the customer wants to do? Or,
  • A technical / managerial leader who knows what to tell you to do next, based on a plan?
69
Q

What do the agile and traditional methods agree on?

A
  • We need to deliver a product with value for the customer(s).
    • In a competitive situation, more value, sooner.
  • That delivery is (usually) part of a larger marketing strategy.
    • To keep a stable group of customers happy, and
    • To attract new customers in some market niche.
70
Q

Strengths of each (Traditional vs Agile)

A

Agile:

  • Building off close collaboration with one customer.
  • Getting a product with significant value out faster.
  • Building with a small team working closely together.
  • Iterative delivery by a single organization who come to have the knowledge in their heads.
  • What’s important can only be discovered incrementally.
  • What’s a “perfect example” of an Agile friendly project?
    • Do it when the cost of rework is low. Dealing with things “ad hoc” mostly works. We call it “refactoring.”
  • Useful in developing critical breakthrough technology or defining essential features.
  • Continuous integration, verification, and validation of the evolving product.
  • Frequent demonstration of progress to increase the likelihood that the end product will satisfy customer needs.
  • Early detection of defects and problems.

Traditional:

  • Building a general product to address a wide customer base.
  • Building off standards and consistent design principles.
  • Managing a large project with lots of interdependent pieces.
  • Product releases over a long period of time, by rotating staff who rely on documentation.
  • Goals and rules well known at the start.
  • What’s a “perfect example” of a Traditional-friendly project?
    • Do it when the cost of rework is high. Big surprises would be awful. We’d be starting over.
71
Q

Agile methodology “Fit”

A
  • Uncertainty and complexity may determine what “type” of process should be used.
  • Also to be considered:
    • Cultural factors.
    • Governance and compliance factors.
  • More uncertainty → more agile.
  • More complexity → more structure.
72
Q

Some caveats with Agile

A
  1. Start with a holistic idea of the product and its design
    • You need vision
    • Initial Problem Statement and Design in “Inception”
  2. End up delivering a whole product that makes sense
    • And you need this vision!
    • Real Release of a Viable Product

In between, do all those iterations to make it happen

73
Q

Highsmith’s remedies for schedule risk

A
  • Team involvement in planning and estimating
  • Early feedback on delivery velocity
  • Constant pressure to balance the number and depth of features with capacity constraints
  • Close interaction between engineering and customer teams
  • Early error detection/correction to keep a clean working product
74
Q

Highsmith is not in favor of “wish based planning”

A
  • Need to balance product goals with your capacity to deliver the product.
  • Don’t plan on a pace equal to the wildest success story.
    • “An agile team delivered this big product in 1/10th of the usual time!”
    • More on Agile pushback, next week.
  • It’s complex to move fast(er):
    • Staff motivation
    • Escalating requirements
    • Risk and uncertainty
75
Q

Agile For multi-level projects

A
  • Story-level planning is too fine-grained.
  • The whole thing has to fit together.
76
Q

Agile Product Backlog

A
  • Needs to consider the backwards view from eventual goal, as well as immediate direction.
    • Need to know the minimal releasable product, or minimum viable product (MVP)
77
Q

Highsmith’s position

A
  • Agile teams can place too much emphasis on adaptation or evolution, and too little on anticipation (in planning, etc.)
  • Failure to take advantage of knowable information leads to sloppy planning, reactive thinking, excessive rework, and delay.
  • Agility is the art of balancing.
78
Q

Agile Key takeaways

A
  • Agile and non-agile approaches have some of the same goals and techniques
  • And they also have some really key disconnects:
    • The extent to which design and documentation is prized over face to face communication OR code
    • The amount of control/review that’s deemed important in the process
  • Scaling and difficulty is a big question here:
    • How far can an ad hoc process take you?
79
Q

History of Scrum

A

Concept published in 1986 by Takeuchi and Nonaka in HBR.
Applied in 1993 by Jeff Sutherland, John Scumniotales, and Jeff McKenna at Easel Corporation.

80
Q

Scrum

A

The Core Principles of Scrum:

  • Empirical Process Control: Scrum is built on the idea that complex problems can’t be fully understood upfront. Instead, it relies on transparency, inspection, and adaptation to navigate uncertainties.
  • Iterative and Incremental Development: Scrum divides the project into small, manageable parts called “sprints,” typically 2–4 weeks long. Each sprint delivers a potentially shippable product increment, allowing for rapid feedback and adaptation.
81
Q

Scrum Roles

A
  • Product Owner: The product owner represents the customer and is responsible for defining and prioritizing the product backlog, ensuring that the team is working on the most valuable features.
  • Scrum Master: The Scrum master serves as a servant-leader for the team, ensuring that Scrum principles and practices are followed, removing impediments, and facilitating collaboration.
  • Development Team: The development team is a self-organizing group responsible for delivering the product. It consists of developers, testers, designers, and others with the necessary skills.
82
Q

Scrum Artifacts

A
  • Product Backlog: This is the prioritized list of all features, user stories, and tasks that need to be completed for the product. The product owner owns and manages the backlog.
  • Sprint Backlog: At the beginning of each sprint, the development team selects a set of items from the product backlog to work on during that sprint.
  • Increment: An increment is the sum of all the product backlog items completed during a sprint. It should be a potentially shippable product increment, even if it’s not released.
83
Q

Scrum Ceremonies

A
  • Sprint Planning: At the start of each sprint, the team conducts sprint planning to select items from the product backlog and create a sprint goal.
  • Daily Standup: Daily meetings, usually 15 minutes, where the team members share progress, discuss obstacles, and plan their day.
  • Sprint Review: At the end of each sprint, a review is held to demonstrate the completed work to stakeholders and gather feedback.
  • Sprint Retrospective: A meeting held after the sprint review to reflect on the sprint and identify areas for improvement
84
Q

Benefits of Scrum

A
  • Faster Time-to-Market: Scrum’s iterative approach allows for quicker delivery of valuable features and regular releases.
  • Customer-Centric: Scrum prioritizes customer needs, leading to higher customer satisfaction and better product market fit.
  • Improved Collaboration: Scrum promotes collaboration among team members, stakeholders, and customers, leading to better communication and shared understanding.
85
Q

Kanban

A

(a Japanese word meaning “sign board”)

  • Introduced in Japan in the 1940s by Taiichi Ohno as part of the Toyota Production System and published in 1978.
  • Kanban is a lean (or Just-In-Time) manufacturing approach.
  • David J. Anderson tested Kanban Method at Microsoft in 2004 and published a book on the Kanban Method in 2010.
86
Q

7 Principles of Lean Practices

A
  1. Eliminate waste: If it doesn’t add business value, it is defined as waste.
    • Non-value-added work involves the consumption of resources (usually people or time) on activities that do not add business value to the final product or process.
  2. Amplify learning: Cultivate chefs (create recipes) vs. cooks (execute recipes)
  3. Decide as late as possible: Keep all options open until a decision must be made, then make it based on as much information as has been gathered by that point.
  4. Deliver as fast as possible: Give clients deliverables ASAP to allow additional input to base further learning and discovery.
  5. Empower the team: The team must work in an open, honest, and creative environment and not be shackled by heavy process and procedure.
  6. Build integrity in: A product’s ultimate market success speaks to its integrity, not just satisfying a single customer.
  7. See the whole: Team members should consider the effectiveness of the whole solution not just the success of their own piece of the solution.
87
Q

Core Principles of Kanban

A
  • Start with what you do now
  • Agree to pursue improvement through evolutionary change
  • Encourage acts of leadership at all levels
88
Q

Kanban Board Design

A
  • Columns: represent the stages of the workflow, from “To Do” to “Done.”
  • Cards: represent work items, each has a description and status.
  • WIP Limits: WIP limits are set for each column to prevent overloading and encourage focus on completing existing work before starting new tasks.
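A minimal sketch of a board with WIP-limit enforcement (the columns, cards, and limits below are invented examples):

```python
# Hypothetical pull-system sketch: a card may enter a column only if
# that column is under its WIP limit.
board = {"To Do": ["A", "B", "C"], "Doing": ["D"], "Done": []}
wip_limits = {"To Do": 5, "Doing": 2, "Done": 100}

def pull(card: str, src: str, dst: str) -> bool:
    """Move a card only when the destination column has WIP capacity."""
    if card in board[src] and len(board[dst]) < wip_limits[dst]:
        board[src].remove(card)
        board[dst].append(card)
        return True
    return False

print(pull("A", "To Do", "Doing"))  # True: Doing had 1 of 2 slots
print(pull("B", "To Do", "Doing"))  # False: Doing is now at its WIP limit
```

The second pull is refused, which is the point: the team finishes work in “Doing” before starting more.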
89
Q

Key Kanban Practices

A
  • Visualize Work: use visual boards that represent the workflow, allowing teams to see work items, their status, and bottlenecks at a glance.
  • Limit Work in Progress (WIP): prevents overloading and promotes a smooth, sustainable pace of work.
  • Manage Flow: encourages continuous and smooth flow of work items through a pull system, minimizing delays and optimizing throughput.
  • Make Policies Explicit: Clearly defining and documenting workflow rules and policies helps ensure consistency and alignment.
  • Implement Feedback Loops: use stand-up meetings, service delivery review, operations review, and risk review.
  • Improve Collaboratively, Evolve Experimentally
90
Q

Benefits of Kanban

A
  • Easy to start
  • Uses simple methods to promote improvement
  • Adaptable to different processes, team structures, and goals.
  • Creates a consistent, reliable production pace
  • Scrumban = an Agile PM method that combines scrum and Kanban.
91
Q

eXtreme Programming (XP)

A

Developed in 1996 by Kent Beck while a software engineer at Chrysler.

The Core Values of Extreme Programming:

  • Communication: XP promotes open and frequent communication to ensure everyone is on the same page.
  • Simplicity: XP encourages simplicity in design and code, advocating for the “You Ain’t Gonna Need It” principle to avoid unnecessary complexity.
  • Feedback: XP provides rapid feedback through continuous integration, testing, and customer involvement.
  • Courage: XP encourages “effective action in the face of fear”, fostering a culture of continuous improvement.
  • Respect: Team members need to respect each other to work effectively together.
92
Q

Key Practices of XP

A
  • Pair Programming: Developers work in pairs, with one writing code while the other reviews it in real-time. This practice enhances code quality and knowledge sharing.
  • Test-Driven Development (TDD): XP promotes writing automated tests before writing the code itself, ensuring that the code meets requirements and is maintainable.
  • Continuous Integration: Code changes are frequently integrated into a shared repository, allowing for early error detection and reducing integration issues.
  • Small Releases: XP encourages frequent, small releases to deliver valuable features to customers quickly and gather feedback.
93
Q

Roles in XP

A
  • Customer: The customer is an active participant in XP, providing requirements, prioritizing features, and participating in testing.
  • Programmer: Programmers write code, perform testing, and collaborate closely with customers and other team members.
  • Coach: The XP coach provides guidance on XP practices and ensures the team adheres to XP principles.
  • Tracker: The tracker helps manage the project’s schedule, making sure that work is progressing and that the team remains on track.
94
Q

XP, Traditional, Agile matrix

A

2×2 matrix (goal clarity × solution clarity):

                  Solution Clear       Solution Not Clear
Goal Clear        Traditional (Q1)     Agile (Q2)
Goal Not Clear    Emertxe (Q4)         XP (Q3)

95
Q

Role of a Project Manager on a Team

A
  • Motivate team members
    • Pride in good work
    • Sense of being part of something larger than themselves
    • Motivated employees accomplish much more work (up to 3x) than unmotivated ones!
  • Align the team
  • Resolve conflict
  • Manage expectations
96
Q

Leadership

A

Put a team‐building plan on your project start‐up list.

The condition essential to being an effective leader:

  • you must persuade people to want to follow you
97
Q

Ways to Motivate/Persuade

A
  • Be an inspiration. Create a compelling vision.
  • Remove obstacles. Make conditions so each person can succeed, and feels they are succeeding, in their role.
  • Align the team. Drive the team to a real consensus.
  • Recognize the problems. Recognize and articulate what remains hard and unsolved.
  • Provide the opportunity for personal growth. Assign work that is challenging, but not impossible or unreasonable.
  • Value personal growth. Encourage and reward people who acquire additional skills.
  • Be a role model. Always be polite, display good work habits, be calm and rational, listen before talking and before deciding, accept responsibilities, make apologies, and accept blame.
  • Provide empowerment. Delegate actual authority.
  • Insist on responsibility. Meet your commitments, and expect other people to meet theirs.
  • Value reasoned risk‐taking. Seek out and succeed at the hard assignments, and reward others who do the same.
  • Be a communicator. Become an effective communicator – especially in listening and writing.
  • Model and enforce ethics. Talk about and be seen always considering the ethics of each decision.
  • Deal with the important aspects of diversity. Diversity of opinion and thought process is critical to successful engineering.
  • Create the right reward system. Encourage people to see success in terms of the team vs. personal accomplishment.
  • Tell your employees how to get promoted. Everyone is interested in promotion.
98
Q

Alignment w/ Project’s Stakeholders

A
  • Are motivated by the purposes of the project, are willing to take actions and make an effort so that the project can succeed, and at times, will accept less than they want (compromise)
  • Understand and agree with the goals for your project, and understand and agree with the limitations (schedule, cost, capability, etc) within which you are going to attempt to realize those goals
  • Understand and agree with your approach (methods, tools, locations, facilities, key personnel, sequencing of steps, the design, etc)
  • Are willing to work together, and to reach reasonable compromises in order to resolve difficulties
  • Desire to keep the project sold, and see it through to a successful conclusion
99
Q

Tuckman’s Group Development Stages

A
  • Forming: team members feel ambiguous, look to a group leader for direction and guidance. To progress, members must move out of their comfort zones and risk the possibility of conflict.
  • Storming: the process of organizing tasks surfaces interpersonal conflicts. Leadership, power, and structural issues dominate. To progress, members must move from “testing and proving” to a problem-solving mentality.
  • Norming: leadership changes from ‘one’ teammate in charge to shared leadership with developed trust and cohesion. To progress, members must share feelings, ideas, solicit and give feedback, be open to change.
  • Performing: team is flexible as individuals adapt to meet the needs of other team members. Highly productive stage personally, professionally.
  • Adjourning: when teams are temporary, they disband at the end.
100
Q

Effective Communication

A
  • Enumerate: If you want people to know it, write it down! No one can remember everything
  • Stipulate: Work your way to a consensus and to a decision. Then make it clear what the decision was; lots of things get said, and lots of things get written down. Make it clear which is the actual decision, the actual policy, the actual procedure.
  • Disseminate: When you make a decision, lots of people need to know. Everyone on the team needs to be able to find the right guidance document when they need it. Therefore, you need dissemination methods, you need index and search facilities; you need some training for your team members about what are all of the key decisions made for your project, and about the process for making and disseminating decisions.
101
Q

Behaviors for Better Group Dynamics

A
  • For productive discussions, a pre-emptive form of intervention is tending to productive group behaviors.
  • This doesn’t avoid every difficult dynamic, but it can divert many situations before they grow into a complicated and troubling interaction.
102
Q

9 Examples of Cognitive Biases

A
  • Trigger:
    • Too much information
      • We filter complexity through our experience by focusing on and emphasizing information that confirms our preconceptions.
        • Confirmation bias
      • We over-rely on one trait or piece of information when making decisions (usually the first piece of information acquired), or we focus on what has changed.
        • Anchoring bias
      • We focus on what’s already primed in memory or repeated often, overestimating the likelihood of events due to recency or emotional charge.
        • Availability heuristic
      • We focus on recurring internal thoughts that prime our memory and affect, reinforce, and distort our perception.
        • Attentional bias
    • Not enough meaning
      • We fill gaps in our knowledge with the opinions and beliefs of others; we do/believe things because people who matter to us do/believe the same.
        • Bandwagon effect
      • A positive trait “spills over” automatically to other arenas for people we’re familiar with or fond of (Horn effect: an unfavorable overall impression based on a negative trait).
        • Halo / Horn effect
      • We interpret others’ behavior as having hostile intent based on its impact on us, with no actual knowledge of the other’s intent or circumstances.
        • Hostile attribution bias
    • Need to act fast
      • To avoid mistakes, we aim to preserve autonomy and group status and avoid irreversible decisions by keeping things relatively the same.
        • Status-quo bias
      • We gain confidence and move prematurely to decision-making or closure by overestimating the degree to which others agree with us.
        • False consensus effect
103
Q

Reasons for Conflict

A
  • Reasons for conflict that may be productive:
    • individuals have different goals
    • individuals have different concepts and preferences for method, sequence and priority
    • Individuals have different preferences for various technical decisions and options
  • Reasons for conflict that are unproductive
    • Perception of needing to assert themselves in order to gain their own aspirations
    • No one taught them how to be effective in a group, or what is expected of them when they are a member of a group
    • Dysfunctional behavior: immaturity, rudeness, selfishness, dishonesty
104
Q

Thomas-Kilmann Model

A
  • Collaborate/Confronting – concerns from both sides are equally important; win-win
  • Compromising – partially satisfying both sides; lose-lose
  • Accommodate/Smoothing – de-emphasize differences; a weak mode of conflict resolution
  • Avoid/Withdrawing – break the connection between both sides
  • Compete/Forcing – one side wins; win-lose

Four quadrants with Compromising in the middle:

Top left: Compete / Forcing
Bottom left: Avoid / Withdrawing
Bottom right: Accommodate / Smoothing
Top right: Collaborate / Confronting

X axis: Cooperativeness (their wants and needs)
Y axis: Assertiveness (our wants and needs)

105
Q

Blanchard’s Situational Leadership

A

Four quadrants:

Top left: Capable but Cautious Performer
  Commitment: variable; Competence: moderate to high
  Leadership style: Supporting

Top right: Disillusioned Learner
  Commitment: low; Competence: low to moderate
  Leadership style: Coaching

Bottom left: Self-Reliant Achiever
  Commitment: high; Competence: high
  Leadership style: Delegating

Bottom right: Enthusiastic Beginner
  Commitment: high; Competence: low
  Leadership style: Directing

X axis: Directive behavior
Y axis: Supportive behavior

106
Q

Fundamental Interpersonal Relations Orientation-Behavior (FIRO-B)

A
  • Inclusion (invite vs. participate)
  • Control (direct action vs. take order)
  • Affection (reach-out vs. share)
107
Q

Myers-Briggs type indicator

A
  • Extraversion vs. Introversion: outer vs. inner world focus
  • Sensing vs. Intuition: experience vs. imaginative
  • Thinking vs. Feeling: logical vs. harmonious
  • Judging vs. Perceiving: deciding vs. adaptable
108
Q

Parker’s team player survey

A
  • Contributor: focuses on details and data, but may miss the big picture
  • Collaborator: goal directed and forward looking, but may not confront
  • Communicator: effective listener but will not focus
  • Challenger: candid and assertive, but may push too far
109
Q

Porter’s Strength Deployment Inventory

A
  • Red (Assertive directing): persuades, challenges, takes risks
  • Green (Analytic autonomizing): thinker, cautious, self-reliant
  • Blue (Altruistic nurturing): promotes harmony, warm-hearted
110
Q

Factors that make it hard to intervene

A
  • Fear
  • Ambiguity
  • We take our cues from others
  • Personal Beliefs
  • We identify with the person exhibiting the problematic behavior
111
Q

Siegel’s Mechanics of Project Management

A
  • Take decisions only in writing
  • Implement Siegel’s trio for effective communication
  • Always capture the rationale and methodology used for reaching decisions, not just the decisions themselves
  • Do not use face-to-face meetings simply to convey information, that is done more effectively in writing
  • Do not use electronic mail or any similar electronic forum (social media) for discussion. Instead, hold discussions in face-to-face meetings
  • Organize meeting agendas around discussion of those items about which there is not yet a consensus
  • You may have some meetings with all of your direct reports; you cannot decide most things just by meeting with the relevant people one-on-one
  • Work with, and take advice from, only those people who either have skin in the game, or are bound to you by genuine affection
112
Q

What is Quality?

A

According to the American Society for Quality:

  • “The characteristics of a product or service that bear on its ability to satisfy stated or implied needs.”
  • “A product or service free of deficiencies.”
  • Derived from Joseph Juran’s “fitness for use” and Philip Crosby’s “conformance to requirements” philosophies.
113
Q

Five ways to view Quality

A
  1. Transcendental: abstract, ideals
  2. User view: meets user’s needs
  3. Manufacturing view: quality assurance during and after production (e.g., ISO 9001, Capability Maturity Model)
  4. Product view: controlling internal properties results in better external properties (e.g., Six Sigma)
  5. Value-based view: quality judged relative to cost and price (worth the money paid)
114
Q

Pillars of Quality Management

A
  • Customer satisfaction
  • Prevention over inspection
  • Continuous improvement
  • Management responsibility
115
Q

Quality Standards

A
  • Six Sigma
  • Juran
  • Crosby
  • Failure Mode and Effect Analysis
  • Organizational Project Management Maturity Model
  • ISO 9000
  • ISO 10006
  • Malcolm Baldrige
  • SEI-CMM
  • Deming
  • European Quality Award
  • Total Quality Management
116
Q

6 sigma (6σ)

A
  • Method to improve quality by finding sources of unjustified variation and reducing them.
  • Justified variation = unavoidable variation (e.g., small batch to batch differences)
  • Unjustified variation = variation from unnecessary sources (e.g., human mistakes, machine failures, incorrect instructions).
117
Q

Design for X

A
  • Manufacturability (DFM) – aims to streamline and simplify the manufacturing process to reduce manufacturing and assembly costs while maintaining or improving product quality.
  • Reliability (DFR) – ensures that products and systems perform a specified function within a given environment over an expected lifecycle.
  • Quality (DFQ, alternatively QbD or Quality by Design) – closes the gap between what the team thinks the customer needs, what the customer really needs, and what is actually designed, as well as the gap between design and production.
118
Q

ISO-9000

A
  • An organization ought to have requirements for a quality management system, so as to be more effective at meeting customer needs.
  • Continual improvement – after an organization defines a standard, trains their people, and implements it, they must also learn from experience and refine the process / quality standards.
  • ISO-9001 Principles:
    • Customer focus
    • Leadership
    • Engagement of people
    • Process approach
    • Improvement
    • Evidence based decision-making
    • Relationship management
  • Leading others to make improvements
    • Challenge – a teacher defines an appropriate challenge for a learner
    • Current Condition – learner analyzes situation to understand what is happening now
    • Next Target Condition – learner formulates an outcome and work tasks to achieve
    • Experiment – learner identifies ways to overcome obstacles to reach the target condition
119
Q

Capability maturity model

A
  • A framework to assess the strength of organizational work processes, typically on a scale from 1 to 5 (e.g., initial, managed, defined, quantitatively managed, optimizing).
120
Q

How to measure Quality? McCall Index

A
  • Product revision
    • Maintainability: how easy is it to fix?
    • Flexibility: how easy is it to change it?
    • Testability: how easy is it to test it?
  • Product transition
    • Interoperability: can it be interfaced to another system easily?
    • Portability: can it be transferred to another environment or machine?
    • Reusability: can some of the software be reused?
  • Product operations
    • Correctness: does it do what we want it to do?
    • Reliability: does it do it accurately?
    • Efficiency: does it not waste resources?
    • Integrity: is it secure?
    • Usability: is it user-friendly?
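
In McCall-style models, each quality factor is typically scored as a weighted sum of lower-level metric measurements. A minimal sketch of that scoring scheme; the weights and metric values below are invented for illustration and are not McCall's published coefficients:

```python
# Weighted-sum scoring of a quality factor: F = sum(c_i * m_i),
# where each metric m_i is normalized to 0..1 and c_i is its weight.
# Weights and metric values are hypothetical, for illustration only.

def factor_score(weights, metrics):
    """Combine normalized metric scores (0..1) into one factor score."""
    return sum(w * metrics[name] for name, w in weights.items())

# Hypothetical breakdown of "reliability" into three measurable metrics.
weights = {"error_tolerance": 0.4, "consistency": 0.3, "accuracy": 0.3}
metrics = {"error_tolerance": 0.8, "consistency": 0.9, "accuracy": 0.7}
print(round(factor_score(weights, metrics), 2))  # 0.8
```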
121
Q

Quality planning tools and techniques

A
  • Benefit/cost analysis
  • Benchmarking
  • Design of Experiments
  • Cost of Quality
    • Cost of conformance – money spent during the project to avoid failure
    • Cost of non-conformance – money spent during and after the project because of failures
  • Control Charts
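
Control charts make "unjustified variation" operational: points outside statistically derived limits warrant investigation. A minimal sketch using the common Shewhart convention of limits at the mean plus or minus three standard deviations; the measurement values are invented for illustration:

```python
# Minimal Shewhart-style control chart check: flag points outside
# mean +/- 3 sigma computed from a baseline sample. Data is hypothetical.
from statistics import mean, pstdev

def control_limits(samples):
    """Return (lower, center, upper) control limits at mean +/- 3 sigma."""
    center = mean(samples)
    sigma = pstdev(samples)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(samples, lcl, ucl):
    """Indices of points outside the limits (possible unjustified variation)."""
    return [i for i, x in enumerate(samples) if not lcl <= x <= ucl]

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, center, ucl = control_limits(baseline)
new_points = [10.0, 10.1, 12.5, 9.9]
print(out_of_control(new_points, lcl, ucl))  # the 12.5 reading (index 2) is flagged
```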
122
Q

The Fundamental Canons

A
  1. Engineers shall hold paramount the safety, health and welfare of the public in the performance of their professional duties.
  2. Engineers shall perform services only in the areas of their competence.
  3. Engineers shall issue public statements only in an objective and truthful manner.
  4. Engineers shall act for each employer or client as faithful agents or trustees, and shall avoid conflicts of interest.
  5. Engineers shall build their professional reputation on the merit of their services and shall not compete unfairly with others.
  6. Engineers shall act in such a manner as to uphold and enhance the honor, integrity and dignity of the profession.
  7. Engineers shall continue their professional development throughout their careers and shall provide opportunities for professional development of those engineers under their supervision.
123
Q

The technical and social characteristics that lead to bad ethics in engineering

A
  • The human tendency to discount the likelihood of low-probability events to essentially zero probability
  • The complexity and scale of modern systems
  • Reliability and availability tend to be under-emphasized, as compared to functionality and capability
  • We tend to accept operator-induced and user-induced failures as being outside our design responsibilities
  • We ignore, or seriously under-emphasize, the potential for use of the system beyond the uses that were originally envisioned, and also do the same for potential uses beyond the originally specified conditions.
124
Q

Obstacles to Ethical Decision-making: Rationalizations

A
  1. If it’s necessary, it’s ethical
  2. “The false Necessity Trap”
  3. If it’s legal and permissible, it’s proper
  4. It’s just part of the job
  5. It’s all for a good cause
  6. I was just doing it for you
  7. I’m just fighting fire with fire
  8. It doesn’t hurt anyone
  9. Everyone’s doing it
  10. It’s OK if I don’t gain personally
  11. I’ve got it coming
  12. I can still be objective
125
Q

Team Modalities

A
  • Traditional team = a group of individuals who are collocated and interdependent in their tasks. They undertake and coordinate their activities to achieve common goals and share responsibility for outcomes.
  • Virtual team = same goals and objectives as traditional teams, but operate across time, geographical locations and often organizational boundaries linked by communication technologies.
  • Hybrid team = in practice represents a more flexible co-located approach. Mixed mode teams are not recommended.
126
Q

Challenges to Remote Teams

A
  • Common pitfalls:
    • Lack of trust
    • Information breakdown
    • Isolation
    • Learning gaps
  • Why:
    • Without daily visibility, managers worry about productivity, creating a culture of mistrust and fear.
    • What might have been handled in “drive-by” conversations now requires intentional communication.
    • Physical distance can create emotional distance; isolation is a common complaint among remote workers.
    • Educational opportunities often happen within the walls of HQ, putting remote employees at a deficit.
  • How to solve it:
    • Start with a mindset of trust, combined with documented communication workflows and goals, to ensure visibility and accountability.
    • Create strong documentation and an asynchronous communication strategy to ensure frequent communication.
    • Foster human connection and build a strong culture across offices to avoid connection loss.
    • Ensure access to Learning & Development programs for remote workers.

Distance introduces barriers & complexity and negatively impacts both coordination & visibility and communication & cooperation. In turn, coordination & visibility and communication & cooperation each impact the other, and both add back to the barriers & complexity.

127
Q

Theory of One Team

A
Each strategy pairs a high-level plan with the distance it aims to bridge:
  • Same Time – promote synchronous communication – bridges temporal distance
  • Same Space – promote temporary co-location – bridges spatial distance
  • Same Team – develop cohesion among team members – bridges socio-cultural distance
  • Same Culture – promote cross-cultural communication – bridges socio-cultural distance
  • Same Practices – promote the use of common practices – bridges socio-cultural distance
128
Q

Ideal Structure for Remote Teams

A

The best remote teams:

  • are cross-functional;
  • only tackle a single project at one time;
  • are self-sufficient;
  • understand they can’t do everything.

Remote teams, even when small, need to document and formalize every process. There are few to no opportunities for the casual reminder of a due date or status update while walking past someone in the office, breakroom, or hallway.

129
Q

Work Breakdown Structure (WBS)

A
  • Hierarchical decomposition of project work
130
Q

PMI’s Definition of WBS

A
  • “The process of subdividing project deliverables and project work into smaller, more manageable components.”
  • “The key benefit of this process is that it provides a structured vision of what has to be delivered.”
  • “The WBS is a hierarchical decomposition of the total scope of work to be carried out by the project team to accomplish the project objectives and create the required deliverables.”
  • “The planned work is contained within the lowest level of WBS components which are called work packages.”
  • “A work package can be used to group the activities where work is scheduled and estimated, monitored, and controlled.”
131
Q

When to Create a WBS

A
  • As part of the “Scope Baseline” in the “Project Management Plan” per the PMI process. PMI suggests creating a WBS dictionary as well.
  • Prior to a project schedule or used in the project initiation phase.
    • Provide a starting point for the creation of a formal project plan.
    • Allow the project manager and project team to understand the overall effort needed for the project.
    • Can be an effective means for charting the project course, understanding needed resources, and help to facilitate early project conversations in the initiation and planning phases.
  • Facilitates the creation of the Project Schedule and Plan.
132
Q

ITTOs of the WBS

A
  • Inputs
    • Scope management plan
    • Scope statement
    • Requirements document
    • Enterprise environmental factors
    • Organizational process assets
  • Tools and Techniques
    • Decomposition
    • Expert judgement
  • Outputs
    • Scope baseline
    • Project documents updates
133
Q

WBS Organization Strategies

A

Usually organized around a single principle, e.g., by project phase (sequential), by function, or by component.

  • By Function
    • Project XYZ
      • Requirements
      • Design
        • HW Manager
        • SW Manager
      • Implementation
  • By Component
    • Project XYZ
      • Fuselage
      • Engines
        • Compressor
        • Combustor
      • Avionics

Can be done as a tree or an indented list.
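
The tree and indented-list views of a WBS are two renderings of the same recursive structure. A minimal sketch; the node class is illustrative, not a standard API, and the component names follow the example above:

```python
# A WBS node is a name plus child nodes; the nesting is the tree view,
# and a recursive walk produces the indented-list view.
# Class and names are illustrative only.

class WBSNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def indented(self, level=0):
        """Render this node and its subtree as indented-list lines."""
        lines = ["  " * level + self.name]
        for child in self.children:
            lines.extend(child.indented(level + 1))
        return lines

wbs = WBSNode("Project XYZ", [
    WBSNode("Fuselage"),
    WBSNode("Engines", [WBSNode("Compressor"), WBSNode("Combustor")]),
    WBSNode("Avionics"),
])
print("\n".join(wbs.indented()))
```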

134
Q

WBS Construction Techniques

A
  • Construct either using Top down or Bottom up approaches.
  • Top Down – create summary tasks to represent the project outline. Then add subordinate tasks down to individual tasks, making up the layers of each summary task.
  • Bottom Up – add all of the individual tasks to the project. Then group them by subordinate tasks up to summary tasks.
  • Keep in mind that all the WBS layers may not necessarily be summary tasks from the start.
135
Q

Elements in a WBS

A
  • 100% rule
  • Project Deliverables
  • Team Members
  • Time
  • Budget
136
Q

PM Plan Relationships

A
  • Work Breakdown Structure (What)
  • Schedule Plan (When)
  • Resource Plan (Who)
  • Risk Management Plan (What If)
  • Plan Optimization (How)
137
Q

Requirements Traceability Matrix

A
  • A document that maps out the relationships between requirements and project work (WBS).
  • The RTM provides accountability to project requirements.
  • Three types of traceability:
    • Forward traceability – the ability to trace each requirement forward into the design, tests, and modifications that will complete it.
    • Backward traceability – the ability to map test cases and project work back to specific requirements, preventing scope creep and ensuring that no unnecessary work is completed.
    • Bidirectional traceability – the ability to trace requirements both forward and backward.
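
Bidirectional traceability can be sketched as two lookup tables, with the backward map derived by inverting the forward one. The requirement IDs and work-package names below are hypothetical:

```python
# Requirements Traceability Matrix as a pair of mappings:
# forward = requirement -> work packages; backward is the inversion.
# IDs and names are hypothetical examples.
from collections import defaultdict

forward = {
    "REQ-1": ["WP-Design", "WP-Test"],
    "REQ-2": ["WP-Test"],
}

backward = defaultdict(list)
for req, packages in forward.items():
    for wp in packages:
        backward[wp].append(req)

# Forward: what work realizes REQ-1? Backward: why does WP-Test exist?
print(forward["REQ-1"])      # ['WP-Design', 'WP-Test']
print(backward["WP-Test"])   # ['REQ-1', 'REQ-2']
```

An orphaned work package (one absent from `backward`) signals work no requirement justifies, which is exactly the scope creep the backward direction guards against.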