Six Sigma Green Belt Flashcards
DMAIC
Define, Measure, Analyze, Improve and Control
What is the D in DMAIC
Define the aims of the project - what you want to achieve.
What is the M in DMAIC
Measure the current system to find where the issues are
What is the A in DMAIC
Analyze the data to see what the main issues are and why they are occurring
What is the I in DMAIC
Improve the current system to remove the issues found
What is the C in DMAIC
Control the improved system so the new system is maintained
DEFINE (in the DMAIC methodology)
1-Prioritize 2-Define Scope 3-Choose Team 4-Define Project 5-Plan Timescales
MEASURE (in DMAIC methodology)
1- Understand the current process
2- Voice of the customer
3- Case for change
ANALYZE (DMAIC methodology)
1- Process analysis 2- identify defects 2a- Data analysis 2b- regression analysis 2c- hypothesis tests 2d- experimentation 3- prioritize causes
IMPROVE( DMAIC methodology)
1- create possible solutions 2- refine solutions 3- choose solutions 4- sell solution to stakeholders 5- Pilot Solution 6- Implementation
CONTROL (DMAIC methodology)
1- Monitor the process
2- Determine sigma level
3- sustain the improvement through building the controls in everyday procedures
SIPOC Analysis
high level overview of the process. Concentrates on the inputs and outputs rather than the process itself.
Used in the DEFINE phase of DMAIC methodology
SIPOC Stands for ….
Suppliers – Either the external suppliers or the step before in the internal process which supply the ‘input’ material
Inputs – Anything required for the process, e.g. order form, raw materials, machinery needed
Process – Brief overview of the process (doesn’t have to be a step by step guide)
Outputs – the end product(s), which can include e.g. finished item, invoice, instructions for next step
Customers – The next user after the process, whether the next stage of production or the external customer
COPIS Analysis
used when you want to see the process from a customer-focused view (the reverse of SIPOC). Looks more at the process than the output.
DMADV stands for…
Define, Measure, Analyze, Design, Verify
DFSS stands for…
Design For Six Sigma
DMADV and DFSS refer to the same methodology.
When to use DMADV over DMAIC
when you need a completely redesigned or even a new process or product instead of just an improved one
what are the 5 steps for DMADV methodology?
Define- essentially the same as in DMAIC
1- Team Creation
2- Project Charter/timeframe
3- Authorization from management
Measure - essentially the same as in DMAIC
Analyze- Analyze the data collected in the previous stage and match them up with goals from the define stage.
Design- you’re creating a new process from scratch, using the data that you’ve put together in the 3 previous steps.
Verify- make sure it is in place how it was intended and working according to plan.
Actual Quality
This is the current amount of value you add as output per ‘unit’ of input, to be contrasted with ‘Potential Quality’ which is the maximum amount possible.
DPMO
Defects per Million Opportunities – the number of defects that occur per one million opportunities for a defect to be made.
Gantt Chart
Time planning chart showing the tasks, and when they are expected to be achieved
Pareto Chart / Diagram
A chart showing where your issues are concentrated, to see if you have a few key issues or lots of smaller ones.
Potential Quality
The maximum amount of value you can add per unit of input, to be contrasted with Actual Quality.
Project Charter
Project Charter is the first thing you do after choosing your project – it shows key information such as team, resources, timeline etc.
TIMWOOD
An acronym to remember the 7 wastes of Lean Manufacturing (‘Who is TIM WOOD?’). It stands for Transport, Inventory, Motion, Waiting, Over-Processing, Overproduction and Defects.
force field analysis
Force Field Analysis was developed by Kurt Lewin (1951) and is widely used to inform decision making, particularly in planning and implementing change management programmes in organizations.
FMEA
Failure Modes and Effects analysis
CBA
Cost Benefit Analysis
discrete data (aka Attribute data)
Discrete data is information that can be categorized into a classification. Discrete data is based on counts. Only a finite number of values is possible, and the values cannot be subdivided meaningfully.
e.g. # of parts damaged in shipment
Continuous data
information that can be measured on a continuous scale
e.g. length, size, width
Nominal Data
Variables with no inherent order or ranking sequence
e.g. race, gender
Ordinal Data
Variables with an ordered series
e.g. performance ratings, satisfaction levels
Binary Data
Variables with only two options
e.g. pass/fail, yes/no
Qualitative data vs Quantitative
Qualitative data is subjective in nature and cannot be measured objectively.
Quantitative data is objective in nature and can be measured.
Qualitative data is further classified as Nominal, Ordinal or Binary, whereas Quantitative data is either Discrete or Continuous.
Use X Bar S Control Charts When
- When you can rationally collect measurements in subgroups of more than 10 observations (use the X Bar R Control chart when subgroup sizes are between 2 and 10).
- The data is continuous.
- The sample (subgroup) sizes are constant.
- The measurements are taken at regular intervals.
- We can assume the data is normally distributed.
- The sampling procedure is the same for each sample and is carried out consistently.
Which type of Control Chart is used to monitor DISCRETE data?
p Chart (proportion chart)
np Chart
c Chart
u Chart
When to use a P Chart (proportion chart)
- Sample sizes are NOT equivalent
- Have Discrete Data
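As a sketch of the p chart mechanics: the centre line is the pooled proportion defective, and because sample sizes differ, each subgroup gets its own 3-sigma limits. The function name and the sample numbers below are illustrative, not from any particular dataset.

```python
import math

def p_chart_limits(defectives, sample_sizes):
    """Control limits for a p chart (proportion defective).

    Because sample sizes vary, each subgroup gets its own limits:
        p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n_i)
    """
    p_bar = sum(defectives) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        lcl = max(0.0, p_bar - 3 * sigma)  # a proportion cannot go below 0
        ucl = min(1.0, p_bar + 3 * sigma)  # ... or above 1
        limits.append((lcl, ucl))
    return p_bar, limits

# illustrative data: defectives found in three subgroups of different sizes
p_bar, limits = p_chart_limits([4, 6, 5], [100, 150, 120])
```

Note that the lower limit is clamped at zero; for small proportions the raw 3-sigma lower limit is often negative.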
When to use an np chart
- Sample sizes are equal
- Subgroups are the same size
- Attributes are discrete and binary (ex. yes vs no, up vs down)
When to use a c Chart
- Total opportunity population is large compared to the number of defects
- When you cannot count ‘not a defect’
- Data is a discrete count of defects and each sample has the same area of opportunity (constant sample size)
When to use a u Chart
- Sample sizes vary – you are counting defects per unit (possibly of multiple types) across samples of different sizes
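The u chart's limits can be sketched the same way as the p chart's, except the centre line is defects per unit and the limit width scales with 1/sqrt(n). The function name and the counts below are made up for illustration.

```python
import math

def u_chart_limits(defect_counts, sample_sizes):
    """Control limits for a u chart (defects per unit, varying sample size).

    u_bar = total defects / total units inspected; each subgroup's limits are
        u_bar +/- 3 * sqrt(u_bar / n_i)
    """
    u_bar = sum(defect_counts) / sum(sample_sizes)
    limits = []
    for n in sample_sizes:
        half_width = 3 * math.sqrt(u_bar / n)
        limits.append((max(0.0, u_bar - half_width), u_bar + half_width))
    return u_bar, limits

# illustrative data: defect counts from three lots of different sizes
u_bar, limits = u_chart_limits([7, 3, 9], [50, 40, 60])
```

Larger subgroups produce tighter limits, which is exactly why the u chart (rather than the c chart) is the right choice when sample sizes are not constant.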
Question: Which of the following control charts is most appropriate for monitoring the number of defects on different sample sizes?
(A) u
(B) np
(C) c
(D) p
Answer: (A) The u chart is the most appropriate because it monitors counted defects per unit when sample sizes vary.
Which of the following control charts is used to monitor discrete data?
(A) p
(B) I & mR
(C) X Bar
(D) X Bar – R
Answer: p charts are used to monitor discrete data. See the control chart matrix in downloads. Also, review attribute charts.
Question: The purpose of using control charts is to
(A) determine if the process is performing within specifications
(B) evaluate process performance over time
(C) determine how to recreate the process
(D) detect the causes of nonconformities
Answer: (B) Control charts are used to evaluate the performance of a process over time. It is irrelevant how the specifications are set. A control chart may tell you if nonconformities are present, but it will not tell you what causes them without root cause analysis.
Defects vs Defectives
A product may have many defects – imperfections. But a product is not defective unless the defects prevent the product from functioning. If a product is not usable, it is considered defective.
A measurement system analysis is designed to assess the statistical properties of Please choose the correct answer. a) gage variation b) process performance c) process stability d) engineering tolerances
a) gage variation
Gage R&R
Gage R&R, which stands for gage repeatability and reproducibility, is a statistical tool that measures the amount of variation in the measurement system arising from the measurement device and the people taking the measurement
Measurement System Analysis – MSA
Measurement system analysis (MSA) is an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability.
There are five parameters to investigate in an MSA: bias, linearity, stability, repeatability and reproducibility.
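A minimal sketch of how the Gage R&R fraction falls out of variance components (the variance numbers below are invented; a real study estimates them with ANOVA or the average-and-range method). A commonly cited rule of thumb judges %GR&R under 10% acceptable, 10–30% marginal, and over 30% unacceptable.

```python
import math

def percent_grr(var_repeatability, var_reproducibility, var_part):
    """%GR&R: measurement-system standard deviation as a percentage of the
    total study standard deviation, computed from variance components."""
    var_grr = var_repeatability + var_reproducibility   # gage variance
    var_total = var_grr + var_part                      # add part-to-part variance
    return 100 * math.sqrt(var_grr / var_total)

# illustrative variance components (not from a real study)
result = percent_grr(0.04, 0.01, 0.81)   # ≈ 24.1%, marginal by the usual guideline
```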
For a normal distribution, two standard deviations on each side of the mean would include what percentage of the total population? Please choose the correct answer. a) 68% b) 47% c) 34% d) 95%
d) 95%
68–95–99.7 rule
used to remember the percentage of values that lie within a band around the mean in a normal distribution with a width of two, four and six standard deviations, respectively; more accurately, 68.27%, 95.45% and 99.73% of the values lie within one, two and three standard deviations of the mean, respectively
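The rule can be checked directly from the standard normal distribution, which Python's math module exposes through the error function: P(|X − μ| < kσ) = erf(k/√2).

```python
from math import erf, sqrt

def within_k_sigma(k):
    """Fraction of a normal population within k standard deviations of the
    mean: P(|X - mu| < k*sigma) = erf(k / sqrt(2))."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    # prints 68.27, 95.45, 99.73 for k = 1, 2, 3
    print(k, round(within_k_sigma(k) * 100, 2))
```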
Positional, cyclical, and temporal variations are most commonly analyzed in Please choose the correct answer. a) SPC charts b) multi-vari charts c) cause and effect diagrams d) run charts
b) multi-vari charts
Question: In an “X” Sifting exercise a Belt will use a(n) _______________ to assist in isolating families of variation that may exist within a subgroup, between subgroups or vary over time.
(A) Multi – Vari Chart
(B) Pareto Chart
(C) FMEA
(D) Shewhart Analysis
Answer: (A) Multi-Vari Chart. This is by definition. The Multi-Vari chart helps you isolate variation inside a subgroup.
Which of the following is an example of mistake-proofing?
Please choose the correct answer.
a) Using x and R chart to prevent errors
b) Using 100% inspection to detect and contain defects
c) Using color coding as an error signal
d) Having the team that created the errors repair them
Answer: (c) Using color coding as an error signal is an example of mistake proofing. (This isn’t the best example as anyone who suffers from color blindness can tell you! A better example is the pumps at the gas station. Notice how different fuel types get different shaped nozzles? That makes it very difficult to put the diesel pump into your gasoline tank!)
Mistake Proofing (Poka-Yoke)
Mistake Proofing is about adding techniques to prevent defects and detect defects as soon as possible
Poka-Yoke is often used as a synonymous term but its meaning is to eliminate product defects by preventing human errors
Successive inspection
Successive inspection is a DETECTION inspection which is reactive and provides information back to the source of the error.
Self inspection
Self inspection is a DETECTION inspection in which the operator or device checks the work at the process for a quicker feedback loop to the source of the error. This is more desirable than successive inspection.
Source inspection
Source inspection is the most desirable inspection, since the other two occur later in the process, resulting in more lost time and cost: with them, the error is found only after it has occurred. Source inspection is a PREVENTION inspection – potential errors are caught at the source, before a defect is produced.
two variations of mistake proofing
1. Warnings/Alarms – provide information when an error occurs
2. Controls – prevent and/or stop the process
three types of poka-yoke
- Contact method - identifies defects by testing product characteristics.
- Fixed-value method - checks that a specific number of movements or parts is used every time.
- Sequence method - determines whether the prescribed procedure was followed.
Which of the following tools is used extensively in quality function deployment (QFD)? Please choose the correct answer. a) Affinity diagram b) Matrix diagram c) Cause and effect diagram d) Activity network diagram
b) Matrix diagram
The affinity diagram is used in brainstorming to group like terms together. The cause and effect diagram helps with root cause analysis and looks like a fish bone. The activity network diagram helps show your project timeline.
Quality Function Deployment
a planning process for products and services that starts with the voice of the customer. Basically, it enables people to think together.
Which of the following measures is used to show the ratio of defects to units? Please choose the correct answer. a) DPU b) DPO c) DPMO d) PPM
a) DPU
Defects per Opportunity (DPO)
the number of defects found in your process (usually by sampling) divided by the total number of defect opportunities
Let’s say Joe’s Burgers serves 1,000 customers a day. 50 customers had the wrong order, 75 felt they waited too long, 25 said their order was cold, and 50 more said their burgers just tasted bad. That’s 200 defects (assuming they are mutually exclusive.)
Calculate the DPO (Defects per opportunity)
200 defects in 1,000 opportunities gives a DPO of 0.2, or 20%. (This treats each order as a single defect opportunity; if each of the four complaint types were counted as a separate opportunity, there would be 4,000 opportunities and the DPO would be 0.05.)
Defects per Unit (DPU)
the average number of defects per unit of product.
DPU = Total # of defects found in a sample/Sample Size
Defects per Million Opportunities (DPMO)
a ratio of the number of defects (flaws) in 1 million opportunities when an item can contain more than one defect. To calculate DPMO, you need to know the total number of defect opportunities.
DPMO= (Total # of defects found in a sample/Total # of defect opportunities in the sample)x1,000,000
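The three ratios above can be sketched in a few lines. The defect counts and the four-opportunities-per-unit figure below are invented for illustration.

```python
def dpu(defects, units):
    """Defects per Unit: average number of defects per unit inspected."""
    return defects / units

def dpo(defects, units, opportunities_per_unit):
    """Defects per Opportunity: defects divided by total defect opportunities."""
    return defects / (units * opportunities_per_unit)

def dpmo(defects, units, opportunities_per_unit):
    """Defects per Million Opportunities: DPO scaled to one million."""
    return dpo(defects, units, opportunities_per_unit) * 1_000_000

# hypothetical sample: 34 defects found in 500 units,
# each unit having 4 ways it could be defective
print(dpu(34, 500))       # ≈ 0.068
print(dpo(34, 500, 4))    # ≈ 0.017
print(dpmo(34, 500, 4))   # ≈ 17000
```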
Rolled Throughput Yield (RTY) (the product of the First Pass Yields of the individual steps)
the probability (or percentage of time) that a manufacturing or service process will complete all required steps without any failures.
RTY= (Y1) (Y2) (Y3) (Y4) … (Yn) where Y is the yield (proportion good) for each step
For example, a four-step process has a yield of 0.98 in step 1, 0.95 in step 2, 0.90 in step 3, and 0.80 in step 4.
RTY = (0.98)(0.95)(0.90)(0.80) = 0.67032
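The four-step example above can be reproduced directly; RTY is just the running product of the step yields.

```python
from functools import reduce

def rolled_throughput_yield(step_yields):
    """RTY = Y1 * Y2 * ... * Yn, the product of the per-step yields."""
    return reduce(lambda acc, y: acc * y, step_yields, 1.0)

# the four-step example from the card above
rty = rolled_throughput_yield([0.98, 0.95, 0.90, 0.80])   # ≈ 0.67032
```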
Which of the following is a commonly accepted level for alpha risk? Please choose the correct answer. a) 0.05 b) 0.50 c) 0.70 d) 0.95
a) 0.05
Alpha Risk
Alpha risk (also level of significance) is the risk of incorrectly deciding to reject the null hypothesis.
The most common level for Alpha risk is 5% (usually not over 10%) but it varies by application and this value should be agreed upon with your BB/MBB.
In summary, it’s the amount of risk you are willing to accept of making a Type I error.
Beta Risk
Beta risk is the risk of failing to reject the null hypothesis when it is actually false (a Type II error) – for example, deciding that a part is not defective when it really is.
Which of the following tools is used to translate broad requirements into specific requirements? Please choose the correct answer. a) A quality control plan b) The theory of constraints (TOC) c) A critical to quality (CTQ) tree d) A process flowchart
c) A critical to quality (CTQ) tree
Critical to Quality (CTQ) Trees
diagram-based tools that help you develop and deliver high quality products and services. You use them to translate broad customer needs into specific, actionable, measurable performance requirements.
When an inspection process rejects conforming product, what type of error is being made? Please choose the correct answer. a) α b) β c) σ d) H0
a) α. An alpha error is made when you reject the null hypothesis when it is actually true. In this case, the null is that the product conformed.
A beta error is made when you fail to reject the null when the null is false. In this case, a beta error would be made if the inspection process accepted a non-conforming product.
The sigma symbol has nothing to do with error types. It refers to standard deviation.
The H0 symbol stands for Null Hypothesis, which is not an error type
Errors in Hypothesis Testing
- Type I (α) Error: The null hypothesis is correct, but is incorrectly rejected.
- Type II (β) Error: The null hypothesis is incorrect, but is not rejected.
Question: Which of the following terms is used to describe the risk of a type I error in a hypothesis test?
(A) Power
(B) Confidence level
(C) Level of significance
(D) Beta risk
Answer: (C) Level of significance. The null hypothesis is rejected if the p-value is less than the significance or α level. The α level is the probability of rejecting the null hypothesis given that it is true (type I error) and is most often set at 0.05 (5%).
We can reject Power: power is the probability of correctly rejecting a false null hypothesis (1 − β), not the Type I risk.
A confidence level refers to the percentage of all possible samples that can be expected to include the true population parameter. It is a type of interval estimate of a population parameter. It does not make sense here.
Beta risk is a type 2 error, so we can ignore that option
Statistical Significance
A result is statistically significant when it is unlikely to have occurred under the null hypothesis. The significance level – denoted alpha, or α – is the probability of rejecting the null hypothesis (i.e., concluding that there is a difference between specified populations or samples) when it is actually true.
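As a sketch of how the significance level is used in practice: compute a two-sided p-value for a z statistic and reject the null when p < α. The z value below is hypothetical.

```python
from math import erf, sqrt

def two_sided_p_value(z):
    """Two-sided p-value for a z statistic under a standard normal null:
    p = 2 * (1 - Phi(|z|)), with Phi computed from the error function."""
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * (1 - phi)

alpha = 0.05
z = 2.31                     # hypothetical test statistic
p = two_sided_p_value(z)     # ≈ 0.021
reject_null = p < alpha      # True: the result is significant at the 5% level
```

Note that z = 1.96 gives p ≈ 0.05, which is why 1.96 is the familiar two-sided critical value at α = 0.05.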
The statistics that summarize a population are referred to as (A) categorical statistics (B) descriptive statistics (C) probabilistic statistics (D) control statistics
Answer: (B) Descriptive statistics summarize a population. The other options are nonsensical.
Alias structure
Alias structures describe which effects are confounded (aliased) with one another in a fractional factorial design: because only a subset of the runs is performed, some factor effects cannot be distinguished from certain interactions.
Blocking
Blocking involves recognizing uncontrolled nuisance factors in an experiment – for example, gender and age in a medical study – and grouping experimental runs so that these factors are held as constant as possible within each block.
Classification factor
A classification factor is an element that cannot be specified or set by the experiment designer, but can be used in sample selection. The sex of a subject is an example of a classification factor.
Confounding
Confounding occurs when we can’t be sure which factors – or combinations of factors – are affecting a result. Blocking can help to minimize confounding.
Experimental factor
An experimental factor is one that can be modified and set by the person designing the experiment.
Factor
Factors are elements in your experiment – whether in your control or outside of it – that affect the outcomes
Fractional factorial
Experiment design that tests only a subset of the possible factor-level combinations
Full Factorial
Experiment design that tests the full set of possible factor-level combinations
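The difference between the two designs is easy to see with a quick enumeration of a 2³ design (the two-level factors A, B, C are hypothetical). Restricting the runs to those satisfying C = A·B produces the classic half-fraction, in which C is aliased with the A×B interaction.

```python
from itertools import product

# Full factorial: every combination of three two-level factors A, B, C.
full = [dict(zip("ABC", levels)) for levels in product((-1, 1), repeat=3)]

# Half-fraction (2^(3-1)): keep only the runs where C = A*B.
# C is confounded with the A*B interaction -- that is its alias structure.
fraction = [run for run in full if run["C"] == run["A"] * run["B"]]

print(len(full))      # 8 runs
print(len(fraction))  # 4 runs
```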
Interaction
Factor interactions occur when the effect of one factor on the output depends on the level of another factor.
Nuisance factor
A nuisance factor is the opposite of a treatment factor – an element that is of no interest for the experiment, but needs to be considered anyway in case it skews results.
Qualitative factor
A qualitative factor is a treatment factor that contains a number of categories.
Quantitative factor
A quantitative factor is a treatment factor that can be set to a specific level as required.
Replicates
Replicates are multiple experimental runs of the same factors and experiments. You do this so you can get a sense of the variability in the data set
Treatment factor
A treatment factor is an element that is of interest to you in your experiment, and that you will be manipulating in order to test your hypothesis.
Which of the following measures is increased when process performance is improved? (A) Variability range (B) Capability index (C) Repeatability index (D) Specification limits
Answer: (B) Capability Index. The capability index increases as the process improves.
Process Capability
Process Capability (Pp) compares the specification spread to the process spread. In other words, how distributed the outcome of your process is vs what the requirements allow.
process spread
The process spread is the distance between the highest value generated and the lowest. This is sometimes referred to as the Voice of the Process.
What is a ‘Good’ Process Capability (Pp) Number?
According to Six Sigma, we want a capability index of at least 1.5, since an index of 1.5 puts the nearest specification limit 4.5 standard deviations from the mean – which, with the conventional 1.5 sigma shift, is what a 6 Sigma process delivers and reflects fewer than 3.4 DPMO, the definition of 6 Sigma quality.
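The Pp calculation itself is a one-liner: specification spread divided by six process standard deviations. The specification limits and sigma below are made up for illustration.

```python
def pp(usl, lsl, sigma):
    """Process capability Pp: specification spread over process spread.

    Pp = (USL - LSL) / (6 * sigma)
    """
    return (usl - lsl) / (6 * sigma)

# hypothetical process: spec limits 4 to 10, overall standard deviation 0.5
print(pp(10, 4, 0.5))  # 2.0
```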
Which of the following tools can be used to identify and quantify the source of a problem? (A) Affinity diagram (B) Control chart (C) Pareto chart (D) Quality function deployment
Answer: (C) Pareto Charts are used to identify and quantify the source of a problem. Affinity diagrams help reduce processes to a few key steps. Control charts can show you your process has gotten out of control but won’t identify the source or quantify it. QFD is a planning process for adapting a client’s needs to your plans.
Question: When in the process of trying to identify the Critical X’s for a LSS project a Belt creates a(n) _____________ because frequently it is 20% of the inputs that have an 80% impact on the output.
(A) Pareto Chart
(B) FMEA
(C) Np Chart
(D) X-Y Diagram
Answer: (A) Pareto Charts are used to identify and quantify the source of a problem. Also see FMEA, X-Y Diagram, NP Chart.