Lecture 3 - Why Measurement Flashcards
Why do we need measurement in psychology?
We need a consensus on key constructs & definitions; we need objectivity; we must minimise subjective judgements.
What are we measuring?
- We measure the ATTRIBUTES of people, objects, & events.
What are these measurement rules?
- scaling: what scale will we use? (cm, feet for height). How much of an attribute is present? (low / high depression)
- classification: e.g. yes or no for sleeping patterns.
What is the issue with measurement?
- desirability
- positivity
- recall bias
When using statistical analysis, it's possible that …
The statements made based on patterns of relationships can be untrue (because of extraneous variables; longer studies can help control for external factors, but they're not always ideal because of time, resources, & motivation).
(2) Psychological assessments
- Items form a scale & represent an agreed internal mental construct
These attributes measured should be related to the construct
Organisational commitment
- affective
- normative
- cognitive
Example for items on the scale
- positive & negative affect schedule (PANAS)
(3) Factor Analysis
Measures whether or not items are similar (correlated). If they are not similar, then we have an issue (the attribute is highly unrelated to the construct).
Spearman’s Factor Analysis
A combination of theory & statistics, based on how similar (correlated) the questions are
Job satisfaction vs Organisational commitment
These two constructs are different and are analysed using two different sets of questions.
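The two-construct idea above can be sketched numerically. This is a minimal, hypothetical simulation (not from the lecture): two independent latent constructs each drive three survey items, and the eigenvalues of the item correlation matrix reveal how many factors underlie the questions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: two independent latent constructs (e.g. job
# satisfaction & organisational commitment), each driving 3 items.
sat = rng.normal(size=n)
com = rng.normal(size=n)
items = np.column_stack([
    sat + 0.4 * rng.normal(size=n),   # satisfaction items
    sat + 0.4 * rng.normal(size=n),
    sat + 0.4 * rng.normal(size=n),
    com + 0.4 * rng.normal(size=n),   # commitment items
    com + 0.4 * rng.normal(size=n),
    com + 0.4 * rng.normal(size=n),
])

# Eigenvalues of the item correlation matrix (principal-axis idea).
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: count eigenvalues > 1 as latent factors.
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)
```

Because the satisfaction items correlate with each other but not with the commitment items, the analysis recovers two factors — matching the point that the two constructs need two different sets of questions.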
What are the latent factors?
What are the optimal factors?
Intention to leave (related to Job Sat & Commit)
Job satisfaction (positive)
Intention to leave (negative)
* but both are still different constructs - so measuring job satisfaction may not be the best construct for predicting how many employees will remain in the organisation within a year.
(4) Reliability
Refers to the consistency of a measure
Types of reliability
- inter-rater reliability
- Test-retest reliability
- inter-method reliability
- internal consistency reliability
Reliability of measurement requires at least 3 measures
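Internal consistency reliability from the list above is commonly quantified with Cronbach's alpha. Below is a minimal sketch on hypothetical simulated data (three items driven by one attribute, matching the "at least 3 measures" note); the function name and data are illustrative, not from the lecture.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 3 items all reflecting one underlying attribute.
rng = np.random.default_rng(1)
trait = rng.normal(size=300)
items = np.column_stack([trait + 0.5 * rng.normal(size=300) for _ in range(3)])

alpha = cronbach_alpha(items)
print(round(alpha, 2))  # high alpha: the items measure the attribute consistently
```

Noisier items (a larger error term) would push alpha down, signalling an unreliable scale.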
Assignment:
- Job satisfaction / engagement
- Stress
- Burnout
Polygraph
- measures physiological reactions (heart rate, sweat)
- people can lie, maintain their heart rate, & the machine won't necessarily pick up on it (lack of control of extraneous variables)
- so it is not exactly measuring what it purports to measure (i.e. whether the person is lying when responding to a question; not a particularly valid or reliable form of measurement).
(6) The Net Promoter Score (Reichheld)
- what is the NPS measuring? (Feelings, or behaviour?)
- unrealistic measure of behaviour (referring to the scale)
NPS validity
- what is the reasoning behind the "detractor" & "passive" categories? The cut-offs suggest a skewed measure because the ranges & scale do not align. Additionally, what does a 0 mean for any behaviour? The NPS seems to generate responses at the extremes because of the vagueness of the scale rating.
Other ways to strengthen NPS conclusions
- bring in some responders for a focus group to ask why they have this opinion.
- it's useful for gaining some insights into current levels of satisfaction regarding customer service
- in survey design, it's important to consider the scale ranges used (e.g. 1-5 vs 6-10).
- you can change the question to get the answers that you want.
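The NPS arithmetic itself is simple, which is part of the critique above: passives (7-8) vanish from the score entirely. A minimal sketch with hypothetical ratings:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    # Passives (7-8) are counted in n but contribute nothing to the score.
    return 100 * (promoters - detractors) / n

# Hypothetical responses: 4 promoters, 3 passives, 3 detractors.
scores = [10, 9, 9, 10, 8, 7, 8, 6, 3, 0]
print(net_promoter_score(scores))  # 10.0
```

Note that a 6 counts as a detractor while a 7 counts as nothing — the category cut-offs, not the respondents' feelings, drive the final number.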
(7) Interpretation & results of measurement
- examine current trends
- see patterns of relationships between variables (positive, negative)
Independent variables: Role conflict, burnout, medication & IV incidents
- no account of extraneous variables (patients, ward organisation)
- missing mediation variable
- there should be a different order of variables
How can we determine the causality?
- deductive reasoning/ logic/ argument
- test data using experimental methods (clinical trials)
The Scientific Method
Follow this model
- observation
- theory
- predict
- evidence
* use theory to make a reasonable hypothesis.
(5) Validity
The extent to which your measure actually measures the thing of interest. This requires a look at theory (OB scholars - leadership, engagement, commitment)
Many different types of validity
- Face
- Criterion
- Content
- Construct
Examples of dependent variables
- productivity
- work quality
- employee turnover
- employee absenteeism
- employee satisfaction/ engagement
Measuring Job satisfaction
Worker satisfaction is not as easily measured as turnover or absenteeism because there is a wide range of elements in the work environment that need to be considered.
Job satisfaction & employee turnover & absenteeism might have a negative relationship
A negative relationship occurs when higher job satisfaction results in lower employee turnover & absenteeism.
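The negative relationship can be sketched as a correlation coefficient. This uses made-up illustrative data (not from the lecture): as satisfaction rises, days absent fall, so the Pearson correlation is below zero.

```python
import numpy as np

# Hypothetical data: 8 employees' satisfaction scores & days absent.
satisfaction = np.array([2, 3, 4, 5, 6, 7, 8, 9])
absence_days = np.array([9, 8, 7, 6, 4, 3, 2, 1])

# Pearson correlation: negative r means the variables move in opposite directions.
r = np.corrcoef(satisfaction, absence_days)[0, 1]
print(r < 0)  # True: higher satisfaction goes with lower absenteeism
```

A positive r between satisfaction and commitment would, by the same logic, support the point in the next card.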
Job satisfaction & Organisational commitment
- makes sense for both constructs to have a positive relationship (an increase in job satisfaction goes with higher organisational commitment)
Internal validity
The extent to which extraneous or confounding variables are removed
External validity
Whether research results obtained in one setting will apply to another setting
Application: Research Design: Mixed Method Approach (use experimental/ laboratory & non-experimental/ field experiment methods)
- Observation - surveys/ interviews with team members & departments in the organisation.
- Theory (support your decision to use your chosen research design based on previous research on what your observation reveals about the causes of this conflict):
> McGregor's Theory X & Y (understanding people's motivations; a generalised theory helps with replicability):
- Theory X managers believe their employees dislike work & only work for a paycheck, & will likely use an authoritarian style of management.
- Theory Y managers believe employees like to work & want to make decisions with less supervision, & will use a decentralised, participative management style.
- Predict: with the theory we build a hypothesis, & your research design helps you collect data to decide whether your hypothesis is supported or not.
- Evidence: Conclude with the best evidence you have gathered (refer to evidence-based practice; aggregate the best evidence; recognise flaws/ limitations of your research study).