Bias Flashcards
What is confirmation bias in the context of RAKT’s chatbot, and how might it affect the chatbot’s performance?
Confirmation bias occurs when the chatbot’s training data is skewed toward a particular viewpoint, such as only including customer queries related to certain types of insurance policies. This could lead to the chatbot being less accurate in handling queries about other types of policies, reducing its effectiveness.
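A minimal Python sketch of how such skew could be surfaced before training, assuming invented example queries and an illustrative 15% cut-off (neither comes from the case study):

```python
from collections import Counter

# Hypothetical labelled training queries: (query_text, policy_type).
# The queries and the 15% threshold are invented for illustration.
training_queries = [
    ("How do I renew my car policy?", "motor"),
    ("What does my motor excess cover?", "motor"),
    ("Can I add a named driver?", "motor"),
    ("Is my no-claims bonus protected?", "motor"),
    ("Do you cover learner drivers?", "motor"),
    ("What happens if I miss a payment?", "motor"),
    ("Is flood damage included in home cover?", "home"),
    ("How do I claim for a lost phone?", "gadget"),
]

counts = Counter(policy for _, policy in training_queries)
total = sum(counts.values())

THRESHOLD = 0.15  # flag policy types below this share of the data
for policy, n in counts.most_common():
    share = n / total
    flag = "  <-- under-represented" if share < THRESHOLD else ""
    print(f"{policy:8s} {n:3d} ({share:.0%}){flag}")
```

Here "home" and "gadget" queries each fall below the threshold, so the chatbot would likely handle them poorly.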
How could historical bias impact RAKT’s chatbot, and what is an example of this?
Historical bias occurs when the training data is outdated and does not reflect recent changes. For example, if RAKT’s chatbot is trained on data that is several years old, it may respond inaccurately to current customer queries about new insurance products or regulations, giving outdated or incorrect advice.
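One way such staleness might be audited is to timestamp each training record and flag anything outside a freshness window; the records, dates, and three-year window below are all invented for illustration:

```python
from datetime import date

# Hypothetical training records: (query, date_collected).
records = [
    ("Does my policy cover e-scooters?", date(2024, 3, 1)),
    ("How do I claim for storm damage?", date(2018, 6, 12)),
    ("What is the excess on home cover?", date(2017, 1, 5)),
]

CUTOFF_YEARS = 3              # illustrative freshness window
today = date(2025, 1, 1)      # fixed so the example is reproducible

stale = [(q, d) for q, d in records if (today - d).days > CUTOFF_YEARS * 365]
print(f"{len(stale)} of {len(records)} records predate the freshness window:")
for q, d in stale:
    print(f"  {d}  {q}")
```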
What is labelling bias, and how could it affect RAKT’s chatbot?
Labelling bias occurs when the labels applied to training data are subjective, inaccurate, or incomplete. For example, if customer queries are labelled too generically, the chatbot might fail to predict the customer’s intent accurately, leading to irrelevant or incorrect responses.
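A rough sketch of how overly generic labels might be spotted, assuming hypothetical annotated queries and an arbitrary threshold of two queries per label:

```python
from collections import defaultdict

# Hypothetical queries whose annotators applied one generic label;
# the queries and the threshold are invented for illustration.
labelled = [
    ("My windscreen cracked, can I claim?", "claims"),
    ("How long does a claim payout take?", "claims"),
    ("Can I cancel a claim I already filed?", "claims"),
    ("What is my renewal date?", "renewals"),
]

by_label = defaultdict(list)
for query, label in labelled:
    by_label[label].append(query)

# One label covering many distinct questions suggests the labelling is
# too coarse for the chatbot to learn the customer's real intent.
for label, queries in by_label.items():
    if len(queries) > 2:
        print(f"Label '{label}' covers {len(queries)} distinct queries:")
        for q in queries:
            print("  -", q)
```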
How does linguistic bias manifest in RAKT’s chatbot, and what could be the consequences?
Linguistic bias occurs when the dataset is biased toward certain linguistic features, such as formal language. If RAKT’s chatbot is trained on formal written language, it might struggle to understand and respond appropriately to informal or colloquial customer queries, reducing its effectiveness.
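A crude heuristic check of the register mix in the training data; the marker list and sample queries are invented, and a production audit would need something far more robust:

```python
# Illustrative heuristic: treat contractions and texting shorthand
# as markers of informal language. The marker list is invented.
INFORMAL_MARKERS = {"gonna", "wanna", "pls", "plz", "thx", "u", "ur", "cant", "dont"}

queries = [
    "I would like to enquire about the renewal of my policy.",
    "pls tell me when my cover ends thx",
    "Could you confirm the excess applicable to my claim?",
    "u guys gonna pay my claim or what",
]

def looks_informal(text: str) -> bool:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & INFORMAL_MARKERS)

informal = sum(looks_informal(q) for q in queries)
print(f"{informal} of {len(queries)} queries look informal")
# If this share is near zero in the training set, informal customers
# are likely to be misunderstood at run time.
```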
What is sampling bias, and how could it affect RAKT’s chatbot?
Sampling bias occurs when the training dataset is not representative of the entire population. For example, if RAKT’s chatbot is trained only on data from one demographic, it may perform poorly when interacting with customers from other demographics, leading to unfair or inaccurate responses.
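One simple representativeness check is to compare the demographic mix of the sample against the known customer base; all figures below are invented for illustration:

```python
# Hypothetical age-band mix of the training sample vs the overall
# customer base; none of these figures come from the case study.
sample_counts = {"18-30": 700, "31-50": 250, "51+": 50}
population_share = {"18-30": 0.30, "31-50": 0.45, "51+": 0.25}

total = sum(sample_counts.values())
for band, expected in population_share.items():
    observed = sample_counts[band] / total
    flag = "  <-- skewed" if abs(observed - expected) > 0.10 else ""
    print(f"{band:6s} observed {observed:.0%} vs population {expected:.0%}{flag}")
```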
What is selection bias, and how might it impact RAKT’s chatbot?
Selection bias occurs when training data is not randomly selected but is chosen based on specific criteria. For example, if RAKT’s chatbot is trained on data suggesting that certain demographics are more likely to file claims, it might unfairly favor or disfavor those groups, leading to biased outcomes.
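The sketch below contrasts criteria-based selection with a random sample over an invented pool in which one group files claims more often by construction; it is illustrative only:

```python
import random

random.seed(0)  # reproducible illustration

# Invented pool of past interactions: (customer_id, group, filed_claim),
# where group_b files claims more often by construction.
pool = []
for i in range(300):
    group = "group_b" if i % 3 == 0 else "group_a"
    filed = (i % 2 == 0) if group == "group_b" else (i % 5 == 0)
    pool.append((i, group, filed))

# Criteria-based selection: keep only records where a claim was filed.
chosen = [r for r in pool if r[2]]
# Unbiased alternative: a simple random sample of the same size.
control = random.sample(pool, len(chosen))

def share_group_b(records):
    return sum(1 for _, g, _ in records if g == "group_b") / len(records)

print(f"group_b share, claims-only selection: {share_group_b(chosen):.0%}")
print(f"group_b share, random sample:         {share_group_b(control):.0%}")
```

The claims-only selection over-represents group_b relative to the random sample, so a model trained on it would learn a distorted association between that group and claims.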
How could bias in training data lead to ethical challenges for RAKT’s chatbot?
If the training data is biased, the chatbot might perpetuate stereotypes or discriminatory practices. For example, if the data reflects societal prejudices, the chatbot could provide unfair or discriminatory advice, violating ethical principles of fairness and inclusivity.
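One concrete way such unfairness might be audited is a demographic parity check: compare how often the chatbot’s advice is favourable across groups. The outcome counts below are invented; a real audit would use production logs:

```python
from collections import defaultdict

# Hypothetical logged outcomes: (customer_group, advice_was_favourable).
outcomes = ([("group_a", True)] * 80 + [("group_a", False)] * 20
            + [("group_b", True)] * 55 + [("group_b", False)] * 45)

tallies = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
for group, favourable in outcomes:
    tallies[group][0] += int(favourable)
    tallies[group][1] += 1

rates = {g: fav / tot for g, (fav, tot) in tallies.items()}
for g, r in rates.items():
    print(f"{g}: {r:.0%} favourable")

gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {gap:.0%}")  # a large gap signals unfair treatment
```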
What steps can RAKT take to mitigate bias in their chatbot’s training data?
RAKT can ensure the dataset is diverse, representative, and unbiased by including data from various demographics, using accurate labeling, and regularly updating the dataset to reflect current trends and changes in the insurance industry.
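As one illustrative mitigation, a skewed dataset could be rebalanced by down- or up-sampling each class to a common size; the data, class names, and target size below are all invented:

```python
import random
from collections import Counter

random.seed(1)  # reproducible illustration

# Invented labelled queries, heavily skewed toward motor insurance.
data = [("q", "motor")] * 600 + [("q", "home")] * 120 + [("q", "travel")] * 80

def rebalance(records, per_class):
    """Down- or up-sample each class to `per_class` examples (illustrative)."""
    by_class = {}
    for record in records:
        by_class.setdefault(record[1], []).append(record)
    balanced = []
    for rows in by_class.values():
        if len(rows) >= per_class:
            balanced += random.sample(rows, per_class)     # downsample
        else:
            balanced += random.choices(rows, k=per_class)  # upsample w/ replacement
    return balanced

balanced = rebalance(data, per_class=150)
print(Counter(cls for _, cls in balanced))  # each policy type now equal
```

Rebalancing is only one lever; accurate labelling and regular refreshes of the dataset address the labelling and historical biases described above.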