Responsible Use Flashcards
Who is accountable for AI use in engineering?
The user — the engineer remains accountable for any output they use.
Describe 3 dangers of using AI in an engineering context
Incorrect information — undermines competence
Bias — undermines integrity and objectivity
Incoherence and irrelevance — undermines coherence
Where do AI's stereotyped outputs originate?
The training data
Describe how generalisation is a good thing and contrast it to how it could lead to a bad outcome.
Generalisation means the LLM can work on tasks outside its training data. It is good because the model can apply lessons from past examples to new situations.
However, engineering problems are complex, and a prior solution may not fit a new problem.
The problem context needs to be specified very precisely to avoid this.
Explain how curation can introduce new bias
The curated training data can itself be biased, so we can unknowingly adopt someone else's bias.
What are the four ways AI can misinform the reader? How does verification avoid these?
Factual incorrectness — open the links/references to verify
Bias — ask for alternative opinions
Incompleteness — ask for a peer review of the output
Incoherence or irrelevance — check that it answers the question; ask the AI to explain its answer
How does AI use conflict with the engineer’s ethical obligation to act competently?
Are you competent in the area you are asking for AI help with, and can you critically evaluate the output? Could you have arrived at the same solution on your own eventually? Have you avoided using your own skills? Are you misrepresenting your own expertise or competence?
Explain how AI usage could go wrong in each of these roles:
- Tutor, editor, researcher, peer reviewer, colleague