w7 Flashcards

1
Q

4P’s of Creativity

A
  1. Product (output, like list of creative ideas)
  2. Process (how one comes up with the ideas, e.g., optimal foraging)
  3. Person (characteristics of a creative person, e.g., openness, # published poems)
  4. Press (effects of context/environment, e.g., instructional manipulation)
2
Q

Alternative Uses Task: GPT vs. humans

A

Humans:
- Excel in originality, generating creative and semantically distant ideas.
- Show a tradeoff between originality and utility, balancing both effectively.
GPT:
- Excels in utility, producing practical and functional responses.
- Struggles with originality, often generating predictable or less novel ideas due to reliance on training-data patterns.

3
Q

What is abstraction?

A

The ability to discover patterns from a few noisy images, sounds, or other sensory inputs.

4
Q

What is analogical reasoning?

A

Learning about new things by relating them to what you already know.

5
Q

What is analogy-making?

A

Using what you know about a situation to infer knowledge about a new, somehow related instance.
Calculator is to arithmetic as ChatGPT is to?

6
Q

In adults, the analogy processing steps are:

A

(1) encode analogy elements A, B and C
(2) search for relationship between A and B (“stands on”)
(3) align A and C (“body and tree are things that stand”)
(4) map A:B relationship to C to get D (“tree stands on roots”)
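
One way to make steps (2)–(4) concrete is the classic "parallelogram" model from word-embedding research: treat the A→B relation as a vector offset and apply it to C. A minimal numpy sketch with hypothetical toy embeddings (the vectors below are invented for illustration, not taken from the lecture or a real model):

```python
import numpy as np

# Hypothetical toy embeddings; the last dimension loosely encodes "stands on".
emb = {
    "body":   np.array([1.0, 0.2,  0.0]),
    "feet":   np.array([1.0, 0.2, -1.0]),
    "tree":   np.array([0.0, 0.9,  0.0]),
    "roots":  np.array([0.0, 0.9, -1.0]),
    "leaves": np.array([0.0, 1.1,  0.9]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def solve_analogy(a, b, c, vocab):
    """A : B :: C : ?  --  apply the A->B offset to C, return the nearest word."""
    target = vocab[c] + (vocab[b] - vocab[a])   # map the A:B relation onto C
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(solve_analogy("body", "feet", "tree", emb))   # -> roots
```

A solver relying on surface similarity alone would instead pick the word nearest to C by itself (here, "leaves"), which mirrors the association pattern in the children's card below.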

7
Q

In children, the analogy processing steps are:

A

(1) encode analogy elements C (maybe A, B, instruction)
(2) ignore / forget about A and B?
(3) search for similar instances to C (“leaves”, “branches”, “bush”, “nest”)
(4) use perceptual/semantic similarity with C to get D (“tree has leaves”)

8
Q

Why do LLMs have trouble with analogical transfer?

A
  • Lack domain knowledge
  • Lack a conceptual abstraction of what constitutes an alphabet (e.g., being an ordered sequence), so they can’t flexibly map to less familiar domains
9
Q

What do good (A)GI tests have?

A
  • unlimited rules
  • limited training examples
    to force the AI to “think”
10
Q

Associative error: Duplication

A

Associative Error:
- Errors made by AI models due to their reliance on statistical associations within the training data. These errors reflect overreliance on patterns or repetitions rather than reasoning.
Duplication:
- A specific type of associative error where the model repeats parts of the input or closely related content, mistaking it for a valid response.
Example: In language generation, if prompted with a sentence about a “brick wall,” the model might redundantly describe “a wall made of bricks” without adding new information or insight.
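
A rough way to flag duplication errors automatically is to check how much of a response is copied from the prompt, e.g., via word n-gram overlap. A minimal sketch; the n-gram size and threshold are arbitrary choices, not from the lecture:

```python
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_like_duplication(prompt, response, n=3, threshold=0.6):
    """Flag responses whose word n-grams mostly also occur in the prompt."""
    resp = ngrams(response, n)
    if not resp:
        return False
    return len(resp & ngrams(prompt, n)) / len(resp) >= threshold

prompt = "Describe an unusual use for a brick wall."
print(looks_like_duplication(prompt, "A use for a brick wall."))  # True: mostly copied
```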

11
Q

Literal solutions and conceptual

A

Literal Solutions:
- Solutions that rely on surface-level patterns or direct, observable features of a problem without engaging with deeper abstract principles or relationships.
In AI (as highlighted in the lecture and book chapters), literal solutions are often the result of statistical correlations learned during training.
Example: In an analogy problem, if AI matches answers based solely on visible similarities without understanding the underlying relationship, it provides a literal solution.
Conceptual Solutions:
- Solutions that involve understanding and applying abstract principles, relationships, or rules that go beyond surface-level features.
These solutions demonstrate an ability to generalize and capture deeper meanings, which is challenging for LLMs.
Example: Correctly solving analogies like “Athens : Greece :: Paris : France” by understanding the city-country relationship rather than pattern-matching.

12
Q

Do LLMs solve kidsARC items like children?

A
  • Young kids and LLMs make numerous duplication errors.
  • Humans make more concept errors.
  • LLMs make more literal errors.
13
Q

True or False:

Analogy-making and abstraction in LLMs lag far behind humans, and LLMs currently cannot generalize as well as children can.

A

True

14
Q

Challenges of Evaluating AI Intelligence:

A

Data Contamination: Inflated performance occurs when test data leaks into training sets, making results unreliable (e.g., GPT models performing well on benchmarks they were exposed to).
Benchmark Limitations: Tests like the Bar Exam or GLUE measure surface-level skills and statistical patterns but do not reflect true general intelligence.
Robustness Issues: AI struggles with edge cases and generalizing to unseen data.
Example: GPT-4’s ability to pass standardized tests does not equate to deep understanding or reasoning.

15
Q

Creativity in AI: Discuss how the 4P’s of Creativity framework from the lecture applies to AI systems. Which aspects of this framework do AI models struggle with the most, and why?

A
  • The 4P’s framework (Product, Process, Person, Press) shows AI excels in creating products but struggles with Process and Person aspects. AI lacks emotional nuance, meaning, and flexibility in ideation.
    Example: In the “Alternative Uses Test,” GPT-3 gave functional but predictable responses (“brick as decoration”), whereas humans generated abstract ideas tied to emotional and situational contexts.
16
Q

Theory of Mind in AI:

A

GPT-4’s apparent “theory of mind” success is attributed to statistical associations rather than genuine psychological reasoning.
Example: In false-belief tasks, GPT-4 predicts answers based on learned patterns but lacks an internal model of beliefs or mental states, as highlighted in the Melanie Mitchell article.

17
Q

Human-Centered AI:

A

Principles like fairness, inclusivity, accountability, and transparency ensure AI aligns with human values.
Addressing bias in hiring algorithms or ensuring equitable access to AI technologies exemplifies these principles. Regulation is essential to mitigate risks like job displacement and surveillance.

18
Q

Analogical Reasoning:

A

AI fails in analogical reasoning because it lacks relational understanding. Humans use context and knowledge transfer (e.g., “roots anchor a tree like feet anchor a body”), while AI relies on surface-level patterns.
Example: ARC tests expose AI’s inability to generalize relationships from unfamiliar symbols or novel tasks.

19
Q

What is the primary issue with data contamination in AI evaluations?
a) AI systems fail to recognize patterns in new datasets.
b) AI training datasets unintentionally include test questions, inflating performance.
c) AI systems are unable to distinguish between training and test data.
d) AI evaluations are too costly to administer reliably.

A

b) AI training datasets unintentionally include test questions, inflating performance.

20
Q

In the “Alternative Uses Test,” how did GPT-3’s responses differ from human responses?
a) GPT-3 generated highly abstract uses compared to humans.
b) GPT-3 responses were more flexible but lacked persistence.
c) GPT-3 provided functional but less semantically distant uses.
d) GPT-3 outperformed humans in generating original ideas.

A

c) GPT-3 provided functional but less semantically distant uses.

21
Q

What is the primary reason AI struggles with theory-of-mind tasks?
a) Lack of sufficient training data on psychological reasoning.
b) Dependence on shallow heuristics rather than robust conceptual understanding.
c) AI models lack the processing power for abstract reasoning.
d) AI systems misinterpret linguistic prompts in tasks.

A

b) Dependence on shallow heuristics rather than robust conceptual understanding.

22
Q

According to Chapter 15, which principle is NOT part of human-centered AI design?
a) Accountability and transparency.
b) Maximizing automation to replace human labor.
c) Ensuring fairness and inclusivity.
d) Aligning AI with human values.

A

b) Maximizing automation to replace human labor.

23
Q

Why are benchmarks for AI often criticized as “broken”?
a) They are too complex for most AI models.
b) They encourage overfitting by focusing on statistical shortcuts.
c) They fail to include tasks that humans perform poorly on.
d) They prioritize open-source models over proprietary ones.

A

b) They encourage overfitting by focusing on statistical shortcuts.

24
Q

True or False: Melanie Mitchell argues that AI models’ high performance on benchmarks like the Bar Exam proves they possess general intelligence.

A

False (she argues the opposite: high benchmark scores can reflect data contamination and statistical shortcuts rather than general intelligence).

25
Q

True or False: The lecture emphasized that humans outperform AI in tasks requiring emotional nuance and contextual understanding.

A

True

26
Q

True or False: Chapter 16 argues that transparency in AI training data is unnecessary for creating ethical systems.

A

False (transparency is presented as essential for ethical, accountable AI).

27
Q

True or False: The 4P’s of Creativity framework applies equally well to evaluating human and AI creativity.

A

False (AI struggles with aspects like emotional depth and flexibility.)

28
Q

True or False: The “last 10%” problem refers to AI’s inability to generalize across tasks after achieving near-human performance in specific domains.

29
Q

Why does the presentation argue that benchmarks like the Bar Exam and Raven’s Progressive Matrices are insufficient for measuring AI intelligence?

A

Benchmarks like the Bar Exam and Raven’s Progressive Matrices measure surface-level performance, relying on statistical patterns rather than demonstrating deep reasoning or understanding. The presentation highlights that AI models often exploit shortcuts in datasets, such as common patterns in answer options, rather than solving problems through genuine cognitive processes. This leads to inflated performance that does not reflect true general intelligence or adaptability.

30
Q

How does the dual pathway to creativity model explain differences between human and AI creativity?

A

The dual pathway model describes two processes for creativity: flexibility, which explores diverse categories and novel ideas, and persistence, which generates multiple ideas within a single category. Humans balance these pathways, often producing abstract or semantically distant ideas. In contrast, AI models like GPT-3 excel in persistence but struggle with flexibility, often providing predictable or context-limited responses. For instance, when asked for alternative uses for a brick, AI responses focus on practical uses, while humans produce more imaginative answers.
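
If each response is labelled with an idea category, the two pathways can be operationalised roughly as: flexibility = number of distinct categories explored, persistence = depth within a single category. A toy sketch (the brick-use responses and category labels are hypothetical):

```python
from collections import Counter

# (idea, category) pairs for "uses for a brick" -- hypothetical labels.
responses = [
    ("build a wall",           "construction"),
    ("build a garden path",    "construction"),
    ("doorstop",               "weight"),
    ("paperweight",            "weight"),
    ("grind into red pigment", "art"),
]

def flexibility(resp):
    """Breadth: number of distinct idea categories explored."""
    return len({cat for _, cat in resp})

def persistence(resp):
    """Depth: most ideas generated within any single category."""
    return max(Counter(cat for _, cat in resp).values())

print(flexibility(responses), persistence(responses))  # 3 2
```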

31
Q

Why does the presentation suggest that data contamination undermines claims about AI intelligence?

A

Data contamination occurs when test data inadvertently overlaps with training data, leading AI systems to produce correct answers without genuine reasoning. The presentation emphasizes that this issue inflates AI performance on benchmarks, as models may “memorize” answers rather than generalize concepts. This problem questions the reliability of benchmarks like SuperGLUE, which claim to evaluate language understanding but fail to account for dataset leakage.
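
A crude contamination check searches the training corpus for long verbatim substrings of each test item; real audits use fuzzier matching, so treat this as a sketch only (the corpus and item below are made up):

```python
def contaminated(test_item, training_docs, span=50):
    """Flag a test item if any `span`-character window of it
    appears verbatim in some training document."""
    text = " ".join(test_item.lower().split())   # normalise case and whitespace
    docs = [" ".join(d.lower().split()) for d in training_docs]
    windows = [text[i:i + span] for i in range(max(1, len(text) - span + 1))]
    return any(w in doc for doc in docs for w in windows)

corpus = ["... the bar exam asked which party bears the burden of proof at trial ..."]
item = "The bar exam asked which party bears the burden of proof at trial."
print(contaminated(item, corpus))  # True: this test item leaked into training data
```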

32
Q

What role does analogy play in evaluating AI intelligence, and why do AI models struggle with it?

A

Analogies require understanding relationships between concepts and applying them to novel contexts, a hallmark of human intelligence. AI models struggle with analogical reasoning because they rely on statistical associations rather than relational understanding. For example, in tasks like “tree:roots :: body:feet,” humans recognize the functional relationship (anchoring), while AI often fails to grasp the deeper connection, especially when presented with unfamiliar symbols or terms.

33
Q

How does the presentation describe the relationship between AI creativity and the 4P’s framework (Product, Process, Person, Press)?

A

AI performs well in the Product aspect, generating outputs like art or stories that appear creative. However, it struggles with Process, as its ideas stem from statistical patterns in training data rather than iterative or imaginative exploration. AI lacks a “Person,” meaning it does not possess traits like curiosity or emotional depth, and it is less influenced by Press (environmental factors shaping creativity). This framework highlights that while AI mimics creative output, it does not replicate the underlying processes or motivations of human creativity.

34
Q

What is semantic distance?

A

The distance in meaning between two words or phrases.
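
Computationally, semantic distance is usually operationalised as cosine distance between embedding vectors, which is one common way AUT originality gets scored. A minimal numpy sketch; the vectors are hypothetical stand-ins for real sentence embeddings:

```python
import numpy as np

def semantic_distance(u, v):
    """Cosine distance: ~0 for near-synonyms, larger for unrelated meanings."""
    return 1 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical embeddings for three uses of a brick.
doorstop = np.array([0.9, 0.1, 0.3])
building = np.array([0.8, 0.2, 0.4])
artwork  = np.array([0.1, 0.9, 0.2])

print(semantic_distance(doorstop, building))  # small: close in meaning
print(semantic_distance(doorstop, artwork))   # larger: semantically distant
```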

35
Q

What is the difference between associative and analogical reasoning?

A

Similarities vs. complex relationships.

Associative:
1. encode analogy elements C
2. ignore/forget about A and B
3. search for similar instances to C
4. use perceptual/semantic similarity with C to get D

Analogical reasoning:
1. encode analogy elements A, B and C
2. search for relationship between A and B
3. align A and C
4. map A:B relationship to C to get D