Vasco Flashcards

1
Q

What happened in the Winter of AI of 1973? Was this the death of Classical AI
(e.g. expert systems)? Why or why not?

A

● The “Winter of AI” in 1973 did not mark the end of Classical AI.
● It led to the development of Nouvelle AI in robotics, where instead of building
detailed world models and associating objects with symbols, a subsumption
architecture was used.
● This period also saw the initial limited use of neural networks/connectionism,
eventually leading to the emergence of machine learning, deep learning, and
large language models.

2
Q

Consider Descartes’ two tests to determine whether a hypothetical cyborg that
looks like a human being is human or not. Should Descartes receive credit for the
Turing test? Why or why not?

A

● Descartes proposed two tests to distinguish machines from humans, focusing
on behaviour and the reasons for behaviour.
● The first test questioned a machine’s ability to use language with the same
complexity as humans in all situations.
● The second test suggested that machines act from specific arrangements
(programming), not knowledge.
● While Descartes’ tests predate and somewhat resemble the Turing Test, they are
not the same. Turing’s test specifically measures a machine’s ability to exhibit
indistinguishable behavior from a human in a conversational setting. Descartes’
tests, on the other hand, are broader and involve a deeper inquiry into the nature
of machine intelligence.

3
Q

Is it true that humans are machines? How we answer questions today regarding
the feasibility of AI is deeply rooted in long-held assumptions linked to how we
have been answering this question in the past. In your answer, discuss the views
of Descartes and La Mettrie on the topic. Make sure that you go beyond their
basic claims and also go into the reasons they provide to support them.

A

● Descartes viewed the human body as a machine but maintained that the mind, or
thought, is a function of the soul and not physical, thus supporting Cartesian
dualism.
● Julien de La Mettrie extended Descartes’ view to assert that the human body is a
machine that winds its own springs, challenging the existence of the soul as a
separate substance.
● Both philosophers contributed to long-held assumptions regarding the feasibility
of AI, with Descartes separating mind and body, and La Mettrie proposing a more
unified, physicalist view.

4
Q

What is the frame problem and what’s its relevance in the history of AI? In your
view, does ChatGPT suffer from the frame problem?

A

The frame problem, as described by Daniel Dennett, refers to the challenge
of enabling a robot or artificial intelligence system to determine which elements of its
knowledge are relevant in a given situation. It’s a fundamental issue in AI that concerns
how an intelligent agent should use its understanding effectively when considering an
action, particularly in distinguishing relevant implications of its actions from irrelevant
ones.
In the history of AI, the frame problem has been significant because it highlights the
difficulties in creating AI systems that can effectively interpret and interact with the real
world. It underscores the complexity of real-world environments and the challenges in
programming AI to understand and adapt to these complexities.
Regarding ChatGPT and the frame problem, while ChatGPT is designed to process and
generate human-like text based on the input it receives, it is not immune to the frame
problem. ChatGPT may struggle with understanding the full context or implications of
certain scenarios, particularly those that require a deep understanding of the physical
world or human experience beyond textual information.

5
Q

What is Nick Bostrom’s view on super-intelligence, and is he correct? Why?

A

● Bostrom suggests that superintelligence is inevitable and argues that an inferior
intelligence (humans) will always depend on a superior one for survival.
● He points out the limitations of biological neurons compared to machines and
poses questions about the future of human creativity and authenticity in the face
of superior machine capabilities.
● Bostrom’s view is not definitively correct or incorrect; it’s a theoretical perspective
on the future of AI and its implications for humanity. The accuracy of his
predictions remains a topic of debate and exploration in the field of AI ethics and
philosophy.

6
Q

In cyberwarfare, are the most economically advanced countries better protected?
Explain, discussing implications to conflicts occurring today or recently.

A

Most economically advanced countries have robust offensive cyber capabilities but are
also highly vulnerable due to their extensive connectivity, including reliance on the
Internet of Things and AI-driven systems. This increased reliance on technology makes
them susceptible to attacks, including power grid disruptions and election interference
via AI-driven methods like deep fakes. Cyber threats have escalated in importance in
national security, with intellectual property theft, general disruption to life, and military
attacks being key concerns today.

7
Q

How are recent changes in neural network algorithms changing the way we
should think about the use of AI in cyberwarfare (consider the traditional vs more
recent capabilities of neural networks)?

A

The evolution of neural networks, particularly with the development of Generative
Adversarial Networks (GANs), has significantly changed the landscape of AI in
cyberwarfare. Historically, AI’s role was primarily in defense, such as identifying threats
through pattern recognition in anti-virus software. However, with advancements like
GANs, AI’s role has expanded to include offensive capabilities, such as generating
novel attacks and automating human impersonation, making cyberattacks more
sophisticated and harder to detect.

8
Q

Why is the military so interested in drone swarms (discuss some of their
characteristics), and what are some of the relevant ethical implications we should
worry about?

A

The military is interested in drone swarms due to their cost-effectiveness, superior
performance in pattern recognition, and enhanced target acquisition capabilities. Drone
swarms, being low-cost and numerous, can overwhelm defences and offer redundancy
and attrition tolerance. Their small size makes them hard to detect and track, and they
can operate autonomously, reducing the need for human intervention and the risk of
electronic interference. Ethically, the key worries include the distancing of human
decision-making from lethal strikes, the potential for civilian casualties without
proper oversight, and the risk of these systems being misused or malfunctioning.

9
Q

What are some ethical advantages of using autonomous drone swarms with
autonomous lethal capabilities (i.e. decision whether to kill can be made
autonomously). Consider Arkin’s views.

A

Autonomous drone swarms have several ethical advantages, as argued by Arkin. They
can act more conservatively than humans in target identification, possess advanced
sensory capabilities, lack human emotions that might cloud judgment, avoid
psychological biases, integrate information more rapidly, and monitor ethical behavior
on the battlefield. Autonomous systems are seen as potentially capable of making more
ethical decisions than human soldiers, who might be driven by emotions, stress, or
biased perceptions.

10
Q

Consider Arkin’s arguments for why human soldiers fail to make correct ethical
decisions. Is he correct in arguing for an alternative, and why?

A

Arkin argues that human soldiers often fail to adhere to ethical and legal standards on
the battlefield due to factors like emotional stress, revenge-seeking, dehumanization of
the enemy, and unclear orders. He suggests that autonomous systems, being devoid of
these human frailties, could perform more ethically. However, the perfection of ethical
behavior in autonomous systems is not claimed; rather, the aim is for them to perform
better than humans in similar situations.

11
Q

Outline the two proposals for ethical compliance mechanisms for autonomous
drones and discuss their feasibility.

A

There are two main proposals for ensuring ethical compliance in autonomous drones.
The first is Arkin’s Ethical Governor, which imposes constraints based on the law of war
and rules of engagement, requiring proportionality and necessity tests before employing
lethal force. The second proposal involves the development of strong AI, which could
ensure compliance with international humanitarian law, though it is recognized that such
AI would lack human qualities like emotion and compassion, necessitating human
oversight for adequate civilian protection.

12
Q

Consider the main social/ethical advantages and drawbacks of the use of
autonomous AI in war, and discuss in what ways we should be excited or
concerned about the implementation of these technologies in the near future

A

The use of autonomous AI in war presents both social and ethical advantages and
drawbacks. Advantages include potential for more ethical conduct than human soldiers,
reduced cost, and enhanced performance in target acquisition. However, there are
serious ethical concerns, such as the potential for increased civilian casualties without
proper oversight, the distancing of human decision-making in lethal strikes, and the risk
of autonomous systems being misused or malfunctioning. The future of autonomous AI
in warfare brings both excitement for technological advancements and concern over
ethical and moral implications of such technologies.

13
Q

What is reinforcement learning and how can it impact the performance of an
autonomous robot?

A

Reinforcement learning is a subset of machine learning in which an agent learns
through trial and error, receiving rewards for desirable actions and penalties for
undesirable ones. For an autonomous robot, it can improve performance over time by
letting the robot adapt its behaviour from experience in complex and dynamic
environments. It is also related to the broader emergence of self-training neural
networks and Generative AI, including Large Language Models (LLMs).
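The trial-and-error idea can be sketched with tabular Q-learning, one of the simplest reinforcement-learning algorithms. This is a generic, hypothetical illustration (not an example from the course materials): an agent on a five-state corridor learns, purely from reward feedback, that moving right reaches the goal.

```python
import random

# Minimal tabular Q-learning sketch (hypothetical example): an agent on a
# 5-state corridor learns by trial and error to walk right toward a goal.
N_STATES = 5          # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [-1, 1]     # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic corridor dynamics: walls at both ends, reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        left, right = Q[(s, -1)], Q[(s, 1)]
        if random.random() < EPS or left == right:
            a = random.choice(ACTIONS)        # explore, or break ties randomly
        else:
            a = 1 if right > left else -1     # exploit current value estimates
        s2, r, done = step(s, a)
        # Q-learning update: nudge estimate toward reward + discounted future value
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should be "move right" in every non-terminal state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

In a real robot the table is replaced by a neural network that estimates the values, but the principle is the same: behaviour improves over time from reward feedback rather than explicit programming.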

14
Q

What are deep fakes, are they likely to be used in cyberwarfare, and what is the
likely impact on Canadian society?

A

Deepfakes, which utilize deep learning involving multiple layers of neural networks,
have significant implications. They can undermine trust in democratic societies and
support sophisticated cons, potentially being used in cyberwarfare. For instance, a fraud
case involving a deepfake voice mimicking a CEO caused substantial financial loss. The
impact on Canadian society would likely involve challenges to media credibility and trust
in public communication, possibly leading to political and social unrest.

15
Q

Neural networks are known for being good at pattern recognition (a major
drawback of Classical AI was its poor performance in this area as it is hard to
formalize this task). How are the capabilities of neural networks changing, and
how can this affect our lives?

A

The capabilities of neural networks are expanding, with implications such as solving
arithmetic problems, language translation, understanding others’ perspectives (theory of
mind), and generating synthetic data when training data runs out. This “double
exponential” growth in AI capabilities could dramatically impact various aspects of life,
from enhanced personalized services to potential privacy concerns and shifts in
employment landscapes.

16
Q

Provide your own examples (one example each) of the weaponization of a deep
fake by a government actor and a rogue state.

A

● Government Actor: A hypothetical example could be a government releasing a
deepfake video of a foreign leader making inflammatory statements, aiming to
manipulate public opinion or international relations.
● Rogue State: A rogue state might use deepfakes to spread disinformation about
political dissidents, falsely depicting them engaging in criminal or unethical
behavior to discredit them.

17
Q

How can we spot deep fakes, and are there problems related to this that could
affect democracy?

A

Spotting deepfakes involves identifying inconsistencies like unnatural blinking or details
around hair and eyes. AI-driven deepfake detection technologies are being developed,
but there remains a concern about whether these technologies can be trusted. The
prevalence of deepfakes could significantly impact democracy by eroding trust in media
and public figures, leading to misinformation and manipulation of public opinion.

18
Q

If a human uses Midjourney or other AI software to produce something that looks
like art, who is the artist? Midjourney? The person? The programmer of
Midjourney? The artists Midjourney took its data from (the network is trained on
art)? Present a clear claim and support it with persuasive arguments.

A

The debate on who is the artist in AI-generated art revolves around intent and the
creative process. AI-generated art is novel, surprising, and eccentric, often classified as
conceptual art. The human who chooses the AI medium, training samples, and engages
in post-curation could be considered the artist, as they set the framework for creation.

19
Q

What are the new ethical responsibilities that are being created with AI (AI
dilemma video)?

A

The emergence of LLMs and their rapidly expanding capabilities creates new ethical
responsibilities in areas like privacy, the accuracy of information, and the
potential manipulation of public opinion: new technological powers bring
responsibilities that existing norms and laws have not yet caught up with.

20
Q

What are the two contacts with AI and what effect is this having on humans? Is
this primarily positive or negative, and why?

A

In the AI Dilemma framing, the first contact with AI was curation AI (social media
recommendation algorithms) and the second contact is creation AI (generative models
and LLMs). Knowledge from fields like computer vision, speech recognition, and
robotics has converged into LLMs, treating all inputs as language. This convergence
could have profound effects on human interaction, privacy, and the nature of work;
whether it is primarily positive or negative depends on the context and governance
of AI technologies.

21
Q

What is the double exponential issue the authors of AI Dilemma discuss?

A

The “double exponential” issue refers to the phenomenon where AI not only learns from
existing data but also creates synthetic data to further its learning, leading to a rapidly
accelerating pace of AI development. This could lead to unforeseen advancements and
challenges in managing AI’s capabilities and impacts.

22
Q

What is the advantage of creating synthetic data, and how can synthetic data be
helpful at all (not being actual data)? Relate to the forms of self-learning we
discussed.

A

Synthetic data, created by AI, can be used to train AI systems without the limitations of
real-world data availability. This can be particularly useful in areas where data is scarce
or sensitive. The ability to generate and learn from synthetic data enables AI systems to
improve themselves, a form of self-learning discussed in the context of AI’s rapidly
advancing capabilities.
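The idea can be illustrated with a toy sketch (a hypothetical example with invented values): fit a simple generative model to a scarce real dataset, then sample as many synthetic examples as needed from it for further training.

```python
import random
import statistics

# Hypothetical illustration: only five "real" one-dimensional measurements exist,
# so we fit a simple generative model to them and sample synthetic data instead.
random.seed(42)
real_data = [4.8, 5.1, 5.0, 4.9, 5.2]

# Step 1: fit a generative model (here, just a Gaussian's mean and std. deviation)
mu = statistics.mean(real_data)
sigma = statistics.stdev(real_data)

# Step 2: sample as much synthetic data as we like from the fitted model
synthetic_data = [random.gauss(mu, sigma) for _ in range(1000)]

# Step 3: a downstream system can now train on the augmented dataset; the
# synthetic samples mirror the statistics of the scarce real data
augmented = real_data + synthetic_data
```

The synthetic points are not "actual data", but because they follow the same distribution as the real samples, a model trained on them can still improve; this is the self-learning loop in which AI generates its own further training material.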

23
Q

Consider the performance of different versions of ChatGPT on standardized tests
(e.g. LSATs, GRE). What social impact is this likely to have in the near future?

A

Successive versions of ChatGPT have shown large jumps in performance on
standardized tests such as the LSAT and GRE. Such performance could have
significant social impacts in the near future, challenging traditional educational
and testing systems and raising questions about the role of AI in learning and
assessment.

24
Q

Yuval Harari (author: Sapiens) said AI is equivalent to the nukes of the virtual and
symbolic world. In what sense is this true or possibly false?

A

Yuval Harari suggests that AI is to the virtual and symbolic world what nuclear weapons
are to the physical world. This analogy highlights the transformative and potentially
disruptive power of AI. However, the document does not elaborate on this comparison in
detail. The comparison could be interpreted in various ways, reflecting both the
enormous potential and the risks associated with AI.