Security and Technology Flashcards

1
Q

what is an attractiveness factor? by what is it influenced?

A

the degree of interest in a target

influenced by: a) goal of opponent, b) critical infrastructure

2
Q

what is the difference between technique and technology?

A

technique - man-made, tangible, concrete solution

technology - science of technique; expanding our human capabilities (time component)

3
Q

how has security technology changed over time?

A

before: for survival (military tech for national security)
after: for survival + facilitate our lives + solve problems… (much broader)

4
Q

what are 4 types of security technology?

A
  1. preventive
  2. detective
  3. responsive
  4. chronological
5
Q

what are TAs' goals?

A
  1. identify possible side effects/threats of new techs (e.g. privacy, job security, ethical, security…)
  2. anticipate and prevent unacceptable scenarios
  3. develop strategies to mitigate consequences when they happen
  4. provide responsible design and deployment suggestions
6
Q

what are Responsible Innovation’s 2 aspects?

A
  1. substantive: concept to amplify set of moral obligations
  2. process: who is responsible in a multi-actor system

7
Q

what does RI consist of according to the EU definition? (the obligations)

A

during design and development of new tech:

  1. obtain knowledge on (a) consequences, (b) range of options
  2. evaluate outcomes and options in terms of moral values
8
Q

what are the 4 options of RI’s state?

A
  1. business as usual
  2. improved business as usual (specific funding for RRI)
  3. improved coordination with member states w/o legally binding initiative
  4. improved coordination with member states with legally binding initiative (top-down)
9
Q

which of the 4 options is the best RI state, and why?

A
  1. improved coordination with member states w/o legally binding initiative
    a. efficient research funding (many actors + funding)
    b. flexible
    c. harmonize approaches
10
Q

is technology value neutral?

A

no.

  • tech inherits society’s values and shapes our lives
  • Bethlehem church ‘door of humility’
  • NY low bridges
  • AI guest lecture
11
Q

is the need for RI (need to balance conflicting ethical values) always bad?

A

no, it can lead to commitment and creativity to find solutions

12
Q

what do TAs aim to do?

A
  1. aim to map as many effects as possible (1st, 2nd, 3rd order effects)
  2. reflect on options/measures
  3. early warning system
  4. support political decision-making
  5. map uncertainty and ambiguity
  6. suggestions
13
Q

what is the difference between a 1st, 2nd and 3rd order effect?

A

1st: planned, foreseen, desired
2nd: unplanned, (partly) unforeseen, sometimes desired
3rd: difficult to foresee

14
Q

what is (un)intended use of technologies? what are 2 examples?

A

tech being used for other purposes than designed for

e.g. 3d printers for weapons; speed limits as games

15
Q

what is (un)foreseen use of technologies?

A

unforeseen effects/impact of tech

e.g. not thinking computers would become so widely used

16
Q

what is the difference between hard and soft impact?

A

hard impact: more visible; harmful (e.g. Chernobyl nuclear accident)
soft impact: less visible; harmful (e.g. privacy breaches, exclusion from society)

17
Q

what is an uncertainty risk problem?

A

unknown likelihood of occurrence (due to lack of knowledge)

18
Q

what is an ambiguous risk problem?

A

effects’ desirability up for debate

19
Q

what is the constructive ta (cta) method?

A
  • addressing the gap between scientists (promotion) and scholars/stakeholders (control)
  • involve more actors and aspects
20
Q

what are cta’s goals?

A
  1. learn about social consequences
  2. reflexivity (awareness of complexity of multiple actors)
  3. anticipate possible consequences
  4. broaden tech dev by a co-creation process
21
Q

how does one go about cta?

A
  • method of insertion: inserting scholars/stakeholders in the scientific world
    a. explore world of developing tech (labs, conferences, debates…) (identify endogenous features)
    b. formulate diagnoses on current discussions
    c. orchestrate workshops with socio-technical scenarios (with many stakeholders)
  • soft intervention
22
Q

what is the midstream modulation (mm) ta method?

A

rather than public upstream or downstream engagement, engage in the midstream
more feedback loops

23
Q

what are the issues with solely public upstream engagement for TAs?

A
  1. unclear how to accommodate
  2. linear (inflexible)
  3. too early
    …?
24
Q

what are the issues with solely downstream engagement for TAs?

A
  1. lack of public trust
  2. more perspectives -> more contexts taken into account -> more desirable outcomes (more robust)
  3. too late
25
Q

what is the network approach for moral evaluation (NAME) ta method?

A

reaching consensus on which ethical issues to address through 1. reflexivity and 2. inclusiveness
- R&D = network of bounded rational actors cooperating (vs. hierarchy and clear task division)

26
Q

how does one go about the NAME TA method?

A
  1. identify ethical issues in the R&D
  2. evaluate by deliberation and seeking coherence on which issues to address
  3. distribute responsibilities in a way acceptable to all and still effective
27
Q

what is the political TA (pta) method?

A

involving policy-makers in the tech development

through debates between science and politics (boundary work)

28
Q

what is required for the pTA method?

A
  1. trustworthy scientific identity
  2. trust connections
  3. seize cooperation opportunities
29
Q

what is the ethical TA (eTA) method?

A
  • ethical implications of new tech
  • continuous dialogue with tech developers early on
  • focus on 9 values
30
Q

what are the eTA values?

A
  1. dissemination and use of info
  2. control, influence and power
  3. impact on social contact patterns
  4. privacy
  5. sustainability
  6. human reproduction
  7. gender, minorities and justice
  8. international relations
  9. impact on human values
31
Q

how can you quantify risk?

A

likelihood x consequence

likelihood = exposure/vulnerability (so need info on the impact and occurrence)
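
a minimal sketch of the formula, assuming likelihood is estimated by multiplying exposure and vulnerability on 0-to-1 scales (the card’s shorthand leaves the exact combination open; the function name and numbers are illustrative):

```python
# Illustrative sketch of risk = likelihood x consequence.
# Assumption: likelihood = exposure * vulnerability is only one plausible
# reading of the card's "exposure/vulnerability" shorthand.

def quantify_risk(exposure: float, vulnerability: float, consequence: float) -> float:
    """Return a risk score from exposure, vulnerability and consequence."""
    likelihood = exposure * vulnerability  # assumed combination
    return likelihood * consequence

# example: frequent exposure (0.8), moderate vulnerability (0.5),
# severe consequence (100) -> risk score 40.0
print(quantify_risk(0.8, 0.5, 100))
```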

32
Q

what is ignorance of effects?

A
  • little knowledge on impact
  • any amount of knowledge on the occurrences

33
Q

what is uncertainty of effects?

A
  • lots of knowledge about the impact
  • little knowledge about occurrences

34
Q

what are reasons for a lack of knowledge on tech effects?

A

a. lack of data
b. lack of adequate testing methods
c. inadequate models
d. long-term effects
e. interaction effects

35
Q

how can one approach making decisions under risk?

A

risk analysis (limit the negative consequences)

a. risk matrix (likelihood x consequences severity)
b. risk management cycle
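
a minimal sketch of a risk matrix lookup; the band labels and resulting ratings are illustrative assumptions, not from the course:

```python
# Illustrative 3x3 risk matrix: likelihood band x consequence-severity band.

RISK_MATRIX = {
    ("low", "minor"): "low",     ("low", "moderate"): "low",       ("low", "severe"): "medium",
    ("medium", "minor"): "low",  ("medium", "moderate"): "medium", ("medium", "severe"): "high",
    ("high", "minor"): "medium", ("high", "moderate"): "high",     ("high", "severe"): "high",
}

def rate_risk(likelihood: str, severity: str) -> str:
    """Look up the risk level for a (likelihood, severity) pair."""
    return RISK_MATRIX[(likelihood, severity)]

print(rate_risk("medium", "severe"))  # -> high
```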

36
Q

how can one make decisions under uncertainty?

A

precautionary principle (no probability needed)
or
security/safety-by-design

37
Q

what is the Collingridge/Control Dilemma?

A

each moment of a tech’s timeline has limitations in being able to steer the technology
early in development phase:
a. tech = malleable
b. but lack of info makes it difficult to predict how best to steer tech
later phase:
a. less malleable tech (embedded in society)
b. but risks are clearer so you know where you would need to steer

38
Q

what is the precautionary principle?

A

taking anticipatory action even w/o knowledge of the severity and likelihood of the impact
“if there is a threat, which is uncertain, then some kind of action is mandatory”

39
Q

what are critiques of the precautionary principle?

A

“(1) if there is a threat, which is uncertain, then (2) some kind of action is (3) mandatory”

  1. vague (when do you invoke - which phase?)
  2. contradiction (uncertainty impedes knowing suitable measure)
  3. hinders innovation (can’t deal with ignorance)
  4. from sunscreen case: risk assessment doesn’t take into account long-term accumulation, interaction effects…
40
Q

what principle was used for dealing with the uncertainty of titanium dioxide sunscreen?

A
  • NL advice: use precautionary principle
  • but limits: risk assessment doesn’t take into account long-term accumulation, interaction effects, lab isn’t real life…
  • so used social experiment (safety/security-by-design)
41
Q

what are the conditions for safety/security by design?

A
  1. absence of alternatives
  2. controllability of experiment
  3. informed consent (knowledge of risks, voluntary, can withdraw)
  4. on-going evaluation
42
Q

what is the problem of many hands?

A
  • a collective is responsible for an outcome (vs. one individual)
  • leads to responsibility gaps

43
Q

what are the different classifications of responsible group actors?

A
  1. engineers
  2. users

44
Q

what are the conditions for engineers to be held responsible?

A
  1. freedom (to act - no pressure)
  2. knowledge (of possible negative outcomes)
  3. causal connection (between act and outcome - doing nothing is also an act)
  4. transgression of norm (violating a societal norm)
45
Q

what is the difference between backwards and forwards looking responsibility?

A
  • responsibility of engineers
  • backwards-looking: after things went wrong
  • forward-looking: before something happened
46
Q

what must be true to hold users responsible?

A

use for other purposes than the designers intended (unintended use)

47
Q

how can technology impact the responsibility of users?

A
  • tech can enable and inhibit some from carrying out responsibility
    a. take over responsibility (enable) (e.g. tesla, good design in control room)
    b. hinder from taking over responsibility (bad design in control room)
48
Q

what are issues in governance of technology?

A
  1. too many stakeholders -> conflicting ideas (tech developers, users, non-users but affected, gov…)
  2. lack of technical expertise of policy-makers (complex issues, inadequate knowledge) -> rely on advice
    a. experts need independence
    b. experts need credibility
  3. international context (laws) e.g. GDPR -> too much regulation -> loses developers
49
Q

what are different tools to govern new technologies?

A
  1. national and international law
    - e.g. GDPR
    - limits: not all stakeholders agree with legislation (legislation doesn’t equal acceptance)
  2. stakeholder involvement
    - direct and indirect (unaware, affected non-users)
  3. raising awareness on possible implications
    - to all stakeholders (incl. unaware users)
    - how to balance awareness and fear
  4. funding for R&D
  5. national and international institutes
    - advise, provide info, provide oversight, fund…
50
Q

What are the Rathenau Institute’s tasks?

A
  1. support forming both political and public opinion on social issues
  2. inform both parliament and public
  3. stimulate public debate
  4. build bridges between science, tech and society
51
Q

what are the Rathenau Institute’s modes of operation?

A
  • many methods for large-scale TA
    1. e.g. lit review, surveys, interviews, focus groups…
    2. communicate results to public
    3. connect TA to politics (put on parliamentary agenda, make research comprehensible…)
52
Q

how can technology (e.g. AI) influence morality?

A
  1. by co-shaping moral perceptions
  2. by co-shaping moral actions
  3. by co-shaping and changing values
53
Q

how can technology (e.g. AI) influence morality by co-shaping moral perceptions?

A
  • e.g. what it means to be safe in the city
  • coloured lights regulate aggressive behaviour or dispatch police
  • issues: incorrect identification, need human intervention too
  • self-fulfilling prophecy (dispatch often -> determined to find crime)
  • safety city maps influencing choices
54
Q

how can technology (e.g. AI) influence morality by co-shaping moral actions?

A
  • e.g. fair treatment
  • predictive policing: algorithm for directing efforts towards high-risk persons and locations
  • Sensing Project: analyse car passengers for pickpockets/shoplifters of Eastern European origin (mobile banditry) (Eastern European indicators… vs only crime indicators)
  • software predicting chances of recommitting crimes to decide sentence length
  • issue: algorithm discriminatory
55
Q

how can technology (e.g. AI) influence morality by co-shaping and changing values?

A
  • e.g. interdependence in the city
  • covid-19 social norms (what we expect from others): masks, solidarity, honesty in tracing apps…

56
Q

how does trust affect how we should conduct decision-making with AI?

A
  1. relational vulnerability
  2. informed choices beyond AI indicators
  3. critical engagement with info from AI
57
Q

what did the case study of SyRI illustrate about risk assessments using AI?

A
  • AI used for advance risk prediction of citizens (2014)
  • result: childcare benefit scandal -> flagged many people of colour as committing fraud and made to repay -> bankruptcy
  • problems:
    a. discriminatory criteria by humans (immigration background, double nationality)
    b. understood honest mistakes as fraud (missing signature)
  • learned: need algorithmic transparency, safeguards, accountability
58
Q

what are responsible design solutions for AI?

A
  1. diversity and inclusion
  2. checklist to check for biases
  3. know AI isn’t neutral; mediates our perceptions
  4. get more info (on how it’s working in practice and before)
59
Q

what are the different alarm situations?

A

each pair = (perception: attack detected?, reality: attack?)

  1. missed alarm (f,t)
  2. alarm (t,t)
  3. at rest (f,f)
  4. false alarm (t,f) -> unwanted alarms, unidentified alarms
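
a minimal sketch of the four situations as (perception, reality) pairs; the function name is illustrative:

```python
# perception = did the system raise an alarm? reality = was there an attack?

def classify_alarm(alarm_raised: bool, attack_happened: bool) -> str:
    """Map a (perception, reality) pair to one of the four alarm situations."""
    if alarm_raised and attack_happened:
        return "alarm"          # (t,t) correct detection
    if alarm_raised:
        return "false alarm"    # (t,f) unwanted or unidentified alarm
    if attack_happened:
        return "missed alarm"   # (f,t) attack went undetected
    return "at rest"            # (f,f) nothing happened, no alarm

print(classify_alarm(False, True))  # -> missed alarm
```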
60
Q

what is the difference between unwanted alarms and unidentified alarms?

A

unwanted alarms: mistake, but for an understandable reason (e.g. house alarm detects a dog outside)
unidentified alarms: mistake with no identifiable reason (e.g. house alarm goes off though nothing was detected)

61
Q

what are the 2 alarm failures?

A
  1. missed alarm (f,t)
  2. false alarm (t,f) -> unwanted alarms, unidentified alarms

62
Q

how does one minimize alarm failures?

A
  • move system thresholds
  • decide which you want to avoid most
  • difficult to avoid both, but solution to minimize both: multiple thresholds (different sensors and technologies) -> multi-layer detection
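
a minimal sketch of such a multi-layer rule, assuming a simple “at least k of n sensors” vote (the 2-of-3 threshold is an assumption; the order of detections, which the next card mentions, is ignored here):

```python
# Combine several independent detection layers; the alarm only fires when
# at least k of them trigger. Requiring agreement suppresses single-sensor
# false alarms, while mixing sensor types keeps missed alarms low.

def multi_layer_alarm(sensor_triggers: dict[str, bool], k: int = 2) -> bool:
    """Fire the alarm when at least k detection layers have triggered."""
    return sum(sensor_triggers.values()) >= k

readings = {"laser": True, "video": True, "mic": False}
print(multi_layer_alarm(readings))  # -> True (2 of 3 layers agree)
```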
63
Q

what is multi-layer detection?

A
  • depending on combo and order of detection -> alarm goes off
  • lasers, videos, mics…
64
Q

how have alarms changed on the Dutch national scale?

A
  • increase in verification in 2007 before police dispatch
65
Q

what are different verification tools for alarms?

A
  1. cameras (live and secure)
  2. local security
  3. separate secondary alarm
  4. mic
66
Q

what is the link between terrorism and tech?

A
  • tech can be used for prevention, mitigation of terrorism
  • tech can also be used by terrorists (unintended uses + intended uses + dual-use)

67
Q

what kind of relationship do the resources required to use a tech and the tech’s impact have?

A

positive linear

68
Q

what kind of relationship do the complexity of a weapon and the probability of success have?

A

negative linear

69
Q

what influences the probability of the use of a weapon by terrorists?

A
  1. difficulty of the technology’s acquisition (law, money, organisation…)
  2. difficulty of acquiring explicit knowledge about the tech (internet, manuals, training…)
  3. difficulty of acquiring the tacit knowledge about the tech (trial and error, training, military…)
70
Q

what are the different levels on the web?

A
  1. surface web
    - in search engines
    - no special software such as The Onion Router (Tor) needed
    - .com, .net…
  2. deep web
    - protected by authentication, passwords, firewalls
    - e.g. emails, bank info…
  3. dark web
    - sites within deep web
    - need Tor browser
    - legal and illegal purposes
71
Q

what are some advantages of cyberspace?

A
  1. simple to access
  2. poor control/monitoring
  3. anonymity
  4. interactive
  5. cheap
  6. instant reach to masses
72
Q

what are advantages of using cyberspace for cyberattacks?

A
  1. stealthy pre-attack phase
  2. organization = low risk
  3. can do extensive damage
73
Q

what is doxing?

A

revealing private info

74
Q

what is narrowcasting?

A

transmitting info to a specific, restricted audience/category

75
Q

what are some purposes of cyberspace for cyberattacks?

A
  1. access to materials (weapons, docs)
  2. training (manuals, advice…)
  3. secure communication
  4. doxing
  5. recruit
  6. fundraise
  7. narrowcasting
  8. get info on targets (maps, images, locations)
76
Q

what is the issue with Value-Sensitive Design (VSD) and Design for Values?

A
  • doesn’t take into account how some technologies can be subject to changes in values
77
Q

what is Value-Sensitive Design (VSD) and Design for Values?

A

framework for ethical intervention in design of socio-technical systems
VSD:
- empirical, conceptual, technological investigations (cycle)
DFV:
- design requirements, norms, values (pyramid)

78
Q

what are different types of value changes and an example for each?

A
  1. changes in value’s conceptualization (e.g. privacy - many types)
  2. emergence of new values (e.g. sustainability)
  3. changes in values relevant for tech design (e.g. traffic safety in phones)
  4. changes in priority and relative importance of values (e.g. pedestrian safety in cars)
  5. changes in values’ specification (translation into norms and design requirements) (e.g. battery cages for chickens)
79
Q

what are 3 ways that tech design can be helped in case of value changes and what is the ultimate goal?

A
  -> ultimate goal: resilience!
    1. adaptability: possibility to change composition or configuration (for same or new function) (with modularization)
    2. flexibility: different possibilities for using the design (double-edged)
    3. robustness: ability to perform while respecting values despite changes in relevancy, conceptualization and prioritization of values
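
a toy sketch of adaptability via modularization, assuming a logging component that can be swapped when privacy rises in priority; all names are hypothetical:

```python
# The logger is a swappable module: when the value change hits, only this
# module is replaced, not the whole system.

from typing import Protocol

class Logger(Protocol):
    def log(self, event: str) -> None: ...

class PlainLogger:
    def log(self, event: str) -> None:
        print(event)

class AnonymizingLogger:
    def log(self, event: str) -> None:
        # drop the user identifier before the first colon
        print("<redacted>:" + event.split(":", 1)[-1])

def handle_entry(logger: Logger) -> None:
    logger.log("alice: opened door 12")

handle_entry(PlainLogger())        # before the value change
handle_entry(AnonymizingLogger())  # after privacy is reprioritized
```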
80
Q

what are 3 perspectives in how responsibility should be conceptualized?

A
  1. consequentialist (forward-looking; instrumental; efficacy)
  2. merit-based (backwards-looking; fairness; conditions)
  3. rights-based (no-harm principle; duty; consent)
81
Q

According to Kudina, how should privacy and solidarity values be bridged in COVID-19 tracking apps? and why should that happen?

A
  1. institutional embedding (shared decision-making via citizen panels, preventing societal coercion; app as an additional measure)
  2. public awareness
  3. regulatory foresight
  4. transparency
  -> because tech is non-neutral
  -> creates trust in the techs
82
Q

what are some ways that Collingridge contributes to RRI (social control of technology)?

A
  1. keep technical options open
  2. increase insensitivity of tech to error
  3. escape hedging circle
  4. enhance controllability
  5. manage entrenchment and competition (to not hinder responsiveness)
  6. reduce dogmatism of experts (transparent scrutiny, monitoring, independence)
  7. minimise diseconomies of scale
83
Q

what was the Sensing Project?

A
  • how technology/AI was used in NL to make risk assessments for shoplifting and pickpocketing
  • algorithm indicators were heavily Eastern European-targeted -> because of an assumption within the police that they are the main culprits of mobile banditry (even though actually only 22%)
84
Q

does the use of WMDs by terrorists pose a real threat?

A

there’s a clear interest by them but little capability to do so

85
Q

in what 2 scenarios might WMD terrorism actually pose a potential threat?

A
  1. toxic industrial chemical or highly radiological materials released near urban area in-situ
  2. nuclear or biological weapons taken from a state arsenal (weaker state)
86
Q

what might be motives for terrorists to use WMDs?

A
  1. mass casualties
  2. psychological effect
  3. long-term area denial
  4. boost status
87
Q

what are the 2 most used CBRN attack types by terrorists?

A
  1. chemical
  2. biological

88
Q

what groups use CBRN WMDs the most?

A
  1. religiously-inspired groups
  2. lone actors or autonomous cells
  3. ethno-nationalist groups
89
Q

what are technological advances that may risk more WMD use by terrorists?

A
  1. miniaturization of manufacturing (e.g. biotechnology kits)
  2. rapid prototyping and marginal cost production
  3. spread by commercial off-the-shelf applications
  4. adversaries’ easy access to online edu and social media
90
Q

what are the 2 types of constraints identified for terrorist use of WMDs?

A
  1. practical constraints
  2. strategic constraints

91
Q

what are the different types of practical constraints identified for terrorist use of WMDs?

A

a. technical constraints (build, transport, use, store; uncertainty from no testing)
b. environmental constraints (limited sources, expensive, local conditions to hide behind fronts)
-> but increasing availability hasn’t been shown to contribute to an increase in WMD attacks

92
Q

what are the different types of strategic constraints identified for terrorist use of WMDs?

A

a. may generate condemnation by supporters
b. may provoke more gov efforts to destroy group
c. role of ideology (if they want to govern; othering - religious and ethno-nationalist)

93
Q

where do most terrorist orgs feature on constraints?

A

high practical and strategic

94
Q

what are the features of far-right CBRN terrorism?

A
  • contrary to most assumptions of CBRN terrorism
    1. lone-actors or autonomous cells
    2. white-collar education and profession (middle-aged male)
    3. far-right non-religious ideology
    4. indiscriminate attack and weapon design
    5. use of easily obtainable, low-tech-knowledge agents (ricin and cyanide)
95
Q

what CBRN terrorism assumptions does far-right CBRN terrorism challenge?

A
  1. lone-actors -> not groups, which could more easily obtain capabilities
  2. non-religious actors’ use of CBRN and indiscriminate tactics -> no rational fear of political backlash
96
Q

what are different stakeholders?

A
  1. tech developers
  2. users
  3. non-users but affected
  4. government