SB Flashcards
Social engineering
People are integral to security, and their behaviour can’t always be controlled by policies
“Entire ruse was based on one of the fundamental tactics of social engineering: gaining access to information that a company employee treats as innocuous, when it isn’t” - Mitnick, 2001
Six principles of influence
Cialdini: RCASLS
Reciprocity
Commitment and consistency
Authority
Social Proof
Liking
Scarcity
“You say you’re an author or a movie writer, and everybody opens up” - Mitnick, 2001
-> Liking/social proof pretext
Final stage of social engineering attack
Escalation and exploitation
“Burning the source … allows a victim to recognise that an attack has taken place, making it extremely difficult to exploit the same source in future attacks” - Mitnick, 2001
-> Attackers avoid burning the source so they can maintain long-term access
Mitnick paper
The Art of Deception - shows how social engineers use harmless-seeming information to exploit systems and people
Kane Gamble
Gained access to sensitive accounts by impersonating customer service representatives, using social engineering to reset account credentials.
Targeted high-ranking CIA and FBI officials by pretending to be them, including posing as the CIA director to manipulate support staff into granting access.
Cheswick paper
1992 - an account of defending AT&T's systems against the hacker "Berferd" using honeypots (fake services)
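The honeypot idea can be sketched as a minimal "jail": answer attacker commands with plausible fake output while logging everything. All names and responses below are illustrative assumptions, not Cheswick's actual setup, which emulated a far richer environment.

```python
# Minimal honeypot "jail" sketch: answer commands with fake output, log everything.
# Responses are illustrative; a real jail would emulate much more of the system.

FAKE_RESPONSES = {
    "whoami": "root",  # let the attacker believe they have root
    "cat /etc/passwd": "root:x:0:0:root:/root:/bin/sh",  # fake, harmless content
}

class HoneypotJail:
    def __init__(self):
        self.log = []  # transcript of every attacker command

    def handle(self, command: str) -> str:
        self.log.append(command)  # observe the attacker's techniques
        # Return plausible fake output; unknown commands "fail" realistically
        return FAKE_RESPONSES.get(command, f"sh: {command.split()[0]}: not found")

jail = HoneypotJail()
print(jail.handle("whoami"))    # attacker sees "root"
print(jail.handle("rm -rf /"))  # destructive command does nothing real
print(jail.log)                 # defender keeps the full transcript
```

The defender gains exactly what the Cheswick quote describes: the attacker wastes time on false information while every move is recorded.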
Cyber kill chain: countermeasures + limitations
Disrupts cyberattacks with steps:
Reconnaissance = gathering data
Weaponisation = creating exploit/attack payload
Delivery = transmitting payload
Exploitation = using payload to exploit weakness
Installation = setting up backdoor for long-term
Command and control = connecting system to attacker infrastructure
Actions on objectives = achieving attack’s goal
Countermeasures = detect, deny, disrupt, degrade, deceive
Limitations - focuses on technical attacks and assumes attacker has clear objectives
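The stages and countermeasures above can be sketched as a courses-of-action lookup: for each kill-chain phase, which defensive actions (detect, deny, disrupt, degrade, deceive) might apply. The specific pairings below are illustrative examples, not an exhaustive or authoritative matrix.

```python
# Illustrative courses-of-action sketch for the cyber kill chain.
# The phase/countermeasure pairings are examples only.

KILL_CHAIN = [
    "reconnaissance", "weaponisation", "delivery", "exploitation",
    "installation", "command and control", "actions on objectives",
]

COUNTERMEASURES = {
    "reconnaissance": {"detect": "web analytics", "deny": "firewall ACLs"},
    "delivery": {"detect": "IDS alerts", "disrupt": "spam filtering"},
    "exploitation": {"deny": "patching", "degrade": "sandboxing"},
    "installation": {"detect": "endpoint monitoring", "deceive": "honeypot"},
    "command and control": {"disrupt": "DNS sinkholing"},
}

def defences_for(phase: str) -> dict:
    """Look up example countermeasures for a kill-chain phase."""
    assert phase in KILL_CHAIN, f"unknown phase: {phase}"
    return COUNTERMEASURES.get(phase, {})

print(defences_for("delivery"))
```

The value of the model is this mapping: breaking the chain at any single phase (e.g. disrupting delivery) defeats the whole attack.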
“Attackers have remarkable persistence, delaying them gives defenders time to identify their methods and plan responses”
Cheswick, 1992
-> Deception
IDS
Monitors a system for unusual behaviour via misuse detection (known attack signatures) or anomaly detection (deviations from normal behaviour)
“We led him on to study his techniques, feeding him false information to waste his time and protect real systems” Cheswick, 1992
-> Learning patterns, jail environment
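The anomaly-detection half of an IDS can be sketched as a simple statistical test: learn a baseline of normal behaviour, then flag observations that deviate too far from it. The data, threshold, and z-score test here are illustrative assumptions; real anomaly-based IDSs use far richer models.

```python
# Minimal anomaly-based IDS sketch: flag behaviour far from a learned baseline.
# A misuse-based IDS would instead match known attack signatures.
from statistics import mean, stdev

def is_anomalous(observation: float, baseline: list, threshold: float = 3.0) -> bool:
    """Flag observation if it lies more than `threshold` standard
    deviations from the baseline mean (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Baseline: failed-login counts per hour during normal operation (made-up data)
normal_hours = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
print(is_anomalous(3, normal_hours))    # within normal range
print(is_anomalous(250, normal_hours))  # brute-force-style spike is flagged
```

Misuse detection catches known attacks precisely; anomaly detection like this can catch novel attacks but produces false positives when legitimate behaviour shifts.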
Hutchings and Pastrana
2019, discuss eWhoring, which defrauds individuals online through fake personas
Silk Road
Was a darknet marketplace facilitating the anonymous trade of illegal goods, such as drugs and weapons, using cryptocurrency
Types of organised cybercrime
Swarms - loosely coordinated groups with shared goals like Anonymous
Hubs - centralised groups with core members and supporting roles
Traditional organised crime groups which have extended online
“eWhorers capitalise on the emotional aspects of their victims, creating a sense of attachment or trust” - Hutchings and Pastrana, 2019
-> Example of a traditional method of crime that has moved online
Countermeasures and consequence of cybercrime
Human-focused interventions like warning messages or mass media campaigns
However, this could just displace crime to new targets or methods
“Awareness and education are essential in equipping individuals with the tools to recognise and avoid eWhoring schemes” - Hutchings and Pastrana, 2019
-> Education for cybercrime
Cybercrime in recent times
Cybercrime has evolved significantly with more organised and professionalised methods
“Platforms should prioritise user safety by implementing stronger verification processes and reporting mechanisms” - Hutchings and Pastrana, 2019
-> Most cybercrime is profit-driven, though some is ideological; this evolution of cybercrime needs to be matched by platforms
“Tragedy of the Commons”
When individual actors prioritise short-term private gains over collective security, depleting the shared "commons" of overall trust and security
“Buyers generally have no idea whether what they are buying is secure software, so a security lemon market is born—cheaper, less-secure products drive out more secure products.” (Rao et al., 2019)
-> Results of short-term gains and economic pressures
Rao et al.
2019, explain the economics of software security (including open source): vulnerabilities persist due to insufficient incentives to invest in security
Gordon-Loeb
Cost benefit model that assesses optimal level of security investment
“Participants like end-users, ISPs, and software developers all make choices that increase their individual reward while decreasing collective trust and security.” (Rao et al., 2019)
-> Reason for security breaches
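The Gordon-Loeb model can be illustrated numerically: choose the investment z that maximises expected net benefit, i.e. the reduction in expected loss minus the cost of z. The breach-probability function below is the Gordon-Loeb "class I" form S(z, v) = v / (az + 1)^b; all parameter values are made-up assumptions for illustration.

```python
# Numeric sketch of the Gordon-Loeb cost-benefit model (parameters are assumed).

L = 1_000_000       # loss if a breach occurs
v = 0.5             # breach probability with zero security investment
a, b = 0.0001, 1.5  # productivity of security investment (illustrative)

def breach_prob(z: float) -> float:
    """Remaining breach probability after investing z (class-I form)."""
    return v / (a * z + 1) ** b

def net_benefit(z: float) -> float:
    """Expected reduction in loss minus the cost of the investment."""
    return (v - breach_prob(z)) * L - z

# Brute-force search over candidate investment levels
z_star = max(range(0, 500_001, 1_000), key=net_benefit)
print(z_star, net_benefit(z_star))

# Gordon-Loeb's headline result: the optimal investment never exceeds
# 1/e (~37%) of the expected loss v*L
assert z_star <= v * L / 2.718
```

The point of the model: past the optimum, each extra unit of spending buys less risk reduction than it costs, so "spend everything on security" is never rational.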
Spam economy
Relies on specialised roles like harvesters and online vendors with each role benefiting from an anonymous value chain
Payment processing bottlenecks allowed law enforcement to significantly disrupt spam operations
Interventions of law enforcement
Bottlenecks like payments processors can disrupt entire cybercriminal networks so should focus on exploiting weak points in value chain
“A defender that invests 10 million hours of work might recover and patch 10,000 bugs, while an attacker investing just 1,000 hours can find one vulnerability—gaps in defenses allow entry.” (Rao et al., 2019)
-> Attackers need only find one vulnerability while defenders must patch them all, giving attackers an efficiency advantage
Human in the Loop
Humans play critical role in operation of security systems, often required to make security decisions but this can fail due to human error
“Despite a worldwide recession, the computer security industry grew 18.6% in 2008, totaling over $13 billion.” (Walsh, 2010)
-> The scale of the security industry reflects the economic impact of cyber attacks
How to improve practices
Better communication and user-centered design are necessary rather than briefing models which users are less likely to understand
“security education efforts should focus not only on recommending what actions to take, but also emphasise why those actions are necessary.” (Walsh, 2010)
-> Would lead to better practices
Over-reliance on technology
Can lead to ignoring critical updates which then leaves entry points for potential attack
“users often find ways to delegate the responsibility for security to some external entity … technological (like a firewall), social (another person or IT staff), or institutional (like a bank)” (Walsh, 2010)
-> Could be because they do not believe that they possess the technical knowledge to manage the threat
Mirai botnet
Demonstrated interplay between poor user practices and systemic design flaws which allowed attackers to weaponise IoT devices for large-scale disruptions
Walsh paper
2010, discusses how home computer users conceptualise and make decisions regarding security threats
Sasse and Flechais paper
2005, importance of designing security systems that users can effectively engage with
Blame culture
Blame culture focuses on individual failures; a systemic approach instead views errors as consequences of poor design and mitigates failures by addressing root causes
Economic constraints for organisations
Many organisations adopt ad hoc fixes e.g. password reminder systems which may introduce vulnerabilities
“If secure systems require users to behave in a manner that conflicts with their norms, values, or self-image, most users will not comply” (Sasse and Flechais, 2005)
-> Security should align with business goals, requiring buy-in from all stakeholders
Chernobyl disaster
Resulted from operators' active failures, exacerbated by latent design flaws in the reactor
Craggs and Rashid
2017, explore the concept of security ergonomics, emphasising the need to integrate human factors into system design
Fundamental Attribution Error
Blaming individuals for errors without considering systemic or contextual factors
“The more effort placed into better smarter technology the more likely it is that the human is seen as an error” (Craggs and Rashid, 2017)
-> excessive reliance on technology
Just culture
Emphasises learning and accountability rather than blame. Also encourages open reporting of security incidents
“Being able to recognise and learn from those errors is fundamental in moving socio-technical systems, such as IoT, forward.” (Craggs and Rashid, 2017)
-> Errors should be seen as opportunity to learn rather than place blame
Proactive design
Anticipate human errors and latent failures by learning from past (e.g. historical data of phishing attacks)
Tay
Microsoft chatbot AI, Tay, influenced by biased training data, quickly generated racist and sexist outputs