principles & design patterns for building secure systems (week 4) Flashcards
how do unix and windows do in terms of least privilege?
unix: Pretty lousy. Every program gets all the privileges of the user that invokes it. For instance, if I run an editor to edit a single file, the editor receives all the privileges of my user account, including the power to read, modify, or delete all my files. That’s much more than it needs; strictly speaking, the editor probably only needs access to the file being edited to get the job done. (A sketch of how a program could drop unneeded privileges follows below.)
windows: Just as lousy. Arguably worse, because many users run under an Administrator account, and many Windows programs require that you be Administrator to run them. In this case, every program receives total power over the whole computer. Folks on the Microsoft security team have recognized the risks inherent in this, and have taken many steps to warn people away from running with Administrator privileges, so things have gotten better in this respect.
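To illustrate the unix point above, here is a minimal sketch (not from the notes) of how a long-running program could voluntarily drop its privileges once it no longer needs them; the "nobody" account and the order of the calls are assumptions about a typical unix setup.

```python
import os
import pwd

def drop_privileges(username: str = "nobody") -> None:
    """Give up root privileges by switching to an unprivileged account."""
    if os.getuid() != 0:
        return  # already unprivileged; nothing to drop

    entry = pwd.getpwnam(username)   # look up the target uid/gid
    os.setgroups([])                 # drop supplementary groups first
    os.setgid(entry.pw_gid)          # then the group id
    os.setuid(entry.pw_uid)          # finally the user id (irreversible)

# e.g. a daemon would bind its privileged port first, then call drop_privileges()
```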
what does it mean to use fail-safe defaults?
start by denying all access, then permit only that which has been explicitly allowed.
ensure that if security mechanisms fail or crash, they will default to secure behavior.
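A minimal sketch of a default-deny check, using a hypothetical in-memory rule table: access is granted only when an explicit allow rule matches, and if the check itself fails it fails closed.

```python
# Hypothetical allow-list: (user, resource) -> set of permitted actions.
ALLOWED = {
    ("alice", "report.txt"): {"read"},
    ("bob", "report.txt"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    try:
        # Default is deny: access is granted only on an explicit match.
        return action in ALLOWED.get((user, resource), set())
    except Exception:
        # If the mechanism itself breaks, fail closed rather than open.
        return False
```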
what is separation of responsibility?
split up privilege so that no one person or program has complete power. require more than one party to approve before access is granted.
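As a rough sketch (the two-approver threshold and the function name are assumptions, not from the notes), a sensitive action could require sign-off from at least two distinct parties before it is carried out:

```python
def authorize_payment(amount: float, approvals: set[str]) -> bool:
    """Allow the payment only if two or more distinct people approved it."""
    required = 2                      # hypothetical policy threshold
    return len(approvals) >= required

# usage: authorize_payment(10_000, {"alice"})         -> False
#        authorize_payment(10_000, {"alice", "bob"})  -> True
```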
what is defense in depth?
use multiple redundant protections, so that the failure of any single mechanism does not leave the system exposed.
what is meant by psychological acceptability?
you must have your users buy into the security model; if the mechanisms are too burdensome, users will work around them.
what does it mean to rely on security through obscurity?
refers to systems that rely on the secrecy of their design, algorithms, or source code to be secure.
what are some security advantages you lose if you don’t design security in from the start?
least privilege, separation of privilege, complete mediation, defense in depth, etc.
3 principles widely accepted in the cryptographic community:
- conservative design: evaluate systems according to the worst security failure that is at all plausible, under assumptions favorable to the attacker
- Kerckhoffs’ principle: similar to “don’t rely on security through obscurity.” cryptosystems should remain secure even when the attacker knows all internal details of the system, and it should be easy to change the keys if they are leaked. it is easier to change the key than to replace every instance of the software. (see the sketch after this list)
- proactively study attacks: put effort into breaking your own systems
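To illustrate Kerckhoffs’ principle, here is a minimal sketch using python’s standard hmac module: the algorithm (HMAC-SHA256) is public, all secrecy lives in the key, and rotating a leaked key is a one-line change rather than a software replacement. The key-generation step here is illustrative.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)   # the only secret; easy to rotate if leaked

def tag(message: bytes) -> bytes:
    # The construction is public knowledge; security rests on the key alone.
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    return hmac.compare_digest(tag(message), mac)
```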
What is a question central to Design Patterns for Building Secure Systems?
How can you choose an architecture that will help reduce the likelihood of flaws in your system, or increase the likelihood that you will be able to survive such flaws?
trusted computing base (tcb)
portion of the system that must operate correctly in order for the security goals of the system to be assured
the TCB must be large enough so that nothing outside the TCB can violate security
if my security goal is that only authorized users are allowed to log into my system using SSH, what is the TCB?
- the SSH daemon, which makes the authentication and authorization decisions (if it has a bug, an attacker may be able to violate my security goal)
- OS
- CPU
(not a web browser)
TCB Design principles
- unbypassable: there must be no way to breach system security by bypassing the TCB
- tamper-resistant: the TCB should be protected from tampering (e.g. other parts of the system outside the TCB should not be able to modify the TCB’s code or state)
- verifiable: it should be possible to verify the correctness of the TCB; TCB should be as simple as possible
general rule: simple and small is good practice; the less code, the fewer chances there are to make errors
TOCTTOU Vulnerabilities
time of check to time of use
in unix this often comes up with filesystem calls, because sequences of system calls do not execute atomically and the filesystem is where most long-lived state is stored. the risk is not specific to files, though: these flaws can arise anywhere there is mutable state shared between two or more entities. for instance, multi-threaded java servlets and applications are at risk for this kind of flaw.
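A minimal sketch of the classic filesystem instance of this flaw (the path is hypothetical): the check and the use are separate system calls, so an attacker who can write the directory may swap the file in between them. The safer pattern skips the separate check and lets the open itself enforce access atomically.

```python
import os

path = "/tmp/shared/config"     # hypothetical file in a directory others can write

# Vulnerable: time-of-check ...
if os.access(path, os.R_OK):
    # ... attacker replaces `path` with a symlink right here ...
    with open(path) as f:       # ... time-of-use
        data = f.read()

# Safer: no separate check; open() and the OS permission check happen together.
try:
    with open(path) as f:
        data = f.read()
except (PermissionError, FileNotFoundError):
    data = None
```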
how can we leverage modularity in building security systems?
- modules interact with each other only through well-defined interfaces
- each module should perform a clear function
- modularity strengthens the security properties of a system by providing forms of isolation -- keeping potential problems localized and minimizing assumptions made between components
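As an illustration (the module layout and names are made up), a password-checking module can expose one narrow function while keeping its stored credentials internal, so other components assume nothing about how passwords are stored and cannot reach the hashes directly.

```python
import hashlib
import hmac

# Internal state: stored password hashes never leave this module.
_SALT = b"per-deployment-salt"      # illustrative; real systems use per-user salts
_USERS = {
    "alice": hashlib.pbkdf2_hmac("sha256", b"correct horse", _SALT, 100_000),
}

def check_password(user: str, password: str) -> bool:
    """The only operation other modules are meant to call."""
    stored = _USERS.get(user)
    if stored is None:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), _SALT, 100_000)
    return hmac.compare_digest(stored, candidate)
```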