Course 5 - Assets, Threats, and Vulnerabilities Flashcards
After the core, the next NIST component we’ll discuss is its tiers. These provide security teams with a way to measure performance across each of the five functions of the core. Tiers range from Level-1 to Level-4. Level-1, or passive, indicates a function is reaching bare minimum standards. Level-4, or adaptive, is an indication that a function is being performed at an exemplary standard. You may have noticed that CSF tiers aren’t a yes or no proposition; instead, there’s a range of values. That’s because tiers are designed as a way of showing organizations what is and isn’t working with their security plans.
The practice of keeping data in all states away from unauthorized users
Information security (InfoSec):
A catalog of assets that need to be protected
Asset inventory:
The practice of labeling assets based on sensitivity and importance to an organization
Asset classification
A __________ is a person who decides who can access, edit, use, or destroy their information.
data owner
A __________ is anyone or anything that’s responsible for the safe handling, transport, and storage of information.
data custodian
____________ is the process of transforming information into a form that unintended readers can’t understand. Data of any kind is kept secret using a two-step process: encryption to hide the information, and decryption to unhide it.
Cryptography
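To make the two-step process concrete, here is a minimal sketch using the third-party cryptography package's Fernet recipe (a symmetric, single-key scheme); the key and message are illustrative, not part of the course material:

```python
# Minimal sketch of encryption and decryption with a single secret key,
# using the third-party `cryptography` package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret key shared by sender and receiver
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"meet at noon")  # step 1: hide the information
plaintext = cipher.decrypt(ciphertext)        # step 2: unhide it with the key

print(ciphertext)  # unreadable without the key
print(plaintext)   # b'meet at noon'
```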
An ___________ is a set of rules that solve a problem.
algorithm
Specifically in cryptography, a ________ is an algorithm that encrypts information.
cipher
A ____________ key is a mechanism that decrypts ciphertext.
cryptographic
A brute force attack is a trial-and-error process of discovering private information.
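To illustrate a cipher, a key, and a brute force attack together, here is a toy Caesar cipher sketch (not a real-world cipher; the shift value plays the role of the key):

```python
# Toy Caesar cipher: the "key" is the shift amount. With only 26 possible
# keys, a brute force attack can simply try them all.
def caesar_encrypt(text: str, shift: int) -> str:
    return "".join(
        chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.isalpha() else c
        for c in text.lower()
    )

def caesar_decrypt(text: str, shift: int) -> str:
    return caesar_encrypt(text, -shift)

ciphertext = caesar_encrypt("attack at dawn", 7)

# Brute force: trial and error over every possible key until one reads as plaintext.
for key in range(26):
    print(key, caesar_decrypt(ciphertext, key))
```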
Public key infrastructure, or PKI, is an encryption framework that secures the exchange of information online. It’s a broad system that makes accessing information fast, easy, and secure.
Symmetric encryption involves the use of a single secret key to exchange information.
PKI addresses the vulnerability of key sharing by establishing trust using a system of digital certificates between computers and networks.
A digital certificate is a file that verifies the identity of a public key holder. Most online information is exchanged using digital certificates. Users, companies, and networks hold certificates and exchange them when communicating information online as a way of signaling trust.
Digital certificates are a lot like a digital ID badge that’s used online to restrict or grant access to information. This is how PKI solves the trust issue.
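PKI itself is a broad system, but the asymmetric key pair it builds on can be sketched briefly. Assuming the third-party cryptography package, the example below shows how anyone can encrypt with a shared public key while only the private key holder can decrypt; certificates and certificate authorities, which establish trust in that public key, are not modeled here:

```python
# Sketch of the asymmetric key pair that PKI builds on: anyone can encrypt
# with the public key, but only the private key holder can decrypt.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # safe to share; a certificate vouches for it

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"secret shared over the network", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext)
```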
A hash function is an algorithm that produces a code that can’t be decrypted. Unlike asymmetric and symmetric algorithms, hash functions are one-way processes that do not generate decryption keys. Instead, these algorithms produce a unique identifier known as a hash value, or digest.
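A short sketch of the one-way property using Python's built-in hashlib module (the inputs are illustrative):

```python
# A hash function is one-way: the same input always yields the same digest,
# but the digest cannot be "decrypted" back into the input.
import hashlib

digest = hashlib.sha256(b"quarterly_report.pdf contents").hexdigest()
print(digest)  # 64-character hash value, unique to this exact input

# Changing even one byte of the input produces a completely different digest.
print(hashlib.sha256(b"quarterly_report.pdf contents!").hexdigest())
```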
_________ are access controls that serve a very basic purpose. They ask anything attempting to access information this simple question: who are you?
Authentication systems
Single sign-on, or SSO, is a technology that combines several different logins into one. Can you imagine having to reintroduce yourself every time you meet up with a friend? That’s exactly the sort of problem SSO solves.
Instead of requiring users to authenticate over and over again, SSO establishes their identity once, allowing them to gain access to company resources faster. While SSO systems are helpful when it comes to speeding up the authentication process, they present a significant vulnerability when used alone.
Multi-factor authentication, or MFA, is a security measure that requires a user to verify their identity in two or more ways to access a system or network. MFA combines two or more independent credentials, like knowledge and ownership, to prove that someone is who they claim to be.
SSO and MFA are often used in conjunction with one another to layer the defense capabilities of authentication systems. When both are used, organizations can ensure convenient access that is also secure.
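One common "something you have" factor is a time-based one-time password (TOTP). A minimal sketch, assuming the third-party pyotp package is available:

```python
# Sketch of the "something you have" factor in MFA: a time-based one-time
# password (TOTP), using the third-party `pyotp` package.
import pyotp

secret = pyotp.random_base32()      # enrolled once in the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()                   # the 6-digit code the user would type in
print("Valid?", totp.verify(code))  # server-side check of the second factor
```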
When it comes to securing data over a network, there are a couple of frequently used access controls that you should be familiar with: HTTP basic auth and OAuth.
Some websites still use basic auth to tell whether or not someone is authorized to access information on that site. However, this protocol is considered vulnerable to attacks because it transmits usernames and passwords openly over the network. Most websites today use HTTPS instead, which stands for hypertext transfer protocol secure. This protocol doesn’t expose sensitive information, like access credentials, when communicating over the network.
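To see why basic auth needs HTTPS, here is a sketch of what the Authorization header actually contains; the username and password are made up:

```python
# Why basic auth needs HTTPS: the Authorization header is only Base64-encoded,
# not encrypted, so anyone watching the network can recover the credentials.
import base64

credentials = "alice:hunter2"  # hypothetical username:password pair
header = "Basic " + base64.b64encode(credentials.encode()).decode()
print(header)  # what travels over the network with every request

# Trivially reversed by an eavesdropper on plain HTTP:
print(base64.b64decode(header.split()[1]).decode())  # alice:hunter2
```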
OAuth is an open-standard authorization protocol that shares designated access between applications. For example, you can tell Google that it’s okay for another website to access your profile to create an account. Instead of requesting and sending sensitive usernames and passwords over the network, OAuth uses API tokens to verify access between you and a service provider.
An API token is a small block of encrypted code that contains information about a user. These tokens contain things like your identity, site permissions, and more. OAuth sends and receives access requests using API tokens by passing them from a server to a user’s device.
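A sketch of how a client might present an API token instead of credentials, assuming the third-party requests package; the URL and token are placeholders:

```python
# Sketch of using an API token instead of a username and password.
# The URL and token below are placeholders, not real values.
import requests

api_token = "example-oauth-access-token"  # issued by the provider during the OAuth flow

response = requests.get(
    "https://api.example.com/v1/profile",
    headers={"Authorization": f"Bearer {api_token}"},  # token sent instead of credentials
    timeout=10,
)
print(response.status_code)
```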
HTTP uses what is known as basic auth, the technology used to establish a user’s request to access a server. Basic auth works by sending an identifier every time a user communicates with a web page.
Accounting is the practice of monitoring the access logs of a system. These logs contain information like who accessed the system, when they accessed it, and what resources they used.
Anytime a user accesses a system, they initiate what’s called a session. A session is a sequence of network requests and responses associated with the same user, like when you visit a website. Access logs are essentially records of sessions that capture everything from the moment a user enters a system until the moment they leave it.
Two actions are triggered when the session begins. The first is the creation of a session ID. A session ID is a unique token that identifies a user and their device while accessing the system. Session IDs are attached to the user until they either close their browser or the session times out.
The second action that takes place at the start of a session is an exchange of session cookies between a server and a user’s device.
A session cookie is a token that websites use to validate a session and determine how long that session should last. When cookies are exchanged between your computer and a server, your session ID is read to determine what information the website should show you.
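A rough sketch of the server side of this exchange, using only the standard library; real web frameworks handle session IDs and cookies for you, and the flag choices below are common hardening practice rather than anything specific to this course:

```python
# Sketch of what a server does at the start of a session: mint a random
# session ID and hand it to the browser in a session cookie.
import secrets
from http import cookies

session_id = secrets.token_urlsafe(32)  # unique token tied to this user and device

cookie = cookies.SimpleCookie()
cookie["session_id"] = session_id
cookie["session_id"]["httponly"] = True  # keep scripts from reading the cookie
cookie["session_id"]["secure"] = True    # only send it over HTTPS

print(cookie.output())  # the Set-Cookie header returned to the browser
```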
Cookies make web sessions safer and more efficient. The exchange of tokens means that no sensitive information, like usernames and passwords, is shared. Session cookies prevent attackers from obtaining that sensitive data, but attackers can still do damage. With a stolen cookie, an attacker can impersonate a user using their session token. This kind of attack is known as session hijacking.
Session hijacking is an event in which attackers obtain a legitimate user’s session ID. During these kinds of attacks, cyber criminals impersonate the user, causing all sorts of harm. Money or private data can be stolen. If, for example, hijackers obtain a single sign-on credential from stolen cookies, they can even gain access to additional systems that otherwise seem secure.
Pro tip: Another way to remember this authentication model is: something you know, something you have, and something you are.
User provisioning is the process of creating and maintaining a user’s digital identity. For example, a college might create a new user account when a new instructor is hired. The new account will be configured to provide access to instructor-only resources while they are teaching. Security analysts are routinely involved with provisioning users and their access privileges.
Pro tip: Another role analysts have in IAM is to deprovision users. This is an important practice that removes a user’s access rights when they should no longer have them.
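A toy sketch of what provisioning and deprovisioning might look like in code; the account store and role names are made up for illustration:

```python
# Toy sketch of provisioning and deprovisioning a digital identity.
accounts: dict[str, dict] = {}

def provision_user(username: str, role: str) -> None:
    """Create the account and grant the access that role needs."""
    accounts[username] = {"role": role, "active": True}

def deprovision_user(username: str) -> None:
    """Remove access rights when the user should no longer have them."""
    account = accounts.get(username)
    if account:
        account["active"] = False
        account["role"] = None

provision_user("new_instructor", role="instructor")
deprovision_user("new_instructor")
print(accounts)
```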
New vulnerabilities are constantly being discovered. When one is exploited before the people responsible for fixing it even know it exists, that’s called a zero-day. A zero-day is an exploit that was previously unknown, leaving defenders zero days to prepare a fix.
The first layer of defense in depth is the perimeter layer. This layer includes some technologies that we’ve already explored, like usernames and passwords. Mainly, this is a user authentication layer that filters external access. Its function is to allow only trusted partners to reach the next layer of defense.
Second is the network layer, which is more closely aligned with authorization. The network layer is made up of technologies like network firewalls and other security tools.
Next is the endpoint layer. Endpoints refer to the devices that have access to a network, such as laptops, desktops, or servers. One example of a technology that protects these devices is anti-virus software.
After that, we get to the application layer. This includes all the interfaces that are used to interact with technology. At this layer, security measures are programmed as part of an application. One common example is multi-factor authentication. You may be familiar with having to enter both your password and a code sent by SMS. This is part of the application layer of defense.
And finally, the fifth layer of defense is the data layer. At this layer, we’ve arrived at the critical data that must be protected, like personally identifiable information. One security control that is important here in this final layer of defense is asset classification.
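As a quick summary of the five layers and the example controls mentioned above (the pairings are illustrative, not an official mapping):

```python
# Summary of the five defense-in-depth layers described above,
# each paired with the example control from the notes.
defense_in_depth = {
    "perimeter":   "usernames and passwords (user authentication)",
    "network":     "network firewalls",
    "endpoint":    "anti-virus software",
    "application": "multi-factor authentication",
    "data":        "asset classification",
}

for layer, control in defense_in_depth.items():
    print(f"{layer}: {control}")
```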
One of the most popular libraries of vulnerabilities and exposures is the CVE list. The Common Vulnerabilities and Exposures list, or CVE list, is an openly accessible dictionary of known vulnerabilities and exposures.
Security teams commonly use the CVE list and CVSS scores as part of their vulnerability management strategy. These references help teams prioritize security fixes, such as deciding which software updates and patches to install first.
Libraries like the CVE list help organizations answer questions: Is a vulnerability dangerous to our business? If so, how soon should we address it?
These online libraries bring together diverse perspectives from across the world. Contributing to this effort is one of my favorite parts of working in this field. Keep gaining experience, and I hope you’ll participate too!
A CNA is an organization that volunteers to analyze and distribute information on eligible CVEs. All of these groups have an established record of researching vulnerabilities and demonstrating security advisory capabilities. When a vulnerability or exposure is reported to them, a rigorous testing process takes place.
OWASP is a nonprofit foundation that works to improve the security of software. OWASP is an open platform that security professionals from around the world use to share information, tools, and events that are focused on securing the web.
The NIST National Vulnerability Database (NVD) uses what’s known as the Common Vulnerability Scoring System, or CVSS, which is a measurement system that scores the severity of a vulnerability. Security teams use CVSS as a way of calculating the impact a vulnerability could have on a system. They also use these scores to determine how quickly a vulnerability should be patched.
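A small sketch of how CVSS v3.x base scores map to severity ratings and how findings might be sorted to decide what to patch first; the CVE IDs and scores are made up:

```python
# Map a CVSS v3.x base score to its severity rating, then sort findings
# so the most severe vulnerabilities are addressed first.
def cvss_severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

findings = {"CVE-XXXX-0001": 9.8, "CVE-XXXX-0002": 5.4, "CVE-XXXX-0003": 7.5}

for cve, score in sorted(findings.items(), key=lambda item: item[1], reverse=True):
    print(cve, score, cvss_severity(score))
```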