Security Operations Flashcards

Domain 4, Chapters 14-22

1
Q

What are the 3 pillars of Secure baselines?

A

Fundamental security configuration standards:
Establish: Define security measures
Deploy: Implement security measures
Maintain: Sustain and update security measures

Chapter 14

2
Q

What are the 2 options for Establishing secure baselines?

A

Center for Internet Security (CIS) Benchmarks: CIS benchmarks are comprehensive, community-driven guides meticulously crafted to establish secure configurations for various computing resources. IT professionals and organizations worldwide actively contribute to the creation and refinement of these benchmarks. This collaborative effort ensures that the benchmarks remain current, adaptable to emerging threats, and applicable to a broad spectrum of technology stacks. CIS benchmarks provide a detailed roadmap for organizations to fortify their defenses by implementing industry-recognized best practices and security recommendations.
Security Technical Implementation Guide (STIG): STIG is a comprehensive repository of cybersecurity guidelines and best practices curated by the United States Department of Defense (DoD). Its primary mission is to enhance the security posture of DoD information systems and networks. Implementing STIG recommendations involves a systematic approach whereby organizations assess their systems and networks against the guidelines, identify vulnerabilities or areas of noncompliance, and take remedial actions to align with the prescribed security configurations. This iterative process not only fortifies defenses but also ensures continuous monitoring and adaptation to evolving threats. Despite its origins, STIG's impact also extends far beyond the defense sector, influencing cybersecurity practices in both government and private industries.

Chapter 14

3
Q

2 Options to Deploy a security baseline

A

Microsoft Group Policy: Microsoft Group Policy is an indispensable tool for organizations that predominantly rely on Windows operating systems. It allows administrators to define and enforce security configurations across a network of Windows devices. With Group Policy, a set of predefined security baselines can be created and applied uniformly to all Windows systems within an organization.
Puppet Forge: Puppet Forge is a versatile platform-agnostic solution. It provides a repository of pre-built modules and configurations that can be used to deploy security baselines across a range of operating systems, including Windows, Linux, and macOS. Puppet Forge’s flexibility makes it a favored choice for heterogeneous environments. It leverages the expertise of an open source community, ensuring constant updates and improvements.

Chapter 14

4
Q

What are 2 tools to Maintain a baseline?

A

SCAP Compliance Checker: The Security Content Automation Protocol (SCAP) is a standardized framework for maintaining system security. SCAP Compliance Checker operates by comparing a system's security settings against a predefined checklist of security requirements. If discrepancies are found, it generates reports highlighting areas of non-compliance so that organizations can take corrective actions swiftly. A benefit of SCAP Compliance Checker is that it evaluates systems against a wide array of security benchmarks, including those published by the National Institute of Standards and Technology (NIST) and other industry-specific standards.
CIS Configuration Assessment Tool (CIS-CAT): CIS-CAT is a configuration assessment tool designed to evaluate systems and applications against CIS benchmarks, which are curated by the Center for Internet Security (CIS). These benchmarks represent a gold standard for secure configurations and best practices across various technologies, from operating systems to web browsers. Benefits of CIS-CAT include the tool's flexibility, which allows organizations to tailor assessments to their specific needs and requirements, and automated scanning, which increases the efficiency of the process and reduces the risk of human error.
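
A minimal sketch of the comparison logic such tools automate: check live settings against a baseline checklist and report drift. The setting names and values below are hypothetical examples, not real benchmark content.

```python
# Compare current system settings against a baseline and flag drift.
# Setting names and values are hypothetical, for illustration only.
baseline = {
    "password_min_length": 14,
    "smbv1_enabled": False,
    "firewall_enabled": True,
}

current = {
    "password_min_length": 8,
    "smbv1_enabled": False,
    "firewall_enabled": True,
}

findings = {
    setting: (expected, current.get(setting))
    for setting, expected in baseline.items()
    if current.get(setting) != expected
}

for setting, (expected, actual) in findings.items():
    print(f"NON-COMPLIANT: {setting}: expected {expected}, found {actual}")
```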

Chapter 14

5
Q

10 Hardening targets

A

Making targets more secure:
Mobile devices: Secure smartphones and tablets
Workstations: Enhance security on desktop computers
Switches: Secure network switches for data protection
Routers: Strengthen security on network routers
Cloud infrastructure: Secure cloud-based resources
Servers: Enhance security on server systems
ICS/SCADA: Secure industrial control systems and SCADA
Embedded systems: Strengthen security for embedded devices
RTOS: Secure real-time operating systems
IoT devices: Enhance security for Internet of Things devices

Chapter 14

6
Q

What are two considerations when deploying a Wireless network?

A

Mobile and wireless technology equipment:
Installation considerations: Factors for successful setup
Site surveys: Assess location for optimal wireless coverage
Heat maps: Visualize signal strength and coverage areas

Site Survey: Conducting site surveys is an essential step in optimizing wireless network performance. These surveys involve a comprehensive analysis of the environment, which includes identification of sources of interference, such as load-bearing walls, cordless phones, microwaves, elevators, metal frames, metal doors, and radio waves. A site survey will help to determine the best places to install the wireless access points that users connect to.
Heat Maps: A heat map is a valuable tool in the hands of a network administrator when addressing reports of inadequate coverage. By visually pinpointing areas with subpar coverage on the map, administrators can efficiently identify potential issues, including malfunctioning WAPs, which may be the root cause of the problem.

Chapter 14

7
Q

Mobile solutions

A

Solutions for mobile device management

8
Q

Mobile device management (MDM)

A

A Mobile Device Management (MDM) solution provides centralized control and maintenance of mobile devices to ensure strict adherence to the security protocols established by an organization and empowers IT administrators to oversee, configure, and safeguard mobile devices from a remote location. Among its primary responsibilities, MDM is set up by the IT staff to enforce security guidelines such as encryption, password prerequisites, and application whitelisting. These measures guarantee that all devices utilized within the organization align with the prescribed security standards, thereby diminishing the probability of data breaches.

Chapter 14

9
Q

What are the three mobile device deployment models?

A

Bring Your Own Device (BYOD): BYOD policies allow employees to use their personal devices for work-related tasks. While this can boost productivity, it also presents a security risk, as the nature of such policies means that company data and access are carried on a device that is regularly removed from business premises and otherwise employed for personal use. To mitigate these risks, organizations should implement containerization techniques to separate work and personal data and enforce strict security policies on the work-related portion of the device. The device must be compliant with security policies. The owner of the device cannot use the device for social purposes during working hours and must allow company-owned applications to be installed.
Choose Your Own Device (CYOD): CYOD is a policy in which the company provides employees with a selection of approved devices to choose from. These devices are owned and managed by the organization. This model allows for increased flexibility with company devices but still maintains security control.
Corporate-Owned, Personally Enabled (COPE): In this model, organizations provide employees with corporate-owned devices that can be used for both business and personal use but must comply with company policies. Full device encryption will be used on these devices to prevent data theft if the device is left unattended. It is important that mobile devices have strong passwords and screen locks to protect the data stored on the device.

Chapter 14

10
Q

What are the four wireless Connection methods?

A

Cellular: Mobile network connectivity. Cellular networks (the latest versions of which are 4G and 5G) are responsible for providing mobile voice and data services over large geographical areas. They rely on a network of cell towers and satellites to connect mobile devices to the internet and each other. Cellular networks are generally considered secure due to their encryption protocols; however, vulnerabilities such as SIM card cloning and eavesdropping still exist.
Bluetooth: Bluetooth is a short-range wireless technology commonly used for connecting peripherals such as headphones and keyboards.
Near Field Communication (NFC): NFC is another short-range technology that allows devices to communicate when they are in close proximity, typically within a few centimeters. This technology is the foundation of contactless payment systems such as Apple Pay and Google Wallet. It enables secure transactions by simply tapping smartphones or credit cards on a compatible terminal. You should store your NFC-enabled card inside an aluminum pouch or wallet to prevent someone standing very close to you from skimming your card.
Global Positioning System (GPS): GPS is a satellite-based technology that provides precise location information by triangulating signals from multiple satellites. This is known as geolocation. GPS is used by the satellite navigation system in cars to guide you to a destination, and GPS tracking uses these signals to determine the exact geographical coordinates of a device. While GPS itself relies on satellites, the device that receives GPS signals can transmit its location data over a cellular network to be accessed remotely or used in various applications.

Chapter 14

11
Q

Wi-Fi Protected Access 3 (WPA3) and its 5 key features

A

Wi-Fi Protected Access 3 (WPA3) primarily relies on Simultaneous Authentication of Equals (SAE) for key establishment and encryption, compared to WPA2's 128-bit encryption. The following list has some key features of WPA3:
Protected Management Frames (PMF): This can provide multicast transmission and can protect wireless packets against Initialization Vector (IV) attacks, in which the attacker tries to capture the encryption keys.
WPA3-Enterprise: In contrast to the 128 bits supported by WPA2, WPA3 has an Enterprise version that makes it suitable for government and finance departments. WPA3-Enterprise uses Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE) for the initial handshake.
SAE: SAE replaces WPA2-PSK. SAE uses a very secure Diffie-Hellman handshake called Dragonfly and protects against brute-force attacks. It uses Perfect Forward Secrecy (PFS), which ensures that your session keys cannot be compromised.
Wi-Fi Easy Connect: This makes it very easy to connect IoT devices, such as a smartphone, by simply using a QR code.
Wi-Fi Enhanced Open: This is an enhancement of WPA2 open authentication that uses encryption. It can be used in public areas such as hotels, cafés, and airports where no password is required. It also prevents eavesdropping as it uses PMF.

Chapter 14

12
Q

AAA/Remote Authentication Dial-In User Service (RADIUS) and its 5 key features

A

**RADIUS** is a network protocol and a server-client architecture widely used for centralizing authentication, authorization, and accounting (AAA) functions in corporate networks. Key features and aspects of RADIUS include the following:
Authentication: Authentication is the process of verifying who you are using an authentication method such as a password or PIN.
Authorization: Authorization determines the level of access granted to an authenticated user.
Accounting: RADIUS’s accounting feature maintains detailed logs of user activities. This supports security incident detection and responses, post-incident analysis, and compliance.
RADIUS clients: RADIUS clients are not desktop clients but servers in their own right. Examples include VPNs, WAPs, and 802.1x authenticated switches, the last of which requires an endpoint certificate.
Shared Secret: A “shared secret” (also known as a shared key or shared password) is used by the RADIUS client to communicate with a RADIUS server for authentication and authorization purposes.

Chapter 14

13
Q

4 Cryptographic Protocols

A

Wired Equivalent Privacy (WEP): WEP is an outdated protocol whose key management was problematic due to insufficient security. It used only a 64-bit encryption key with the RC4 stream cipher to protect data, leaving it vulnerable to attacks. WEP used a 24-bit initialization vector (IV) to help encrypt data packets. However, the IVs were reused, which made it relatively easy for attackers to predict and crack the encryption keys.
WPA: WPA was designed to fix critical vulnerabilities in WEP standards. WPA still uses the RC4 stream cipher but also uses a mechanism called the Temporal Key Integrity Protocol (TKIP) to enhance Wi-Fi security by dynamically changing encryption keys.
Wi-Fi Protected Access version 2 (WPA2): WPA2 is currently the most commonly used protocol. It uses the Advanced Encryption Standard (AES) with the Counter Mode Cipher Block Chaining Message Authentication Code Protocol (CCMP) and a 128-bit encryption key, offering strong protection for wireless networks.
Wi-Fi Protected Access version 3 (WPA3): WPA3 primarily relies on SAE for key establishment and encryption, making it stronger than WPA2-CCMP.

Chapter 14

14
Q

5 Authentication Protocols

A

Protected Extensible Authentication Protocol (PEAP): PEAP is a version of Extensible Authentication Protocol (EAP) that encapsulates and encrypts the EAP data using a certificate stored on the server, making it more secure for Wireless Local Area Networks (WLANs).
802.1x: This is an overarching access control standard. 802.1x allows access to only authenticated users or devices and is therefore used by managed switches for port-based authentication. It needs a certificate installed on the endpoint (client or device), which is used for authentication. For wireless authentication, the switch needs to use a RADIUS server for enterprise networks.
EAP-TLS: EAP-TLS is a specific, secure version of wireless authentication that requires a certificate stored on the endpoint (client or device) to verify identity and authorization.
EAP-TTLS: EAP-TTLS uses two phases. The first is to set up a secure session with the server by creating a tunnel using certificates that are stored on the server and seen by the client. The second is to authenticate the client's credentials.
EAP-FAST: EAP-FAST, developed by Cisco, is used in wireless networks and point-to-point connections to perform session authentication. It is the only one of these authentication protocols that does not use a certificate.

Chapter 14

15
Q

5 key features of Application Security

A

Input validation: Input validation ensures that all data (whether entered via a web page or a wizard) complies with predefined rules, formats, and permissible ranges. Imagine filling out a web form swiftly, only to mistakenly place your zip code in the wrong field. Input validation steps in like a helpful guide, promptly detecting and highlighting such errors in a vivid red, signaling that certain parameters require correction. Once these inaccuracies are rectified, the form will graciously accept and process the submission. But input validation's role extends far beyond the user interface. Input validation protects against attacks such as SQL injection, buffer overflow, and integer overflow attacks by ensuring malicious data is rejected (see the validation sketch at the end of this card).
Secure cookies: Cookies are small packets of data that serve as a fundamental component of web browsing. They can be both friendly and, in some cases, potentially treacherous. Cookies are tiny pieces of information (packets) that websites send to your web browser and are stored on your computer or device. Their primary purpose is to enhance your web browsing experience. These encrypted packets preserve user sessions, preferences, and authentication tokens, fortifying applications against data theft and identity compromise. However, they can also be treacherous as they can pose privacy risks and introduce security vulnerabilities if not properly managed.
Static code analysis: In the process of static code analysis, developers meticulously inspect the source code of their software to identify and eliminate any potential bugs or vulnerabilities that could expose it to security threats such as buffer overflow or integer overflow. This examination occurs without executing the code.
Code signing: Code signing is a digital mechanism that functions as a cryptographic seal, providing assurance regarding the authenticity and reliability of software. It verifies that the software has not been tampered with and comes from a trusted source.
Secure coding practices: Secure coding practices are a set of guidelines and principles that software developers follow to write code in a way that prioritizes security and reduces the risk of vulnerabilities or weaknesses that could be exploited by attackers. These practices are essential to creating software that is secure, resilient, and less prone to security breaches.
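
A small validation sketch under hypothetical form-field rules (a US ZIP code and an order quantity), showing how allow-listing rejects malformed or malicious input; the field rules and limits are invented for illustration:

```python
import re

# Hypothetical form-field rule: US ZIP or ZIP+4.
ZIP_RE = re.compile(r"\d{5}(-\d{4})?")

def validate_zip(value: str) -> bool:
    """Accept only a well-formed ZIP code; reject everything else."""
    return ZIP_RE.fullmatch(value.strip()) is not None

def validate_quantity(value: str) -> int:
    """Allow-list digits only and range-check to prevent overflow abuse."""
    if not value.isdigit():
        raise ValueError("quantity must be numeric")
    quantity = int(value)
    if not 1 <= quantity <= 1000:      # hypothetical permissible range
        raise ValueError("quantity out of permissible range")
    return quantity

print(validate_zip("90210"))                        # True
print(validate_zip("90210'; DROP TABLE users;--"))  # False: rejected
print(validate_quantity("25"))                      # 25
```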

Chapter 14

16
Q

Sandboxing

A

Sandboxing an application means isolating it from the network for testing, patching, or complete malware inspection.

Chapter 14

17
Q

5 Acquisition/procurement process considerations

A

Change management: When you procure new assets or replace existing assets, it is vital that you submit a case to the Change Advisory Board to get approval for the purchase and implementation.
Vendor selection: Selecting the right vendor is crucial for quality, cost efficiency, reliability, and compliance. It's not just about finding the best deal but also about ensuring the vendor aligns with your organization's security and compliance requirements. Organizations should thoroughly vet vendors, examining their security protocols, track record, and adherence to industry standards and regulations.
Total cost of ownership: Not only should you consider the purchase price of an asset but you must also consider maintenance costs and the cost of replacement parts. You don't want to purchase an acquisition that will become financially burdensome.
Risk assessment: Security considerations must be addressed at every stage of the acquisition process. A comprehensive risk assessment helps identify potential vulnerabilities and threats associated with the new assets. This assessment is essential for developing strategies to mitigate risks and ensure that the acquisition aligns with the organization's overall security objectives.
Compliance alignment: Adherence to legal and regulatory requirements is non-negotiable, and security and compliance go hand in hand. Organizations must ensure that the assets they acquire comply with relevant data protection, privacy, and industry-specific regulations. Failure to do so can result in legal repercussions and reputational damage.

Chapter 15

18
Q

What are the two major elements of Assignment/accounting?

A

Asset register: An asset register is a comprehensive record of an organization’s assets, including details such as location, value, and ownership. It is vital that any asset that an organization procures is added to the asset register to ensure all assets are accounted for. If an asset found on your network is not in the asset register, then it is likely to be a rogue device.
Standard naming convention: A standard naming convention is required so that organizations can distinguish between different assets. For example, you might call your desktops PC1 and PC2, your domain controllers DC1 and DC2, and your servers SQL1 and SQL2.

Chapter 15

19
Q

Two important factors in assignment and accounting

A

Ownership: Ownership goes hand-in-hand with accountability. When assets are assigned to specific owners, it becomes easier to enforce accountability for their condition and usage. This should be reflected in the asset register. Owners should have a clear understanding of their responsibilities regarding the asset’s security. Access control mechanisms, such as user authentication and authorization, are often tied to ownership to ensure that only authorized individuals can interact with the asset.
Classification: Asset classification involves categorizing assets into critical, essential, and non-essential assets. The value and the sensitivity of the asset are important so that when an asset fails, it gets the correct level of support. For example, if someone’s computer fails, it will not have a high priority for repair, whereas if a network intrusion prevention system fails, it will have immediate support as it is a critical asset within the organization. Depending on the equipment’s classification, it will be afforded the appropriate level of security.

Chapter 15

20
Q

Tracking can be conducted by maintaining an asset inventory and enumeration with what two principles?

A

Inventory: An up-to-date record of assets

Enumeration: Identifying and tracking all assets

REMINDER
Purchasing hardware and software must be done through a reputable vendor and not an unknown third party.

Chapter 15

21
Q

What are the 4 steps to Disposal/decommissioning?

A

Sanitization: Safely wiping data from retired assets via data wiping/overwriting, secure erase, and degaussing

Destruction: Properly disposing of obsolete assets by shredding, incineration, pulverization, crushing, chemical decomposition, or pulping

Certification: Verifying secure asset disposal

Data retention: Managing data storage for compliance

REMINDER
When introducing new assets and disposing of legacy assets, it is important that the proper change management process is followed.

Chapter 15

22
Q

The two types of vulnerability scans

A

A **non-credentialed** scan operates with restricted privileges and can only identify vulnerabilities that are visible from the network. This is the same view available to external attackers. Non-credentialed scans are quick and efficient in spotting vulnerabilities that require immediate attention, highlighting security gaps that demand immediate remediation to fortify the network's external perimeter.
A credentialed scan, by comparison, is a much more powerful version of the vulnerability scanner. It has elevated privileges, thereby providing more accurate information. It can scan documents, audit files, and check certificates and account information. The credentialed scan can see information from both native and third-party software, which is essential for maintaining a secure and well-managed IT environment.

Chapter 16

23
Q

Security Content Automation Protocol (SCAP)

A

Security Content Automation Protocol (SCAP) is a framework that enables compatible vulnerability scanners to see whether a computer adheres to a predefined configuration baseline.

Chapter 16

24
Q

The 3 types of application scanners

A

Static analysis: Static analysis, a foundation of application security, is a proactive method that involves inspecting the source code, binaries, or application artifacts without executing the program. This process enables security experts to unveil vulnerabilities, coding errors, and potential weaknesses within the application's structure. By meticulously dissecting the code base, static analysis scanners can identify issues such as code injection vulnerabilities, insecure authentication mechanisms, and poor data validation practices.
Dynamic analysis: In contrast to static analysis, dynamic analysis scanners take a runtime approach to vulnerability detection. They interact with the application while it's running, probing for vulnerabilities and weaknesses as the program executes. This method provides a real-world simulation of how an attacker might exploit vulnerabilities in a live environment.
Web application scanners: Web application scanners are specialized tools tailored to the unique challenges posed by web applications. They assist with the security of web-based software, such as online portals, ecommerce platforms, and web services. Their job is to inspect web applications for vulnerabilities such as SQL injection, XSS, security misconfigurations, and authentication weaknesses that can be exploited by attackers via the web. Web application scanners simulate real-world attacks by sending crafted requests and observing how an application responds. By doing so, they reveal vulnerabilities that might otherwise remain hidden until exploited by cybercriminals.

Chapter 16

25
Q

Package Monitoring

A

A package typically refers to a software component or module that is used within an application. These packages can include libraries, frameworks, plugins, or other pieces of code that are integrated into an application to provide specific functionality.
At the heart of package monitoring lies access to comprehensive vulnerability databases. These repositories catalog known vulnerabilities associated with specific software packages. Security teams rely on these databases to cross-reference the components they use in their applications against reported vulnerabilities.

Chapter 16

26
Q

CVE

A

The Common Vulnerabilities and Exposures (CVE) list is a database of publicly disclosed cybersecurity vulnerabilities and exposures maintained by the MITRE Corporation, helping organizations manage the security of their systems against known vulnerabilities.

Chapter 16

27
Q

Threat Feeds

A

Threat feeds are curated streams of real-time information that provide insights into current and emerging cyber threats. These feeds aggregate data from various sources, including the following:
Security vendors: Leading cybersecurity companies often maintain their own threat feeds, offering insights into the latest threats and vulnerabilities.
**Government agencies:** National cybersecurity organizations such as the United States' Cybersecurity and Infrastructure Security Agency (CISA) provide threat feeds with information on threats that may have national or global significance. More information can be found on its website at https://www.cisa.gov/news-events/cybersecurity-advisories.
Open Source Intelligence (OSINT): OSINT feeds gather data from publicly available sources, including forums, social media, and dark web monitoring. AlienVault is a community threat feed, and more detailed information can be found at https://otx.alienvault.com/.
Commercial threat intelligence providers: Many companies specialize in collecting, analyzing, and distributing threat intelligence data to subscribers.

Chapter 16

28
Q

Three types of Penetration Testing

A

Known environment: In a known environment, testers (known as white-box pen testers) are provided with extensive information about an organization’s systems and infrastructure. This allows them to focus on specific targets and vulnerabilities within the environment.
Partially known environment: Pen testers (known as gray-box pen testers) are given limited information about an organization’s systems and infrastructure in a partially known environment. This simulates a scenario where an attacker has acquired some knowledge about the target but not all of it.
Unknown environment: In an unfamiliar setting, pen testers (known as black-box pen testers) operate without prior information about an organization's systems, infrastructure, or security protocols. This simulates an attacker with no inside information attempting to breach the organization.

Chapter 16

29
Q

Common Vulnerability Scoring System (CVSS)

A

Common Vulnerability Scoring System (CVSS) is a standardized system for assessing the severity of vulnerabilities, according to factors such as the impact, exploitability, and ease of remediation.

Chapter 16

30
Q

Application Monitoring

A

Continuously observe for potential issues
Using logging and alerting systems, systems responsible for monitoring can detect threats and malicious activity. Enhanced monitoring enables security analysts to act swiftly on the detailed information provided. Commercial applications such as SolarWinds Security Event Manager and Splunk offer robust monitoring and alerting solutions for businesses to help them detect and respond to potential security threats. They use methods such as data collection, real-time analysis, and alerts.

Chapter 14

31
Q

5 Vulnerability response and remediation methods

A

Patching: Applying updates to fix vulnerabilities

Insurance: Coverage for financial losses due to cyber incidents

Segmentation: Dividing networks for security and isolation

Compensating controls: Alternative safeguards for vulnerability mitigation

Exceptions/exemptions: Managing vulnerabilities not immediately remediated

Chapter 16

32
Q

3 stages of Validation of remediation

A

Rescanning: Post-remediation vulnerability checks. This involves running vulnerability assessments or scans on the affected systems or applications, verifying that the identified vulnerabilities have indeed been remediated.
Audit: A detailed review of the remediation process. This is a meticulous examination of the entire remediation effort, including the steps taken to address vulnerabilities. Audits are often conducted internally or by third-party assessors, with the aim of ensuring that remediation efforts align with organizational policies and best practices.
Verification: Ensuring long-term vulnerability mitigation. Involves ongoing monitoring and assurance that vulnerabilities remain mitigated over time.

Chapter 16

33
Q

5 things to include in vulnerability Reporting

A

Communicating vulnerability status and risks

Vulnerability overview: This is a summary of the current vulnerability landscape, including the total number of vulnerabilities, their severity distribution, and trends over time.
CVSS scores: These provide detailed information on the varying levels of severity for identified vulnerabilities; those of the highest priority, requiring immediate attention, should be highlighted.
Remediation progress: This is an update on the status of remediation efforts, including the number of vulnerabilities addressed and those still pending.
Risk reduction: The report should include metrics by which to measure vulnerability management activities that have contributed to reducing the organization’s overall cybersecurity risk.
Recommendations: Clear recommendations on the prioritization and allocation of resources for vulnerability remediation efforts should also be provided. With the information provided by the vulnerability report, management may decide to add additional resources to prevent any exposed vulnerabilities in the future.

Chapter 16

34
Q

5 monitoring resources to review

A

Log files: Log files are text files that reside on every device, recording events as they happen.
Security logs: Security logs record all authorized and unauthorized access to resources and privileges (a simple review example follows this list).
Systems monitors: Systems refers to the servers, workstations, and endpoints that make up an organization's network. Monitoring systems involves keeping a vigilant eye on their performance metrics, such as CPU usage, memory utilization, and network traffic.
Application monitors: Applications are software programs that enable users to perform various tasks on their computers and devices, and monitoring applications involves tracking their performance, availability, and security.
Infrastructure monitors: Infrastructure encompasses the network components, databases, and cloud services that support an organization's digital operations. Monitoring infrastructure involves ensuring the integrity and availability of these critical resources with tools including network monitoring software, database activity monitoring, and cloud security solutions.
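
A small illustrative sketch of the kind of review these resources enable, counting failed logons in a security log; the log format and the alert threshold are made-up examples:

```python
import re
from collections import Counter

# Hypothetical security-log lines; real formats vary by platform.
log_lines = [
    "2024-05-01T09:12:01 FAILED_LOGIN user=alice src=10.0.0.5",
    "2024-05-01T09:12:07 FAILED_LOGIN user=alice src=10.0.0.5",
    "2024-05-01T09:13:44 LOGIN_OK user=bob src=10.0.0.9",
]

# Count failed logons per user.
failures = Counter(
    match.group(1)
    for line in log_lines
    if (match := re.search(r"FAILED_LOGIN user=(\S+)", line))
)

for user, count in failures.items():
    if count >= 2:  # hypothetical alert threshold
        print(f"possible brute force: {user} ({count} failures)")
```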

Chapter 17

35
Q

4 components of monitoring with Simple Network Management Protocol (SNMP)

A

SNMP agents: SNMP agents are software modules or processes running on network devices, such as routers, switches, servers, and even IoT devices.
**SNMP managers:** SNMP managers are centralized systems responsible for monitoring and managing network devices. They initiate SNMP requests to gather information from SNMP agents and can also configure and control devices. Managers use SNMP protocol operations such as GET, SET, and GETNEXT to retrieve or modify information stored in the Management Information Base (MIB), which stores information about devices on the network. SNMP managers play a vital role in network monitoring and troubleshooting by polling SNMP agents for data and making decisions based on the collected information.
SNMP traps: SNMP traps are asynchronous notifications sent by SNMP agents to SNMP managers without a prior request. They are used to inform managers of specific events or conditions. Traps are triggered when predefined thresholds or conditions are met, such as hardware failures, high resource utilization, or security breaches, and they provide real-time alerts to network administrators, allowing them to respond promptly to critical events.
Network Management System (NMS): Many NMS or monitoring tools use SNMP data to provide visual representations of device statuses. These tools allow the organization to define thresholds or conditions for different states (e.g., up, down, warning, or critical) and then use colors to visually indicate the status of devices based on these conditions: green to indicate that a device is "up" or in a healthy state; yellow for a "warning" or condition that requires attention but is not critical; and red to signify that a device is "down" or in a critical state.
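
A minimal sketch of the GET request a manager issues, assuming the third-party pysnmp library (pip install pysnmp); the agent address and community string are placeholders:

```python
# Query a device's description (sysDescr) from its MIB over SNMPv2c.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),      # placeholder community, v2c
        UdpTransportTarget(("192.0.2.1", 161)),  # placeholder agent address
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print(error_indication)  # e.g., request timed out: agent unreachable
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```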

Chapter 17

36
Q

Network Intrusion Detection Systems (NIDSs)

A

NIDSs are passive security mechanisms designed to monitor network traffic and identify suspicious or malicious activities.

Chapter 17

37
Q

Network Intrusion Prevention Systems (NIPSs)

A

NIPSs are proactive security measures that not only detect but also actively block or mitigate security threats within a network.

Chapter 17

38
Q

5 activities of using Log aggregation

A

Collecting and consolidating log data from multiple sources (SIEM)
Alerting: Notifications or alarms in response to events
Scanning: Examining networks, systems, or applications to identify security weaknesses
Reporting: Creating and disseminating summaries of alerts and scans
Archiving: Storing historical data securely
Alert response and remediation/validation: Reacting to security alerts, including:
Quarantine: Isolating potentially compromised systems or devices
Alert tuning: Adjusting alerting systems to minimize false positives and negatives

Chapter 17

39
Q

9 essential security Tools (general, not specific)

A

Systems and protocols for implementing security
Security Content Automation Protocol (SCAP): A framework for security compliance
Benchmarks: Standardized guidelines or best practices for security
Agents/agentless: Data collection using two methods: agent-based (using software agents installed on devices) and agentless (using existing network protocols for remote data collection)
Security information and event management (SIEM): A system that aggregates, analyzes, and reports on security-related data
Antivirus: Designed to detect, prevent, and remove malicious software
Data loss prevention (DLP): Tools and processes used to prevent unauthorized access or transmission of data
Simple Network Management Protocol (SNMP) traps: Alerts generated by network devices
NetFlow: Network protocol used for collecting and monitoring data about network traffic flow
Vulnerability scanners: Tools that systematically scan networks, systems, or applications

Chapter 17

40
Q

Firewall

A

Protects networks via traffic filtering
Rules: Sets guidelines for network interactions
Inbound and outbound rules: Firewalls use outbound rules that dictate what your internal network can access externally, while inbound rules regulate incoming traffic to determine what external requests can access your network. For every outbound rule, there needs to be a corresponding inbound rule.
Explicit allow and explicit deny: In firewall configurations, the sequence of rules is crucial for effective security. Explicit deny rules serve as the first line of defense, blocking specific traffic types or sources that should never gain entry, while allow rules specify what's allowed after the denial criteria have been established. For this reason, best practice dictates that you place deny rules first, prior to configuring your allow rules.
Access lists: Determines who gets entry. The ACL prevents access by using port numbers, protocols, or IP addresses. When you install a new firewall or router, there are no rules, except the last rule of deny all. The default state for either a router or firewall is to block all traffic until exceptions are created, by configuring allow rules (both outbound and inbound) for the traffic you want to allow through. If there are no allow rules, the last rule of "deny all" applies. This is called an implicit deny (see the rule-evaluation sketch at the end of this card).
Ports/protocols: Communication gateways and standards. Protocols are how applications exchange data and facilitate actions such as remote command execution, email communication, and file downloads. Think of them as the rules of engagement in the vast landscape of the internet, ensuring that data travels securely and efficiently from one point to another. For example, we use the Simple Mail Transfer Protocol (SMTP) to transfer email between mail servers. While a protocol dictates the rules of how data is formatted and transmitted, a port acts as a virtual endpoint for communication between devices and applications over a network. Think of a port as a designated channel through which data can be sent and received. It's like a TV channel: the channel you tune to determines which broadcast you receive, just as the port number determines which service handles the traffic.
Screened subnets: Isolated network sections for safety.

REMINDER
Once a firewall has been installed, the default rule is “deny all.” To enable traffic to flow through, you need to add an “allow rule” for traffic that you want to pass through the firewall.

REMINDER
If you are port-forwarding from a firewall and want the traffic to go to a single host, then you use an IP address with a CIDR mask of /32 when installing your firewall rules (e.g., 1.1.1.1/32).

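The interplay of rule ordering, explicit deny, the /32 host entry, and implicit deny can be illustrated with a short first-match evaluator; the rule set and addresses below are hypothetical examples, not a real firewall's configuration:

```python
import ipaddress

# Hypothetical rule set, evaluated top-down; first match wins.
RULES = [
    ("deny",  ipaddress.ip_network("203.0.113.0/24"), None),  # explicit deny first
    ("allow", ipaddress.ip_network("1.1.1.1/32"),     443),   # single host via /32
    ("allow", ipaddress.ip_network("0.0.0.0/0"),      80),    # any source, HTTP
]

def evaluate(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for action, network, port in RULES:
        if src in network and port in (None, dst_port):
            return action   # first matching rule decides
    return "deny"           # implicit deny: nothing matched

print(evaluate("1.1.1.1", 443))      # allow (the /32 host rule)
print(evaluate("203.0.113.9", 443))  # deny (explicit deny rule)
print(evaluate("198.51.100.7", 22))  # deny (implicit deny)
```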

Chapter 18

41
Q

IDS/IPS

A

Monitors/prevents suspicious network activities
Trends: Emerging patterns in data/behavior
Signatures: Recognizable digital patterns

An Intrusion Detection System (IDS) is known as passive, as it takes no action to protect or defend your network beyond its role as an alarm system. It uses sensors and collectors to detect suspicious or unauthorized activities, sounding the alarm when potential threats are discovered.
Conversely, an Intrusion Prevention System (IPS) is more aggressive and actively protects a network by not only identifying suspicious activities but also taking swift action to actively block or mitigate threats, ensuring that the network remains resilient against potential threats. The network-based IPS (NIPS) is placed very close to the firewall to filter all traffic coming into the network. For this reason, it is considered an inline device.

The network-based versions of the IPS and IDS (called NIPS and NIDS, respectively) can only operate on a network, not on a host device. When the IPS and IDS are placed on computers, they are known as host-based versions. These are called HIDS and HIPS and can only protect the host, not the network.

Chapter 18

42
Q

6 Web filtering methods

A

Blocks unwanted online content
Agent-based filtering: Agent-based filtering deploys software agents on individual devices to enforce internet filtering rules, ensuring compliance with organizational policies. These agents act like cybersecurity detectives, scanning network components and services to identify potential threats. They also function as firewalls, fortifying network security by blocking connections based on customized rules. Moreover, they offer real-time protection at the host and application level, safeguarding every aspect of the network. These agents play a crucial role in defending against cyber threats by blocking attacks and patching live systems. Importantly, they operate autonomously, not requiring a central host, and can take security actions independently, even when not connected to the corporate network.
Centralized proxy filtering: In the world of web filtering, centralized proxy servers are the intermediaries, as they intercept and scrutinize each internet request, apply filtering rules, and only allow approved traffic to flow through.
**Uniform Resource Locator (URL) scanning:** URL scanning analyzes the web addresses you visit, comparing them against a database of known malicious sites. If it detects a match, it raises the alarm, ensuring you don't navigate into dangerous territory (illustrated in the sketch after this list).
Content categorization: Think of content categorization as organizing books in a library. Web filtering systems classify websites into categories such as "news," "social media," or "shopping." This helps organizations control access by allowing or blocking entire categories, ensuring that users stay productive and safe.
Block rules: Block rules allow administrators to specify which websites or types of content should be off-limits. If a user attempts to access a blocked resource, the web filter steps in, redirecting them away from danger.
Reputation-based filtering: Reputation-based filtering assesses the trustworthiness of websites based on their history. If a website has a bad reputation for hosting malware or engaging in malicious activities, this filter steps in to protect users from harm.
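
A toy sketch combining URL scanning, block rules, and content categorization; the domains, categories, and blocklist entries are invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical blocklist and category database.
BLOCKED_DOMAINS = {"malware.example.net", "phish.example.org"}
BLOCKED_CATEGORIES = {"gambling", "social media"}
CATEGORY_DB = {"casino.example.com": "gambling", "news.example.com": "news"}

def filter_url(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:                      # URL scanning
        return "block: known-malicious domain"
    if CATEGORY_DB.get(host) in BLOCKED_CATEGORIES:  # content categorization
        return "block: disallowed category"
    return "allow"

print(filter_url("https://casino.example.com/spin"))  # block: disallowed category
print(filter_url("https://news.example.com/today"))   # allow
```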

REMINDER
A WAF protects the web server and its application from attack. It operates on the application layer (Layer 7) of the OSI reference model.

Chapter 18

43
Q

Operating system security for Windows and Linux

A

System protection measures
Group Policy: Admin-set computer/user regulations. Group Policy is used to uniformly apply security settings in the form of a Group Policy object to users and computers within Active Directory. These policies can be configured at various levels, including the domain level, at which you can implement password policies so that the whole domain receives that policy, and the organizational unit (OU) level for policies with more granular control, such as a departmental security policy.
SELinux: Security-Enhanced Linux (SELinux) is a robust security mechanism that operates at the core of many Linux distributions. Its primary purpose is to provide fine-grained access control (for example, control over individual files) and mandatory access controls (MAC) to enhance the overall security of the Linux OS. Unlike traditional discretionary access controls (DAC), which grant users and processes considerable control over access to resources, SELinux imposes strict policies and enforces them rigorously.
SELinux maintains a security policy that defines what actions are allowed or denied for various system resources, such as files, processes, and network communications. This policy is enforced through a combination of kernel-level controls and user-space utilities. SELinux relies on the principle of least privilege, ensuring that each process and user can only access the resources necessary for their specific tasks. This means that even if an attacker gains control of a process, SELinux restrictions can limit the potential damage by preventing unauthorized access to sensitive system components.

Chapter 18

44
Q

Implementation of secure protocols

A

Adopting safe communication methods
Protocol selection: Protocol selection stands as the first line of defense for enterprises. By carefully choosing the correct secure protocols to govern data exchange within your organization, you establish the groundwork for a secure environment. It is important that cybersecurity personnel know why each of these secure protocols is used.
Port selection: As mentioned previously with the example of TV channels, each protocol uses different ports. Selecting the appropriate one, therefore, requires an understanding of which ports to open and which to keep closed on your firewall, according to the selected protocol. This helps you reduce the attack surface available to cybercriminals.
Transport method: The two different types of transport methods are TCP (connection-oriented) and UDP (connectionless).
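
A short sketch of that distinction in Python's standard socket API; the port number is a placeholder and only a loopback datagram is actually sent:

```python
import socket

# TCP is connection-oriented: a three-way handshake precedes any data.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP is connectionless: datagrams are sent with no handshake or session.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"fire-and-forget datagram", ("127.0.0.1", 5140))

udp_sock.close()
tcp_sock.close()
```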

Chapter 18

45
Q

4 things DNS filtering does

A

Blocks access to malicious sites: DNS filtering identifies and blocks access to malicious websites, preventing users from inadvertently stumbling upon phishing sites, malware distribution hubs, or known threat sources.
Content filtering: DNS filtering allows organizations to enforce content policies. It can restrict access to certain types of websites, such as social media or gambling sites, enhancing productivity and protecting against legal liabilities.
Enhancing privacy: DNS filtering can also provide a layer of privacy protection by blocking access to websites that may track or collect user data without their consent, safeguarding personal information.
Security reinforcement: By blocking access to malicious domains, DNS filtering fortifies network security, reducing the risk of cyberattacks and data breaches.

Chapter 18

46
Q

6 Email security encryption and authentication methods.

A

S/MIME: This uses Public Key Infrastructure (PKI) to either encrypt emails or digitally sign emails to prove the integrity of the message. It is very cumbersome, as it requires each user to exchange their public key with others and does not scale very well.
Pretty Good Privacy (PGP): With PGP, emails are encrypted end to end, meaning only the intended recipient can unlock and decipher the content, even if it is intercepted during transit. This secure email method relies on a pair of keys – a public key, which is shared openly, and a private key, closely guarded by the user. The sender encrypts the message with the recipient's public key, and only the recipient, possessing the corresponding private key, is able to decrypt it. It does not use PKI infrastructure.
Domain-based Message Authentication, Reporting, and Conformance (DMARC): DMARC stands as a robust email security protocol, empowering domain owners to precisely dictate the actions taken when their emails fail authentication tests. It provides instructions to email receivers (such as ISPs and email providers) on how to deal with messages that do not pass authentication – for example, a directive to quarantine or delete them.
DomainKeys Identified Mail (DKIM): DKIM is an email authentication method that enables a sender to digitally sign their email messages. These signatures are then validated by the recipient's email server to confirm the message's authenticity. This way, DKIM prevents email tampering when an email is in transit.
Sender Policy Framework (SPF): SPF is another email authentication mechanism. It checks whether the sender's IP address is authorized to send mail on behalf of a particular domain. Each sender needs to create a text (TXT) record in the DNS of their domain. When an email is received, the receiving email server checks the SPF record to verify that it has come from the legitimate sender. It helps prevent email spoofing and phishing by validating that the sending server is legitimately associated with the sender's domain (see the lookup sketch after this list).
Gateway: Email gateways serve as a crucial line of defense against various email threats, such as spam, malware, and phishing attacks. Gateways allow policies to be created based on attachments, malicious URLs, and content to prevent them from entering your mail server. They can also use data loss prevention to stop PII and sensitive data from leaving the network via email. They act as filters that inspect incoming and outgoing emails, applying security policies and checks to identify and block malicious content.
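
As an illustration of how SPF ties into DNS, here is a minimal lookup sketch assuming the third-party dnspython package (pip install dnspython); the domain and the sample policy in the comment are placeholders:

```python
import dns.resolver

def get_spf(domain):
    """Return the domain's SPF policy from its DNS TXT records, if any."""
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    return None

# A receiving mail server compares the connecting IP against this policy,
# e.g., "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all".
print(get_spf("example.com"))
```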

Chapter 18

47
Q

File integrity monitoring (FIM)

A

File Integrity Monitoring (FIM) safeguards systems by establishing a baseline of normal file and system configurations. It continuously monitors these parameters in real time, promptly alerting the security team or IT administrators when unauthorized changes occur. FIM helps mitigate threats early, ensures compliance with regulations, detects insider threats, protects critical assets, and provides valuable forensic assistance after security incidents.
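
A minimal sketch of the baseline-and-compare idea behind FIM, using only the standard library; the watched paths are illustrative, and a real FIM agent would run continuously rather than once:

```python
import hashlib
from pathlib import Path

def hash_file(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths):
    """Record the known-good hash of each watched file."""
    return {path: hash_file(path) for path in paths}

def check(baseline):
    """Re-hash each file and alert on any unauthorized change."""
    for path, expected in baseline.items():
        actual = hash_file(path)
        if actual != expected:
            print(f"ALERT: {path} changed "
                  f"(baseline {expected[:12]}..., now {actual[:12]}...)")

watched = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative paths
baseline = build_baseline(watched)
check(baseline)  # in practice, run on a schedule or on file-change events
```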

Chapter 18

48
Q

DLP

A

DLP prevents unauthorized or inadvertent leakage of PII and sensitive information, whether it’s through email or a USB drive. DLP operates on a foundation of pattern recognition and regular expressions. It scans the data within your network, searching for predefined patterns or expressions that match the criteria of sensitive information, such as credit card numbers, Social Security numbers, or proprietary business data. Once a match is detected, DLP takes action to prevent data loss.
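
A toy sketch of the pattern recognition DLP relies on; the regular expressions are deliberately simplified, and production DLP also validates matches (for example, with Luhn checks on card numbers):

```python
import re

# Simplified detection patterns for sensitive data.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan(text):
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

outgoing = "Invoice attached. Card 4111 1111 1111 1111, SSN 123-45-6789."
hits = scan(outgoing)
if hits:
    print(f"BLOCK transmission: detected {', '.join(hits)}")  # DLP takes action
```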

Chapter 18

49
Q

Network Access Control (NAC)

A

Controls network access based on policies

NAC ensures that every remote device is fully patched so that it is not vulnerable to attacks. The key components of NAC are as follows:
Agents: Every device subject to NAC has an agent installed so that health assessments can be carried out by the Health Authority (HAuth). There are two types of agents:
Permanent agents: These agents are installed on the host device, providing continuous monitoring and assessment
Dissolvable agents: Also known as “temporary” or “agentless” agents, these are deployed for single-use health checks, allowing for flexibility in assessment without long-term installations
Health authority: Following user authentication, the HAuth diligently inspects the client device’s registry to determine whether it is fully patched. A device that is up to date with all the necessary patches is labeled “compliant” and granted seamless access to the LAN. If a device has missing patches, it is categorized as “non-compliant” and redirected to what’s often referred to as a boundary network or quarantine network, where it will encounter a remediation server.
Remediation server: Positioned within the boundary or quarantine network, the remediation server plays a pivotal role. When a non-compliant device is redirected to this network, it gains access to the missing updates and patches from the remediation server. Once the device achieves a fully patched status, it is then permitted to access the LAN without compromising security.

Chapter 18

50
Q

Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR)

A

Monitors/responds to endpoint threats
While EDR focuses on protecting endpoints, XDR extends its reach to cover a broader spectrum of data sources, providing a more comprehensive and proactive approach to threat detection and response.
EDR
Data collection: EDR solutions continuously collect data from endpoints, including system logs, file changes, network activity, and application behavior.
Detection: Using a combination of signature-based and behavior-based analysis, EDR identifies anomalies and potentially malicious activities. It compares the collected data against known threat indicators and behavioral patterns.
Alerting: When EDR detects a suspicious activity or potential threat, it generates alerts and notifications for security personnel to investigate further. These alerts are often ranked by severity, allowing security teams to prioritize their responses.
Response: EDR empowers security teams to respond swiftly to threats. It provides tools to isolate compromised endpoints, contain threats, and remove malicious software.

XDR
Data integration: XDR integrates data from various sources, such as EDR, network detection and response (NDR), and cloud security, into a unified platform. This data consolidation enables cross-domain threat detection and correlation.
Advanced analytics: XDR employs advanced analytics and machine learning algorithms to detect complex, multi-stage attacks that may go unnoticed by traditional security solutions.
Automation and orchestration: XDR often includes automation and orchestration capabilities, allowing security teams to automate response actions and streamline incident response workflows.
Scalability: XDR is designed to scale with an organization's evolving needs, accommodating the growing complexity of modern cyber threats.

Chapter 18

51
Q

User behavior analytics

A

Analyzes user activities for anomalies

Chapter 18

52
Q

Provisioning user accounts

A

Provisioning is the process of creating, managing, and configuring user access rights to an organization's resources according to their job role. It involves the allocation of permissions and resources to new users, thereby enabling them to perform their roles effectively. The process incorporates several key steps, including the creation of user identities, assignment of privileges, and allocation of resources, which are tailored to meet individual user needs and organizational policies.

Chapter 19

53
Q

Deprovisioning user accounts

A

Deprovisioning a user account in this context refers to the process of disabling or removing access to a user’s account and associated resources when they are no longer authorized to use them. This could be due to an employee leaving the organization, a contractor completing their project, or any other reason for revoking access.

Chapter 19

54
Q

Permission assignments and implications

A

Permission assignment refers to the process of allocating specific rights and privileges to users in an organization. These permissions dictate the range of actions they can perform, the data they can access, and the extent of modifications they can make. The assignments are usually structured around the principle of least privilege, granting users the minimum levels of access—or permissions—they need to accomplish their tasks. Assigning users excessive permissions can lead to unauthorized access, data leaks, and security breaches.

Chapter 19

55
Q

Identity proofing

A

Identity proofing is the process of verifying a person’s identity to confirm the authenticity and legitimacy of their actions. It is the foundational step in the identity and access management lifecycle, helping organizations to mitigate fraudulent activities. Methods of identity proofing may require the presentation of certain forms of evidence such as a passport, driving license, or Social Security Number (SSN) for identification.

Chapter 19

56
Q

Federation

A

Federation services allow identity information to be shared across organizations and IT systems, normally for authentication purposes. The most common uses for federation services are joint ventures and cloud authentication, where third-party authentication is required. When two entities seek to do business on a joint project, rather than merge their entire IT infrastructures, they use federation services to authenticate the other third-party users for the
purposes of the joint project.

Federation services implemented via wireless connections are known as RADIUS Federation. Shibboleth is an open source implementation of federation services.

Chapter 19

57
Q

Single Sign-On (SSO) and its 3 types

A

Single Sign-On (SSO) is an authentication process that allows users to access multiple applications or services with a single set of credentials. It is designed to simplify user experiences by reducing the number of times users must log in to relevant applications or
devices to access various services—for example, a mail server. As there is no need to log in to every application separately, SSO significantly improves productivity and user satisfaction, while also reducing the time spent on password resets and support. However, it necessitates stringent security measures as any compromise of SSO credentials could potentially lead to unauthorized access to all linked services.
Kerberos authentication: Kerberos authentication uses Ticket Granting Tickets (TGTs) to obtain service tickets that provide access to network resources without users needing to re-enter credentials. This is an example of seamless authentication using SSO.
Open Authorization (OAuth): OAuth is an open standard for access delegation that is commonly used for internet-based authentication using tokens. It allows third-party services to access user information on other sites without exposing the user’s password. When a user logs in using OAuth, they authenticate against an authorization server and receive a token that they can use to access various services. The services can use the token to understand the user’s identity and permissions. OAuth lets users access multiple applications without needing to present their credentials a second time, resulting in SSO functionality (a minimal token sketch follows this list).
Security Assertion Markup Language (SAML): SAML is an XML-based standard used to exchange authentication and authorization data between third parties. It allows a user to log in to multiple applications with a single username and password by sharing the user’s credentials and attributes across the connected systems securely. Federation services use SAML to ensure secure communication between the identity provider and the service provider and enhance security by eliminating the need to store user credentials at multiple locations. It is pivotal in facilitating SSO and is highly extensible, allowing organizations to implement customized solutions according to their specific needs.
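As a rough illustration of the token flow OAuth enables, here is a minimal Python sketch in which an “authorization server” issues a signed token carrying the user’s identity and permissions, and a service verifies it without ever seeing the user’s password. The HMAC scheme, secret, and claim names are assumptions for demonstration; real OAuth deployments use a dedicated authorization server and standardized token formats such as JWTs.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by the "authorization server"; real OAuth
# deployments use standard token formats rather than this hand-rolled scheme.
SECRET = b"demo-signing-key"

def issue_token(user: str, scopes: list[str], ttl: int = 3600) -> str:
    """Authorization server: issue a signed token carrying identity claims."""
    claims = {"sub": user, "scopes": scopes, "exp": int(time.time()) + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> dict | None:
    """Service: check the signature and expiry; the password is never seen."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None  # expired -> None

token = issue_token("alice", ["mail.read"])
print(verify_token(token))  # claims dict -> access granted without re-login
```

Because every service trusts the issuer’s signature, a user who authenticates once can present the same token everywhere, which is what delivers the SSO experience.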

Chapter 19

58
Q

Interoperability

A

Interoperability is the ability of different platforms, systems, or technologies to work together (inter-operate) seamlessly or to exchange and use information in a compatible and effective
manner.

Chapter 19

59
Q

Attestation

A

Attestation in IAM involves verifying the specific attributes, conditions, or credentials of an entity. This validation is supplied by a trusted source or authority, such as certificates, tokens, federation, or Active Directory:
Certificates, issued by trusted Certificate Authorities (CAs), function as digital passports, serving to confirm the legitimacy of entities and ensuring secure and encrypted communication across networks.
Tokens, frequently employed in OAuth, provide a secure means to confirm user identity and privileges, thereby granting controlled access to valuable resources.
Federation serves as a mechanism to establish cross-domain trust and enables seamless resource-sharing among diverse organizations, confirming user identities and facilitating SSO capabilities.
Microsoft’s Active Directory, a directory service for Windows domain networks, contributes to attestation by managing user data, safeguarding resources, and enforcing policies that uphold the integrity and security of networked environments. Together, these technologies reinforce the attestation process.

Chapter 19

60
Q

6 types of Access controls

A

Mandatory access controls: Enforcing strict access rules
Discretionary access controls: Where users control access to their data
Role-based access controls: Access is based on user roles
Rule-based access controls: Access is determined by specific rules
Attribute-based access controls: Access is based on user attributes
Time-of-day restrictions: Access is based on the time

Least privilege: Providing the minimum necessary access (see the sketch below)
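The Python sketch below combines three of these models in a few lines: role-based access (a role maps to resources), rule-based access (a time-of-day rule), and least privilege (unknown roles get nothing). The roles, resources, and working hours are invented for illustration.

```python
from datetime import datetime

# Illustrative role-to-resource map (role-based) plus a time-of-day rule
# (rule-based); the roles, resources, and hours are invented.
ROLE_ACCESS = {"admin": {"payroll", "hr"}, "clerk": {"payroll"}}
WORK_HOURS = range(9, 17)  # access permitted 09:00-16:59 only

def is_allowed(role: str, resource: str, now: datetime) -> bool:
    if now.hour not in WORK_HOURS:  # rule-based / time-of-day restriction
        return False
    # role-based check; unknown roles get nothing (least privilege)
    return resource in ROLE_ACCESS.get(role, set())

print(is_allowed("clerk", "payroll", datetime(2024, 1, 8, 10)))  # True
print(is_allowed("clerk", "hr", datetime(2024, 1, 8, 10)))       # False
print(is_allowed("admin", "hr", datetime(2024, 1, 8, 22)))       # False (after hours)
```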

61
Q

3 types of Multi-Factor Authentication (MFA) implementations

A

Biometrics: Unique physical or behavioral characteristics
Hard authentication tokens: Using a physical device
Soft authentication tokens: Software-generated codes, such as a one-time PIN from an authenticator app

Chapter 19

62
Q

5 factors of Multi-Factor Authentication (MFA)

A

Something you know: This involves knowledge-based information such as usernames, passwords, PINs, or dates of birth and functions as the initial layer of security in many systems.
Something you have: This factor relates to the possession of physical objects including secure tokens, key fobs, and smart cards. A hardware token, for example, generates a unique PIN periodically, and a proximity card grants access when in close range to the corresponding reader (a minimal token-code sketch follows this list).
Something you are: Biometric authentication falls under this category, using unique physiological or behavioral attributes of individuals for verification, such as fingerprints, vein, retina, or iris patterns, and voice.
Something you do: This encompasses actions performed by users, such as swiping a card or typing, and can include behavioral biometrics such as gait (that is, the way you walk), keystroke dynamics, or signature analysis.
Somewhere you are: Location-based factors consider the geographic location of the user, adding another layer of contextual security and ensuring users access systems from secure and approved locations.
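Hard and soft tokens in the “something you have” category commonly generate time-based one-time passwords (TOTPs). The following Python sketch implements the RFC 6238 algorithm using only the standard library; the base32 secret is a made-up enrollment value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password, standard library only."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval       # current 30-second window
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Hypothetical base32 secret shared between the token and the server
print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code that changes every 30 seconds
```

The server holds the same secret and computes the same code for the current 30-second window, so a matching code is evidence of possession of the token.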

Chapter 19

63
Q

Compliance monitoring

A

Ensuring adherence to regulations
Due diligence/care: Ensure facts are correct
Attestation and acknowledgment: Confirming compliance and recognizing it
Internal and external: Monitoring within and outside the organization
Automation: Automated processes and controls for efficiency

64
Q

Privacy

A

Protecting individuals’ personal information and rights
Legal implications: Legal consequences and obligations
Local/regional: Regulations specific to local or regional areas
National: Regulations at the national level
Global: Worldwide data protection regulations
Data subject: Individuals whose data is processed
Controller: Entity that determines data processing purposes
Processor: Entity processing data on behalf of the controller
Ownership: Legal rights to data control
Data inventory: Cataloging and managing data assets
Data retention: Policies for data storage duration
Right to be forgotten: Individuals’ right to have their data erased

65
Q

8 Principles for secure passwords

A

Password length: Password length refers to the number of characters or digits in a password, and longer passwords are generally considered more secure as they are harder to guess or crack through brute-force attacks.
Password complexity: Often referred to as “strong passwords,” complex passwords contain elements from at least three out of four groups: lowercase letters, uppercase letters, numbers, and special characters not commonly used in programming (a minimal policy-check sketch follows this list).
Password reuse: Password reuse is the same as password history but used by various products, including smartphones and email applications. Policies around both serve to prevent the recycling of old passwords, which could represent a security risk.
Password expiry: Password expiry is a security measure that requires users to change their passwords after a set period to reduce the risk of unauthorized access.
Password age: Password age policies, which include minimum and maximum password ages, are implemented to enhance security by encouraging users to regularly update their passwords.
Minimum password age: This policy prevents users from changing their password too frequently, which could be a security risk, by setting a minimum period that must elapse
before a password can be changed again. This prevents users from repeatedly cycling through a small set of passwords.
Maximum password age: This policy sets a maximum period after which a user’s password must be changed, reducing the risk of unauthorized access due to long-term, potentially compromised passwords. It ensures that passwords are refreshed periodically.
Account lockout: This policy determines how many incorrect login attempts a user can make before the system locks them out. Companies often set it to three or five failed attempts before locking the account.
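Here is a minimal Python sketch of how the length, complexity, and reuse principles above might be enforced; the 12-character minimum and the history list are assumed policy values, not figures from the text.

```python
import string

MIN_LENGTH = 12                              # assumed policy value
HISTORY = {"Winter2023!", "Spring2024!"}     # hypothetical previous passwords

def check_password(candidate: str) -> list[str]:
    """Return the policy violations for a proposed password."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append("too short")
    groups = [string.ascii_lowercase, string.ascii_uppercase,
              string.digits, string.punctuation]
    # complexity: characters from at least three of the four groups
    if sum(any(c in g for c in candidate) for g in groups) < 3:
        problems.append("not complex enough")
    if candidate in HISTORY:                 # reuse/history check
        problems.append("previously used")
    return problems

print(check_password("Winter2023!"))     # ['too short', 'previously used']
print(check_password("CorrectHorse7!"))  # []
```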

Chapter 19

66
Q

Privileged Access Management (PAM)

A

Privileged Access Management (PAM) is a practice that restricts and protects administrative rights for administrator, privileged, service, and root accounts. To do so, PAM uses ephemeral credentials, meaning that they are single-use only and normally have a time limit.

Chapter 19

67
Q

3 Privileged access management (PAM) tools

A

Just-in-time permissions: Granting temporary access as needed

Password vaulting: Safely storing and managing passwords

Ephemeral credentials: Short-lived access tokens for security (see the sketch below)
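The following minimal Python sketch illustrates just-in-time, ephemeral credentials: a token is issued with a time limit and invalidated on first use. A real PAM product would broker this server-side, typically backed by a password vault.

```python
import secrets
import time

issued: dict[str, float] = {}  # token -> expiry timestamp (in-memory demo)

def grant_just_in_time(ttl_seconds: int = 300) -> str:
    """Issue a short-lived token for a single privileged task."""
    token = secrets.token_urlsafe(32)
    issued[token] = time.time() + ttl_seconds
    return token

def redeem(token: str) -> bool:
    """Single use: the token is removed the first time it is redeemed."""
    expiry = issued.pop(token, None)
    return expiry is not None and time.time() < expiry

t = grant_just_in_time()
print(redeem(t))  # True  (first use, within the time limit)
print(redeem(t))  # False (already consumed)
```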

Chapter 19

68
Q

9 Use cases of automation and scripting

A

User provisioning: User provisioning ensures that user accounts are created, configured, and granted appropriate access rights swiftly and accurately. This not only minimizes manual overhead but also reduces the risk of errors and unauthorized access.
Resource provisioning: Resource provisioning automation allows organizations to allocate and de-allocate resources, such as virtual machines, storage, and network resources, as needed. This dynamic allocation ensures resource optimization, cost efficiency, and scalability, aligning IT infrastructure with business needs.
Guard rails: Automation and scripting can establish guard rails by enforcing predefined policies and configurations. This ensures that all systems and resources operate within specified parameters to reduce the potential for misconfigurations or security vulnerabilities (a minimal baseline-check sketch follows this list).
Security groups: Automation enables the creation and management of security groups by defining who can access specific resources or services. This granular control over access helps organizations bolster their security posture by limiting exposure to potential threats.
Ticket creation: Automated ticket creation and tracking can enhance IT support and incident response. When an issue arises, a script can generate a ticket, prioritize the support call, assign it to the appropriate team, and then track its progress, ensuring swift resolution and
accountability.
Escalation: In the event of a critical incident, automation can trigger predefined escalation procedures, in which the call is raised as a high priority and dealt with immediately. This ensures that incidents are addressed promptly, that the right personnel are involved at the right time, and that downtime and potential damage are minimized. A SOAR system uses artificial intelligence and machine learning when analyzing incidents and can automatically notify the Security Operations Center (SOC) of any critical incidents that it identifies.
Enabling/disabling services and access: Automation scripts can be used to automate the enabling or disabling of services and access within systems.
Continuous integration and testing: In software development, automation and scripting play a pivotal role in continuous integration and testing, in which multiple developers write code independently before merging (integrating) the completed code. The next phase in
automation and scripting is called continuous verification and validation, in which developers run automated tests to validate code changes, ensuring that they meet quality standards and do not introduce vulnerabilities.
Integrations and application programming interfaces (APIs): APIs play an essential role in the automation and streamlining of complex processes by linking together the tools and systems they rely on. Whether it’s automating data transfers between databases, triggering actions in response to specific events, or enabling cross-platform functionality, integrations and APIs are the architects of efficiency.
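As a small illustration of guard rails, the Python sketch below compares a live configuration against an enforced baseline and reports drift for remediation. The setting names and required values are assumptions.

```python
# Enforced baseline; the setting names and required values are assumptions.
BASELINE = {
    "password_min_length": 12,
    "firewall_enabled": True,
    "ssh_root_login": False,
}

def check_guard_rails(live_config: dict) -> list[str]:
    """Return the settings that have drifted from the baseline."""
    return [key for key, required in BASELINE.items()
            if live_config.get(key) != required]

live = {"password_min_length": 8, "firewall_enabled": True,
        "ssh_root_login": False}
print(check_guard_rails(live))  # ['password_min_length'] -> remediate or alert
```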

Chapter 20

69
Q

7 Benefits of automation and scripting

A

Efficiency/time saving: At the core of automation and orchestration lies time-saving efficiency. Tedious, repetitive tasks that once consumed valuable hours are now executed swiftly and accurately by automated processes. This newfound efficiency allows staff to be redirected toward strategic tasks and innovation, increasing productivity throughout the
organization.
Enforcing baselines: Automation ensures that systems consistently adhere to predefined baselines and configurations. This consistency is a foundation for security, as it minimizes the risk of misconfigurations or deviations that could open doors to vulnerabilities. In essence, automation ensures standardization.
Standard infrastructure configurations: The deployment of standardized infrastructure configurations is very easy with automation. Whether it’s setting up network firewalls, server environments, or access control lists, automation ensures that these configurations are deployed consistently and securely across an organization without the
risk of human error, thereby reducing the potential for security gaps.
Scaling in a secure manner: Organizations that are successful and expanding must evolve to meet the demands of that growth. Automation and orchestration enable the seamless scaling of resources while maintaining security. Resources can be provisioned and de-provisioned efficiently to adapt to fluctuating workloads without compromising security.
Employee retention: Automation relieves employees of the burden of repetitive, mundane tasks, allowing them to focus on more intellectually stimulating and meaningful work. This can boost job satisfaction and, consequently, employee retention, as team members are engaged in more rewarding tasks.
Reaction time: In the world of cybersecurity, speed is of the essence. Automation ensures that threat detection and response are lightning-fast. Security incidents are identified, assessed, and acted upon in real time, reducing potential damage and downtime.
Workforce multiplier: Automation acts as a force multiplier for the workforce. A well-orchestrated system can automate complex, multi-step processes, reducing both the time workers spend on repetitive tasks and the number of mistakes employees make. SOAR systems search mundane log files so that your staff can be freed up for more important strategic tasks that cannot be automated.

Chapter 20

70
Q

5 issues with automation to consider

A

Complexity: While automation promises streamlined operations, it can introduce a layer of complexity to the management and oversight of systems. Automated workflows, scripts, and processes must be carefully designed and maintained, and they can become more intricate as
security needs evolve and more steps, triggers, and protocols are added. Overly complex automation may hinder adaptability and result in unintended consequences, such as difficulty in troubleshooting or scaling issues, if not managed effectively.
Cost: Automation can be a substantial investment, both in terms of technology and human resources. The initial costs associated with implementing automation tools and training personnel should be weighed against the expected benefits, such as efficiency gains and
improved security. Understanding the long-term cost-benefit analysis is crucial for making decisions about what should be automated and how.
Single point of failure: While automation enhances operational efficiency, if not diversified, it can also create a single point of failure, meaning a single component could crash a whole system if it fails. Because of this, relying on a single automation system or process to
handle critical security functions can pose a significant risk to your operations. Implementing redundancy and failover mechanisms is essential to ensure the continued security of your network, even in the face of automation failures.
Technical debt: In the rush to automate security operations, organizations may resort to quick fixes, such as easy-to-implement automation, that accumulate technical debt over time. Technical debt refers to the extra time it will take to compensate for issues that arise when shortcuts are taken or when automation is implemented without considering long-term maintainability. This debt can lead to increased security risks and operational challenges in the future.
Ongoing supportability: Automation solutions require ongoing support, maintenance, and updates. Neglecting these aspects can lead to outdated or vulnerable systems. Evaluating the sustainability of automation solutions and ensuring that they align with an organization’s
long-term security strategy is essential to maintaining their effectiveness.

Chapter 20

71
Q

7 Sequential steps for effective incident management

A

Preparation: In the preparation phase, organizations establish and maintain incident response plans. These plans should be regularly updated to address evolving threats. This is the stage at which the Cybersecurity Incident Response Team (CSIRT) is assembled and a discreet communication plan is established to notify them about any new incidents without alerting the general public; details should only become available to the public after the incident has been contained. Additionally, system configurations, network diagrams, and an inventory of critical assets should be documented to assist in the response process.
Detection: In the incident response playbook, detection is the early warning system. This starts with the identification of abnormal behaviors, which can be accomplished by placing an EDR agent on the endpoint. The IDS monitors the network, generating log files for all devices, which are then gathered by the log collector (a Syslog server) and reviewed by the SIEM server to provide real-time monitoring.
Analysis: At the analysis stage, SIEM takes the lead, using correlation techniques to analyze the type of incident flagged and prioritizing its impact and category. Frameworks such as MITRE ATT&CK, the Cyber Kill Chain, or the diamond model of intrusion analysis are crucial here: they provide structure, a common language, and a systematic approach to understanding, detecting, and responding to cyber threats and incidents. By incorporating these frameworks into your security strategy, you can better protect your organization’s assets and data.
Containment: In the containment stage, the primary goal is to limit the incident’s impact. This often involves isolating affected systems or quarantining them to prevent the attack from spreading. Simultaneously, volatile evidence (such as running processes and network connections) should be collected for analysis, and any compromised user accounts or access credentials should be disabled.
Eradication: Eradication focuses on destroying the root cause of the incident. For example, if malware is detected, efforts should be made to remove it completely. This may involve patching systems, deleting infected files, or disabling unnecessary services to protect the environment against future attacks.
Recovery: In the recovery phase, the organization aims to restore its operations to a normal state. This includes activities like data restoration, in which essential systems (such as domain controllers) are brought back online once they are clean and secure. The goal is to meet the Recovery Point Objective (RPO) as closely as possible; the RPO defines the maximum acceptable amount of data loss, measured as the period of time between the last good backup and the incident.
Lessons Learned: After the incident has been effectively contained and resolved, it’s essential to conduct a post-incident analysis. This Lessons Learned phase involves reviewing how the incident was handled to identify the strengths and weaknesses of the organization’s response. The insights gained here help organizations refine their incident response plans and take preventive measures to reduce the likelihood of similar incidents in the future.

Chapter 21

72
Q

2 types of Testing cybersecurity staff

A

Validating response plans with exercises and simulations:

Tabletop exercise: Collaborative scenario testing for response plan assessment
Simulation: Realistic, hands-on practice to assess incident response strategies

Chapter 21

73
Q

Root Cause Analysis

A

Unearthing why incidents occurred

Chapter 21

74
Q

4 phases of Digital forensics

A

1. Collection: Law enforcement collects evidence from a crime scene, ensuring that the integrity of the evidence is maintained and that it is bagged and tagged ready for a forensic examination.
2. Examination: Prior to examination, the data will be hashed, and then an investigation will be carried out with the relevant forensic tool. When the examination has concluded, the data is once again hashed to ensure that neither the examiner nor the tools have tampered with it (a minimal hashing sketch follows this list).
3. Analysis: When all of the forensic data has been collected, it is analyzed using legal methods and then transformed into information that can be used as evidence.
4. Reporting: After the forensics team creates a comprehensive report, it can be presented as evidence in legal proceedings. This report serves as a powerful instrument in securing convictions by offering a clear and concise account of the investigative journey.
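The before-and-after hashing in the examination phase can be as simple as the following Python sketch, which hashes a (hypothetical) evidence image in chunks with SHA-256 and asserts that the digest is unchanged after examination.

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hash evidence in chunks so large images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "evidence.img" is a hypothetical disk image acquired during collection
before = sha256_file("evidence.img")  # hash taken before the examination
# ... examination with the relevant forensic tool happens here ...
after = sha256_file("evidence.img")   # hash taken again afterward
assert before == after, "evidence integrity check failed"
```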

Chapter 21

75
Q

6 Digital forensics evidence-gathering procedures

A

Delving into digital artifacts for evidence:

Legal hold: Safeguarding evidence from alteration or deletion
Chain of custody: Documenting evidence handling meticulously
Acquisition: Collecting digital evidence for analysis
Reporting: Documenting findings and actions taken
Preservation: Safeguarding digital evidence from tampering
E-Discovery: Electronic evidence retrieval for legal purposes

Chapter 21

76
Q

7 Log data types

A

Detailed records crucial for investigations:
Firewall logs: Track network traffic and security breaches
Application logs: Capture user interactions and errors
Endpoint logs: Document user activities and security events
OS-specific security logs: Record system-level security activities
IPS/IDS logs: Identify network threats and patterns
Network logs: Record data flow and network performance
Metadata: Provides context to enhance investigations

REMINDER: Ensure you know the different types of data logs.

Chapter 22

77
Q

4 Data sources

A

Vital elements in cybersecurity investigations:
Vulnerability scans: Identify and prioritize system weaknesses
Credentialed vulnerability scanning provides the scanner with valid credentials, such as usernames and passwords, allowing it to access and inspect the inner workings of your devices and applications.
Non-credentialed vulnerability scanning, on the other hand, has restricted access, meaning that it can only see what an attacker with no permissions could see on your network.
Automated reports: Offer real-time insights and efficiency
Dashboards: Visualize critical data for real-time monitoring
Packet captures: Support forensics and network analysis. Packets are the data that runs up and down our network. By capturing packets, cybersecurity administrators can analyze what is happening on the organization’s network.

REMINDER:
Ensure you know the differences between non-credentialed and credentialed vulnerability scanners.

Chapter 22