Security Operations - Quiz Revision Flashcards
Which of the following data sources would provide you with information about potential weaknesses and security flaws within a network or system?
A) IDS/IPS logs
B) Automated reports
C) Vulnerability scans
D) Network logs
Vulnerability scans provide information about potential weaknesses and security flaws within a network or system. Vulnerability scans involve automated assessments of network devices, servers, and applications to identify potential weaknesses and security vulnerabilities. These scans analyze system configurations, software versions, and patch levels to detect known vulnerabilities and misconfigurations that could be exploited by attackers. Reports generated by these vulnerability scans provide detailed information about identified vulnerabilities, including severity ratings, affected systems, and recommended remediation steps. By conducting vulnerability scans regularly, organizations can proactively identify and address security risks, thereby enhancing the overall security posture of their network and systems.
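The findings in a scan report can be triaged programmatically. Below is a minimal Python sketch that sorts a handful of findings by severity; the findings list and its field names are hypothetical, since real scanners such as Nessus or OpenVAS each export their own report formats.

# Sort hypothetical scan findings so the most severe are remediated first.
findings = [
    {"host": "10.0.0.5", "cve": "CVE-2023-0001", "cvss": 9.1},
    {"host": "10.0.0.7", "cve": "CVE-2023-0002", "cvss": 4.3},
    {"host": "10.0.0.9", "cve": "CVE-2023-0003", "cvss": 7.7},
]
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f'{f["host"]}: {f["cve"]} (CVSS {f["cvss"]})')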
What are Automated reports?
Automated reports may include various types of information generated by security tools and systems, such as IDS/IPS alerts, network activity summaries, and system status reports. While automated reports can provide valuable insights into security events and operational metrics, they do not specifically focus on identifying potential vulnerabilities or weaknesses within the network or system. Automated reports may complement other data sources in a cybersecurity investigation but are not primarily used for assessing security vulnerabilities.
What are IPS/IDS logs?
Intrusion Prevention System (IPS) and Intrusion Detection System (IDS) logs record alerts and events related to security threats detected within the network. These logs contain information about suspicious network traffic, attempted intrusions, and malicious activities identified by the IPS/IDS sensors. While they can help identify suspicious network activity, they do not find unexploited weaknesses or vulnerabilities, such as unpatched software, open unnecessary ports, or affected systems. They identify and record information about issues as they are discovered. They record what is “happening now”, as opposed to alerting that you may have a potential problem in the future.
What do network logs capture?
Network logs capture information about network traffic, including source and destination IP addresses, port numbers, protocols, and packet details. These logs are generated by network devices such as routers, switches, and firewalls, and they provide insights into network activity and communication patterns. Like IPS and IDS logs, they do not analyze endpoints or the overall security posture of the network.
Management at your company has requested that you implement DLP. What is the purpose of this technology?
A) It monitors data on computers to ensure the data is not deleted or removed.
B) It protects against malware.
C) It allows organizations to use the Internet to host services and data remotely instead of locally.
D) It implements hardware-based encryption.
Data Loss Prevention (DLP) is a network system that monitors data on computers to ensure the data is not deleted or removed. If your organization implements a DLP system, you can prevent users from transmitting confidential data to individuals outside the company.
What is cloud computing?
Cloud computing is a technology that allows organizations to use the Internet to host services and data remotely instead of locally.
What is TPM?
Trusted Platform Module (TPM) and Hardware Security Module (HSM) are both chips that implement hardware-based encryption. The main difference between the two is that a TPM chip is usually mounted on the motherboard and HSM chips are PCI adapter cards.
What different solutions are provided with DLP?
DLP provides different solutions based on data location:
Network-based – deals with data in motion and is usually located on the network perimeter.
Storage-based – operates on long-term storage (archive).
Endpoint-based – operates on a local device and focuses on data-in-use.
Cloud-based – operates in the cloud on data in use, in motion, and at rest.
What services does DLP provide?
DLP identifies and controls endpoint ports and blocks access to removable media by providing the following services:
Identify removable devices connected to your network by type (USB thumb drive, DVD burner, mobile device), manufacturer, model number, and MAC address.
Control and manage removable devices through endpoint ports, including USB, Wi-Fi, and Bluetooth.
Require encryption, limit file types, and limit file size.
Provide detailed forensics on device usage and data transfer by person, time, file type, and amount.
DLP includes USB blocking, cloud-based, and email services.
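As an illustration of the content-inspection side of these services, the following Python sketch flags outbound text that matches simple sensitive-data patterns. The two regex rules are assumptions for demonstration; production DLP products use much richer fingerprinting and policy engines.

import re

# Hypothetical DLP-style rules: flag text containing sensitive patterns.
RULES = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect(text):
    # Return the names of any rules the outbound text violates.
    return [name for name, rx in RULES.items() if rx.search(text)]

print(inspect("Please wire funds; my SSN is 123-45-6789."))  # ['US SSN']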
Management has decided to install a network-based intrusion detection system (NIDS). What is the primary advantage of using this device?
A) It is low maintenance.
B) It launches no counterattack on the intruder.
C) It has a high throughput of the individual workstations on the network.
D) It has the ability to analyze encrypted information.
The primary advantage of an NIDS is the low maintenance involved in analyzing traffic in the network. An NIDS is easy and economical to manage because signatures are not configured on all the hosts in a network segment. Configuration usually occurs at a single system, rather than on multiple systems. By contrast, host-based intrusion detection systems (HIDSs) are difficult to configure and monitor because an intrusion detection agent must be installed on each individual workstation of a given network segment. HIDSs are configured to use the operating system audit logs and system logs, while NIDSs actually examine the network packets.
Can packets that travel through a VPN be read by an NIDS?
Individual hosts do not need real-time monitoring because intrusion is monitored on the network segment on which the NIDS is placed, and not on individual workstations. An NIDS is not capable of analyzing encrypted information. For example, packets that travel through a Virtual Private Network (VPN) tunnel cannot be analyzed by the NIDS. The lack of this capability is a primary disadvantage of an NIDS.
Does an NIDS affect the throughput of a workstation?
The high throughput of the workstations in a network does not depend on the NIDS installed in the network. Factors such as the processor speed, memory, and bandwidth allocated affect the throughput of workstations.
Do network switches affect an NIDS in a switched network?
The performance of an NIDS can be affected in a switched network environment because the NIDS cannot properly analyze traffic on network segments where it does not reside. An HIDS is not adversely affected by a switched network because it is primarily concerned with monitoring traffic on individual computers.
You need to restrict access to resources on your company’s Windows Active Directory domain. Which criteria can be used to restrict access to resources?
A) location
B) all of these choices
C) transaction type
D) time of day
E) roles
F) groups
Roles, groups, location, time of day, and transaction type can all be used to restrict access to resources. Regardless of the criteria used, access administration can be simplified by grouping objects and subjects. Access control lists (ACLs) can be used to assign users, groups, or roles access to a particular resource. If you implement time of day restrictions with ACLs, security is improved.
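As a sketch of how a time-of-day restriction might be evaluated, the following Python example allows access only during assumed business hours of 08:00 to 18:00; actual enforcement would happen in Active Directory or on the resource's ACL, not in a standalone script.

from datetime import datetime, time

BUSINESS_START, BUSINESS_END = time(8, 0), time(18, 0)  # assumed hours

def access_allowed(now):
    # Permit access only between the start and end of business hours.
    return BUSINESS_START <= now.time() <= BUSINESS_END

print(access_allowed(datetime(2024, 1, 15, 22, 30)))  # False: after hours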
Examine the following exhibit. Based on the output, what is displayed?
A) A firewall log
B) A packet capture
C) Metadata
D) An endpoint log
The exhibit is a Windows Firewall log. Firewall logs record every attempt to access the network. They record information such as source and destination IP addresses, ports, protocols, and connection status. Analyzing firewall logs can help security analysts identify unauthorized access attempts, suspicious network behavior, and potential threats such as port scanning or denial-of-service (DoS) attacks. Firewall logs are valuable for monitoring and enforcing network security policies and detecting anomalous activities at the network perimeter.
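The following Python sketch parses one firewall log line into its fields and flags a blocked RDP probe. The line is fabricated, and the field order is assumed from the default #Fields: header of a Windows Firewall pfirewall.log; verify it against the header in your own log.

# Assumed layout: date time action protocol src dst src-port dst-port size
sample = "2024-01-15 10:32:07 DROP TCP 203.0.113.9 192.168.1.20 51515 3389 60"

date, tstamp, action, proto, src, dst, sport, dport = sample.split()[:8]
if action == "DROP" and dport == "3389":
    print(f"Blocked RDP attempt from {src}")  # flag suspicious RDP probes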
Firewall logs are a type of network log. Network logs capture information about network traffic, including source and destination IP addresses, port numbers, protocols, and packet details. These logs are generated by network devices such as routers, switches, and firewalls, and they provide insights into network activity and communication patterns. The following exhibit shows output from the netstat command:
What is an example of an endpoint log?
Endpoint logs record events and activities generated by endpoints such as desktops, laptops, and mobile devices. These logs contain information about user logins, file access, system processes, and network connections initiated by endpoint devices. Analyzing endpoint logs can help detect security threats such as malware infections, insider attacks, and unauthorized access attempts. Endpoint logs are crucial for monitoring endpoint security, investigating security incidents, and identifying potential security risks or vulnerabilities within the organization’s endpoint infrastructure.
An example is the Windows Event Viewer.
What is packet capture?
Packet captures, also known as packet sniffing or network traffic analysis, capture and record data about individual network packets exchanged between devices on a network. Packet captures provide a more detailed record of network traffic than does a network log, including source and destination IP addresses, port numbers, protocols, packet payloads, and communication patterns. By analyzing packet captures, cybersecurity investigators can identify potential security threats, such as malicious activities, network intrusions, and data exfiltration attempts. The following example shows a packet capture made using Wireshark, with a detail pane expanding on packets captured from source IP address 10.2.2.101.
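The Wireshark exhibit referenced above is an interactive capture. As a programmatic illustration, the sketch below uses the third-party scapy library (pip install scapy) to capture and summarize ten packets; capturing normally requires administrator or root privileges.

from scapy.all import IP, sniff

def show(pkt):
    # Print source, destination, and a one-line summary for IP packets.
    if IP in pkt:
        print(pkt[IP].src, "->", pkt[IP].dst, pkt.summary())

sniff(count=10, prn=show)  # capture ten packets, then stop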
What is metadata?
Metadata refers to descriptive information about data, such as the time stamps, file sizes, sender/recipient details, and other attributes associated with files or communication sessions. As an example, metadata associated with an email communication may include the sender’s email address, recipient’s email address, subject line, time and date of transmission, and file attachments. Analyzing this metadata can help investigators trace the source of suspicious emails, identify potential data exfiltration attempts, and reconstruct the timeline of cyber incidents. Metadata is valuable for understanding the characteristics and context of data exchanges, enabling investigators to gain a deeper understanding of security events and formulate effective response strategies.
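Email header metadata of this kind can be pulled out with the Python standard library, as in the sketch below; the raw message is fabricated for illustration.

from email import message_from_string

raw = (
    "From: alice@example.com\n"
    "To: bob@example.com\n"
    "Subject: Quarterly report\n"
    "Date: Mon, 15 Jan 2024 10:32:07 -0500\n"
    "\n"
    "Body text here.\n"
)

msg = message_from_string(raw)
for header in ("From", "To", "Subject", "Date"):
    print(f"{header}: {msg[header]}")  # metadata, independent of the body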
You need to remove data from a storage media that is used to store confidential information. Which method is NOT recommended?
A) zeroization
B) degaussing
C) destruction
D) formatting
Formatting is not a recommended method. Formatting or deleting the data from a storage media, such as a hard drive, does not ensure the actual removal of the data, but instead removes the pointers to the location where the data resides on the storage media. The residual data on the storage media is referred to as data remanence. The main issue with media reuse is remanence. The residual data can be recovered by using data recovery procedures. This can pose a serious security threat if the erased information is confidential in nature. Sanitization is the process of wiping the storage media to ensure that its data cannot be recovered or reused. Sanitization includes several methods, such as zeroization, degaussing, and media destruction. All of these methods can be used to remove data from storage media, depending on the type of media used. Most storage media with a magnetic base can be sanitized. However, CDs and DVDs often cannot be degaussed. If this is the case, the only option is physical destruction of the CD or DVD.
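As a rough illustration of zeroization at the file level, the Python sketch below overwrites a file's contents with zeros before deleting it. This is only illustrative: on SSDs and journaling filesystems, wear leveling and file-system copies can leave remnants, so real sanitization relies on device-level secure-erase commands, degaussing, or destruction.

import os

def zeroize(path):
    # Overwrite every byte with zeros, flush to disk, then delete the file.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)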
How can you implement proper controls when ensuring data security?
When implementing appropriate controls to ensure data security, you need to design the appropriate data policies, including the following:
Data wiping – ensures that the contents of the media are no longer accessible.
Data disposal – destroys the media to ensure that it is unusable.
Data retention – ensures that data is retained for a certain period. The data retention policies should also define the different data types and data labeling techniques to be used.
Data storage – ensures that data is stored in appropriate locations. In most cases, two copies of data should be retained and placed in different geographic locations.
No matter which type of media you must dispose of or reuse, you need to ensure that your organization understands the legal and compliance issues that will affect the disposal. Certain types of protected data, such as personally identifiable information (PII) or personal health information (PHI), may require special handling when stored on media.
You want to configure password policies that ensure password strength. Which password setting most affects a password’s strength?
A) password history
B) password age
C) password complexity
D) password lockout
Password complexity is the most important setting to ensure password strength. Password complexity allows you to configure which characters should be required in a password to reduce the possibility of dictionary or brute force attacks. A typical password complexity policy would force the user to incorporate numbers, letters, and special characters. Both uppercase and lowercase letters can be required. A password that uses a good mix, such as Ba1e$23q, is more secure than a password that only implements parts of these requirements, such as My32birthday, NewYears06, and John$59. Note that password complexity rules are less effective when users make common character substitutions in dictionary words, such as zero for O, @ for a, and 3 for E.
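A minimal complexity check along these lines might look like the following Python sketch, which requires length plus all four character classes; real policies usually also screen candidate passwords against breached-password lists.

import re

def is_complex(pw):
    # Require at least 8 characters with lowercase, uppercase, digit, symbol.
    return (
        len(pw) >= 8
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"\d", pw) is not None
        and re.search(r"[^\w\s]", pw) is not None
    )

print(is_complex("Ba1e$23q"))      # True: mixed case, digit, and symbol
print(is_complex("My32birthday"))  # False: no special character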
A large financial institution needs to securely manage and grant temporary access to privileged accounts for third-party contractors performing system maintenance. Of the choices given, which solution would be most appropriate for privileged access management?
A) Just-in-time permissions
B) Password vaulting
C) Time-limited authorization
D) Ephemeral credentials
Ephemeral credentials would be the most appropriate solution for privileged access management. Ephemeral credentials refer to temporary, short-lived credentials generated dynamically for accessing privileged accounts or resources. Ephemeral credentials can be generated on-demand and automatically revoked after a predefined period, reducing the risk of credential theft, misuse, or exposure. This ensures that third-party contractors have access only for the duration required to perform system maintenance tasks, enhancing security and compliance.
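A minimal sketch of ephemeral credentials, assuming a 15-minute session window, is shown below; a real privileged access management platform would also scope the credential to specific systems and revoke it server-side.

import secrets
import time

TTL_SECONDS = 15 * 60  # assumed contractor session window

def issue_credential():
    # Generate a random token that is valid only until the TTL elapses.
    return {"token": secrets.token_urlsafe(32),
            "expires_at": time.time() + TTL_SECONDS}

def is_valid(cred):
    return time.time() < cred["expires_at"]

cred = issue_credential()
print(is_valid(cred))  # True now; False once the TTL has elapsed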
What are just-in-time permissions?
Just-in-time permissions involve granting temporary access to resources or privileged accounts for a specific period, typically based on user requests or predefined policies. While just-in-time permissions provide a means to limit exposure and reduce the risk of unauthorized access, they may not be the most appropriate solution for the scenario described. Just-in-time permissions are typically used for user accounts within an organization’s internal network, rather than third-party contractors requiring temporary privileged access for system maintenance.
What is password Vaulting?
Password vaulting involves securely storing and managing passwords, encryption keys, and other sensitive credentials used to access privileged accounts. While password vaulting helps centralize and secure access to privileged accounts, it may not be the most appropriate solution for the scenario described. Password vaulting solutions are designed to manage static credentials and do not provide temporary access or time-limited authorization capabilities required for third-party contractors performing system maintenance.
How can you grant access that is revoked once a task is completed or a predefined time expires?
Time-limited authorization involves granting access permissions for a specific period or until a certain event occurs, such as the completion of a task or the expiration of a predefined timeframe. While time-limited authorization shares similarities with just-in-time permissions and ephemeral credentials, it may not specifically address the need for managing and securing privileged access for third-party contractors in the scenario described. Time-limited authorization can be part of a broader privileged access management strategy but may require additional capabilities, such as credential rotation and monitoring, to effectively manage temporary access for contractors performing system maintenance.
You have asked your assistant to configure a firewall with the following access control list (ACL).
access list outbound deny ip 0.0.0.0 0.0.0.0/0 port 23
access list outbound permit ip 192.168.5.6/32 0.0.0.0/0 port 23
access list outbound permit ip 0.0.0.0 0.0.0.0/0
What will be the effect of these commands?
A) No devices will be able to send outbound Telnet requests.
B) Only the device at 192.168.5.6 will be able to send outbound DNS requests.
C) Only the device at 192.168.5.6 will be able to send outbound Telnet requests.
D) No devices will be able to send outbound DNS requests.
As a result of the first rule, no device will be able to send outbound Telnet requests from inside the network:
access list outbound deny ip 0.0.0.0 0.0.0.0/0 port 23
The list specifies Telnet traffic (port 23). DNS traffic operates on port 53.
Setting the source and destination address to 0.0.0.0 affects all traffic. Telnet operates on port 23. The order of the statements is important because the system processes them from top to bottom. Once traffic matches a rule, the rule is applied in the specified direction, and the traffic is not evaluated further. Because ALL traffic matches the first line, all traffic is blocked on port 23 and the second statement is never processed.
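The first-match behavior can be demonstrated with a small Python simulation of the three rules above; the evaluation function is hypothetical and simply mirrors how a firewall walks an ACL from top to bottom.

import ipaddress

RULES = [
    ("deny",   "0.0.0.0/0",      23),    # rule 1: any source, port 23
    ("permit", "192.168.5.6/32", 23),    # rule 2: never reached
    ("permit", "0.0.0.0/0",      None),  # rule 3: all other traffic
]

def evaluate(src, port):
    # Return the action of the first rule that matches, like a firewall does.
    for action, network, rule_port in RULES:
        in_net = ipaddress.ip_address(src) in ipaddress.ip_network(network)
        if in_net and rule_port in (None, port):
            return action
    return "deny"  # implicit deny if nothing matches

print(evaluate("192.168.5.6", 23))  # deny: rule 1 matches before rule 2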
You need to install a network device or component that ensures the computers on the network meet an organization’s security policies. Which device or component should you install?
A) NAT
B) DMZ
C) IPSec
D) NAC
Network Access Control (NAC) ensures that the computers on the network meet an organization’s security policies. NAC user policies can be enforced based on the location of the network user, group membership, or some other criteria. Media access control (MAC) filtering is a form of NAC. NAC provides host health checks for any devices connecting to the network. Hosts may be allowed or denied access or placed into a quarantined state based on this health check.
When connecting to a NAC, the user should be prompted for credentials. If the user is not prompted for credentials, the user’s computer is missing the authentication agent.
NAC can be permanent or dissolvable. Permanent or persistent NAC is installed on a device and runs continuously, while dissolvable NAC, also referred to as portal-based, downloads and runs when required and then disappears.
NAC can also be agent-based or agentless. With agent-based NAC, a piece of code is installed on the host that performs the NAC functions on behalf of the NAC server. Agentless NAC integrates with a directory service.
Which type of test relies heavily on automated scanning tools and reporting?
A) penetration test
B) unknown environment test
C) vulnerability test
D) known environment test
Automated scanning tools and reporting are used to perform a vulnerability test. A vulnerability test identifies the vulnerabilities in a network. After the vulnerabilities are identified, a penetration test exploits the identified vulnerabilities to prove that the vulnerability actually exists.
A vulnerability test and a penetration test are NOT the same thing. A vulnerability test leads to the penetration test. You must first identify the vulnerabilities in the vulnerability test and then attempt to exploit the vulnerabilities using a penetration test.
Your organization has recently undergone a hacker attack. You have been tasked with preserving the data evidence. You must follow the appropriate eDiscovery process. You are currently engaged in the Preservation and Collection process. Which of the following guidelines should you follow? (Choose all that apply.)
A) The data acquisition should include both bit-stream imaging and logical backups.
B) The data acquisition should be from a live system to include volatile data when possible.
C) The chain of custody should be preserved from the data acquisition phase to the presentation phase.
D) Hashing of acquired data should occur only when the data is acquired and when the data is modified.
When following the eDiscovery process guidelines, you should keep the following points in mind regarding the Preservation and Collection process:
The data acquisition phase should be from a live system to include volatile data when possible.
The data acquisition should include both bit-stream imaging and logical backups.
The chain of custody should be preserved from the data acquisition phase to the presentation phase.
While it is true that the hashing of acquired data should occur when the data is acquired and when the data is modified, these are not the only situations that require hashing. Hashing should also be performed when a custody transfer of the data occurs (see the hashing sketch after the list below).
Other points to keep in mind during the Preservation and Collection process include the following:
A consistent process and policy should be documented and followed at all times.
Forensic toolkits should be used.
The data should not be altered in any manner, within reason.
Logs, both paper and electronic, must be maintained.
At least two copies of collected data should be maintained.
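The hashing guideline mentioned above can be illustrated with a short Python sketch that hashes an evidence file and appends a custody log entry; the file path and log format are hypothetical.

import hashlib
from datetime import datetime, timezone

def sha256_of(path):
    # Hash the file in chunks so large evidence images fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_custody_event(path, event, logfile="custody.log"):
    # Record who/what/when alongside the digest; matching digests at
    # acquisition, modification, and each transfer show the data is intact.
    stamp = datetime.now(timezone.utc).isoformat()
    with open(logfile, "a") as log:
        log.write(f"{stamp} {event} {path} sha256={sha256_of(path)}\n")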
List the stages of Forensic Discovery.
The stages of Forensic Discovery include the following:
Verification – Confirm that an incident has occurred.
System Description – Collect detailed descriptions of the systems in scope.
Evidence Acquisition – Acquire the relevant data in scope, minimizing data loss, in a manner that is legally defensible. This is primarily concerned with the minimization of data loss, the recording of detailed notes, the analysis of collected data, and reporting findings.
Data Analysis – This includes media analysis, string/byte search, timeline analysis, and data recovery.
Results Reporting – Provide evidence to prove or disprove statements of facts.
List the stages of eDiscovery.
The stages of eDiscovery include the following:
Identification – Verify the triggering event that has occurred. Find and assign potential sources of data, subject matter experts, and other required resources.
Preservation and Collection – Acquire the relevant data in scope, minimizing data loss, in a manner that is legally defensible. This is primarily concerned with the minimization of data loss, the recording of detailed notes, the analysis of collected data, and reporting findings.
Processing, Review, and Analysis – Process and analyze the data while ensuring that data loss is minimized.
Production – Prepare and produce electronically stored information (ESI) in a format that has already been agreed to by the parties.
Presentation – Provide evidence to prove or disprove statements of facts.
When preparing an eDiscovery policy for your organization, what do you need to consider?
When preparing an eDiscovery policy for your organization, you need to consider the following facets:
Electronic inventory and asset control – You must ensure that all assets involved in the eDiscovery process are inventoried and controlled. Unauthorized users must not have access to any assets needed in eDiscovery.
Data retention policies – Data must be retained as long as required. Organizations should categorize data and then decide the amount of time that each type of data is to be retained. Data retention policies are the most important policies in the eDiscovery process. They also include systematic review, retention, and destruction of business documents.
Data recovery and storage – Data must be securely stored to ensure maximum protection. In addition, data recovery policies must be established to ensure that data is not altered in any way during the recovery. Data recovery and storage is the process of salvaging data from damaged, failed, corrupted, or inaccessible storage when it cannot be accessed normally.
Data ownership – Data owners are responsible for classifying data. These data classifications are then assigned data retention policies and data recovery and storage policies.
Data handling – A data handling policy should be established to ensure that the chain of custody protects the integrity of the data.
What is the process of identifying IoT and other devices that are not part of the core infrastructure so that hackers cannot use them to compromise an organization’s core network?
A) Penetration testing and adversary emulation
B) Passive discovery
C) Edge discovery
D) Security controls testing
Edge discovery is the process of identifying Internet of Things (IoT) and other devices that are not part of the core infrastructure. Once identified, they can be configured so that hackers cannot use them to compromise an organization’s core network.
Edge discovery is a key component of edge security for attack surface management. Edge security is the process of securing nodes that are outside a company’s network core. The edge of the network needs the same level of security as the core network. Nodes at the edge are not fully covered by the security perimeter of the organization and so are the most vulnerable to cybersecurity risks. Computing on the edge involves computing occurring closer to edge devices rather than the infrastructure of the network. Self-driving cars, sensors, fitness bands, and IoT devices are examples of edge devices. These devices often handle sensitive data, and their compromise can compromise the full network. For this reason, it is essential that these devices are not discoverable by hackers on the Internet. Physical controls involve securing the devices and only allowing authorized personnel to use them. Logical controls involve encryption of device data both in transit and at rest and implementing authorization and authentication.
How to secure edge devices?
The growth in the use of edge devices has increased the attack surface for an organization. To secure edge devices, you use routers, firewalls, and wide area network (WAN) devices that are built for security. Some best practices for edge security include:
Keep a zero-trust model throughout the company
Ensure internal configuration and control of edge devices and reject compromised devices
Use AI and ML tools to monitor edge device activity
Ensure edge devices are isolated in a public cloud to avoid discovery
What is passive discovery?
Passive discovery helps to protect the network through the use of security appliances, including firewalls, intrusion detection systems (IDSes), intrusion prevention systems (IPSes), malware protection systems, and others. It is the role of these systems to monitor events and, when an event occurs, create an alert for humans to intervene.
How can you verify that all security controls have been implemented properly?
Security controls testing determines whether controls have been properly implemented and are performing as expected and producing the appropriate results. For example, a test of a physical security control could be checking to see if an access control card denies entry into a specific area. This would be an example of a preventative type of control.
What is critical for attack surface management?
Penetration testing and adversary emulation are critical for attack surface management. The goal of penetration testing is to determine as many vulnerabilities as possible within defined time and scope parameters. Adversary emulation (also known as threat emulation) adopts current threat intelligence methodologies and tactics to identify, expose, and correct vulnerabilities. Adversary emulation is particularly suited to measure the organization’s ability to withstand an attack from advanced persistent threats.
What preserves the existence and integrity of relevant electronic records (and paper records) when litigation is imminent?
A) Incident response plan
B) Chain of custody
C) Data sovereignty
D) Legal hold
Legal hold is the term for the preservation of information relevant to an impending lawsuit. Personnel will be instructed not to destroy or alter information relating to the topic of the lawsuit. Chain of custody deals with how the evidence is handled once it has been collected and guarantees the identity and integrity of the evidence from the collection stage to its presentation in the court of law. There should be a log of who has had custody of the evidence, where it has been, and who has seen it. Active logging should also be used to document access to the evidence, including photographic or video records, showing the manner in which the evidence is secured. Preserving data for a legal hold just ensures that data is retained for the appropriate period and has nothing to do with chain of custody, although chain of custody is vital to preserving evidence.
An incident response plan describes how to respond to various types of security incidents. Incident response plans provide details on how to preserve data and logs related to an incident. Data sovereignty means that the data is subject to the laws of the location where it is stored. Different countries may differ in their laws for preserving the existence and integrity of records prior to litigation.
In a security investigation, which of the following would provide you with the best data source for detailed information about network transmissions?
A) Application logs
B) Packet captures
C) Dashboards
D) IPS/IDS logs
Packet captures would provide you with the best data source for detailed information about network transmissions. Packet captures, also known as packet sniffing or network traffic analysis, involve capturing and recording individual network packets exchanged between devices on a network. Packet captures provide a detailed record of network traffic, including source and destination IP addresses, port numbers, protocols, packet payloads, and communication patterns. By analyzing packet captures, cybersecurity investigators can identify potential security threats, such as malicious activities, network intrusions, and data exfiltration attempts. Packet captures are essential for conducting in-depth network traffic analysis, identifying anomalous behavior, and supporting incident response efforts in cybersecurity investigations.
What are dashboards used for?
Dashboards provide an overall view of combined datasets without a lot of detail. Dashboards provide visual representations of key performance indicators (KPIs), metrics, and data insights related to various aspects of an organization’s IT environment. While dashboards can aggregate and display data from multiple sources, including logs and security alerts, they do not capture the detailed content of network packets exchanged between devices.
What do application logs capture?
Application logs capture events and activities generated by software applications running on servers or client devices. These logs contain valuable information about user interactions, application errors, and system events within the application environment. Analyzing application logs can help identify security incidents such as unauthorized access attempts, application vulnerabilities, and abnormal user behavior. Application logs are essential for troubleshooting application issues, detecting malicious activities, and ensuring the security and reliability of business-critical applications.
What are the benefits of IPS and IDS in terms of logging?
Intrusion Prevention System (IPS) and Intrusion Detection System (IDS) logs record alerts and events related to security threats detected within the network. These logs contain information about suspicious network traffic, attempted intrusions, and malicious activities identified by the IPS/IDS sensors. IPS/IDS examine active network traffic based on a profile or heuristic signatures. Their primary purpose is not to capture every packet coming across the network for later analysis, but rather to provide automated detection of and response to security threats in real time. By contrast, packet capture tools like Wireshark are often used selectively for targeted analysis or troubleshooting tasks rather than continuous monitoring of all network traffic. IDS/IPS focus on “what is happening,” while packet capture focuses on “what happened.”
In which of the following scenarios would the ability to interpret suspicious commands be helpful?
A) when an email-based attack uses embedded and malicious links
B) when an attacker alters an email header to obscure the sender
C) when an attacker compromises the DNS system
D) when an attacker accesses a shell
When an attacker is able to install a shell (also called dropping a shell), they will be able to access a command line interface to the system. In this scenario, one’s ability to interpret any strange commands they may have entered and executed may help to identify exactly what the attacker did or was attempting to do.
What is an email-based attack?
An email-based attack uses embedded and malicious links in an email. While training users not to click any hyperlinks in incoming emails is one solution, you can go a step further and disable hyperlinks in emails.
Why is it critical to know how DKIM functions?
When an attacker alters an email header to obscure the sender and perform impersonation, a solution would be to implement DomainKeys Identified Mail (DKIM), an email authentication method designed to detect forged sender addresses in email (email spoofing), a technique often used in phishing and email spam.
How does DMARC help when an attacker compromises the DNS system?
When an attacker compromises the DNS system, a solution would be to implement Domain-based Message Authentication, Reporting, and Conformance (DMARC). When a DMARC DNS entry is published, any receiving email server can authenticate the incoming email, preventing a delivery based on an altered header. DMARC extends two email authentication mechanisms: Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). It allows the administrative owner of a domain to publish a policy in their DNS records to specify how to check the From: field presented to end users.
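A domain's published DMARC policy is just a DNS TXT record at _dmarc.<domain>, so it can be inspected directly. The sketch below uses the third-party dnspython library (pip install dnspython); example.com is a placeholder domain.

import dns.resolver

# Query the domain's published DMARC policy record.
answers = dns.resolver.resolve("_dmarc.example.com", "TXT")
for rdata in answers:
    print(rdata.to_text())  # e.g. "v=DMARC1; p=reject; rua=mailto:..."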
You are investigating a security incident, and need more information about what a log entry contains. Which of the following data sources would you consult to find information about the characteristics of other data?
A) IPS/IDS logs
B) Network logs
C) Firewall logs
D) Metadata
Metadata literally means data about the data. Metadata refers to descriptive information about data, such as the time stamps, file sizes, sender/recipient details, and other attributes associated with files or communication sessions. As an example, metadata associated with an email communication may include the sender’s email address, recipient’s email address, subject line, time and date of transmission, and file attachments. Analyzing this metadata can help investigators trace the source of suspicious emails, identify potential data exfiltration attempts, and reconstruct the timeline of cyber incidents. Metadata is valuable for understanding the characteristics and context of data exchanges, enabling investigators to gain a deeper understanding of security events and formulate effective response strategies.
Which asset management activity typically involves scanning to locate assets?
A) Ownership
B) Inventory
C) Enumeration
D) Classification
Enumeration is the asset management activity that typically involves scanning to locate assets. Unlike inventory management, which relies on existing records or information, enumeration actively scans systems and networks to identify and list all of the technology resources and devices within the organization. This process helps ensure that all assets are discovered and accounted for, even if they were not previously documented in the inventory.
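A very simple form of this scanning can be sketched in Python with TCP connection attempts; the subnet and port below are assumptions, and real enumeration tools such as Nmap combine many probe types.

import socket

def is_listening(host, port=443, timeout=0.5):
    # A completed TCP handshake indicates a live host with that port open.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

live = [f"192.168.1.{i}" for i in range(1, 255)
        if is_listening(f"192.168.1.{i}")]
print(live)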
The IT department has been tasked with implementing a new identity and access management (IAM) solution to streamline user authentication across various systems and applications. They want to ensure that the IAM solution seamlessly integrates with existing infrastructure components, such as Active Directory, cloud services, and custom applications. Which aspect of IAM implementation is most critical in this scenario?
A) Permission assignments and implications
B) Attestation
C) Interoperability
D) Provisioning/de-provisioning user accounts
Interoperability would be the most critical in this scenario. Interoperability refers to the ability of different systems or applications to seamlessly work together and exchange information. In the context of IAM implementation, interoperability ensures that the IAM solution can integrate with various systems, directories, and applications across the organization’s infrastructure. This enables centralized management of user identities and access controls, simplifies administration, and enhances security by enforcing consistent policies across multiple systems.
What is Attestation?
Attestation refers to the process of verifying the accuracy and validity of user access rights and permissions through periodic reviews and audits. While attestation helps ensure that access rights remain aligned with business needs and regulatory requirements over time, it does not directly address the interoperability requirements of this scenario.
What is provisioning and deprovisioning?
Provisioning involves creating and granting access to user accounts, while deprovisioning entails removing access privileges and disabling user accounts when they are no longer needed. In the context of IAM implementation, efficient provisioning and deprovisioning processes ensure that users have appropriate access rights to resources based on their roles and responsibilities. However, in this scenario, where integration with existing infrastructure is the primary concern, provisioning/deprovisioning user accounts may not directly address the interoperability requirements of the IAM solution.
What are IAM solutions?
Identity and Access Management (IAM) solutions are systems and processes designed to manage and control users’ access to resources within an organization. These solutions ensure that the right individuals have the appropriate access to technology resources, safeguarding sensitive information and maintaining compliance with regulatory requirements.
Your vulnerability analysis scan has identified several vulnerabilities and assigned them a CVSS score. Issue A has a score of 4.3, Issue B has a score of 9.1, Issue C has a score of 1.6, and Issue D has a score of 7.7. Which issue should take priority?
A) Issue D
B) Issue B
C) Issue A
D) Issue C
Issue B should take priority because it has the highest CVSS value of 9.1, which is considered a critical issue.
The Common Vulnerability Scoring System (CVSS) is a system of ranking vulnerabilities that are discovered based on pre-defined metrics. This system ensures that the most critical vulnerabilities can be easily identified and addressed after a vulnerability test is performed. Scores are awarded on a scale of 0 to 10, with the values having the following ranks:
0 – No issues
0.1 to 3.9 – Low
4.0 to 6.9 – Medium
7.0 to 8.9 – High
9.0 to 10.0 – Critical
In most cases, companies will attempt to resolve the vulnerabilities with the highest score first. However, in some cases, you may find that a less critically scored vulnerability can be resolved relatively quickly. In that case, you may decide to handle that vulnerability first.
Keep in mind that tool updates and plug-ins for vulnerability scanners are just as important as updates are to anti-malware and antivirus products. Tool updates and plug-ins allow the scanner to recognize the latest vulnerabilities that have been discovered. It is important to keep the vulnerability scanning tool you use up to date.
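The banding above maps directly to code. The following Python sketch ranks the four issues from the question by severity; the thresholds are the CVSS bands listed above.

def severity(score):
    # Map a CVSS score to its severity band.
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

issues = {"A": 4.3, "B": 9.1, "C": 1.6, "D": 7.7}
for name, score in sorted(issues.items(), key=lambda kv: kv[1], reverse=True):
    print(f"Issue {name}: {score} ({severity(score)})")
# Issue B (Critical) first, then D (High), A (Medium), C (Low)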
As part of the incident response team, you have been called in to help with an attack on your company’s web server. You are currently working to identify the root cause of the attack. During which step of incident response does root cause analysis occur?
A) Containment
B) Identification
C) Lessons Learned
D) Preparation
E) Eradication
F) Recovery
There are six steps in incident response:
Preparation – Ensure that the organization is ready for an incident by documenting and adopting formal incident response procedures.
Identification – Analyze events to identify an incident or data breach. If the first responder is not the person responsible for detecting the incident, the person who detects the incident should notify the first responder. This step is also often referred to as detection.
Containment – Stop the incident as it occurs and preserve all evidence. Notify personnel of the incident. Escalate the incident if necessary. Containing the incident involves isolating the system or device by either quarantine or device removal. This step also involves ensuring that data loss is minimized by using the appropriate data and loss control procedures.
Eradication – Fix the system or device that is affected by the incident. Formal recovery/reconstitution procedures should be documented and followed during this step of incident response. This step is also referred to as remediation.
Recovery – Ensure that the system or device is repaired. Return the system or device to production. This step is also referred to as resolution.
Lessons Learned – Perform a root cause analysis, and document any lessons learned. Report the incident resolution to the appropriate personnel. This step may also be referred to as review and close.
During the preparation step of incident response, you may identify incidents that you can prevent or mitigate. Taking the appropriate prevention or mitigation steps is vital to ensure that your organization will not waste valuable time and resources on the incident later.
Match the following types of log data with their characteristics. Choose the BEST answer.
The log terms should be matched to their characteristics as follows:
Firewall logs – Log data that includes information about network traffic passing through the firewall, such as source and destination IP addresses, ports, protocols, and connection status.
Application logs – Record events and activities generated by software applications running on servers or client devices.
Endpoint logs – Capture information about events and activities generated by devices, such as user logins, file access, and network connections.
OS-specific security logs – Provide information about security-related events and activities recorded by the operating system, such as authentication events and system configuration changes.
Which vulnerability management activity would you perform to confirm that remediation actions have been successfully implemented and have addressed the identified vulnerabilities?
A) Auditing
B) Rescanning
C) Verification
D) Endpoint management
E) Reporting
Rescanning involves conducting additional vulnerability scans after remediation actions have been implemented to reassess the security posture of the environment and verify whether the identified vulnerabilities were effectively addressed. Rescanning confirms that the actions taken to address vulnerabilities were successful and that no new vulnerabilities were generated as a result.
What other activities are involved in vulnerability management?
Verification is the ongoing process of confirming or validating that remediation actions were successfully implemented and that they effectively resolved the identified vulnerabilities over time. Verification is part of the validation of remediation. During verification, security teams may conduct manual or automated checks to ensure that the remediation steps were applied correctly and that the associated vulnerabilities have been mitigated as intended. Verification is a critical step in the vulnerability management lifecycle, as it provides assurance that the organization’s security controls are functioning as expected and that potential risks have been mitigated in the long term.
Auditing involves examining and evaluating various aspects of an organization’s operations, processes, or systems to ensure compliance with established policies, standards, or regulations. While auditing may include reviewing the effectiveness of vulnerability management practices, it typically encompasses a broader scope of activities beyond validating a specific remediation action.
Reporting involves documenting and communicating information about vulnerabilities, remediation efforts, and security incidents to relevant stakeholders within the organization. Reporting plays a crucial role in vulnerability management by providing visibility into the status of vulnerabilities, tracking remediation progress, and facilitating decision-making processes related to risk management and resource allocation. Effective reporting enables security teams to prioritize remediation efforts, demonstrate compliance with regulatory requirements, and improve overall security posture.
Endpoint management, or installing endpoint management software, is a method of hardening workstations, servers, and other clients by centralizing their configuration and management. It allows administrators to monitor their health and configuration from a single console. It is a hardening technique and not part of active vulnerability management. However, the report generated from a vulnerability audit may recommend endpoint management be installed.
Your organization needs to implement a system that logs changes to files. What category of solution should you research?
A) File integrity checks
B) Host-based firewall
C) HIDS/HIPS
D) Antivirus
File integrity checks examine selected files to detect changes and log those changes. Some file integrity checks just notify you of a change, while others can actually return a file to its previous state if the change is unauthorized.
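A bare-bones file integrity check can be sketched in Python with cryptographic hashes: record a baseline, then re-hash later and report differences. Dedicated tools (Tripwire is a well-known example) add tamper-resistant baselines and alerting.

import hashlib

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def baseline(paths):
    # Record a known-good hash for each monitored file.
    return {p: file_hash(p) for p in paths}

def changed(paths, base):
    # Return the files whose current hash differs from the baseline.
    return [p for p in paths if file_hash(p) != base.get(p)]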
How can application allow listing be used to monitor a cyberattack?
Application allow listing (previously referred to as whitelisting) is the practice of denying all applications from running on a device except for those that are approved, which make up the allow list. Several products are available that check for applications that are not on the allowed list, including attempts to install those applications. For example, the logs generated by the security product would tell you if someone had attempted to install a keylogger.
Why is removable media control (RMC) important in many environments?
Removable media control (RMC) is important in many environments. USB drives, SD cards, CDs, DVDs, and Blu-ray devices can all present dangers to the system. As an example, someone can use a USB drive to copy sensitive information and deliver it to someone outside the organization. Another example could be a CD that appears to be a music CD but is actually installation media for unauthorized software. Examine the RMC logs to detect attempts to violate removable media policies.
What is an advantage of advanced malware tools?
Advanced malware tools check for malicious code that would otherwise slip by standard antivirus and antimalware tools. Patch management tools assist with the installation of patches, which can present a significant challenge to an enterprise environment. Best practices dictate that you install a patch on a test machine and verify that the patch performs as expected prior to deploying it throughout the network. It is important to examine the logs to check for failed updates, incompatible patches, and unsuccessful patch installations.
What is UTM?
Unified Threat Management (UTM) incorporates several threat management devices and systems into one appliance. The biggest advantage to a UTM, from an analysis standpoint, is that all the logs are in one place, as opposed to checking multiple systems.
What is DEP?
Data execution prevention (DEP) blocks code from executing in memory regions that are marked as non-executable, such as those that hold data. Logs will record execution attempts, including failed attempts. Notification of failed attempts is important, as it could tell you that an attempt to run malicious code was successfully blocked.
What is WAF?
A web application firewall (WAF) uses a set of defined rules to manage incoming and outgoing web server traffic, as well as attack prevention. Organizations can define their own rules based on their vulnerabilities.
Your CEO, who is not a network engineer, wants to implement a technology to enhance the organization’s wireless security. He’s not sure what would provide the most robust solution, and he has heard a lot of buzzwords. Of the options below, which would be the most suitable?
A) RADIUS
B) Authentication protocols
C) Cryptographic protocols
D) WPA3
Wi-Fi Protected Access 3 (WPA3) would be the BEST way to enhance wireless security. The most secure wireless security configuration is WPA3 plus AES-CCMP/AES-GCMP.
WPA3 is the latest standard in wireless security protocols, offering enhanced protection against various cyber threats. It provides stronger encryption through the use of the Simultaneous Authentication of Equals (SAE) protocol, which protects against offline dictionary attacks and brute-force attacks. Additionally, WPA3 introduces individualized data encryption, ensuring that each wireless connection is encrypted separately. This feature significantly enhances security by preventing attackers from intercepting and decrypting wireless traffic.