SecurityX Practice Exam #3 (Dion) Flashcards

1
Q

An organization integrates APIs into its Zero Trust architecture. To ensure security, the organization mandates that all APIs are validated for authentication and data integrity. Which of the following practices best aligns with Zero Trust principles for API integration?
Implementing a shared API key for all clients to simplify access
Using IP allowlisting to control access to APIs
Enforcing token-based authentication for all API requests
Allowing unauthenticated access to APIs for internal systems only

A

Enforcing token-based authentication for all API requests

Explanation:
OBJ 2.6: Token-based authentication ensures each API request is authenticated and authorized, which aligns with Zero Trust principles of verifying every request and minimizing trust assumptions. Allowing unauthenticated access violates Zero Trust principles of continuous verification. IP allowlisting is static and less effective in dynamic, cloud-based environments. A shared API key compromises security by failing to enforce unique authentication for each client.
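
To make the principle concrete, here is a minimal Python sketch (not part of the exam content) of per-request token validation: every call must present a signed, short-lived bearer token that is verified before the request is served. The key, claim names, and token format are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch: validating a signed bearer token on every API request (Zero Trust style).
# SECRET_KEY, the token layout, and the claim names are illustrative assumptions.
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key, normally pulled from a vault

def issue_token(client_id: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived token: base64(claims) + '.' + HMAC signature."""
    claims = {"sub": client_id, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def validate_request(authorization_header: str) -> bool:
    """Every request is verified: signature first, then expiry. No implicit trust."""
    try:
        token = authorization_header.removeprefix("Bearer ").strip()
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        claims = json.loads(base64.urlsafe_b64decode(body))
        return claims["exp"] > time.time()
    except (ValueError, KeyError):
        return False

print(validate_request("Bearer " + issue_token("reporting-service")))  # True
print(validate_request("Bearer tampered.token"))                       # False
```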

2
Q

Your company is adopting a cloud-first architecture model. Management wants to decommission the on-premises SIEM your analysts use and migrate it to the cloud. Which of the following is an issue with using this approach?
A VM escape exploit could allow an attacker to gain access to the SIEM
The company will be dependent on the cloud provider’s backup capabilities
Legal and regulatory issues may prevent data migration to the cloud
The company will have less control over the SIEM

A

Legal and regulatory issues may prevent data migration to the cloud

Explanation:
OBJ 2.5: If legal or regulatory requirements oblige the company to host its security audit data on-premises, then moving to the cloud will not be possible without violating applicable laws. For example, some companies must host their data within their national borders, even when migrating to the cloud. The other options presented are all low risk and can be overcome with proper planning and mitigations. Most cloud providers offer degrees of redundancy far above what any individual on-premises deployment can achieve, making the concern over backups a minimal risk. If the SIEM is moved to a cloud-based server, it could still be operated and controlled in the same manner as the previous on-premises solution using a virtualized cloud-based server. While a VM or hypervisor escape is possible, such attacks are rare and can be mitigated with additional controls.

3
Q

As part of the reconnaissance stage of a penetration test, Kumar wants to retrieve information about an organization’s network infrastructure without causing an IPS alert. Which of the following is his best course of action?

Perform a DNS brute-force attack
Perform a DNS zone transfer
Use a nmap stealth scan
Use a nmap ping sweep

A

Perform a DNS brute-force attack

Explanation:
OBJ 4.2: The best course of action is to perform a DNS brute-force attack. A DNS brute-force attack resolves a list of candidate hostnames (or, for reverse lookups, IP addresses) and typically bypasses IDS/IPS systems because they do not usually alert on ordinary DNS queries. A ping sweep or a stealth scan can be easily detected by the IPS, depending on the signatures and settings being used. A DNS zone transfer is a common attack technique that often has a signature written for it, so it is likely to generate an alert.
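
For illustration, here is a minimal Python sketch of the technique, assuming a placeholder target domain and a tiny wordlist; each candidate hostname is resolved with an ordinary DNS query, which most IDS/IPS deployments will not alert on.

```python
# Minimal sketch of DNS brute-force enumeration using ordinary resolver lookups.
# The domain and wordlist are placeholders; a real engagement would use a much larger list.
import socket

DOMAIN = "example.com"                      # assumed target domain for illustration
WORDLIST = ["www", "mail", "vpn", "dev"]

for name in WORDLIST:
    fqdn = f"{name}.{DOMAIN}"
    try:
        ip = socket.gethostbyname(fqdn)     # standard A-record lookup via the resolver
        print(f"{fqdn} -> {ip}")
    except socket.gaierror:
        pass                                # name does not resolve; move on quietly
```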

4
Q

Which of the following identity and access management controls relies upon using a certificate-based authentication mechanism?

Proximity card
TOTP
Smart card
HOTP

A

Smart card

Explanation:
OBJ 3.1: Smart cards, PIV, and CAC devices are used as an identity and access management control. These devices contain a digital certificate embedded within the smart card (PIV/CAC) presented to the system when it is inserted into the smart card reader. When combined with a PIN, the smart card can be used as a multi-factor authentication mechanism. The PIN unlocks the card and allows the digital certificate to be presented to the system.

5
Q

Which of the following layers within software-defined networking focuses on providing network administrators the ability to oversee network operations, monitor traffic conditions, and display the status of the network?

Infrastructure layer
Control layer
Management plane
Application layer

A

Management plane

Explanation:
OBJ 2.3: The management plane is used to monitor traffic conditions, the status of the network, and allows network administrators to oversee the network and gain insight into its operations. The application layer focuses on the communication resource requests or information about the network. The control layer uses the information from applications to decide how to route a data packet on the network and to make decisions about how traffic should be prioritized, how it should be secured, and where it should be forwarded to. The infrastructure layer contains the physical networking devices that receive information from the control layer about where to move the data and then perform those movements.

6
Q

An analyst’s vulnerability scanner did not have the latest set of signatures installed. Due to this, several unpatched servers may have vulnerabilities that were undetected by their scanner. You have directed the analyst to update their vulnerability scanner with the latest signatures at least 24 hours before conducting any scans. However, the results of their scans still appear to be the same. Which of the following logical controls should you use to address this situation?

Create a script to automatically update the signatures every 24 hours
Ensure the analyst manually validates that the updates are being performed as directed
Test the vulnerability remediations in a sandbox before deploying them into production
Configure the vulnerability scanners to run a credentialed scan

A

Create a script to automatically update the signatures every 24 hours

Explanation:
OBJ 2.4: Since the analyst appears not to be installing the latest vulnerability signatures according to your instructions, it would be best to create a script and automate the process to eliminate human error. The script ensures that the latest signatures are downloaded and installed in the scanner every 24 hours without any human intervention. While you may want the analyst to manually validate that the updates were performed as part of their procedures, this is still error-prone and unlikely to be conducted properly. Regardless of whether the scanners are run in uncredentialed or credentialed mode, they will still miss vulnerabilities if they are using out-of-date signatures. Finally, testing the vulnerability remediations in a sandbox is a good suggestion, but it won’t solve this scenario since we are concerned with the scanning portion of vulnerability management, not remediation.
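
A minimal sketch of such a logical control, assuming a hypothetical scanner CLI named vuln-scanner (substitute your scanner's real update command); the script is run from cron so the update happens daily without analyst involvement.

```python
# Minimal sketch of an unattended daily signature update.
# "vuln-scanner --update-signatures" is a placeholder command, not a real product CLI.
# Scheduled via cron, e.g.:  0 2 * * *  /usr/bin/python3 /opt/scripts/update_signatures.py
import logging
import subprocess
import sys

logging.basicConfig(filename="/var/log/sig_update.log",
                    format="%(asctime)s %(levelname)s %(message)s",
                    level=logging.INFO)

result = subprocess.run(["vuln-scanner", "--update-signatures"],   # hypothetical CLI
                        capture_output=True, text=True)

if result.returncode == 0:
    logging.info("Signature update succeeded: %s", result.stdout.strip())
    sys.exit(0)

# A failed update is logged and surfaced so it cannot silently repeat the original problem.
logging.error("Signature update failed: %s", result.stderr.strip())
sys.exit(1)
```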

7
Q

Dion Training is evaluating the security of its endpoint configurations. During the evaluation, one of the analysts identified that data being stored on an internal solid state device is being encrypted using BitLocker. The analyst is concerned that the data-at-rest could be compromised if someone were able to collect the encryption key stored in the system’s RAM. Which of the following endpoint security controls would eliminate this vulnerability while still providing data-at-rest protection for the data stored on an internal solid state device or hard disk drive?

Local drive encryption
Self-encrypting drive (SED)
Secure encrypted enclaves
Attestation services

A

Self-encrypting drive (SED)

Explanation:
OBJ 3.4: A self-encrypting drive (SED) is a type of solid state device (SSD) or hard disk drive (HDD) that conducts transparent encryption of all data as it is written to the device using an embedded hardware cryptographic processor. A self-encrypting drive uses transparent encryption by implementing a cryptographic hardware processor with embedded encryption keys to prevent the theft of encryption keys from the system’s RAM. Local drive encryption protects the contents of a solid state device (SSD) or hard disk drive (HDD) when the operating system is not running through the use of software-based encryption such as BitLocker, FileVault, or TrueCrypt. Attestation services are used to ensure the integrity of the computer’s startup and runtime operations. Hardware-based attestation is designed to protect against threats and malicious code that could be loaded before the operating system is loaded. Secure encrypted enclaves protect CPU instructions, dedicated secure subsystems in a system on a chip (SoC), or a protected region of memory in a database engine by only allowing data to be decrypted on the fly within the CPU, SoC, or protected region.

8
Q

Tamera just purchased a Wi-Fi-enabled Nest Thermostat for her home. She has hired you to install it, but she is worried about a hacker breaking into the thermostat since it is an IoT device. Which of the following is the BEST thing to do to mitigate Tamera’s security concerns? (Select TWO)

Configure the thermostat to use the WEP encryption standard for additional confidentiality
Configure the thermostat to use a segregated part of the network by installing it into a screened subnet
Upgrade the firmware of the wireless access point to the latest version to improve the security of the network
Disable wireless connectivity to the thermostat to ensure a hacker cannot access it
Enable two-factor authentication on the device’s website (if supported by the company)
Configure the thermostat to connect to the wireless network using WPA2 encryption and a long, strong password

A

Configure the thermostat to use a segregated part of the network by installing it into a screened subnet

Configure the thermostat to connect to the wireless network using WPA2 encryption and a long, strong password

Explanation:
OBJ 3.5: The BEST options are to configure the thermostat to use the WPA2 encryption standard (if supported) and place any Internet of Things (IoT) devices into a DMZ/screened subnet to segregate them from the production network. While enabling two-factor authentication on the device’s website is a good practice, it will not increase the IoT device’s security. While disabling the wireless connectivity to the thermostat will ensure it cannot be hacked, it also will make the device ineffective for the customer’s normal operational needs. WEP is considered a weak encryption scheme, so you should use WPA2 over WEP whenever possible. Finally, upgrading the wireless access point’s firmware is good for security, but it isn’t specific to the IoT device’s security. Therefore, it is not one of the two BEST options.

9
Q

You have noticed some unusual network traffic outbound from a certain host. The host is communicating with a known malicious server over port 443 using an encrypted TLS tunnel. You ran a full system anti-virus scan of the host with an updated anti-virus signature file, but the anti-virus did not find any infection signs. Which of the following has MOST likely occurred?

Directory traversal
Session hijacking
Zero-day attack
Password spraying

A

Zero-day attack

Explanation:
OBJ 4.1: Since you scanned the system with the latest anti-virus signatures and did not find any signs of infection, this is most likely evidence of a zero-day attack. There is a clear sign of compromise (the encrypted tunnel established to a known malicious server), yet the anti-virus has no signature for this particular malware variant. Password spraying occurs when an attacker tries to log in to multiple different user accounts with the same compromised password credentials. Session hijacking is exploiting a valid computer session to gain unauthorized access to information or services in a computer system. Based on the scenario, it doesn’t appear to be session hijacking since the user would not normally attempt to connect to a malicious server. Directory traversal is an HTTP attack that allows attackers to access restricted directories and execute commands outside of the web server’s root directory. A directory traversal is usually indicated by a dot dot slash (../) in the URL being attempted.

10
Q

Alexa is an analyst for a large bank that has offices in multiple states. She wants to create an alert to detect if an employee from one bank office logs into a workstation located at an office in another state. What type of detection and analysis is Alexa configuring?

Behavior
Anomaly
Heuristic
Trend

A

Behavior

Explanation:
OBJ 4.3: This is an example of behavior-based detection. Behavior-based detection (also called statistical- or profile-based detection) means that the engine is trained to recognize baseline traffic or expected events associated with a user account or network device. Anything that deviates from this baseline (outside a defined level of tolerance) generates an alert. Heuristic analysis determines whether several observed data points constitute an indicator and whether related indicators make up an incident, depending on a good understanding of the relationship between the observed indicators. Human analysts are typically good at interpreting context but work painfully slowly, in computer terms, and cannot hope to cope with the sheer volume of data and traffic generated by a typical network. Anomaly analysis is the process of defining an expected outcome or pattern to events and then identifying any events that do not follow these patterns. This is useful in tools and environments that enable you to set rules. Trend analysis is not used for detection but instead to better understand capacity and the system’s normal baseline. Behavior-based detection differs from anomaly-based detection in that it learns the expected patterns by observing the entity being monitored (in this case, user logins), whereas anomaly-based detection prescribes the baseline for expected patterns in advance.
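
To illustrate the behavior-based rule Alexa is configuring, here is a minimal Python sketch that compares each login's workstation state against a learned per-user baseline; the users and states are made-up sample data.

```python
# Minimal sketch of a behavior-based login rule: alert when an employee logs in to a
# workstation in a state that differs from their recorded baseline. Values are illustrative.
BASELINE_STATE = {"akim": "TX", "maria": "NY"}   # learned from historical logins

def check_login(user: str, workstation_state: str) -> None:
    expected = BASELINE_STATE.get(user)
    if expected is None:
        print(f"ALERT: no baseline for {user}; review manually")
    elif workstation_state != expected:
        print(f"ALERT: {user} logged in from {workstation_state}, baseline is {expected}")
    else:
        print(f"OK: {user} login matches baseline ({expected})")

check_login("akim", "TX")   # OK
check_login("maria", "FL")  # ALERT: deviates from the learned baseline
```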

11
Q

You are conducting a review of a VPN device’s logs and found the following URL being accessed:

https://sslvpn/dana-na/../diontraining.html5acc/teach/../../../../../etc/passwd?/diontraining/html5acc/teach

Based upon this log entry alone, which of the following most likely occurred?

An SQL injection attack caused the VPN server to return the password file
An XML injection attack caused the VPN server to return the password file
The passwd file was downloaded using a directory traversal attack if input validation of the URL was not conducted
The passwd file was downloaded using a directory traversal attack

A

The passwd file was downloaded using a directory traversal attack if input validation of the URL was not conducted

Explanation:
OBJ 4.2: The exact string used here was the attack string used in CVE-2019-11510 to compromise thousands of VPN servers worldwide using a directory traversal approach. However, its presence in the logs does not prove that the attack was successful, only that it was attempted. To verify that the attacker successfully downloaded the passwd file, a cybersecurity analyst would require additional information and correlation. If the server utilizes proper input validation on URL entries, then the directory traversal would be prevented. As no SQL or XML language elements are present, this is not an SQL or XML injection attack.
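
The input validation mentioned above can be as simple as resolving the requested path and confirming it remains inside the web root, as in this minimal Python sketch (paths are illustrative):

```python
# Minimal sketch of path validation that defeats ../ directory traversal attempts.
from pathlib import Path

WEB_ROOT = Path("/var/www/html").resolve()

def is_safe(requested: str) -> bool:
    # Resolve any "../" sequences, then verify the result is still under WEB_ROOT.
    target = (WEB_ROOT / requested.lstrip("/")).resolve()
    return target.is_relative_to(WEB_ROOT)

print(is_safe("dana-na/help.html"))                      # True
print(is_safe("dana-na/../../../../../etc/passwd"))      # False - traversal rejected
```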

12
Q

Vulnerability scans must be conducted continuously to meet regulatory compliance requirements for the storage of PHI. During the last vulnerability scan, a cybersecurity analyst received a report of 2,592 possible vulnerabilities and was asked by the Chief Information Security Officer (CISO) for a plan to remediate all the known issues. Which of the following should the analyst do next?

Place any assets that contain PHI in a sandbox environment and then remediate all the vulnerabilities
Filter the scan results to include only those items listed as critical in the asset inventory and remediate those vulnerabilities first
Attempt to identify all the false positives and exceptions, then resolve any remaining items
Wait to perform any additional scanning until the current list of vulnerabilities have been remediated fully

A

Filter the scan results to include only those items listed as critical in the asset inventory and remediate those vulnerabilities first

Explanation:
OBJ 3.6: PHI is an abbreviation for Protected Health Information. When attempting to remediate numerous vulnerabilities, it is crucial to prioritize which ones should be remediated first. In this case, there is a regulatory requirement to ensure the security of the PHI data. Therefore, the assets critical to the secure handling or storage of PHI carry the highest risk and should be prioritized for remediation first. It is impractical to resolve all 2,592 vulnerabilities at once, and identifying all the false positives and exceptions before resolving the remaining items would not prioritize the remediation effort. You should also not wait to perform additional scanning because a scan is only a snapshot of your current status; if it takes 30 days to remediate all the vulnerabilities and you do not scan during that time, new vulnerabilities may have been introduced. Placing all the PHI assets into a sandbox will not work either because doing so removes them from the production environment, where they can no longer serve their critical business functions.

13
Q

Fail to Pass Systems has recently moved its corporate offices from France to Westeros, a country with no meaningful privacy regulations. The marketing department believes that this move will allow the company to resell all of its customers’ data to third-party companies and shield the company from any legal responsibility. Which policy is violated by this scenario?

Data minimization
Data enrichment
Data sovereignty
Data limitation

A

Data sovereignty

Explanation:
OBJ 1.3: While the fictitious Westeros may have no privacy laws or regulations, the laws of the countries where the company’s customers reside may still retain sovereignty over the data obtained from those regions during the company’s business there. This is called Data Sovereignty. Data sovereignty refers to a jurisdiction (such as France or the European Union) preventing or restricting processing and storage from taking place on systems that do not physically reside within that jurisdiction. Data sovereignty may demand certain concessions on your part, such as using location-specific storage facilities in a cloud service. Fail to Pass Systems will likely face steep fines from different regions if they go through with their plan to sell all of their customers’ data to the highest bidders. Fail to Pass Systems may even be blocked from communicating with individual regions. Although data minimization and data limitation policies may be violated depending on the company’s internal policies, these policies are not legally binding like the provisions of GDPR are. Data enrichment means that the machine analytics behind the view of a particular alert can deliver more correlating and contextual information with a higher degree of confidence, both from within the local network’s data points and from external threat intelligence.

14
Q

An employee reports being unable to access a file share that their department uses frequently. Upon investigation, the security team confirms the user is assigned to the correct group with the necessary permissions. What should the team check next to resolve the issue?

Whether the file share is accessible over the network
If the file share permissions conflict with inherited group policies
Whether the user’s session token has expired or is invalid
If the user’s role was modified in the access control system

A

If the file share permissions conflict with inherited group policies

Explanation:
OBJ 3.1: Inherited group policies or permissions can override explicit permissions set for a resource, causing access issues despite correct group assignments. This is a common challenge in complex subject access control systems, where overlapping rules can create conflicts. While checking network accessibility or expired session tokens may identify other problems, they are not as likely to cause this specific issue, as the user already has the correct group assignment. Modifications to the user’s role could also be relevant but are less likely if the group permissions remain accurate.

15
Q

William would like to use full-disk encryption on his laptop. He is worried about slow performance, though, so he has requested that the laptop have an onboard hardware-based cryptographic processor. Based on this requirement, what should William ensure the laptop contains?

TPM
PAM
AES
FDE

A

TPM

Explanation:
OBJ 3.4: This question is asking whether you know what each acronym means. A Trusted Platform Module (TPM) is a hardware-based cryptographic processing component that is part of the motherboard. A Pluggable Authentication Module (PAM) is a software framework for integrating authentication methods on Linux and Unix systems, not an onboard cryptographic processor. Full Disk Encryption (FDE) can be hardware- or software-based, so by itself it does not satisfy the hardware requirement. The Advanced Encryption Standard (AES) is a cryptographic algorithm, not a hardware solution.

16
Q

A software development team is implementing a CI/CD pipeline for an enterprise application. To ensure security in the coding phase, they want to enforce consistent code quality and identify potential vulnerabilities early in the process. Which of the following practices would best support these objectives?

Schedule monthly penetration tests for the application
Use linting tools to enforce coding standards during development
Implement code reviews after deployment to catch any vulnerabilities
Perform dynamic analysis after the code is pushed to production

A

Use linting tools to enforce coding standards during development

Explanation:
OBJ 2.2: Linting tools analyze source code during development to enforce coding standards and detect issues such as poor coding practices and potential vulnerabilities. This ensures consistent code quality and addresses problems early in the CI/CD process. Code reviews after deployment miss the opportunity to catch issues earlier, leading to higher costs for remediation. Monthly penetration tests focus on runtime vulnerabilities but do not address coding standards. Dynamic analysis in production identifies security issues but does not enforce coding quality during development.
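
As a simple illustration, a CI job can run the linter and fail the build on any finding, as in this minimal Python sketch; the flake8 tool and the src/ directory are assumptions about the project's toolchain, not requirements from the question.

```python
# Minimal sketch of a CI lint gate: run the linter and fail the pipeline on findings.
# Assumes flake8 is installed in the build image and the code lives under src/.
import subprocess
import sys

result = subprocess.run(["flake8", "src/"], capture_output=True, text=True)

if result.returncode != 0:
    # Non-zero exit means style violations or suspicious constructs were found; fail fast.
    print("Linting failed - fix the findings before merge:")
    print(result.stdout)
    sys.exit(1)

print("Linting passed - code meets the enforced standard.")
```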

17
Q

After completing an assessment, you create a chart listing the associated risks based on the vulnerabilities identified with your organization’s password policy. The chart contains the asset values and exposure factors for each server being protected by a single-factor complex password authentication system and their expected single loss expectancy values written in dollars. Which of the following types of assessments did you just complete?

Qualitative risk assessment
Vendor assessment
Quantitative risk assessment
Privacy impact assessment

A

Quantitative risk assessment

Explanation:
OBJ 1.2: Quantitative risk analysis involves the use of numbers (generally money) to evaluate impacts. Qualitative risk analysis describes the evaluation of risk through the use of words; for this reason, it is much more subjective than quantitative analysis. A privacy impact assessment is conducted by an organization to determine where its privacy data is stored and how that privacy data moves throughout an information system. It evaluates the impacts that may be realized by a compromise to the confidentiality, integrity, and/or availability of the data. A vendor assessment is used to determine the viability and reliability of a particular vendor or supplier in your supply chain. Based on the question, a quantitative risk assessment is being performed since the chart contains monetary values for the single loss expectancy of each server.
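
The quantities named above combine as SLE = AV x EF and ALE = SLE x ARO. A short worked example with made-up numbers (not values from the question):

```python
# Minimal worked example of the quantitative risk figures; all values are illustrative.
asset_value = 250_000          # AV: replacement value of one server, in dollars
exposure_factor = 0.40         # EF: fraction of value lost per incident
annual_rate_of_occurrence = 2  # ARO: expected incidents per year

single_loss_expectancy = asset_value * exposure_factor                        # SLE = AV x EF
annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence   # ALE = SLE x ARO

print(f"SLE = ${single_loss_expectancy:,.0f}")   # SLE = $100,000
print(f"ALE = ${annual_loss_expectancy:,.0f}")   # ALE = $200,000
```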

18
Q

Dion Training has just acquired Small Time Tutors and ordered an analysis to determine the sensitivity level of the data contained in their databases. In addition to determining the sensitivity of the data, the company also wants to determine exactly how they have collected, used, and maintained the data throughout its data lifecycle. Once this is fully identified, Dion Training intends to update the terms and conditions on their website to inform their customers and prevent any possible legal issues from any possible mishandling of the data. Based on the information provided, which of the following types of analysis is the team at Dion Training going to perform?

Business impact analysis
Privacy impact analysis
Gap analysis
Tradeoff analysis

A

Privacy impact analysis

Explanation:
OBJ 1.1: A privacy impact assessment is conducted by an organization to determine where its privacy data is stored and how that privacy data moves throughout an information system. It evaluates the impacts that may be realized by a compromise to the confidentiality, integrity, and/or availability of the data. A tradeoff analysis compares potential benefits to potential risks and determines a course of action based on adjusting factors that contribute to each area. A business impact analysis describes the collaborative effort to identify those systems and software that perform essential functions, meaning the organization cannot run without them. A gap analysis measures the difference between the current state and the desired state to assess the scope of work included in a project. By measuring ALE, MTTR, MTBF, TCO, and other factors, the organization can identify how closely it is performing to the desired outcomes or requirements.

19
Q

Dion Consulting Group has been hired to analyze the cybersecurity model for a new videogame console system. The manufacturer’s team has come up with four recommendations to prevent intellectual property theft and piracy. As the cybersecurity consultant on this project, which of the following would you recommend they implement first?

Ensure that all games require excessive storage sizes so that it is difficult for unauthorized parties to distribute
Ensure that all games for the console are distributed as encrypted so that they can only be decrypted on the game console
Ensure that all screen capture content is visibly watermarked
Ensure that each individual console has a unique key for decrypting individual licenses and tracking which console has purchased which game

A

Ensure that each individual console has a unique key for decrypting individual licenses and tracking which console has purchased which game

Explanation:
OBJ 3.1: Ensuring that each console has a unique key will allow the console manufacturer to track who has purchased which games when using digital rights management licensing. This can be achieved using a hardware root of trust, such as a TPM module in the processor. While encrypting the games during distribution will provide some security, the games could be decrypted and distributed by unauthorized parties if the encryption key were ever compromised. Making the games arbitrarily large will frustrate both authorized and unauthorized users, which could negatively impact sales, so it is a poor recommendation to implement. Visibly watermarking everything will only aggravate the user, provide a negative customer experience, and not help fight software piracy.

20
Q

An organization implements a new PKI to secure its internal communications. Shortly after deployment, users report being unable to validate server certificates issued by the new CA. The administrator verifies the certificates are correctly installed and have not expired. What is the most likely cause of this issue?

The CA uses an unsupported hashing algorithm
Revoked certificates have not been updated in the CRL
Certificates were issued with the wrong key length
The root CA certificate is not trusted by client systems

A

The root CA certificate is not trusted by client systems

Explanation:
OBJ 3.3: If the root CA certificate is not trusted by client systems, they will reject all certificates issued by the CA, regardless of validity. Incorrect key lengths would lead to certificate generation failures, not trust issues. An unsupported hashing algorithm would cause issues during signature verification but is less common in new PKI implementations. A stale Certificate Revocation List (CRL) would only impact revoked certificates, not the validation of all issued certificates.

21
Q

Dion Automation Group specializes in installing ICS and SCADA systems. You have been asked to program a PLC to open the fill valve when the level of liquid in a tank reaches a sensor located at 1 foot above the bottom of the tank. Also, the valve should shut again once the level reaches another sensor at 9 feet above the bottom of the tank. Which of the following would you use to create the control sequence used by the PLC?

Ladder logic
Data historian
Human-machine interface
Safety instrumented system

A

Ladder logic

Explanation:
OBJ 3.5: Ladder Logic is a graphical, flowchart-like programming language used to program the special sequential control sequences used by a programmable logic controller (PLC). The human-machine interface (HMI) provides the input and output controls on a PLC to allow a user to configure and monitor the system. The HMI is the manual way to open and shut the valve, but it is not used to create the programming or automated sequences described in the scenario. The data historian is a type of software that aggregates and catalogs data from multiple sources within an industrial control system’s control loop. A Safety Instrumented System (SIS) is composed of sensors, logic solvers, and final control elements (devices like horns, flashing lights, and/or sirens) used to return an industrial process to a safe state after predetermined conditions are detected.
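
Ladder logic itself is graphical, so as a rough illustration only, this Python sketch models the same seal-in behavior the PLC rungs would implement: open the fill valve at the 1-foot (low) sensor, hold it open, and close it again at the 9-foot (high) sensor. Sensor readings are simulated values.

```python
# Python model of the seal-in (latching) logic the PLC rungs would express in ladder form.
def fill_valve_logic(low_sensor: bool, high_sensor: bool, valve_open: bool) -> bool:
    """Return the new valve state given the two level sensors and the current state."""
    if low_sensor:          # liquid has dropped to the 1 ft sensor: start filling
        return True
    if high_sensor:         # liquid has reached the 9 ft sensor: stop filling
        return False
    return valve_open       # between the sensors: hold the last state (seal-in rung)

valve = False
for level in [0.5, 3.0, 8.0, 9.2, 5.0]:          # simulated tank levels in feet
    valve = fill_valve_logic(level <= 1.0, level >= 9.0, valve)
    print(f"level={level:>4} ft  valve_open={valve}")
```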

22
Q

Which of the following services should you install to connect multiple remote branch offices to your cloud service provider’s virtual private cloud (VPC) using an IPSec site-to-site connection?

API gateway
VPN gateway
XML gateway
NAT gateway

A

VPN gateway

Explanation:
OBJ 2.5: A VPN gateway is a type of networking device that connects two or more devices or networks in a VPN infrastructure. It is designed to bridge the connection or communication between two or more remote sites, networks, or devices and/or to connect multiple VPNs. An extensible markup language (XML) gateway acts as an application layer firewall specifically to monitor XML formatted messages as they enter or leave a network or system. An XML gateway is used for inbound pattern detection and the prevention of outbound data leaks. XML is a document structure that is both human and machine-readable. Information within an XML document is placed within tags that describe how the information within the document is structured. An application programming interface (API) gateway is a special cloud-based service that is used to centralize the functions provided by APIs. An API is a type of software interface that offers a service to other pieces of software to build or connect to specific functions or features. A NAT Gateway within a cloud platform allows private subnets in a Virtual Private Cloud (VPC) access to the Internet.

23
Q

What mechanism ensures that users maintain a consistent session when interacting with a web application behind a load balancer?
Dynamic IP addressing
Active-active failover
Round-robin traffic distribution
Server affinity

A

Server affinity

Explanation:
OBJ 2.1: Server affinity, or sticky sessions, ensures that a user’s session is consistently routed to the same server, which is critical for applications that store session data locally on the server. Round-robin distribution and dynamic IP addressing handle traffic routing but do not ensure session persistence. Active-active failover ensures availability but does not address session consistency.
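
As a simple illustration of affinity, a load balancer can hash the session identifier onto the backend pool so the same session always reaches the same server; this minimal Python sketch uses placeholder backend names.

```python
# Minimal sketch of server affinity: a session identifier always maps to the same backend,
# so session data stored locally on that server remains available.
import hashlib

BACKENDS = ["app-server-1", "app-server-2", "app-server-3"]   # placeholder pool

def pick_backend(session_id: str) -> str:
    # Hash the session cookie and map it onto the backend pool; the same session
    # always lands on the same server as long as the pool is unchanged.
    digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

print(pick_backend("sess-8c1f"))  # the same session id...
print(pick_backend("sess-8c1f"))  # ...always routes to the same backend
print(pick_backend("sess-02aa"))  # a different session may land elsewhere
```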

24
Q

Which of the following will an adversary do during the weaponization phase of the Lockheed Martin kill chain? (SELECT THREE)

Conduct social media interactions with targeted individuals
Select a decoy document to present to the victim
Compromise the target's servers
Select backdoor implant and appropriate command and control infrastructure for operation
Obtain a weaponizer
Harvest email addresses

A

Select a decoy document to present to the victim
Select backdoor implant and appropriate command and control infrastructure for operation
Obtain a weaponizer

Explanation:
OBJ 1.4: During the weaponization phase, the adversary is exploiting the knowledge gained during the reconnaissance phase. During this phase, the adversary is still not initiating any contact with the target, though. Therefore, obtaining a ‘weaponizer’ (a tool to couple malware and exploit into a deliverable payload), crafting the decoy document, determining C2 infrastructure, and the weaponization of the payload all occur during the weaponization phase. Social media interactions may present an opportunity to deliver a payload; therefore, it occurs in the delivery phase. Compromising a server is also beyond the scope of weaponization, as it occurs in the exploitation phase. Harvesting emails is considered a reconnaissance phase action.

25
Q

A forensic investigation reveals that a threat actor has modified a system's firmware to maintain persistent access even after the operating system is reinstalled. This attack bypasses traditional software-based detection mechanisms. What tactic is the attacker most likely using?

EMI
BIOS/UEFI tampering
Shimming
Memory corruption

A

BIOS/UEFI tampering

Explanation:
OBJ 3.4: BIOS/UEFI tampering allows attackers to compromise a system's firmware, providing persistent and hard-to-detect access. EMI attacks disrupt hardware through electromagnetic means but do not involve firmware manipulation. Shimming focuses on intercepting communications between hardware and software but does not affect firmware. Memory corruption impacts running processes, not firmware.

26
Q

Which type of system would classify traffic as malicious or benign based on explicitly defined examples of malicious and benign traffic?

Artificial intelligence
Machine learning
Generative adversarial network
Deep learning

A

Machine learning

Explanation:
OBJ 4.2: A machine learning (ML) system uses a computer to accomplish a task without being explicitly programmed. In the context of cybersecurity, ML generally works by analyzing example data sets to create its own ability to classify future items presented. If the system was presented with large datasets of malicious and benign traffic, it will learn which is malicious and categorize future traffic presented to it. Artificial intelligence is the science of creating machines to develop problem-solving and analysis strategies without significant human direction or intervention. AI goes beyond ML and can make a more complicated decision than just the classifications made by ML. A deep learning system can determine what is malicious traffic without having the prior benefit of being told what is benign/malicious. A generative adversarial network is an underlying strategy used to accomplish deep learning but is not specific to the scenario described.

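To make the distinction concrete, here is a minimal scikit-learn sketch (assumed to be available) that fits a classifier on explicitly labeled benign and malicious flows and then classifies new traffic; the features and values are toy data.

```python
# Minimal ML sketch: the model learns from explicitly labeled benign (0) and malicious (1) examples.
from sklearn.tree import DecisionTreeClassifier

# Each row: [bytes sent, distinct destination ports, failed logins observed]
X_train = [
    [1_200,  2, 0],    # benign
    [3_400,  3, 0],    # benign
    [90_000, 60, 0],   # malicious (port-scanning pattern)
    [5_500,  2, 40],   # malicious (password-guessing pattern)
]
y_train = [0, 0, 1, 1]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[80_000, 55, 1]]))   # likely [1] - resembles the malicious examples
print(model.predict([[2_000,  2, 0]]))    # likely [0] - resembles the benign examples
```
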
27
Q

An organization detects unusual activity on a critical server, including failed login attempts followed by access using valid administrative credentials. Upon investigation, it was discovered that the attacker extracted hashed passwords from memory and used them to gain unauthorized access. Which of the following best describes the attacker’s tactic?

Session hijacking to impersonate legitimate users on the network
Privilege escalation and gaining access through exploitation of software vulnerabilities
Credential dumping to obtain authentication information for lateral movement
Brute-forcing login credentials to bypass authentication controls

A

Credential dumping to obtain authentication information for lateral movement

Explanation:
OBJ 3.2: Credential dumping involves extracting authentication information, such as hashed passwords, from memory, disk, or system processes. This tactic allows attackers to use valid credentials to access systems and move laterally within the network. While privilege escalation and session hijacking are related tactics, they do not specifically involve extracting stored credentials. Brute-forcing relies on systematically guessing passwords, which is distinct from credential dumping.

28
Q

GlobalBitzness is a multinational organization that is implementing a site-to-site VPN to securely connect its regional offices. The VPN experiences frequent interruptions due to network instability, causing disruption to critical applications. Which of the following configurations would best improve VPN reliability?

Enabling split tunneling to reduce the volume of traffic sent over the VPN
Increasing the VPN encryption level to ensure higher data security
Switching to a client-based VPN solution for all remote offices
Configuring dynamic failover using multiple VPN gateways

A

Configuring dynamic failover using multiple VPN gateways

Explanation:
OBJ 2.1: Dynamic failover with multiple VPN gateways enhances reliability by providing redundancy. If one gateway fails or becomes unstable, traffic is automatically rerouted to a backup gateway, ensuring continuous connectivity. Enabling split tunneling reduces traffic but does not address network instability. Increasing encryption levels enhances security but has no impact on VPN reliability. Switching to a client-based VPN is impractical for site-to-site connections and increases management overhead.

29
Q

A multinational corporation is transitioning to a deperimeterized network to better support its remote workforce and cloud applications. The IT team seeks a solution that enables secure, high-performance connectivity across global offices while integrating with cloud-based security tools. What approach should the organization adopt?

Deploy a software-based network solution to prioritize and secure traffic dynamically
Establish dedicated leased lines for secure connections between offices
Use firewalls at each branch location to enforce security policies locally
Implement a dynamic routing protocol to manage traffic across the WAN

A

Deploy a software-based network solution to prioritize and secure traffic dynamically

Explanation:
OBJ 2.6: SD-WAN is a software-based solution that supports secure, dynamic traffic management and integrates with cloud security tools, making it ideal for deperimeterized networks. Dynamic routing protocols alone lack the security integration required. Firewalls and leased lines provide security but are less flexible and do not optimize traffic dynamically.

30
Q

Your company is required to remain compliant with PCI-DSS due to the type of information processed by your systems. If there was a breach of this data, which type of disclosure would you be required to provide during your incident response efforts?

Notification to federal law enforcement
Notification to Visa and Mastercard
Notification to local law enforcement
Notification to your credit card processor

A

Notification to your credit card processor

Explanation:
OBJ 1.3: Any organization that processes a credit card will be required to work with their credit card processor instead of working directly with the card issuers (Visa and Mastercard). Conducting notification to your bank or credit card processor is one of the first steps in the incident response effort for a breach of this type of data. Typically, law enforcement does not have to be notified of a data breach at a commercial organization.

31
Q

Flipfolio is a large financial organization that is deploying an IPS to secure its network. The IPS must meet the following requirements:

- Detect and block malicious traffic targeting internal systems
- Minimize latency for high-frequency trading applications
- Generate detailed logs for compliance audits
- Avoid blocking legitimate traffic during initial deployment

Which of the following deployment strategies best aligns with these requirements?

Passive mode to monitor and log traffic but not block it
Inline mode with all traffic immediately subject to blocking rules
Inline mode with an alert-only policy during the initial deployment phase
Bypass mode for trading traffic while enforcing full blocking rules for other traffic

A

Inline mode with an alert-only policy during the initial deployment phase

Explanation:
OBJ 2.1: Inline mode ensures malicious traffic can be blocked before reaching internal systems, while an alert-only policy during the initial deployment minimizes the risk of blocking legitimate traffic due to misconfigurations. This approach balances security, compliance and operational requirements. Inline mode with blocking rules immediately enforced risks legitimate traffic being disrupted, which is unsuitable during initial deployment. Passive mode cannot block malicious traffic, failing to meet the blocking requirement. Bypass mode for trading traffic compromises security for critical applications, creating unacceptable risks.

32
Q

A manufacturing company uses Internet of Things (IoT) devices in its production line, all of which rely on embedded firmware. After a ransomware attack, investigators find that the firmware on several IoT devices was modified to disrupt operations and resist restoration attempts. The modified firmware disables updates, leaving the devices permanently compromised. What is the most effective strategy to restore operational integrity and prevent similar firmware tampering?

Install a hardware-based root of trust to enforce firmware integrity on all devices
Reset the IoT devices to factory settings and apply the latest firmware updates
Disable the use of IoT devices in production environments to eliminate the attack surface
Replace all compromised IoT devices with new hardware to eliminate the risk

A

Install a hardware-based root of trust to enforce firmware integrity on all devices

Explanation:
OBJ 3.4: A hardware-based root of trust validates firmware integrity before execution, preventing tampered firmware from running. Replacing all IoT devices is costly and does not address the root cause. Resetting devices to factory settings and updating firmware may work temporarily but does not ensure future integrity. Disabling IoT devices eliminates functionality and is an impractical solution for a production environment.

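Conceptually, the root of trust measures the firmware image and only allows it to run if the measurement matches a provisioned known-good value. Here is a minimal Python sketch of that integrity check with illustrative data; a real root of trust performs signature verification in hardware before any code executes.

```python
# Minimal sketch of a firmware integrity check against a provisioned known-good measurement.
import hashlib

def firmware_is_trusted(image: bytes, known_good_digest: str) -> bool:
    """Accept the image only if its SHA-256 matches the provisioned measurement."""
    return hashlib.sha256(image).hexdigest() == known_good_digest

good_image = b"\x7fFIRMWARE v1.4"                      # stand-in for the vendor-released image
provisioned = hashlib.sha256(good_image).hexdigest()   # measurement stored at provisioning time

print(firmware_is_trusted(good_image, provisioned))                 # True  - boot allowed
print(firmware_is_trusted(good_image + b"\x90\x90", provisioned))   # False - tampered image blocked
```
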
33
Q

Dave's company utilizes Google's G-Suite environment for file sharing and office productivity, Slack for internal messaging, and AWS for hosting their web servers. Which of the following types of cloud deployment models is being used?

Private
Public
Community
Multi-cloud

A

Multi-cloud

Explanation:
OBJ 2.5: Multi-cloud is a cloud deployment model where the cloud consumer uses multiple public cloud services. In this example, Dave is using the Google Cloud, Amazon's AWS, and Slack's cloud-based SaaS product simultaneously. A private cloud is a cloud that is deployed for use by a single entity. A public cloud is a cloud that is deployed for shared use by multiple independent tenants. A community cloud is a cloud that is deployed for shared use by cooperating tenants.

34
Q

Your company has decided to move all of its data into the cloud. Your company is concerned about the privacy of its data due to some recent data breaches that have been in the news. Therefore, they have decided to purchase cloud storage resources that will be dedicated solely for their use. Which of the following types of clouds is your company using?

Community
Hybrid
Private
Public

A

Private

Explanation:
OBJ 2.5: A private cloud contains services offered either over the Internet or a private internal network and only to select users instead of the general public. A private cloud is usually managed via internal resources. The terms private cloud and virtual private cloud (VPC) are often used interchangeably. A public cloud contains services offered by third-party providers over the public Internet and is available to anyone who wants to use or purchase them. They may be free or sold on-demand, allowing customers to pay only per usage for the CPU cycles, storage, or bandwidth they consume. A community cloud is a collaborative effort in which infrastructure is shared between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party and hosted internally or externally.

35
Q

Dion Training is in early discussions with a large university to license its cybersecurity courses as part of their upcoming semester. Both organizations have decided to enter into an exploratory agreement while negotiating the detailed terms of the upcoming contract. Which of the following documents would best serve this purpose?

ISA
NDA
MOU
SLA

A

MOU

Explanation:
OBJ 1.1: A memorandum of understanding (MOU) is used as a preliminary or exploratory agreement to express the intent of the two companies to work together. A service level agreement (SLA) is a contractual agreement setting out the detailed terms under which a service is provided. An interconnection security agreement (ISA) governs the relationship between any federal agency and a third party interconnecting their systems. A non-disclosure agreement (NDA) is the legal basis for protecting information assets.

36
Q

Dion Training Solutions is currently calculating the risk associated with building a new data center in a hurricane-prone location. The data center would cost $3,125,000 to build and equip. Based on their assessment of the history of the location, a major hurricane occurs every 20 years and their data center would risk losing 60% of its value due to downtime and possible structural damages. If the data center is built in this location, what is the annual loss expectancy for this data center?

$156,250
$1,875,000
$93,750
$625,000

A

$93,750

Explanation:
OBJ 1.2: The annual loss expectancy (ALE) of the data center would be $93,750. The annual loss expectancy is the average amount that would be lost over a year for a given asset and is calculated by multiplying the single loss expectancy and the annual rate of occurrence together. The single loss expectancy (SLE) is the amount of value lost in a single occurrence of a risk factor being realized and is calculated by multiplying the asset value and the exposure factor together. The annual rate of occurrence (ARO) is the number of times a risk might be realized in a given year. In this scenario, the ARO is 1 occurrence divided by 20 years (0.05), and the SLE is the 60% exposure factor times the $3,125,000 asset value ($1,875,000). Therefore, the ALE equals 0.05 x $1,875,000, which is $93,750.

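The same arithmetic, worked in a short Python snippet using the values from the scenario:

```python
# Worked ALE arithmetic for this question, using the values given in the scenario.
asset_value = 3_125_000
exposure_factor = 0.60
annual_rate_of_occurrence = 1 / 20          # one major hurricane every 20 years

single_loss_expectancy = asset_value * exposure_factor                        # $1,875,000
annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence   # $93,750

print(f"SLE = ${single_loss_expectancy:,.0f}")   # SLE = $1,875,000
print(f"ALE = ${annual_loss_expectancy:,.0f}")   # ALE = $93,750
```
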
37
Q

Which of the following is a key consideration when developing organizational policies on the use of AI?

Prioritizing non-explainable models to reduce development time
Ensuring AI systems comply with applicable legal and ethical standards
Delegating policy creation to third-party vendors to avoid bias
Avoiding AI implementation in decision-making processes entirely

A

Ensuring AI systems comply with applicable legal and ethical standards

Explanation:
OBJ 1.5: Organizational policies on AI use should prioritize compliance with legal and ethical standards to mitigate risks such as bias, privacy violations, or misuse. These policies must outline clear guidelines for ethical AI development, deployment, and monitoring to align with regulatory requirements. Prioritizing non-explainable models undermines the transparency and accountability needed for compliance. Avoiding AI implementation is impractical, as AI is increasingly integral to organizational operations. Delegating policy creation to third parties may lead to gaps in alignment with the organization’s specific needs and compliance obligations.

38
Q

During an assessment of the POS terminals that accept credit cards, a cybersecurity analyst notices a recent Windows operating system vulnerability exists on every terminal. Since these systems are all embedded and require a manufacturer update, the analyst cannot install Microsoft's regular patch. Which of the following options would be best to ensure the systems remain protected and compliant with the rules outlined by the PCI DSS?

Identify, implement, and document compensating controls
Remove the POS terminals from the network until the vendor releases a patch
Build a custom OS image that includes the patch
Replace the Windows POS terminals with standard Windows systems

A

Identify, implement, and document compensating controls

Explanation:
OBJ 1.3: Since the analyst cannot remediate the vulnerabilities by installing a patch, the next best action would be to implement some compensating controls. If a vulnerability exists that cannot be patched, compensating controls can mitigate the risk. Additionally, the analyst should document the current situation to achieve compliance with PCI DSS. The analyst likely cannot remove the terminals from the network without affecting business operations, so this is a bad option. The analyst should not build a custom OS image with the patch since this could void the support agreement with the manufacturer and introduce additional vulnerabilities. Also, it would be difficult (or impossible) to replace the POS terminals with standard Windows systems due to the custom firmware and software utilized on these systems.

39
NA
40
Q

Dion Training hosts its learning management servers in the cloud. The cloud provider they selected uses a proprietary virtual machine format for their compute instances which more efficiently uses the vCPUs' processing power and leads to immediate cost savings for Dion Training each month. Unfortunately, these compute instances are not cross-cloud compatible and cannot be interconnected with other storage or compute resources outside of this cloud provider’s architecture. The cloud provider also has an option to use a standard, open-source virtual machine format instead that offers complete interoperability with other cloud providers but costs an additional 20% more on average. What vendor risk is Dion Training assuming if they decide to use the proprietary compute instances instead of the standard type?

Vendor lockout
Vendor viability
Vendor visibility
Vendor lock-in

A

Vendor lockout

Explanation:
OBJ 1.2: This scenario describes vendor lockout. Vendor lockout occurs when a vendor's product is developed in a way that makes it incompatible with other products, so the ability to integrate it with other vendors' products is either not feasible or does not exist. Vendor lock-in occurs when a customer is dependent on a vendor for products or services because switching is either impossible or would result in substantial complexity and costs. Vendor viability refers to whether a vendor has a viable and in-demand product and the financial means to remain in business on an ongoing basis. Vendor visibility is a term used to define how transparent a supplier is with their payment and shipment status details.

41
Tony works for a company as a cybersecurity analyst. His company runs a website that allows public postings. Recently, users have started complaining about the website having pop-up messages asking for their username and password. Simultaneously, your security team has noticed a large increase in the number of compromised user accounts on the system. What type of attack is most likely the cause of both of these events? Rootkit Cross-site request forgery SQL injection Cross-site scripting
Cross-site scripting Explanation: OBJ 4.2: This scenario is a perfect example of the effects of a cross-site scripting (XSS) attack. If your website's HTML code does not perform input validation to remove scripts that may be entered by a user, then an attacker can create a popup window that collects passwords and uses that information to compromise other accounts further. A cross-site request forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they are currently authenticated. An XSS will allow an attacker to execute arbitrary JavaScript within the victim's browser (such as creating pop-ups). A CSRF would allow an attack to induce a victim to perform actions they do not intend to perform. A rootkit is a set of software tools that enable an unauthorized user to control a computer system without being detected. SQL injection is the placement of malicious code in SQL statements via web page input. None of the things described in this scenario would indicate a CSRF, rootkit, or SQL injection. For support or reporting issues, include Question ID: 63fe073b3b7322449ddbcb51 in your ticket. Thank you.
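As a hedged illustration of the input validation and output encoding fix the explanation describes (not part of the original question), the following minimal Python sketch shows how escaping user-supplied content before rendering it turns a script-based pop-up payload into inert text; the function name and payload are hypothetical.

```python
import html

def render_comment(user_input: str) -> str:
    # Escape HTML metacharacters so user-supplied text is displayed as
    # plain text instead of being executed as a script by the browser.
    return "<p>" + html.escape(user_input) + "</p>"

# A stored-XSS style payload that would otherwise create a credential-stealing pop-up:
payload = "<script>prompt('Session expired. Please re-enter your password:')</script>"
print(render_comment(payload))
# <p>&lt;script&gt;prompt('Session expired. Please re-enter your password:')&lt;/script&gt;</p>
```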
42
In a Zero Trust architecture, what is the primary goal of defining subject-object relationships? To simplify resource management by grouping all objects into a single access policy To establish granular access controls based on the interaction between users and resources To ensure that network devices can communicate freely with one another To enforce consistent permissions for all users across the organization, regardless of location
To establish granular access controls based on the interaction between users and resources Explanation: OBJ 2.6 - Defining subject-object relationships involves creating policies that specify which users (subjects) can access specific resources (objects) and under what conditions. This approach ensures granular control over access, aligning with Zero Trust principles. Applying consistent permissions to all users, allowing unrestricted device communication, or grouping all objects into a single policy undermines security and control. For support or reporting issues, include Question ID: 674f743225e0bdcbe9af8fbd in your ticket. Thank you.
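To make the idea of subject-object policies concrete, here is a minimal, hypothetical Python sketch (not from the exam content): access is default-deny and is granted only when an explicit subject-object rule and its contextual condition both match the request.

```python
# Hypothetical subject-object policy table: each rule names a subject role,
# an object (resource), the allowed action, and a condition on the request context.
POLICIES = [
    {"subject": "hr_staff", "object": "employee_records", "action": "read",
     "condition": lambda ctx: ctx["device_compliant"] and ctx["mfa_verified"]},
    {"subject": "finance", "object": "payment_history", "action": "read",
     "condition": lambda ctx: ctx["mfa_verified"]},
]

def is_allowed(subject_role: str, obj: str, action: str, ctx: dict) -> bool:
    # Default deny: access is granted only if an explicit rule matches this
    # subject-object pair and its condition holds for the current request.
    return any(
        p["subject"] == subject_role and p["object"] == obj
        and p["action"] == action and p["condition"](ctx)
        for p in POLICIES
    )

print(is_allowed("hr_staff", "employee_records", "read",
                 {"device_compliant": True, "mfa_verified": True}))   # True
print(is_allowed("finance", "employee_records", "read",
                 {"device_compliant": True, "mfa_verified": True}))   # False - no matching rule
```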
43
What is the primary objective of the DMA in the context of cybersecurity and compliance? Regulating digital platform practices to ensure fair competition Governing the use of cryptographic methods in secure communication Protecting consumer health information in the healthcare industry Establishing standards for financial reporting and fraud prevention
Regulating digital platform practices to ensure fair competition Explanation: OBJ 1.3: The Digital Markets Act (DMA) is a European Union regulation aimed at ensuring fair competition in the digital marketplace. It imposes specific obligations on large online platforms designated as "gatekeepers" to prevent monopolistic practices and protect consumer and business interests. Protecting consumer health information pertains to regulations like HIPAA, not DMA. Establishing standards for financial reporting aligns with the Sarbanes-Oxley Act (SOX), not DMA. Governing cryptographic methods is more aligned with standards such as NIST CSF, not DMA. For support or reporting issues, include Question ID: 674feb69ec1a5f7ce5d83877 in your ticket. Thank you.
44
A cybersecurity analyst working at a major university is reviewing the SQL server log of completed transactions and notices the following entry: "select ID, GRADE from GRADES where ID = 1235235; UPDATE GRADES set GRADE='A' where ID=1235235' Based on this transaction log, which of the following most likely occurred? Someone used an SQL injection to assign straight A's to the student with ID #1235235 The SQL server has insufficient logging and monitoring The application and the SQL database are functioning properly A student with ID #1235235 used an SQL injection to give themselves straight A's
Someone used an SQL injection to assign straight A's to the student with ID #1235235 Explanation: OBJ 4.2: Based on this transaction log entry, it appears that the ID# field was not properly validated before being passed to the SQL server. This would allow someone to conduct an SQL injection and retrieve the student's grades and set all of this student's grades to an 'A' at the same time. It is common to look for a '1==1' type condition to identify an SQL injection. There are other methods to conduct an SQL injection attack that could be utilized by an attacker. If input validation is not being performed on user-entered data, an attacker can exploit any SQL language aspect and inject SQL-specific commands. This entry is suspicious and indicates that either the application or the SQL database is not functioning properly. Still, there appears to be adequate logging and monitoring based on what we can see, and the question never indicates logging was an issue. An SQL database would not be designed to set ALL of a particular student's grades to A's, thus making this single entry suspicious. Most SQL statements in an SQL log will be fairly uniform and repetitive by nature when you review them. This leaves us with the question of who performed this SQL injection. Per the question choices, it could be the student with ID# 1235235 or "someone." While it seems as if student #1235235 had the most to gain from this, without further investigation, we cannot prove that it actually was student #1235235 who performed the SQL injection. Undoubtedly, student #1235235 should be a person of interest in any ensuing investigations, but additional information (i.e., whose credentials were being used, etc.) should be gathered before making any accusations. Therefore, the answer is that "someone" performed this SQL injection. For support or reporting issues, include Question ID: 63fe07653b7322449ddbcd54 in your ticket. Thank you.
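As a hedged illustration of the input validation fix implied by the explanation, this minimal sketch (using Python's built-in sqlite3 module and an invented grades table) shows how a parameterized query binds the attacker's input as a single value instead of executing the appended UPDATE statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (id INTEGER, grade TEXT)")
conn.execute("INSERT INTO grades VALUES (1235235, 'C')")

def get_grade(student_id):
    # The ? placeholder makes the driver bind student_id as data, so a value like
    # "1235235; UPDATE grades SET grade='A' ..." cannot change the statement's structure.
    return conn.execute(
        "SELECT id, grade FROM grades WHERE id = ?", (student_id,)
    ).fetchall()

print(get_grade("1235235; UPDATE grades SET grade='A' WHERE id=1235235"))  # [] - treated as one literal value
print(get_grade(1235235))  # [(1235235, 'C')] - legitimate lookup still works
```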
45
Praveen is currently investigating activity from an attacker who compromised a host on the network. The individual appears to have used credentials belonging to a janitor. After breaching the system, the attacker entered some unrecognized commands with very long text strings and then began using the sudo command to carry out actions. What type of attack has just taken place? Phishing Social engineering Privilege escalation Session hijacking
Privilege escalation Explanation: OBJ 4.2: The use of long query strings points to a buffer overflow attack, and the sudo command confirms the elevated privileges after the attack. This indicates a privilege escalation has occurred. While the other three options may have been used as an initial access vector, they cannot be confirmed based on the question's details. Only a privilege escalation is currently verified within the scenario due to the use of sudo. For support or reporting issues, include Question ID: 63fe071a3b7322449ddbc9af in your ticket. Thank you.
46
A cybersecurity analyst just finished conducting an initial vulnerability scan and is reviewing their results. To avoid wasting time on results that are not related to actual vulnerabilities, the analyst wants to remove any false positives before remediating the findings. Which of the following is an indicator that something in their results would be a false positive? A scan result that shows a version that is different from the automated asset inventory An HTTPS entry that indicates the web page is securely encrypted A finding that shows the scanner compliance plug-ins are not up-to-date Items classified by the system as Low or as For Informational Purposes Only
Items classified by the system as Low or as For Informational Purposes Only Explanation: OBJ 4.1: When conducting a vulnerability scan, it is common for the report to include some findings that are classified as “low” priority or “for informational purposes only.” These are most likely false positives and can be ignored by the analyst when starting their remediation efforts. "An HTTPS entry that indicates the web page is securely encrypted" is not a false positive but a true negative (a non-issue). A scan result showing a different version from the automated asset inventory should be investigated and is likely a true positive. A finding that shows the scanner compliance plug-ins are not up-to-date would likely also be a true positive that should be investigated. For support or reporting issues, include Question ID: 63fe073d3b7322449ddbcb6a in your ticket. Thank you.
47
Which analysis framework provides a graphical depiction of the attacker's approach relative to a kill chain? Diamond Model of Intrusion Analysis OpenIOC MITRE ATT&CK framework Lockheed Martin cyber kill chain
Diamond Model of Intrusion Analysis Explanation: OBJ 1.4: The Diamond Model provides an excellent methodology for communicating cyber events and allowing analysts to derive mitigation strategies implicitly. The Diamond Model is constructed around a graphical representation of an attacker's behavior. The MITRE ATT&CK framework provides explicit pseudo-code examples for detecting or mitigating a given threat within a network and ties specific behaviors back to individual actors. The Lockheed Martin cyber kill chain provides a general life cycle description of how attacks occur but does not deal with the specifics of how to mitigate them. OpenIOC contains a depth of research on APTs but does not integrate the detection and mitigation strategy. For support or reporting issues, include Question ID: 63fe07403b7322449ddbcb88 in your ticket. Thank you.
48
A new alert has been distributed throughout the information security community regarding a critical Apache vulnerability. What action could you take to ONLY identify the known vulnerability? Perform an unauthenticated vulnerability scan on all servers in the environment Perform a web vulnerability scan on all servers in the environment Perform a scan for the specific vulnerability on all web servers Perform an authenticated scan on all web servers in the environment
Perform a scan for the specific vulnerability on all web servers Explanation: OBJ 3.6: Since you wish to check for only the known vulnerability, you should scan for that specific vulnerability on all web servers. All web servers are chosen because Apache is a web server application. While performing an authenticated scan of all web servers or performing a web vulnerability scan of all servers would also find these vulnerabilities, it is a much larger scope. It would waste time and processing power by conducting these scans instead of properly scoping the scans based on your needs. Performing unauthenticated vulnerability scans on all servers is also too large in scope (all servers) while also being less effective (unauthenticated scan). For support or reporting issues, include Question ID: 63fe07313b7322449ddbcad4 in your ticket. Thank you.
49
What kind of security vulnerability would a newly discovered flaw in a software application be considered? HTTP header injection vulnerability Time-to-check to time-to-use flaw Zero-day vulnerability Input validation flaw
Zero-day vulnerability Explanation: OBJ 4.2: A zero-day vulnerability refers to a hole in software unknown to the vendor and newly discovered. This security hole can become exploited by hackers before the vendor becomes aware of it and can fix it. An input validation attack is any malicious action against a computer system that involves manually entering strange information into a normal user input field that is successful due to an input validation flaw. HTTP header injection vulnerabilities occur when user input is insecurely included within server response headers. The time of check to time of use is a class of software bug caused by changes in a system between checking a condition (such as a security credential) and using the check's results and the difference in time passed. This is an example of a race condition. For support or reporting issues, include Question ID: 63fe076e3b7322449ddbcdc7 in your ticket. Thank you.
50
A company is configuring Security-Enhanced Linux (SELinux) on its servers to enforce mandatory access controls (MAC). During testing, several critical applications fail to run as expected. What is the most effective way to resolve this issue without compromising SELinux security? Adjust SELinux policies to grant the necessary permissions for the applications Switch SELinux to permissive mode to log issues but not enforce policies Use discretionary access control (DAC) instead of SELinux for flexibility Disable SELinux to allow the applications to run without restrictions
Adjust SELinux policies to grant the necessary permissions for the applications Explanation: OBJ 3.2: SELinux uses mandatory access controls to restrict application behavior based on defined policies. When applications fail, adjusting SELinux policies to explicitly grant required permissions ensures functionality without disabling critical security controls. Switching to permissive mode or disabling SELinux compromises security, and discretionary access control lacks the strict enforcement provided by SELinux. For support or reporting issues, include Question ID: 675143c8b702e8776b8d5525 in your ticket. Thank you.
51
Dion Training is creating a new security policy that states all access to system resources will be controlled based on the user's job functions and tasks within the organization. For example, only people working in Human Resources can access employee records, and only the people working in finance can access customer payment histories. Which of the following policies or security practices is BEST described by this new policy? Job rotation Least privilege Separation of duties Mandatory vacation
Least privilege Explanation: OBJ 1.2: Least privilege is a security policy that states someone or something should be allocated the minimum necessary rights, privileges, or information to perform the specific role. Separation of duties is a security policy that states that duties and responsibilities should be divided among individuals to prevent ethical conflicts or abuse of powers. Job rotation is a security policy that prevents any one individual from performing the same role or tasks for too long. Job rotation is useful in deterring fraud and providing better oversight of the person's duties. Mandatory vacation is a security policy that states when and how long an employee must take time off from work so that their activities may be subjected to a security review by having another employee conduct their job functions. For support or reporting issues, include Question ID: 63fe081a3b7322449ddbd63a in your ticket. Thank you.
52
An enterprise is integrating AI-enabled digital assistants to streamline operations. These assistants require access to various corporate systems and data. During a security review, the team identifies several potential risks. Which of the following risks represents a directly exploitable vulnerability? Prompt injection attacks, leading to manipulation of AI outputs and responses Access control misconfiguration, allowing unauthorized access to corporate systems Excessive agency of the AI, leading to unapproved actions Data loss prevention failures, exposing sensitive enterprise information
Access control misconfiguration, allowing unauthorized access to corporate systems Explanation: OBJ 1.5 - Access control misconfiguration poses the most significant risk when using AI-enabled digital assistants, as these tools often require extensive permissions to perform tasks within enterprise systems. If improperly configured, unauthorized individuals or malicious actors could exploit the assistant, leading to the compromise of sensitive data or critical operations. While the excessive agency of AI represents a potential risk, it typically stems from misaligned objectives rather than directly exploitable vulnerabilities. Data loss prevention (DLP) failures may expose data but are less likely to result from access mismanagement. Prompt injection attacks, though potentially dangerous, usually require prior knowledge or specific system weaknesses to execute effectively. For support or reporting issues, include Question ID: 674f5f936a7159c3dccefc47 in your ticket. Thank you.
53
An e-commerce platform integrates a CDN to enhance performance and security. The organization needs to achieve the following objectives: reduce latency for users accessing the platform globally; mitigate DDoS attacks; cache static content like images and videos to reduce load on origin servers; and ensure the integrity of data delivered to users. Which of the following configurations best meets these requirements? Use load balancing, token-based authentication, and IP blocking Enable serverless computing, API key management, and file encryption Enable geo-distribution, a WAF, and TLS encryption Configure rate limiting, dynamic content caching, and DNS filtering
Enable geo-distribution, a WAF, and TLS encryption Explanation: OBJ 2.1: Geo-distribution reduces latency by serving content from edge servers located closer to users. A Web Application Firewall (WAF) integrated with the Content Delivery Network (CDN) helps mitigate Distributed Denial of Service (DDoS) attacks and protect against other web-based threats. TLS encryption ensures the integrity and confidentiality of data delivered to users. Load balancing and token-based authentication are useful but do not directly address global latency and content caching. Rate limiting and DNS filtering may improve security but are not core CDN functions for performance enhancement. Serverless computing and API key management are unrelated to the stated objectives of latency reduction and static content delivery. For support or reporting issues, include Question ID: 6750f9a2ec280b7d2c7fa26b in your ticket. Thank you.
54
Which of the following is an example of a threat to an AI model that could compromise its integrity or functionality? Insufficient model documentation Training data poisoning Overuse of explainable AI techniques High computational requirements
Training data poisoning Explanation: OBJ 1.5: Training data poisoning occurs when malicious actors introduce corrupted or misleading data into the training set of an AI model, causing the model to make incorrect or biased predictions. This threat directly compromises the integrity and reliability of the AI system. Insufficient model documentation may hinder understanding but is not a direct threat to the model. High computational requirements can impact resource efficiency but do not threaten the model's integrity. Overuse of explainable AI techniques is not a recognized threat to model functionality. For support or reporting issues, include Question ID: 674feea9db3fddf57c662c70 in your ticket. Thank you.
55
Which of the following best describes the primary function of Open Authorization (OAuth) in an enterprise environment? It provides password management for users accessing multiple systems on enterprise networks It enables users to grant applications access to their resources without sharing credentials It enforces multifactor authentication (MFA) to secure enterprise-level user accounts It validates certificates for encrypted communication between servers
It enables users to grant applications access to their resources without sharing credentials Explanation: OBJ 3.1 - OAuth is a widely used authorization framework that allows users to grant third-party applications access to their resources securely without exposing their credentials. This ensures a high level of security and user convenience. Password management, MFA, and certificate validation are separate security mechanisms that do not describe OAuth's core functionality. For support or reporting issues, include Question ID: 67506312f5372d1adb3df71c in your ticket. Thank you.
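A minimal sketch of the authorization-code exchange that OAuth enables, assuming hypothetical endpoints and client credentials (auth.example.com, app.example.com, and the requests package): the point is that the application ends up holding a scoped access token rather than the user's password.

```python
import requests

# Hypothetical endpoint and client registration values -- placeholders, not a real provider's API.
TOKEN_URL = "https://auth.example.com/oauth/token"

def exchange_code_for_token(auth_code: str) -> dict:
    # Authorization-code grant: the app trades the short-lived code the user approved
    # for an access token; the user's credentials are never shared with the app.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "example-client-id",
        "client_secret": "example-client-secret",
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # typically contains access_token, token_type, expires_in

# token = exchange_code_for_token("code-from-redirect")
# requests.get("https://api.example.com/resource",
#              headers={"Authorization": f"Bearer {token['access_token']}"})
```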
56
A popular game allows for in-app purchases to acquire extra lives in the game. When a player purchases the extra lives, the number of lives is written to a configuration file on the gamer's phone. A hacker loves the game but hates having to buy lives all the time, so they developed an exploit that allows a player to purchase 1 life for $0.99 and then modifies the content of the configuration file to claim 100 lives were purchased before the application reads the number of lives purchased from the file. Which of the following types of vulnerabilities did the hacker exploit? Dereferencing Race condition Broken authentication Sensitive data exposure
Race condition Explanation: OBJ 4.2: Race conditions occur when the outcome from execution processes is directly dependent on the order and timing of certain events. Those events fail to execute in the order and timing intended by the developer. In this scenario, the hacker's exploit is racing to modify the configuration file before the application reads the number of lives from it. Sensitive data exposure is a fault that allows privileged information (such as a token, password, or PII) to be read without being subject to the proper access controls. Broken authentication refers to an app that fails to deny access to malicious actors. Dereferencing attempts to access a pointer that references an object at a particular memory location. For support or reporting issues, include Question ID: 63fe07813b7322449ddbceb7 in your ticket. Thank you.
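A toy Python sketch of the check-to-use window the hacker exploits (the file name, timing, and values are invented for illustration): the purchase amount is written, and a concurrent tamperer rewrites the file before the application reads it back.

```python
import json
import threading
import time

CONFIG = "lives.json"

def record_purchase(lives_bought: int) -> int:
    # Legitimate flow: write the purchased amount, then (later) read it back.
    with open(CONFIG, "w") as f:
        json.dump({"lives": lives_bought}, f)
    time.sleep(0.1)                      # the window between the write (check) and the read (use)
    with open(CONFIG) as f:
        return json.load(f)["lives"]

def tamper():
    # Exploit: rewrite the file inside the check-to-use window.
    time.sleep(0.05)
    with open(CONFIG, "w") as f:
        json.dump({"lives": 100}, f)

t = threading.Thread(target=tamper)
t.start()
print("lives credited:", record_purchase(1))   # typically prints 100, not 1 (timing-dependent)
t.join()
```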
57
After 9 months of C++ programming, the team at Whammiedyne systems has released their new software application. Within just 2 weeks of release, though, the security team discovered multiple serious vulnerabilities in the application that must be corrected. To retrofit the source code to include the required security controls will take 2 months of labor and will cost $100,000. Which development framework should Whammiedyne use in the future to prevent this situation from occurring in other projects? DevOps Waterfall Model Agile Model DevSecOps
DevSecOps Explanation: OBJ 2.2: DevSecOps is a combination of software development, security, and IT operations and refers to the practice of integrating each discipline with the others. DevSecOps approaches are generally better postured to prevent problems like this because security is built in during development instead of retrofitted afterward. The DevOps development model incorporates IT staff but does not include security personnel. The agile software development model focuses on iterative and incremental development to account for evolving requirements and expectations. The waterfall software development model cascades the phases of the SDLC so that each phase will start only when all of the tasks identified in the previous phase are complete. A team of developers can make secure software using either the waterfall or agile model. Therefore, they are not the right answers to solve this issue. For support or reporting issues, include Question ID: 63fe06d43b7322449ddbc64e in your ticket. Thank you.
58
Fail To Pass Systems has just been the victim of another embarrassing data breach. Their database administrator needed to work from home this weekend, so he downloaded the corporate database to his work laptop. On his way home, he left the laptop in an Uber, and a few days later, the data was posted on the internet. Which of the following mitigations would have provided the greatest protection against this data breach? Require data at rest encryption on all endpoints Require a VPN to be utilized for all telework employees Require all new employees to sign an NDA Require data masking for any information stored in the database
Require data at rest encryption on all endpoints Explanation: OBJ 3.8: The greatest protection against this data breach would have been to require data at rest encryption on all endpoints, including this laptop. If the laptop were encrypted, the data would not have been readable by others, even if it was lost or stolen. While requiring a VPN for all telework employees is a good idea, it would not have prevented this data breach since the laptop's loss caused it. Even if a VPN had been used, the same data breach would still have occurred if the employee copied the database to the machine. Remember on exam day that many options are good security practices, but you must select the option that solves the issue or problem in the question being asked. Similarly, data masking and NDAs are useful techniques, but they would not have solved this particular data breach. For support or reporting issues, include Question ID: 63fe07ae3b7322449ddbd0f2 in your ticket. Thank you.
59
Which of the following access control methods utilizes a set of organizational roles in which users are assigned to gain permissions and access rights? DAC RBAC MAC ABAC
RBAC Explanation: OBJ 2.4: Role-based access control (RBAC) is a modification of DAC that provides a set of organizational roles that users may be assigned to gain access rights. The system is non-discretionary since the individual users cannot modify the ACL of a resource. Users gain their access rights implicitly based on the groups to which they are assigned as members. For support or reporting issues, include Question ID: 63fe06db3b7322449ddbc6a3 in your ticket. Thank you.
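A minimal sketch of the RBAC idea using hypothetical roles and permission names: permissions attach to roles, and users inherit them only through role membership rather than per-user ACL edits.

```python
# Permissions are assigned to roles, never directly to individual users.
ROLE_PERMISSIONS = {
    "hr":      {"read_employee_records"},
    "finance": {"read_payment_history"},
    "admin":   {"read_employee_records", "read_payment_history", "manage_users"},
}
USER_ROLES = {"alice": {"hr"}, "bob": {"finance"}}

def has_permission(user: str, permission: str) -> bool:
    # A user gains a right implicitly by being a member of a role that holds it.
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "read_employee_records"))  # True
print(has_permission("alice", "read_payment_history"))   # False
```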
60
Which role validates the user’s identity when using SAML for authentication? SP IdP User agent RP
IdP Explanation: OBJ 2.4: The IdP provides the validation of the user's identity. Security assertions markup language (SAML) is an XML-based framework for exchanging security-related information such as user authentication, entitlement, and attributes. SAML is often used in conjunction with SOAP. SAML is a solution for providing single sign-on (SSO) and federated identity management. It allows a service provider (SP) to establish a trust relationship with an identity provider (IdP) so that the SP can trust the identity of a user (the principal) without the user having to authenticate directly with the SP. The principal's User Agent (typically a browser) requests a resource from the service provider (SP). The resource host can also be referred to as the relying party (RP). If the user agent does not already have a valid session, the SP redirects the user agent to the identity provider (IdP). The IdP requests the principal's credentials if not already signed in and, if correct, provides a SAML response containing one or more assertions. The SP verifies the signature(s) and (if accepted) establishes a session and provides access to the resource. For support or reporting issues, include Question ID: 63fe06ce3b7322449ddbc607 in your ticket. Thank you.
61
A global enterprise uses asymmetric encryption to secure sensitive communications between its offices. During a security audit, it was discovered that private keys are being stored on shared servers without access restrictions, putting encrypted data at risk of exposure if the servers are compromised. The organization must implement a solution to secure these keys and ensure they are only accessible to authorized systems. Which technique should they prioritize? Implementing key splitting techniques Implementing homomorphic encryption Implementing digital signatures Implementing a key management system
Implementing a key management system Explanation: OBJ 3.8: A key management system (KMS) securely generates, stores, and manages cryptographic keys, ensuring they are accessible only to authorized systems. Key splitting divides keys for security but does not centralize management. Homomorphic encryption enables computations on encrypted data but does not address key storage. Digital signatures authenticate and validate data but do not manage keys. For support or reporting issues, include Question ID: 6751b63841c57c5f41caed11 in your ticket. Thank you.
62
A blockchain-based application is being developed to store financial transactions securely and ensure they cannot be altered after being recorded. To maintain the integrity and non-repudiation of these transactions, which cryptographic technique is most appropriate? Key stretching Symmetric encryption Digital signatures Homomorphic encryption
Digital signatures Explanation: OBJ 3.8: Digital signatures ensure the integrity and non-repudiation of blockchain transactions by verifying their authenticity and confirming they have not been altered. Key stretching strengthens weak passwords but is irrelevant to transaction integrity. Symmetric encryption secures data confidentiality but does not provide authenticity or integrity verification. Homomorphic encryption enables computations on encrypted data but does not ensure immutability. For support or reporting issues, include Question ID: 6751b6f29192dec49e1159bf in your ticket. Thank you.
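A minimal sketch of signing and verifying a transaction record with the widely used cryptography package (Ed25519 keys; the transaction payload is invented): verification fails if even one byte of the record is altered, which is the integrity and non-repudiation property the answer refers to.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

transaction = b'{"from": "acct-001", "to": "acct-002", "amount": 250.00}'
signature = private_key.sign(transaction)          # only the key holder can produce this

public_key.verify(signature, transaction)          # passes: the record is unaltered
try:
    public_key.verify(signature, transaction.replace(b"250.00", b"999.00"))
except InvalidSignature:
    print("tampered transaction rejected")         # any modification breaks verification
```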
63
An organization uses an AI system for critical decision-making processes, including risk assessments and compliance monitoring. What is a key risk associated with overreliance on this AI system? The AI system may consume excessive computational resources The organization may overlook system biases or inaccuracies The organization may require extensive staff training on AI usage The AI system may fail to scale as the organization grows
The organization may overlook system biases or inaccuracies Explanation: OBJ 1.5 - Overreliance on AI systems can lead to the organization ignoring potential biases or inaccuracies in the system's outputs, which could result in flawed decision-making. Scalability, resource consumption, and training are operational concerns but do not specifically address the risks posed by overreliance on AI. For support or reporting issues, include Question ID: 674ea832f0442e03b476b6ee in your ticket. Thank you.
64
During the analysis of data as part of ongoing security monitoring activities, which of the following is NOT a good source of information to validate the results of an analyst's vulnerability scans of the network's domain controllers? Log files DMARC and DKIM Configuration management systems SIEM systems
DMARC and DKIM Explanation: OBJ 3.6: Vulnerability scans should never take place in a vacuum. Analysts should correlate scan results with other information sources, including logs, SIEM systems, and configuration management systems. DMARC (Domain-based Message Authentication, Reporting, and Conformance) and DKIM (DomainKeys Identified Mail) are configurations published in an organization's DNS records to verify whether a sender, such as a third party, is authorized to send email on behalf of the organization. For example, if you are using a third-party mailing list provider, they need your organization to authorize them to send email on your behalf by setting up DMARC and DKIM in your DNS records. While this is an important security configuration, it would not be a good source of information to validate the results of an analyst's vulnerability scans on a domain controller. For support or reporting issues, include Question ID: 63fe077b3b7322449ddbce71 in your ticket. Thank you.
65
Host 4 Pay is an e-commerce platform that needs to secure customers' credit card numbers to comply with PCI DSS standards. To reduce the risk of exposure, the company decides to replace sensitive credit card numbers with a non-sensitive equivalent that can be used in its database without exposing the original data. Which technique should be implemented? PKI Symmetric encryption Hashing Tokenization
Tokenization Explanation: OBJ 3.8: Tokenization replaces sensitive data with non-sensitive tokens, allowing the organization to protect credit card numbers while complying with PCI DSS. Symmetric encryption provides confidentiality but does not replace sensitive data with a token. Hashing secures data integrity but does not allow reversible substitution. Public key infrastructure (PKI) manages digital certificates and keys but is not suitable for this use case. For support or reporting issues, include Question ID: 6751b52b41c57c5f41caed0c in your ticket. Thank you.
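A toy Python sketch of the token-vault idea (an in-memory dictionary standing in for a hardened vault service; all names and values are hypothetical): the application database stores only the token, while the mapping back to the real card number lives in a separately controlled store.

```python
import secrets

# Stand-in for a separate, tightly controlled token vault service.
_vault = {}

def tokenize(card_number: str) -> str:
    # Replace the sensitive PAN with a random, non-sensitive token.
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    # Only the payment service with vault access can reverse the mapping.
    return _vault[token]

token = tokenize("4111111111111111")
print(token)              # e.g. tok_9f2c4a1b...; safe to store alongside the order record
print(detokenize(token))  # original PAN, retrievable only through the vault
```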
66
Stephane was asked to assess the technical impact of a reconnaissance performed against his organization. He has discovered that a third party has been performing reconnaissance by querying the organization's WHOIS data. Which category of technical impact should he classify this as? Critical High Low Medium
Low Explanation: OBJ 3.6: This would be best classified as a low technical impact. Since WHOIS data about the organization's domain name is publicly available, it is considered a low impact. This is further mitigated by the fact that your company gets to decide what information is published in the WHOIS data. Since only publicly available information is being queried and exposed, this can be considered a low impact. For support or reporting issues, include Question ID: 63fe077f3b7322449ddbcea3 in your ticket. Thank you.
67
While investigating a data breach, you discover that the account credentials used belonged to an employee who was fired several months ago for misusing company IT systems. The IT department never deactivated the employee's account upon their termination. Which of the following categories would this breach be classified as? Zero-day Advanced persistent threat Insider Threat Known threat
Insider Threat Explanation: OBJ 4.4: An insider threat is any current or former employee, contractor, or business partner who has or had authorized access to an organization’s network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization’s information or information systems. Based on the details provided in the question, it appears the employee’s legitimate credentials were used to conduct the breach. This would be classified as an insider threat. A zero-day is a vulnerability in software unpatched by the developer or an attack that exploits such a vulnerability. A known threat is a threat that can be identified using a basic signature or pattern matching. An advanced persistent threat (APT) is an attacker with the ability to obtain, maintain, and diversify access to network systems using exploits and malware. For support or reporting issues, include Question ID: 63fe07263b7322449ddbca43 in your ticket. Thank you.
68
NA
69
Dion Training has contracted AWS to host its databases in the cloud. Dion Training is concerned that if a natural disaster occurred in the Northern Virginia area then access to their data could be interrupted. Which of the following would allow Dion Training to store their data in two data centers located in different states? Availability zone VPC/Vnet Region Data zone
Region Explanation: OBJ 2.5: A region describes a collection of data centers located within a geographic area and is located across the globe. An availability zone is a physical or logical data center within a single region. A Virtual Private Cloud (VPC) or a Virtual Network (VNet) allows for the creation of cloud resources within a private network that parallels the functionality of the same resources in a traditional, privately operated data center. Data zones describe the state and location of data to help isolate and protect it from unauthorized/inappropriate use within a data lake. For support or reporting issues, include Question ID: 63fe06e53b7322449ddbc725 in your ticket. Thank you.
70
Jorge is working with an application team to remediate a critical SQL injection vulnerability on a public-facing server. The team is worried that deploying the fix will require several hours of downtime and block customer transactions from being completed by the server. Which of the following is the BEST action for Jorge to recommend? Wait until the next scheduled maintenance window to remediate the vulnerability Schedule an emergency maintenance for an off-peak time later in the day to remediate the vulnerability Delay the remediation until the next major update of the SQL server occurs Remediate the vulnerability immediately
Schedule an emergency maintenance for an off-peak time later in the day to remediate the vulnerability Explanation: OBJ 4.2: Jorge should recommend that emergency maintenance windows be scheduled for an off-peak time later in the day. Since the vulnerability is critical, it needs to be remediated or mitigated as quickly as possible. But, this also needs to be balanced against the business and operational needs. Therefore, we cannot simply remediate it immediately, as this would cause downtime for this public-facing server. It is also unreasonable to accept the risk until the next scheduled maintenance window since it is a critical vulnerability. Therefore, the best way to balance the risk of the vulnerability and the outage's risk is to schedule an emergency maintenance window and patch the server during that time. For support or reporting issues, include Question ID: 63fe08003b7322449ddbd4f5 in your ticket. Thank you.
71
During a forensic investigation, an analyst is tasked with examining audio and video files recovered from a suspect’s device. The analyst wants to determine the files’ origin and potential tampering. Which of the following methods best utilizes metadata analysis to achieve this goal? Comparing the file’s hash values against a database of known malicious files to detect tampering Performing spectral analysis on the audio and video to identify anomalies in the content Extracting and reviewing embedded EXIF data to identify the device and software used to create the files Encrypting and making a copy of the recovered files to ensure they remain secure during the investigation
Extracting and reviewing embedded EXIF data to identify the device and software used to create the files Explanation: OBJ 4.4 - Metadata analysis of audio and video files often involves extracting EXIF (Exchangeable Image File Format) data, which contains information such as the device model, software version, creation timestamps, and GPS coordinates. This information can help verify the file’s origin and detect inconsistencies that may indicate tampering. Hash comparison ensures file integrity but does not analyze metadata. Spectral analysis evaluates the content but does not leverage metadata. Encrypting files secures them but does not aid in metadata analysis. For support or reporting issues, include Question ID: 67508c9c00440144bc05ee35 in your ticket. Thank you.
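As a hedged example, metadata extraction from an image frame can be sketched with the Pillow package as below (the file path is hypothetical); audio and video containers carry comparable metadata but typically require a media-specific parser rather than Pillow.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    # Read embedded EXIF tags (device model, software, timestamps, GPS, ...)
    # and map the numeric tag IDs to their human-readable names.
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# for name, value in dump_exif("evidence/frame_0001.jpg").items():
#     print(f"{name}: {value}")
```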
72
A company is moving sensitive customer data to a hybrid cloud environment. To ensure compliance and maintain security, what should be a key focus when defining data perimeters? Ensuring all users within the network can access the required data Establishing policies that restrict data movement to authorized environments Using application-based controls to limit traffic based on specific user roles Deploying additional firewalls to segment the cloud infrastructure
Establishing policies that restrict data movement to authorized environments Explanation: OBJ 2.6 - In a hybrid cloud environment, defining data perimeters involves creating policies that ensure sensitive data remains within authorized locations and is not improperly accessed or moved. This safeguards compliance and security. Broad access undermines security, while firewalls and application-based controls address other aspects of network management but do not focus on data movement policies. For support or reporting issues, include Question ID: 674f7088be598e99e87d61f6 in your ticket. Thank you.
73
Which security tool is used to facilitate incident response, threat hunting, and security configuration by orchestrating automated runbooks and delivering data enrichment? SOAR SIEM DLP MDM
SOAR Explanation: OBJ 3.6: A security orchestration, automation, and response (SOAR) is used to facilitate incident response, threat hunting, and security configuration by orchestrating automated runbooks and delivering data enrichment. A SOAR may be implemented as a standalone technology or integrated within a SIEM as a next-gen SIEM. A SOAR can scan the organization's store of security and threat intelligence, analyze it using machine/deep learning techniques, and then use that data to automate and provide data enrichment for the workflows that drive incident response and threat hunting. For support or reporting issues, include Question ID: 63fe06da3b7322449ddbc699 in your ticket. Thank you.
74
NA
75
An organization subscribes to a dark web monitoring service to detect potential threats involving its data. During routine monitoring, the service identifies compromised employee credentials being sold on a dark web marketplace. Which of the following actions should the organization prioritize? Reset the passwords for the affected employee accounts and enforce MFA Publicly disclose the breach to reassure stakeholders that the organization is monitoring threats Block access to the dark web marketplace to prevent further exposure of organizational data Notify law enforcement about the dark web listing and wait for further guidance
Reset the passwords for the affected employee accounts and enforce MFA Explanation: OBJ 4.3 - The organization should immediately reset the passwords for compromised accounts and implement multifactor authentication (MFA) to mitigate unauthorized access. Notifying law enforcement is important but secondary to securing accounts. Blocking access to the marketplace is infeasible and does not address the compromised credentials. Publicly disclosing the breach may harm the organization’s reputation and is not necessary at this stage. For support or reporting issues, include Question ID: 67508b49f86f3d695e9ad168 in your ticket. Thank you.
76
A cybersecurity analyst is working for a university that is conducting a big data medical research project. The analyst is concerned about the possibility of an inadvertent release of PHI data. Which of the following strategies should be used to prevent this? Conduct tokenization of the PHI data before ingesting it into the big data application Utilize a SaaS model to process the PHI data instead of an on-premise solution Use DevSecOps to build the application that processes the PHI Utilize formal methods of verification against the application processing the PHI
Conduct tokenization of the PHI data before ingesting it into the big data application Explanation: OBJ 3.8: The university should utilize a tokenization approach to prevent an inadvertent release of the PHI data. In a tokenization approach, all or part of data in a field is replaced with a randomly generated token. That token is then stored with the original value on a token server or token vault, separate from the production database. This is an example of a deidentification control and should be used since the personally identifiable medical data does not need to be retained after ingesting it for the research project; only the medical data itself is needed. While using DevSecOps can improve the overall security posture of the applications being developed in this project, it does not explicitly define a solution to prevent this specific issue, making it a less ideal answer choice for the exam. Formal verification methods can be used to prove that none of the AI/ML techniques that process the PHI data could inadvertently leak it. Still, the cost and time associated with using these methods make them inappropriate for a system used to conduct research. A formal method uses a mathematical model of a system's inputs and outputs to prove that the system works as specified in all cases. It is difficult for manual analysis and testing to capture every possible use case scenario in a sufficiently complex system. Formal methods are mostly used with critical systems such as aircraft flight control systems, self-driving car software, and nuclear reactors, not big data research projects. The option provided that recommends utilizing a SaaS model is not realistic. There is unlikely to be a SaaS provider with a product suited to the big data research being done. SaaS products tend to be commoditized software products that are hosted in the cloud. The idea of migrating to a SaaS is a distractor on this exam, which is trying to get you to think about shifting the responsibility for the PHI to the service provider and away from the university, but due to the research nature of the project, this is unlikely to be a valid option in the real world and may not be legally allowed due to the PHI being processed. For support or reporting issues, include Question ID: 63fe07073b7322449ddbc8c9 in your ticket. Thank you.
77
Dion Training is creating a new website and needs to register a new digital certificate to support SSL/TLS connections to the server. When creating the digital certificate, Jason was required to publish a TXT record to the domain’s DNS records to validate that he owned the new domain name. Which of the following digital certificate types did he utilize in this scenario? Extended validation General purpose Wildcard Multidomain
General purpose Explanation: OBJ 3.8: General Purpose or Domain Validation (DV) digital certificates prove the ownership of a particular domain by responding to an email to the authorized domain contact or by publishing a text record to the domain’s DNS records. Extended Validation (EV) digital certificates are subject to a process that requires more rigorous checks on the subject's legal identity and control over the domain or software being signed. A major drawback to EV certificates is that they cannot be issued for a wildcard domain. A wildcard certificate contains the wildcard character (*) in its domain name field and allows the digital certificate to be used for any number of subdomains. For example, the wildcard certificate of *.diontraining.com can be used for members.diontraining.com, cart.diontraining.com, and support.diontraining.com. A multidomain certificate is a single SSL/TLS digital certificate that can be used to secure multiple, different domain names. For example, if you want to install the same certificate on diontraining.com and yourcyberpath.com, you will need to register a multidomain certificate. For support or reporting issues, include Question ID: 63fe07a63b7322449ddbd08d in your ticket. Thank you.
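A minimal sketch of how the CA's domain-validation check could be reproduced, assuming the dnspython package (diontraining.com is used only as an illustrative domain): ownership is proven when the CA-issued token appears among the domain's TXT records.

```python
import dns.resolver  # assumes the dnspython package is installed

def txt_records(domain: str):
    # Query the domain's TXT records, where the CA-issued validation token is published.
    answers = dns.resolver.resolve(domain, "TXT")
    return [b"".join(rdata.strings).decode() for rdata in answers]

# records = txt_records("diontraining.com")
# ownership_proven = any("ca-issued-validation-token" in r for r in records)
```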
78
A penetration tester is conducting software assurance testing on a web application for Dion Training. You discover the web application is vulnerable to an SQL injection and could disclose a regular user's password. Which of the following actions should you perform? Recommend that the company conduct a full penetration test of their systems to identify other vulnerabilities Document the finding with an executive summary, methodology used, and a remediation recommendation Contact the development team directly and recommend adding input validation to the web application Conduct a proof-of-concept exploit on three user accounts at random and document this in your report
Document the finding with an executive summary, methodology used, and a remediation recommendation Explanation: OBJ 4.2: When you find a vulnerability, it should be documented fully. This includes providing an executive summary for management, the methodology used to find the vulnerability so that others can recreate and verify it, and the recommended remediation actions that should be taken. You should not exploit three random accounts on the server, which could negatively impact the client's reputation. You should not contact the development team directly since they may ignore your recommendation, and they did not hire you. While it may be a good idea to conduct a full-scale penetration test, that would not necessarily solve this vulnerability. For support or reporting issues, include Question ID: 63fe06e23b7322449ddbc6fd in your ticket. Thank you.
79
A company is deploying host-based firewalls on all servers to strengthen its network security posture. The IT team wants to restrict external access to only specific applications and services while allowing internal system-to-system communication. Which of the following configurations best meets this requirement? Configure a deny-all rule as the default and add application-specific inbound and outbound rules Enable firewall rules that permit traffic from trusted IP ranges without further filtering Allow all outbound traffic and block all inbound traffic except for specifically identified service ports Disable the firewall on internal systems to improve system-to-system communication and reduce redundancy
Configure a deny-all rule as the default and add application-specific inbound and outbound rules Explanation: OBJ 3.2 - The deny-all rule enforces strict access control, allowing only explicitly permitted traffic. Application-specific inbound and outbound rules ensure that only the required services and systems can communicate. Allowing all outbound traffic or disabling the firewall risks exposing systems to unauthorized access, while permitting traffic solely based on trusted IP ranges lacks application-level control. For support or reporting issues, include Question ID: 675067ae3af44d9e9e56c776 in your ticket. Thank you.
80
A software development team at a healthcare organization is building a new patient management application. Due to strict compliance requirements for protecting sensitive health information, the organization mandates that security vulnerabilities be identified and addressed early in the development lifecycle. The team is evaluating tools to integrate into their development process to meet these requirements. Which of the following approaches should the team implement to detect vulnerabilities in the source code before the application is deployed? Schedule penetration testing after deployment to uncover potential security issues Use RASP to detect vulnerabilities during application execution Perform SAST during the development phase Conduct DAST on the application in a staging environment
Perform SAST during the development phase Explanation: OBJ 2.2: Static application security testing (SAST) analyzes source code early in the development lifecycle, allowing the team to detect and fix security vulnerabilities before the application is compiled or deployed. This proactive approach aligns with compliance requirements and reduces the cost of addressing vulnerabilities later. Dynamic application security testing (DAST) and runtime application self-protection (RASP) occur after the application is operational, making them less effective for early vulnerability detection. Penetration testing is valuable but is typically performed closer to or after deployment and does not replace the need for early testing. For support or reporting issues, include Question ID: 674f6b607a4b195c66838b09 in your ticket. Thank you.
81
Which of the following authentication methods is an open-source solution for single sign-on across organizational boundaries on the web? RADIUS TACACS+ Kerberos Shibboleth
Shibboleth Explanation: OBJ 2.4: Shibboleth is a standards-based, open-source software package for single sign-on across or within organizational boundaries on the web. It allows sites to make informed authorization decisions for individual access to protected online resources in a privacy-preserving manner. Shibboleth utilizes SAML to provide this federated single sign-on and attribute exchange framework. For support or reporting issues, include Question ID: 63fe06c53b7322449ddbc593 in your ticket. Thank you.
82
A software development team is integrating testing activities into their CI/CD pipeline to enhance application security. They want to identify any potential new vulnerabilities with security functionality after each update. Which type of testing should be prioritized to achieve this goal? Regression testing Canary testing Unit testing Vulnerability scanning
Regression testing Explanation: OBJ 2.2: Regression testing ensures that recent changes or updates do not introduce new issues or break existing functionality, including security features. This is essential for maintaining application integrity throughout the CI/CD process. Canary testing validates changes in a limited environment, not comprehensive security functionality. Unit testing focuses on individual components rather than overall system stability. Vulnerability scanning identifies security flaws but does not ensure new changes preserve existing functionality. For support or reporting issues, include Question ID: 6750fe3623df37e1b5ec5f91 in your ticket. Thank you.
83
You just visited an e-commerce website by typing in its URL during a vulnerability assessment. You discovered that an administrative web frontend for the server's backend application is accessible over the internet. Testing this frontend, you discovered that the default password for the application is accepted. Which of the following recommendations should you make to the website owner to remediate this discovered vulnerability? (SELECT THREE) Rename the URL to a more obscure name Require two-factor authentication for access to the application Require an alphanumeric passphrase for the application's default password Change the username and default password Conduct a penetration test against the organization's IP space Create an allow list for the specific IP blocks that use this application
Require two-factor authentication for access to the application
Change the username and default password
Create an allow list for the specific IP blocks that use this application
Explanation: OBJ 4.2: First, you should change the username and default password since using default credentials is extremely insecure. Second, you should implement an allow list for any specific IP blocks with access to this application's administrative web frontend since it should only be a few system administrators and power users. Next, you should implement two-factor authentication to access the application since two-factor authentication provides more security than a simple username and password combination. You should not rename the URL to a more obscure name since security by obscurity is not considered a good security practice. You also should not require an alphanumeric passphrase for the application's default password. Since it is a default password, you can not change the password requirements without the vendor conducting a software update to the application. Finally, while it may be a good idea to conduct a penetration test against the organization's IP space to identify other vulnerabilities, it will not positively affect remediating this identified vulnerability. For support or reporting issues, include Question ID: 63fe079a3b7322449ddbcff8 in your ticket. Thank you.
84
An organization is transmitting sensitive financial data between two offices over the internet. To ensure the confidentiality and integrity of the data in transit, which of the following methods should the organization implement? Encrypting the data before transmission and sending it via standard HTTP Establishing a VPN to create an encrypted tunnel for the data Using secure physical media to transport the data between the offices Implementing strong access controls on both endpoints to prevent unauthorized access
Establishing a VPN to create an encrypted tunnel for the data Explanation: OBJ 3.8 - A Virtual Private Network (VPN) provides an encrypted tunnel that ensures the confidentiality and integrity of data in transit across untrusted networks like the internet. Encrypting data before transmission adds protection but does not address the risk of interception over unsecure channels like HTTP. Secure physical media is impractical for real-time data transmission. Strong access controls protect endpoints but do not secure data as it traverses the network. For support or reporting issues, include Question ID: 6750860cf86f3d695e9ad0e6 in your ticket. Thank you.
85
Dion Training Solutions is currently calculating the risk associated with building a new data center in a hurricane-prone location. The data center would cost $3,125,000 to build and equip. Based on their assessment of the history of the location, a major hurricane occurs every 20 years and their data center would risk losing 60% of its value due to downtime and possible structural damages. If the data center is built in this location, what is the annual rate of occurrence for this data center? 0.20 0.03 0.60 0.05
0.05

Explanation:
OBJ 1.2: The annual rate of occurrence is 0.05 since one incident occurs every 20 years (1/20 is 0.05). The annual rate of occurrence (ARO) is the number of times in a year that a single event occurs. If the number of times an event occurs is counted over multiple years, then the number is divided by the number of years to calculate the ARO. If the number of times is counted monthly, then it is multiplied by 12 to annualize it. For example, if the number of occurrences was 3 times per month, then the ARO would be 36 times per year (3 occurrences x 12 months in a year). For support or reporting issues, include Question ID: 63fe07fc3b7322449ddbd4c3 in your ticket. Thank you.
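As a quick worked check using only the figures from the question (the single loss expectancy and annualized loss expectancy are not asked for here, but they follow from the same numbers):

```python
# Worked example using the figures given in the question.
asset_value   = 3_125_000      # cost to build and equip the data center
exposure      = 0.60           # fraction of value lost per major hurricane
years_between = 20             # one major hurricane every 20 years

aro = 1 / years_between        # annual rate of occurrence = 0.05
sle = asset_value * exposure   # single loss expectancy   = $1,875,000
ale = sle * aro                # annualized loss expectancy = $93,750

print(aro, sle, ale)
```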
86
Which authentication mechanism does 802.1X usually rely upon?
TOTP
RSA
HOTP
EAP
EAP

Explanation:
OBJ 2.4: The IEEE 802.1X Port-based Network Access Control framework establishes several ways for devices and users to be securely authenticated before they are permitted full network access. The actual authentication mechanism will be some variant of the Extensible Authentication Protocol (EAP). EAP supports many different authentication methods, but many use a digital certificate on the server and/or client machines. This allows the machines to establish a trust relationship and create a secure tunnel to transmit the user authentication credentials. For support or reporting issues, include Question ID: 63fe06db3b7322449ddbc6ad in your ticket. Thank you.
87
Which of the following best enhances the recoverability of critical systems in an enterprise environment?
Implementing regular backups with offsite and cloud replication
Encrypting sensitive data during transmission and storage
Deploying a single robust server with extensive backup storage
Configuring load balancers for even traffic distribution
Implementing regular backups with offsite and cloud replication

Explanation:
OBJ 2.1: Recoverability focuses on the ability to restore critical systems and data following a failure or disaster. Regular backups, combined with offsite and cloud replication, ensure that data is protected and accessible even in the event of physical damage to on-premises systems. Deploying a single robust server does not address system redundancy, and load balancing focuses on traffic distribution rather than recovery. Encryption protects data but does not aid in recoverability. For support or reporting issues, include Question ID: 674f6174d737cdb079388f12 in your ticket. Thank you.
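To make the recommended control concrete, here is a minimal sketch of a nightly backup job with a cloud replica, assuming Python with boto3 and an S3 bucket. The paths, bucket name, and schedule are hypothetical, and the copy to a second physical (offsite) location is noted only in a comment because the transport mechanism varies by environment.

```python
# Hypothetical nightly backup sketch: local archive plus cloud replica.
# Paths and the bucket name are placeholders; boto3 and an S3 bucket are assumed.
import tarfile
from datetime import date

import boto3

SOURCE_DIR = "/srv/app-data"                      # critical data to protect
ARCHIVE = f"/backups/app-{date.today()}.tar.gz"   # local backup copy
BUCKET = "example-dr-backups"                     # cloud replica target

# 1. Create the local backup archive.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(SOURCE_DIR, arcname="app-data")

# 2. Replicate to cloud object storage. A copy to a second physical site
#    (rsync, scp, or a replication appliance) would typically run alongside
#    this step to satisfy the offsite requirement.
s3 = boto3.client("s3")
s3.upload_file(ARCHIVE, BUCKET, f"app/{ARCHIVE.rsplit('/', 1)[-1]}")
print("backup created and replicated:", ARCHIVE)
```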
88
Which type of encryption can allow fields in a given dataset to be used in a computation without first being decrypted?
Homomorphic encryption
Advanced encryption standard
Data encryption standard
Elliptic curve cryptography
Homomorphic encryption

Explanation:
OBJ 3.7: Homomorphic encryption is a method of encryption that allows computation on certain fields in a dataset without first decrypting the dataset. The Advanced Encryption Standard (AES), elliptic curve cryptography (ECC), and the Data Encryption Standard (DES) are traditional encryption algorithms that require their data to be decrypted before computations can be performed on it. Homomorphic encryption is considered an emerging technology and is still being developed and improved, since it is currently too slow to be practical for most modern applications. For support or reporting issues, include Question ID: 63fe06e63b7322449ddbc72f in your ticket. Thank you.
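To make the idea concrete, the following toy sketch uses textbook (unpadded) RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is only an illustration of the principle of computing on encrypted values; real homomorphic encryption schemes and libraries are far more capable, and unpadded RSA with tiny parameters like these must never be used in practice.

```python
# Toy demonstration of the homomorphic idea using textbook (unpadded) RSA,
# which is multiplicatively homomorphic: E(a) * E(b) mod n decrypts to a * b.
# Tiny, insecure demo parameters only; never use unpadded RSA in practice.
p, q = 61, 53
n = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 12, 7
c_a, c_b = encrypt(a), encrypt(b)

# Multiply the ciphertexts without ever decrypting the individual values.
c_product = (c_a * c_b) % n

assert decrypt(c_product) == (a * b) % n
print(decrypt(c_product))    # 84
```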
89
Dion Training is configuring a new Microsoft Exchange server to support 10,000 users. A single server cannot handle the expected load, so the system engineers are configuring 5 servers to provide multiple redundant processing of the data being stored on a centralized storage area network. When clients try to access the Exchange server, it appears as a single server instead of 5 individual servers. Which of the following BEST describes the type of action used to meet the increasing demands of the Exchange services?
Horizontal scaling
Vertical scaling
Autoscaling
Clustering
Clustering

Explanation:
OBJ 2.1: Clustering allows multiple redundant processing nodes that share data to accept connections. The cluster appears to be a single server to the clients but provides additional levels of redundancy and resiliency. Autoscaling is the ability to expand and contract the performance of workloads based on policies with specific maximum and minimum capacity specifications. Autoscaling can be used with either horizontal or vertical scaling depending on your cloud service provider. Vertical scaling allows additional resources to be added to an individual system, such as adding processors, memory, and storage to an existing server. Horizontal scaling allows additional capacity to be achieved by adding servers to help process the same workload, such as adding nodes to a distributed system or adding web servers to an existing server farm. For support or reporting issues, include Question ID: 63fe07013b7322449ddbc87e in your ticket. Thank you.