Security + Measure Up #2 Flashcards
Pass the First Time
What is a security benefit of migrating an intranet application to the cloud?
A. Increased scalability under load
B. Reduced connectivity
C. Increased control of resources
D. Availability of multitenancy
D. Availability of multitenancy
Cloud providers often have robust security measures in place to support multi-tenant environments, ensuring data isolation, compliance, and enhanced security protocols for all tenants.
A company hosts a customer feedback forum on its website. Visitors are redirected to a different website after opening a recently posted comment. What kind of attack does this MOST likely indicate?
A. Code injection
B. Directory traversal
C. SQL injection
D. Cross-site scripting (XSS)
D. Cross-site scripting (XSS)
In an XSS attack, an attacker injects malicious scripts into content that is then displayed to other users. When users view the compromised content, the malicious scripts execute, redirecting them to malicious sites, stealing cookies, or performing other malicious actions.
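For illustration only, here is a minimal Python sketch of the underlying problem: unescaped user input is rendered as active script, while output encoding neutralizes it. The comment text and URL are made-up placeholders, not details from the question.

```python
import html

# Hypothetical attacker-supplied comment containing a redirect script.
user_comment = '<script>window.location="https://evil.example"</script>'

# Unsafe: inserting the raw comment into the page lets the script run in every visitor's browser.
unsafe_html = f"<div class='comment'>{user_comment}</div>"

# Safer: escape user-supplied content so it is displayed as text instead of being executed.
safe_html = f"<div class='comment'>{html.escape(user_comment)}</div>"
print(safe_html)
```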
A user reports they receive a certificate warning when attempting to visit their banking website. Upon investigation, a security administrator discovers the site is presenting an untrusted SSL certificate. Which of the following attacks has the administrator MOST likely uncovered?
A. Downgrade
B. Birthday
C. On-path
D. Zero day
C. On-path attack (formerly known as a man-in-the-middle attack). This type of attack involves intercepting the communication between the user and the website, presenting a fake SSL certificate to the user, and potentially capturing sensitive information.
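As a rough illustration, a client that actually validates certificates refuses such a connection. The Python sketch below uses only the standard library's default TLS validation; the hostname bank.example is a placeholder.

```python
import socket
import ssl

ctx = ssl.create_default_context()  # validates the chain against trusted CAs and checks the hostname

try:
    with socket.create_connection(("bank.example", 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="bank.example") as tls:
            print("Certificate trusted; subject:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    # An untrusted or forged certificate, as presented in an on-path attack, fails verification here.
    print("Certificate verification failed:", err)
```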
What can be done to prevent an internet attacker from using a replay attack to gain access to a secure public website?
A. Deploy the web server in the internal network
B. Require user name and password for authentication
C. Deploy the web server in a perimeter network
D. Timestamp session packets
D. Timestamp session packets
By including timestamps in session packets, any attempts to replay old packets can be detected and discarded, ensuring that only valid, current sessions are accepted. Timestamping makes replay attacks significantly harder to execute since the attacker would need to send the packets within a very short time window.
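A minimal sketch of the idea, assuming a shared server-side secret and an arbitrary 30-second freshness window (both placeholders): each request carries a timestamp plus an HMAC over the timestamp and payload, so a replayed packet is rejected as stale or tampered.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-shared-secret"  # placeholder secret for the sketch
MAX_AGE_SECONDS = 30                   # arbitrary freshness window

def sign_request(payload: bytes) -> tuple[str, str]:
    """Client side: return a timestamp and an HMAC covering the timestamp and payload."""
    ts = str(int(time.time()))
    mac = hmac.new(SECRET, ts.encode() + payload, hashlib.sha256).hexdigest()
    return ts, mac

def verify_request(payload: bytes, ts: str, mac: str) -> bool:
    """Server side: reject tampered MACs and stale timestamps (likely replays)."""
    expected = hmac.new(SECRET, ts.encode() + payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        return False
    return abs(time.time() - int(ts)) <= MAX_AGE_SECONDS

ts, mac = sign_request(b"GET /account")
print(verify_request(b"GET /account", ts, mac))  # True while fresh; False once the window passes
```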
A security administrator performs a vulnerability scan for a network and discovers an extensive list of vulnerabilities for several Windows-based servers. What should the administrator do FIRST to mitigate the risks created by these vulnerabilities?
A. Install HIDS software
B. Create application deny lists
C. Install missing patches
D. Remove any unnecessary software
C. Install missing patches
Installing patches addresses known vulnerabilities in the software, providing an immediate improvement in security by fixing the issues identified by the vulnerability scan. This step is crucial to mitigate risks effectively and quickly. The other actions are also important, but addressing known vulnerabilities through patching should be the priority.
All computers in an organization come with TPM installed. What type of data encryption most often uses keys generated from the TPM?
A. Full disk encryption
B. File encryption
C. Data in transit encryption
D. Database encryption
A. Full disk encryption
TPM is typically used in full disk encryption solutions like BitLocker for Windows. It generates and stores cryptographic keys that can be used to encrypt the entire contents of a disk, providing robust security for data at rest.
A user opens an attachment that is infected with a virus. The user’s boss decides operational controls should be implemented so that this type of attack does not occur again. What should the boss do?
A. Implement aggressive anti-phishing policies on email servers
B. Schedule security awareness training for end users
C. Install fingerprint scanners at all user workstations
D. Enable TLS enforcement for all server sessions
B. Schedule security awareness training for end users
Operational controls involve procedures and policies that are implemented to enhance security through day-to-day operations. Scheduling security awareness training for end users is an operational control that focuses on educating employees about security threats, such as viruses in email attachments. By increasing awareness, users are less likely to open malicious attachments in the future.
Which key is used to encrypt data in an asymmetric encryption system?
A. The sender’s public key
B. The sender’s private key
C. The recipient’s private key
D. The recipient’s public key
D. The recipient’s public key
In an asymmetric encryption system (also known as public-key cryptography), each participant has a pair of keys:
Public Key: This key is shared openly and can be distributed to anyone.
Private Key: This key is kept secret by the owner and is not shared.
When encrypting data to send to a recipient securely, the sender uses the recipient’s public key.
The sender encrypts the data using the recipient's public key.
Although the public key is available to anyone, only the matching private key can decrypt the data, so only the intended recipient can read the message.
The recipient uses their private key to decrypt the data.
Only the recipient has the private key, so only they can access the encrypted information.
Encrypt with recipient’s public key
Decrypt with recipient's private key
This ensures that only the intended recipient can decrypt and read the message, maintaining confidentiality.
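A minimal Python sketch of this flow, using the third-party cryptography library and a freshly generated RSA key pair purely for illustration:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates a key pair; the public key is shared, the private key is kept secret.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

message = b"Confidential data for the recipient"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the recipient's PUBLIC key.
ciphertext = recipient_public.encrypt(message, oaep)

# Only the recipient's PRIVATE key can decrypt.
plaintext = recipient_private.decrypt(ciphertext, oaep)
assert plaintext == message
```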
A security engineer receives an alert indicating that DNS tunneling has been detected in the environment. What is a hacker’s motivation for using this attack?
A. Service disruption
B. Chaos
C. Data exfiltration
D. Blackmail
C. Data exfiltration
DNS tunneling is a method used by attackers to encapsulate and send data through DNS queries and responses, effectively bypassing traditional security measures. Since DNS traffic is typically allowed through firewalls and not closely scrutinized, it provides a covert channel for transferring data.
Primary Motivation: The main reason hackers use DNS tunneling is for data exfiltration. They can secretly extract sensitive information from the target network without detection.
Attackers use DNS tunneling to stealthily transfer data out of a network.
The alert indicates potential unauthorized data extraction.
Defenders should monitor DNS traffic and implement security measures to detect and block suspicious DNS activities.
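As a rough illustration of what defenders look for, the Python sketch below flags DNS queries whose leftmost label is unusually long or high-entropy; the thresholds and example domains are arbitrary placeholders, not production values.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a DNS label; encoded exfiltration data tends to look random."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_tunneling(query: str, max_label_len: int = 40, entropy_threshold: float = 3.5) -> bool:
    """Rough heuristic: flag queries with long or high-entropy leftmost labels."""
    label = query.split(".")[0]
    return len(label) > max_label_len or label_entropy(label) > entropy_threshold

print(looks_like_tunneling("www.example.com"))                                              # False
print(looks_like_tunneling("4a6f686e446f652d637265646974636172642d34313131.c2.example"))    # True
```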
As part of a change management process, a security administrator determines that a planned patch causes services to fail on some servers. What should the administrator do to address this finding?
A. Include the finding in a new statement of work (SOW)
B. Update the memorandum of understanding (MOU)
C. Update the standard operating procedures for servers
D. Record the finding in impact analysis documentation
D. Record the finding in impact analysis documentation
In the context of a change management process, it's essential to thoroughly assess and document the potential impacts of any planned changes. When the security administrator discovers that a planned patch causes services to fail on some servers, the immediate and appropriate action is to record this finding in the impact analysis documentation.
Impact Analysis Documentation is a critical component of change management. It evaluates the potential consequences of a change, considering factors like:
Risks and impacts on systems and services
Compatibility issues with existing infrastructure
Mitigation strategies to address identified problems
By recording the finding:
Decision Making: Stakeholders can make informed decisions about whether to proceed with the patch, delay it, or seek alternative solutions.
Risk Management: It helps in identifying risks early and planning accordingly to prevent service disruptions.
Communication: Ensures that all relevant parties are aware of the potential issues associated with the change.
To enhance availability, an organization has configured authentication and storage that provide redundancy for on-premises servers. However, the organization must ensure that all data is encrypted between the data center and the private cloud network. What should the organization do to meet this requirement?
A. Configure IPsec in transport mode between routers in each location.
B. Deploy a NAT gateway and only permit inbound connections from the cloud network.
C. Deploy NGFW appliances in the data center and cloud and share X.509 certificates
D. Configure an IPsec tunnel between the data center and cloud gateway routers.
D. Configure an IPsec tunnel between the data center and cloud gateway routers.
To ensure that all data is encrypted between the data center and the private cloud network, the most effective solution is to set up an IPsec tunnel in tunnel mode between the gateway routers.
IPsec Tunnel Mode:
Full Packet Encryption: Encrypts the entire IP packet, including both the payload and the original IP headers.
Gateway-to-Gateway Security: Ideal for establishing secure connections between networks over an untrusted medium like the internet.
Secure Data Transmission: Provides confidentiality, integrity, and authenticity for all data traversing the tunnel.
By configuring IPsec in tunnel mode on the routers at both the data center and the cloud gateway, all traffic between these points is automatically encrypted. This meets the organization’s requirement to ensure that all data is encrypted between the two locations.
By configuring an IPsec tunnel between the data center and cloud gateway routers, the organization can:
Encrypt All Data: Ensure that every piece of data transmitted between the two locations is encrypted.
Enhance Security: Protect against eavesdropping, tampering, and interception.
Maintain Performance: Utilize hardware-accelerated encryption on routers to minimize impact on network performance.
Simplify Management: Centralize encryption policies on gateway devices.
An organization plans to contract with a provider for a disaster recovery site that will host server hardware. When the primary data center fails, data will be restored, and the secondary site will be activated. Costs must be minimized. Which type of disaster recovery should the organization deploy?
A. Mobile site
B. Warm site
C. Hot site
D. Cold site
D. Cold site
A cold site is a backup facility with basic infrastructure (power, cooling, and physical space) but no active hardware or data.
Cost: It’s the least expensive option because you are not maintaining hardware or up-to-date data at the site.
Recovery Time: In the event of a failure, the organization needs to bring in hardware and restore data from backups, which means a longer recovery time.
Suitability: Ideal for organizations that want to minimize costs and can tolerate longer downtime during disaster recovery.
Since the organization wants to minimize costs and is prepared to restore data and activate the secondary site after a failure (accepting a longer recovery time), a cold site is the best choice.
A security analyst has been tasked with implementing secure access to a file server that stores sensitive data. The analyst plans to create rules using the IP addresses of systems that will be allowed to connect to the server. The analyst has been instructed to minimize costs and administrative overhead. Which device is the best solution in this scenario?
A. Intrusion Detection System (IDS)
B. Web Application Firewall (WAF)
C. Layer 4 Firewall
D. Next-Generation Firewall (NGFW)
C. Layer 4 Firewall
IP Address-Based Filtering:
Functionality: Layer 4 firewalls operate at the transport layer of the OSI model. They are designed to filter traffic based on source and destination IP addresses, port numbers, and protocols.
Alignment with Requirements: This directly aligns with the analyst’s plan to create rules using the IP addresses of systems allowed to connect. It provides precise control over which devices can access the sensitive file server.
Minimized Costs:
Cost-Effective: Compared to more advanced firewalls like Next-Generation Firewalls (NGFWs), a Layer 4 firewall is less expensive to procure and maintain.
Simplicity: With fewer features and complexities, it reduces the upfront investment and ongoing operational costs.
Reduced Administrative Overhead:
Ease of Management: Layer 4 firewalls are simpler to configure and manage due to their straightforward rule sets based on IP addresses and ports.
Quick Deployment: The simplicity allows for faster implementation without the need for specialized training or extensive management.
Secure Access Control: Allows the analyst to create IP-based rules to control access to the sensitive server.
Cost efficiency: Keeps costs low by avoiding unnecessary features and higher-priced equipment.
Low Administrative Overhead: Simplifies management with straightforward configurations, reducing the time and resources needed for administration.
By choosing a Layer 4 firewall, the organization achieves effective security tailored to its needs without incurring unnecessary expenses or complexity.
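To make the idea concrete, here is a toy Python sketch of how a stateless layer 4 rule set is evaluated top-down against source IP and destination port; the networks, port, and actions are placeholder values for illustration only.

```python
import ipaddress

# Illustrative layer 4 style rules (source network, destination port, action), evaluated top-down.
RULES = [
    (ipaddress.ip_network("10.0.1.0/28"), 445, "ACCEPT"),  # approved workstations may reach the file share
    (ipaddress.ip_network("0.0.0.0/0"),   445, "DROP"),    # everyone else is blocked
]

def evaluate(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for network, port, action in RULES:
        if src in network and dst_port == port:
            return action
    return "DROP"  # default deny when no rule matches

print(evaluate("10.0.1.5", 445))     # ACCEPT
print(evaluate("192.168.9.9", 445))  # DROP
```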
An organization’s users are redirected to a dummy vendor website that uses a stolen SSL certificate. The users unknowingly make purchases on the site using a corporate credit card. What should the organization do to mitigate this risk?
A. Configure all browsers to use OCSP
B. Validate each vendor site’s CSR
C. Deploy PKI for certificate management
D. Validate the certificate with the CA
A. Configure all browsers to use OCSP (Online Certificate Status Protocol)
Stolen SSL Certificate: Attackers are using a stolen SSL certificate to impersonate a legitimate vendor website.
User Risk: Users are being misled into making purchases on the fraudulent site using corporate credit cards.
Trust Exploitation: The browsers trust the stolen certificate because they’re not checking its revocation status.
OCSP (Online Certificate Status Protocol) is a protocol used for obtaining the revocation status of an X.509 digital certificate. It allows browsers to perform real-time checks with the Certificate Authority (CA) to verify whether a certificate is still valid or has been revoked.
Configuring all browsers to use OCSP ensures:
Browsers Automatically Verify Certificates: Users’ browsers will check the revocation status of SSL certificates in real-time.
Enhanced Security: Access to sites with revoked or invalid certificates will be blocked or will prompt a warning, reducing the risk of fraudulent transactions.
User Transparency: Protection is provided without requiring additional actions from users, maintaining a seamless experience.
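For illustration, a rough Python sketch of the same revocation check browsers perform, using the third-party cryptography and requests libraries; the certificate file names are placeholders for this example.

```python
import requests  # third-party HTTP client
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID

# Load the site certificate and its issuing CA certificate (file names are placeholders).
with open("site.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Find the OCSP responder URL in the certificate's Authority Information Access extension.
aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess).value
ocsp_url = next(desc.access_location.value for desc in aia
                if desc.access_method == AuthorityInformationAccessOID.OCSP)

# Build the OCSP request, send it to the responder, and read the revocation status.
req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
resp = requests.post(ocsp_url, data=req.public_bytes(serialization.Encoding.DER),
                     headers={"Content-Type": "application/ocsp-request"})
status = ocsp.load_der_ocsp_response(resp.content).certificate_status
print(status)  # OCSPCertStatus.GOOD, REVOKED, or UNKNOWN
```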
Which of the following best describes a digital signature?
A. A message hash encrypted with the sender’s public key
B. A message hash encrypted with the sender's private key
C. A message hash encrypted with the recipient’s public key
D. A message hash encrypted with the recipient’s private key
Answer: B. A message hash encrypted with the sender’s private key
A digital signature is a cryptographic technique that allows a sender to authenticate their identity and ensure the integrity of a message. It involves the following steps:
Hashing the Message:
The sender applies a hash function to the original message to generate a fixed-size hash value (message digest).
This hash value uniquely represents the contents of the message.
Encrypting the Hash with the Sender’s Private Key:
The sender encrypts the hash value using their private key.
This encrypted hash is the digital signature.
Appending the Signature:
The digital signature is attached to the original message and sent to the recipient.
Verification Process by the Recipient:
Decrypting the Signature:
The recipient uses the sender’s public key to decrypt the digital signature, retrieving the original hash value.
Hashing the Received Message:
The recipient independently computes the hash of the received message using the same hash function used by the sender.
Comparing Hash Values:
The recipient compares the decrypted hash (from the signature) with the newly computed hash.
If they match, it confirms that the message has not been altered and authenticates the sender.
Why Option B is Correct:
Authentication: Encrypting the hash with the sender’s private key ensures that the signature could only have been created by the sender, as only they possess their private key.
Integrity: Any alteration to the message would result in a different hash value upon verification, indicating tampering.
Non-Repudiation: The sender cannot deny sending the message, as the digital signature is uniquely tied to their private key.
Why the Other Options Are Incorrect:
Option A (A message hash encrypted with the sender’s public key):
The sender’s public key is available to anyone. Encrypting with the public key does not authenticate the sender, as anyone could perform this action.
Typically, the recipient’s public key is used for encrypting messages to ensure confidentiality, not for creating digital signatures.
Option C (A message hash encrypted with the recipient’s public key):
Encrypting with the recipient’s public key is used to ensure that only the recipient can decrypt the message (confidentiality), not to create a digital signature.
This does not provide authentication of the sender’s identity.
Option D (A message hash encrypted with the recipient’s private key):
The recipient’s private key should never be known or used by the sender.
Encrypting with the recipient’s private key doesn’t make sense in this context and does not create a valid digital signature.
Summary:
A digital signature is best described as a message hash encrypted with the sender’s private key (Option B).
This process provides authentication, integrity, and non-repudiation, ensuring that the message is from the claimed sender and has not been altered.
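A minimal Python sketch of signing and verification with the cryptography library, using a throwaway RSA key pair; the sign and verify calls hash the message and perform the private-key and public-key operations described above internally.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The sender's key pair: the PRIVATE key creates signatures, the PUBLIC key verifies them.
sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

message = b"Approve purchase order 7741"

# Sign: the message is hashed and the hash is processed with the sender's private key.
signature = sender_private.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Verify: the recipient recomputes the hash and checks it against the signature using the public key.
try:
    sender_public.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: authentic and unmodified")
except InvalidSignature:
    print("Signature invalid: altered message or wrong sender")
```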
To increase security and prevent active attacks on a branch office network, an organization connects an IPS to a network tap. The IPS shows alerts for active attacks, but the network still suffers multiple breaches in quick succession. What should the organization do to address this situation?
A. Require a VPN for all connections
B. Replace the IPS with a firewall
C. Implement an SD-WAN
D. Place the IPS device inline
Answer: D. Place the IPS device inline
Explanation:
The organization should place the Intrusion Prevention System (IPS) device inline within the network traffic flow to actively prevent attacks and enhance security. Here’s why this is the most effective solution:
Current Scenario
IPS Connected to a Network Tap:
A network tap is a passive device that copies network traffic for monitoring purposes.
The IPS, connected via the tap, is operating in passive mode.
Limitations:
The IPS can detect and alert on malicious activities.
It cannot intervene or block malicious traffic because it’s not in the direct path of network traffic.
Outcome:
Despite alerts, the network continues to suffer breaches because the IPS isn’t able to actively prevent the attacks.
Recommended Solution
Place the IPS Device Inline:
Inline Deployment:
The IPS is positioned directly in the path of network traffic between the source and destination.
All incoming and outgoing traffic passes through the IPS.
Active Prevention Capability:
The IPS can inspect packets in real-time.
It can block, drop, or modify malicious traffic immediately upon detection.
Benefits:
Immediate Response: Stops attacks before they impact network resources.
Enhanced Security: Prevents unauthorized access, exploits, and malware propagation.
Reduced Breaches: Active intervention reduces the likelihood of successful breaches.
Why Other Options are Less Effective
A. Require a VPN for All Connections:
VPNs encrypt data in transit and secure remote access connections.
Limitations:
Does not prevent attacks originating from within the network.
Encryption alone doesn’t stop malicious traffic; it might even mask threats from security devices.
B. Replace the IPS with a Firewall:
Firewalls control traffic based on predefined security rules and are essential for perimeter security.
Limitations:
Traditional firewalls may not have deep packet inspection or advanced threat detection.
Replacing the IPS could reduce visibility into sophisticated attacks.
Complementary Roles: Firewalls and IPS devices serve different functions and are often used together.
C. Implement an SD-WAN:
SD-WAN (Software-Defined Wide Area Network) optimizes WAN connections using software-based management.
Limitations:
Enhances network performance and flexibility but is not designed primarily for security.
Does not provide intrusion prevention capabilities needed to stop active attacks.
Implementation Considerations
Network Design:
Ensure the IPS is properly integrated into the network without causing bottlenecks.
Consider network redundancy to prevent single points of failure.
Performance:
Verify that the IPS can handle network throughput requirements to avoid latency.
Utilize IPS devices with appropriate capacity and performance specifications.
High Availability:
Deploy the IPS with failover capabilities to maintain network uptime in case of device failure.
Policy Configuration:
Update IPS signatures regularly to detect the latest threats.
Configure policies to balance security and false positives, ensuring legitimate traffic isn’t blocked.
Monitoring and Maintenance:
Continuously monitor IPS alerts and logs.
Analyze incidents to refine IPS rules and improve security posture.
Conclusion
By placing the IPS device inline, the organization moves from passive detection to active prevention of attacks. This change allows the IPS to:
Inspect and Act on Traffic in Real-Time: Block malicious packets before they reach their target.
Prevent Breaches: Reduce or eliminate successful attacks resulting in breaches.
Enhance Overall Security: Strengthen defenses against a wide array of threats.
An administrator sets up a VM for testing different versions of an application. The administrator wants to be able to return to the baseline state as quickly as possible between each test. What should the administrator do?
A. Configure a sandbox environment
B. Implement automatic change management
C. Create a snapshot of the VM
D. Run a full backup of the host
C. Create a snapshot of the VM
Explanation:
Creating a snapshot of the virtual machine captures its exact state at a specific point in time—including the operating system, installed applications, configurations, and settings. This means that after testing different versions of the application, the administrator can quickly revert the VM back to this baseline state with minimal downtime. It’s a fast and efficient way to reset the environment between tests without the need to reinstall or reconfigure the system.
Why the Other Options Aren’t Ideal:
A. Configure a sandbox environment:
While sandboxing can isolate applications for testing and enhance security, it doesn’t provide a quick method to revert the entire system to a previous state. It focuses more on containing changes rather than facilitating rapid rollbacks.
B. Implement automatic change management:
Change management tracks and manages changes to the system but doesn’t help in rapidly returning to a baseline state. It’s about oversight and control, not swift restoration.
D. Run a full backup of the host:
Full backups are essential for disaster recovery but are time-consuming to create and restore. Using them between each test would be inefficient and would significantly slow down the testing process.
Additional Insights:
Leveraging VM snapshots not only accelerates the testing cycle but also offers flexibility:
Multiple Testing Scenarios: You can create multiple snapshots at different stages to branch out testing paths without affecting the primary baseline.
Resource Efficiency: Snapshots consume less storage compared to full backups since they only record changes from the time of the snapshot.
Integration with Automation Tools: Incorporate snapshot management into automation scripts to further streamline the process, allowing for automated revert actions after test completions.
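As one possible sketch, the Python snippet below assumes a local KVM/QEMU host managed through the libvirt-python bindings; the VM and snapshot names are placeholders, and other hypervisors expose equivalent snapshot and revert operations.

```python
import libvirt  # libvirt-python bindings; assumes a local KVM/QEMU hypervisor

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("app-test-vm")  # placeholder VM name

# Capture the baseline state once, before any application version is installed.
dom.snapshotCreateXML("<domainsnapshot><name>baseline</name></domainsnapshot>")

# ... install and test one version of the application ...

# Revert to the baseline in seconds between test runs instead of rebuilding the VM.
baseline = dom.snapshotLookupByName("baseline")
dom.revertToSnapshot(baseline)
conn.close()
```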