701-750 Flashcards
A recent malware outbreak across a subnet included successful rootkit installations on many PCs, ensuring persistence by rendering remediation efforts ineffective. Which of the following would best detect the presence of a rootkit in the future?
A. FDE
B. NIDS
C. EDR
D. DLP
C. EDR (Endpoint Detection and Response)
Explanation:
Endpoint Detection and Response (EDR) solutions are specifically designed to detect and respond to advanced threats, including rootkits. EDR tools continuously monitor endpoints for suspicious activity and provide detailed visibility into the behavior of applications and processes, making them highly effective at detecting rootkits. They can identify anomalies, provide alerting, and facilitate in-depth investigations, which are crucial for detecting and addressing rootkits that aim to achieve persistence.
The other options do not directly address the detection of rootkits:
A. FDE (Full Disk Encryption): FDE encrypts the data on the disk, but it does not detect or prevent rootkits.
B. NIDS (Network Intrusion Detection System): NIDS monitors network traffic for suspicious activity, but it may not be effective in detecting rootkits, which operate at the endpoint level.
D. DLP (Data Loss Prevention): DLP focuses on preventing data breaches by monitoring and controlling data transfers, but it is not designed to detect rootkits.
Therefore, EDR is the best choice for detecting the presence of rootkits in the future.
An organization is building a single virtual environment that will host customer applications and data that require availability at all times. The data center that is hosting the environment will provide generator power and ISP services. Which of the following is the best solution to support the organization’s requirement?
A. NIC teaming
B. Cloud backups
C. A load balancer appliance
D. UPS
Community : D 62%, C 24%
Chat GPT :
D. UPS (Uninterruptible Power Supply)
Explanation:
In the scenario described, where the organization’s single virtual environment hosting customer applications and data requires constant availability, a UPS (Uninterruptible Power Supply) is the best solution to support this requirement. Here’s why:
Power Continuity: A UPS provides continuous power supply to critical infrastructure, such as servers and networking equipment, during electrical power disruptions or outages. This ensures that the virtual environment remains operational even if there are short-term power interruptions.
Protection Against Power Surges: UPS units also protect against power spikes and surges, which can damage sensitive electronic equipment.
Smooth Transition to Generator Power: In cases where the primary power source fails and backup generators kick in, a UPS acts as a bridge to ensure there is no interruption in power supply during the transition.
Stabilization of Power Quality: UPS systems also regulate and stabilize the voltage and frequency of the power supplied to the virtual environment, which can help prevent issues that may arise from inconsistent power quality.
While options like NIC teaming, cloud backups, and load balancer appliances are important in their respective contexts (network redundancy, data protection, and traffic distribution for high availability), they do not directly address the need for continuous power supply and protection against outages, which is crucial for maintaining availability in the described environment. Therefore, UPS is the most appropriate solution for ensuring continuous availability of the virtual environment in this scenario.
(Brain dump : C. A load balancer appliance)
A new company wants to avoid channel interference when building a WLAN. The company needs to know the radio frequency behavior, identify dead zones, and determine the best place for access points. Which of the following should be done first?
A. Configure heat maps.
B. Utilize captive portals.
C. Conduct a site survey.
D. Install Wi-Fi analyzers.
C. Conduct a site survey.
Explanation:
When setting up a WLAN (Wireless Local Area Network), especially to avoid channel interference and ensure optimal coverage and performance, the first step should be to conduct a site survey. Here’s why:
Radio Frequency Behavior: A site survey involves analyzing the radio frequency (RF) behavior in the physical space where the WLAN will be deployed. This includes assessing existing RF interference, noise sources, and other signals that could affect WLAN performance.
Identify Dead Zones: By conducting a site survey, you can identify areas with poor or no coverage (dead zones) where additional access points may be required to ensure comprehensive coverage.
Optimal Access Point Placement: The survey helps in determining the best locations for installing access points (APs) to maximize coverage while minimizing interference between APs. It takes into account factors like building layout, construction materials, and potential sources of interference.
Heat Maps and Wi-Fi Analyzers: Heat maps and Wi-Fi analyzers are tools used during and after the site survey to visualize signal strength and interference levels. However, these tools are utilized effectively after the initial site survey has been conducted to validate the survey findings and fine-tune the WLAN deployment.
Therefore, conducting a site survey is the crucial first step to gather data on RF behavior, identify dead zones, and strategically place access points to optimize WLAN performance and minimize interference.
The following IP information was provided to internal auditors to help assess organizational security:
User traffic subnet : 10.2.5.0/16
File Server : 10.2.2.7
Internal Linux Web server : 10.3.9.9
SQL Server : 10.3.15.82
HR server : 10.9.8.14
Firewall : 10.1.1.1
Which of the following tools would most likely be used to perform network reconnaissance and help understand what is accessible to all users? (Choose two.)
A. ipconfig
B. ping
C. chmod
D. netstat
E. traceroute
F. route
Community : (DE 47%, BE 45%)
D. netstat shows what is listening.
E. traceroute finds the path.
ping only checks host availability, not which services are listening.
(Chat GPT & Brain dump : B, E)
B. ping
Ping is used to determine whether a specific IP address is reachable and to measure the round-trip time for packets sent from the local host to a destination computer.
E. traceroute
Traceroute is used to display the route and measure transit delays of packets across an IP network. It shows the path packets take to reach a specified destination.
Here’s how these tools apply to the scenario provided:
Ping: It can be used to determine if the servers (File Server, Internal Linux Web Server, SQL Server, HR Server) are reachable from the user traffic subnet (10.2.5.0/16). For example, by pinging 10.2.2.7 (File Server) from a machine in the user traffic subnet, one can verify if it responds, indicating accessibility.
Traceroute: This tool can be used to trace the path packets take from the user traffic subnet to each of the servers (File Server, Internal Linux Web Server, SQL Server, HR Server). It helps in understanding the network hops and potential access points between the user subnet and these servers.
Tools that are less relevant or not typically used for this purpose in this context:
A. ipconfig: This command displays the IP configuration for Windows machines, focusing on local network settings rather than external network reconnaissance.
C. chmod: This command is used to change file permissions in Unix-like operating systems, not for network reconnaissance.
D. netstat: Netstat displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. While useful for network troubleshooting, it does not map the path to a destination or confirm reachability the way ping and traceroute do.
F. route: This command shows and manipulates the local IP routing table. It is used to manage routing, not to check paths and accessibility the way ping and traceroute do.
Therefore, ping and traceroute are the most appropriate tools for performing network reconnaissance and understanding what resources are accessible from the user traffic subnet based on the IP information provided.
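One subtlety in the IP information worth noticing: the user traffic subnet is written as 10.2.5.0/16, whose network is actually 10.2.0.0/16, so the File Server sits inside the users' own subnet while the other servers do not. A minimal sketch with Python's standard `ipaddress` module makes this check explicit (the server names and addresses are taken from the question):

```python
import ipaddress

# The user traffic subnet is given as 10.2.5.0/16; strict=False masks off the
# host bits, yielding the actual network 10.2.0.0/16.
user_subnet = ipaddress.ip_network("10.2.5.0/16", strict=False)

servers = {
    "File Server": "10.2.2.7",
    "Internal Linux Web server": "10.3.9.9",
    "SQL Server": "10.3.15.82",
    "HR server": "10.9.8.14",
    "Firewall": "10.1.1.1",
}

# Hosts inside the users' subnet are reachable without routing; the rest
# require hops through a router/firewall, which traceroute would reveal.
same_subnet = {name: ipaddress.ip_address(ip) in user_subnet
               for name, ip in servers.items()}
print(same_subnet)
```

Only the File Server (10.2.2.7) falls inside 10.2.0.0/16, which is why tracing the path to the other servers tells the auditors which segments and gateways sit between users and those resources.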
A software company adopted the following processes before releasing software to production:
Peer review
Static code scanning
Signing
A considerable number of vulnerabilities are still being detected when code is executed on production. Which of the following security tools can improve vulnerability detection on this environment?
A. File integrity monitoring for the source code
B. Dynamic code analysis tool
C. Encrypted code repository
D. Endpoint detection and response solution
B. Dynamic code analysis tool
Here’s why a dynamic code analysis tool is the most appropriate choice:
Complement to Static Analysis: While static code scanning (static analysis) helps identify vulnerabilities by analyzing the code without executing it, dynamic code analysis (dynamic analysis) examines the code during runtime. It can detect vulnerabilities that may only manifest during execution or interaction with other components.
Detection of Runtime Vulnerabilities: Dynamic analysis tools can detect issues such as memory leaks, input validation flaws, insecure configurations, and other runtime-specific vulnerabilities. These might not be evident during static analysis but can affect the application when it is running in a production environment.
Continuous Monitoring: Unlike static analysis, which is typically performed during development or before deployment, dynamic analysis can provide continuous monitoring of the application in its operational state. This helps in detecting vulnerabilities that might emerge over time or due to changes in the environment.
Feedback Loop for Developers: Dynamic analysis tools often provide real-time feedback to developers about vulnerabilities discovered during runtime. This allows for quicker identification and remediation of issues, improving overall application security.
Therefore, implementing a dynamic code analysis tool would enhance the company’s ability to detect vulnerabilities in the production environment, complementing the existing static code scanning and peer review processes.
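As a toy illustration of the static-versus-dynamic distinction, the sketch below exercises a function with a range of inputs and records runtime failures. The function under test (`parse_discount`) is a hypothetical example invented for this sketch, not something from the question; real dynamic analysis tools (DAST scanners, fuzzers) automate this idea at scale:

```python
def parse_discount(code: str) -> int:
    """Return percent off for a coupon code like 'SAVE20'."""
    # Syntactically valid, so static scanning passes it; it still crashes
    # at runtime on unexpected input.
    return int(code.removeprefix("SAVE"))

def dynamic_check(func, inputs):
    """Run func against many inputs and record which ones raise at runtime."""
    failures = []
    for value in inputs:
        try:
            func(value)
        except Exception as exc:
            failures.append((value, type(exc).__name__))
    return failures

# Only execution reveals that empty, lowercase, or non-numeric suffixes
# crash the parser.
found = dynamic_check(parse_discount, ["SAVE20", "SAVE", "save10", "SAVE5OFF"])
print(found)
```

This is the same principle behind the answer: some defects simply do not exist until the code runs against real input.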
A security analyst needs to harden access to a network. One of the requirements is to authenticate users with smart cards. Which of the following should the analyst enable to best meet this requirement?
A. CHAP
B. PEAP
C. MS-CHAPv2
D. EAP-TLS
D. EAP-TLS (Extensible Authentication Protocol - Transport Layer Security)
Here’s why EAP-TLS is suitable for smart card authentication:
Mutual Authentication: EAP-TLS supports mutual authentication, which means both the client (user with a smart card) and the server (network or authentication server) authenticate each other using digital certificates. This aligns well with the security requirements for smart card-based authentication.
Certificate-based Authentication: Smart cards typically store digital certificates that authenticate the user to the network. EAP-TLS facilitates the use of these certificates for secure authentication without transmitting sensitive information like passwords over the network.
Strong Security: EAP-TLS utilizes TLS (Transport Layer Security), which provides strong encryption and integrity protection during the authentication process. This ensures that the authentication exchange between the client and server is secure against eavesdropping and tampering.
In contrast, the other options:
CHAP (Challenge Handshake Authentication Protocol) and MS-CHAPv2 (Microsoft Challenge Handshake Authentication Protocol version 2) are primarily used for password-based authentication and do not directly support smart card authentication.
PEAP (Protected Extensible Authentication Protocol) is an authentication protocol that supports various inner authentication methods, including EAP-TLS. However, PEAP itself does not provide the direct support needed for smart card-based authentication; rather, it encapsulates other EAP methods like EAP-TLS within a secure tunnel.
Therefore, EAP-TLS is the most appropriate choice to enable smart card authentication while ensuring strong security and compliance with smart card deployment requirements.
A penetration-testing firm is working with a local community bank to create a proposal that best fits the needs of the bank. The bank’s information security manager would like the penetration test to resemble a real attack scenario, but it cannot afford the hours required by the penetration-testing firm. Which of the following would best address the bank’s desired scenario and budget?
A. Engage the penetration-testing firm’s red-team services to fully mimic possible attackers.
B. Give the penetration tester data diagrams of core banking applications in a known-environment test.
C. Limit the scope of the penetration test to only the system that is used for teller workstations.
D. Provide limited networking details in a partially known-environment test to reduce reconnaissance efforts.
D. Provide limited networking details in a partially known-environment test to reduce reconnaissance efforts.
Explanation:
A. Engage the penetration-testing firm's red-team services to fully mimic possible attackers: While this option would provide a very realistic attack scenario, red-team services are typically very comprehensive and can be quite costly due to the extensive time and effort required to fully mimic attackers. This might not fit within the bank's budget constraints.
B. Give the penetration tester data diagrams of core banking applications in a known-environment test: This would save time on reconnaissance but may not fully resemble a real attack scenario because the tester would have more information than a typical attacker would.
C. Limit the scope of the penetration test to only the system that is used for teller workstations: Limiting the scope too much might not provide a comprehensive assessment of the bank's overall security posture, and could miss critical vulnerabilities in other parts of the network.
D. Provide limited networking details in a partially known-environment test to reduce reconnaissance efforts: This option strikes a balance between providing a realistic attack scenario and controlling costs. By giving limited details, the penetration testers can focus on testing without spending excessive time on reconnaissance, thereby simulating a more realistic attack within a constrained budget.
Conclusion:
Option D allows the bank to simulate a realistic attack scenario while controlling costs by reducing the time spent on the reconnaissance phase. This approach aligns well with the bank’s need to balance realism with budget constraints.
A security analyst is reviewing SIEM logs during an ongoing attack and notices the following:
http://company.com/get.php?f=/etc/passwd
https://company.com/..%2F..%2F..%2F..%2Fetc%2Fshadow
https://company.com/../../../../etc/passwd
Which of the following best describes the type of attack?
A. SQLi
B. CSRF
C. API attacks
D. Directory traversal
D. Directory traversal
Explanation:
Directory traversal attacks involve an attacker manipulating a URL to access files and directories that are stored outside the web root folder. The aim is to access restricted files, such as configuration files, password files, or other sensitive information on the server. The log entries show attempts to access the /etc/passwd and /etc/shadow files by including sequences like ../../../../, which is a typical pattern in directory traversal attacks. This pattern attempts to navigate up the directory structure to reach the root directory and then access the specified files.
The other options are not applicable based on the provided log entries:
A. SQLi (SQL injection): This involves inserting or manipulating SQL queries in input fields to execute arbitrary SQL commands. There are no SQL commands or database interactions in the provided logs.
B. CSRF (Cross-Site Request Forgery): This exploits the trust that a web application has in a user's browser, typically involving actions made on behalf of an authenticated user. The logs do not indicate any actions being performed on behalf of a user.
C. API attacks: These involve exploiting vulnerabilities in an application's API. The logs do not show interactions with an API or attempts to exploit API endpoints.
Thus, the attack observed in the logs is best categorized as a directory traversal attack.
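Note that the second log entry hides the traversal behind URL encoding (`..%2F` is `../`), so any detection logic has to decode before matching. A minimal sketch of such a log check, using the exact URLs from the question, might look like this:

```python
from urllib.parse import unquote

def is_traversal_attempt(url: str) -> bool:
    """Flag URLs containing parent-directory sequences or sensitive paths,
    decoding first so encoded forms such as ..%2F are not missed."""
    decoded = unquote(unquote(url))  # double-decode to catch %252F tricks
    return ("../" in decoded
            or "..\\" in decoded
            or "/etc/passwd" in decoded
            or "/etc/shadow" in decoded)

logs = [
    "http://company.com/get.php?f=/etc/passwd",
    "https://company.com/..%2F..%2F..%2F..%2Fetc%2Fshadow",
    "https://company.com/../../../../etc/passwd",
    "https://company.com/index.html",        # benign request for contrast
]
flagged = [u for u in logs if is_traversal_attempt(u)]
print(flagged)
```

All three attack URLs from the SIEM logs are flagged; the benign request is not. Real WAF and SIEM rules are more elaborate (Unicode and overlong-UTF-8 encodings, path normalization), but the decode-then-match pattern is the core idea.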
A certificate vendor notified a company that recently invalidated certificates may need to be updated. Which of the following mechanisms should a security administrator use to determine whether the certificates installed on the company’s machines need to be updated?
A. SCEP
B. OCSP
C. CSR
D. CRL
(Community : B 65%, D35%)
OCSP (Online Certificate Status Protocol) is a protocol used to check the revocation status of digital certificates. When a certificate vendor notifies a company that certificates may need to be updated, it often implies that there could be a revocation or expiration issue with the certificates. OCSP allows the security administrator to check the real-time status of the certificates by querying the issuing Certificate Authority (CA) or an OCSP responder to verify whether the certificates are still valid.
CRL (Certificate Revocation List) is a list of revoked certificates published by a CA. While CRLs contain information about revoked certificates, they may not always provide real-time status updates, unlike OCSP.
ChatGPT & Braindump :
D. CRL (Certificate Revocation List)
Explanation:
CRL (Certificate Revocation List): A CRL is a list of certificates that have been revoked by the certificate authority (CA) before their expiration date and should no longer be trusted. By checking the CRL, the security administrator can determine if any of the installed certificates have been revoked and therefore need to be updated.
The other options are not directly related to checking the status of installed certificates:
A. SCEP (Simple Certificate Enrollment Protocol): This is used for certificate enrollment and issuing certificates, not for checking their revocation status.
B. OCSP (Online Certificate Status Protocol): OCSP is a protocol used to obtain the revocation status of a single certificate in real-time. While OCSP can be used to check the revocation status, it is more commonly used for checking individual certificates on the fly rather than reviewing all installed certificates. However, OCSP could be a complementary method if real-time checks are needed.
C. CSR (Certificate Signing Request): A CSR is a request sent to a CA to apply for a digital identity certificate. It does not pertain to checking the status of existing certificates.
Therefore, the CRL is the most appropriate mechanism for determining whether the certificates need to be updated.
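Whichever mechanism is chosen, the underlying logic is the same: compare the serial numbers of installed certificates against the CA's revocation data. The sketch below models that comparison only; a real CRL is a signed DER/PEM structure downloaded from the CA's distribution point and parsed with a library such as `cryptography`, and all serials and hostnames here are hypothetical:

```python
# Serial numbers listed in the downloaded CRL (hypothetical values).
revoked_serials = {0x1A2B, 0x3C4D}

# Serials of the certificates installed on company machines (hypothetical).
installed_certs = {
    "web01": 0x1A2B,
    "mail01": 0x9F00,
    "vpn01": 0x3C4D,
}

# Any host whose certificate serial appears in the CRL needs a new cert.
needs_update = sorted(host for host, serial in installed_certs.items()
                      if serial in revoked_serials)
print(needs_update)
```

An OCSP check answers the same question per certificate with a live query to the CA's responder instead of a downloaded list, which is the trade-off the community discussion above is debating.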
A recent vulnerability scan revealed multiple servers have non-standard ports open for applications that are no longer in use. The security team is working to ensure all devices are patched and hardened. Which of the following would the security team perform to ensure the task is completed with minimal impact to production?
A. Enable HIDS on all servers and endpoints.
B. Disable unnecessary services.
C. Configure the deny list appropriately on the NGFW.
D. Ensure the antivirus is up to date.
B. Disable unnecessary services.
Explanation:
Disabling unnecessary services will directly address the issue of non-standard ports being open for applications that are no longer in use. This action will help close security gaps and reduce the attack surface without significantly impacting production, as it targets only the services and applications that are confirmed to be no longer needed.
Here’s why the other options are less suitable in this specific scenario:
A. Enable HIDS on all servers and endpoints: While Host-based Intrusion Detection Systems (HIDS) are important for monitoring and detecting suspicious activities, enabling them does not directly address the issue of unnecessary services and open ports.
C. Configure the deny list appropriately on the NGFW: Configuring a deny list on the Next-Generation Firewall (NGFW) can help block traffic to and from the non-standard ports. However, this does not remove the underlying issue of unnecessary services running on the servers. It is better to disable those services entirely to reduce the risk.
D. Ensure the antivirus is up to date: Keeping antivirus software up to date is important for overall security, but it does not address the specific issue of open ports and unnecessary services directly.
An employee fell for a phishing scam, which allowed an attacker to gain access to a company PC. The attacker scraped the PC’s memory to find other credentials. Without cracking these credentials, the attacker used them to move laterally through the corporate network. Which of the following describes this type of attack?
A. Privilege escalation
B. Buffer overflow
C. SQL injection
D. Pass-the-hash
D. Pass-the-hash
Explanation:
In a pass-the-hash attack, an attacker extracts hashed credentials from one computer and uses them to authenticate on other computers within the network without cracking the hash. This method allows the attacker to move laterally through the network by using the hashed credentials to gain access to other systems.
Here’s why the other options are less suitable:
A. Privilege escalation: This involves gaining higher-level permissions than those initially granted, which is not the primary focus of the attack described.
B. Buffer overflow: This involves exploiting a program's vulnerability to execute arbitrary code, which does not match the scenario described.
C. SQL injection: This involves manipulating SQL queries to gain unauthorized access to a database, which is not related to the attack on credentials in this scenario.
Which of the following is a common source of unintentional corporate credential leakage in cloud environments?
A. Code repositories
B. Dark web
C. Threat feeds
D. State actors
E. Vulnerability databases
A. Code repositories
Explanation:
Code repositories, such as those hosted on platforms like GitHub, GitLab, or Bitbucket, are a common source of unintentional corporate credential leakage. Developers sometimes accidentally commit credentials, API keys, or other sensitive information to these repositories. If these repositories are public or improperly secured, unauthorized individuals can access and exploit this information. This makes code repositories a significant risk factor for credential leakage in cloud environments.
Here’s why the other options are less suitable:
B. Dark web: The dark web is a place where leaked credentials may be traded, but it is not a source of unintentional leakage.
C. Threat feeds: These provide information about potential threats but are not a source of credential leakage.
D. State actors: These are entities that might exploit leaked credentials, not a source of unintentional leakage.
E. Vulnerability databases: These catalog vulnerabilities but do not typically contain leaked credentials directly.
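Tools like gitleaks and truffleHog automate exactly this kind of repository scanning. A minimal sketch of the pattern-matching idea follows; the two rules shown (AWS access key IDs, which start with AKIA, and hard-coded password assignments) are illustrative, not an exhaustive rule set, and the file content is invented:

```python
import re

# Illustrative secret-detection rules, similar in spirit to those shipped
# with open-source scanners such as gitleaks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def scan_for_secrets(text: str):
    """Return the names of rules that match anywhere in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

# A file a developer might accidentally commit (hypothetical content).
committed_file = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2"\n'
print(scan_for_secrets(committed_file))
```

Running such a scan as a pre-commit hook or CI step is a common mitigation for exactly the leakage path this question describes.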
A company is designing the layout of a new data center so it will have an optimal environmental temperature.
Which of the following must be included? (Choose two.)
A. An air gap
B. A cold aisle
C. Removable doors
D. A hot aisle
E. An IoT thermostat
F. A humidity monitor
B. A cold aisle
D. A hot aisle
Explanation:
To achieve optimal environmental temperature in a data center, it is crucial to design the layout with proper airflow management. The “cold aisle/hot aisle” configuration is a standard best practice in data center design for maintaining appropriate cooling and temperature control. Here’s a detailed look at why these options are the most relevant:
B. A cold aisle: This is where the cold air is supplied to cool the front of the equipment racks. The cold air is typically directed from the raised floor or directly from cooling units. D. A hot aisle: This is where the hot air expelled from the back of the equipment racks is collected. The hot aisle is usually aligned with the return air pathways to the cooling units.
By arranging equipment in cold aisle/hot aisle configurations, a data center can ensure that cold air is efficiently used to cool equipment and that hot air is effectively removed from the environment, maintaining an optimal temperature.
Here’s why the other options are less relevant in this context:
A. An air gap: While useful in certain cooling strategies, it is not a standard or primary method for data center cooling.
C. Removable doors: These are not typically related to temperature control but more to physical access and maintenance.
E. An IoT thermostat: While helpful for monitoring, it is not a design element of the data center layout.
F. A humidity monitor: Important for overall environmental control, but not specifically related to the design layout for temperature management.
Thus, for maintaining optimal environmental temperature, incorporating a cold aisle and a hot aisle is essential.
A privileged user at a company stole several proprietary documents from a server. The user also went into the log files and deleted all records of the incident. The systems administrator has just informed investigators that other log files are available for review. Which of the following did the administrator most likely configure that will assist the investigators?
A. Memory dumps
B. The syslog server
C. The application logs
D. The log retention policy
B. The syslog server
Explanation:
A syslog server is typically used to collect and store log data from multiple devices and systems in a centralized location. By configuring a syslog server, the systems administrator ensures that log data is copied and stored separately from the local system logs. This makes it much harder for a privileged user to cover their tracks completely by deleting local log files, as the logs would still be available on the syslog server.
Here’s a more detailed look at why this is the most likely helpful configuration:
A. Memory dumps: These are snapshots of the system’s memory at a point in time and are not typically used for storing ongoing log files. They are more useful for diagnosing crashes or debugging applications.
C. The application logs: While these could be useful, they are often stored on the same server where the application is running. If the user deleted local logs, the application logs on the same server might also be deleted.
D. The log retention policy: This ensures logs are kept for a certain period, but it does not prevent logs from being deleted locally if not combined with central logging.
By having a syslog server configured, log entries are sent to a centralized and often secure location where they can be reviewed even if the local logs have been tampered with or deleted. This setup is essential for incident response and forensic investigations.
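As a toy illustration of why centralized logs survive local tampering, the sketch below stands up a UDP listener on an ephemeral local port to play the role of the syslog server, then ships one audit record to it with Python's standard `SysLogHandler`. A real deployment would point at a hardened collector (classically UDP/514, or TLS-protected syslog), and the username and file path below are invented:

```python
import logging
import logging.handlers
import socket

# Stand-in syslog server: a UDP socket on an ephemeral local port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)
port = server.getsockname()[1]

# Ship an audit record off-host via the syslog protocol.
logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", port))
logger.addHandler(handler)
logger.info("user jdoe copied /srv/docs/proprietary.pdf")

# The collector's copy: deleting a local log file would not remove this.
datagram, _ = server.recvfrom(4096)
print(datagram.decode())
handler.close()
server.close()
```

The received datagram carries a syslog priority prefix (e.g. `<14>` for facility user, severity info) followed by the message, and it now lives on a machine the privileged user cannot scrub.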
Local guidelines require that all information systems meet a minimum security baseline to be compliant. Which of the following can security administrators use to assess their system configurations against the baseline?
A. SOAR playbook
B. Security control matrix
C. Risk management framework
D. Benchmarks
D. Benchmarks
Explanation:
Benchmarks, such as those provided by the Center for Internet Security (CIS) or other similar organizations, offer detailed guidance and best practices for securing various information systems and ensuring they meet a minimum security baseline. These benchmarks can be used to assess system configurations against a set of standardized security controls and best practices.
Here’s a more detailed look at why benchmarks are the most appropriate choice:
A. SOAR playbook: A Security Orchestration, Automation, and Response (SOAR) playbook is used to automate and orchestrate incident response activities, not specifically for assessing system configurations against a security baseline.
B. Security control matrix: While a security control matrix can help map controls to various requirements, it is not specifically used to assess system configurations. It’s more of a tool for tracking and ensuring all necessary controls are implemented.
C. Risk management framework: This provides a structured approach to managing risk and includes processes for identifying, assessing, and mitigating risks. However, it is broader and more strategic, not specifically focused on assessing system configurations against a baseline.
Benchmarks provide practical and specific criteria for evaluating whether systems comply with security standards and baselines. They typically include detailed configuration settings and recommendations for securing various types of systems and applications, making them the best tool for this purpose.
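The assessment itself boils down to comparing each setting on the system with the value the benchmark requires and reporting deviations. The sketch below models that loop; the setting names and required values are illustrative stand-ins, not an actual CIS benchmark:

```python
# Benchmark items: setting name -> required value (illustrative, not CIS).
baseline = {
    "PasswordMinLength": 14,
    "SSHPermitRootLogin": "no",
    "FirewallEnabled": True,
}

# Values collected from the system under audit (hypothetical).
system_config = {
    "PasswordMinLength": 8,
    "SSHPermitRootLogin": "no",
    "FirewallEnabled": True,
}

# Report every setting that deviates from the baseline.
findings = {key: {"expected": expected, "actual": system_config.get(key)}
            for key, expected in baseline.items()
            if system_config.get(key) != expected}
print(findings)
```

Tools such as CIS-CAT and OpenSCAP automate this same compare-and-report cycle against published benchmark content, producing a compliance score per system.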
A company’s public-facing website, https://www.organization.com, has an IP address of 166.18.75.6. However, over the past hour the SOC has received reports of the site’s homepage displaying incorrect information. A quick nslookup search shows https://www.organization.com is pointing to 151.191.122.115. Which of the following is occurring?
A. DoS attack
B. ARP poisoning
C. DNS spoofing
D. NXDOMAIN attack
C. DNS spoofing
Explanation:
DNS spoofing (also known as DNS cache poisoning) is an attack where the attacker corrupts the DNS resolver cache by inserting false DNS information. This results in redirecting traffic from the legitimate IP address to a malicious IP address. In this case, the company’s website should resolve to 166.18.75.6, but it is resolving to 151.191.122.115, indicating that a DNS spoofing attack is likely occurring.
Here’s why the other options are less appropriate:
A. DoS attack: A Denial of Service (DoS) attack aims to make a service unavailable by overwhelming it with traffic. This does not typically involve changing DNS records or redirecting traffic.
B. ARP poisoning: Address Resolution Protocol (ARP) poisoning involves sending falsified ARP messages over a local network to associate the attacker's MAC address with the IP address of a legitimate device. This attack is localized to a specific network and would not explain the incorrect IP address being resolved over the internet.
D. NXDOMAIN attack: This attack targets the non-existence of domains, causing legitimate queries to fail by returning an NXDOMAIN (non-existent domain) response. This does not explain why the domain resolves to an incorrect IP address.
The incorrect IP address pointing to the company’s domain strongly suggests that DNS spoofing is occurring.
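The check the SOC effectively performed can be sketched as a simple comparison between what a resolver returns and what the organization knows it published (the addresses below are the ones given in the question):

```python
def looks_spoofed(resolved_ips, authoritative_ips):
    """True when none of the resolver's answers match the published records.
    Compare as sets, since a site may legitimately have several A records."""
    return not set(resolved_ips) & set(authoritative_ips)

# Values from the question: the site should resolve to 166.18.75.6,
# but nslookup returned 151.191.122.115.
print(looks_spoofed(["151.191.122.115"], ["166.18.75.6"]))  # True: mismatch
print(looks_spoofed(["166.18.75.6"], ["166.18.75.6"]))      # False: as published
```

In practice this kind of monitoring is run from multiple vantage points against external resolvers, since cache poisoning may affect some resolvers and not others; DNSSEC validation addresses the root cause by authenticating the records themselves.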
An employee receives an email stating the employee won the lottery. The email includes a link that requests a name, mobile phone number, address, and date of birth be provided to confirm employee’s identity before sending the prize. Which of the following best describes this type of email?
A. Spear phishing
B. Whaling
C. Phishing
D. Vishing
C. Phishing
Explanation:
Phishing is a type of cyberattack where attackers send fraudulent emails or messages that appear to be from legitimate sources, attempting to trick recipients into providing personal information such as names, phone numbers, addresses, and dates of birth.
Here’s why the other options are less appropriate:
A. Spear phishing: This is a targeted form of phishing aimed at a specific individual or organization, often using personalized information. The given scenario does not indicate that the email is targeted specifically at the employee using personal details, but rather a generic "you won the lottery" scam.
B. Whaling: This is a type of phishing attack that specifically targets high-profile individuals such as executives or senior management. The scenario does not indicate that the email targets a high-profile individual.
D. Vishing: This stands for "voice phishing" and involves fraudulent phone calls to obtain personal information. The scenario involves an email, not a phone call.
Thus, the scenario described is best categorized as phishing.
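The two traits that give this email away, a too-good-to-be-true lure plus a request for personally identifiable information, can be captured in a toy scoring heuristic. This is a teaching sketch, not a production spam filter; the phrase lists are assumptions chosen to match the scenario.

```python
# Toy heuristic (not a production filter): score an email body for two
# classic phishing traits -- a prize lure and a request for PII.

LURES = ("won the lottery", "claim your prize", "you have been selected")
PII_REQUESTS = ("date of birth", "mobile phone number", "address")

def phishing_score(body: str) -> int:
    """Count how many lure phrases and PII requests appear in the body."""
    body = body.lower()
    lure_hits = sum(phrase in body for phrase in LURES)
    pii_hits = sum(phrase in body for phrase in PII_REQUESTS)
    return lure_hits + pii_hits

email = ("Congratulations! You won the lottery. Click the link and confirm "
         "your name, mobile phone number, address, and date of birth.")
assert phishing_score(email) >= 3   # multiple red flags: likely phishing
assert phishing_score("Agenda for Monday's meeting attached.") == 0
```

Real mail filters combine many more signals (sender reputation, URL analysis, authentication results such as SPF/DKIM), but the principle of stacking independent red flags is the same.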
A company currently uses passwords for logging in to company-owned devices and wants to add a second authentication factor. Per corporate policy, users are not allowed to have smartphones at their desks. Which of the following would meet these requirements?
A. Smart card
B. PIN code
C. Knowledge-based question
D. Secret key
A. Smart card
Explanation:
In the scenario where users are not allowed to have smartphones at their desks but need a second authentication factor, a smart card is a suitable solution. Smart cards are physical devices that typically contain embedded chips capable of storing cryptographic keys and certificates. Here’s why it fits the requirement:
A. Smart card: A smart card can be inserted into a card reader attached to the device or computer. It provides the second authentication factor by requiring something the user has (the physical card) in addition to the password (something the user knows).
B. PIN code: While a PIN code can be used alongside a smart card, on its own it does not add a second factor, because it falls under "something the user knows," the same category as a password.
C. Knowledge-based question: This is also something the user knows, and it is typically used for password recovery or reset rather than as a second factor for regular authentication.
D. Secret key: This is typically used in cryptographic contexts, but it does not provide a second authentication factor on its own without an additional component such as a smart card or token.
Therefore, a smart card is the most appropriate choice for adding a second authentication factor while adhering to the policy that prohibits smartphones at desks.
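The underlying rule, that true multifactor authentication requires factors from different categories, can be made concrete with a small sketch. The factor-to-category mapping below is an assumption for illustration, reflecting the standard knowledge/possession/inherence taxonomy.

```python
# Sketch: MFA requires factors from *different* categories. A password plus
# a PIN is still single-factor (both "something you know"); a password plus
# a smart card spans two categories. The mapping is illustrative.

FACTOR_CATEGORY = {
    "password": "knowledge",
    "pin": "knowledge",
    "security_question": "knowledge",
    "smart_card": "possession",
    "hardware_token": "possession",
    "fingerprint": "inherence",
}

def is_multifactor(factors) -> bool:
    """True when the supplied factors span at least two distinct categories."""
    return len({FACTOR_CATEGORY[f] for f in factors}) >= 2

assert is_multifactor(["password", "smart_card"])   # knows + has
assert not is_multifactor(["password", "pin"])      # both "knowledge"
```

This is exactly why options B and C fail the question's requirement: they add another credential, but not another category.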
The Chief Technology Officer of a local college would like visitors to utilize the school’s Wi-Fi but must be able to associate potential malicious activity to a specific person. Which of the following would best allow this objective to be met?
A. Requiring all new on-site visitors to configure their devices to use WPS
B. Implementing a new SSID for every event hosted by the college that has visitors
C. Creating a unique PSK for every visitor when they arrive at the reception area
D. Deploying a captive portal to capture visitors’ MAC addresses and names
D. Deploying a captive portal to capture visitors’ MAC addresses and names
Explanation:
A captive portal is a web page that intercepts and redirects a user’s attempt to access the network. It requires the user to complete certain actions before granting access, such as agreeing to terms of service or providing authentication credentials. Here’s how it fits the scenario described:
Capturing visitor information: A captive portal can be configured to collect specific information from visitors, such as their MAC addresses and names, before granting access to the school's Wi-Fi network. This information can help associate potential malicious activity with specific individuals if needed for investigation purposes.
Compliance with policies: By requiring visitors to go through the captive portal, the college ensures that each visitor is identifiable, which aligns with the CTO's objective of associating network activity with specific persons.
Now, let’s briefly consider why the other options may not be as suitable:
A. Requiring all new on-site visitors to configure their devices to use WPS: While WPS (Wi-Fi Protected Setup) is a convenient method for device configuration, it does not inherently identify individual users or capture their information for tracking purposes.
B. Implementing a new SSID for every event hosted by the college that has visitors: Creating new SSIDs for each event is cumbersome to manage and does not solve the problem of attributing malicious activity to specific users.
C. Creating a unique PSK for every visitor when they arrive at the reception area: While this approach provides individualized access credentials, it does not inherently capture the visitor information needed to tie network activity to a person.
Therefore, deploying a captive portal to capture visitors’ MAC addresses and names is the best option to meet the CTO’s objective of allowing visitors to use the school’s Wi-Fi while being able to associate potential malicious activity with specific individuals.
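The registration step behind a captive portal amounts to keeping a device-to-person mapping that investigators can query later. The sketch below shows that bookkeeping in its simplest form; the names and MAC addresses are made up for illustration, and a real portal would persist this to a database and also record the assigned IP and session times.

```python
# Minimal sketch of captive-portal registration: before a visitor's device
# is allowed through, record who the device belongs to so later activity
# can be traced back to a person. All identifiers below are fabricated.

from datetime import datetime, timezone

registrations = {}  # MAC address -> (visitor name, registration time)

def register_visitor(mac: str, name: str) -> None:
    """Store the visitor's identity against their device's MAC address."""
    registrations[mac.lower()] = (name, datetime.now(timezone.utc))

def who_is(mac: str):
    """During an investigation, look up the person behind a device."""
    entry = registrations.get(mac.lower())
    return entry[0] if entry else None

register_visitor("AA:BB:CC:DD:EE:01", "Alex Visitor")
assert who_is("aa:bb:cc:dd:ee:01") == "Alex Visitor"
assert who_is("aa:bb:cc:dd:ee:99") is None   # unregistered device
```

Note the caveat that MAC addresses can be randomized or spoofed, which is why real deployments often pair the portal with per-visitor credentials as a second attribution anchor.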
Which of the following is most likely associated with introducing vulnerabilities on a corporate network by the deployment of unapproved software?
A. Hacktivists
B. Script kiddies
C. Competitors
D. Shadow IT
D. Shadow IT
Explanation:
Shadow IT refers to the use of IT systems, software, and services within an organization without explicit approval or oversight from the IT department or management. When employees or departments deploy unapproved software or services, it can introduce several vulnerabilities and risks to the corporate network, including:
Security vulnerabilities: Unapproved software may not undergo rigorous security testing or receive timely updates, leaving vulnerabilities that attackers can exploit.
Data loss: Some shadow IT solutions lack adequate data protection measures, leading to potential data breaches or leaks.
Compliance risks: Using unapproved software may violate organizational policies, industry regulations, or legal requirements, exposing the organization to compliance risks.
Interoperability issues: Shadow IT solutions may not integrate well with existing corporate systems, leading to operational disruptions or compatibility problems.
In contrast, the other options are less directly associated with introducing vulnerabilities through unapproved software deployment:
A. Hacktivists: Typically focus on political or social causes rather than deploying unapproved software within a corporate network.
B. Script kiddies: Inexperienced individuals who use existing scripts or tools to exploit vulnerabilities; they are not characteristically associated with introducing unapproved software to the network.
C. Competitors: While competitors may engage in corporate espionage or targeted attacks, they are unlikely to introduce vulnerabilities by deploying unapproved software inside the network.
Therefore, shadow IT is the most likely option associated with introducing vulnerabilities on a corporate network due to the unauthorized deployment of software or services.
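Shadow IT is typically surfaced by auditing what is actually installed on endpoints against the IT-approved software catalog. The sketch below shows the core of such an audit as a set difference; the package names and the catalog are assumptions for illustration, standing in for data an inventory agent would collect.

```python
# Sketch of a shadow-IT audit: compare software installed on an endpoint
# against the IT-approved catalog and report anything unapproved.
# Package names and the catalog are illustrative placeholders.

APPROVED = {"office-suite", "corp-vpn", "endpoint-agent", "browser"}

def find_shadow_it(installed):
    """Return packages present on the endpoint but absent from the catalog."""
    return set(installed) - APPROVED

endpoint = {"office-suite", "browser", "personal-cloud-sync", "unvetted-chat"}
assert find_shadow_it(endpoint) == {"personal-cloud-sync", "unvetted-chat"}
assert find_shadow_it({"browser", "corp-vpn"}) == set()   # fully compliant
```

In practice this comparison is done by asset-management or EDR tooling at scale, but the logic, inventory minus allowlist, is the same.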