Practice Test Qs #1 Flashcards
While browsing the internet, a user receives a pop-up that states, “We have detected a Trojan virus. Click OK to begin the repair process.” Out of fright, the user clicks OK. Given the following choices, what is the most likely outcome of the user’s response?
a) User starts experiencing drive-by downloads.
b) UAC will need to be enabled.
c) Nothing happens because Windows BitLocker blocks the Trojan virus.
d) Unwanted notifications start popping up in Windows.
d) Unwanted notifications start popping up in Windows.
Malware often targets the browser, so clicking on a website pop-up is likely to deliver some type of infection, such as adware, which will deliver unwanted notifications.
A drive-by download will infect a computer with malware because a user visited a malicious site. However, in this scenario, the user was not passive. They actively interacted with the pop-up to install the adware.
BitLocker is an encryption tool, not an antivirus tool.
User Account Controls (UACs) prevent the unauthorized use of administrative privileges. They are enabled by default but can be disabled.
A vulnerability manager is brainstorming ways to enhance security for the company’s mobile devices. The company uses only Apple devices, so one of the manager’s ideas is to look for anomalous files that do not belong on Apple platforms, a sign of malware that was blasted out indiscriminately rather than profiled for the target operating system. Which of the following file extensions would be anomalous?
a) .dmg
b) .apk
c) .app
d) .pkg
b) .apk
An .apk file is a package format for Android, and the vulnerability manager only has Apple in their environment. On Android, enabling installation from unknown sources allows untrusted apps to be downloaded from a website and installed using the .apk file format.
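As a rough illustration of the manager’s idea, a minimal Python sketch that scans a folder for file types foreign to an Apple-only environment (the scan path and the deny-list are assumptions for this example):

```python
from pathlib import Path

# Extensions that do not belong in an Apple-only environment (this deny-list is an
# assumption for the example; .apk is the Android package format).
NON_APPLE_EXTENSIONS = {".apk"}

def find_anomalous_files(root: str):
    """Yield files whose extension marks them as foreign to the Apple ecosystem."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in NON_APPLE_EXTENSIONS:
            yield path

# Example usage: scan a shared directory (hypothetical path) for Android packages.
for suspect in find_anomalous_files("/Users/Shared"):
    print(f"Anomalous file: {suspect}")
```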
DMG (disk image) format is used for simple installs where the package contents need to be copied to the Applications folder.
PKG format is used where app setup needs to perform additional actions, such as running a service or writing files to multiple folders.
The app is placed in a directory with a .app extension in the Applications folder when it has been installed.
An administrator uses a backup rotation scheme that labels the backup tapes in generations. What is this called?
a) Frequency
b) 3-2-1 backup rule
c) Synthetic
d) GFS
d) GFS
Grandfather-father-son (GFS) is a backup rotation scheme in which son tapes store the most recent data and have the shortest retention period. Grandfather tapes are the oldest and have the longest retention period.
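As a rough sketch of how generations might be assigned, one common GFS arrangement maps each backup date to a tape generation; the calendar rules below (monthly grandfather, weekly father, daily son) are only one example convention:

```python
from datetime import date

def gfs_generation(backup_date: date) -> str:
    """Map a backup date to a GFS tape generation (one example convention)."""
    if backup_date.day == 1:           # first backup of the month -> longest retention
        return "grandfather"
    if backup_date.weekday() == 6:     # Sunday -> weekly tape
        return "father"
    return "son"                       # all other days -> shortest retention

print(gfs_generation(date(2024, 6, 1)))   # grandfather
print(gfs_generation(date(2024, 6, 9)))   # father (a Sunday)
print(gfs_generation(date(2024, 6, 11)))  # son
```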
The 3-2-1 backup rule is a best-practice maxim: keep at least three copies of data, on two different media types, with one copy stored off site. Applying it to backup procedures helps verify that the solution mitigates the widest possible range of disaster scenarios.
The synthetic full backup is not generated directly from the original data but instead assembled from other backup jobs.
Frequency is the period between backup jobs. If the edits are much more difficult to reconstruct, the backup frequency might need to be measured in hours, minutes, or seconds.
A software engineer uses the “data protection” option for the apps on their mobile device. With this option, app data is subject to a second round of encryption using a key derived from and protected by the user’s credentials. What is this method?
a) Device encryption
b) Remote backup application
c) Profile security requirements
d) Locator application
a) Device encryption
Device encryption is enabled automatically when a user configures a passcode lock on the device.
A remote backup application is the backup of data, apps, and settings to the cloud. A user may choose to use a different backup provider or a third-party provider like Dropbox.
Profile security requirements document the details of the secure implementation of a device. These policies are applied to different employees and different sites or areas within the site.
A locator application finds a device if it is lost or stolen. Once set up, the phone’s location can be tracked from any web browser when it is powered on.
A company’s IT support specialist is ready to start recommissioning a system as part of the malware removal process. What is the last step before removing the computer from quarantine?
a) Verify DNS configuration.
b) Re-enable System Restore.
c) Create a fresh restore point.
d) Antivirus scan
d) Antivirus scan
Before removing a computer system from quarantine, the final step is to run another antivirus scan to make sure the system is clean.
Creating a new restore point (or system image) is one component of recommissioning and is done after re-enabling the System Restore but before running a final antivirus scan.
Re-enabling the System Restore is the beginning of the recommissioning process, along with re-enabling any disabled automatic backups.
Verifying Domain Name System (DNS) configuration to prevent reinfection is part of recommissioning, but it comes before the final antivirus scan.
An administrator in charge of user endpoint images wants to slipstream and use image deployment. Which boot method would best support this?
a) Network
b) Optical
c) Internal hard drive
d) Internet
a) Network
Network boot setup means connecting to a shared folder containing the installation files, which could be slipstreamed or use image deployment.
A computer that supports network boot could also be configured to boot into setup over the internet. To set this up, the local network’s DHCP server must be configured to supply the DNS name of the installation server.
Historically, most attended installations and upgrades were run by booting from optical media (CD-ROM or DVD).
Once the OS has been installed, the administrator will usually want to set the internal hard drive as the default (highest priority) boot device and disable any other boot devices.
A server administrator locks down security on their golden client image but is concerned about potentially breaking things in the environment. They decided to set up a test image for test users in various departments before full implementation. What should the administrator use to make individual configuration changes to the image?
a) gpedit.msc
b) shell:startup
c) services.msc
d) regedit.exe
d) regedit.exe
The Windows registry provides a remotely accessible database for storing operating system, device, and software application configuration information. The administrator can use the Registry Editor (regedit.exe) to view or edit the registry. It allows the administrator to make individual configuration changes to the test image directly by editing the registry.
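For context, the same registry values regedit.exe exposes can also be read programmatically; a minimal read-only sketch using Python’s built-in winreg module, with a well-known key path used purely as an example:

```python
import winreg  # built-in on Windows

# Read a well-known value (read-only) to illustrate programmatic registry access.
key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    product_name, _ = winreg.QueryValueEx(key, "ProductName")
    print(f"Installed edition: {product_name}")
```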
The Group Policy Editor (gpedit.msc) provides a more robust means of configuring many Windows settings than editing the registry directly.
The Services console (services.msc) starts, stops, and pauses processes running in the background; it is not used to make individual configuration changes, which is what regedit.exe provides in this group of options.
The Startup tab lets administrators disable programs added to the Startup folder (type shell:startup at the Run dialog to access this).
An administrator is backup chaining a database with the type of backup that utilizes a moderate time and storage requirement. What type of backup is this?
a) Retention
b) Full with incremental
c) Frequency
d) Full with differential
d) Full with differential
Utilizes moderate time: A differential backup only includes the changes (differences) made since the last full backup, so it doesn’t take as long as a full backup.
Requires moderate storage: It grows in size over time as more changes are made to the database since the last full backup. While it’s larger than an incremental backup (which only backs up changes since the last backup, whether full or incremental), it is smaller than a full backup.
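A small arithmetic sketch of why a full-with-differential chain sits in the middle, using an assumed 100 GB full backup and 5 GB of new changes per day (illustrative numbers only):

```python
# Illustrative storage comparison for one week after a Sunday full backup.
FULL_GB = 100    # size of the full backup (assumed)
CHANGE_GB = 5    # new data changed each day (assumed)
DAYS = 6         # Monday through Saturday

full_every_day = FULL_GB * (DAYS + 1)                                    # most storage
incremental = FULL_GB + CHANGE_GB * DAYS                                 # least storage
differential = FULL_GB + sum(CHANGE_GB * d for d in range(1, DAYS + 1))  # moderate

print(f"Daily full backups:   {full_every_day} GB")  # 700 GB
print(f"Full + incrementals:  {incremental} GB")     # 130 GB
print(f"Full + differentials: {differential} GB")    # 205 GB
```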
A full backup would require the most time and storage, as it copies all data.
An incremental backup requires the least time and storage, as it only includes the changes made since the last backup (either full or incremental).
Frequency is the period between backup jobs. If the edits are much more difficult to reconstruct, the backup frequency might need to be measured in hours, minutes, or seconds.
Retention is the period that any given backup job is kept for. Short-term retention is important for version control and for recovering from malware infection.
A user has owned the same personal computer for a while and thinks it might be time for an upgrade. Which of the following are upgrade considerations? (Select all that apply.)
a) PXE support
b) Hardware compatibility
c) Application support
d) Backup files
b) Hardware compatibility
c) Application support
d) Backup files
Hardware compatibility is a consideration. The user must make sure that the central processing unit (CPU), chipset, and RAM components are sufficient to run the OS.
Application and driver support and backward compatibility are other considerations. Most version upgrades try to maintain support for applications and device drivers developed for older versions.
Backup files and user preferences are a consideration. If the user is installing a new operating system or doing a clean install, the user should back up any necessary data and settings.
Most computers now come with Preboot eXecution Environment (PXE)–compliant firmware and a network adapter to support network boot, so PXE support is not necessarily an upgrade consideration.
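As a quick sketch of the hardware-compatibility check mentioned above, assuming the third-party psutil package is installed and using Windows 11’s commonly cited 4 GB RAM requirement purely as an example threshold:

```python
import platform

import psutil  # third-party package (pip install psutil) -- an assumption for this sketch

MIN_RAM_GB = 4  # example threshold (roughly the Windows 11 minimum)
ram_gb = psutil.virtual_memory().total / 2**30

print(f"CPU architecture: {platform.machine()}")
print(f"Logical cores:    {psutil.cpu_count()}")
print(f"Installed RAM:    {ram_gb:.1f} GB")
print("RAM meets the example minimum" if ram_gb >= MIN_RAM_GB
      else "RAM below the example minimum")
```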
A video game development company is purchasing upgraded laptops to develop cutting-edge graphics for a new story they have been marketing. They want to be able to integrate persistent system RAM. What type of operating system should they use for support?
a) Pro for Workstations
b) Home
c) Pro
d) Enterprise
a) Pro for Workstations
Windows Pro for Workstations has many of the same features as Pro but supports more maximum RAM and advanced hardware technologies, such as persistent system RAM (NVDIMM).
Windows Pro is designed for usage in small- and medium-sized businesses and can be obtained using original equipment manufacturer (OEM), retail, or volume licensing.
The Enterprise edition has several features not available in the Pro edition, such as support for Microsoft’s DirectAccess virtual private networking technology, AppLocker, and more.
The Windows Home edition is designed for domestic consumers and possibly small office home office (SOHO) business use.
To ensure the authenticity and authorization of a mobile app, a service provider issues a certificate to valid developers. Developers can use this certificate to sign their app and establish trust. Which of the following attributes of an app would most likely disqualify it from being considered trustworthy?
a) Duplicates the function of a VPN.
b) Duplicates the function of MDM.
c) Duplicates the function of IoT.
d) Duplicates the function of core OS apps.
d) Duplicates the function of core OS apps.
A mobile app that duplicates the function of core operating system (OS) apps would be at risk of not receiving trusted app status.
A virtual private network (VPN) is a secure tunnel created between two endpoints connected via an unsecured transport network. VPNs are not proprietary.
Mobile-device management (MDM) is a software tool for tracking, controlling, and securing an organization’s mobile infrastructure. MDMs are not proprietary.
Internet of Things (IoT) is a global network of personal devices, home appliances and control systems, and other items with network connectivity. An app could not duplicate IoT.
What type of data breach can be associated with a specific person or use an anonymized or de-identified data set for analysis and research?
a) Open-source license
b) Healthcare data
c) Personal government-issued information
d) PII
b) Healthcare data
While PII refers to any personal data that can identify a person, healthcare data is a specialized category of PII that includes not only identifying information but also sensitive health data. Healthcare data is the most appropriate answer because it directly involves both personal identification and sensitive health-related data, which is highly regulated and often anonymized for research.
The question mentions a breach that can “be associated with a specific person or use an anonymized or de-identified data set for analysis and research.” This is an important clue: Healthcare data is often anonymized or de-identified for research purposes to protect patient privacy while still allowing researchers to analyze trends and outcomes.
A manager for a Linux server team recently purchased new software that will help streamline operations, but they are worried about the high turnover of personnel in IT. The manager wants to ensure they can obtain updates, monitor and fix security issues, and receive technical assistance. What impact is the manager trying to mitigate?
a) Support
b) Licensing
c) Training
d) Network
a) Support
The core issue the manager is trying to address is the ability to keep maintaining and operating the new software in a dynamic environment where turnover is common. This requires access to ongoing vendor support to ensure smooth operations regardless of changes in the team.
The System Restore tool in Windows is used to roll back configuration changes to an earlier date or restore point. One option for creating restore points is to use Task Scheduler. What other actions will create a restore point? (Select all that apply.)
a) Rebooting
b) Updating an application
c) Deleting a file
d) Installing a program
b) Updating an application
d) Installing a program
Whenever an application or program is installed, a restore point is created.
A restore point is also created whenever an application or program is updated.
Deleting a file will not create a restore point. Likewise, when using System Restore to roll back to an earlier date, the user’s documents, pictures, and other data are not deleted. However, software and drivers installed after the restore point will be uninstalled.
A restore point is not created when a computer is rebooted, but Windows will create a restore point if one has not occurred in seven days.
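For reference, a restore point can also be created on demand (for example, from a Task Scheduler job); a minimal sketch that shells out to the built-in Checkpoint-Computer PowerShell cmdlet, which must run from an elevated session:

```python
import subprocess

# Create a restore point on demand (Windows; requires an elevated PowerShell session).
subprocess.run(
    [
        "powershell",
        "-Command",
        "Checkpoint-Computer -Description 'Manual checkpoint' "
        "-RestorePointType MODIFY_SETTINGS",
    ],
    check=True,
)
```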
What uses a 4-way handshake to allow a station to associate with an access point, authenticate its credential, and exchange a key to use for data encryption?
a) WPA3
b) TKIP
c) MFA
d) WPA2
d) WPA2
WPA2 (Wi-Fi Protected Access 2) uses a 4-way handshake as part of its process to secure wireless communications between a station (like a laptop or smartphone) and an access point (AP). The 4-way handshake is crucial for:
Association: It allows the client device (station) to associate with the access point.
Authentication: It ensures the station’s credentials are authenticated securely.
Key Exchange: It facilitates the exchange of a key for encrypting data (the Pairwise Transient Key, or PTK).
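For context, the key material exchanged during the 4-way handshake is derived from a pairwise master key (PMK); in WPA2-Personal that PMK comes from the passphrase and SSID, as this minimal sketch shows (the passphrase and SSID values are placeholders):

```python
import hashlib

# WPA2-Personal derives a 256-bit pairwise master key (PMK) from the passphrase and
# SSID with PBKDF2-HMAC-SHA1 (4096 iterations); the 4-way handshake then derives the
# per-session pairwise transient key (PTK) from this PMK plus nonces and MAC addresses.
passphrase = b"correct horse battery staple"  # placeholder
ssid = b"OfficeWiFi"                          # placeholder

pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(pmk.hex())
```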
Why not the other options?
a) WPA3: While WPA3 also uses a handshake for encryption and authentication, WPA3 uses a Simultaneous Authentication of Equals (SAE), which is different from the 4-way handshake used in WPA2. SAE is designed to be more secure and resistant to offline dictionary attacks.
b) TKIP (Temporal Key Integrity Protocol): TKIP is an older encryption protocol used in WPA and WPA2 (before WPA2 was fully adopted with AES encryption). While TKIP uses a handshake, it is not specifically associated with the 4-way handshake. Instead, TKIP is a legacy encryption standard that was replaced by AES in WPA2 for better security.
c) MFA (Multi-factor Authentication): MFA is a security process involving multiple methods of authentication (e.g., a password and a fingerprint). It is unrelated to the Wi-Fi protocol or the use of a 4-way handshake in wireless networking.
A marketing professional normally sends large files to other team members. The IT department recommended using a shared drive and assisted them in setting it up. The project was a very high priority, so the professional collaborated with several members but started receiving reports that some users could not always access the share while others could. They eventually figured out that only 20 people at a time seemed to be able to access it. What is causing the issue?
a) The file server was not properly configured.
b) The share was created on a Windows desktop.
c) DNS settings are intermittent.
d) The proxy settings are not properly configured on client machines.
b) The share was created on a Windows desktop.
The issue is most likely that the shared drive was created on a Windows desktop rather than on a dedicated server. Desktop editions of Windows (e.g., Windows 10 or Windows 11) are not designed to handle many concurrent connections and impose a limit on simultaneous file-sharing connections, typically around 20, which is why only 20 people at a time can access the share. This restriction does not apply to Windows Server editions, which are designed to handle many more connections.
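To check how close a sharing host is to that cap, the active SMB sessions can be counted on the machine hosting the share; a minimal sketch using the built-in Get-SmbSession PowerShell cmdlet (Windows 8/Server 2012 or later, elevated prompt):

```python
import subprocess

# Count active SMB sessions on the machine hosting the share (run locally, elevated).
result = subprocess.run(
    ["powershell", "-Command", "(Get-SmbSession | Measure-Object).Count"],
    capture_output=True,
    text=True,
    check=True,
)
print(f"Active SMB sessions: {result.stdout.strip()}")
```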
Why not the other options?
a) The file server was not properly configured: While it’s possible that a misconfiguration could cause access issues, the specific symptom of only 20 concurrent users being able to access the share suggests a connection limit rather than a server configuration issue. This is more indicative of a Windows desktop sharing limitation.
c) DNS settings are intermittent: DNS issues could cause connectivity problems, but the issue described (only 20 people at a time being able to access the share) suggests a limitation with the file sharing configuration itself, not intermittent name resolution problems.
d) The proxy settings are not properly configured on client machines: Proxy settings typically affect web traffic or external connections, not file sharing over the local network. The issue described is more related to the server’s ability to handle multiple file-sharing connections simultaneously.
A security researcher wants to install an older operating system for research and testing. Which installation medium, distributed on a disc, is the researcher most likely to use?
a) USB
b) Internet-Based
c) Internal hard drive
d) Optical Media
d) Optical Media
Optical Media (CD/DVD/Blu-ray discs): Older operating systems are often distributed on optical media, such as CDs or DVDs, especially if the operating system in question was released in the early 2000s or before. Many legacy operating systems (such as older versions of Windows, Linux, or other specialized OSes) were distributed on these physical media. If the researcher is working with older software or testing a system from a time when optical discs were the primary installation medium, optical media is the most common method.
Why not the other options?
a) USB: While USB drives have become a very common medium for installing modern operating systems, especially in recent years, older operating systems were often distributed on optical discs. USB drives are more commonly used today but were not as prevalent in the distribution of older OS versions, especially those from before the widespread use of USB booting in the late 2000s.
b) Internet-Based: Internet-based installation is a newer method used for modern operating systems (such as cloud-based operating systems or network-based installations). While internet-based installation has grown, older operating systems typically did not rely on this method. An internet-based installation would not be as typical for older OS versions that the researcher might be working with for testing.
c) Internal hard drive: Installing an operating system directly from an internal hard drive is not a common distribution method. It could be used for reinstallation or upgrades, but the OS would typically be copied onto the hard drive from another medium (like optical media, USB, or over a network). It is not the primary method of acquiring and installing an operating system.
After starting the computer and signing in, a user notices the desktop takes a long time to load. Evaluate the following Windows operating system problems to determine the one that best diagnoses what could be causing the slowness.
a) Corrupted user profile
b) Time drift
c) Invalid boot disk
d) Corrupted registry
a) Corrupted user profile
A corrupted user profile is one of the most common reasons for slow login times in Windows. When a user profile becomes corrupted, Windows might have trouble loading the settings, files, and configurations associated with that profile, causing delays in the startup process. This can manifest as slow loading of the desktop and other user-specific settings (e.g., taskbar icons, start menu). It can often be resolved by creating a new user profile or repairing the existing one.
Why not the other options?
b) Time drift: Time drift typically refers to the system clock drifting away from the correct time, usually due to a failing CMOS battery. While time drift can cause issues with certain applications, security certificates, and scheduled tasks, it does not typically result in a slow desktop loading experience after sign-in. Time drift is more of a background issue rather than something that would directly affect desktop performance.
c) Invalid boot disk: An invalid boot disk or boot configuration error would typically prevent Windows from starting properly or cause the system to display an error during boot. If Windows starts up normally and the issue only occurs after signing in and loading the desktop, an invalid boot disk is unlikely to be the cause. The system would likely fail to boot altogether if there were an invalid boot disk.
d) Corrupted registry: While a corrupted registry can lead to system instability or other issues, it typically causes problems that are more widespread, such as application crashes, system errors, or failure to launch certain services. While a corrupted registry could potentially cause delays, it is less likely to specifically cause the desktop to take a long time to load after signing in, compared to a corrupted user profile.
A security administrator for Linux systems in their demilitarized zone wants to ensure only some administrators can perform certain commands. Which of the following is best used to lock down certain commands?
a) rm
b) chown
c) sudo
d) chmod
c) sudo
sudo (Superuser Do) is the most appropriate tool for controlling access to specific commands for certain administrators. The sudo command allows administrators to execute specific commands with elevated privileges (as the root user) without giving them full root access. The security administrator can configure sudo to grant permission for only specific users to run certain commands: the sudoers file (/etc/sudoers) defines which users or groups can execute which commands. This allows for fine-grained control over system administration tasks. For example, the security administrator could restrict the ability to execute dangerous commands like rm (remove files) to only certain users, while allowing others to run less sensitive commands.
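As a quick illustration, an administrator can list which commands an account is permitted to run under the sudoers policy; a minimal sketch that shells out to sudo -l (the rules themselves live in /etc/sudoers, normally edited with visudo):

```python
import subprocess

# List the commands the current user may run via sudo, per the /etc/sudoers policy.
# (Depending on the policy, sudo -l may prompt for the user's password.)
result = subprocess.run(["sudo", "-l"], capture_output=True, text=True)
print(result.stdout or result.stderr)
```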
Why not the other options?
a) rm: The rm command is used to remove files and directories. While it can be restricted through permissions or access control lists (ACLs), it is not specifically designed for controlling access to commands. The command itself doesn’t provide a way to enforce permissions or control who can execute it.
b) chown: The chown command is used to change file ownership. While it can be used to change the ownership of files or directories, it is not designed for restricting command execution. chown doesn’t help in controlling which administrators can execute which specific commands.
d) chmod: The chmod command is used to change file permissions (read, write, execute) for files and directories. While it can control access to files or programs, it doesn’t provide a mechanism for controlling access to commands in the way that sudo does. For example, chmod could be used to prevent a user from executing a certain file, but it doesn’t provide the flexibility needed for controlling who can execute specific administrative commands.
An intern for a Windows server team is watching a server administrator verify the authenticity and integrity of an installer. Where did the administrator most likely get it from?
a) ISO
b) USB
c) Share drive
d) Internet download
d) Internet download
When the administrator is verifying the authenticity and integrity of an installer, they are likely checking the file against some sort of digital signature or hash to ensure that the file has not been tampered with during its transfer. In this case, the most common scenario is that the installer was downloaded from the internet, and verifying the authenticity and integrity is particularly important for files obtained this way.
Internet download: When downloading software or installers from the internet, especially from external websites, there is always a risk of manipulation or corruption of the file. To address this risk, administrators will typically verify the checksum (like MD5, SHA1, or SHA256) or digital signature of the file to ensure that it is the same as the file originally distributed by the vendor. Many official websites and software distributors provide a hash value or digital signature for the file to help users verify the integrity of the downloaded file.
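A minimal sketch of that integrity check, comparing a downloaded installer’s SHA-256 hash against the value published by the vendor (the file name and expected hash below are placeholders):

```python
import hashlib

EXPECTED_SHA256 = "0123abcd..."   # placeholder: copy the real value from the vendor's site
installer_path = "installer.msi"  # placeholder file name

# Hash the downloaded installer in chunks and compare to the published value.
sha256 = hashlib.sha256()
with open(installer_path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
print("Match" if digest == EXPECTED_SHA256 else f"MISMATCH: {digest}")
```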
Why not the other options?
a) ISO: While an ISO file can certainly be used to install software, it is typically a disk image containing multiple files. The verification process for an installer downloaded from an ISO image is less common than verifying a file downloaded from the internet. The ISO image itself may need to be verified, but this is a different process than verifying a single installer file.
b) USB: USB drives are commonly used for physical transfers, but verification of the installer on a USB drive is not inherently linked to checking its authenticity and integrity. Unless the USB drive itself was downloaded or copied from the internet, this wouldn’t be the most typical source for a file that would require integrity verification.
c) Share drive: A shared drive is typically used for internal file sharing within an organization, and files there are usually under the organization’s control. While verifying an installer in a shared drive might be necessary in some cases, it is less common than verifying files downloaded from the internet, as internal files are assumed to be safer and under more direct control.
A helpdesk professional assists a user with issues booting up their Mac computer. The user reports that there is no drive to boot from. Where will the computer boot from?
a) Terminal
b) Web
c) Force Quit
d) FileVault
b) Web
When a Mac computer is unable to find a bootable drive, macOS has a feature called Internet Recovery. This allows the Mac to boot from Apple’s servers over the internet and install macOS or perform other troubleshooting tasks. The Mac checks for a bootable internal drive first, but if it cannot find one (due to a missing or corrupted disk), it will automatically attempt to boot from the internet, provided the Mac has a network connection. This Internet Recovery mode downloads a minimal macOS installer over the internet.
Why not the other options?
a) Terminal: The Terminal is a command-line interface used for performing tasks within macOS. However, it is not a bootable environment. The Terminal would only be accessible once the computer has booted successfully into macOS or into a recovery environment, so it wouldn’t be used in this case when there’s no bootable drive.
c) Force Quit: The Force Quit option is used for terminating non-responsive applications in macOS. It has no role in booting the computer and would not be applicable if the computer is unable to boot at all.
d) FileVault: FileVault is a disk encryption feature used to secure the data on the hard drive. While it protects the contents of the disk, it does not impact the ability to boot the system. If FileVault is enabled and there’s a problem with the disk, the system may ask for the encryption password, but it won’t cause the “no drive to boot from” issue directly.
A threat actor uses a technique that allows devices to connect to an open authentication and then redirect the user’s browser to a fake captive portal that encourages the user to enter their network password. What is this technique?
a) Spoofing
b) Evil twin
c) Insider threat
d) Whaling
b) Evil twin
An Evil Twin attack is a type of man-in-the-middle attack in which a threat actor sets up a rogue wireless access point that mimics a legitimate wireless network. The attacker then allows devices to connect to this fake access point using an open authentication mechanism, and once connected, the attacker redirects the user’s browser to a fake captive portal. The key aspect of the Evil Twin attack is that the attacker’s access point appears to be a legitimate network, but it is actually designed to intercept user data.
Why not the other options?
a) Spoofing: Spoofing refers to the act of disguising as a legitimate entity to deceive others. While spoofing is part of an Evil Twin attack (because the rogue access point “spoofs” the legitimate network), the term spoofing is too broad and does not fully capture the specifics of this type of attack.
c) Insider threat: An Insider threat involves an attack or breach originating from within the organization, typically by an employee or someone with authorized access. The attack described here, where a threat actor sets up a rogue access point to deceive users, is typically done by an external attacker, not an insider.
d) Whaling: Whaling is a type of phishing attack aimed at high-profile targets, such as executives or key personnel, where the attacker typically uses email to deceive the victim into revealing sensitive information. The described attack (a rogue access point and fake captive portal) does not align with whaling, which is more related to phishing and social engineering via email.
A Windows client administrator plans to upgrade their OS in the current environment. What is one of the most important considerations for the upgrade?
a) Journaling
b) TPM 2.0
c) Dynamic Disks
d) User training
d) User training
User training is one of the most important upgrade considerations because it addresses the human element of the transition: users must be prepared for the interface and workflow changes the new version introduces, which is why it is often prioritized over purely technical specifications.
Why not the other options?
a) Journaling: While journaling is important for maintaining file system integrity, it’s more of a backend issue that IT professionals handle during the upgrade process. It is not something that directly impacts the user experience. It’s not a critical user-level consideration compared to user training.
b) TPM 2.0: While TPM 2.0 is indeed a key hardware requirement for Windows 11, it is more of a technical requirement that must be checked during system preparation for the upgrade. Once the hardware is confirmed, it’s largely a non-issue for end users. User training, however, addresses the human element of the transition, which is why it’s often prioritized over the technical specifications in an environment where many users need to be educated on the upgrade.
c) Dynamic Disks: The role of dynamic disks is usually an internal system consideration. While they may need to be properly managed, this is more of an administrative task for IT, not something that users need to worry about. The focus here would be on the seamless upgrade of systems, rather than user interaction with dynamic disks.
A new employee calls the help desk because their phone will not connect to the office Wi-Fi. When the technician asks about the phone model, the employee says it is an iPhone 5. The technician immediately knows the problem. Which of the following could be the problem?
a) Configuration
b) Signal strength
c) Throttling
d) Interference
a) Configuration
The iPhone 5 is an older model, and one of the most likely issues when it fails to connect to modern office Wi-Fi networks is a configuration problem, specifically related to the Wi-Fi standards it supports. The iPhone 5 supports only Wi-Fi 4 (802.11n) and earlier standards, but most modern office Wi-Fi networks, especially those that are newer, use Wi-Fi 5 (802.11ac) or even Wi-Fi 6 (802.11ax). These newer standards provide better performance, security, and efficiency but may have compatibility issues with older devices like the iPhone 5.
Why not the other options?
b) Signal strength: While weak signal strength can certainly cause connectivity issues, it typically wouldn’t be the primary reason for a device like the iPhone 5 to fail to connect, unless the phone is extremely far from the access point. Additionally, a weak signal wouldn’t likely lead to an immediate diagnosis based on the model of the phone. The technician would be more focused on the device’s configuration, not just the signal strength.
c) Throttling: Throttling refers to limiting the speed or bandwidth of a network connection, usually to prevent congestion or to enforce policies. Throttling would not cause a connection failure, especially when the device is not connecting at all. This would only affect performance if the device is able to connect, which isn’t the case here.
d) Interference: Interference from other devices or networks could cause a connection to be slow or unstable, but it wouldn’t prevent the iPhone 5 from connecting altogether. Interference is typically a performance issue once the connection is established, not a problem that would prevent the phone from connecting to the Wi-Fi network in the first place.