CCNA exam Flashcards

1
Q

Which of the following DTP modes actively negotiates a trunk connection with a neighboring
interface?
A.
desirable
B.
off
C.
auto
D.
on

A

Answer: A
Explanation:
Dynamic Trunking Protocol (DTP) desirable mode actively negotiates a trunk connection with a
neighboring interface. There are two dynamic modes of operation for a switch port:
* auto – operates in access mode unless the neighboring interface actively negotiates to operate as a trunk
* desirable – operates in access mode unless it can actively negotiate a trunk connection with a neighboring interface
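A minimal sketch of enabling the desirable mode on a switch port follows; the interface number is arbitrary and used only for illustration:
SwitchA(config)#interface gigabitethernet 0/1
! gigabitethernet 0/1 is a hypothetical interface used for illustration
SwitchA(config-if)#switchport mode dynamic desirable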

2
Q

You want to create a user account named oson with the password eX$1mM@x on a router. The
password should be converted to an MD5 hash and stored on the router.
Which of the following commands should you issue on the router?
A.
username oson secret 5 eX$1mM@x
B.
username oson secret eX$1mM@x
C.
username oson eX$1mM@x
D.
username oson password eX$1mM@x

A

Answer: B
Explanation:
To create a user account named oson with a Message Digest 5 (MD5)-hashed password of
eX$1mM@x, you should issue the username oson secret eX$1mM@x command on the router.
The username command creates a new user and adds the user to the local user database on a
router. The local user database on a router contains a list of users that have been added to the
router; these users can access the router. When using the username command to create a new
user on a router, you can configure the user’s password to be stored as plain text or as an MD5
hash. To configure a user name with a plain-text password, you should use the username username password password command. Using the secret keyword instead of the password
keyword ensures that the password is stored as an MD5 hash. Thus the command username
oson secret eX$1mM@x creates a user named oson and stores the password as an MD5 hash
value. In the output of the show running-config command, the hash value of the password rather
than the actual password would be displayed, similar to the following:
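A hedged illustration of that output follows; the salt and hash characters shown are placeholders, not the value the router would actually generate for this password:
Router#show running-config | include username
username oson secret 5 $1$abcd$0123456789abcdefghijkl
! the hash above is a placeholder for illustration only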

3
Q

Which of the following combinations represents a single-factor authentication method?
A.
a password and a PIN
B.
a smart card, a password, and a PIN
C.
a password, a fingerprint, and a smart card
D.
a fingerprint, a retina scan, and a password

A

Answer: A
Explanation:
Of the available options, the combination of a password and a personal identification number (PIN)
represents a single-factor authentication method. A single-factor authentication method refers to
the use of only one of the three common methods to verify a user’s identity. The three
authentication factors are something you know, something you have, and something you are. A
password and a PIN are knowledge factor access control methods, which are pieces of
information that you know. Because a password and a PIN are both something you know, when
the two are used in combination with each other they represent a single-factor authentication
method.
Two-factor, or dual-factor, authentication refers to th

4
Q

Which of the following APIs are typically used to enable communication between an SDN
controller and the application plane? (Choose two.)
A.
OpenFlow
B.
OnePK
C.
OpFlex
D.
OSGi
E.
NETCONF
F.
REST

A

Answer: D,F
Explanation:
Of the available choices, only Representational State Transfer (REST) and Java Open Services
Gateway initiative (OSGi) are Application Programming Interfaces (APIs) that are typically used to
enable communication between a Software-Defined Networking (SDN) controller and the
application plane. SDN is an intelligent network architecture in which a software controller
assumes the control plane functionality for all network devices. A northbound API, which is
sometimes called a northbound interface (NBI), enables an SDN controller to communicate with
applications in the application plane.
REST is a northbound API architecture that uses Hypertext Transfer Protocol (HTTP) or HTTP
Secure (HTTPS) to enable external resources to access and make use of programmatic methods
that are exposed by the API. REST APIs typically return data in either Extensible Markup
Language (XML) or JavaScript Object Notation (JSON) format.
OSGi is a Java-based northbound API framework that is intended to enable the development of
modular programs. OSGi also allows the use of the Python programming language as a means of
extended controller functions. For transport, OSGi deployments often rely on HTTP.
A southbound API, which is sometimes called a southbound interface (SBI), enables an SDN
controller to communicate with devices on the network data plane. NETCONF, OnePK, OpenFlow,
and OpFlex are all examples of southbound APIs.
NETCONF uses Extensible Markup Language (XML) and Remote Procedure Calls (RPCs) to
configure network devices. XML is used for both data encoding and protocol messages.
NETCONF typically relies on Secure Shell (SSH) for transport.
OpFlex uses a declarative SDN model in which the instructions that are sent to the controller are
not so detailed. The controller allows the devices in the data plane to make more network
decisions about how to implement the policy.
OpenFlow uses an imperative SDN model in which detailed instructions are sent to the SDN
controller when a new policy is to be configured. The SDN controller manages both the network
and the policies applied to the devices.
The OnePK API is a Cisco-proprietary API. It uses Java, C, or Python to configure network
devices. It can use either Secure Sockets Layer (SSL) or Transport Layer Security (TLS) to
encrypt data in transit.
Reference: https://www.cisco.com/c/en/us/td/do

5
Q

You are configuring security on a new WLAN by using the WLC GUI.
Which of the following security settings are you most likely to configure by using the Layer 3
Security drop-down list box on the Layer 3 tab?
A.
VPN Pass-Through
B.
Web Passthrough
C.
WPA+WPA2
D.
Web Authentication

A

Answer: A
Explanation:
When you are configuring a new wireless local area network (WLAN), you are most likely to
configure the VPN Pass-Through setting by using the Layer 3 Security drop-down list box on the
Layer 3 tab of the Cisco Wireless LAN Controller (WLC) graphical user interface (GUI). There are
two types of WLANs that you can configure by using the WLC GUI: a WLAN and a Guest LAN.
The VPN Pass-Through setting is only available when you are configuring a WLAN.
When you configure a new WLAN by using the WLC GUI, you can configure security settings by
clicking the new WLAN’s Security tab. By default, the Layer 2 tab is selected when you click the
Security tab. However, it is not possible to configure Layer 2 security on a Guest LAN.
On the Layer 2 tab of the Security tab, you can select one of the following Layer 2 wireless
security features from the Layer 2 Security drop-down list box:
* None, which disables Layer 2 security and allows open authentication to the WLAN
* WPA+WPA2, which enables Layer 2 security by using Wi-Fi Protected Access (WPA) or the
more secure WPA2
* 802.1X, which enables Layer 2 security by using Extensible Authentication Protocol (EAP)
authentication combined with a dynamic Wired Equivalent Privacy (WEP) key
* Static WEP, which enables Layer 2 security by using a static shared WEP key
* Static WEP + 802.1X, which enables Layer 2 security by using either a static shared WEP key or
EAP authentication
* CKIP, which enables Layer 2 security by using the Cisco Key Integrity Protocol (CKIP)
* None + EAP Passthrough, which enables Layer 2 security by using open authentication
combined with remote EAP authentication
There are two different sets of Layer 3 security features that you can configure on a Cisco WLC:
one set for a WLAN and one set for a Guest LAN. Depending on which type of WLAN you create
and which Layer 2 security options you have selected, you can select one of the following Layer 3
wireless security features from the Layer 3 Security drop-down list box on the Layer 3 tab of the
Security tab in the WLC GUI:
* None, which disables Layer 3 security no matter which Layer 2 security option is configured and
regardless of whether you are configuring a WLAN or a Guest LAN
* IPSec, which enables Layer 3 security for WLANs by using Internet Protocol Security (IPSec)
* VPN Pass-Through, which enables Layer 3 security for WLANs by allowing a client to establish a
connection with a specific virtual private network (VPN) server
* Web Authentication, which enables Layer 3 security for Guest LANs by prompting for a user
name and password when a client connects
* Web Passthrough, which enables direct access to the network for Guest LANs without prompting
for a user name and password
Not every Layer 3 security mechanism is compatible with every Layer 2 security mechanism. It is
therefore important to first configure Layer 2 security options before you attempt to configure Layer
3 security options.

6
Q

You issue the ip ospf network non-broadcast command on an interface.
Which of the following statements is correct regarding how OSPF operates on the interface?
A.
Multicast updates are sent.
B.
DR and BDR elections are not performed.
C.
The Hello timer is set to 10 seconds, and the dead timer is set to 40 seconds.
D.
The neighbor command is required to establish adjacencies.

A

Answer: D
Explanation:
The neighbor command is required to establish adjacencies on Open Shortest Path First (OSPF)
nonbroadcast networks. There are five OSPF network types:
* Broadcast
* Nonbroadcast
* Point-to-point
* Point-to-multipoint broadcast
* Point-to-multipoint nonbroadcast
Nonbroadcast and point-to-multipoint nonbroadcast networks do not allow multicast packets. To
configure OSPF to send unicast updates, you must configure neighbor routers with the neighbor
command. Broadcast, point-to-point, and point-to-multipoint broadcast networks allow multicast
packets, so manual configuration of neighbor routers with the neighbor command is not required.
On broadcast networks, designated router (DR) and backup designated router (BDR) elections are
performed. By default, the Hello timer is set to 10 seconds and the dead timer is set to 40
seconds. To configure an OSPF broadcast network, you should issue the ip ospf network
broadcast command. The OSPF broadcast network type is enabled by default on Fiber
Distributed Data Interface (FDDI) and Ethernet interfaces, including Fast Ethernet and Gigabit
Ethernet interfaces.
On nonbroadcast networks, DR and BDR elections are performed. By default, the Hello timer is
set to 30 seconds and the dead timer is set to 120 seconds. To configure an OSPF nonbroadcast
network, which is also called a nonbroadcast multiaccess (NBMA) network, you should issue the
ip ospf network non-broadcast command.
On point-to-point networks, DR and BDR elections are not performed. By default, the Hello timer is
set to 10 seconds and the dead timer is set to 40 seconds. To configure an OSPF point-to-point
network, you should issue the ip ospf network point-to-point command. The OSPF point-to-point
network type is enabled by default on High-Level Data Link Control (HDLC) and Point-to-Point Protocol (PPP) serial interfaces.
OSPF point-to-multipoint broadcast networks operate just like OSPF point-to-point networks
except the Hello timer is set to 30 seconds and the dead timer is set to 120 seconds by default. To
configure an OSPF point-to-multipoint broadcast network, you should issue the ip ospf network
point-to-multipoint command.
OSPF point-to-multipoint nonbroadcast networks operate just like OSPF point-to-multipoint
broadcast networks except that multicasts cannot be sent; therefore, manual configuration of
neighbor routers with the neighbor command is required so that OSPF sends unicast updates. To
configure an OSPF point-to-multipoint nonbroadcast network, you should issue the ip ospf
network point-to-multipoint non-broadcast command.
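A brief configuration sketch of the nonbroadcast case follows, assuming hypothetical addressing on a serial interface; the neighbor command supplies the unicast address of the NBMA peer:
RouterA(config)#interface serial 0/0
! the 10.1.1.0/24 addressing and the peer address 10.1.1.2 are hypothetical
RouterA(config-if)#ip address 10.1.1.1 255.255.255.0
RouterA(config-if)#ip ospf network non-broadcast
RouterA(config-if)#exit
RouterA(config)#router ospf 1
RouterA(config-router)#network 10.1.1.0 0.0.0.255 area 0
RouterA(config-router)#neighbor 10.1.1.2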

7
Q

An administrator has generated the following MD5 hash from a plain-text password:
$1$cf6N$Ugo.y0CXMLffTfQtyO/Xt.

A

Answer: D
Explanation:
The administrator should issue the enable secret 5 $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt.
command to configure the Message Digest 5 (MD5) hash generated from a plain-text password so
that it will be used to access enable mode on a Cisco router in this scenario. The no service
password-encryption command has been issued in this scenario. This command disables the
automatic encryption of new passwords when they are created by an administrator. If the service
password-encryption command had been issued in this scenario, all current and future
passwords in the running configuration would be encrypted automatically. Thus, of the available
choices, the enable secret 5 $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt. command is the only option in
this scenario that enables the administrator to store a previously encrypted password that allows
access to enable mode on a Cisco router.
In some Cisco IOS versions prior to 15.3(3), the enable secret command by default stores an
encrypted password in the device’s configuration file by using a Secure Hash Algorithm (SHA)
256-bit hash. As of Cisco IOS 15.3(3), Type 4 passwords have been deprecated because of a
security flaw in their implementation. The syntax for the enable secret command is enable secret
[level level] {password | [encryption-type] encrypted-password}, where password is a string of
characters that represents the clear-text password. Instead of supplying a clear-text password,
you can specify an encryption-type value of 0, 4, or 5 and an encrypted-password value of either a
clear-text password, a SHA-256 hash, or an MD5 hash, respectively. Supplying a hash value
requires that you have previously encrypted the value by using a hashing algorithm in the same
fashion that IOS uses the algorithm. This command configures a password that is required in order
to place the device into enable mode, which is also known as privileged EXEC mode. The device
must, at a minimum, be placed into enable mode for the user to be able to display the running
configuration.
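As a hedged sketch of applying and verifying the command, the generic Router prompt and the show output line below reflect typical IOS behavior rather than output captured from this scenario:
Router(config)#enable secret 5 $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt.
Router(config)#end
Router#show running-config | include enable secret
enable secret 5 $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt.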
The administrator should not issue the enable secret 0 $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt.
command in this scenario. Specifying an encryption-type value of 0 when you issue the enable
secret command indicates that the string following the command is in clear-text format, not
encrypted format. Because the router assumes the string is a clear-text password and the length
of the hash is greater than 25 characters, issuing the enable secret 0
$1$cf6N$Ugo.y0CXMLffTfQtyO/Xt. command would cause the router to generate an error similar
to the following:
% Invalid Password length - must contain 1 to 25 characters. Password configuration failed
If the already encrypted $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt. string was shorter than 25 characters,
the command would encrypt that string and require anyone who is attempting to access enable
mode to issue $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt. as the password instead of the original
unencrypted value that the MD5 hash $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt. represents.
The administrator should not issue the enable password 5 $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt.
command in this scenario. You can issue the enable password command to create a password
that must be used to gain access to enable mode. The syntax of the enable password command
is enable password [level level] {password | [encryption-type] encrypted-password}. The enable
password command supports the encryption-type values of 0 and 7, not 5. The encryption-type
value of 0 indicates that a clear-text password of 1 to 25 characters will follow. The MD5 hash in
this scenario is longer than 25 characters. An encryption-type value of 7 indicates that a hidden
password consisting of a Cisco-proprietary form of encryption will follow. Issuing the enable
password 5 $1$cf6N$Ugo.y0CXMLffTfQtyO/Xt. command would result in the following error:

8
Q

You issue the following commands on SwitchA:
SwitchA(config)#interface port-channel 1
SwitchA(config-if)#interface range fastethernet 0/5 - 6
SwitchA(config-if-range)#channel-protocol lacp
SwitchA(config-if-range)#channel-group 1 mode on
You then issue the following commands on SwitchB:
SwitchB(config)#interface port-channel 1
SwitchB(config-if)#interface range fastethernet 0/5 - 6
SwitchB(config-if-range)#channel-protocol pagp
SwitchB(config-if-range)#channel-group 1 mode on
Which of the following statements is true about the resulting EtherChannel link between SwitchA
and SwitchB?
A.
No link is formed.
B.
A link is formed using LACP because it was configured first and has priority.
C.
A link is formed without an aggregation protocol.
D.
A link is formed using PAgP because it was configured last and has priority

A

Answer: A
Explanation:
An EtherChannel link is not formed in this scenario. EtherChannel is used to bundle two or more
identical, physical interfaces into a single logical link between switches. An EtherChannel can be
permanently established between switches, or it can be negotiated by using one of two
aggregation protocols: the Cisco-proprietary Port Aggregation Protocol (PAgP) or the open-standard
Institute of Electrical and Electronics Engineers (IEEE) 802.3ad protocol, which is also
known as Link Aggregation Control Protocol (LACP). An EtherChannel can have up to eight active
switch ports in the bundle that forms the logical link between switches. Every switch port in the
bundle, which is also referred to as a channel group, must be configured with the same speed and
duplex settings.
To configure a switch port to use an aggregation protocol, you should use the channel-protocol {
lacp | pagp} command. The EtherChannel aggregation protocol must match on each switch, or
they will be unable to dynamically establish an EtherChannel link between them. In addition, if a
channel protocol is explicitly configured, each local switch port in the EtherChannel bundle must
be configured to operate in a mode that is compatible with the channel protocol or the switch will
display an error message and refuse to bundle the offending interface. In this scenario, the
channel-protocol command on SwitchA specifies that LACP should be used to dynamically
establish an EtherChannel; however, the channel-group command attempts to configure an
incompatible operating mode. Because the channel-group command cannot override the
configuration specified by the channel-protocol command, the channel-group command issued
on SwitchA will produce an error message similar to the following sample output:
Command rejected (Channel protocol mismatch for interface Fa0/5 in group 1): the interface can
not be added to the channel group
% Range command terminated because it failed on FastEthernet0/5
To configure a switch port to be a member of a particular channel group, you should issue the
channel-group number mode {on | active | passive | {auto | desirable} [non-silent]} command.
This command uses a number parameter to specify a particular channel group; the number value
should correspond to the PortChannel interface being configured. The supported values for the
number parameter vary depending on hardware platform and IOS revision.
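For contrast, a minimal working LACP configuration is sketched below; it reuses the interface range from the scenario and assumes both switches are set to LACP with the compatible active mode:
SwitchA(config)#interface range fastethernet 0/5 - 6
SwitchA(config-if-range)#channel-protocol lacp
SwitchA(config-if-range)#channel-group 1 mode active
SwitchB(config)#interface range fastethernet 0/5 - 6
SwitchB(config-if-range)#channel-protocol lacp
SwitchB(config-if-range)#channel-group 1 mode active
With active configured on both ends (or active on one end and passive on the other), LACP can negotiate the bundle.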
The following table displays the channel-group configurations that will establish an EtherChannel:

9
Q

You are connecting two Catalyst 6500 switches with fiber-optic cable. When you boot SwitchA,
you receive a SYS-3-TRANSCEIVER_NOTAPPROVED error.
Which of the following is most likely the cause of the problem?
A.
There is a physical problem with the fiber cable.
B.
You have installed the SFP module upside down.
C.
You have connected a cable to an incorrect port.
D.
You have installed a third-party SFP module.

A

Answer: D
Explanation:
You have most likely installed a third-party Small Form-Factor Pluggable (SFP) transceiver
module in SwitchA if you receive a SYS-3-TRANSCEIVER_NOTAPPROVED error when you boot
SwitchA. An SFP module is a hot-pluggable device that enables a switch, router, or other device to
accept connections from Fibre Channel (FC) or Gigabit Ethernet cables. Cisco devices do not
support the use of third-party SFP modules.
An SFP module that is installed in a Cisco device stores identifying information, such as the
module serial number, vendor name, and security code. When a switch detects the insertion of an
SFP module, the switch software attempts to read the identifying information stored on the SFP
module. If the information is not valid or not present, the switch software will report the SYS-3-
TRANSCEIVER_NOTAPPROVED error.
The switch would not report a SYS-3-TRANSCEIVER_NOTAPPROVED error if you had
connected a cable to an incorrect port. If you connected a cable to the wrong SFP module port,
you would most likely notice that the ports on the switches are up, but the line protocol is down.
The switch would not report a SYS-3-TRANSCEIVER_NOTAPPROVED error if there were a
physical problem with the fiber cable. If the fiber cable were broken, you would notice that the port
status light-emitting diodes (LEDs) on the SFP modules are not lit.
The switch would not report a SYS-3-TRANSCEIVER_NOTAPPROVED error if you had installed
the SFP module upside down. Instead, the switch would not recognize the SFP module, and the
output from show commands would contain no information about the module.
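As a hedged follow-up, the following commands can help confirm what the switch has read from an installed transceiver; exact output varies by platform and is not reproduced here:
SwitchA#show inventory
! lists product IDs and serial numbers for installed modules, including SFPs
SwitchA#show interfaces status
! shows port status and the media type detected by the switch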

10
Q

Which of the following best describes an AP deployment that connects APs to a WLC that is
housed within a switch stack?

A

Answer: A
Explanation:
Of the available choices, an embedded access point (AP) deployment typically connects APs to a
Cisco wireless LAN controller (WLC) that is housed within a switch stack. An AP is a device that
connects a wireless client to a wired network. The primary difference between this deployment and
others is that the WLC is embedded within a stack of switching hardware instead of existing as a
separate entity. APs can connect to the WLC by connecting to switches that are directly hosting
the WLC or switch ports that are operating on the same virtual local area network (VLAN) as the
WLC.
A lightweight AP deployment can be an embedded AP deployment. However, a lightweight AP
deployment does not always connect APs to a WLC that is housed within a switch stack. A
lightweight AP deployment requires a separate wireless controller. Wireless clients connect to
lightweight APs, which are capable of performing real-time wireless network functions but rely on a
WLC for management functions. The connection between a lightweight AP and a WLC is created
by using two tunnels established by the Control and Provisioning of Wireless Access Points
(CAPWAP) tunneling protocol. Information sent between lightweight APs and the WLC is
encapsulated in Internet Protocol (IP) packets. This process enables a lightweight AP and WLC to
manage connectivity to the same wireless local area network (WLAN) yet be separated by both
physical and logical means. This type of deployment is also known as a split-MAC architecture
because the lightweight AP handles the frames while the WLC handles the management
functions.
An autonomous AP deployment does not connect APs to a WLC that is housed within a switch
stack. An autonomous AP contains network interfaces for both wireless and wired networks; it is
typically deployed as part of an autonomous AP architecture in which APs are connected directly
to the access layer of the three-tier hierarchical network model.
A cloud-based AP deployment does not connect APs to a WLC that is housed within a switch
stack. Instead, cloud-based APs connect to and are automatically configured by a WLC that is
housed in a cloud-based system. For example, a Cisco Meraki AP provides wireless access by
connecting to a centralized management system known as the Cisco Meraki Cloud. APs deployed
at the access layer of the three-tier hierarchical network model contact the cloud in order to

11
Q

You are implementing common Layer 2 security measures on a Cisco switch. You create a new
VLAN with an ID of 4. No devices operate on VLAN 4. Next, you issue the following commands on
a switch interface:
switchport access vlan 4
switchport nonegotiate
Which of the following Layer 2 security measures are you implementing? (Choose two.)
A.
configuring the port mode manually
B.
disabling DTP on a port
C.
enabling port security on an access port
D.
moving the port to an unused VLAN
E.
disabling an unused port

A

Answer: B,D
Explanation:
You are disabling Dynamic Trunking Protocol (DTP) on a port when you issue the switchport
nonegotiate command while you are implementing common Layer 2 security measures on a
Cisco switch. In addition, you are moving the port to an unused virtual local area network (VLAN)
by issuing the switchport access vlan 4 command. By default, every network interface on a
Cisco switch is an active port. Before you deploy a switch on a network, you should take steps to
ensure that every trunk port and access port on the switch is secured and that every unused port
on the switch is disabled.
By default, all interfaces on a Cisco switch will use DTP to automatically negotiate whether an
interface should be a trunk port or an access port. The transmission of DTP packets over an
interface can be exploited by a malicious user to obtain information about the network or to
convert an interface that should be an access port into a trunked port. You should issue the
switchport nonegotiate command on a manually configured port to prevent any attempts by the
switch to negotiate by using DTP.
Moving an unused port to an unused VLAN creates a logical barrier that prevents rogue devices
from communicating on the network should such a device connect to the port. To move an access
port to an unused VLAN, you should issue the switchport access vlan vlan-id command on the
port, where vlan-id is the ID of the unused VLAN. When you move an unused port to an unused
VLAN, you should also manually configure the port as an access port by issuing the switchport
mode access command and shut down the port by issuing the shutdown command.
You are not configuring the port mode manually by issuing the commands in this scenario. To
manually configure a trunk port, you should first issue the switchport trunk encapsulation
protocol command in interface configuration mode, where protocol is the trunk encapsulation
protocol you want to use, and then issue the switchport mode trunk command in interface
configuration mode. To manually configure an access port, you should issue the switchport
mode access command in interface configuration mode. Manually configuring interfaces to use
either trunk mode or access mode effectively disables DTP and ensures that the traffic on those
ports is restricted to the intended purpose. Even so, you should issue the switchport nonegotiate
command on a manually configured trunk port to prevent any attempts by the switch to negotiate
by using DTP, because a manually configured trunk port will continue to send DTP frames.
You are not disabling an unused port by issuing the commands in this scenario. Disabling an
unused port creates a barrier that prevents rogue devices from communicating on the network
should such a device connect to the port. To disable an unused port on a switch, you should issue
the shutdown command on that port. To verify that a port is in the shutdown state, you should
issue the show interfaces type number command, where type and number specify the interface
you want to show. A port that has been shut down will be reported as administratively down by the
show interfaces type number command.
You are not enabling port security on an access port by issuing the commands in this scenario. To
protect switch interfaces against Media Access Control (MAC) flooding attacks, you should enable
port security on all access mode interfaces on the switch. Issuing the switchport port-security
command in interface configuration mode enables port security with default settings. You can
modify port security settings before you enable port security by issuing the switchport port-security
mac-address mac-address command, the switchport port-security maximum
maximum-number-of-mac-addresses command, and the switchport port-security violation
[protect | restrict | shutdown] command.
When enabled with its default settings, port security will shut down a port on which a violation
occurs. In addition, port security will allow only the first MAC address to connect to the port to
access the port.
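A minimal sketch that combines these measures on one unused port follows; the interface number and VLAN name are arbitrary illustrations:
Switch(config)#vlan 4
Switch(config-vlan)#name UNUSED
Switch(config-vlan)#exit
Switch(config)#interface gigabitethernet 0/10
! gigabitethernet 0/10 represents an unused port in this sketch
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 4
Switch(config-if)#switchport nonegotiate
Switch(config-if)#shutdown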

12
Q

You have enabled LAG on a WLC that contains eight distribution system ports.
How many ports will be included in the LAG bundle by default?
A.
eight
B.
one
C.
four
D.
none

A

Answer: A
Explanation:
By default, all eight ports will be included in the link aggregation (LAG) bundle if you have enabled
LAG on a Cisco wireless LAN controller (WLC) that contains eight distribution system ports. A
distribution system port is a data port that typically connects to a switch in Institute of Electrical
and Electronics Engineers (IEEE) 802.1Q trunk mode. Similar to EtherChannel on switches, LAG
enables multiple physical ports on a WLC to operate as one logical group. Thus, LAG enables
load balancing across links between devices and redundancy. If one link fails, the other links in the
LAG bundle will continue to function.
LAG will bundle all eight ports in this scenario. However, LAG requires only one functional physical
port in order to pass client traffic. Similar to EtherChannel, LAG enables redundancy. If one
physical port fails in a LAG bundle, the other ports are capable of passing client traffic in that port’s
place. If all but one port in a LAG bundle fails, that port will pass client traffic for all of the failed
ports.
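On a WLC that runs AireOS, LAG is typically enabled from the controller CLI and then verified as sketched below; this assumes an AireOS-style prompt and command set, and the controller generally must be rebooted before the change takes effect:
(Cisco Controller) >config lag enable
(Cisco Controller) >show lag summary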
Distribution system ports can be configured to work in pairs or independently of each other if LAG
is disabled. By default, a Cisco WLC's distribution system ports operate in 802.1Q trunk mode,
forming a trunk link between each WLC distribution system port and the switch to which it is
connected. When enabled, LAG modifies this configuration so that the ports are bundled and no

13
Q

Which of the following tables is used by a switch to discover the relationship between the Layer 2
address of a device and the physical port used to reach the device?
A.
the adjacency table
B.
the ARP table
C.
the VLAN table
D.
the FIB table
E.
the CAM table

A

Answer: E
Explanation:
The Content Addressable Memory (CAM) table is used by a switch to discover the relationship
between the Open Systems Interconnection (OSI) Layer 2 address of a device and the physical
port used to reach the device. Switches make forwarding decisions based on the destination MAC
address contained in a frame’s header. The switch first searches the CAM table for an entry that
matches the frame’s destination MAC address. If the frame’s destination MAC address is not
found in the table, the switch forwards the frame to all its ports, except the port from which it
received the frame. If the destination MAC address is found in the table, the switch forwards the
frame to the appropriate port. The source MAC address is also recorded if it did not previously
exist in the CAM table.
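The learned address-to-port mappings can be inspected directly; as a brief sketch, on most modern Cisco IOS switches the CAM table is displayed with the show mac address-table command (show mac-address-table on some older platforms):
Switch#show mac address-table dynamic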
The Forwarding Information Base (FIB) is a table that contains all the prefixes from the Internet
Protocol (IP) routing table and is structured in a way that is optimized for forwarding. The FIB and
the adjacency table are the two main components of Cisco Express Forwarding (CEF), which is a
hardware-based switching method that is implemented in all OSI Layer 3-capable Catalyst
switches. The FIB is synchronized with the IP routing table and therefore contains an entry for
every IP prefix in the routing table. The IP prefixes are ordered so that when a Layer 3 address is
compared against the FIB, the longest, most specific match will be found first; therefore, prefix
lookup times are minimized.
The adjacency table maintains the Layer 2 addressing information for the FIB. Each network prefix
in the FIB is associated with a next-hop address and an outbound interface. The adjacency table
contains the Layer 2 addressing information for each next-hop address listed in the FIB and is
used to rewrite the Layer 2 header of each forwarded IP packet. You can issue the show
adjacency command to display the contents of the adjacency table.
The Address Resolution Protocol (ARP) table contains Layer 3 to Layer 2 address translations.
Whenever the switch encounters a packet destined for a Layer 3 address that does not have an
entry in the ARP table, the switch broadcasts an ARP request to query the network for the Layer 2
address. When the ARP reply is received, the switch enters the address pair into the ARP table for
future reference. You can issue the show ip arp command to display the contents of the ARP
table.
The virtual local area network (VLAN) table contains a record of the VLAN definitions on the switch
and a list of the interfaces associated with each VLAN. The VLAN table does not contain any
Layer 3 information. You can issue the show vlan command to display the contents of the VLAN
table.
Reference: https://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/71079-arp-cam-tableissues.html#backinfo
CCNA 200-301 Official Cert Guide, Volume 1, Chapter 5: Analyzing Ethernet LAN Switching, Learning MAC Addresses

14
Q

Which of the following statements is true regarding a floating static route?
A.
A floating static route is used to provide link redundancy.
B.
A floating static route is used to provide link load balancing.
C.
A router always prefers a floating static route to a dynamically learned route.
D.
A floating static route has a lower AD than a normal static route.

A

Answer: A
Explanation:
A floating static route is used to provide link redundancy. When multiple routes to a network exist
and a more specific route is not available, a router will choose the route with the lowest
administrative distance (AD). Because a normal static route has a default AD of 1, a router will
always prefer a normal static route over any other type of route. You can manually assign a static
route a higher AD than 1 to prevent a router from always choosing the normal static route as the
best path to a destination network. By assigning a floating static route a higher AD than another
route, you are able to create a static route that will be used only when routes with a lower AD are
no longer available. For example, if a router’s primary path to a remote office is a dynamically
learned route and a floating static route with a higher AD is configured to use a specified exit
interface as a backup path, the router will use only the primary route to reach the remote office.
The dynamically learned route is preferred over the floating static route because the floating static
route has a higher AD than the dynamically learned route. However, if the dynamically learned
route becomes unavailable, the router will search its routing table for an available path with the
lowest AD. In this example, the router will use the floating static route to forward packets destined
to the remote office to the exit interface specified in the floating static route when the dynamically
learned route becomes unavailable.
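A brief sketch of such a floating static route follows; the destination network and exit interface are hypothetical, and an administrative distance of 200 is used only because it exceeds common dynamic routing protocol ADs, such as OSPF's default of 110:
Router(config)#ip route 10.20.0.0 255.255.255.0 serial 0/1 200
! 10.20.0.0/24 and serial 0/1 are illustrative values; 200 is the floating AD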
A router will not always prefer a floating static route to a dynamically learned route. Because an
administrator can arbitrarily assign an AD to a floating static route, a router will prefer a floating
static route only if it has a lower AD than a dynamically learned route to the same destination
network. Likewise, a router will not always prefer a dynamically learned route to a floating static
route unless the dynamically learned route has an AD lower than a floating static route to the
same destination network.
A floating static route is not used for link load balancing. Load balancing is possible if multiple
paths to a destination network exist with equal ADs and if cost values exist. Because a floating
static route has a higher AD than the primary path to a destination network, a router will not use a
floating static route unless the primary path becomes unavailable.

15
Q

Which of the ports on SwitchA will use PortFast?
A.
all access ports
B.
all ports
C.
no ports, because PortFast cannot be enabled globally
D.
all trunk ports

A

Answer: A
Explanation:
All access ports on SwitchA will use PortFast. PortFast enables faster connectivity for hosts
connected to an access-layer switch port. If PortFast is not enabled, a switch port transitions
through the Spanning Tree Protocol (STP) listening and learning states before it enters the
forwarding state. This process can take as long as 30 seconds if the default STP timers are used.
In addition, port initialization could take as long as 50 seconds if Port Aggregation Protocol (PAgP)
is enabled. PortFast transitions the port into the STP forwarding state without going through the
STP listening and learning states.
PortFast is a feature that should be used only on switch ports that are connected to end devices,
such as user workstations or print devices. Because PortFast immediately transitions a port to the
STP forwarding state, skipping over the listening and learning states, steps should be taken to
ensure that a switch that is inadvertently or intentionally connected to the port cannot influence the
STP topology or cause switching loops. Cisco recommends that switches should not be connected
to access ports that are configured with PortFast; switches should always be connected by trunk
ports.
You can enable PortFast for specific ports by issuing the spanning-tree portfast command in
interface configuration mode. However, you can also enable PortFast for all access ports on the
switch by issuing the spanning-tree portfast default command in global configuration mode;
trunk ports are not affected by the spanning-tree portfast default command.
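Both approaches are sketched below; the interface number is arbitrary:
SwitchA(config)#spanning-tree portfast default
! enables PortFast on all access ports; trunk ports are unaffected
SwitchA(config)#interface gigabitethernet 0/2
! gigabitethernet 0/2 is a hypothetical access port
SwitchA(config-if)#spanning-tree portfast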

16
Q

Which of the following VLANs is used by DTP to negotiate a trunk link when 802.1Q encapsulation
is configured on the interface?
A.
the native VLAN
B.
1
C.
0
D.
4094

A

Answer: A
Explanation:
Dynamic Trunking Protocol (DTP) uses the native virtual local area network (VLAN) to negotiate a
trunk link when Institute of Electrical and Electronics Engineers (IEEE) 802.1Q encapsulation is
configured on the interface. Because DTP frames are always transmitted on the native VLAN,
changing the native VLAN can have unexpected consequences. For example, if the native VLAN
is not configured identically on both ends of a link, a trunk will not dynamically form.
By default, all interfaces on a Cisco switch will use DTP to automatically negotiate whether an
interface should be an IEEE 802.1Q trunk port or an access port. There are two dynamic modes of
operation for a switch port:
* auto – operates in access mode unless the neighboring interface actively negotiates to operate
as a trunk
* desirable – operates in access mode unless it can actively negotiate a trunk connection with a
neighboring interface
The default dynamic mode is dependent on the hardware platform. In general, departmental-level
or wiring closet-level switches default to auto mode, whereas backbone-level switches default to
desirable mode. Because a switch port in auto mode does not actively negotiate to operate in
trunk mode, it will form a trunk link only if negotiations are initiated by the neighboring interface. A
neighboring interface will initiate negotiations only if it is configured to operate in trunk mode or
desirable mode. By contrast, a switch port in desirable mode will actively negotiate to operate in
trunk mode and will form a trunk link with a neighboring port that is configured to operate in trunk,
desirable, or auto mode.
Although VLAN 1 is the default native VLAN on a Cisco switch, the native VLAN can be changed
by issuing the switchport trunk native vlan vlan-id command from interface configuration mode.
Because the configuration of the native VLAN in this scenario is not specified, you cannot be
certain that VLAN 1 is still configured as the native VLAN.
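As a hedged sketch, the native VLAN can be changed on a trunk port and then checked with the show interfaces trunk command; the interface and VLAN ID below are arbitrary illustrations:
Switch(config)#interface gigabitethernet 0/1
Switch(config-if)#switchport trunk native vlan 10
Switch(config-if)#end
Switch#show interfaces trunk
! the output includes a Native vlan column for each trunking port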
VLAN 0 is a special VLAN used by Internet Protocol (IP) phones to indicate to an upstream switch
that it is sending frames that have a configured 802.1p priority but that should reside in the native
VLAN. This VLAN is used if voice traffic and data traffic should be separated but do not require
that a unique voice VLAN be created.
VLAN 4094 is an extended VLAN and is not used for DTP frames unless it has been configured as
the native VLAN. VLAN IDs in the number range from 1006 through 4094 are available only on
extended IOS images. A VLAN ID can be a value from 1 through 1005 or from 1 through 4094,
depending on the IOS image and switch model. VLANs 1002 through 1005 are reserved for Token
Ring and Fiber Distributed Data Interface (FDDI) VLANs. VLANs in this reserved range, as well as
the switch’s native VLAN, can be modified but not deleted.
Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/

17
Q

Which of the following statements best describe why WRED is useful for networks where the
majority of traffic uses TCP? (Choose two.)
A.
TCP sources reduce traffic flow when congestion occurs.
B.
TCP packets that are dropped must be retransmitted.
C.
TCP packets cannot arrive out of sequence.
E.
TCP packets must have priority over UDP packets.

A

Answer: A,B
Explanation:
Weighted random early detection (WRED) is useful for networks where the majority of traffic uses
Transmission Control Protocol (TCP) because TCP packets that are dropped must be
retransmitted. Additionally, TCP sources reduce traffic flow when congestion occurs, thereby
further slowing down the network.
WRED is a congestion avoidance mechanism that addresses packet loss caused by tail drop,
which occurs when new incoming packets are dropped because a router’s queues are too full to
accept them. Tail drop causes a problem called global TCP synchronization, whereby all of the
TCP sources on a network reduce traffic flow during periods of congestion and then the TCP
sources increase traffic flow when the congestion is reduced, which again causes congestion and
dropped packets. When WRED is implemented, you can configure different tail drop thresholds for
each IP precedence or Differentiated Services Code Point (DSCP) value so that lower-priority
traffic is more likely to be dropped than higher-priority traffic, thereby avoiding global TCP
synchronization.
WRED does not address header size. To compress the header of TCP packets, you should
implement TCP header compression. Because TCP header compression compresses only the
header, not the entire packet, TCP header compression works best for packets with small
payloads, such as those carrying interactive data.
WRED does not address the order in which TCP packets arrive. TCP packets can arrive in any
order because each packet is numbered with a sequence number. When the TCP packets arrive
at their destination, TCP rearranges the packets into the correct order.
Although it is possible for TCP packets to require a higher priority than User Datagram Protocol
(UDP) packets, it is also possible for UDP packets to require a higher priority than TCP packets.
UDP traffic that requires a high priority includes Voice over IP (VoIP) traffic and real-time
multimedia traffic. You should avoid placing TCP and UDP traffic in the same traffic class,
because doing so can cause TCP starvation. UDP traffic is not aware of packet loss due to
congestion control mechanisms, so devices sending UDP traffic might not reduce their
transmission rates. This behavior causes the UDP traffic to dominate the queue and prevent TCP
traffic from resuming a normal flow.
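A minimal Modular QoS CLI (MQC) sketch of enabling DSCP-based WRED for one traffic class follows; the class name, policy name, DSCP value, bandwidth percentage, and interface are all hypothetical:
Router(config)#class-map match-all BULK-DATA
Router(config-cmap)#match dscp af11
Router(config-cmap)#exit
Router(config)#policy-map WAN-EDGE
Router(config-pmap)#class BULK-DATA
Router(config-pmap-c)#bandwidth percent 30
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface serial 0/0
Router(config-if)#service-policy output WAN-EDGE
! BULK-DATA, WAN-EDGE, af11, 30 percent, and serial 0/0 are illustrative values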

18
Q

Which of the following devices cannot be connected to leaf nodes in the Cisco ACI architecture?
A.
leaf nodes
B.
spine nodes
C.
EPGs
D.
application servers
E.
APICs

A

Answer: A
Explanation:
In the Cisco Application Centric Infrastructure (ACI), leaf nodes cannot connect to each other.
Cisco ACI is a data center technology that uses switches, categorized as spine and leaf nodes, to
dynamically implement network application policies in response to application-level requirements.
Network application policies are defined on a Cisco Application Policy Infrastructure Controller
(APIC) and are implemented by the spine and leaf nodes.
The spine and leaf nodes create a scalable network fabric that is optimized for east-west data
transfer, which in a data center is typically traffic between an application server and its supporting
data services, such as database or file servers. Each spine node requires a connection to each
leaf node; however, spine nodes do not interconnect nor do leaf nodes interconnect. Despite its
lack of fully meshed connections between spine nodes or between leaf nodes, this physical
topology enables nonlocal traffic to pass from any ingress leaf interface to any egress leaf
interface through a single, dynamically selected spine node. By contrast, local traffic is passed
directly from an ingress interface on a leaf node to the appropriate egress interface on the same
leaf node.
Because a spine node has a connection to every leaf node, the scalability of the fabric is limited by
the number of ports on the spine node, not by the number of ports on the leaf node. For example,
if additional access ports are needed, a new leaf node can be added to the infrastructure as long
as there is a sufficient number of ports remaining on the existing spine nodes to support the new
leaf node. In addition, redundant connections between a spine and leaf pair are unnecessary
because the nature of the topology ensures that each leaf has multiple connections to the network
fabric. Therefore, each spine node requires only a single connection to each leaf node.
Redundancy is also provided by the presence of multiple APICs, which are typically deployed as a
cluster of three controllers. APICs are not directly involved in forwarding traffic and are therefore
not required to connect to every spine or leaf node. Instead, the APIC cluster is connected to one
or more leaf nodes in much the same manner that other endpoint groups (EPGs), such as
application servers, are connected. Because APICs are not directly involved in forwarding traffic,
the failure of an APIC does not affect the ability of the fabric to forward traffic.

19
Q

What percentage of wireless coverage overlap is considered appropriate to ensure that wireless
clients do not lose connectivity when roaming from one AP to another?
A.
10 to 15 percent
B.
40 to 50 percent
C.
0 to 5 percent
D.
more than 50 percent
E.
20 to 35 percent

A

Answer: A
Explanation:
A wireless coverage overlap area of 10 to 15 percent is considered appropriate to ensure that
wireless clients do not lose connectivity when roaming from one access point (AP) to another. Too
little wireless coverage overlap often causes gaps in wireless coverage, which prevents roaming
clients from being able to seamlessly transition from one AP to another. Providing more than 10 to
15 percent wireless coverage overlap would require you to purchase more APs than are
necessary for adequate wireless coverage. In addition, too much wireless coverage overlap could
introduce radio interference from neighboring APs. You should ensure that the APs on the network
use nonoverlapping channels to avoid radio interference from neighboring APs. For example,
although 802.11b can be configured to use 11 different channels in the United States and Canada,
only three nonoverlapping channels can be used: 1, 6, and 11.

20
Q

Which of the following are used by WPA2 to provide MICs and encryption? (Choose two.)
A.
CCMP
B.
TKIP
C.
GCMP
D.
AES
E.
RC4

A

Answer: A,D
Explanation:
Advanced Encryption Standard (AES) and Counter Mode with Cipher Block Chaining Message
Authentication Code Protocol (CCMP) are used by Wi-Fi Protected Access 2 (WPA2) to provide
message integrity checks (MICs) and encryption. Wireless security protocols use MICs to prevent
data tampering. Encryption is used to protect confidentiality.
WPA2, which implements the 802.11i wireless standard, was developed to address the security
vulnerabilities in the original WPA standard. One enhancement over WPA included in WPA2 is the
encryption algorithm. AES is a stronger encryption algorithm than the RC4 algorithm used by
earlier wireless standards. When AES is implemented, a 128-bit block cipher is used to encrypt
data and a security key of 128, 192, or 256 bits can be used. This is a processor-intensive
operation, and implementing WPA2 and AES often requires new hardware, such as new wireless
access points (WAPs) and new client wireless network adapters.
In addition to AES, WPA2 also uses CCMP to provide encryption. CCMP is an encryption
mechanism that uses block ciphers. In WPA2, CCMP is used by AES during the encryption
process. The WPA2 encryption process is thus sometimes known as AES-CCMP.
RC4 is a stream cipher encryption algorithm used in the Wired Equivalent Privacy (WEP) protocol.
Unlike AES, which supports an encryption key length of 256 bits, RC4 supports an encryption key
length of up to 128 bits. Consequently, RC4 is not as secure as AES. Furthermore, RC4 uses a
stream cipher, which is a less secure encryption method. RC4 is not used with WPA2.

21
Q

Which of the following examples best describes the SaaS service model?
A.
A company licenses an office suite, including email service, that is delivered to the end user
through a web browser.
B.
A company hires a service provider to deliver cloud-based processing and storage that will house
multiple virtual hosts configured in a variety of ways.
C.
A company obtains a subscription to use a service provider’s infrastructure, programming tools,
and programming languages to develop and serve cloud-based applications.
D.
A company moves all company-wide policy documents to an Internet-based virtual file system
hosted by a service provider

A

Answer: A
Explanation:
A company that licenses an office suite, including email service, that is delivered to the end user
through a web browser is an example of the Software as a Service (SaaS) service model. The
National Institute of Standards and Technology (NIST) defines three service models in its
definition of cloud computing: SaaS, Infrastructure as a Service (IaaS), and Platform as a Service
(PaaS).
The SaaS service model enables its consumer to access applications running in the cloud
infrastructure but does not enable the consumer to manage the cloud infrastructure or the
configuration of the provided applications. Of the three service models, SaaS exposes the least
amount of the consumer’s network to the cloud and is the least likely to require changes to the
consumer’s network design. A company that licenses a service provider’s office suite and email
service that is delivered to end users through a web browser is using SaaS. SaaS providers use
an Internet-enabled licensing function, a streaming service, or a web application to provide end
users with software that they might otherwise install and activate locally. Web-based email clients,
such as Gmail and Outlook.com, are examples of SaaS.
The PaaS service model provides its consumer with slightly more freedom than the SaaS model
by enabling the consumer to install and possibly configure provider-supported applications in the
cloud infrastructure. A company that uses a service provider’s infrastructure, programming tools,
and programming languages to develop and serve cloud-based applications is using PaaS. PaaS
enables a consumer to use the service provider’s development tools or Application Programming
Interface (API) to develop and deploy specific cloud-based applications or services. Another
example of PaaS might be using a third party’s MySQL database and Apache services to build a
cloud-based customer relationship management (CRM) platform.
The IaaS service model provides the greatest degree of freedom by enabling its consumer to
provision processing, memory, storage, and network resources within the cloud infrastructure. The
IaaS service model also enables its consumer to install applications, including operating systems
(OSs) and custom applications. However, with IaaS, the cloud infrastructure remains in control of
the service provider. A company that hires a service provider to deliver cloud-based processing
and storage that will house multiple physical or virtual hosts configured in a variety of ways is
using IaaS. For example, a company that wanted to establish a web server farm by configuring
multiple Linux Apache MySQL PHP (LAMP) servers could save hardware costs by virtualizing the
farm and using a provider’s cloud service to deliver the physical infrastructure and bandwidth for
the virtual farm. Control over the OS, software, and server configuration would remain the
responsibility of the organization, whereas the physical infrastructure and bandwidth would be the
responsibility of the service provider. Using a third party’s infrastructure to host corporate Domain
Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) servers is another
example of IaaS.
A company that moves all company-wide policy documents to an Internet-based virtual file system
hosted by a third party is using cloud storage. Cloud storage is a term used to describe the use of
a service provider’s virtual file system as a document or file repository. Cloud storage enables an
organization to conserve storage space on a local network. However, cloud storage is also a
security risk in that the organization might not have ultimate control over who can access the files.

22
Q

You are configuring a normal WLAN by using the WLC GUI. You want to configure the WLAN with
the SSID of MyCompanyLAN. You click Create New on the WLANs page.
Which action are you most likely to perform first?
A.
Assign a profile name of up to 32 characters in the Profile Name field.
B.
Select Guest LAN from the Type drop-down list box.
C.
Assign the SSID of MyCompanyLAN in the WLAN SSID field.
D.
Assign a unique ID of 1 in the ID field

A

Answer: A
Explanation:
Most likely, you will assign a profile name of up to 32 characters in the Profile Name field first if
you want to configure a wireless local area network (WLAN) by using the Cisco wireless LAN
controller (WLC) graphical user interface (GUI). The Cisco WLC GUI is a browser-based interface
that enables you to configure various wireless network settings. In this scenario, you want to
create a normal WLAN named MyCompanyLAN. To create a new normal WLAN, you should
complete four steps on the WLANs > New page of the WLC GUI:
1. Select the type of WLAN you are creating from the Type drop-down list box; by default, this
value is configured to WLAN.
2. Enter a 32-character or less profile name in the Profile Name field.
3. Enter a 32-character or less Service Set Identifier (SSID) in the SSID field.
4. Choose a WLAN ID from the ID drop-down list box.
There are three types of WLANs you can create by using the WLC GUI:
1. A normal WLAN, which is the WLAN to which wireless clients inside your company’s walls will
connect
2. A Guest LAN, which is the WLAN to which guest wireless clients inside your company’s walls
will connect
3. A Remote LAN, which is the WLAN configuration for wired ports on the WLC
In this scenario, you are configuring a normal WLAN with an SSID of MyCompanyLAN.
Therefore, you do not need to select WLAN from the Type drop-down list box, because WLAN is
the default value for this drop-down list box. The Type drop-down list box should be configured to
WLAN in order to create a normal WLAN by using the WLC GUI.
After you configure the type of WLAN, you should configure a profile name for the WLAN in the
Profile Name field. The profile name can be up to 32 characters in length and should uniquely
identify the WLAN that you are configuring. The value that you enter in the Profile Name field will
be used by the WLC to identify the WLAN on other configuration pages. For simplicity, many
administrators choose to use the same value for the Profile Name field as they plan to configure
in the SSID field, although this is not required.
After you configure the Profile Name field, you should configure a value of up to 32 characters in
the SSID field. The SSID is the WLAN network name that will be broadcast to wireless clients. In
general, an SSID is the name for the collection of wireless clients that are all operating with the
same Institute of Electrical and Electronics Engineers (IEEE) 802.11 configuration.
Finally, you should configure the WLAN ID on which the WLAN will operate. By default, the ID
drop-down list box on the WLANs > New page will be configured to a value of 1. You can choose to configure a WLAN on any WLAN ID in the range from 1 through 512. Although Cisco controllers support a maximum of 512 WLANs, only 16 WLANs can be actively assigned to any given AP.
Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-5/configguide/b_cg85/wlans.html#ID72 CCNA 200-301 Official Cert Guide, Volume 1, Chapter 29: Building
a Wireless LAN, Configuring a WLAN
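For reference, the same WLAN can also be created from the controller CLI on AireOS-based WLCs. The following is only a sketch and assumes that WLAN ID 1 is unused and that the profile name matches the SSID:
config wlan create 1 MyCompanyLAN MyCompanyLAN
config wlan enable 1
show wlan summary
The config wlan create command takes the WLAN ID, the profile name, and the SSID; the config wlan enable command activates the WLAN, and show wlan summary verifies the result.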

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

Which of the following Application layer protocols use UDP for unsynchronized, connectionless
data transfer? (Choose two.)
A.
SMTP
B.
HTTP
C.
FTP
D.
TFTP
E.
SNMP

A

Answer: D,E
Explanation:
Simple Network Management Protocol (SNMP) and Trivial File Transfer Protocol (TFTP) use User
Datagram Protocol (UDP) for unsynchronized, connectionless data transfer. UDP is a Transport
layer protocol that does not use sequence numbers or establish synchronized connections.
Because of UDP’s connectionless nature, transmitted datagrams can appear out of sequence or
can be dropped without notice; thus it is the responsibility of the Application layer protocol to
reorder datagrams or request the retransmission of lost datagrams. SNMP is used to monitor and
manage network devices. TFTP uses UDP port 69 to transfer files unreliably and without
authentication over a network. Other common Application layer protocols that use UDP include
Dynamic Host Configuration Protocol (DHCP), which is used to assign Internet Protocol (IP)
addressing information to clients, Network Time Protocol (NTP), which is used to coordinate time
on a network, and Remote Authentication Dial-In User Service (RADIUS), which is used to
authenticate users.
Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP) use Transmission Control
Protocol (TCP) for reliable, connection-oriented data transfer. TCP is a Transport layer protocol
that uses sequencing and error-checking to ensure that transmitted data can be easily reordered if
packets arrive out of sequence and can be retransmitted if any packets are lost. Because TCP
handles data sequencing and the retransmission of lost data, the Application layer protocols that
rely on TCP do not need to handle those tasks and can rely on receiving reliable, ordered data.
FTP, which is used to transfer files over a network, uses TCP ports 20 and 21. Cisco devices can
reliably transfer IOS images by using FTP. FTP requires the transmission of authentication
credentials, even if anonymous FTP is in use, but those credentials are transmitted in plain text.
Other common TCP protocols are HTTP, which is used to transfer webpages over the Internet,
Simple Mail Transfer Protocol (SMTP), which is used to send email messages, Post Office
Protocol 3 (POP3), which is used to retrieve email messages, and Telnet, which is used to
manage network devices.
Reference: https://www.iana.org/protoco

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

Which of the following is a valid HSRP version 2 virtual MAC address?
A.
0007.B400.0102
B.
0000.5E00.0101
C.
0000.0C9F.F00A
D.
0000.0C07.AC0B
E.
0005.73A0.0FFF

A

Answer: C
Explanation:
Of the available choices, only 0000.0C9F.F00A is a valid Hot Standby Router Protocol (HSRP)
version 2 virtual Media Access Control (MAC) address. HSRP is a Cisco-proprietary protocol that
enables multiple routers to function as a single gateway for the network. HSRP configures two or
more routers to share a virtual Internet Protocol (IP) address and a virtual MAC address so that
the group of routers appears as a single device to other hosts on the network.
Based on priority value, HSRP elects a single active router and a standby router. The active router
is the router with the highest priority; it forwards packets, responds to Address Resolution Protocol
(ARP) requests with a virtual MAC address, and can be the only router that is explicitly configured
with the virtual IP address. The standby router is the router with the second-highest priority. If
multiple HSRP routers have the same priority, the router with the highest IP address is elected as
the active router. The router with the second-highest IP address is elected as the standby router,
which will assume the role of the active router if the active router fails. To participate in the active
and standby router election process, each HSRP router must be a member of the same group.
There are two versions of HSRP for IP version 4 (IPv4) networks: HSRP version 1 and HSRP
version 2. An HSRP version 1 group is identified by a group number from 0 through 255. An HSRP
version 2 group is identified by a group number from 0 through 4095. The default HSRP group
value for both versions is 0.
To differentiate the virtual MAC addresses of the various groups, HSRP version 1 uses a special
format based on the well-known virtual MAC address 0000.0C07.ACxx, where xx is the group
number in hexadecimal format. HSRP version 2, on the other hand, uses a virtual MAC address of
0000.0C9F.Fxxx, where xxx is the group number in hexadecimal format. In this scenario, the
virtual MAC address for the HSRP group is 0000.0C9F.F00A; the group number is identified by
the final three digits, 00A, in the virtual MAC address. Thus, because 00A is the hexadecimal
equivalent of 10 in decimal notation, the virtual MAC address 0000.0C9F.F00A indicates that the
HSRP group number for this scenario is 10.
The virtual MAC address 0000.5E00.0101 is not an HSRP version 2 virtual MAC address. This
MAC address is a Virtual Router Redundancy Protocol (VRRP) MAC address. VRRP is an Internet
Engineering Task Force (IETF)-standard First-Hop Redundancy Protocol (FHRP) that is supported
by both Cisco and non-Cisco devices. However, if only Cisco devices are used in the topology and
a choice between HSRP and VRRP is available, Cisco recommends using HSRP. A VRRP virtual
MAC address typically uses the 0000.5E00.01xx format, where xx is the VRRP group number.
The virtual MAC address 0007.B400.0102 is not an HSRP version 2 virtual MAC address. This
MAC address is a Gateway Load Balancing Protocol (GLBP) virtual MAC address. The GLBP
active virtual gateway (AVG) assigns a virtual MAC address to a maximum of four primary active
virtual forwarders (AVFs); all other routers in the group are considered secondary AVFs and are
placed in the listen state. GLBP virtual MAC addresses typically use the 0007.B400.xxyy format,
where xx represents the GLBP group number and yy represents the AVF number.
The virtual MAC address 0005.73A0.0FFF is not an HSRP version 2 virtual MAC address. There
is a version of HSRP for IPv6 that uses a range of virtual MAC addresses from 0005.73A0.0000
through 0005.73A0.0FFF. However, configuring HSRP for IPv6 is beyond the scope of CCNA.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

You are trying to configure OSPF to perform equal-cost load balancing. Router1 should have eight
equal-cost OSPF routes to the 192.168.102.0/24 network. However, only four OSPF routes exist.
Which of the following should you do to perform equal-cost load balancing over all eight routes?
A.
Issue the maximum-paths 8 command.
B.
Issue the ip ospf cost 1 command on all interfaces.
C.
Configure EIGRP throughout the network.
D.
Configure the variance to a value of 8.

A

Answer: A
Explanation:
You should issue the maximum-paths 8 command. Many Open Shortest Path First (OSPF) routers
can insert a maximum of four equal-cost paths into the routing table by default. You can override
the default maximum by issuing the maximum-paths maximum command in OSPF router
configuration mode, where maximum indicates the maximum number of equal-cost paths to insert
into the routing table.
You need not configure Enhanced Interior Gateway Routing Protocol (EIGRP) throughout the
network. OSPF supports equal-cost load balancing; if multiple OSPF paths to a destination exist
and each path has the same bandwidth, OSPF will load balance between the paths. By contrast,
EIGRP supports load balancing over equal-cost and unequal-cost paths. OSPF does not use
variance; therefore, configuring variance to a value of 8 will not enable Router1 to perform equal-cost load balancing over eight paths. The variance command is used to determine whether EIGRP
feasible successors can be used for unequal-cost load balancing.
You need not issue the ip ospf cost 1 command on all interfaces, because the routes already have
the same cost. You can manually configure the OSPF cost of a path through an interface by
issuing the ip ospf cost cost command in interface configuration mode, where cost is the path cost
that you want to assign. OSPF uses cost, which is based on bandwidth, as its metric. The higher
the bandwidth, the lower the cost. OSPF selects the lowest-cost path, which is the path with the
highest bandwidth, to a destination.
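A minimal configuration sketch for this scenario, assuming that Router1 runs OSPF process ID 1, is shown below:
router ospf 1
maximum-paths 8
After the change, the output of the show ip route 192.168.102.0 255.255.255.0 command should list all eight equal-cost next hops for the network.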

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

Which of the following best describes a lightweight AP in bridge mode?
A.
It acts as a dedicated connection between two networks.
B.
It is the default operating mode for a lightweight AP.
C.
It captures wireless traffic for analysis.
D.
It enables a failsafe if the CAPWAP connection goes down.

A

Answer: A
Explanation:
A Cisco lightweight access point (AP) operating in bridge mode acts as a dedicated connection
between two networks. A lightweight AP provides an interface for wireless clients to connect to the
wireless local area network (WLAN). However, unlike autonomous APs, a lightweight AP relies on
a Cisco wireless LAN controller (WLC) for management and configuration. Lightweight APs
operating in bridge mode can connect to other networks in either a point-to-point or point-to-multipoint fashion. When multiple APs are configured in bridge mode, the collection of lightweight
APs can be used to form a mesh network.
Local mode is the default operating mode for a lightweight AP. A Cisco lightweight AP operating in
local mode is capable of providing multiple basic service sets (BSSs) on a single channel. In this
mode, the AP can connect to a WLC and can provide client connectivity. In addition, an AP
operating in local mode scans all wireless channels as a means of monitoring wireless quality and
security. The connection between a lightweight AP and a WLC is created by using two tunnels
established by the Control and Provisioning of Wireless Access Points (CAPWAP) tunneling
protocol. Information sent between lightweight APs and the WLC is encapsulated in Internet
Protocol (IP) packets. This process enables a lightweight AP and WLC to manage connectivity to
the same WLAN yet be separated by both physical and logical means.
A Cisco lightweight AP operating in sniffer mode, not bridge mode, captures wireless traffic for
analysis. When traffic is captured, a lightweight AP that is operating in sniffer mode will send the
traffic to an analyzer, which is typically software that is installed on a PC or other host.
A Cisco lightweight AP operating in FlexConnect mode, not bridge mode, enables a failsafe if the
CAPWAP connection goes down. When configured, FlexConnect mode enables a lightweight AP to locally switch traffic between a given Service Set Identifier (SSID) and a given virtual LAN (VLAN), even if the CAPWAP tunnel to the WLC becomes unavailable.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

Which of the following is the default frequency that a Cisco switch will send LLDP advertisements
when LLDP is enabled on an interface?
A.
five seconds
B.
65534 seconds
C.
120 seconds
D.
30 seconds
E.
60 seconds

A

Answer: D
Explanation:
By default, a Cisco switch will send Link Layer Discovery Protocol (LLDP) advertisements every
30 seconds when LLDP is enabled on an interface. LLDP is an Open Systems Interconnection
(OSI) Layer 2 open-standard discovery protocol that is used to facilitate interoperability between
Cisco devices and non-Cisco devices. Attributes that can be learned from neighboring devices
contain Type, Length, Value (TLV) information including port description, system description, and
management address. You can issue the lldp timer rate command from global configuration
mode to configure the frequency at which LLDP advertisements are sent by a switch. The default
rate value is 30 seconds; however, the rate can be configured to any integer value from 5 through
65534 seconds. You can issue the show lldp command from privileged EXEC mode to display
the current LLDP configuration, including the advertisement timer and hold-time values in effect after LLDP has been enabled globally on a switch such as a Cisco 3560 series switch.
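The related commands can be sketched as follows; the values shown are the defaults and are included only for clarity:
lldp run
lldp timer 30
lldp holdtime 120
The lldp run command enables LLDP globally, lldp timer sets the advertisement interval, and lldp holdtime sets how long a neighbor retains the advertised information. The show lldp and show lldp neighbors detail commands can then be used from privileged EXEC mode to verify the configuration and the learned TLV information.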

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

You want to configure SSH for incoming VTY connections on a new router. The router is running a
K9 IOS image but has not yet been configured with a host name, a domain name, or an RSA key
pair. In addition, the VTY lines are not yet configured to accept incoming SSH connections.
You issue the crypto key generate rsa command from global configuration mode.
Which of the following messages will you most likely receive?
A.
Please define a hostname other than Router.
B.
Please create RSA keys to enable SSH.
C.
The name for the keys will be:
D.
Please enable SSH as a transport mode.
E.
Please define a domain-name first.

A

Answer: A
Explanation:
You will most likely receive the Please define a hostname other than Router message when you
issue the crypto key generate rsa command, because you have not configured the router with a
host name other than the default name of Router. To configure a router with a host name other
than the default, you should issue the hostname host-name command from global configuration
mode.
To enable Secure Shell (SSH) for virtual terminal (VTY) lines on a Cisco router, you should
complete the following steps:
1. Configure the router with a host name other than Router by issuing the hostname command.
2. Configure the router with a domain name by issuing the ip domain-name command.
3. Generate an RSA key pair for the router by issuing the crypto key generate rsa command.
4. Configure the VTY lines to use SSH by issuing the transport input ssh command from line
configuration mode.
SSH is often used as a secure replacement for Telnet to manage network devices. In order for
SSH to be enabled on a Cisco device, the device must be running a K9 IOS image, which
provides cryptographic functionality.
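Putting the four steps together, a minimal configuration sketch might look like the following; the host name, domain name, and user credentials are examples only, and the username and login local commands are added beyond the four listed steps so that incoming SSH sessions can be authenticated against the local user database:
hostname Router1
ip domain-name example.com
crypto key generate rsa modulus 2048
username admin secret MyS3cr3t!
line vty 0 15
login local
transport input ssh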
You will not receive the The name for the keys will be: message when you issue the crypto key
generate rsa command in this scenario. However, if you had already configured the router with a
valid host name and a domain name, you would have received the The name for the keys will be:
message after issuing the crypto key generate rsa command. After you specify the name for the
keys, you will be prompted for the modulus length.
You will not receive the Please define a domain-name first message when you issue the crypto
key generate rsa command in this scenario. However, if you had configured the router with a
valid host name but had not configured the router with a domain name, you would have received
the Please define a domain-name first message after issuing the crypto key generate rsa
command. In this scenario, you have configured neither the domain name nor the host name. To
configure a router with a domain name, you should issue the ip domain-name domain-name
command from global configuration mode.
You will not receive the Please create RSA keys to enable SSH message when you issue the
crypto key generate rsa command in this scenario. However, if you had issued another
command related to SSH, such as the ip ssh time-out 60 command, but had not yet enabled
SSH on the router, you would have received the Please create RSA keys to enable SSH
message.
You will not receive the Please enable SSH as a transport mode message when you issue the
crypto key generate rsa command in this scenario. The Please enable SSH as a transport mode
message is not a warning message that is displayed on Cisco routers. You can issue the
transport input ssh command to configure SSH as the transport mode for VTY lines.
Reference: https://www.cisco.com/c/en/us/support/docs/security-vpn/secure-shell-ssh/4145-
ssh.html#settingupaniosrouterasssh CCNA 200-301 Official Cert Guide, Volume 1, Chapter 6:
Configuring Basic Switch Management, Securing Remote Access with Secure Shell

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

Which of the following is a REST API encoding format that uses HTML-like tags to define blocks of
data?
A.
BSON
B.
XML
C.
YAML
D.
JSON

A

Answer: B
Explanation:
Of the available choices, Extensible Markup Language (XML) is a Representational State Transfer
(REST) Application Programming Interface (API) encoding format that uses Hypertext Markup
Language (HTML)-like tags to define blocks of data. REST APIs encode data in either XML format
or in JavaScript Object Notation (JSON) format. In addition, REST APIs are typically used to
communicate with a Software-Defined Networking (SDN) application plane.
An SDN controller uses two different sets of APIs: one set to communicate with applications and
another set to communicate with devices. Northbound APIs enable an SDN controller to
communicate with applications in the application plane. Applications use northbound APIs to send
requests or instructions to the SDN controller, which uses that information to modify and manage
network flow. Southbound APIs enable an SDN controller to communicate with devices in the data
plane.
XML is a markup language that is similar to HTML in structure; it uses tags to define blocks of
data. Whereas HTML is used to render information on a webpage, XML is a more structured
language that is used to format data in a way that can be easily transmitted over the Internet and
parsed by a variety of applications.
JSON is a REST API encoding format. However, JSON returns data in the form of an object that
contains key and value pairs. A single JSON object can contain multiple key and value pairs. Each
key and value pair inside a JSON object is separated from the others by a comma (,).
Furthermore, each pair’s key is separated from its value by a colon (:). The element in quotation
marks on the left side of each colon is the key. The element on the right side of each colon is the
value, which might or might not be enclosed in quotation marks. There are several data value
types that can be returned in JSON output: text, numeric, array, object, Boolean, and null.
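As a simple illustration, a JSON object returned by a REST API might look like the following; the keys and values are invented for the example:
{
"hostname": "Router1",
"interfaces": ["GigabitEthernet0/0", "GigabitEthernet0/1"],
"uptimeSeconds": 86400,
"reachable": true
}
In this object, hostname holds a text value, interfaces holds an array, uptimeSeconds holds a numeric value, and reachable holds a Boolean value.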
YAML Ain’t Markup Language (YAML) is not a REST API encoding format. YAML is a data
serialization language that presents information in a format that is typically more human-readable
than either XML or JSON. YAML is commonly used by the Ansible configuration management tool
to store configuration playbooks.
Binary JSON (BSON) is not a REST API encoding format. BSON is a data serialization format that
stores JSON data in a binary form that is not human-readable. This is in contrast to the text format
that is typical of JSON. BSON is typically used in information storage systems, such as MongoDB

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

You issue the following commands on a Cisco router named RouterA:
enable password !abcu$3r!
enable secret abc4dm!n
line console 0
password abc4dm!n
line vty 0 15
login
password abcu$3r
service password-encryption
Another user has been asked to examine the running configuration on RouterA but not make any
configuration changes. The user connects to RouterA by using Telnet.
Which of the following will the user require in order to perform this task?
A.
the enable secret password and the console password
B.
the console password alone
C.
the VTY line password alone
D.
the enable password and the console password
E.
the enable secret password and the VTY line password
F.
the enable password and the VTY line password
G.
the console password and the VTY line password

A

Answer: E
Explanation:
The user will require the enable secret password and the virtual terminal (VTY) line password to
examine the running configuration on RouterA in this scenario. In this scenario, the user connects
to RouterA by using Telnet. You can configure Telnet login information on a Cisco device by
issuing the line vty first last command to place the device in VTY line configuration mode. Next,
you can issue the password password command to configure a Telnet password and the login
command to enable password checks if the command has been disabled on the router. The login
command is typically configured by default. Issuing the Telnet password when you are connecting
to a device places the device into user EXEC mode, where it is not typically possible to display the
running configuration.
In most Cisco IOS releases, the enable secret command stores an encrypted password in the device's configuration file as a Message Digest 5 (MD5) hash; certain Cisco IOS 15 releases instead use a Secure Hash Algorithm (SHA) 256-bit hash known as a Type 4 password. The syntax for the enable secret command is enable secret [level level] {password | [encryption-type] encrypted-password}, where password is a string of characters that represents the clear-text password.
Instead of supplying a clear-text password, you can specify an encryption-type value of 0, 4, or 5
and an encrypted-password value of either a clear-text password, a SHA-256 hash, or a Message
Digest 5 (MD5) hash, respectively. Supplying a hash value requires that you have previously
encrypted the value by using a hashing algorithm in the same fashion that IOS uses the algorithm.
This command configures a password that is required in order to place the device into enable
mode, which is also known as privileged EXEC mode. The device must, at a minimum, be placed
into enable mode for the user to be able to display the running configuration. In some Cisco IOS
versions prior to 15.3(3), the enable secret command by default stores an encrypted password in
the device’s configuration file by using a Secure Hash Algorithm (SHA) 256-bit hash. As of Cisco
IOS 15.3(3), Type 4 passwords have been deprecated because of a security flaw in their
implementation.
The user will need the enable secret password, not the enable password, to access privileged
EXEC mode in this scenario. The enable password password command configures a clear-text
enable password on a Cisco device. If both the enable password command and the enable
secret command are in the running configuration of a Cisco device, the device will ignore the
password associated with the enable password command. Therefore, issuing the password that
is configured by the enable password command in this scenario will not provide the user with
access to privileged EXEC mode. Because the service password-encryption command has
been configured on RouterA, all passwords on the device have been encrypted. However, the
router will still prefer the enable secret password over the enable password password.
The user will not need the console password in this scenario. The line console 0 command
followed by the password command configures a password for accessing the router by using the
console. Typically, the console is accessed by physically connecting a console cable between the
router and a device that is running terminal software. Issuing the password for t

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

In a controller-based network, the functions of which of the following protocols are most likely to be
moved to a centralized controller? (Choose two.)
A.
Syslog
B.
EIGRP
C.
OSPF
D.
SSH
E.
SNMP

A

Answer: B,C
Explanation:
In a controller-based network, the functions of Enhanced Interior Gateway Routing Protocol
(EIGRP) and Open Shortest Path First (OSPF) are most likely to be moved to a centralized
controller. Routing protocols like EIGRP and OSPF operate in the control plane of a traditional
distributed network. These protocols make routing decisions for packets that require routing
among Layer 3 devices. In a controller-based network, such as a Software-Defined Networking
(SDN) network, the control plane is centralized. Therefore, the decision-making logic is either
moved to a central controller or monitored by a central controller.
In a controller-based network, none of the functions of Secure Shell (SSH), Syslog, or Simple
Network Management Protocol (SNMP) are likely to be moved to a centralized controller. All of
these protocols operate in the management plane in both a traditional network and a controller-based network. Another network management protocol that operates in this plane is Telnet.
All of these protocols enable an administrator to connect to and manage a network device.
Layer 2 switches, Layer 3 switches, and end devices typically operate in the data plane. In a
controller-based network, the controller communicates with the data plane by using a southbound
Application Programming Interface (API), such as NETCONF, OpenFlow, OpFlex, or OnePK.
Network tasks that are typically performed in the data plane include the encapsulation and
decapsulation of packets, the adding or removing of trunk headers, the matching of Media Access
Control (MAC) addresses to a MAC address table, the matching of Internet Protocol (IP)
addresses to paths in a routing table, the encryption of data, Network Address Translation (NAT),
and filtering by using either access control lists (ACLs) or port security.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

Which of the following best describes what occurs when a packet must be re-sent because of an
interruption that occurs before the 64th byte has been transmitted?
A.
A jumbo frame is transmitted.
B.
A late collision occurs.
C.
A collision occurs.
D.
A baby giant frame is transmitted.
E.
A runt frame is transmitted.

A

Answer: C
Explanation:
A collision occurs when a packet must be re-sent because of an interruption that occurs before the
64th byte, or 512th bit, has been transmitted. When two devices attempt to send data
simultaneously, a collision occurs. On Ethernet networks, which use Carrier Sense Multiple
Access with Collision Detection (CSMA/CD), both devices will wait a random amount of time
before resending. Collisions can be caused by a duplex mismatch, by a malfunctioning device, or
by having too many nodes on a network segment.
A late collision occurs when a packet must be re-sent because of an interruption that occurs after
the 64th byte, or 512th bit, has been transmitted. Late collisions can be caused by a duplex
mismatch or by a network segment that extends farther than the cable length supports.
A runt is a frame that is fewer than 64 bytes and has a bad Frame Check Sequence (FCS).
Frames that are smaller than 64 bytes are discarded. Runts can sometimes be caused by
excessive collisions but can also be caused by malfunctioning hardware.
A baby giant is a frame that is up to 1,600 bytes in length. The default maximum transmission unit
(MTU) size for Ethernet frames is 1,500 bytes, not including the Ethernet header and the cyclic
redundancy check (CRC) trailer, which add 18 bytes to the frame. Therefore, baby giant frames
are slightly larger than an Ethernet frame. Baby giants can occur if you use Q-in-Q encapsulation,
Multiprotocol Label Switching (MPLS), or any other feature that adds to the size of an Ethernet
frame.
A jumbo is a frame that is up to 9,216 bytes in length, which is much larger than a standard
Ethernet frame. You can issue the system mtu bytes global configuration command to change the
MTU size on Ethernet or Fast Ethernet interfaces; however, they cannot support jumbo frames.
Reference: https://www.cisco.com/c/en/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

You issue the spanning-tree guard root command on a switch port that you are connecting to a
new, unconfigured switch.
Which of the following are you most likely attempting to do?
A.
prevent loops on a port that could erroneously receive BPDUs
B.
prevent the new switch from being elected root
C.
prevent loops from an interruption of BPDU flow
D.
prevent a port from transitioning through all of the STP states

A

Answer: B
Explanation:
Most likely, you are attempting to prevent the new switch from being elected root if you issue the
spanning-tree guard root command on a switch port that you are connecting to a new,
unconfigured switch. The spanning-tree guard root command configures the Spanning Tree
Protocol (STP) root guard feature.
Root guard is used to prevent newly introduced switches from being elected as the new root
switch. This allows administrators to maintain control over which switch is the root. When STP is used, the switch with the lowest bridge priority (with the lowest MAC address as the tiebreaker) is elected the root. If a new device is added to the
network with a lower priority than the current root, it will become the new root. However, this could
cause the network to reconfigure in unintended ways. To prevent this, root guard can be applied.
Root guard is applied on a per-port basis by issuing the spanning-tree guard root command. If
root guard is enabled on a loop guard-enabled port, loop guard will be automatically disabled.
You are not attempting to prevent a port from transitioning through all of the STP states. If you
want to ensure that a port immediately transitions to the STP forwarding state, you should enable
PortFast on the port. PortFast is a feature that provides immediate accessibility to the network for
edge ports, such as access ports that are connected to end-user workstations. PortFast transitions
the port into the STP forwarding state without going through the STP listening and learning states.
Because the ports are not expected to receive bridge protocol data units (BPDUs), they are not
required to listen for BPDUs and learn the network topology. It is important to note that if PortFast
is enabled on a port that is connected to another switch, the potential for creating spanning tree
loops significantly increases.
You are not attempting to prevent loops from an interruption of BPDU flow. The loop guard feature
prevents nondesignated ports from inadvertently forming bridging loops if the steady flow of
BPDUs is interrupted. When the port stops receiving BPDUs, loop guard puts the port into the
loop-inconsistent state, which keeps the port in a blocking state. After the port starts receiving
BPDUs again, loop guard automatically re-enables the port so that it transitions through the
normal STP states. You can enable loop guard for the entire switch by issuing the spanning-tree
loopguard default command in global configuration mode, or you can enable loop guard for
specific ports by issuing the spanning-tree guard loop command in interface configuration mode.
You are not attempting to prevent loops on a port that could erroneously receive BPDUs. BPDU
guard is used to disable ports that erroneously receive BPDUs. BPDU guard is typically applied to
edge ports that have PortFast enabled. Because PortFast automatically places ports into a
forwarding state, a switch that has been connected to a PortFast-enabled port could cause
switching loops. However, when BPDU guard is applied, the receipt of a BPDU on a port will result
in the port being placed into the error-disabled state, which prevents loops from occurring. BPDU
guard should be enabled on ports that have been enabled with PortFast so that BPDU guard can
prevent a rogue switch from modifying the STP topology. When such a port receives a BPDU,
BPDU guard immediately puts that port into the error-disabled state and shuts down the port. The
port must then be manually re-enabled, or it can be recovered automatically by configuring the
errdisable recovery cause bpduguard command and the errdisable recovery interval interval
command.
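The commands involved can be sketched as follows; the interface names are examples only, with GigabitEthernet0/1 representing the port toward the new switch and GigabitEthernet0/2 representing an edge port connected to a workstation:
interface GigabitEthernet0/1
spanning-tree guard root
interface GigabitEthernet0/2
spanning-tree portfast
spanning-tree bpduguard enable
errdisable recovery cause bpduguard
errdisable recovery interval 300
Root guard protects the port toward the new switch from superior BPDUs, PortFast and BPDU guard protect the edge port, and the errdisable recovery commands allow a port that BPDU guard has shut down to be re-enabled automatically after 300 seconds.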

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

Which HSRP router or routers will use the HSRP virtual IP address and will respond to ARP
requests with the HSRP virtual MAC address?
A.
only the active router
B.
only the active router and the standby router
C.
only the standby router
D.
all HSRP routers in a group

A

Answer: A
Explanation:
Only the active router will use the Hot Standby Router Protocol (HSRP) virtual Internet Protocol
(IP) address and will respond to Address Resolution Protocol (ARP) requests with the HSRP
virtual Media Access Control (MAC) address. HSRP is a Cisco-proprietary protocol that enables
two or more routers to act as a single virtual router. Multiple routers are assigned to an HSRP
group, and the routers function as a single gateway. The HSRP virtual IP address can then be
configured as the default gateway address for client devices.
Each HSRP group is identified by a group number from 0 through 255. The default HSRP group
value is 0. Based on priority value, HSRP elects a single active router and a standby router for
each group. To participate in the active and standby router election process, each HSRP router
must be a member of the same group. The active router is the router with the highest priority; it
forwards packets, responds to ARP requests with a virtual MAC address, and can be the only
router that is explicitly configured with the virtual IP address. The standby router is the router with
the second-highest priority. If multiple HSRP routers have the same priority, the router with the
highest IP address is elected as the active router. The router with the second-highest IP address is
elected as the standby router, which will assume the role of the active router if the active router
fails. Other routers in the HSRP group are in the listen state.
By default, Hello packets are sent by the active router every three seconds. Only the standby
router monitors the active router’s Hello packets. If the standby router does not receive a Hello
packet from the active router for the duration configured in the holdtime, the standby router will
take over the role of the active router. By default, the holdtime is set to 10 seconds.
To differentiate the virtual MAC addresses of the various groups, HSRP uses a special format for
the virtual MAC address that uses the well-known virtual MAC address 0000.0c07.acxx, where xx
is the group number in hexadecimal format. For example, the virtual MAC address for HSRP
group 11 is 0000.0c07.ac0b; 0b is the hexadecimal equivalent of 11 in decimal notation.
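The following sketch shows how priority determines the active router; the interface, the addresses, and group number 11 are assumed for the example, and RouterB keeps the default priority of 100:
RouterA (intended active router):
interface GigabitEthernet0/0
ip address 192.168.1.2 255.255.255.0
standby 11 ip 192.168.1.1
standby 11 priority 110
standby 11 preempt
RouterB (standby router):
interface GigabitEthernet0/0
ip address 192.168.1.3 255.255.255.0
standby 11 ip 192.168.1.1
Because RouterA has the higher priority and preemption enabled, it becomes the active router for group 11, owns the virtual IP address 192.168.1.1, and answers ARP requests for that address with the virtual MAC address 0000.0c07.ac0b.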

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

Which of the following MAC addresses represents an IP multicast address?
A.
01-00-5E-0F-0F-0F
B.
CF-00-00-00-00-00
C.
00-00-0C-F0-F0-F0
D.
FF-FF-FF-FF-FF-FF

A

Answer: A
Explanation:
The Media Access Control (MAC) address 01-00-5E-0F-0F-0F represents an Internet Protocol (IP)
multicast address. The Ethernet multicast range of 01-00-5E-00-00-00 through 01-00-5E-7F-FF-FF has been allocated for IP multicast use. This means that the first 24 bits of a 48-bit multicast
MAC address are always 01-00-5E, and the twenty-fifth bit is always set to 0. The remaining 23
bits are created from the last 23 bits of the multicast IP address. Because IP addresses are 32 bits
long, several multicast IP addresses correspond to each multicast MAC address. For example, the
IP addresses 224.15.15.15 and 225.143.15.15 share the same last 23 bits, as shown below:
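224.15.15.15 = 11100000 00001111 00001111 00001111
225.143.15.15 = 11100001 10001111 00001111 00001111
Low-order 23 bits (identical in both) = 0001111 00001111 00001111
Prefixing the fixed 24 bits 01-00-5E and the fixed 0 bit to those 23 bits yields the multicast MAC address 01-00-5E-0F-0F-0F for both IP addresses.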

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

Which of the following Cisco management solutions supports Cisco SDA?
A.
Cisco DNA Center
B.
Cisco Network Assistant
C.
Cisco IOS 15
D.
Cisco PI

A

Answer: A
Explanation:
Of the available choices, Cisco Digital Network Architecture (DNA) Center is the Cisco
management solution that supports Cisco Software-Defined Access (SDA). Cisco SDA is a Cisco-developed means of building local area networks (LANs) by using policies and automation. Cisco
DNA Center, which abstracts the complexity of network configuration by implementing a
centralized controller and graphical user interface (GUI), also supports many of the same
traditional campus device management features that are supported by other Cisco management
solutions. Administrators typically interact with Cisco DNA by using a browser-based GUI. Cisco
DNA Center uses the Representational State Transfer (REST) API to natively communicate with
Cisco devices. To communicate with third-party devices, Cisco DNA Center relies on software
development kits (SDKs).
Cisco IOS 15 is not built specifically to support the Cisco SDA. Cisco IOS is a network device
operating system (OS) that is used to directly configure, manage, and troubleshoot a single
device. Administrators typically interact with Cisco IOS by using a command-line interface (CLI).
Access to the CLI can be gained by connecting to a device’s console port, by connecting to a
Telnet session, or by connecting to a Secure Shell (SSH) session, depending on how the device is
configured.
Cisco Network Assistant is not built specifically to support the Cisco SDA. Cisco Network Assistant
is a free Java-based desktop application that enables a LAN administrator to perform network
operations, diagnose problems, and interact with network devices by using a GUI. A typical Cisco
Network Assistant installation supports the management of up to 80 devices. Cisco Network
Assistant predates Cisco SDA and is therefore not specifically built to support Cisco SDA.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

Which of the following entries from the show ip route command indicates a host route?
A.
C 192.168.1.0/30 is directly connected, FastEthernet0/0
B.
S* 0.0.0.0/0 [1/0] via FastEthernet0/0
C.
L 192.168.1.1/32 is directly connected, FastEthernet0/0
D.
O 192.168.1.0/24 [110/2] via 10.1.1.13, 00:00:08, FastEthernet0/0
E.
D 192.168.0.4/30 [90/2195456] via 192.168.0.18, 00:03:31, FastEthernet0/0
F.
S 192.168.1.0/24 [5/0] via 10.1.2.3

A

Answer: C
Explanation:
The L 192.168.1.1/32 is directly connected, FastEthernet0/0 entry from the show ip route
command indicates a host route. Local host routes are marked with an L in the output of the
show ip route command or the show ipv6 route command. Internet Protocol version 4 (IPv4)
host routes have a /32 mask, and IP version 6 (IPv6) host routes have a /128 mask.
Not all IPv4 routes with a /32 mask are considered host routes. IPv4 addresses that are manually
configured with a /32 mask are considered to be connected addresses and are marked with a C in
the output of the show ip route command. For example, the C 192.168.1.0/30 is directly
connected, FastEthernet0/0 entry from the show ip route command indicates a connected route.
Routes that are marked with an O in the output of the show ip route command are Open Shortest
Path First (OSPF) routes. Routes that are marked with a D in the output of the show ip route
command are Enhanced Interior Gateway Routing Protocol (EIGRP) routes. OSPF routes and
EIGRP routes are considered network routes.
Routes that are marked with an S in the output of the show ip route command are static routes.
Normal static routes have an administrative distance (AD) of 1; the AD is the first number inside
the brackets. A static route with a modified AD is called a floating static route and is often used as
a backup route in case the primary route goes down. The S 192.168.1.0/24 [5/0] via 10 .1.2.3
entry from the show ip route command indicates a floating static route with an AD of 5.
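For context, a floating static route like the one in that entry could be created with a command along the following lines; the network, the next hop, and the AD value are taken from the sample entry:
ip route 192.168.1.0 255.255.255.0 10.1.2.3 5
The trailing 5 sets the AD, so the route is installed only when no route to 192.168.1.0/24 with a lower AD is available.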
Routes that are marked with an asterisk (*) in the output of the show ip route command are candidate default routes.
A static default route can be configured by issuing the ip route 0.0.0.0 0.0.0.0 {next-hop-IP |
interface} command. The S* 0.0.0.0/0 [1/0] via FastEthernet0/0 entry from the show ip route

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

RouterA receives routes to the following overlapping networks:
* 192.168.1.0/24
* 192.168.1.0/25
* 192.168.1.0/26
* 192.168.1.0/28
Each of the routes is received from a different routing protocol.
Which of the following routes will RouterA install in the routing table?
A.
all of the routes
B.
the route with the highest AD
C.
the route with the longest prefix match
D.
the route with the lowest AD
E.
the route with the shortest prefix match

A

Answer: A
Explanation:
RouterA will install all of the routes in the routing table. When multiple routes to overlapping
networks exist, a router will prefer the most specific route, which is the route with the longest prefix
match. For example, if RouterA receives a packet to 192.168.1.4, it will send the packet to the
192.168.1.0/28 route; if RouterA receives a packet to 192.168.1.20, it will send the packet to the
192.168.1.0/26 route. RouterA will not install only the route with the longest or shortest prefix
match.
RouterA will not install only the route with the highest or lowest administrative distance (AD),
because the routes target separate destination networks. When multiple routes to the same
destination network exist and each route uses a different routing protocol, a router will install only
the route from the routing protocol with the lowest AD. The following list contains the most

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

In a two-tier network design, which layers are combined together?
A.
aggregation and access
B.
core, distribution, and access
C.
core, aggregation, and access
D.
core and access
E.
core and distribution
F.
distribution and access

A

Answer: E
Explanation:
In a two-tier network design, the core and distribution layers are combined together into a single
layer; this two-tier network design is often called the collapsed core design. The functionality of the
core layer can be collapsed into the distribution layer if the distribution layer infrastructure is
sufficient to meet the design requirements. The collapsed core in a two-tier network design
provides physical and logical paths as well as a Layer 2 aggregation and demarcation point. In
addition, a collapsed core defines routing policies and network access policies and provides
intelligent network services.
The Cisco hierarchical network model divides the network into three distinct layers:
* Core layer
* Distribution layer, sometimes called the aggregation layer
* Access layer
The core layer of the hierarchical model is primarily associated with low latency and high reliability.
As the network backbone, the core layer provides fast convergence and typically provides the
fastest switching path in the network.
The distribution layer provides route filtering and interVLAN routing. The distribution layer serves
as an aggregation point for access layer network links. Because the distribution layer is the
intermediary between the access layer and the core layer, the distribution layer is the ideal place
to enforce security policies, to provide Quality of Service (QoS), and to perform tasks that involve
packet manipulation, such as routing. Summarization and next-hop redundancy are also
performed in the distribution layer.
The access layer serves as a media termination point for endpoints such as servers and hosts.
Because access layer devices provide access to the network, the access layer is the ideal place to
perform user authentication.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

You issue the following commands on a Cisco router’s FastEthernet 0/0 interface:
ipv6 enable
no shutdown
The interface on the other side of the link is not yet configured. In addition, there is no DHCPv6
server on the network.
How many IPv6 addresses are configured on the interface?
A.
one
B.
none
C.
three
D.
two

A

Answer: A
Explanation:
One Internet Protocol version 6 (IPv6) address is configured on the interface after you issue the
ipv6 enable command on the Cisco router’s FastEthernet 0/0 interface. There are three ways to
enable IPv6 on an interface: by manually assigning an IPv6 address to the interface, by
automatically assigning an IPv6 address to the interface, or by issuing the ipv6 enable command
on the interface. Once IPv6 is enabled on an interface, it can use its automatically derived, link-local IPv6 address to communicate with other IPv6-enabled devices on directly connected
networks.
IPv6 link-local unicast addresses are used for communication over a single link. Routers do not
forward traffic sent to a link-local address; the traffic stays on the local link. IPv6 link-local unicast
addresses are often used for neighbor discovery. These addresses usually begin with FE8, as
specified in Request for Comments (RFC) 4291.
To manually assign an IPv6 address to an interface, you can issue the ipv6 address
address/prefix-length [eui-64] command. The eui-64 keyword configures a static IPv6 prefix but
allows the router to automatically generate a 64-bit interface ID known as an extended unique
identifier (EUI)-64 interface ID; the EUI-64 interface ID is based on the interface’s Media Access
Control (MAC) address.
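As a sketch of the difference, the following example uses the documentation prefix 2001:DB8::/32 and assumes the same FastEthernet 0/0 interface; only the last command would add a global unicast address on top of the link-local address that the ipv6 enable command creates:
interface FastEthernet0/0
no shutdown
ipv6 enable
ipv6 address 2001:DB8:0:1::/64 eui-64
In the original scenario, only the first three commands are issued, so the interface ends up with the link-local address alone.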
To automatically assign an IPv6 address to an interface, you can implement Stateless Address
Autoconfiguration (SLAAC), Dynamic Host Configuration Protocol version 6 (DHCPv6), or a
combination of the two. SLAAC configurations occur based on information that is sent in router
advertisements from an IPv6 gateway operating on the same network segment. When the interface's link-local address is active on the segment, the interface can announce itself and receive router advertisements
from an IPv6 router that is operating on the same segment. If an IPv6 router exists and can
function as an IPv6 gateway, it will advertise that functionality as well as the globally unique prefix
with which it is configured and which connected nodes should use. The ipv6 address
autoconfig command configures an interface to automatically assign itself a global unicast IPv6
address by using SLAAC.
The same ipv6 address autoconfig command that enables SLAAC on an interface will enable
the interface to obtain additional information from a DHCPv6 server if a DHCPv6 server exists on
the network and is configured to send nonaddress information. The ipv6 address dhcp
command configures a DHCPv6 client interface to use stateful DHCPv6 addressing, which
configures addressing information and extra information from the DHCPv6 server.
Unlike with IP version 4 (IPv4), it is possible to configure more than one IPv6 address on an
interface without defining the addresses as primary or secondary. IPv4-only interfaces can be
configured with only one primary IPv4 address. However, in this scenario, the ipv6 enable
command only configures a link-local IPv6 address. Although it is possible for an interface to have
more than two IPv6 addresses, that is not the case in this scenario.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

You issue the show ip ospf neighbor command on Router1 and see that Router2 is in the
2WAY/DROTHER state.
Which of the following statements is true regarding Router2?
A.
Router1 is the DR for the segment.
B.
Router2 is the DR for the segment.
C.
Router1 and Router2 are normal neighbor routers that are operating correctly.
D.
The MTU settings are mismatched between Router1 and Router2.

A

Answer: C
Explanation:
Router1 and Router2 are normal neighbor routers that are operating correctly. Neighbor routers
that are neither the designated router (DR) nor the backup designated router (BDR) remain in the
2-Way state. The DR and BDR will proceed to the Full state.
Neither Router1 nor Router2 is the DR for the segment. If Router1 were the DR, the output of the
show ip ospf neighbor command would show Router2 in the FULL/DROTHER state. If Router2
were the DR, the output of the show ip ospf neighbor command would show Router2 in the
FULL/DR state.
The maximum transmission unit (MTU) settings are not mismatched between Router1 and
Router2. If the MTU settings were mismatched, the routers would be stuck in the Exstart,
Exchange, or Loading states.
When an OSPF neighbor router is powered on, it transitions through the following neighbor states:
* Down
* Init
* 2-Way
* Exstart
* Exchange
* Loading
* Full
An OSPF neighbor router begins in the Down state. A neighbor in the Down state has not yet sent
a Hello packet.
When a Hello packet is received from the neighbor router but the Hello packet does not contain
the receiving router’s ID, the neighbor router is in the Init state. The receiving router replies to the
neighbor router with a Hello packet that contains the neighbor router’s ID as an acknowledgment
that the receiving router received the neighbor’s Hello packet. If a router is stuck in the Init state, it
has sent Hello packets but has not received any from the neighbor router.
The neighbor router replies with a Hello packet that contains the receiving router’s ID. When this
occurs, the neighbor router is in the 2-Way state. At the end of the 2-Way state, the DR and BDR
are elected for broadcast and nonbroadcast multiaccess (NBMA) networks. On broadcast and
NBMA networks, neighbor routers will proceed to the Full state only with the DR and BDR; other
neighbor adjacencies will remain in the 2-Way state. If all routers on a segment remain in the 2-
Way state, you should verify whether all routers on the segment are set to a priority of 0, which
prevents any of them from becoming the DR or BDR.
After the DR and BDR are elected, neighbor routers form master-slave relationships in order to
establish the method for exchanging link-state information. Routers in this state are in the Exstart
state. If a router is stuck in the Exstart state, you should verify whether there is a problem with
mismatched maximum transmission unit (MTU) settings or duplicate router IDs.
Neighbor routers then exchange database descriptor (DBD) packets. These DBD packets contain
link-state advertisement (LSA) headers that describe the contents of the link-state database
(LSDB). Routers in this state are in the Exchange state. If a router is stuck in the Exchange state,
you should verify whether there is a problem with mismatched MTU settings or duplicate router
IDs.
Routers then send link-state request (LSR) packets to request the contents of the neighbor
router’s OSPF database. The neighbor router replies with link-state update (LSU) packets that
contain the routing database information. Routers in this state are in the Loading state. If a router
is stuck in the Loading state, you should verify whether there is a problem with mismatched MTU
settings or corrupted LSR packets.
After the OSPF databases of neighbor routers are fully synchronized, the routers transition to the
Full state, which is the normal OSPF router state for DRs and BDRs. A router will periodically send
Hello packets to its neighbors to indicate that it is still functional. If a router does not receive a
Hello packet from a neighbor within the dead timer interval, the neighbor router will transition back
to the Down state.
Reference: https://www.cisco.com/c/en/us/support/docs/ip/open-shortest-path-first

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

Which of the following statements are true regarding dynamic interfaces on WLCs? (Choose two.)
A.
Dynamic interfaces are user-defined.
B.
Dynamic interfaces are typically used for client data.
C.
Dynamic interfaces are typically used for management information.
D.
Dynamic interfaces must be reachable by other WLCs.
E.
Dynamic interfaces are often used for maintenance purposes.

A

Answer: A,B
Explanation:
Dynamic interfaces are user-defined and are typically used for client data. A wireless LAN
controller (WLC) contains both static and dynamic interfaces. A WLC can contain up to four types
of static interfaces: the management interface, the AP-manager interface, a virtual interface, and
the service port interface. A WLC can contain up to 512 dynamic interfaces. The dynamic
interfaces function similarly to virtual local area networks (VLANs). For example, you can create a
dynamic interface to segment traffic on the WLC.
The management interface, which is a static interface, is used for management information. This
interface is used for all Layer 2 Lightweight Access Point Protocol (LWAPP) communications
between the controller and the lightweight access points (APs). In addition, the management
interface is used to communicate with other WLCs on the wireless network.
The service port interface, which is a static interface, is used for maintenance purposes on a WLC.
This interface is a physical interface on the WLC that can be used to recover the WLC in the event
that the WLC fails. The service port interface is the only interface that is available while the WLC is
booting.
It is not necessary for a dynamic interface to be reachable by all other WLCs. The WLCs will use
the management interface, not a dynamic interface, to exchange information

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

To which of the following planes does a centralized controller connect by using a northbound API?
A.
the management plane
B.
the application plane
C.
the control plane
D.
the data plane

A

Answer: B
Explanation:
In a controller-based network, a centralized controller connects to the application plane by using a
northbound Application Programming Interface (API). The application plane is the component of a
controller-based network in which applications that are written to allow interaction with the
centralized controller reside. These applications are typically designed to improve network
management efficiency through network automation. A controller communicates with applications
in the application plane by using a northbound API such as Representational State Transfer
(REST) or Java Open Services Gateway initiative (OSGi).
In a controller-based network, such as a Software-Defined Networking (SDN) network, the control
plane is centralized. The control plane is responsible for network decision making in both a
controller-based network and a traditional network. However, the control plane in a traditional
network is typically distributed among many devices. The Open Shortest Path First (OSPF) routing
protocol running on a series of routers on a traditional network is one example of a traditional
control plane. OSPF makes routing decisions for packets that require routing among Layer 3
devices. In a controller-based network, the decision-making logic is either moved to a central
controller or monitored by a central controller.
In a controller-based network, a centralized controller connects to the data plane by using a
southbound API, such as NETCONF, OpenFlow, OpFlex, or OnePK. Layer 2 switches, Layer 3
switches, and end devices typically operate in the data plane. Network tasks that are typically
performed in the data plane include the encapsulation and decapsulation of packets, the adding or
removing of trunk headers, the matching of Media Access Control (MAC) addresses to a MAC
address table, the matching of Internet Protocol (IP) addresses to paths in a routing table, the
encryption of data, Network Address Translation (NAT), and filtering by using either access control
lists (ACLs) or port security.
In both a controller-based network and a traditional network, the management plane consists of
network management protocols, such as Telnet, Secure Shell (SSH), Simple Network
Management Protocol (SNMP), and Syslog. All of these protocols enable an administrator to
connect to and manage a network device.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

QUESTION NO: 59
In a split-MAC deployment, which device is responsible for prioritizing packets and responding to
beacon and probe requests?
A.
a lightweight AP
B.
a router
C.
a switch
D.
a WLC

A

Answer: A
Explanation:
In a split-MAC deployment, a lightweight access point (AP) is responsible for prioritizing packets
and responding to beacon and probe requests. In a Cisco Unified Wireless Network deployment,
the Media Access Control (MAC) functions that are normally handled by a single device in an
autonomous wireless network are distributed between lightweight APs and wireless LAN
controllers (WLCs). The functionality provided by the lightweight AP includes handling the real-time processing of data, such as sending and receiving 802.11 traffic, responding to beacons and
probe messages, encryption, and packet prioritization. In addition, the lightweight AP must send
management information to the WLC so that the WLC can forward the information to a
management station.
By contrast, a WLC handles tasks that are not time-sensitive, such as security management,
lightweight AP configuration management, and client load balancing. The WLC is also responsible
for client association requests, data encapsulation, client authentication, key exchange, security
policy enforcement, and radio frequency (RF) management.
Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-1/Enterprise-Mobility-8-1-
Design-Guide/Enterprise_Mobility_8-1_Deployment_Guide/cuwn.html#pgfId-1173394 CCNA 200-
301 Official Cert Guide, Volume 1, Chapter 27: Analy

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

QUESTION NO: 63
Which of the following is enabled on a Cisco router when you issue the ntp server command from
global configuration mode?
A.
symmetric active mode
B.
broadcast client mode
C.
authentication
D.
static client mode
E.
server mode

A

Answer: D
Explanation:
Network Time Protocol (NTP) static client mode is enabled on a Cisco router when you issue the
ntp server command from global configuration mode. NTP is used to synchronize the time on
network devices. An NTP static client receives its time from an NTP server. The syntax of the ntp
server command is ntp server ip-address, where ip-address is the Internet Protocol (IP) address
of the NTP server that the client will use to receive its time.
NTP broadcast client mode is enabled on a Cisco router when you issue the ntp broadcast client
command from interface configuration mode. An NTP broadcast client listens on the configured
interface for NTP broadcasts from an NTP server, which the NTP client uses to adjust its time. The
difference between a broadcast client and a static client is that a broadcast client can receive its
time from any NTP server. By contrast, a static client receives its time from the NTP server
specified in the ntp server command.
NTP authentication is enabled on a Cisco router when you issue the ntp authenticate command
from global configuration mode. Authentication can be used with NTP to provide source
verification for NTP synchronization. NTP authentication supports only Message Digest 5 (MD5)
keys. To enable authentication on an NTP client, you should issue the following command set:
ntp authenticate
ntp authentication-key key-number md5 key
ntp trusted-key key-number
ntp server ip-address key key-number
To enable authentication on an NTP server, you should issue the following command set:
ntp authenticate
ntp authentication-key key-number md5 key
NTP server mode is enabled on a Cisco router when you issue the ntp master command from
global configuration mode. The syntax of the ntp master command is ntp master [stratum], where
stratum is an NTP stratum value from 1 through 15; if the stratum value is not specified, the NTP
server uses the default stratum value of 8. NTP servers not only synchronize time with NTP clients
but also synchronize time with each other. Devices with higher stratum numbers receive time from
devices with lower stratum numbers. For example, a stratum 2 device typically receives its time
from a stratum 1 device, a stratum 3 device typically receives its time from a stratum 2 device, and
so on.
NTP symmetric active mode is enabled on a Cisco router when you issue the ntp peer command
from global configuration mode. A device in symmetric active mode attempts to mutually
synchronize with another NTP host; the host might synchronize the peer, or it might be
synchronized by the peer. The syntax of the ntp peer command is ntp peer ip-address, where ip-address is the IP address of the NTP host.
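As an illustration only, the following sketch shows how the authentication commands listed above might fit together on a static client that authenticates an NTP server at 10.1.1.1 using key number 1; the server address, key number, and key string are assumed values for the example:
ntp authenticate
ntp authentication-key 1 md5 MyNtpK3y
ntp trusted-key 1
ntp server 10.1.1.1 key 1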

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

QUESTION NO: 64
On which interfaces is the OSPF broadcast network type enabled by default? (Choose two.)
A.
Ethernet
B.
X.25
C.
PPP
D.
Frame Relay
E.
HDLC
F.
FDDI

A

Answer: A,F
Explanation:
The Open Shortest Path First (OSPF) broadcast network type is enabled by default on Fiber
Distributed Data Interface (FDDI) and Ethernet interfaces, including Fast Ethernet and Gigabit
Ethernet interfaces. If the ip ospf network command has not been issued for an OSPF interface,
the default network type is used. The default OSPF network type depends upon the type of
network to which the interface is connected.
There are five OSPF network types:
* Broadcast
* Nonbroadcast
* Point-to-point
* Point-to-multipoint broadcast
* Point-to-multipoint nonbroadcast
On broadcast networks, designated router (DR) and backup designated router (BDR) elections are
performed. Multicast updates are sent, so manual configuration of neighbor routers with the
neighbor command is not required. By default, the Hello timer is set to 10 seconds and the dead
timer is set to 40 seconds. To configure an OSPF broadcast network, you should issue the ip ospf
network broadcast command.
The OSPF nonbroadcast network type is enabled by default on Frame Relay and X.25 interfaces.
On nonbroadcast networks, DR and BDR elections are performed. Nonbroadcast networks do not
allow multicasts; therefore, manual configuration of neighbor routers with the neighbor command
is required so that OSPF sends unicast updates. By default, the Hello timer is set to 30 seconds
and the dead timer is set to 120 seconds. To configure an OSPF nonbroadcast network, which is
also called a nonbroadcast multiaccess (NBMA) network, you should issue the ip ospf network
non-broadcast command.
The OSPF point-to-point network type is enabled by default on High-Level Data Link Control
(HDLC) and Point-to-Point Protocol (PPP) serial interfaces. On point-to-point networks, DR and
BDR elections are not performed. Multicast updates are sent, so manual configuration of
neighbor routers with the neighbor command is not required. By default, the Hello timer is set to
10 seconds and the dead timer is set to 40 seconds. To configure an OSPF point-to-point network,
you should issue the ip ospf network point-to-point command.
On OSPF point-to-multipoint networks, DR and BDR elections are not performed. Multicast
updates are sent, so manual configuration of neighbor routers with the neighbor command is not
required. By default, the Hello timer is set to 30 seconds and the dead timer is set to 120 seconds.
To configure an OSPF point-to-multipoint broadcast network, you should issue the ip ospf
network point-to-multipoint command.
On OSPF point-to-multipoint nonbroadcast networks, DR and BDR elections are not performed.
Nonbroadcast networks do not allow multicasts; therefore, manual configuration of neighbor
routers with the neighbor command is required so that OSPF sends unicast updates. By default,
the Hello timer is set to 30 seconds and the dead timer is set to 120 seconds. To configure an
OSPF point-to-multipoint nonbroadcast network, you should issue the ip ospf network point-to-multipoint non-broadcast command.
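For example, assuming a serial interface named Serial0/0 attached to a Frame Relay cloud (both the interface name and the topology are assumptions for illustration), the network type could be changed from its nonbroadcast default to point-to-multipoint as follows:
interface Serial0/0
 ip ospf network point-to-multipoint
Because point-to-multipoint networks send multicast updates, this change also removes the need to configure neighbors manually with the neighbor command.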

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

QUESTION NO: 65
Which of the following networks is not defined by RFC 1918?
A.
10.1.1.0
B.
192.168.111.0
C.
10.16.1.0
D.
172.172.1.0
E.
172.20.1.0
F.
192.168.1.0

A

Answer: D
Explanation:
The Internet Protocol (IP) address 172.172.1.0 is a public IP address and is not defined by
Request for Comments (RFC) 1918. Three network address ranges are defined as private
address ranges by RFC 1918. These network ranges are intended for private local area network
(LAN) use and are not routed across the Internet. RFC 1918 defines these private address ranges
and provides guidelines for their use. The following list indicates the IP address ranges that are
reserved for private, internal use:
* Class A – 10.0.0.0 to 10.255.255.255
* Class B – 172.16.0.0 to 172.31.255.255
* Class C – 192.168.0.0 to 192.168.255.255
Public IP addresses are maintained by the Internet Assigned Numbers Authority (IANA). In order
to use a public, routable IP address, you must obtain an available range of IP addresses from
IANA or from a third party that has obtained valid IP addresses from IANA. You can limit the
number of public IP addresses required for your network by using private IP addresses that have
been defined in RFC 1918 for internal use. This enables you to configure private IP addresses for
your network’s internal host computers. You can then implement Network Address Translation
(NAT) to translate these private addresses to a minimal number of public addresses for
transmission across public networks such as the Internet.
The IP addresses 10.16.1.0 and 10.1.1.0 are examples of a Class A internal, private address
defined by RFC 1918. The IP address 172.20.1.0 is an example of a Class B internal, private
address defined by RFC 1918. The IP addresses 192.168.111.0 and 192.168.1.0 are examples of
Class C internal, private addresses defined by RFC 1918

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

QUESTION NO: 66
You have issued the power inline police action log command from interface configuration mode
on a Cisco switch.
Which of the following best describes what will occur when an attached PD attempts to draw more
than its allocated amount of power from the configured interface?
A.
The port will enter an error-disabled state.
B.
A log message will appear on the console.
C.
The port will restart, and a log message will appear on the console.
D.
The port will enter an error-disabled state, and a log message will appear on the console

A

Answer: C
Explanation:
In this scenario, the port will restart and a log message will appear on the console when an
attached powered device (PD) attempts to draw more than its allocated amount of power from the
configured interface. Because sending an electrical current to a device that does not support
Power over Ethernet (PoE) could potentially damage the receiving device, power-sourcing
equipment (PSE), such as a PoE-capable switch, will first apply a small voltage to a PoE-enabled
port to determine whether a PD is attached to the port. The Institute of Electrical and Electronics
Engineers (IEEE) PoE standards require a PD to provide a measurable resistance of
approximately 25 kilo Ohms (kohms) when it is probed by a PSE. If the PSE detects a PD, the
PSE can then send a signal with a higher voltage to determine the class of the PD. When an IEEE
standards-compliant PD receives this higher-voltage signal from a PSE, its response will inform
the PSE about the PD’s power requirements. The PSE will categorize the PD into an appropriate
class, if possible, and will then guarantee a minimum amount of power relative to the class of the
PD. If the PSE cannot identify the appropriate class for a PD, the PD will be categorized into the
default class and will receive the default amount of power.
Power policing is a Cisco feature that enables a switch to monitor the current draw of connected
devices and to take action if the draw exceeds the amount allocated to the PD in accordance with
its negotiated power class. The allocated maximum power draw is referred to as the cutoff power
value. You can issue the power inline police command from interface configuration mode to
enable power policing with the default settings. When power policing is enabled with the default
settings for a PoE-capable interface, the interface will enter an error-disabled state, effectively
shutting down the port, when an attached PD attempts to draw more than the cutoff power from
the configured interface. A log message describing the event will also be sent to the console.
An interface in an error-disabled state will remain shut down until it is manually reset (by an
administrator issuing the shutdown and no shutdown commands in sequence for the interface)
or until the error-disabled auto recovery mechanism timer expires and the port is automatically
reset. Although error-disable detection for inline power is enabled by default on Cisco PoE-capable switches, error-disable auto recovery for inline power is not enabled by default. Therefore,
a port that has been placed into an error-disabled state by an inline power event will not
automatically reset by default. You can issue the errdisable recovery cause inline-power
command from global configuration mode to enable error-disable auto recovery for inline power.
You can issue the power inline police action log command to change the default power policing
behavior. When the log action is configured, a PoE-enabled interface will restart and send a log
message to the console when an attached PD attempts to draw more than the cutoff power from
the configured interface. This will typically cause the PD to reboot and to renegotiate its power
requirements.
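A minimal sketch of the commands discussed above, assuming the PD is attached to interface GigabitEthernet1/0/1 (the interface name is an assumption), might look like the following:
interface GigabitEthernet1/0/1
 power inline police action log
exit
errdisable recovery cause inline-power
The errdisable recovery cause inline-power command is optional here; it matters only if the default error-disable policing action is in use or if other inline power events might place the port into an error-disabled state.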

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

QUESTION NO: 68
You issue the show running-config command on RouterA and receive the following partial
output:
interface FastEthernet0/0
ip address 192.168.18.33 255.255.255.240
HostA is on the same physical network as the FastEthernet 0/0 interface of RouterA.
Which IP address should you configure on HostA to ensure that the host can communicate with
the rest of the network?
A.
192.168.18.32/28
B.
192.168.18.48/28
C.
192.168.18.46/28
D.
192.168.18.16/28

A

Answer: C
Explanation:
Of the choices available, you should use the 192.168.18.46/28 Internet Protocol (IP) address on
HostA. You should configure an IP address on HostA that is in the same network as RouterA’s
FastEthernet 0/0 interface so that HostA can communicate with the router. Based on the output
from the show running-config command on RouterA, the router is configured with an IP address
of 192.168.18.33 255.255.255.240, or 192.168.18.33/28.
The /28 in the 192.168.18.33/28 address indicates that 28 bits belong to the network portion of a
32-bit IP address. The remaining bits belong to the host portion of the IP address. To determine
how many addresses are defined by a subnet mask, use the formula 2^n, where n is the number of bits in the host portion of the address. A /28 subnet mask uses 4 bits for host addresses, so 2^4 equals 16 addresses for the subnet. Networks that are subnetted by using /28 masks are
separated into groups of 16 addresses each. For example, the 192.168.18.0 network can be
divided into the following subnets:
192.168.18.0/28
192.168.18.16/28
192.168.18.32/28
192.168.18.48/28
192.168.18.64/28
192.168.18.80/28
192.168.18.96/28
192.168.18.112/28
192.168.18.128/28
192.168.18.144/28
192.168.18.160/28
192.168.18.176/28
192.168.18.192/28
192.168.18.208/28
192.168.18.224/28
192.168.18.240/28
These addresses are the subnet addresses for each subnet defined by the subnet mask.
Therefore, the 192.168.18.32/28 address is a subnet address. The last address in this subnet,
192.168.18.47/28, is the broadcast address, and the 14 addresses from 192.168.18.33/28 through
192.168.18.46/28 are host addresses.
Therefore, given that RouterA’s FastEthernet 0/0 interface is configured with an IP address of
192.168.18.33, you can configure HostA with an IP address from 192.168.18.34 through
192.168.18.46. Thus, of the choices provided, you should configure HostA with an IP address of
192.168.18.46, which falls within this range. Using this address, HostA will be configured with an
IP address in the same subnet as RouterA’s FastEthernet 0/0 interface.

50
Q

QUESTION NO: 70
You connect a new Cisco Catalyst 3750-X switch to the LAN that is configured to use an NTP
server. You want the new switch to automatically obtain its time from the NTP server.
You manually configure the system clock and then issue the ntp server 1.1.1.1 command on the
new switch.
Which of the following should you do to complete a basic NTP client configuration?
A.
Reconfigure NTP access restrictions.
B.
Disable NTP authentication.
C.
Nothing; the basic configuration is complete.
D.
Reconfigure the NTP packet source IP address.
E.
Disable the NTP broadcast service

A

Answer: C
Explanation:
The basic configuration is already complete in this scenario. Therefore, nothing else needs to be
done. In this scenario, you have manually configured the Cisco switch’s system clock. In addition,
you have issued the ntp server 1.1.1.1 command, which configures Network Time Protocol (NTP)
on the switch to synchronize its time with the time on the device that has been assigned the
Internet Protocol (IP) address of 1.1.1.1.
By default, NTP is enabled on all interfaces on a Cisco switch. Therefore, all interfaces on a switch
can receive NTP packets. However, the following conditions apply to a default NTP configuration:
* NTP associations, such as peer or server associations, are not yet configured.
* NTP authentication is disabled.
* NTP access restrictions are not configured.
* NTP broadcast service is disabled.
* NTP packet source IP address is determined by the outgoing interface.
Although you could also configure NTP authentication, access restrictions, and a packet source IP
address in this scenario, those configurations are not necessary to complete a basic NTP client
configuration on the switch.
You do not need to configure the NTP packet source IP address as long as the address of the
outgoing interface can be used as a destination address for NTP replies. If for some reason the
outgoing interface IP address cannot be used as a destination IP address, you can configure an
alternate source address to which replies can be sent. To configure a specific source interface,
issue the ntp source interface-type interface-number command from global configuration mode.
The NTP broadcast service enables interfaces to send or receive NTP broadcast packets. It can
be enabled if you do not want to statically configure NTP associations. NTP broadcast messages
enable an NTP client to configure its time based on NTP broadcast messages from any NTP
server on the local area network (LAN). You can configure an interface to either send NTP
broadcast messages or receive NTP broadcast messages, but not both. To configure a Cisco
device to listen for NTP broadcasts on an interface, issue the ntp broadcast client command
from interface configuration mode.
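For illustration, assuming that NTP replies should be sourced from Loopback0 and that interface Vlan10 should listen for NTP broadcasts (both interface names are assumptions), the optional commands described above would be entered as follows:
ntp source Loopback0
interface Vlan10
 ntp broadcast client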

51
Q

QUESTION NO: 71
You want to configure your Cisco router to provide IP addresses to the computers on your
network. The IP addresses should be assigned from the 192.168.1.0/26 address range.
Which of the following commands should you issue? (Choose two.)
A.
dhcp pool 1
B.
host 192.168.1.0 255.255.255.192
C.
network 192.168.1.0 0.0.0.63
D.
network 192.168.1.0 255.255.255.192
E.
ip address dhcp
F.
ip dhcp pool 1

A

Answer: D,F
Explanation:
To configure your Cisco router to provide Internet Protocol (IP) addresses to the computers on
your network over Dynamic Host Configuration Protocol (DHCP), you should issue the ip dhcp
pool 1 command from global configuration mode and then issue the network 192.168.1.0
255.255.255.192 command from DHCP pool configuration mode. The router will then provide IP
addresses to hosts connected to the router interface that belongs to that subnet. The syntax of the
ip dhcp pool command is ip dhcp pool name, where name is the name of your DHCP pool. The
syntax of the network command is network address [mask | /prefix], where address is the
network address, mask is the subnet mask, and prefix is the prefix length in Classless Inter-Domain Routing (CIDR) notation.
You should not issue the dhcp pool 1 command, because it contains invalid syntax. To configure
a Cisco router to become a DHCP server, you must issue the ip dhcp pool name command.
You should not issue the network 192.168.1.0 0.0.0.63 command, because the subnet mask is
incorrectly specified as a wildcard mask. To configure your DHCP pool with addresses from the
192.168.1.0/26 range, you should issue the network 192.168.1.0 255.255.255.192 command.
Alternatively, you can configure the DHCP pool using Classless Inter-Domain Routing (CIDR)
notation by issuing the network 192.168.1.0 /26 command.
You should not issue the host 192.168.1.0 255.255.255.192 command. The host command,
when issued from DHCP pool configuration mode, is used to configure an IP address for a manual
binding. A manual binding enables a device to always receive the same IP address from DHCP by
associating a static IP address with the device’s Media Access Control (MAC) address. To
configure a manual binding, you should issue the host address [mask | /prefix] command from
DHCP pool configuration mode, where address is the address of the device, mask is the subnet
mask, and prefix is the prefix length in CIDR notation. Then you should issue the client-identifier
MAC command, where MAC is the client’s MAC address in dotted hexadecimal notation. For
example, to create a manual binding so that the computer with MAC address 0000.0c12.3456
always receives the IP address 192.168.1.20/26, you should issue the following commands:
host 192.168.1.20 /26
client-identifier 0000.0c12.3456
Alternatively, you can issue the host 192.168.1.20 255.255.255.192 command to specify the IP
address for the static mapping. You cannot use the same DHCP pool for manual bindings and for
dynamic IP address allocation.
You should not issue the ip address dhcp command. The ip address dhcp command configures
an interface to become a DHCP client so that it can receive IP configuration information from a
DHCP server. A DHCP client can receive an IP address, a subnet mask, a domain name, a
Domain Name System (DNS) server, and more from a DHCP server.
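Putting the correct commands together, a minimal DHCP server sketch for this scenario might look like the following; the excluded address range and the default-router statement are assumptions added only to round out the example:
ip dhcp excluded-address 192.168.1.1 192.168.1.10
ip dhcp pool 1
 network 192.168.1.0 255.255.255.192
 default-router 192.168.1.1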

52
Q

QUESTION NO: 72
In which of the following situations does a router use AD values to determine route selection?
A.
when multiple routes to different destination networks are received, and each of these routes is
received from a different routing protocol
B.
when multiple routes to the same destination network are received, and each of these routes is
received from a different routing protocol
C.
when multiple routes to different destination networks are received, and all of these routes are
received from the same routing protocol
D.
when multiple routes to the same destination network are received, and all of these routes are
received from the same routing protocol

A

Answer: B
Explanation:
A router uses administrative distance (AD) values to determine route selection when multiple
routes to the same destination network are received, and each of these routes is received from a
different routing protocol. Lower ADs are preferred over higher ADs. The following list contains the
most commonly used ADs:
* Connected interface – 0
* Static route – 1
* External BGP (eBGP) – 20
* Internal EIGRP – 90
* OSPF – 110
* IS-IS – 115
* RIP – 120
* External EIGRP – 170
* Internal BGP (iBGP) – 200

53
Q

QUESTION NO: 73
Which of the following commands should you issue on a switch port so that no more than two
devices can send traffic into the port?
A.
switchport port-security
B.
switchport port-security maximum 2
C.
switchport port-security mac-address sticky
D.
switchport port-security 2
E.
switchport port-security mac-address 2

A

Answer: B
Explanation:
You should issue the switchport port-security maximum 2 command from interface
configuration mode so that only two devices can send traffic into the port. The switchport port-security maximum 2 command configures the switch port to allow no more than two devices,
each with a unique Media Access Control (MAC) address, to send traffic into the port.
Port security allows traffic into a switch port from authorized MAC addresses. If traffic arrives from
a MAC address that is authorized, the traffic will be forwarded to its destination. If traffic arrives
from a MAC address that is not authorized, the traffic will be discarded, and on some switch
configurations, the switch port will also be disabled. By itself, the switchport port-security
command enables port security and authorizes a maximum of one MAC address to send traffic
into the port. The 2 parameter in the switchport port-security 2 command is an invalid
parameter; thus this command will not allow two devices to communicate on a switch port.
Authorized MAC addresses can be statically configured or dynamically learned. To statically
configure a switch port to allow traffic from a MAC address, you should issue the switchport port-security mac-address mac-address command in interface configuration mode. The 2 parameter
in the switchport port-security mac-address 2 command is an invalid parameter; thus this
command will not allow two devices to communicate on a switch port.
Any MAC addresses that are not configured statically will be learned dynamically from incoming
traffic, up to the maximum number of MAC addresses configured in the switchport port-security
maximum number command. The switchport port-security mac-address sticky command
converts dynamically learned MAC addresses to sticky MAC addresses. Sticky MAC addresses
are stored in the running configuration. To ensure that the sticky MAC addresses are not lost
during a reboot, you should issue the write memory or copy running-config startup-config
commands. You cannot use the switchport port-security mac-address sticky command by
itself to authorize a maximum of two devices to send traffic into a switch port; you must also issue
the switchport port-security maximum 2 command.
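As an illustrative sketch, assuming the port is GigabitEthernet0/1 and is configured as an access port (both assumptions for the example), the complete interface configuration might look like this:
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security mac-address sticky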

54
Q

QUESTION NO: 75
Which of the following is another name for a hypervisor?
A.
a VM
B.
an IaaS
C.
a PaaS
D.
a VMM

A

Answer: D
Explanation:
A virtual machine monitor (VMM) is another name for a hypervisor. A hypervisor is software that is
capable of virtualizing the physical components of computer hardware. Virtualization enables the
creation of multiple VMs that can be configured and run in separate instances on the same
hardware. In this way, virtualization is capable of reducing an organization’s expenses on
hardware purchases. A potential security risk associated with hypervisors is access to the
hypervisor itself. Individuals or services with access to the hypervisor are potentially capable of
compromising all of the VMs running on that hypervisor.
There are two types of hypervisors. A Type 1 hypervisor is a hypervisor that is installed on a bare
metal server, meaning that the hypervisor is also its own operating system (OS). Because of their
proximity to the physical hardware, Type 1 hypervisors typically perform better than Type 2
hypervisors. A Type 2 hypervisor cannot be installed on a bare metal server. Instead, Type 2
hypervisors are applications that are installed on host OSs, such as Microsoft Windows, macOS,
or Linux. These applications, which are also called hosted hypervisors, use calls to the host OS to
translate between guest OSs in VMs and the server hardware. Because they are installed similarly
to other applications on the host OS, Type 2 hypervisors are typically easier to deploy and
maintain than Type 1 hypervisors.
A virtual machine (VM) is not another name for a hypervisor. A VM is a virtual instance of a device
that runs on a hypervisor. In other words, a VM is a virtualized computing environment that relies
on a hypervisor to communicate with the physical hardware on which it is installed.
Neither Infrastructure as a Service (IaaS) nor Platform as a Service (PaaS) is another name for a
hypervisor. Cloud storage, IaaS, Software as a Service (SaaS), and PaaS are all terms used to
describe cloud computing, which is a general term for products or services that are provided by a
third party over a network. IaaS enables an organization to use the hardware resources of a third
party, such as processing, networking, and file system resources, to house and configure virtual
hosts. PaaS differs from IaaS because the licensee is using the third party’s development tools or
Application Programming Interface (API) to develop and deploy specific cloud-based applications

55
Q

QUESTION NO: 76
Which of the following terms best describes an Ethernet frame that exceeds 1,518 bytes and has a
bad FCS value?
A.
a runt
B.
a giant
C.
a jumbo
D.
a baby giant

A

Answer: B
Explanation:
A giant is an Ethernet frame that exceeds 1,518 bytes and has a bad Frame Check Sequence
(FCS) value. The default maximum transmission unit (MTU) size for Ethernet frames is 1,500
bytes, not including the Ethernet header and the cyclic redundancy check (CRC) trailer, which add
18 bytes to the frame. The FCS field in the Ethernet frame stores a 4-byte CRC value that is
intended to enable a frame’s receiver to determine whether the frame has been corrupted in
transit. The FCS is calculated based on the values of every other field in the frame. If a CRC error
is detected, the frame is discarded and the interface CRC and Frame counters are incremented.
Although the Ethernet standard requires frames to have a size between 64 bytes and 1,518 bytes,
there are devices that can support larger frame sizes. These nonstandard frames can facilitate the
efficient transmission of large data payloads in environments where large, non-standard frame
sizes are supported, such as data center storage network implementations. There are several
common Ethernet frames that exceed the standard size of 1,518 bytes. A baby giant is a frame
that is up to 1,600 bytes in length. Baby giants can occur if you use Q-in-Q encapsulation,
Multiprotocol Label Switching (MPLS), or any other feature that adds to the size of an Ethernet
frame. A jumbo is a frame that is up to 9,216 bytes in length, which is much larger than a standard
Ethernet frame.
A runt is a frame that is fewer than 64 bytes and has a bad FCS value. Frames that are smaller
than 64 bytes are discarded. Runts can sometimes be caused by excessive collisions but can also
be caused by malfunctioning hardware.

56
Q

QUESTION NO: 79
Which of the following standards natively includes PortFast, UplinkFast, and BackboneFast?
A.
802.1s
B.
802.1D
C.
802.1D and 802.1s
D.
802.1D and 802.1w
E.
802.1w

A

Answer: E
Explanation:
The 802.1w Rapid Spanning Tree Protocol (RSTP) standard natively includes PortFast,
UplinkFast, and BackboneFast. PortFast enables a port to immediately access the network by
transitioning the port into the Spanning Tree Protocol (STP) forwarding state without passing
through the listening and learning states. UplinkFast increases convergence speed for an access
layer switch that detects a failure on its root port by immediately replacing the root port with a previously selected backup (alternate) root port. BackboneFast increases convergence speed
for switches that detect a failure on links that are not directly connected to the switch.
The 802.1D standard is traditional STP, which prevents switching loops on a network. Although
PortFast, UplinkFast, and BackboneFast can be used with the 802.1D standard, it does not
contain those features natively. Traditional STP converges slowly, so the 802.1w RSTP standard
was developed by the Institute of Electrical and Electronics Engineers (IEEE) to address the slow
transition of an 802.1D port to the forwarding state. RSTP is backward compatible with STP, but
the convergence benefits provided by RSTP are lost when RSTP interacts with STP devices.
The 802.1s Multiple Spanning Tree (MST) standard is used to create multiple spanning tree
instances on a network. Implementing MST on a switch also implements RSTP. However, the
802.1s standard does not natively include PortFast, UplinkFast, and BackboneFast within the
specification.
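For reference, the rapid spanning tree behavior and the PortFast edge-port optimization can be enabled on a Cisco switch with commands similar to the following sketch; Rapid PVST+ is Cisco's per-VLAN implementation of 802.1w, and the interface name is an assumption:
spanning-tree mode rapid-pvst
interface GigabitEthernet0/2
 spanning-tree portfast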
Reference: https://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/24062-146.html#conclusion CCNA 200-301 Official Cert Guide, Volume 1, Chapter 9:
Spanning Tree Protocol Concepts, Optional STP Features

57
Q

QUESTION NO: 81
Which of the following configuration management tools accepts inbound requests from agents by
using HTTPS on TCP port 8140?
A.
Ansible
B.
Chef
C.
Puppet
D.
Salt

A

Answer: C
Explanation:
Puppet is the configuration management tool that accepts inbound requests from agents by using
Hypertext Transfer Protocol Secure (HTTPS) on Transmission Control Protocol (TCP) port 8140.
Of the four major configuration management tools, Puppet is the most mature and the most widely
used. Puppet operates on Linux distributions, UNIX-like systems, and Microsoft Windows. Puppet
uses a client/server architecture; managed nodes that are running the Puppet Agent application
can receive configurations from a master server that is running Puppet Server. Modules are
written in Ruby Domain Specific Language (DSL) or in a Ruby-like Puppet language known as
Puppet DSL.
Like Puppet, Chef operates on Linux distributions, UNIX-like systems, and Microsoft Windows.
Chef can use a client/server architecture or a standalone client configuration. Chef communicates
by using HTTPS on the traditional TCP port 443. Configuration information is contained within
cookbooks that are written in Ruby DSL and are stored on a Chef Server. Managed nodes that are
running the Chef Client can pull cookbooks from the server. Standalone clients that do not have
access to a server can run chef-solo and pull cookbooks from a local directory or from a tar.gz
archive on the Internet.
Like the other configuration management software packages, Ansible also operates on Linux
distributions, UNIX-like systems, and Microsoft Windows. However, unlike the other configuration
management software packages, Ansible does not use agent software on managed nodes.
Ansible uses Secure Shell (SSH) to connect to remote nodes. By default, SSH operates on TCP
port 22. Configurations are stored on the Ansible server in playbooks that are written in YAML Ain’t
Markup Language (YAML). Managed nodes can download scripted modules from an Ansible
server by using SSH.
Salt also operates on Linux distributions, UNIX-like systems, and Microsoft Windows. Salt can use
a client/server architecture by installing Salt master software on the server and Salt minion
software on managed nodes. Masters and minions communicate by using ZeroMQ. To
communicate, Salt requires TCP ports 4505 and 4506. Salt can also be used without client agent
software by using Salt SSH. However, Salt SSH is much slower than ZeroMQ. Salt configuration
information is stored primarily in state modules that are typically written in YAML; however, Python
or Python Domain Specific Language (PyDSL) can also be used for complex configuration scripts.

58
Q

QUESTION NO: 82
Which of the following components are used to calculate the EIGRP composite metric by default?
(Choose two.)
A.
reliability
B.
bandwidth
C.
cost
D.
delay
E.
MTU
F.
hop count
G.
load

A

Answer: B,D
Explanation:
By default, Enhanced Interior Gateway Routing Protocol (EIGRP) uses bandwidth and delay to
calculate the composite metric, which is used to determine the best path to a destination network.
Bandwidth refers to the data throughput of a link. Delay refers to the length of time required to
send a packet to a destination.
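With the default K values (K1 = 1 and K3 = 1, with K2, K4, and K5 set to 0), the composite metric is commonly expressed as metric = 256 x (bandwidth + delay), where bandwidth is 10^7 divided by the lowest interface bandwidth (in kbps) along the path and delay is the sum of the interface delays along the path in tens of microseconds.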
EIGRP can also use load and reliability as components, but these components are not used by
default. Load refers to the amount of data activity over a link. Reliability refers to the bit-error rate
of a link.
The maximum transmission unit (MTU) is not used to calculate the EIGRP metric. MTU refers to
the maximum length of frames that can be accepted by devices along the data route.
Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS) use
cost to calculate the best path to a destination network. By default, OSPF and IS-IS calculate the
cost based on bandwidth. However, cost can be configured by using any value that an
administrator desires, such as the monetary cost of using a link.
Routing Information Protocol version 1 (RIPv1) and RIPv2 use hop count to calculate the best path
to a destination network. Hop count refers to the number of routers a packet will traverse from
source to destination. However, RIP has a hop-count limitation of 15 hops; any route more than 15
hops away is considered to be unreachable. With a defined maximum metric, a routing protocol
can mitigate routing loops caused by invalid routing updates.
Reference: https://www.cisco.com/c/en/us/support/docs/ip/enhanced-interior-gateway-routing-protocol-eigrp/13673-14.html

59
Q

QUESTION NO: 84
Which of the following best describes authentication?
A.
the process of verifying a user’s identity
B.
the process of verifying the level of access configured for a user
C.
the process of establishing a user’s accounts upon hire
D.
the process of recording the use of resources

A

Answer: A
Explanation:
Authentication is the process of verifying a user’s identity. The following list defines the three
phases of the Authentication, Authorization, and Accounting (AAA) process:
* Authentication – the process of verifying a user’s identity
* Authorization – the process of verifying the level of access configured for a user
* Accounting – the process of recording the use of resources
AAA systems manage user activity. AAA systems are typically more sophisticated than simple
password authentication systems, such as a local password database. Two common AAA
systems are Remote Authentication Dial-In User Server (RADIUS) and Terminal Access Controller
Access-Control System Plus (TACACS+).
RADIUS is a standard AAA protocol created by the Internet Engineering Task Force (IETF).
Compared to TACACS+, RADIUS has several limitations. For example, RADIUS encrypts only the
password in Access-Request packets; it does not encrypt the entire contents of the packet like
TACACS+ does.
TACACS+ is a Cisco-proprietary protocol used during AAA operations. TACACS+ provides more
granular and flexible control over user access privileges. For example, the AAA operations are
separated by TACACS+, whereas RADIUS combines the authentication and authorization
services into a single function. Because TACACS+ separates these functions, administrators have
more control over access to configuration commands. In addition, TACACS+ encrypts the entire
contents of packets, thus providing additional security.
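As a simple illustrative sketch, the following commands would enable AAA on a Cisco device and use a RADIUS server for login authentication, falling back to the local user database if the server is unreachable; the server address and shared key are assumed values:
aaa new-model
radius-server host 10.0.0.5 key RadiusK3y
aaa authentication login default group radius local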

60
Q

QUESTION NO: 85
You issue the show ip route command on RouterA and receive the following partial output:
S 10.20.0.0/16 [1/0] via 192.168.10.2
D 10.20.0.0/20 [90/2809856] via 192.168.10.4, 00:02:14, Serial0/4
R 10.20.0.0/24 [120/3] via 192.168.10.3, 00:33:38, Serial0/3
O 10.20.0.0/28 [110/64] via 192.168.10.1, 00:02:38, Serial0/1
RouterA receives a packet that is destined for 10.20.0.17.
To which next-hop IP address will RouterA send the packet?
A.
192.168.10.1
B.
192.168.10.3
C.
192.168.10.2
D.
192.168.10.4

A

Answer: B
Explanation:
RouterA will send the packet to the next-hop address 192.168.10.3. RouterA will use the Routing
Information Protocol (RIP) route, because it is the route with the longest prefix match. When a
packet is sent to a router, the router checks the routing table to see if the next-hop address for the
destination network is known. If multiple routes to a destination are known, the most specific route
is used. Therefore, the following rules apply on RouterA:
* Packets sent to the 10.20.0.0/28 network use the Open Shortest Path First (OSPF) route. This
includes destination addresses from 10.20.0.0 through 10.20.0.15.
* Packets sent to the 10.20.0.0/24 network, except those sent to the 10.20.0.0/28 network, use the
RIP route. This includes destination addresses from 10.20.0.16 through 10.20.0.255.
* Packets sent to the 10.20.0.0/20 network, except those sent to the 10.20.0.0/24 network, use the
Enhanced Interior Gateway Routing Protocol (EIGRP) route. This includes destination addresses
from 10.20.1.0 through 10.20.15.255.
* Packets sent to the 10.20.0.0/16 network, except those sent to the 10.20.0.0/20 network, use the
static route. This includes destination addresses from 10.20.16.0 through 10.20.255.255.
* Packets sent to any destination not listed in the routing table are forwarded to the default
gateway, if one is configured.
Because the most specific route to 10.20.0.17 is the route toward the 10.20.0.0/24 network,
RouterA will forward a packet destined for 10.20.0.17 to the next-hop address 192.168.10.3
through the Serial0/3 interface.
RouterA will not use the OSPF route to send a packet destined for 10.20.0.17 to the next-hop
address 192.168.10.1, because 10.20.0.17 is outside the 10.20.0.0/28 address range. Packets
destined to addresses within the 10.20.0.0/28 subnet will be sent by using the OSPF route to the
next-hop address 192.168.10.1.
RouterA will not use the static route to send a packet destined for 10.20.0.17 to the next-hop
address 192.168.10.2. Although the static route has the lowest AD, AD values are used only to
determine which route is placed in the routing table when multiple routes to a destination are
known. A router considers routes with different prefix lengths as separate routes. If the static route
were configured so that the destination network were 10.20.0.0/24, the static route would be
preferred over the RIP route.
RouterA will not use the EIGRP route to send a packet destined for 10.20.0.17 to the next-hop
address 192.168.10.4. If OSPF, EIGRP, and RIP had all advertised routes to 10.20.0.0/24, the
EIGRP route would have been selected because EIGRP has the lowest AD of the three dynamic routing protocols.

61
Q

QUESTION NO: 86
Which of the following Layer 2 attacks uses the MAC address of another known host on the
network in order to bypass port security measures?
A.
MAC spoofing
B.
ARP poisoning
C.
MAC flooding
D.
VLAN hopping
E.
DHCP spoofing

A

Answer: A
Explanation:
In a Media Access Control (MAC) spoofing attack, an attacker uses the MAC address of another
known host on the network in order to bypass port security measures. MAC spoofing can also be
used to impersonate another host on the network. Implementing port security with sticky secure
MAC addresses can help mitigate MAC spoofing attacks.
In a MAC flooding attack, an attacker generates thousands of forged frames every minute with the
intention of overwhelming the switch’s MAC address table. Once this table is flooded, the switch
can no longer make intelligent forwarding decisions and all traffic is flooded. This allows the
attacker to view all data sent through the switch because all traffic will be sent out each port.
Implementing port security can help mitigate MAC flooding attacks by limiting the number of MAC
addresses that can be learned on each interface. A MAC flooding attack is
also known as a Content Addressable Memory (CAM) table overflow attack.
In an Address Resolution Protocol (ARP) poisoning attack, which is also known as an ARP
spoofing attack, the attacker sends a gratuitous ARP (GARP) message to a host. The GARP
message associates the attacker’s MAC address with the Internet Protocol (IP) address of a valid
host on the network. Subsequently, traffic sent to the valid host address will go through the
attacker’s computer rather than directly to the intended recipient. Implementing Dynamic ARP
Inspection (DAI) can help mitigate ARP poisoning attacks.
In a virtual local area network (VLAN) hopping attack, an attacker attempts to inject packets into
other VLANs by accessing the VLAN trunk and double-tagging 802.1Q frames. A successful VLAN
hopping attack enables an attacker to send traffic to other VLANs without the use of a router. You
can prevent VLAN hopping by disabling Dynamic Trunking Protocol (DTP) on trunk ports, by
changing the native VLAN, and by configuring user-facing ports as access ports.
In a Dynamic Host Configuration Protocol (DHCP) spoofing attack, an attacker installs a rogue
DHCP server on the network in an attempt to intercept DHCP requests. The rogue DHCP server
can then respond to the DHCP requests with its own IP address as the default gateway address;
hence all traffic is routed through the rogue DHCP server. You should enable DHCP snooping to
help prevent DHCP spoofing attacks.
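For illustration, DHCP snooping and Dynamic ARP Inspection could be enabled for VLAN 10 with commands similar to the following sketch; the VLAN number and the trusted uplink interface are assumptions:
ip dhcp snooping
ip dhcp snooping vlan 10
ip arp inspection vlan 10
interface GigabitEthernet0/24
 ip dhcp snooping trust
 ip arp inspection trust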

62
Q

QUESTION NO: 88
Which of the following commands will automatically enable SSH on a router?
A.
no transport input telnet
B.
enable secret
C.
crypto key generate rsa
D.
crypto key zeroize rsa
E.
transport input ssh

A

Answer: C
Explanation:
The crypto key generate rsa command will automatically enable Secure Shell (SSH) on a router.
This command creates a set of RSA security keys that can be used for SSH sessions. RSA is an
asymmetric encryption algorithm that can be used to create a public/private key pair. SSH is a
cryptographic protocol that provides a secure connection between two devices. A router uses the
security keys generated by RSA to secure SSH connections. Information sent by using SSH is
encrypted and, thus, is not viewable by using packet sniffing applications.
SSH is often used as a secure replacement for Telnet to manage network devices. In order for
SSH to be enabled on a Cisco device, the device must be running a K9 IOS image, which
provides cryptographic functionality. To enable SSH for virtual terminal (VTY) lines on a Cisco
router, you should complete the following steps:
1. Configure the router with a host name other than Router by issuing the hostname command.
2. Configure the router with a domain name by issuing the ip domain-name command.
3. Generate an RSA key pair for the router by issuing the crypto key generate rsa command.
4. Configure the VTY lines to use SSH by issuing the transport input ssh command from line
configuration mode.
The transport input ssh command does not enable SSH on the router. The transport input ssh
command only configures the VTY lines to use SSH if SSH has already been configured.
The crypto key zeroize rsa command removes RSA keys from a router. You may want to remove
RSA keys in order to generate new keys. However, removing RSA keys does not automatically
enable SSH on a router.
The enable secret command can be used to help prevent unauthorized access to privileged
EXEC mode. Using the enable secret command is more secure than using the enable password
command because the enable secret command configures the enable password to be stored as a
Message Digest 5 (MD5) hash, whereas the enable password command configures the enable
password to be stored as plain text. However, the enable secret command does not automatically
enable SSH on a router.
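Following the four steps listed above, a minimal configuration sketch might look like this; the host name, domain name, and key modulus size are assumed values, and the login local command is an addition that points the VTY lines at the local user database:
hostname R1
ip domain-name example.com
crypto key generate rsa modulus 2048
line vty 0 4
 transport input ssh
 login local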

63
Q

QUESTION NO: 89
Which of the following Cisco lightweight AP modes provides BSSs?
A.
bridge
B.
FlexConnect
C.
local
D.
sniffer

A

Answer: C
Explanation:
Of the available choices, only a Cisco lightweight access point (AP) operating in local mode
provides basic service sets (BSSs). A BSS is a closed group of wireless devices that are
dependent on a fixed device. Before a wireless device can join the group, it must advertise its
capabilities and obtain permission from the fixed device. A lightweight AP provides an interface for
wireless clients to connect to the wireless local area network (WLAN) but requires a wireless LAN
controller (WLC) for management functions. This is in contrast to an autonomous AP, which
provides BSSs without the need for a WLC.
A Cisco lightweight AP operating in local mode, which is the default, is capable of providing
multiple BSSs on a single channel. In this mode, the AP can connect to a WLC and can provide
client connectivity. In addition, an AP operating in local mode scans all wireless channels as a
means of monitoring wireless quality and security. The connection between a lightweight AP and a
WLC is created by using two tunnels established by the Control and Provisioning of Wireless
Access Points (CAPWAP) tunneling protocol. Information sent between lightweight APs and the
WLC is encapsulated in Internet Protocol (IP) packets. This process enables a lightweight AP and
WLC to manage connectivity to the same WLAN yet be separated by both physical and logical
means.
A Cisco lightweight AP operating in FlexConnect mode does not provide BSSs. Instead,
FlexConnect mode enables a failsafe for the lightweight AP if its connection to the WLC by way of
CAPWAP tunnels goes down. When configured, FlexConnect mode enables a lightweight AP to
switch traffic between a given Service Set Identifier (SSID) and a given virtual local area network
(VLAN).
A Cisco lightweight AP operating in bridge mode does not provide BSSs. Bridge mode enables a
lightweight AP to act as a dedicated connection between two networks. Lightweight APs operating
in bridge mode can connect to other networks in either a point-to-point or a point-to-multipoint
fashion. When multiple APs are configured in bridge mode, the collection of lightweight APs can
be used to form a mesh network.
A Cisco lightweight AP operating in sniffer mode does not provide BSSs. Sniffer mode allows a
lightweight AP to capture wireless traffic, similar to the way a wired network sniffer behaves. When
traffic is captured, a lightweight AP that is operating in sniffer mode will send the traffic to an
analyzer, which is typically software that is installed on a PC or other host.

64
Q

QUESTION NO: 90
Which of the following statements about FlexConnect ACLs is true?
A.
They are applied per AP and per interface.
B.
They are supported on the native VLAN.
C.
They do not support the implicit deny rule.
D.
They can be configured with a per-rule direction.

A

Answer: B
Explanation:
FlexConnect access control lists (ACLs) are supported on the native virtual local area network
(VLAN). FlexConnect ACLs are similar to traditional Cisco IOS ACLs in that they are rules that
permit or deny traffic from a given source to a given destination. However, FlexConnect ACLs are
configured on Cisco wireless lightweight access point (AP) VLAN interfaces if the lightweight AP is
operating in FlexConnect mode. Although it is possible to configure FlexConnect ACLs for the
native VLAN, it is not possible to configure FlexConnect ACLs for the native VLAN if the VLAN
configuration is inherited from a FlexConnect group.
FlexConnect ACLs are applied per AP and per VLAN, not per AP and per interface. One possible
application of FlexConnect ACLs is to prevent administration of the wireless local area network
(WLAN) from a particular VLAN. Even though FlexConnect ACLs are applied differently than
traditional ACLs, it is important to name FlexConnect ACLs differently from any traditional ACLs
that might be configured on the WLAN.
FlexConnect ACLs cannot be configured with a per-rule direction. This is in contrast to a traditional
ACL, which can be configured with inbound rules or outbound rules. A FlexConnect ACL is applied
in the ingress direction or the egress direction as an entire set of rules, not on a per-rule basis.
FlexConnect ACLs support the implicit deny rule. In this way, FlexConnect ACLs work similarly to
traditional ACLs. The implicit deny rule is an invisible rule that is applied to the end of an ACL. It
ensures that traffic that is not explicitly matched by a previous rule in the ACL is denied by the
ACL.

65
Q

QUESTION NO: 91
Which of the following statements about unique local unicast IPv6 addresses are true? (Choose
two.)
A.
They are assigned by ICANN.
B.
The first 7 bits of the prefix are always 1111110.
C.
They are equivalent to IPv4 multicast addresses.
D.
They are unique only within an organization.
E.
They can communicate only over a single link

A

Answer: B,D
Explanation:
Internet Protocol version 6 (IPv6) unique local unicast addresses are unique only within an
organization. They are similar to IP version 4 (IPv4) Request for Comments (RFC) 1918
addresses in that they are assigned by a local administrator and must be unique only within an
organization. These addresses always begin with FC or FD because the first 7 bits of an IPv6
unique local unicast address are always 1111110. Unique local unicast addresses require a
randomly generated prefix to ensure that they are unique. Because of the random nature of the
network prefix, unique local unicast addresses are not aggregatable and cannot be summarized.
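As an illustration, a unique local address could be assigned to a router interface as in the following sketch; the FD12:3456:789A::/48 prefix and the interface name are randomly chosen assumptions:
interface GigabitEthernet0/0
 ipv6 address FD12:3456:789A:1::1/64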
IPv6 global unicast addresses, not unique local addresses, are assigned by the Internet
Corporation for Assigned Names and Numbers (ICANN). The IPv6 prefix of 2000::/3 is the global
unicast IPv6 prefix. IPv6 global unicast addresses are similar to IPv4 global unicast addresses in
that they are globally routable. These addresses are assigned by ICANN to the Regional Internet
Registries (RIRs), which distribute the addresses to Internet service providers (ISPs). The ISPs
then distribute address ranges to organizations. IPv6 global unicast addresses always begin with a
2 or a 3 because the first 3 bits of an IPv6 global unicast address are always 001.
IPv6 multicast addresses are similar to IPv4 multicast addresses. The IPv6 prefix FF00::/8 is used
for multicast addresses, which are used for one-to-many communication. IPv6 addresses in the
FF00::/8 range begin with the characters FF00 through FFFF. However, certain address ranges
are used to indicate the scope of the multicast address. The following IPv6 multicast scopes are
defined:
* FF01::/16 – node-local
* FF02::/16 – link-local
* FF05::/16 – site-local
* FF08::/16 – organization-local
* FF0E::/16 – global
These addresses always begin with FF because the first 8 bits of an IPv6 multicast address are
always 11111111.
IPv6 link-local unicast addresses, not unique local addresses, are used for communication over a
single link. Routers do not forward traffic sent to a link-local address; the traffic stays on the local
link. IPv6 link-local unicast addresses are often used for neighbor discovery. These addresses
usually begin with FE8, as specified in RFC 4291. Technically, these addresses could begin with
FE8, FE9, FEA, or FEB because there are four possible combinations of the first 12 bits of the address.
The first 10 bits of an IPv6 link-local unicast address are always 1111111010, or FE, which means
that link-local IPv6 addresses could technically begin with any of the following:
* 1111 1110 1000, which is equal to FE8
* 1111 1110 1001, which is equal to FE9
* 1111 1110 1010, which is equal to FEA
* 1111 1110 1011, which is equal to FEB
Reference: https://www.ripe.net/manage-ips-and-asns/ipv6/ipv6-addresstypes/ipv6addresstypes.pdf (PDF) CCNA 200-301 Official Cert Guide, Volume 1, Chapter 23: IPv6
Addressing and Subnetting, Unique Local Unicast Addresses

66
Q

QUESTION NO: 92
You are configuring Layer 2 security on a WLAN by using the WLC GUI. You select WPA+WPA2
from the Layer 2 Security drop-down list box. You want to configure the WPA2 key in
hexadecimal format.
Which of the following WPA2 key management methods should you select from the Auth Key
Mgmt drop-down list box?
A.
802.1X
B.
CCKM
C.
PSK
D.
802.1X+CCKM

A

Answer: C
Explanation:
You should select the PSK Wi-Fi Protected Access 2 (WPA2) key management method from the
Auth Key Mgmt drop-down list box if you want to configure the WPA2 key in hexadecimal format.
The PSK method configures WPA or WPA2 to use the Pre-Shared Key (PSK) key management
method. This method requires that an administrator configure each wireless client that will connect
to the network with the key that is configured on the Cisco Wireless LAN Controller (WLC). The
PSK option supports key entry as either an ASCII passphrase from 8 through 63 characters in
length or a key of 64 hexadecimal values. Combining WPA or WPA2 with a PSK key management
method is often known as WPA-PSK, or WPA Personal.
You should not select the 802.1X key management method in this scenario. The Institute of
Electrical and Electronics Engineers (IEEE) 802.1X standard defines a method of port-based
network access control. On Cisco wireless local area networks (WLANs), the 802.1X key
management method is the default method for both WPA and WPA2. It typically requires a
Remote Authentication Dial-In User Service (RADIUS) server and uses various Extensible
Authentication Protocol (EAP) implementations to authenticate users. Combining WPA or WPA2
with an 802.1X key management method is often known as WPA-8021X mode, or WPA
Enterprise.
You should not select the CCKM key management method in this scenario. This option enables
the Cisco Centralized Key Management (CCKM) key management method. CCKM is a Cisco-proprietary fast-rekeying method that enables a wireless client to roam from one access point to
another without requiring intervention from the WLC. CCKM is typically used to reduce delay when
wireless clients transition between access points so that delay-sensitive services, such as Voice
over Internet Protocol (VoIP), operate smoothly.
You should not select the 802.1X+CCKM key management method in this scenario. This option
enables 802.1X clients to use the CCKM key management method to roam between access points
without performing the complete authentication process again. Normally, 802.1X clients mutually
authenticate to a new access point. This process likewise involves reauthenticating with the
RADIUS server. The 802.1X+CCKM key management method removes the need to
reauthenticate with the RADIUS server, thus reducing the amount of time it takes for an 802.1X
client to roam between access points

67
Q

QUESTION NO: 93
Which of the following is most likely to be considered a form of accounting?
A.
logging a verified user’s file access
B.
verifying a user’s password
C.
assigning a role to a verified user
D.
allowing a user to access a specific file
E.
verifying a user’s fingerprint pattern

A

Answer: A
Explanation:
Logging a verified user’s file access is a form of accounting. Authentication, Authorization, and
Accounting (AAA) systems manage user activity. Accounting is a feature of AAA systems that
enables administrators to track resource usage across a network. If a security incident occurs,
accounting can aid the effort to track the incident back to its source. AAA systems are typically
more sophisticated than simple password authentication systems, such as a local password
database. Two common AAA systems are Remote Authentication Dial-In User Service (RADIUS)
and Terminal Access Controller Access-Control System Plus (TACACS+).
RADIUS is a standard AAA protocol created by the Internet Engineering Task Force (IETF).
Compared to TACACS+, RADIUS has several limitations. For example, RADIUS encrypts only the
password in Access-Request packets; it does not encrypt the entire contents of the packet like
TACACS+ does.
TACACS+ is a Cisco-proprietary protocol used during AAA operations. TACACS+ provides more
granular and flexible control over user access privileges. For example, the AAA operations are
separated by TACACS+, whereas RADIUS combines the authentication and authorization
services into a single function. Because TACACS+ separates these functions, administrators have
more control over access to configuration commands. In addition, TACACS+ encrypts the entire
contents of packets, thus providing additional security.
The following list defines the three phases of the AAA process:
* Authentication – the process of verifying a user’s identity
* Authorization – the process of verifying the level of access configured for a user
* Accounting – the process of recording the use of resources
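To make these phases concrete, the following minimal IOS sketch enables AAA and applies authentication, authorization, and accounting to EXEC sessions by using a TACACS+ server group, falling back to the local user database. The server group name, server address, and key are hypothetical and are not taken from this question:

aaa new-model
tacacs server TACSRV1
 address ipv4 192.0.2.10
 key ExampleSharedKey
aaa group server tacacs+ ADMIN-TACACS
 server name TACSRV1
aaa authentication login default group ADMIN-TACACS local
aaa authorization exec default group ADMIN-TACACS local
aaa accounting exec default start-stop group ADMIN-TACACS

In this sketch, only the last line performs accounting; the start-stop keyword records both the start and the end of each EXEC session.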
Verifying a user’s fingerprint pattern and verifying a user’s password are both likely to be
considered authentication, not accounting. Authentication is the process of verifying a user’s
identity. Authentication by itself does not grant access to a given resource.
Allowing a user to access a specific file is a form of authorization, not accounting. Similarly,
assigning a role to a verified user is a form of authorization. Allowing access to a specific file can
involve assigning specific user or group permissions directly to the file, matching a rule, such as
an access control list (ACL) that associates a specific user with a file, or assigning a user to a
specific role that has permission to access the file

68
Q

QUESTION NO: 95
You have issued the power inline police command from interface configuration mode on a Cisco
switch.
Which of the following best describes what will occur when an attached PD attempts to draw more
than its allocated amount of power from the configured interface?
A.
The port will enter an error-disabled state, and a log message will appear on the console.
B.
The port will restart, and a log message will appear on the console.
C.
A log message will appear on the console.
D.
The port will enter an error-disabled state.

A

Answer: A
Explanation:
In this scenario, the port will enter an error-disabled state and a log message will appear on the
console when an attached powered device (PD) attempts to draw more than its allocated amount
of power from the configured interface. Because sending an electrical current to a device that does
not support Power over Ethernet (PoE) could potentially damage the receiving device, power-sourcing equipment (PSE), such as a PoE-capable switch, will first apply a small voltage to a PoE-enabled port to determine whether a PD is attached to the port. The Institute of Electrical and
Electronics Engineers (IEEE) PoE standards require a PD to provide a measurable resistance of
approximately 25 kilo Ohms (kohms) when it is probed by a PSE. If the PSE detects a PD, the
PSE can then send a signal with a higher voltage to determine the class of the PD. When an IEEE
standards-compliant PD receives this higher-voltage signal from a PSE, its response will inform
the PSE about the PD’s power requirements. The PSE will categorize the PD into an appropriate
class, if possible, and will then guarantee a minimum amount of power relative to the class of the
PD. If the PSE cannot identify the appropriate class for a PD, the PD will be categorized into the
default class and will receive the default amount of power.
Power policing is a Cisco feature that enables a switch to monitor the current draw of connected
devices and to take action if the draw exceeds the amount allocated to the PD in accordance with
its negotiated power class. The allocated maximum power draw is referred to as the cutoff power
value. You can issue the power inline police command from interface configuration mode to
enable power policing with the default settings. When power policing is enabled with the default
settings for a PoE-capable interface, the interface will enter an error-disabled state, effectively
shutting down the port, when an attached PD attempts to draw more than the cutoff power from
the configured interface. A log message describing the event will also be sent to the console.
An interface in an error-disabled state will remain shut down until it is manually reset (by an
administrator issuing the shutdown and no shutdown commands in sequence for the interface)
or until the error-disable auto recovery mechanism timer expires and the interface is automatically
reset. Although error-disable detection for inline power is enabled by default on Cisco PoEcapable switches, error-disable auto recovery for inline power is not enabled by default. Therefore,
a port that has been placed into an error-disabled state by an inline-power event will not
automatically reset by default. You can issue the errdisable recovery cause inline-power
command from global configuration mode to enable error-disable auto recovery for inline power.
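As an illustrative sketch only (the interface name and recovery interval are hypothetical, not taken from this question), power policing and error-disable auto recovery for inline power could be combined as follows:

interface GigabitEthernet1/0/5
 power inline police
exit
errdisable recovery cause inline-power
errdisable recovery interval 300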
You can issue the power inline police action log command to change the default power policing
behavior. When the log action is configured, a PoE-enabled interface will restart and send a log
message to the console when an attached PD attempts to draw more than the cutoff power from
the configured interface. This will typically cause the PD to reboot and to renegotiate its power

69
Q

QUESTION NO: 96
You are configuring security on a new Guest LAN by using the WLC GUI.
Which of the following security settings are you most likely to configure by using the Layer 3
Security drop-down list box on the Layer 3 tab? (Choose two.)
A.
802.1X
B.
WPA+WPA2
C.
Web Passthrough
D.
Web Authentication
E.
Static WEP

A

Answer: C,D
Explanation:
Of the available choices, you are most likely to configure Web Authentication or Web
Passthrough by using the Layer 3 Security drop-down list box on the Layer 3 tab of the Cisco
Wireless LAN Controller (WLC) graphical user interface (GUI). There are two types of wireless
local area networks (WLANs) that you can configure by using the WLC GUI: a WLAN and a Guest
LAN. When you configure a new WLAN by using the WLC GUI, you can configure security
settings by clicking the new WLAN’s Security tab. By default, the Layer 2 tab is selected when
you click the Security tab. However, it is not possible to configure Layer 2 security on a Guest
LAN.
When you are configuring a WLAN, you can select one of the following Layer 2 wireless security
features from the Layer 2 Security drop-down list box on the Layer 2 tab of the Security tab:
* None, which disables Layer 2 security and allows open authentication to the WLAN
* WPA+WPA2, which enables Layer 2 security by using Wi-Fi Protected Access (WPA) or the
more secure WPA2
* 802.1X, which enables Layer 2 security by using Extensible Authentication Protocol (EAP)
authentication combined with a dynamic Wired Equivalent Privacy (WEP) key
* Static WEP, which enables Layer 2 security by using a static shared WEP key
* Static WEP + 802.1X, which enables Layer 2 security by using either a static shared WEP key or
EAP authentication
* CKIP, which enables Layer 2 security by using the Cisco Key Integrity Protocol (CKIP)
* None + EAP Passthrough, which enables Layer 2 security by using open authentication
combined with remote EAP authentication
There are two different sets of Layer 3 security features that you can configure on a Cisco WLC:
one set for a WLAN and one set for a Guest LAN. Depending on which type of WLAN you create
and which Layer 2 security options you have selected, you can select one of the following Layer 3
wireless security features from the Layer 3 Security drop-down list box on the Layer 3 tab of the
Security tab in the WLC GUI:
* None, which disables Layer 3 security no matter which Layer 2 security option is configured and regardless of whether you are configuring a WLAN or a Guest LAN
* IPSec, which enables Layer 3 security for WLANs by using Internet Protocol Security (IPSec)
* VPN Pass-Through, which enables Layer 3 security for WLANs by allowing a client to establish a connection with a specific virtual private network (VPN) server
* Web Authentication, which enables Layer 3 security for Guest LANs by prompting for a user
name and password when a client connects
* Web Passthrough, which enables direct access to the network for Guest LANs without prompting
for a user name and password
Not every Layer 3 security mechanism is compatible with every Layer 2 security mechanism. It is
therefore important to first configure Layer 2 security options before you attempt to configure Layer
3 security options

70
Q

QUESTION NO: 98
Which of the following best describes authorization?
A.
the process of establishing a user’s accounts upon hire
B.
the process of verifying the level of access configured for a user
C.
the process of verifying a user’s identity
D.
the process of recording the use of resources

A

Answer: B
Explanation:
Authorization is the process of verifying the level of access configured for a user. The following list
defines the three phases of the Authentication, Authorization, and Accounting (AAA) process:
* Authentication – the process of verifying a user’s identity
* Authorization – the process of verifying the level of access configured for a user
* Accounting – the process of recording the use of resources
AAA systems manage user activity. AAA systems are typically more sophisticated than simple
password authentication systems, such as a local password database. Two common AAA
systems include Remote Authentication Dial-In User Service (RADIUS) and Terminal Access
Controller Access-Control System Plus (TACACS+).
RADIUS is a standard AAA protocol created by the Internet Engineering Task Force (IETF).
Compared to TACACS+, RADIUS has several limitations. For example, RADIUS encrypts only the
password in Access-Request packets; it does not encrypt the entire contents of the packet like
TACACS+ does.
TACACS+ is a Cisco-proprietary protocol used during AAA operations. TACACS+ provides more
granular and flexible control over user access privileges. For example, the AAA operations are
separated by TACACS+, whereas RADIUS combines the authentication and authorization
services into a single function. Because TACACS+ separates these functions, administrators have
more control over access to configuration commands. In addition, TACACS+ encrypts the entire
contents of packets, thus providing additional security

71
Q

QUESTION NO: 99
Which of the following Cisco SDA components creates VXLAN tunnels between SDA switches?
A.
the underlay network
B.
the overlay network
C.
the scripts
D.
the applications
E.
the fabric

A

Answer: B
Explanation:
Of the available choices, the overlay network is the Cisco Software-Defined Access (SDA)
component that creates Virtual Extensible local area network (VXLAN) tunnels between Cisco
SDA switches. The tunnels send and receive traffic between fabric endpoints.
Cisco SDA is a Cisco-developed means of building local area networks (LANs) by using policies
and automation. The Cisco Digital Network Architecture (DNA) controller, which is similar to a
Software-Defined Networking (SDN) controller, is the central component of a Cisco SDA network.
Cisco DNA is a software-centric network architecture that uses a combination of Application
Programming Interfaces (APIs) and a graphical user interface (GUI) to simplify network
operations. The Representational State Transfer (REST) API is used to natively communicate with
Cisco devices. To communicate with third-party devices, Cisco DNA Center relies on software
development kits (SDKs).
Neither the underlay network nor the fabric is the component that creates VXLAN tunnels between
SDA switches. The underlay network is a more traditional network configuration of switches. It is a
collection of devices, interfaces, and media that comprises the Internet Protocol (IP) network that
connects each fabric node. The underlay network is part of a dynamic discovery process that is
involved in creating the overlay network’s VXLAN tunnels. When an endpoint in a Cisco SDA
network sends traffic to another endpoint, the traffic flows from the endpoint through the overlay
network’s VXLAN tunnels. The fabric is the entirety of the overlay network and the underlay
network in a Cisco SDA network.
An SDN controller uses two different sets of APIs: one set to communicate with applications and
another set to communicate with devices in the data plane. Northbound APIs enable an SDN
controller to communicate with applications in the application plane. Applications use northbound
APIs to send requests or instructions to the SDN controller, which uses that information to modify
and manage network flow. Southbound APIs enable an SDN controller to communicate with
devices in the data plane.
Neither scripts nor applications are the component that creates VXLAN tunnels between SDA
switches. In both SDA and SDN deployments, the controller communicates with devices by using
a southbound API. Communication with applications and user interfaces is accomplished by using
a northbound API.

72
Q

QUESTION NO: 100
You are trying to configure Router1 to perform unequal-cost load balancing over OSPF. However,
only one OSPF route exists to the destination network.
Which of the following must you do to enable unequal-cost load balancing?
A.
Adjust the OSPF process IDs so that they match throughout the network.
B.
Configure the variance to a value higher than 1.
C.
Configure the routes to use EIGRP.
D.
Adjust the Hello timers and dead timers so that they match throughout the network.
E.
Issue the ip ospf cost 1 command on all interfaces

A

Answer: C
Explanation:
You should configure the routers to use Enhanced Interior Gateway Routing Protocol (EIGRP).
EIGRP supports load balancing over equal-cost and unequal-cost paths. By contrast, Open
Shortest Path First (OSPF) supports equal-cost load balancing but does not support unequal-cost
load balancing. If multiple OSPF paths to a destination exist and each path has the same
bandwidth, OSPF will load balance between the paths.
Issuing the ip ospf cost 1 command on all interfaces will not enable Router1 to perform unequal-cost load balancing. You can manually configure the OSPF cost of a path through an interface by
issuing the ip ospf cost cost command in interface configuration mode, where cost is the path
cost that you want to assign. OSPF uses cost, which is based on bandwidth, as its metric. The
higher the bandwidth, the lower the cost. OSPF selects the lowest-cost path, which is the path with
the highest bandwidth, to a destination.
OSPF does not use variance; therefore, configuring variance to a value higher than 1 will not
enable Router1 to perform unequal-cost load balancing unless you also configure Router1 to use
EIGRP. The variance command is used to determine whether EIGRP feasible successors can be
used for unequal-cost load balancing.
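For example, a minimal sketch of unequal-cost load balancing under EIGRP (the autonomous system number is hypothetical) would be:

router eigrp 100
 variance 2

With variance 2, a feasible successor whose metric is less than twice the best-path metric can be installed in the routing table alongside the best path.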
Adjusting the OSPF process IDs so that they match throughout the network will not enable
Router1 to perform unequal-cost load balancing. The OSPF process ID is locally significant to the
router and can be any positive integer from 1 through 65535. You can specify the OSPF process
ID by issuing the router ospf process-id command

73
Q

QUESTION NO: 102
Which of the following address types is used by IPv6 routing protocols to form neighbor
adjacencies?
A.
multicast address
B.
global unicast address
C.
anycast address
D.
link-local address
E.
site-local unicast address

A

Answer: D
Explanation:
A link-local address is an address type that is used by Internet Protocol version 6 (IPv6) routing
protocols to form neighbor adjacencies. IPv6 link-local addresses are unicast addresses used for
communication over a single link. Routers do not forward traffic sent to a link-local address; the
traffic stays on the local link. These addresses always begin with FE8, FE9, FEA, or FEB.
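As a minimal sketch (the interface and process ID are hypothetical), enabling OSPF version 3 (OSPFv3) on an interface illustrates this behavior: the router sources its hello packets and forms adjacencies by using the interface's FE80:: link-local address, even if no global unicast address is configured:

ipv6 unicast-routing
interface GigabitEthernet0/0
 ipv6 enable
 ipv6 ospf 1 area 0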
IPv6 routing protocols do not use a global unicast address to form a neighbor adjacency. A global
unicast address, which is also referred to as an aggregatable global address, is designed to
minimize the size of Internet routing tables. A global unicast address contains three distinct parts:
* The Global Routing Prefix – identifies the public portion of the address, as assigned by a service
provider
* The Site-Level Aggregator (SLA) – identifies the site or group of sites associated with the
address
* The Interface ID – identifies the address assigned to the interface of the network device
associated with the address
The Global Routing Prefix is a 48-bit field that is defined by the Internet service provider (ISP). The
SLA is a 16-bit field that identifies a site and is analogous to a subnet in IP version 4 (IPv4). The
Interface ID is a 64-bit field that must be globally unique; therefore, it typically contains the MAC
address of the originating device in extended unique identifier (EUI)-64 format. Because there is
an inherent hierarchy in the aggregatable global address scheme, these addresses lend
themselves to simple consolidation, which greatly reduces the complexity of Internet routing
tables.
IPv6 routing protocols do not use a multicast address to form neighbor adjacencies. Some IPv4
routing protocols, such as Open Shortest Path First version 2 (OSPFv2), use multicast addresses
to form neighbor adjacencies. A particular multicast address is used to send packets to multiple
devices that are configured with that multicast address, such as the All OSPF Routers multicast
address. The following table shows common IPv4 multicast addresses and their respective IPv6
multicast addresses:

74
Q

QUESTION NO: 103
Which of the following components simplifies the management and deployment of wireless APs in
a Cisco Autonomous WLAN solution?
A.
WLSE
B.
WLC
C.
WDS
D.
WiSM

A

Answer: A
Explanation:
CiscoWorks Wireless LAN Solution Engine (WLSE) simplifies the management and deployment of
wireless access points (WAPs) in a Cisco Autonomous wireless local area network (WLAN)
solution. In a Cisco Autonomous WLAN solution, each access point (AP) is responsible for both
connection and management functionalities; hence the management of the WLAN is
decentralized. A CiscoWorks WLSE can be installed to help automate the management and
deployment of the APs in a Cisco Autonomous WLAN solution. Features provided by a
CiscoWorks WLSE include dynamic radio frequency (RF) management, network security, intrusion
detection, self-healing capabilities, and monitoring and reporting services for the wireless network.
By contrast, a wireless LAN controller (WLC) provides wireless network management services in a
Cisco Unified Wireless Network. A Cisco Unified Wireless Network uses Lightweight Access Point
Protocol (LWAPP) and a combination of lightweight access points (APs) and WLCs. Lightweight
APs enable wireless clients to connect to the network, and WLCs provide management and
configuration information for the lightweight APs. For example, WLCs determine which RF each
lightweight AP should use. WLCs are not used in a Cisco Autonomous WLAN solution.
Although Wireless Domain Services (WDS) is a component used in Cisco Autonomous WLAN
solutions, WDS does not simplify the management or deployment of WAPs. WDS is a Cisco
IOS feature that can be installed on APs and used to enable those APs to interact with a
CiscoWorks WLSE. For example, WDS collects and aggregates radio information from APs and
forwards that data to a CiscoWorks WLSE.
The Cisco Wireless Services Module (WiSM) is a WLC module that can be installed in a Catalyst
6500 series switch or a Cisco 7600 series router. Cisco WiSMs are used on Cisco Unified
Wireless Networks and are not part of Cisco Autonomous WLAN solutions.

75
Q

QUESTION NO: 104
Which of the following Application layer protocols use UDP for unsynchronized, connectionless
data transfer? (Choose two.)
A.
TFTP
B.
HTTP
C.
FTP
D.
SMTP
E.
SNMP

A

Answer: A,E
Explanation:
Simple Network Management Protocol (SNMP) and Trivial File Transfer Protocol (TFTP) use User
Datagram Protocol (UDP) for unsynchronized, connectionless data transfer. UDP is a Transport
layer protocol that does not use sequence numbers or establish synchronized connections.
Because of its connectionless nature, transmitted datagrams can appear out of sequence or can
be dropped without notice; thus it is the responsibility of the Application layer protocol to reorder
packets or request the transmission of lost datagrams. SNMP is used to monitor and manage
network devices. TFTP is used to transfer files over a network. Other common Application layer
protocols that use UDP include Dynamic Host Configuration Protocol (DHCP), which is used to
assign Internet Protocol (IP) addressing information to clients, Network Time Protocol (NTP),
which is used to coordinate time on a network, and Remote Authentication Dial-In User Service
(RADIUS), which is used to authenticate users.
Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP) use Transmission Control
Protocol (TCP) for reliable, connection-oriented data transfer. TCP is a Transport layer protocol
that uses sequencing and error-checking to ensure that transmitted data can be easily reordered if
packets arrive out of sequence and can be retransmitted if any packets are lost. Because TCP
handles data sequencing and the retransmission of lost data, the Application layer protocols that
rely on TCP do not need to handle those tasks and can rely on receiving reliable, ordered data.
FTP, which is used to transfer files over a network, uses TCP ports 20 and 21. Other common
TCP protocols are HTTP, which is used to transfer webpages over the Internet, Simple Mail
Transfer Protocol (SMTP), which is used to send email messages, Post Office Protocol 3 (POP3),
which is used to retrieve email messages, and Telnet, which is used to manage network devices.
Reference: https://www.iana.org/protocols CCNA 200-301 Official Cert Guide, Volume 2, Chapter
1: Introduction to TCP/IP Transport and Applications, Connection Establishment and Terminati

76
Q

QUESTION NO: 105
You are implementing common Layer 2 security measures on a Cisco switch. You create a new
VLAN with an ID of 4. No devices operate on VLAN 4. You issue the following commands on a
switch interface:
switchport access vlan 4
switchport mode access
Which of the following Layer 2 security measures are you implementing? (Choose two.)
A.
disabling DTP on a port
B.
moving the port to an unused VLAN
C.
enabling port security on an access port
D.
configuring the port mode manually
E.
disabling an unused port

A

Answer: B,D
Explanation:
You are moving the port to an unused virtual local area network (VLAN) by issuing the
switchport access vlan 4 command in this scenario. In addition, you are configuring the port
mode manually by issuing the switchport mode access command. By default, every network
interface on a Cisco switch is an active port. Before you deploy a switch on a network, you should
take steps to ensure that every trunk port and access port on the switch is secured and that every
unused port on the switch is disabled.
Moving an unused port to an unused VLAN creates a logical barrier that prevents rogue devices
from communicating on the network should such a device connect to the port. To move an access
port to an unused VLAN, you should issue the switchport access vlan vlan-id command on the
port, where vlan-id is the ID of the unused VLAN. When you move an unused port to an unused
VLAN, you should also manually configure the port as an access port by issuing the switchport
mode access command and shut down the port by issuing the shutdown command.
To manually configure an access port, you should issue the switchport mode access command
in interface configuration mode. To manually configure a trunk port, you should first issue the
switchport trunk encapsulation protocol command in interface configuration mode, where
protocol is the trunk encapsulation protocol you want to use, and then issue the switchport mode
trunk command in interface configuration mode.
You are not disabling Dynamic Trunking Protocol (DTP) on a port if you issue the commands in
this scenario. By default, all interfaces on a Cisco switch will use DTP to automatically negotiate
whether an interface should be a trunk port or an access port. The transmission of DTP packets
over an interface can be exploited by a malicious user to obtain information about the network or
to convert an interface that should be an access port into a trunked port. You should issue the
switchport nonegotiate command on a manually configured port to prevent any attempts by the
switch to negotiate by using DTP. Manually configuring interfaces to use either trunk mode or
access mode effectively disables DTP and ensures that the traffic on those ports is restricted to
the intended purpose. Even so, you should issue the switchport nonegotiate command on a
manually configured trunk port to prevent any attempts by the switch to negotiate by using DTP,
because a manually configured trunk port will continue to send DTP frames.
You are not disabling an unused port by issuing the commands in this scenario. Disabling an
unused port creates a barrier that prevents rogue devices from communicating on the network
should such a device connect to the port. To disable an unused port on a switch, you should issue
the shutdown command on that port. To verify that a port is in the shutdown state, you should
issue the show interfaces type number command, where type and number specify the interface
you want to show. A port that has been shut down will be reported as administratively down by the
show interfaces type number command.
You are not enabling port security on an access port by issuing the commands in this scenario. To
protect switch interfaces against Media Access Control (MAC) flooding attacks, you should enable
port security on all access mode interfaces on the switch. Issuing the switchport port-security
command in interface configuration mode enables port security with default settings. You can
modify port security settings before you enable port security by issuing the switchport port-security mac-address mac-address command, the switchport port-security maximum maximum-number-of-mac-addresses command, and the switchport port-security violation {protect | restrict | shutdown} command.
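Putting these measures together, a hedged configuration sketch (the interface names are hypothetical) might look like the following. The first interface is an unused port that is hardened and shut down; the second is an in-use access port protected by port security with its default settings:

interface GigabitEthernet0/10
 switchport mode access
 switchport access vlan 4
 switchport nonegotiate
 shutdown
interface GigabitEthernet0/11
 switchport mode access
 switchport port-security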
When enabled with its default settings, port security will shut down a port on which a violation
occurs. In addition,

77
Q

QUESTION NO: 106
Which of the following are most likely to be considered forms of authorization? (Choose two.)
A.
verifying a user’s fingerprint pattern
B.
verifying a user’s password
C.
assigning a role to a verified user
D.
allowing a user to access a specific file
E.
logging a verified user’s file access

A

Answer: C,D
Explanation:
Assigning a role to a verified user and allowing a user to access a specific file are forms of
authorization. Authentication, Authorization, and Accounting (AAA) systems manage user activity.
AAA systems are typically more sophisticated than simple password authentication systems, such
as a local password database. Allowing access to a specific file can involve assigning specific user
or group permissions directly to the file, matching a rule, such as an access control list (ACL) that
associates a specific user with a file, or assigning a user to a specific role that has permission to
access the file.
Two common AAA systems are Remote Authentication Dial-In User Service (RADIUS) and
Terminal Access Controller Access-Control System Plus (TACACS+). RADIUS is a standard AAA
protocol created by the Internet Engineering Task Force (IETF). Compared to TACACS+, RADIUS
has several limitations. For example, RADIUS encrypts only the password in Access-Request
packets; it does not encrypt the entire contents of the packet like TACACS+ does.
TACACS+ is a Cisco-proprietary protocol used during AAA operations. TACACS+ provides more
granular and flexible control over user access privileges. For example, the AAA operations are
separated by TACACS+, whereas RADIUS combines the authentication and authorization
services into a single function. Because TACACS+ separates these functions, administrators have
more control over access to configuration commands. In addition, TACACS+ encrypts the entire
contents of packets, thus providing additional security.
The following list defines the three phases of the AAA process:
* Authentication – the process of verifying a user’s identity
* Authorization – the process of verifying the level of access configured for a user
* Accounting – the process of recording the use of resources
Verifying a user’s fingerprint pattern and verifying a user’s password are both likely to be
considered authentication, not authorization. Authentication is the process of verifying a user’s
identity. Authentication by itself does not grant access to a given resource.
Logging a verified user’s file access is a form of accounting, not authorization. Accounting is a
feature of AAA systems that enables administrators to track resource usage across a network. If a
security incident occurs, accounting can aid the effort to track the incident back to its source

78
Q

QUESTION NO: 107
Which of the following is an example of authentication by something you have?
A.
your driver’s license
B.
a password
C.
a PIN
D.
your fingerprints

A

Answer: A
Explanation:
Your driver’s license is an example of authentication by something you have. There are three
typical methods of authentication for gaining access to a secure environment: something you
know, something you have, and something you are. A fourth possible method is authentication by
someplace you are, which means that you can be granted access to a secure system by virtue of
your workstation’s location on a network, such as an Internet Protocol (IP) address or your
physical location in the world.
Authentication by one type of factor, such as something you know, is known as single-factor
authentication. Authentication by more than one type of factor, such as something you know and
something you are, is known as multifactor authentication. However, requiring more than one of
any single factor, such as two knowledge factors, is not considered multifactor authentication.
Authentication by something you have is the process of verifying your identity by using a device or
document that you carry with you, such as a fob, a driver’s license, a smart card, or a mobile
phone with an authenticator application. For example, a police officer who stops you can verify
your identity by comparing an image on your driver’s license to your physical appearance. In
addition, your company might require that you carry a fob in order to gain access to the office by
using an exterior door. You would typically hold the fob in front of a sensor and allow the sensor to
read the information that is stored on the fob; if the information matches that of an authorized user,
you would be allowed entry. Authentication by something you have is also known as Type 2
authentication. Authentication by something you have is considered a stronger form of
authentication than authentication by something you know because it requires the user to carry
some sort of authenticating electronic access control (EAC) token.
A password or a personal identification number (PIN) is an example of authentication by
something you know. For example, a bank’s website might choose to ask you to provide both a
password and the answer to a security question such as your mother’s maiden name. Although
the bank’s website prompts you for two forms of verification, both of those prompts are for
information that you store in your memory. Authentication by something you know is also known
as Type 1 authentication. Authentication by something you know is considered the weakest form
of authentication because such authentication can often be easily guessed or broken by brute
force.
Fingerprints are an example of authentication by something you are. Authentication by something
you are is the process of verifying your identity by using something that is unique about you and
that cannot be easily changed, such as your iris, your retina, or your fingerprints. For example,
your company could connect your workstation to a fingerprint scanner instead of requiring you to
unlock your workstation with keystrokes and a password. Authentication by something you are is
also known as Type 3 authentication. Authentication by something you are is considered the
strongest form of authentication because of the unique biometrics of individuals

79
Q

QUESTION NO: 109
Which of the following is used by WEP to provide encryption?
A.
AES
B.
CCMP
C.
RC4
D.
TKIP
E.
GCMP

A

Answer: C
Explanation:
RC4 is a stream cipher encryption algorithm used in the Wired Equivalent Privacy (WEP) protocol
to provide encryption. RC4 is less secure than Advanced Encryption Standard (AES), which is
used by Wi-Fi Protected Access 2 (WPA2) and WPA3. Unlike AES, which supports an encryption
key length of up to 256 bits, RC4 supports an encryption key length of up to 128 bits. Consequently,
RC4 is not as secure as AES. Furthermore, RC4 uses a stream cipher, which is a less secure
encryption method. RC4 is not used with WPA2.
AES and Counter Mode with Cipher Block Chaining Message Authentication Code Protocol
(CCMP) are used by WPA2 to provide message integrity checks (MICs) and encryption. Wireless
security protocols use MICs to prevent data tampering. Encryption is used to protect
confidentiality.
WPA2, which implements the 802.11i wireless standard, was developed to address the security
vulnerabilities in the original WPA standard. One enhancement over WPA included in WPA2 is the
encryption algorithm. AES is a stronger encryption algorithm than the RC4 algorithm used by
earlier wireless standards. When AES is implemented, a 128-bit block cipher is used to encrypt
data and a security key of 128, 192, or 256 bits can be used. This is a processor-intensive
operation, and implementing WPA2 and AES often requires new hardware, such as new wireless
access points (WAPs) and new client wireless network adapters.
In addition to AES, WPA2 also uses CCMP to provide encryption. CCMP is an encryption
mechanism that uses block ciphers. In WPA2, CCMP is used by AES during the encryption
process. The WPA2 encryption process is thus sometimes known as AES-CCMP.
Temporal Key Integrity Protocol (TKIP) is used to provide MICs and encryption in the WPA
protocol. WPA is the successor to WEP and the predecessor of WPA2. The WPA TKIP
implementation provides improvements over WEP but uses RC4 as the encryption algorithm. TKIP
supports an encryption key of up to 128 bits, whereas AES supports an encryption key of up to 256 bits.
Consequently, TKIP is not specified as the encryption method in the 802.11i standard.
Galois/Counter Mode Protocol (GCMP) is used along with AES to provide MICs and encryption in
the WPA3 protocol. The WPA3 protocol was introduced in 2018 as a future replacement for
WPA2. GCMP is considered to be stronger and more efficient than CCMP. GCMP uses AES to
provide encryption and Galois Message Authentication Code (GMAC) to provide MICs

80
Q

QUESTION NO: 112
Which of the following do not indicate a duplex mismatch on an Ethernet LAN?
A.
runts
B.
FCS errors
C.
alignment errors
D.
late collisions
E.
baby giants

A

Answer: E
Explanation:
Of the available choices, baby giants do not indicate a duplex mismatch on an Ethernet local area
network (LAN). A baby giant is an Ethernet frame that is up to 1,600 bytes in length. The default
maximum transmission unit (MTU) size for Ethernet frames is 1,500 bytes, not including the
Ethernet header and the cyclic redundancy check (CRC) trailer, which add 18 bytes to the frame.
Baby giant frames are slightly larger than a standard Ethernet frame. These can occur if you use Q-in-Q
encapsulation, Multiprotocol Label Switching (MPLS), or any other feature that adds to the size of
an Ethernet frame.
A late collision is an Ethernet collision that occurs after 512 bits of a frame have already been
transmitted. Typically, collisions are detected within a 51.2-microsecond time frame, or 512 bits.
Thus an Ethernet cable that is too long might create late collisions. In addition, a half-duplex port
that is connected to a full-duplex port can report late collisions on the half-duplex side of the
connection.
Runts, Frame Check Sequence (FCS) errors, and alignment errors can all indicate a duplex
mismatch on an Ethernet LAN. Although the half-duplex side of a duplex mismatch will report late
collisions, the full-duplex side will report different errors, such as runts, FCS errors, and alignment
errors. A runt is a frame that is fewer than 64 bytes and has a bad FCS.
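One common way to avoid a duplex mismatch in the first place, shown here as a hedged sketch (the interface, speed, and duplex values are hypothetical), is to configure both ends of the link identically instead of letting only one side autonegotiate:

interface FastEthernet0/1
 speed 100
 duplex full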

81
Q

QUESTION NO: 113
Which of the following statements about REST APIs are true? (Choose two.)
A.
REST APIs are typically used to communicate with an SDN application plane.
B.
REST APIs encode data in either XML format or JSON format.
C.
REST APIs encode data exclusively in JSON format.
D.
REST APIs are typically used to communicate with an SDN data plane.
E.
REST APIs encode data exclusively in XML format.

A

Answer: A,B
Explanation:
Representational State Transfer (REST) Application Programming Interfaces (APIs) encode data
in either Extensible Markup Language (XML) format or in JavaScript Object Notation (JSON)
format. In addition, REST APIs are typically used to communicate with a Software-Defined
Networking (SDN) application plane.
An SDN controller uses two different sets of APIs: one set to communicate with applications and
another set to communicate with devices. Northbound APIs enable an SDN controller to
communicate with applications in the application plane. Applications use northbound APIs to send
requests or instructions to the SDN controller, which uses that information to modify and manage
network flow. Southbound APIs enable an SDN controller to communicate with devices in the data
plane.
XML is a markup language that is similar to Hypertext Markup Language (HTML) in structure; it
uses tags to define blocks of data. Whereas HTML is used to render information on a webpage,
XML is a more structured language that is used to format data in a way that can be easily
transmitted over the Internet and parsed by a variety of applications.
JSON is a data modeling language that returns data in the form of an object that contains key and
value pairs. A single JSON object can contain multiple key and value pairs. Each key and value
pair inside a JSON object is separated from the others by a comma (,). Furthermore, each pair’s
key is separated from its value by a colon (:). The element in quotation marks on the left side of
each colon is the key. The element on the right side of each colon is the value, which might or
might not be enclosed in quotation marks. There are several data value types that can be returned
in JSON output: text, numeric, array, object, Boolean, and null
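For illustration only (the keys and values below are hypothetical and are not taken from any particular controller), a JSON object returned by a REST API might look like this:

{
  "hostname": "Router1",
  "uptimeSeconds": 86400,
  "managed": true,
  "description": null,
  "interfaces": ["GigabitEthernet0/0", "GigabitEthernet0/1"],
  "location": { "site": "HQ", "rack": 12 }
}

Each quoted element on the left side of a colon is a key, and the values demonstrate the text, numeric, Boolean, null, array, and object data types described above.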

82
Q

QUESTION NO: 115
What is the default maximum amount of time that a Cisco switch will retain LLDP information before discarding it when LLDP is enabled on an interface?
A.
180 seconds
B.
65534 seconds
C.
60 seconds
D.
30 seconds
E.
120 seconds

A

Answer: E
Explanation:
By default, a Cisco switch will retain Link Layer Discovery Protocol (LLDP) information for 120
seconds when LLDP is enabled on an interface. LLDP is an Open Systems Interconnection (OSI)
Layer 2 open-standard discovery protocol that is used to facilitate interoperability between Cisco
devices and non-Cisco devices. Attributes that can be learned from neighboring devices contain
Type, Length, Value (TLV) information including port description, system description, and
management address.
By default, a Cisco switch will send LLDP advertisements every 30 seconds when LLDP is
enabled on an interface. These advertisements are used by neighboring devices to update the
LLDP information learned about each neighbor. They are also used as keepalive messages to
ensure that a discovered neighbor continues to be available on the network. You can issue the
lldp timer rate command from global configuration mode to configure the frequency at which
LLDP advertisements are sent by a switch. The default rate value is 30 seconds; however, the rate
can be configured to any integer value from 5 through 65534 seconds. A Cisco switch will retain
LLDP information for 120 seconds when LLDP is enabled on an interface. This time interval is
known as the LLDP holdtime. You can issue the lldp holdtime seconds command from global
configuration mode to configure the LLDP holdtime to any integer value from 0 through 65535
seconds. Whenever a new LLDP advertisement is received, the hold timer is reset and the LLDP
information is considered current. When the hold timer expires for a particular neighbor, the LLDP
information regarding that neighbor is considered stale and is discarded.
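As a hedged sketch, the commands described above could be used to restate the default timers explicitly after enabling LLDP globally:

lldp run
lldp timer 30
lldp holdtime 120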

83
Q

QUESTION NO: 116
Which of the following FHRPs would use the virtual MAC address 0000.5E00.0101?
A.
only HSRP
B.
only VRRP
C.
GLBP and HSRP
D.
only GLBP

A

Answer: B
Explanation:
Only Virtual Router Redundancy Protocol (VRRP) would use the virtual Media Access Control
(MAC) address 0000.5E00.0101. VRRP is an Internet Engineering Task Force (IETF)-standard
First-Hop Redundancy Protocol (FHRP) that is supported by both Cisco and non-Cisco devices.
Routers are assigned to a VRRP group, and the group functions as a single gateway for clients. A
VRRP group has one master router, which is the router with the highest priority value. All other
routers in the VRRP group are backup routers. The virtual MAC address for VRRP groups is in the
form 0000.5E00.01xx, where xx is a hexadecimal value identifying the VRRP group number. For
example, VRRP Group 1 would be identified by the virtual MAC address 0000.5E00.0101.
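As a hedged sketch (the interface, addresses, and priority value are hypothetical), configuring VRRP group 1 on a router causes the group to answer for the virtual IP address by using the virtual MAC address 0000.5E00.0101:

interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 vrrp 1 ip 192.168.1.1
 vrrp 1 priority 110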
Gateway Load Balancing Protocol (GLBP) would not use the virtual MAC address
0000.5E00.0101. GLBP is an FHRP that also provides load balancing. GLBP enables you to
configure multiple routers as a GLBP group; the routers in the group receive traffic sent to a virtual
IP address that is configured for the group. Each GLBP group contains an active virtual gateway
(AVG) that is elected based on which router is configured with the highest priority value, or with
the highest IP address if multiple routers are configured with the highest priority value. The other
routers in the GLBP group are configured as primary or secondary active virtual forwarders
(AVFs). Up to four primary AVFs can be configured in a GLBP group, and the primary AVFs can
participate in forwarding traffic. Consequently, multiple routers can be used simultaneously to
provide load balancing for the GLBP group.
Hot Standby Router Protocol (HSRP) would not use the virtual MAC address 0000.5E00.0101.
HSRP is an FHRP that is defined in Request for Comments (RFC) 2281. Similar to GLBP, HSRP
can be used to provide backup router coverage if the primary gateway becomes unavailable.
Multiple routers are assigned to an HSRP group, and the routers function as a single gateway. An
HSRP group contains one active router and one standby router. The active router is the router with
the highest priority value, and the standby router is the router with the second-highest priority
value. Other routers in the HSRP group are in the listen state. If the active router fails, the standby
router assumes the active router role and a new standby router is elected

84
Q

QUESTION NO: 117
Which of the following is considered best practice when expanding an existing 802.11 wireless
network?
A.
configuring each AP with a unique SSID and a unique, nonoverlapping channel
B.
configuring each AP with the same SSID and a unique, nonoverlapping channel
C.
configuring each AP with the same SSID and a unique, overlapping channel
D.
configuring each AP with a unique SSID and the same channel

A

Answer: B
Explanation:
Of the choices provided, configuring each wireless access point (AP) with the same Service Set
Identifier (SSID) and a unique, nonoverlapping channel is considered best practice when
expanding an existing Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless
network. APs are devices that are used to establish communication between wireless devices and
a wired network through the use of radio waves. APs are often placed in a centralized location
within a network environment and are typically connected to a network through a wired
connection. An SSID is a unique name used to identify a wireless network. When multiple APs are
used to establish network connections to the same network, each AP must be configured with the
same SSID.
APs transmit data by using a single channel at a time. A channel is a small portion of the spread spectrum used for transmission. APs operating on the same channel and within close physical
proximity to other APs may experience some interference. To avoid potential interference while
continuing to offer the same physical range of coverage, technicians can set the channel used by
the individual APs to a channel further away on the spread-spectrum than the channel being used
by the other AP in the same service radius. For example, if you have two APs within physical
range of each other, you could set one to operate on channel 1 and the other to operate on
channel 11. By default, APs typically use the nonoverlapping channels 1, 6, or 11 to decrease the
potential for interference issues by keeping the channels in use evenly spread apart.

85
Q

QUESTION NO: 118
Which of the following are used in the calculation of EIGRP metric weights? (Choose two.)
A.
the sum of the segment delays
B.
the average segment delay
C.
the lowest segment bandwidth
D.
the highest segment bandwidth

A

Answer: A,C
Explanation:
Enhanced Interior Gateway Routing Protocol (EIGRP) uses the lowest segment bandwidth and the sum of the segment delays in its composite metric calculation, which is weighted by K values. The metric weights command adjusts the K values, which EIGRP uses to calculate the best path to a destination network.
By default, EIGRP uses the K values that are related to bandwidth and delay, and the other values
are set to 0. Modifying K values can cause undesired effects on the network, such as allowing low-bandwidth connections to be used for load balancing. Therefore, Cisco recommends leaving the K
values at their default settings. The K values must match between two routers for a neighbor
relationship to be established between the routers.
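For reference (this formula is not quoted in the explanation above, but it is the well-known reduction for the default K values), the EIGRP composite metric simplifies to the following, where the bandwidth term uses the lowest segment bandwidth in Kbps and the delay term uses the sum of the segment delays in tens of microseconds:

metric = 256 x ((10,000,000 / lowest segment bandwidth) + sum of segment delays)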
Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_eigrp/configuration/15-
sy/ire-15-sy-book/ire-wid-met.html#GUID-86D573E5-F921-45E2-8062-C2EDEAFC7656

86
Q

QUESTION NO: 119
Which of the following statements about FlexConnect ACLs is true?
A.
They are not supported on the native VLAN.
B.
They are applied per AP and per VLAN.
C.
They can be configured with a per-rule direction.
D.
They do not support an implicit deny rule

A

Answer: B
Explanation:
FlexConnect access control lists (ACLs) are applied per access point (AP) and per virtual local
area network (VLAN). One possible application of FlexConnect ACLs is to prevent administration
of the wireless local area network (WLAN) from a particular VLAN. FlexConnect ACLs are similar
to traditional Cisco IOS ACLs in that they are rules that permit or deny traffic from a given source
to a given destination. However, FlexConnect ACLs are configured on Cisco wireless lightweight
AP VLAN interfaces if the lightweight AP is operating in FlexConnect mode. Even though
FlexConnect ACLs are applied differently than traditional ACLs, it is important to name
FlexConnect ACLs differently from any traditional ACLs that might be configured on the WLAN.
FlexConnect ACLs are supported on the native VLAN. Although it is possible to configure
FlexConnect ACLs for the native VLAN, it is not possible to configure FlexConnect ACLs for the
native VLAN if the VLAN configuration is inherited from a FlexConnect group.
FlexConnect ACLs cannot be configured with a per-rule direction. This is in contrast to a traditional
ACL, which can be configured with inbound rules or outbound rules. A FlexConnect ACL is applied
in the ingress direction or the egress direction as an entire set of rules, not on a per-rule basis.
FlexConnect ACLs support the implicit deny rule. In this way, FlexConnect ACLs work similarly to
traditional ACLs. The implicit deny rule is an invisible rule that is applied to the end of an ACL. It
ensures that traffic that is not explicitly matched by a previous rule in the ACL is denied by the
ACL.
Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-
4/configuration/guides/consolidated/b_cg74_CONSOLIDATED/b_cg74_CONSOLIDATED_chapter

87
Q

QUESTION NO: 121
Which of the following management frames contain the SSID of a wireless network?
A.
probe requests
B.
association requests
C.
association responses
D.
deauthentications
E.
beacons

A

Answer: E
Explanation:
Beacons are management frames that contain the Service Set Identifier (SSID) of a wireless
network. Beacon frames contain a variety of information about wireless networks. An SSID is a
label that identifies a wireless network and is broadcast by an access point (AP). The SSID is one
of several components of a beacon frame. Beacon frames also contain timestamp information,
authentication information, data transfer speed information, and vendor-specific proprietary
information. Beacon frames can be disabled in order to hide the presence of a wireless network.
However, although disabling beacon frames can help prevent users from locating wireless
networks, tools such as NetStumbler can identify wireless networks, regardless of whether the
beacon frame is active.
Association requests are not management frames that contain the SSID of a wireless network. An
association request frame is sent from the wireless client to the AP to request access to the
wireless network. The process of requesting access to the wireless network comes after the client
has been authenticated by an AP or authentication server.
Once the AP has received and processed the association request, an association response is sent
back to the wireless client. An association response provides the wireless client with an answer as
to whether the client will be allowed to access the network. An association response is not a
management frame that contains the SSID of a wireless network.
Deauthentication frames are not management frames that contain the SSID of a wireless network.
Deauthentication management frames are sent by either the AP or the wireless client to terminate
the connection. Deauthentication messages are typically used to end an authorized connection;
however, they can also be used to end wireless sessions between rogue clients or rogue APs.
Probe request frames are not management frames that contain the SSID of a wireless network.
Probe request management frames are sent by wireless clients to request network information
from any AP in the transmission range of the client. Once an AP receives a probe request, the AP
can provide a probe response. Probe responses provide the client with information about the wireless network.

88
Q

QUESTION NO: 122
Which of the following fields appears last in an Ethernet frame?
A.
destination address
B.
FCS
C.
preamble
D.
SOF
E.
data
F.
source address
G.
length

A

Answer: B
Explanation:
The 4-byte Frame Check Sequence (FCS) field appears last in an Ethernet frame. The FCS field
is a 4-byte cyclic redundancy check (CRC) that is intended to enable a frame’s receiver to
determine whether the frame has been corrupted in transit. The FCS is calculated based on the
values of every other field in the frame. If a CRC error is detected, the frame will be discarded and
the interface CRC and Frame counters will be incremented. Similarly, the frame will be discarded if
it contains fewer than 64 bytes; a frame containing fewer than 64 bytes is referred to as a runt.
An Ethernet frame typically consists of seven fields in the following order:
* A 7-byte preamble field
* A 1-byte start-of-frame (SOF) field
* A 6-byte destination address field
* A 6-byte source address field
* A 2-byte type field
* A data field in the range from 46 through 1,500 bytes
* A 4-byte FCS field
The first five fields of the frame are known as the Ethernet header. The preamble field is used to
notify receiving hosts that a frame is being sent. The SOF field is used for synchronization with
other hosts on the local area network (LAN). The destination address field contains the Media
Access Control (MAC) address of the host for which the data is intended. The source address field
contains the MAC address of the host that is sending the data. Finally, an Ethernet header
contains a 2-byte type field to indicate the protocol that is intended to receive the frame’s data
after processing. A payload field of a size in the range from 46 through 1,500 bytes immediately
follows the Ethernet header.
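As a quick check of the field sizes listed above, the following minimal Python sketch adds them up and shows why the smallest valid frame is 64 bytes; the 64-byte minimum is conventionally counted from the destination address through the FCS, so the preamble and SOF are excluded from that count.

# Ethernet frame field sizes in bytes, as listed above (illustrative sketch).
FIELDS = [
    ("preamble", 7),
    ("SOF", 1),
    ("destination address", 6),
    ("source address", 6),
    ("type", 2),
    ("data (minimum)", 46),
    ("FCS", 4),
]

# A frame smaller than 64 bytes (destination address through FCS) is a runt.
minimum_frame = sum(size for name, size in FIELDS
                    if name not in ("preamble", "SOF"))
print(minimum_frame)                               # 64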

89
Q

QUESTION NO: 123
You administer an OSPF network that contains a mixture of Ethernet, FastEthernet,
GigabitEthernet, and TenGigabitEthernet links. The reference bandwidth is set to the default value
of 100.
Which of the following will occur?
A.
All links will have different OSPF costs.
B.
All links will have the same OSPF cost.
C.
Ethernet and FastEthernet links will have the same OSPF cost.
D.
GigabitEthernet and TenGigabitEthernet links will have the same OSPF cost.
E.
FastEthernet, GigabitEthernet, and TenGigabitEthernet links will have the same OSPF cost

A

Answer: E
Explanation:
FastEthernet, GigabitEthernet, and TenGigabitEthernet links will have the same Open Shortest
Path First (OSPF) cost. An OSPF routing process uses a cost metric that is based on the
bandwidth of an interface relative to a reference bandwidth. The formula to determine the cost of
an interface is as follows:
cost = reference bandwidth / interface bandwidth
The default reference bandwidth is 100 megabits per second (Mbps). You can issue the auto-cost
command from router configuration mode to change the reference bandwidth for an OSPF routing
process. The syntax for the auto-cost command is auto-cost reference-bandwidth ref-bw,
where ref-bw is the reference bandwidth expressed as an integer value in megabits per second
between 1 and 4294967. Therefore, the default value of the ref-bw parameter is 100.
The minimum supported cost for an OSPF interface is 1, and any values that calculate to less than
1 are rounded up to 1. Therefore, any link with an interface bandwidth greater than or equal to 100
Mbps will result in a cost of 1 by default. As a result, the 100-Mbps FastEthernet links, the 1-Gbps
GigabitEthernet links, and the 10-Gbps TenGigabitEthernet links in this scenario will all have a
cost of 1; the 10-Mbps Ethernet links will have a cost of 10.
If the reference bandwidth is less than the fastest routed link on the network, a situation can arise
where the cost of two interfaces is the same even though their link speeds are different. When an
OSPF routing process is presented with multiple routes of the same cost, equal-cost load
balancing is used to distribute packets evenly among the available paths. This distribution will
cause some packets in this scenario to take suboptimal routes to their destinations. To prevent
this from occurring, the reference bandwidth should be a value greater than or equal to the
bandwidth of the fastest routed link in the administrative domain. Alternatively, you can manually
configure an OSPF cost for each interface by issuing the ip ospf cost command from interface
configuration mode.
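A short Python sketch of the cost formula makes the result easy to verify; the interface bandwidths below are the standard Ethernet speeds in Mbps, and the rounding up to a minimum cost of 1 follows the behavior described above.

# OSPF interface cost = reference bandwidth / interface bandwidth, with a minimum of 1.
REFERENCE_BW_MBPS = 100                            # default auto-cost reference-bandwidth

def ospf_cost(interface_bw_mbps, reference_bw_mbps=REFERENCE_BW_MBPS):
    return max(1, reference_bw_mbps // interface_bw_mbps)

for name, bw in [("Ethernet", 10), ("FastEthernet", 100),
                 ("GigabitEthernet", 1000), ("TenGigabitEthernet", 10000)]:
    print(name, ospf_cost(bw))
# Ethernet 10, FastEthernet 1, GigabitEthernet 1, TenGigabitEthernet 1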

90
Q

QUESTION NO: 124
Which of the following wireless QoS levels is the default setting when you configure a WLAN on a
WLC?
A.
Platinum
B.
Gold
C.
Bronze
D.
Silver

A

Answer: D
Explanation:
The Silver wireless Quality of Service (QoS) level is the default setting when you configure a
wireless local area network (WLAN) on a Cisco wireless LAN controller (WLC). The Silver level is
also known as the best-effort level. Cisco WLCs support four different QoS levels: Platinum, Gold,
Silver, and Bronze. QoS prioritizes certain types of traffic over others and can therefore be used to
ensure quality for services that are sensitive to network issues such as delay and congestion.
Traffic that is delivered by using
best effort is considered lower priority than mission-critical, video, and voice traffic. This is the level
at which most transactional traffic is delivered.
The Platinum wireless QoS level prioritizes Voice over Internet Protocol (VoIP) traffic on a Cisco
WLAN. VoIP is susceptible to network delay, which can create jitter and severely affect the quality
of a call. To ensure that VoIP traffic is of highest quality, the Platinum level is typically applied to
VoIP endpoints and to the control tunnels between lightweight access points (APs) and the WLC.
The Gold wireless QoS level prioritizes video traffic on a Cisco WLAN. The Gold level is typically
used to ensure that video and mission-critical real-time interactive traffic streams from source to
destination without disruption.
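For reference, the four levels can be summarized in a small Python mapping. This is an illustrative summary of the traffic classes each level is intended to prioritize, not WLC configuration syntax, and the Bronze description (background traffic) is an assumption based on standard Cisco documentation rather than on this explanation.

# Cisco WLC wireless QoS levels and the traffic they are intended to prioritize.
WLC_QOS_LEVELS = {
    "Platinum": "voice (VoIP)",
    "Gold": "video and mission-critical real-time traffic",
    "Silver": "best effort; the default for a new WLAN",
    "Bronze": "background, lowest-priority traffic",
}
print(WLC_QOS_LEVELS["Silver"])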

91
Q

QUESTION NO: 125
Given a host IP address of 48.25.24.71/21, what is the broadcast address for the subnetwork?
A.
48.25.24.127
B.
48.25.31.255
C.
48.25.25.254
D.
48.25.31.254
E.
48.25.24.255
F.
48.25.24.240

A

Answer: B
Explanation:
The broadcast address for the subnetwork containing the host 48.25.24.71/21 is 48.25.31.255. An
Internet Protocol (IP) address is composed of four groups of eight binary bits, or 32 bits total. Each
bit can store either a 1 or a 0 value. The address consists of two parts, a network portion and a
host portion, which are divided by the use of a subnet mask. Like the IP address, the subnet mask
is composed of four groups of eight binary bits containing either a 1 or a 0 value. Because each
group contains eight bits of information, the groups are referred to as octets. Each octet ranges
from 0 through 255 in decimal value.
To determine the subnetwork address range of a given IP address/subnet mask combination, you
must first identify the interesting octet within the subnet mask. The interesting octet is the first octet
that contains a decimal value other than 255 or 0.
The subnet mask in this example is /21. This notation is known as Classless Inter-Domain Routing
(CIDR) notation. To calculate the network and host information for the network, you will need to
convert the subnet mask to dotted decimal notation.
To convert /21 from CIDR notation to dotted decimal notation, begin at the left and set the first 21
bits to a value of 1. Set the remaining 11 bits to 0.
/21 = 11111111.11111111.11111000.00000000
Binary bit weight increases in significance from right to left, with the leftmost bit in each octet worth
a decimal value of 128 and the rightmost bit worth a decimal value of 1. The decimal value for
each octet is computed by adding up the bit weight for any bit containing a 1 within the octet. For
the third octet of this mask, 11111000 equals 128 + 64 + 32 + 16 + 8 = 248; therefore, /21 in dotted
decimal notation is 255.255.248.0.
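For a worked check of the arithmetic, a minimal Python sketch using the standard ipaddress module computes the subnetwork and broadcast address for the host in this question.

# Compute the subnetwork and broadcast address for host 48.25.24.71/21.
import ipaddress

network = ipaddress.ip_network("48.25.24.71/21", strict=False)
print(network.netmask)             # 255.255.248.0
print(network.network_address)     # 48.25.24.0
print(network.broadcast_address)   # 48.25.31.255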

92
Q

QUESTION NO: 127
Which of the following connects a wireless client to a wired network without requiring a separate
wireless controller?
A.
embedded AP deployment
B.
autonomous AP deployment
C.
lightweight AP deployment
D.
cloud-based AP deployment

A

Answer: B
Explanation:
Of the available choices, an autonomous access point (AP) deployment connects a wireless client
to a wired network without requiring a separate wireless controller. An AP is a device that
connects a wireless client to a wired network. An autonomous AP contains network interfaces for
both wireless and wired networks; it is typically deployed as part of an autonomous AP
architecture in which APs are connected directly to the access layer of the three-tier hierarchical
network model.
A lightweight AP deployment connects a wireless client to a wired network but requires a separate
wireless controller. Wireless clients connect to lightweight APs, which are capable of performing
real-time wireless network functions but rely on a Cisco wireless LAN controller (WLC) for
management functions. The connection between a lightweight AP and a WLC is created by using
two tunnels established by the Control and Provisioning of Wireless Access Points (CAPWAP)
tunneling protocol. Information sent between lightweight APs and the WLC is encapsulated in
Internet Protocol (IP) packets. This process enables a lightweight AP and WLC to manage
connectivity to the same wireless local area network (WLAN) yet be separated by both physical
and logical means. This type of deployment is also known as a split-MAC architecture because the
lightweight AP handles the frames while the WLC handles the management functions.
A cloud-based AP deployment connects a wireless client to a wired network but requires a
separate wireless controller. For example, a Cisco Meraki AP provides wireless access by
connecting to a centralized management system known as the Cisco Meraki Cloud. APs deployed
at the access layer of the three-tier hierarchical network model contact the cloud in order to
automatically configure themselves. APs are managed through a cloud-based dashboard.
An embedded AP deployment connects a wireless client to a wired network but requires a
separate wireless controller. The primary difference between this deployment and others is that
the WLC is embedded within a stack of switching hardware instead of existing as a separate
entity. APs can connect to the WLC by connecting to switches that are directly hosting the WLC or
switch ports that are operating on the same virtual local area network (VLAN) as the WLC.

93
Q

QUESTION NO: 128
You are configuring a normal WLAN by using the WLC GUI. You configure the Profile Name field
on the WLANs > New page with a value of MyCompanyLAN.
Which of the following statements about the SSID field is true?
A.
You must not configure it with the Profile Name value.
B.
You can configure it with a reserved keyword.
C.
You must configure it with the Profile Name value.
D.
You can configure it with the Profile Name value, but it is not required.

A

Answer: D
Cisco 200-301 Exam
“Pass Any Exam. Any Time.” - www.actualtests.com 212
Explanation:
You can configure the SSID field on the WLANs > New page with the value that you created for
the Profile Name field, but you are not required to do so. In this scenario, you have configured the
Profile Name field on the Cisco wireless LAN controller (WLC) graphical user interface (GUI)
WLANs > New page with a value of MyCompanyLAN. When you are creating a new wireless
local area network (WLAN), the profile name can be up to 32 characters in length and should
uniquely identify the WLAN that you are configuring. The value that you enter in the Profile Name
field will be used by the WLC to identify the WLAN on other configuration pages. For simplicity,
many administrators choose to use the same value for the Profile Name field as they plan to
configure in the SSID field, although this is not required.
The Cisco WLC GUI is a browser-based interface that enables you to configure various wireless
network settings. To create a new normal WLAN, you should complete four steps on the WLANs
> New page of the WLC GUI:
1. Select the type of WLAN you are creating from the Type drop-down list box; by default, this
value is configured to WLAN.
2. Enter a 32-character or less profile name in the Profile Name field.
3. Enter a 32-character or less Service Set Identifier (SSID) in the SSID field.
4. Choose a WLAN ID from the ID drop-down list box; by default, this value is configured to 1.
There are three types of WLANs you can create by using the WLC GUI:
1. A normal WLAN, which is the WLAN to which wireless clients inside your company’s walls will
connect
2. A Guest LAN, which is the WLAN to which guest wireless clients inside your company’s walls
will connect
3. A Remote LAN, which is the WLAN configuration for wired ports on the WLC
After you configure the type of WLAN, you should configure a profile name for the WLAN in the
Profile Name field. After you configure the Profile Name field, you should configure a value of up
to 32 characters in the SSID field. The SSID is the WLAN network name that will be broadcast to
wireless clients. In general, an SSID is the name for the collection of wireless clients that are all
operating with the same Institute of Electrical and Electronics Engineers (IEEE) 802.11
configuration.
Finally, you should configure the WLAN ID on which the WLAN will operate. By default, the ID
drop-down list box on the WLANs > New page will be configured to a value of 1. You can choose
to configure a WLAN on any WLAN ID in the range from 1 through 512. Although Cisco controllers
support a maximum of 512 WLANs, only 16 can be actively configured.
You cannot configure an SSID by using a reserved keyword. For example, you cannot configure
the SSID field to a value of s, because s is a keyword that is short for shutdown.
Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-5/config-guide/b_cg85/wlans.html#ID72
CCNA 200-301 Official Cert Guide, Volume 1, Chapter 29: Building a Wireless LAN, Configuring a WLAN

94
Q

QUESTION NO: 129
Which of the following technologies can you use to tunnel any Layer 3 protocol through an IP
transport network?
A.
IPSec
B.
GRE
C.
PPPoA
D.
PPPoE

A

Answer: B
Explanation:
You can use Generic Routing Encapsulation (GRE) to tunnel any Layer 3 protocol through an
Internet Protocol (IP) transport network. Because the focus of GRE is to transport many different
protocols, it has very limited security features. By contrast, IP Security (IPSec) has strong data
confidentiality and data integrity features but it can transport only IP traffic. GRE over IPSec
combines the best features of both protocols to securely transport any protocol over an IP
network.
You cannot use IPSec to tunnel any Layer 3 protocol through an IP transport network. However,
you can use IPSec to establish a secure virtual private network (VPN) tunnel between two sites
that are separated by an untrusted network. IPSec is a security framework that can guarantee the
confidentiality and integrity of data as it passes through an untrusted network. An IPSec VPN
connection is established through a series of negotiations and authentications. Initially, the VPN
peers negotiate an Internet Key Exchange (IKE) security association (SA) and establish a tunnel
for key management and authentication. The key management tunnel protects the subsequent
negotiation of IPSec SAs. The IPSec SAs enable the VPN peers to establish a tunnel for data
transmission and to specify the methods that are used to ensure the confidentiality and integrity of
the data sent through that tunnel. Typically, Authentication Header (AH) protocol or Encapsulating
Security Protocol (ESP) is used to ensure the integrity of a packet and to authenticate the origin of
a packet. AH is embedded within a packet to provide authentication, whereas ESP encapsulates
the data in order to provide data privacy.
You cannot use Point-to-Point Protocol over ATM (PPPoA) to tunnel any Layer 3 protocol through
an IP transport network. PPPoA is used to initiate a session with a Digital Subscriber Line (DSL)
service provider. With PPPoA, a Point-to-Point Protocol (PPP) session is initiated between an
Asymmetric DSL (ADSL)-enabled router and an access concentrator. After the PPP session is
established, traffic that passes between the router and the access concentrator is encapsulated in
PPP frames. The PPP frames are then encapsulated directly into Asynchronous Transfer Mode
(ATM) cells and transmitted across the ADSL circuit. In addition, because neither PPP frames nor
ATM cells are encrypted, PPPoA cannot provide a secure connection between the remote location
and the company headquarters.
You cannot use PPP over Ethernet (PPPoE) to tunnel any Layer 3 protocol through an IP
transport network. PPPoE is typically used to initiate a session with a DSL service provider. With
PPPoE, PPP frames are encapsulated into Ethernet frames for transmission to the service
provider. Because PPP frames are not encrypted, PPPoE cannot provide a secure connection
between the remote location and the company headquarters.
Reference: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_conn_vpnips/configuration/xe3s/sec-sec-for-vpns-w-ipsec-xe-3s-book/sec-cfg-vpn-ipsec.html#GUID-DCA522E7-9AA3-41B4-
901C-88A4842845A0

95
Q

QUESTION NO: 130
How many address fields can be expected in an 802.11 data frame that is sent from a wireless
station and destined to a host on the wired network?
A.
four
B.
three
C.
one
D.
two

A

Answer: B
Explanation:
Three address fields can be expected in an Institute of Electrical and Electronics Engineers (IEEE)
802.11 data frame that is sent from a wireless station and destined to a host on the wired network.
An 802.11 Media Access Control (MAC) frame is generally composed of nine fields.
The Frame Control (FC) field is used to identify the type of 802.11 frame, and its 2 bytes of data
are subdivided into 11 related fields of information, such as wireless protocol, frame type, and
frame subtype.
The Duration (DUR) field is a 2-byte field that is used mainly by control frames to indicate
transmission timers. However, this field is also used by the Power Save (PS) Poll control frame to
indicate the association identity (AID) of a client.
The address fields, Address 1 (ADD1), Address 2 (ADD2), Address 3 (ADD3), and Address 4
(ADD4), are 6-byte fields used to convey MAC address and Basic Service Set Identifier (BSSID)
information. What information resides in which address field is entirely dependent on the type of
frame. However, ADD1, ADD2, and ADD3 typically contain a source MAC address, destination
MAC address, and BSSID with the order being dependent on whether the frame is entering the
distribution system (DS), leaving the DS, or passing directly between ad-hoc wireless devices. The
ADD4 field is only present for frames passing between devices in the DS, such as from one
access point (AP) to another AP.
The Sequence (SEQ) field is a 2-byte field that is subdivided to store two related pieces of
information: the fragment number and sequence number of each frame.
The DATA portion of a frame varies in size and contains the frame’s payload. For data frames, the
payload is user data. However, for other frames, such as management frames, this portion of the
frame might contain information such as supported data rates and cipher suites.
Finally, the Frame Check Sequence (FCS) field contains a 4-byte cyclic redundancy check (CRC)
value calculated from all the 802.11 header fields, including the data portion of the frame. This
value is used by the receiving station to determine whether the frame was corrupted during transmission.
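The layout described above can be captured in a short Python summary. The byte sizes follow the description, and the helper function encodes the standard 802.11 To DS / From DS convention (an assumption drawn from the 802.11 standard rather than from this explanation): the fourth address field is present only when a frame travels within the distribution system, so a frame from a wireless station toward a wired host carries three address fields.

# 802.11 MAC frame fields and their sizes in bytes (DATA varies, shown as None).
FRAME_FIELDS = {
    "FC": 2, "DUR": 2, "ADD1": 6, "ADD2": 6, "ADD3": 6,
    "SEQ": 2, "ADD4": 6, "DATA": None, "FCS": 4,
}

def address_fields_used(to_ds, from_ds):
    # ADD4 appears only when both DS flags are set (e.g., AP-to-AP within the DS).
    return 4 if (to_ds and from_ds) else 3

# Wireless station sending toward the wired network: To DS = 1, From DS = 0.
print(address_fields_used(to_ds=True, from_ds=False))   # 3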

96
Q

QUESTION NO: 132
Which of the following is a benefit of network automation?
A.
Data models are formalized and defined by a centralized controller.
B.
Data models are human-interpreted from the output of show commands.
C.
Data models are enhanced by APIs to provide only the most specific information.
D.
Data models are formed from show command output that is processed by automation scripts

A

Answer: A
Explanation:
One benefit of network automation is that data models are formalized and defined by a centralized
controller. In addition, network automation aids reliable deployment of device configurations
throughout an enterprise. Network automation includes the Software-Defined Networking (SDN)
architecture. An SDN architecture is one in which management software is used to centralize
device intelligence.
SDNs use northbound Application Programming Interfaces (APIs) to send network instructions
from software applications to the central controller. Southbound protocols, which connect to a
network’s physical devices, are typically linked to the SDN controller by using a service abstraction
layer (SAL). The SAL is a database, or registry, of the services provided by the southbound APIs.
The APIs are bound to the registry so that the SAL can service an application’s request.
Data models are enhanced by APIs to provide more robust information, not only the most specific
information, about a network. Because of the centralized controller, data can be collected
throughout a network. The controller can then provide that data to APIs in ways that enable
extrapolation and interpretation that is more difficult to achieve when data is manually collected.
Data models are not human-interpreted from the output of show commands when network
automation is implemented. The issuing of Cisco IOS show commands to verify configurations
and to troubleshoot networks is an action traditionally performed by an administrator. The
administrator must then interpret the output of the show command to verify a configuration or to
uncover problems with the configuration. Often, the output of a given show command must be
compared to the output of other show commands or to a configuration standard in order for the
administrator to obtain complete information.
Data models are not formed from show command output that is processed by automation scripts.
Automation scripts are typically human-created programs that parse the output of show
commands in order to obtain specific information about a specific configuration. Although they
might ease administrative burden for common and repetitive tasks, automation scripts are less
robust than network automation.

97
Q

QUESTION NO: 133
Which of the following Application layer protocols uses TCP for reliable, connection-oriented data
transfer?
A.
TFTP
B.
FTP
C.
SNMP
D.
DHCP

A

Answer: B
Cisco 200-301 Exam
“Pass Any Exam. Any Time.” - www.actualtests.com 220
Explanation:
File Transfer Protocol (FTP) uses Transmission Control Protocol (TCP) for reliable, connection-oriented data transfer. TCP is a Transport layer protocol that uses sequencing and error-checking
to ensure that transmitted data can be easily reordered if packets arrive out of sequence and can
be retransmitted if any packets are lost. FTP, which is used to transfer files over a network, uses
TCP ports 20 and 21. Cisco devices can reliably transfer IOS images by using FTP. FTP requires
the transmission of authentication credentials, even if anonymous FTP is in use, but those
credentials are transmitted in plain text. Other common TCP protocols are Hypertext Transfer
Protocol (HTTP), which is used to transfer webpages over the Internet, Simple Mail Transfer
Protocol (SMTP), which is used to send email messages, Post Office Protocol 3 (POP3), which is
used to retrieve email messages, and Telnet, which is used to manage network devices.
Dynamic Host Configuration Protocol (DHCP), Simple Network Management Protocol (SNMP),
and Trivial FTP (TFTP) use User Datagram Protocol (UDP) and not TCP. UDP is a Transport layer
protocol that is used for unreliable, connectionless datagram transfer. Because UDP does not use
sequence numbers or establish synchronized connections, transmitted datagrams can appear out
of sequence or can be dropped without notice. DHCP is used to assign Internet Protocol (IP)
addressing and configuration information to clients. SNMP is used to monitor and manage network
devices. TFTP is used to transfer files without authentication over a network. Other common
Application layer protocols that use UDP include Network Time Protocol (NTP), which is used to
coordinate time on a network, and Remote Authentication Dial-In User Service (RADIUS), which is
used to authenticate users.
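The protocol-to-transport mapping discussed above can be collected into a small Python table; the well-known port numbers are the standard IANA assignments for these protocols.

# Common Application layer protocols, their Transport layer protocol, and
# their well-known ports (standard IANA assignments).
APP_PROTOCOLS = {
    "FTP":    ("TCP", (20, 21)),
    "Telnet": ("TCP", (23,)),
    "SMTP":   ("TCP", (25,)),
    "HTTP":   ("TCP", (80,)),
    "POP3":   ("TCP", (110,)),
    "TFTP":   ("UDP", (69,)),
    "DHCP":   ("UDP", (67, 68)),
    "SNMP":   ("UDP", (161, 162)),
    "NTP":    ("UDP", (123,)),
    "RADIUS": ("UDP", (1812, 1813)),
}

tcp_based = [name for name, (transport, ports) in APP_PROTOCOLS.items()
             if transport == "TCP"]
print(tcp_based)    # ['FTP', 'Telnet', 'SMTP', 'HTTP', 'POP3']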
Reference: https://www.iana.org/protocols

98
Q

QUESTION NO: 134
You issue the show ap config general MyLAP command on a Cisco AP.
Which of the following is the command output least likely to contain?
A.
the AP’s default gateway address
B.
the AP’s IP address
C.
the AP’s Syslog server settings
D.
the AP’s DNS server address

A

Answer: C
Explanation:
Of the available choices, the output of the show ap config general MyLAP command on the
Cisco access point (AP) in this scenario is least likely to contain the AP’s Syslog server settings.
However, you can obtain the Syslog server settings for all APs that join a Cisco wireless LAN
controller (WLC) by issuing the show ap config global command at the WLC command prompt.
Similar to a Cisco wired router or switch, you can administer a Cisco AP or WLC by using a
command-line interface (CLI). However, the CLI interface does not support the same Cisco IOS
command set as a Cisco router or switch. You can configure a Cisco WLC or a Cisco AP either by
using the built-in graphical user interface (GUI) in a browser or by using the CLI.
Issuing the show ap config general cisco-ap command, where cisco-ap is the host name of the
Cisco AP that is configured with the information you want to display, produces general AP
configuration output. This output includes information such as the AP’s Internet Protocol (IP)
address, the default gateway IP address, and the Domain Name System (DNS) server address. In
addition, the output includes the subnet mask that is configured on the AP.

99
Q

QUESTION NO: 135
On which interfaces is the OSPF nonbroadcast network type enabled by default? (Choose two.)
A.
FDDI
B.
X.25
C.
Frame Relay
D.
Ethernet
E.
PPP
F.
HDLC

A

Answer: B,C
Explanation:
The OSPF nonbroadcast network type is enabled by default on Frame Relay and X.25 interfaces.
If the ip ospf network command has not been issued for an OSPF interface, the default network
type will be used. The default OSPF network type depends upon the type of network to which the
interface is connected.
There are five OSPF network types:
* Broadcast
* Nonbroadcast
* Point-to-point
* Point-to-multipoint broadcast
* Point-to-multipoint nonbroadcast
The Open Shortest Path First (OSPF) broadcast network type is enabled by default on Fiber
Distributed Data Interface (FDDI) and Ethernet interfaces, including Fast Ethernet and Gigabit
Ethernet interfaces. On broadcast networks, designated router (DR) and backup designated router
(BDR) elections are performed. Multicast updates are sent, so manual configuration of neighbor
routers with the neighbor command is not required. By default, the Hello timer is set to 10
seconds and the dead timer is set to 40 seconds. To configure an OSPF broadcast network, you
should issue the ip ospf network broadcast command.
On nonbroadcast networks, DR and BDR elections are performed. Nonbroadcast networks do not
allow multicasts; therefore, manual configuration of neighbor routers with the neighbor command
is required so that OSPF sends unicast updates. By default, the Hello timer is set to 30 seconds
and the dead timer is set to 120 seconds. To configure an OSPF nonbroadcast network, which is
also called a nonbroadcast multiaccess (NBMA) network, you should issue the ip ospf network
non-broadcast command.
The OSPF point-to-point network type is enabled by default on High-Level Data Link Control
(HDLC) and Point-to-Point Protocol (PPP) interfaces. On point-to-point networks, DR and BDR
elections are not performed. Multicast updates are sent, so manual configuration of neighbor
routers with the neighbor command is not required. By default, the Hello timer is set to 10
seconds and the dead timer is set to 40 seconds. To configure an OSPF point-to-point network,
you should issue the ip ospf network point-to-point command.
On OSPF point-to-multipoint networks, DR and BDR elections are not performed. Multicast
updates are sent, so manual configuration of neighbor routers with the neighbor command is not
required. By default, the Hello timer is set to 30 seconds and the dead timer is set to 120 seconds.
To configure an OSPF point-to-multipoint network, you should issue the ip ospf network point-to-multipoint command.
On OSPF point-to-multipoint nonbroadcast networks, DR and BDR elections are not performed.
Nonbroadcast networks do not allow multicasts; therefore, manual configuration of neighbor
routers with the neighbor command is required so that OSPF sends unicast updates. By default,
the Hello timer is set to 30 seconds and the dead timer is set to 120 seconds. To configure an
OSPF point-to-multipoint nonbroadcast network, you should issue the ip ospf network point-to-multipoint non-broadcast command.
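The defaults described above are easier to compare side by side; the following Python summary simply restates the Hello/dead timers, DR/BDR election behavior, and neighbor-command requirement for each network type.

# Default behavior of the five OSPF network types, as described above:
# (hello timer in seconds, dead timer in seconds, DR/BDR election, neighbor command required)
OSPF_NETWORK_TYPES = {
    "broadcast":                         (10, 40,  True,  False),
    "non-broadcast (NBMA)":              (30, 120, True,  True),
    "point-to-point":                    (10, 40,  False, False),
    "point-to-multipoint":               (30, 120, False, False),
    "point-to-multipoint non-broadcast": (30, 120, False, True),
}

for network_type, (hello, dead, dr_election, needs_neighbor) in OSPF_NETWORK_TYPES.items():
    print(f"{network_type}: hello={hello}s dead={dead}s "
          f"DR/BDR election={dr_election} neighbor command={needs_neighbor}")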

100
Q

QUESTION NO: 137
You issue the following commands on RouterA:
RouterA#configure terminal
RouterA(config)#ip route 0.0.0.0 0.0.0.0 192.168.1.4
RouterA(config)#ip route 10.0.0.0 255.255.0.0 192.168.1.3
RouterA(config)#ip route 10.0.0.0 255.255.255.0 192.168.1.2
RouterA(config)#ip route 10.0.0.0 255.255.255.224 192.168.1.1
RouterA receives a packet destined for 10.0.0.24.
To which next-hop IP address will RouterA forward the packet?
A.
192.168.1.1
B.
192.168.1.4
C.
192.168.1.3
D.
10.0.0.4
E.
192.168.1.2

A

Answer: A
Explanation:
RouterA will forward the packet to the next-hop Internet Protocol (IP) address of 192.168.1.1.
When a packet is sent to a router, the router checks the routing table to see if the next-hop
address for the destination network is known. The routing table can be filled dynamically by a
routing protocol, or you can configure the routing table manually by issuing the ip route command
to add static routes. Static routes are not removed from a routing table when the path becomes
unavailable. A dynamic routing protocol is therefore preferable where possible.
The ip route command uses the syntax ip route prefix mask {ip-address | interface}, where prefix
is the network address of the destination network, mask is the subnet mask of the destination
network, ip-address is the IP address of the next-hop router, and interface is the local interface to
which packets should be sent.
A default route is used to send packets that are destined for a location that is not listed elsewhere
in the routing table. For example, the ip route 0.0.0.0 0.0.0.0 192.168.1.4 command specifies that
packets destined for addresses not otherwise specified in the routing table are sent to the default
next-hop address of 192.168.1.4. A prefix and mask combination of 0.0.0.0 0.0.0.0 specifies any
packet destined for any network.
If multiple static routes to a destination are known, the most specific route is used. Therefore, the
following rules apply on RouterA:
* Packets sent to the 10.0.0.0 255.255.255.224 network are forwarded to the next-hop address of
192.168.1.1. This includes destination addresses from 10.0.0.0 through 10.0.0.31.
* Packets sent to the 10.0.0.0 255.255.255.0 network, except those sent to the 10.0.0.0
255.255.255.224 network, are forwarded to the next-hop address of 192.168.1.2. This includes
destination addresses from 10.0.0.32 through 10.0.0.255.
* Packets sent to the 10.0.0.0 255.255.0.0 network, except those sent to the 10.0.0.0 255.255.255.0
network, are forwarded to the next-hop address of 192.168.1.3. This includes destination
addresses from 10.0.1.0 through 10.0.255.255.
* Packets sent to any destination not listed in the routing table are forwarded to the default static
route next-hop address of 192.168.1.4.
Because the most specific route to 10.0.0.24 is the route toward the 10.0.0.0 255.255.255.224
network, RouterA will forward a packet destined for 10.0.0.24 to the next-hop address of 192.168.1.1.
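A short Python sketch using the standard ipaddress module reproduces the longest-prefix-match decision for 10.0.0.24 against the four static routes configured on RouterA.

# Longest-prefix-match route selection for the static routes on RouterA.
import ipaddress

STATIC_ROUTES = [
    ("0.0.0.0/0",   "192.168.1.4"),   # default route
    ("10.0.0.0/16", "192.168.1.3"),   # 255.255.0.0
    ("10.0.0.0/24", "192.168.1.2"),   # 255.255.255.0
    ("10.0.0.0/27", "192.168.1.1"),   # 255.255.255.224
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in STATIC_ROUTES
               if dest in ipaddress.ip_network(prefix)]
    # The most specific (longest prefix) matching route wins.
    return max(matches, key=lambda match: match[0].prefixlen)[1]

print(next_hop("10.0.0.24"))          # 192.168.1.1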

101
Q

QUESTION NO: 140
An LACP channel group on SwitchA is configured to operate in active mode.
In which modes could you configure the corresponding channel group on SwitchB to create a valid
EtherChannel configuration? (Choose two.)
A.
on
B.
active
C.
auto
D.
passive
E.
desirable

A

Answer: B,D
Explanation:
In this scenario, you could configure the channel group on SwitchB to operate in either active or
passive mode to create a valid EtherChannel configuration. EtherChannel is used to bundle two or
more identical, physical interfaces into a single logical link between switches. An EtherChannel
can be permanently established between switches, or it can be negotiated by using one of two
aggregation protocols: the Cisco-proprietary Port Aggregation Protocol (PAgP) or the open-standard Institute of Electrical and Electronics Engineers (IEEE) 802.3ad protocol, which is also
known as Link Aggregation Control Protocol (LACP). The EtherChannel aggregation protocol must
match on each switch, or they will be unable to dynamically establish an EtherChannel link
between them.
In addition, the channel groups on each switch must operate in compatible modes to create a
functional EtherChannel link. The channel-group number mode {on | active | passive | {auto |
desirable} [non-silent]} command is used to configure the operating mode for an interface, or
range of interfaces, in a channel group. The channel-group mode combinations that result in a
valid EtherChannel configuration are summarized in the sketch below:
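The pairings below restate the standard PAgP and LACP negotiation rules (they are not output from a specific switch); a small Python function checks whether two channel-group modes will form an EtherChannel.

# EtherChannel channel-group mode compatibility (standard negotiation rules).
def forms_etherchannel(mode_a, mode_b):
    pair = {mode_a, mode_b}
    if pair == {"on"}:                         # static bundling, no negotiation protocol
        return True
    if pair <= {"active", "passive"}:          # LACP modes
        return "active" in pair                # at least one side must actively negotiate
    if pair <= {"desirable", "auto"}:          # PAgP modes
        return "desirable" in pair             # at least one side must actively negotiate
    return False

# SwitchA is running LACP in active mode:
for mode_b in ("on", "active", "auto", "passive", "desirable"):
    print(mode_b, forms_etherchannel("active", mode_b))
# Only active and passive on SwitchB produce a working EtherChannel with SwitchA.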

102
Q

QUESTION NO: 142
Which of the following WLC interfaces controls all Layer 3 communications between a WLC and a
lightweight AP?
A.
the service port interface
B.
the AP-manager interface
C.
the management interface
D.
a dynamic interface
E.
the virtual interface

A

Answer: B
Explanation:
The AP-manager interface on a wireless LAN controller (WLC) controls all Layer 3
communications between a WLC and a lightweight access point (AP). A WLC can contain up to
four static interfaces: the management interface, the AP-manager interface, the virtual interface,
and the service port interface. The AP-manager interface contains the Internet Protocol (IP)
address that is used as the source IP address by which the lightweight APs communicate with the
WLC. Because the AP-manager interface communicates with the lightweight APs on the wireless
network, the IP address assigned to the AP-manager interface should be unique on the network.
After the interface has been configured, the WLC uses the AP-manager interface to listen for
Layer 3 Lightweight Access Point Protocol (LWAPP) communications.
The management interface is used for in-band management information. This interface is used for
all Layer 2 LWAPP communications between the controller and the lightweight APs. In addition,
the management interface is used to communicate with other WLCs on the wireless network.
The service port interface is used for maintenance purposes on a WLC. This interface is a physical
interface on the WLC that can be used to recover the WLC in the event that the WLC fails. The
service port interface is the only interface that is available while the WLC is booting.
The virtual interface can be used to provide a specific IP address that is the same across multiple
controllers when wireless clients roam among the controllers. This enables seamless roaming
among the controllers. The virtual interface is also used in situations where web authorization has
been enabled for clients; the user is redirected to the IP address of the virtual interface when the
user opens a web browser. In addition, if Dynamic Host Configuration Protocol (DHCP) relay has
been enabled on the controller, the virtual interface can be used as the DHCP server address on
wireless clients.
In addition to the four static interfaces, a WLC can contain up to 512 dynamic interfaces. Dynamic
interfaces are user-defined and are typically used for wireless client data. The dynamic interfaces
function similarly to virtual local area networks (VLANs). For example, you can create a dynamic
interface to segment traffic on the WLC.
Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-
4/configuration/guides/consolidated/b_cg74_CONSOLIDATED/b_cg74_CONSOLIDATED_chapter
_010011100.html#ID325 CCNA 200-301 Official Cert Guide, Volume 1, Chapter 29: Building a
Wireless LAN, Accessing a Cisco WLC

103
Q

QUESTION NO: 143
You receive the following output on the console of RouterA:
00:00:47: %LINK-3-UPDOWN: Interface GigabitEthernet0/2, changed state to up
What is the severity level of this Syslog message?
A.
notifications
B.
warnings
C.
errors
D.
informational
E.
alerts

A

Answer: C
Cisco 200-301 Exam
“Pass Any Exam. Any Time.” - www.actualtests.com 240
Explanation:
The 00:00:47: %LINK-3-UPDOWN: Interface GigabitEthernet0/2, changed state to up message is
at the errors severity level. Cisco debug messages and log messages are divided into the
following severity levels:
* 0 – emergencies
* 1 – alerts
* 2 – critical
* 3 – errors
* 4 – warnings
* 5 – notifications
* 6 – informational
* 7 – debugging
You can filter log messages on the console by severity level by issuing the logging console
severity-level command, or you can filter log messages to a Syslog server by issuing the logging
trap severity-level command. When the logging console or logging trap command is issued with
a severity-level parameter, messages with the specified severity level and all lower-numbered
severity levels will be displayed or sent, respectively.
Messages are formatted in the Berkeley Software Distribution (BSD) Syslog format, which is a
percent sign (%) followed by a facility code, a severity code, and a mnemonic code. The three
codes are separated by dashes. In the output displayed in this scenario, the facility code is LINK,
the severity code is 3, which is equivalent to errors, and the mnemonic code is UPDOWN. The
dash-separated code is followed by a colon and the human-readable text of the log message.
Emergencies and alerts indicate a severe hardware or software problem with the device. These
messages need to be addressed immediately.
Critical, error, and warning messages indicate something that might impact the device. For
example, interface up/down state changes are displayed as errors at level 3.
Notifications and informational messages are routine messages but still might indicate a problem.
Route flaps, neighbor adjacencies, and interface protocol up/down transitions are displayed as
notifications at level 5.
Debugging messages appear only as the result of issuing the debug command. After debugging,
always remember to issue the no debug all command to stop collecting data.
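A minimal Python sketch maps the severity names to their numeric levels and extracts the level from the message shown in this question; the parsing is illustrative, not a Cisco tool.

# Cisco Syslog severity levels and a simple parser for the %FACILITY-SEVERITY-MNEMONIC code.
import re

SEVERITIES = {0: "emergencies", 1: "alerts", 2: "critical", 3: "errors",
              4: "warnings", 5: "notifications", 6: "informational", 7: "debugging"}

message = "00:00:47: %LINK-3-UPDOWN: Interface GigabitEthernet0/2, changed state to up"
facility, level, mnemonic = re.search(r"%(\w+)-(\d)-(\w+)", message).groups()
print(facility, mnemonic, SEVERITIES[int(level)])   # LINK UPDOWN errors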

104
Q

QUESTION NO: 146
Which of the following best describes a lightweight AP in sniffer mode?
A.
It acts as a dedicated connection between two networks.
B.
It enables a failsafe if the CAPWAP connection goes down.
C.
It captures wireless traffic for analysis.
D.
It is the default operating mode for a lightweight AP.

A

Answer: C
Explanation:
A Cisco lightweight access point (AP) operating in sniffer mode captures wireless traffic for
analysis. A lightweight AP provides an interface for wireless clients to connect to the wireless local
area network (WLAN). However, unlike autonomous APs, a lightweight AP relies on a Cisco
wireless LAN controller (WLC) for management and configuration. Sniffer mode allows a
lightweight AP to capture wireless traffic, similar to the way a wired network sniffer behaves. When
traffic is captured, a lightweight AP that is operating in sniffer mode will send the traffic to an
analyzer, which is typically software that is installed on a PC or other host.
Local mode, not sniffer mode, is the default operating mode for a lightweight AP. A Cisco
lightweight AP operating in local mode is capable of providing multiple basic service sets (BSSs)
on a single channel. In this mode, the AP can connect to a WLC and can provide client
connectivity. In addition, an AP operating in local mode scans all wireless channels as a means of
monitoring wireless quality and security. The connection between a lightweight AP and a WLC is
created by using two tunnels established by the Control and Provisioning of Wireless Access
Points (CAPWAP) tunneling protocol. Information sent between lightweight APs and the WLC is
encapsulated in Internet Protocol (IP) packets. This process enables a lightweight AP and WLC to
manage connectivity to the same WLAN yet be separated by both physical and logical means.
A Cisco lightweight AP operating in FlexConnect mode, not sniffer mode, enables a failsafe if the
CAPWAP connection goes down. FlexConnect mode does not provide BSSs. When configured,
FlexConnect mode enables a lightweight AP to switch traffic between a given Service Set Identifier
(SSID) and a given virtual local area network (VLAN).
A Cisco lightweight AP operating in bridge mode, not sniffer mode, acts as a dedicated connection
between two networks. Lightweight APs operating in bridge mode can connect to other networks
in either a point-to-point or point-to-multipoint fashion. When multiple APs are configured in bridge
mode, the collection of lightweight APs can be used to form a mesh network.

105
Q

QUESTION NO: 147
The sending host on a site-to-site VPN that is constructed by using GRE with IPSec for transport
adds a VPN header and an IP header to the packet.
Which of the following steps occurs next?
A.
The sending host adds the session key to the packet.
B.
The sending host sends the packet to the destination.
C.
The receiving host decrypts the packet.
D.
The sending host encapsulates the packet.

A

Answer: B
Explanation:
After a sending host on a site-to-site virtual private network (VPN) that is constructed by using
Generic Routing Encapsulation (GRE) with Internet Protocol Security (IPSec) for transport adds a
VPN header and an Internet Protocol (IP) header to the packet, the sending host sends the packet
to the destination. The addition of the VPN header and IP header to the packet is a process known
as encapsulation.
A site-to-site VPN uses IPSec to transport information across a tunnel that is established between
two hosts. A typical site-to-site VPN uses GRE with confidentiality, integrity, and antireplay
protection provided by IPSec. There are four steps in the site-to-site VPN IPSec encryption
process.
First, the sending device combines a session key, which is also known as an encryption key or a
shared key, with the data that is to be transported over the tunnel. It then uses the session key to
encrypt both the data and the key.
Second, the sending device encapsulates the encrypted data and session key into a packet with a
VPN header and a new IP header. These headers contain the source and destination information
that is used to transport the encrypted data and session key over the tunnel.
Third, the sending device sends the completed packet to the destination device at the other end of
the tunnel, or site-to-site VPN.
Fourth and finally, the destination device, or receiving device, uses the same session key that the
sending device used for encryption to decrypt the encrypted packet and session key.

106
Q

QUESTION NO: 148
Your company has been assigned the 2012:0:0:99::/64 IPv6 prefix by your ISP. You have issued
the following commands on your Cisco router’s FastEthernet 0/0 interface:
ipv6 address 2012:0:0:99::/64 eui-64
no shutdown
Host computers should also be autoconfigured by using the EUI-64 format. HostA has an IPv4
address of 192.168.0.14 and a MAC address of 00-33-66-99-BB-EE.
You have configured HostA to use stateless autoconfiguration.
Which of the following IPv6 addresses will HostA use?
A.
2012::99:0233:66FF:FE99:BBEE
B.
2012::99:0233:6699:BBEE
C.
2012::99:0:0233:6699:BBEE
D.
2012::99:0:0:192.168.0.14
E.
2012:0:0:99:192.168.0.14
F.
2012:0:0:99::C0A8:000E

A

Answer: A
Explanation:
HostA will use the Internet Protocol version 6 (IPv6) address 2012::99:0233:66FF:FE99:BBEE.
The scenario indicates that hosts should be autoconfigured by using the extended unique identifier
(EUI)-64 format. For autoconfiguration to occur, the router must send the IPv6 prefix to a host in a
router advertisement message. Router advertisements, and other stateless autoconfiguration
messages, are sent by using Internet Control Message Protocol version 6 (ICMPv6). An IPv6 host
typically sends a router solicitation message on startup to prompt a router into sending a router
advertisement, rather than waiting for the arrival of a periodic router advertisement. Once a router
advertisement is received, the host will append its interface identifier, which is a modified version
of its Media Access Control (MAC) address, to the received IPv6 prefix to create a globally unique,
IPv6 unicast address.
An interface identifier in EUI-64 format is created by taking the first half of the host’s MAC address,
which is referred to as the Organizationally Unique Identifier (OUI), adding the hexadecimal
number FFFE, and then appending the last half of the host’s MAC address, which is the hardware-specific portion of the MAC address. The seventh binary bit in the OUI, which is referred to as the
U/L bit, is then flipped; that is, the original value of the U/L bit is inverted. A U/L bit value of 0 in the
OUI indicates that the MAC address is universal. Universal addresses are intended to be globally
unique and are the addresses burned in by the manufacturer. By contrast, a U/L bit value of 1
indicates a locally administered MAC address, such as the address created by a virtual interface
or manually configured by an administrator. The U/L value is inverted when an EUI-64 interface ID
is created in order to facilitate simple local scope identifiers for manual administration. It should be
noted that there is no correlation between the U/L bit value in an EUI-64 interface ID and the
scope of the IPv6 address. The U/L bit value in an EUI-64 interface ID was intended to provide the
ability for future technology to identify interface IDs with a local scope.
In this scenario, the MAC address 00-33-66-99-BB-EE has an OUI of 00-33-66 and a network
interface card (NIC) identifier of 99-BB-EE. To create an EUI-64-compliant interface ID, you should
first append the hexadecimal value FFFE to the OUI and then append the NIC identifier:
0033:66FF:FE99:BBEE. Finally, you should invert the value of the seventh bit of the OUI to
represent the scope of the EUI-64 interface ID. The hexadecimal value 00 in the first eight bits of
Cisco 200-301 Exam
“Pass Any Exam. Any Time.” - www.actualtests.com 250
the OUI can be represented in binary as 0000 0000 and indicates that the seventh bit represents
the U/L bit. In this scenario, the seventh bit has a value of 0, which indicates that the MAC address
from which the EUI-64 address is derived is a universal address. Inverting the value of the U/L bit
changes the value from 0 to 1; therefore, the first eight bits of the OUI become 0000 0010. These
bits can be represented in hexadecimal as 02, and they result in an EUI-64 interface ID of
0233:66FF:FE99:BBEE. The EUI-64 interface ID is then combined with the IPv6 prefix to create
an IPv6 address. Appending the IPv6 prefix 2012::99 to this interface identifier creates the global
unicast IPv6 address 2012::99:0233:66FF:FE99:BBEE. This address is considered globally unique
because the IPv6 prefix is in the 2000::/3 range. IPv6 global unicast addresses always begin with
a 2 or a 3 because the first three bits of an IPv6 global unicast address are always 001.
HostA will not use the IPv6 address 2012::99:0233:6699:BBEE. The address
2012::99:0233:6699:BBEE expands to 2012:0:0:0:99:0233:6699:BBEE, which does not use the
correct prefix value of 2012:0:0:99::/64.
HostA will not use the IPv6 address 2012::99:0:0233:6699:BBEE. Although the interface identifier
is composed by using the MAC address, it does not follow the EUI-64 format.
HostA will not use the IPv6 addresses 2012::99:0:0:192.168.0.14 or 2012:0:0:99::C0A8:000E. An
IP version 4 (IPv4)-compatible IPv6 address can be created by using zeros for the first 96 bits of
the address and by using the IPv4 address for the last 32 bits of the address. The IPv4 address
can be written with or without leading zeros and can be written in binary or hexadecimal format.
Therefore, the following notations would be acceptable for HostA if it were to use an IPv4-
compatible IPv6 address:
* 0:0:0:0:0:0:192.168.0.14
* ::192.168.0.14
* 0:0:0:0:0:0:C0A8:000E
* ::C0A8:000E
When you convert the IPv4 address 192.168.0.14 from decimal to hexadecimal, 192 converts to
C0, 168 converts to A8, 0 converts to 00, and 14 converts to 0E.
HostA will not use the IPv6 address 2012:0:0:99:192.168.0.14, because it does not contain
enough bits to create a valid 128-bit IPv6 address. The prefix 2012:0:0:99::/64 contains 64 bits for
the prefix, and 64 bits are needed for the interface identifier. IPv4 addresses contain only 32 bits,
so appending 192.168.0.14 directly to the prefix creates an address with only 96 bits.
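The derivation described above can be reproduced in a few lines of Python; the sketch builds the EUI-64 interface identifier from the MAC address 00-33-66-99-BB-EE and joins it to the 2012:0:0:99::/64 prefix.

# Derive an EUI-64 interface identifier and IPv6 address from a MAC address.
import ipaddress

def eui64_address(prefix, mac):
    octets = bytearray(int(byte, 16) for byte in mac.split("-"))
    octets[0] ^= 0x02                   # invert the U/L bit (seventh bit of the first octet)
    interface_id = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])   # insert FFFE
    network = ipaddress.ip_network(prefix)
    return ipaddress.ip_address(int(network.network_address)
                                | int.from_bytes(interface_id, "big"))

print(eui64_address("2012:0:0:99::/64", "00-33-66-99-BB-EE"))
# 2012::99:233:66ff:fe99:bbee (option A, with leading zeros suppressed and in lowercase)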

107
Q

QUESTION NO: 149
SwitchA and SwitchB are connected by an 802.1Q trunk link with the default settings.
Which of the following is most likely to occur if you change the native VLAN to VLAN 10 on the
trunk interface of SwitchB?
A.
Traffic will be sent between the two switches, and traffic sent over VLAN 10 will be untagged.
B.
Traffic will be sent between the switches, but problems could occur because of a native VLAN
mismatch.
C.
Traffic from VLANs other than VLAN 10 will be sent between the two switches, but no traffic from
VLAN 10 will be sent between the switches.
D.
No traffic will be sent between the two switches.

A

Answer: B
Explanation:
Traffic will be sent between the switches, but problems could occur because of a native virtual
local area network (VLAN) mismatch. SwitchA is configured to use the default native VLAN, VLAN
1. Modifying the native VLAN to VLAN 10 on SwitchB could cause problems because of the
different native VLANs. A mismatched native VLAN configured on either of the two ends of a trunk
link could cause problems when traffic is sent by using one of the configured native VLANs. The
traffic may be sent, but the native VLAN mismatch could potentially cause the traffic to be
misdirected or dropped. In addition, when Dynamic Trunking Protocol (DTP) is used to negotiate
the formation of a trunk link between switches, DTP uses the native VLAN for its packets. If the
native VLAN is not the same on both ends of the link, a trunk will not dynamically form.
Spanning Tree Protocol (STP) issues, such as unexpected loops, can occur if there is a native
VLAN mismatch on the ends of a trunk link. If Per-VLAN Spanning Tree Plus (PVST+) is enabled,
one or more of the following error messages might appear on the console to indicate a native
VLAN mismatch:
%SPANTREE-SP-2-RECV-PVID-ERR: Received BPDU with inconsistent peer Vlan id 1 on
GigabitEthernet1/1 VLAN10
%SPANTREE-SP-2-BLOCK-PVID-PEER: Blocking GigabitEthernet1/1 on VLAN0001. Inconsistent
peer vlan.
%SPANTREE-SP-2-BLOCK-PVID-LOCAL: Blocking GigabitEthernet1/1 on VLAN0001.
Inconsistent local vlan.
Traffic over the native VLAN is not tagged, which means that an 802.1Q header is not added to
the frame. When a switch receives a frame without an 802.1Q header, the switch knows that the
frame is part of the native VLAN. Thus both SwitchA and SwitchB should be configured with the
same native VLAN in order to ensure that traffic flows correctly between the switches.
Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3560/software/release/12-
2_52_se/configuration/guide/3560scg/swvlan.html#wp1200245
https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-
2SX/configuration/guide/book/dot1qtnl.html#wp1006495
CCNA 200-301 Official Cert Guide, Volume 1, Chapter 8: Implementing Ethernet Virtual LANs,
Mismatched Native VLAN on a Trunk

108
Q

QUESTION NO: 150
You are configuring Layer 2 security on a WLAN by using the WLC GUI. You select WPA+WPA2
from the Layer 2 Security drop-down list box. You want to minimize the amount of time it takes an
802.1X client to roam between access points.
Which of the following WPA2 key management methods should you select from the Auth Key
Mgmt drop-down list box?
A.
802.1X
B.
PSK
C.
802.1X+CCKM
D.
CCKM

A

Answer: C
Cisco 200-301 Exam
“Pass Any Exam. Any Time.” - www.actualtests.com 253
Explanation:
You should select the 802.1X+CCKM Wi-Fi Protected Access 2 (WPA2) key management method
from the Auth Key Mgmt drop-down list box in order to minimize the amount of time it takes an
Institute of Electrical and Electronics Engineers (IEEE) 802.1X wireless client to roam between
access points. The 802.1X+CCKM option enables 802.1X clients to use the Cisco Centralized Key
Management (CCKM) key management method to roam between access points without
performing the complete 802.1X authentication process again. Normally, 802.1X clients mutually
authenticate to a new access point. This process likewise involves reauthenticating with the
Remote Authentication Dial-In User Service (RADIUS) server. The 802.1X+CCKM key
management method eliminates the need to reauthenticate with the RADIUS server, thus reducing
the amount of time it takes for an 802.1X client to roam between access points.
You should not select the CCKM key management method in this scenario. This option enables
the CCKM key management method but does not minimize delay specifically for 802.1X clients.
CCKM is a Cisco-proprietary fast-rekeying method that enables a wireless client to roam from one
access point to another without requiring intervention from the Cisco Wireless LAN Controller
(WLC). CCKM is typically used to reduce delay when wireless clients transition between access
points so that delay-sensitive services, such as Voice over Internet Protocol (VoIP), operate
smoothly.
You should not select the 802.1X key management method in this scenario. The IEEE 802.1X
standard defines a method of port-based network access control. On Cisco wireless local area
networks (WLANs), the 802.1X key management method is the default method for both WPA and
WPA2. It typically requires a RADIUS server and uses various Extensible Authentication Protocol
(EAP) implementations to authenticate users. Combining WPA or WPA2 with an 802.1X key
management method is often known as WPA-802.1X mode, or WPA Enterprise.
You should not select the PSK key management method in this scenario. The PSK method
configures WPA or WPA2 to use the Pre-Shared Key (PSK) key management method. This
method requires an administrator to configure each wireless client that will connect to the network
with the key that is configured on the WLC. The PSK option supports key entry as either an ASCII
passphrase from 8 through 63 characters in length or a key of 64 hexadecimal values. Combining
WPA or WPA2 with a PSK key management method is often known as WPA-PSK, or WPA
Personal.
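Although the scenario uses the WLC GUI, the same selection can typically also be made from the AireOS CLI. The following is a rough sketch that assumes WLAN ID 1; exact command availability can vary by controller release:

(Cisco Controller) > config wlan disable 1
(Cisco Controller) > config wlan security wpa akm 802.1x enable 1
(Cisco Controller) > config wlan security wpa akm cckm enable 1
(Cisco Controller) > config wlan enable 1

Enabling both 802.1X and CCKM as authenticated key management corresponds to the 802.1X+CCKM option in the GUI's Auth Key Mgmt drop-down list box.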
Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-5/configguide/b_cg85/wlan_security.html#ID996

109
Q

QUESTION NO: 152
How many octets of a MAC address represent the OUI?
A.
three
B.
five
C.
one
D.
two
E.
four

A

Answer: A
Explanation:
The first three octets of a Media Access Control (MAC) address represent the organizationally
unique identifier (OUI), which is assigned by the Institute of Electrical and Electronics Engineers
(IEEE) to identify the manufacturer of the device. The last three octets make up the unique
network interface card (NIC)-specific identifier assigned to the device by the manufacturer.
A MAC address, also known as a physical address, is a 48-bit address that is permanently
encoded on a NIC. MAC addresses are written in hexadecimal format and are composed of six 8-
bit octets, for a total of 6 bytes of data in the entire address.
The most significant bytes are at the beginning, or leftmost octet, and are transmitted first. Bytes
decrease in significance as you move to the right through the address to the least significant octet
appearing at the end, or rightmost octet.
The significance of each octet follows the same rule as the overall address: the most significant bit is on the left, and the least significant bit is on the right. However, bit order on the wire differs from byte order: within each byte, the least significant bit is transmitted first. The two least significant bits of the most significant byte of a MAC address are used as indicator flags; these two bits are referred to here as bit 2 and bit 1.
The least significant bit, or bit 1, of the most significant byte is where a MAC address is designated
as a unicast address or a multicast address; a 0 equates to unicast, and a 1 equates to multicast.
The second least significant bit, or bit 2, is used to designate whether the MAC address is globally
administered by the IEEE and carries an OUI or whether the MAC address is locally administered;
a 0 indicates the presence of an OUI, and a 1 indicates a locally administered MAC address.

110
Q

QUESTION NO: 153
Which of the following is used to run a guest OS within a host OS?
A.
a VM
B.
a virtual PBX
C.
virtual memory
D.
a virtual switch

A

Answer: A
Explanation:
A virtual machine (VM) is used to run a guest operating system (OS) within a host OS. Though
VMs share hardware resources with the host OS, they are otherwise isolated from one another.
VMs can be used for a variety of purposes, such as software testing or hosting specific network
services. An additional programming layer, known as a hypervisor, is required in order for the VM
to communicate with the host hardware. A hypervisor is used to allocate hardware resources, such
as hard drive space, central processing unit (CPU), and random access memory (RAM), to the
VM.
A virtual switch is a virtual device used to allow multiple VMs to communicate within a host
system. A VM needs a virtual network interface card (NIC) in order to communicate with other
devices. Each virtual NIC is assigned a unique Media Access Control (MAC) address. Similar to a
hardware-based switch, a virtual switch maintains a table of MAC-to-port associations. When data
is sent from one VM to another, the virtual switch will use this table to determine which port to use
to forward the received data.
A virtual private branch exchange (PBX) is a virtual device used to route telephone calls. A PBX
serves as a centralized device that routes calls between a telephone company and phones within
a single office location. In addition, PBX systems can be used to connect fax and voicemail
services. A virtual PBX is a software-based exchange that can run on a desktop computer or a VM
instead of on a dedicated device.
Virtual memory is an allocated section of hard drive space that can be used as additional RAM. If
more memory is needed than is available in RAM, data is moved out of physical RAM in chunks of
data called pages. When the data stored in a page is needed again, the page is moved back into
RAM.
Reference: https://www.ibm.com/cloud/learn/virtual-machi

111
Q

QUESTION NO: 156
Which of the following CoS priority values does a Cisco IP phone assign to traffic received from a
host on its access port by default?
A.
3
B.
7
C.
0
D.
5

A

Answer: C
Explanation:
By default, a Cisco Internet Protocol (IP) phone assigns a Class of Service (CoS) priority value of
0 to traffic received from a host on its access port. Because voice traffic is vulnerable to
degradation and deterioration if the traffic is sent unevenly, IP phones support Quality of Service
(QoS) that is based on the Institute of Electrical and Electronics Engineers (IEEE) 802.1p CoS
standard. QoS uses the CoS priority value to prioritize the forwarding of voice and data packets in
a predictable fashion. Because data packets from the host computer and voice packets from the
IP phone share a physical link to the switch, a method to prioritize the transmission of the voice
packets over the data packets is required. A problem occurs when the data packets transmitted by
the host have a higher CoS priority value than the voice packets that are generated by the IP
phone. If this happens, the data packets could take precedence over the voice packets and cause
unacceptable degradation of the voice call. Therefore, the default behavior of a Cisco IP phone is to
override the CoS priority value assigned by the host and reassign the lowest CoS priority value of
0 to the data packets.
In addition, you can configure the IP phone to reclassify the CoS priority value that the host
assigns to its data packets to a specific value, instead of the default CoS priority value of 0. The
CoS priority value can range from 0 through 7, with 7 being the highest priority. By default, Cisco
IP phones classify voice data traffic with a CoS priority of 5 and voice signaling traffic with a CoS
priority value of 3. Overriding the CoS priority to a specific value ensures that voice packets will
have a higher priority than the data packets and the voice packets will be given preference over
the data packets as they are processed by the switch.
However, under certain circumstances, such as when the data transmitted by the host is mission-critical, you might want the IP phone to trust the host-generated CoS priority value assigned to the
data packets. In those circumstances, you can configure the IP phone so that it does not override
the CoS values from the host but accepts the existing CoS value as valid and forwards unchanged
data packets to the switch.
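As a hedged sketch of this behavior on a Catalyst access switch (the interface number and voice VLAN are assumed for illustration), the switchport priority extend command controls how the phone treats the CoS value received from the attached host:

Switch(config)# interface FastEthernet0/4
Switch(config-if)# switchport voice vlan 10
Switch(config-if)# switchport priority extend cos 2
! alternatively, to have the phone trust the host-generated CoS value:
Switch(config-if)# switchport priority extend trust

With no switchport priority extend configuration, the phone's default behavior applies and traffic from the host is re-marked to CoS 0.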
Reference: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3560/software/release/12-
2_52_se/configuration/guide/3560scg/swvoip.html#wp1033848 CCNA 200-301 Official Cert
Guide, Volume 1, Chapter 8: Implementing Ethernet Virtual LANs, Implementing Interfaces
Connected to Phones

112
Q

QUESTION NO: 157
LAG is enabled on a WLC that contains eight distribution system ports. All eight distribution
system ports are connected to a single switch that is correctly configured to unconditionally bundle
its ports. Seven of the eight links fail.
Which of the following is true?
A.
The WLC will pass all wireless client traffic to the switch.
B.
The WLC will intermittently pass wireless client traffic to the switch.
C.
The WLC will automatically reconfigure all eight ports as 802.1Q trunk ports.
D.
The WLC will no longer pass wireless client traffic to the switch.

A

Answer: A
Explanation:
The Cisco wireless LAN controller (WLC) will pass all wireless client traffic to the switch, even
though seven of the eight links in the link aggregation (LAG) bundle have failed in this scenario.
LAG enables multiple distribution system ports on a WLC to operate as one logical group when
connected to a switch.
Thus, LAG provides both load balancing across the links and redundancy. If one link fails, the other links in the LAG bundle will continue to function. When LAG is enabled, it applies to all of the WLC's distribution system ports. However, LAG requires only one functional physical port in order to pass wireless client traffic to the switch.
LAG will pass all wireless client traffic; it will not intermittently pass traffic. Nor will LAG stop
passing traffic. Similar to EtherChannel, LAG enables redundancy. If one physical port fails in a
LAG bundle, the other ports are capable of passing client traffic in that port’s place. If all but one
port in a LAG bundle fails, that port will pass client traffic for all of the failed ports.
LAG will not automatically reconfigure all eight ports as Institute of Electrical and Electronics
Engineers (IEEE) 802.1Q trunk ports. Although a distribution system port by default connects to a
switch in IEEE 802.1Q trunk mode, LAG does not fall back to that mode if one functional physical
port remains in the bundle.
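Because the scenario states that the switch unconditionally bundles its ports, the switch side would use a static EtherChannel (mode on) rather than LACP or PAgP, which WLC LAG does not negotiate. A minimal sketch, with the interface numbering assumed for illustration:

Switch(config)# interface range GigabitEthernet1/0/1 - 8
Switch(config-if-range)# switchport mode trunk
Switch(config-if-range)# channel-group 1 mode on

Trunking and encapsulation details vary by switch platform; the key point is that the eight physical ports are placed into a single unconditional bundle facing the WLC.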
Reference: https://www.cisco.com/c/en/us/td/docs/wireless/controller/7-
4/configuration/guides/consolidated/b_cg74_CONSOLIDATED/b_cg74_CONSOLIDATED_chapter
_010100001.html#ID1363 CCNA 200-301 Official Cert Guide, Volume 1, Chapter 29: Building a
Wireless LAN, Using WLC Port

113
Q

QUESTION NO: 158
You want to configure SSH for incoming VTY connections on a new router. The router is running a
K9 IOS image but has not yet been configured with a host name, a domain name, or an RSA key
pair. In addition, the VTY lines are not yet configured to accept incoming SSH connections.
You issue the ip ssh time-out 60 command from global configuration mode to configure the router
with a 60-second timeout.
Which of the following messages will you most likely receive?
A.
Please define a domain-name first.
B.
Please define a hostname other than Router.
C.
Please create RSA keys to enable SSH.
D.
Invalid input detected at ‘^’ marker.
E.
Please enable SSH as a transport mode.

A

Answer: C
Explanation:
You will most likely receive the Please create RSA keys to enable SSH message when you issue
the ip ssh time-out 60 command from global configuration mode. To enable Secure Shell (SSH)
for virtual terminal (VTY) lines on a Cisco router, you should complete the following steps; a combined configuration sketch follows the list:
1. Configure the router with a host name other than Router by issuing the hostname command.
2. Configure the router with a domain name by issuing the ip domain-name command.
3. Generate an RSA key pair for the router by issuing the crypto key generate rsa command.
4. Configure the VTY lines to use SSH by issuing the transport input ssh command from line
configuration mode.
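Putting the four steps together, a minimal sketch (the host name R1, the domain name example.com, and the VTY line range are assumed for illustration):

Router(config)# hostname R1
R1(config)# ip domain-name example.com
R1(config)# crypto key generate rsa
R1(config)# ip ssh time-out 60
R1(config)# line vty 0 4
R1(config-line)# transport input ssh
! assumes a local user account has been created with the username command
R1(config-line)# login local

The crypto key generate rsa command prompts for a modulus size; SSH version 2 typically requires a modulus of at least 768 bits.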
SSH is often used as a secure replacement for Telnet to manage network devices. In order for
SSH to be enabled on a Cisco device, the device must be running a K9 IOS image, which
provides cryptographic functionality.
You will not receive the Invalid input detected at ‘^’ marker message when you issue the ip ssh
time-out 60 command in this scenario. You would receive the Invalid input detected at ‘^’ marker
message if you were to mistype the time-out keyword or if you were to try to configure the SSH
timeout with a value greater than 120 seconds. Although SSH is not yet enabled in this scenario,
the router will accept the ip ssh time-out 60 command as a valid configuration. The ip ssh time-out 60 command would appear in the configuration if you were to issue the show running-config
command.
You will not receive the Please define a hostname other than Router message when you issue the
ip ssh time-out 60 command in this scenario. However, because you have not configured the
router with a host name other than the default name of Router, you would receive the Please
define a hostname other than Router message if you were to issue the crypto key generate rsa
command. To configure a router with a host name other than the default, you should issue the
hostname host-name command from global configuration mode.
You will not receive the Please define a domain-name first message when you issue the ip ssh
time-out 60 command in this scenario. However, if you had configured the router with a valid host
name but had not configured the router with a domain name, you would receive the Please define
a domain-name first message if you were to issue the crypto key generate rsa command. In this
scenario, you have configured neither the domain name nor the host name. To configure a router
with a domain name, you should issue the ip domain-name domain-name command from global
configuration mode.
You will not receive the Please enable SSH as a transport mode message when you issue the ip
ssh time-out 60 command in this scenario. The Please enable SSH as a transport mode
message is not a warning message that is displayed on Cisco routers. You can issue the
transport input ssh command to configure SSH as the transport mode for VTY lines.

114
Q

QUESTION NO: 159
You issue the following commands on SwitchA:
SwitchA#configure terminal
SwitchA(config)#interface fastethernet 0/1
SwitchA(config-if)#switchport port-security
SwitchA(config-if)#switchport port-security maximum 12
You want to configure SwitchA to discard traffic and increment the SecurityViolation counter when
it receives traffic on FastEthernet 0/1 from a host with an unauthorized MAC address.
Which of the following commands should you issue?
A.
switchport port-security violation protect
B.
switchport port-security violation discard
C.
switchport port-security violation restrict
D.
switchport port-security violation shutdown

A

Answer: C
Explanation:
You should issue the switchport port-security violation restrict command to configure SwitchA
to discard traffic and increment the SecurityViolation counter when SwitchA receives traffic on
FastEthernet 0/1 from a host with an unauthorized Media Access Control (MAC) address. The
syntax of the switchport port-security violation command is switchport port-security violation
{protect | restrict | shutdown}.
The switchport port-security violation protect command configures a switch port to discard
traffic that it receives from unauthorized hosts. However, the SecurityViolation counter is not
incremented when the protect keyword is used.
The switchport port-security violation shutdown command configures a switch port to enter
the error-disabled state when the port receives traffic from unauthorized hosts. You can remove
the switch port from the error-disabled state by issuing the errdisable recovery cause shutdown
command from global configuration mode or by issuing the shutdown and no shutdown
commands from interface configuration mode.
The switchport port-security violation discard command contains incorrect syntax, because
discard is not a valid keyword of the switchport port-security violation command.
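A minimal sketch that combines the commands from the scenario with the restrict violation mode (the switchport mode access command is included because port security requires a statically configured access or trunk port, not a dynamic port):

SwitchA(config)# interface FastEthernet 0/1
SwitchA(config-if)# switchport mode access
SwitchA(config-if)# switchport port-security
SwitchA(config-if)# switchport port-security maximum 12
SwitchA(config-if)# switchport port-security violation restrict
SwitchA(config-if)# end
SwitchA# show port-security interface fastethernet 0/1

The show port-security interface command displays the configured violation mode and the current SecurityViolation count for the port.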

115
Q

QUESTION NO: 160
Which of the following protocols uses both TCP and UDP?
A.
DNS
B.
Telnet
C.
TFTP
D.
DHCP
E.
FTP

A

Answer: A
Explanation:
Domain Name System (DNS) uses both Transmission Control Protocol (TCP) and User Datagram
Protocol (UDP) over port 53. DNS uses a hierarchical database to translate fully qualified domain
names (FQDNs) to Internet Protocol (IP) addresses. An FQDN includes the host name of a device
and the domain name to which that device is connected. DNS enables you to access a computer
named server1 on the abc.com domain by using the FQDN server1.abc.com instead of the IP
address of the computer.
TCP is a Transport layer protocol that is used for reliable, synchronized, connection-oriented
transfer of data. Data sent by TCP is sequenced and checked for errors, and any lost packets are
retransmitted. File Transfer Protocol (FTP), which is used to transfer files over a network, uses
TCP ports 20 and 21. Telnet, a terminal emulation protocol that can be used to remotely log on to
a router, uses TCP port 23. Other Application layer protocols that use TCP include Simple Mail
Transfer Protocol (SMTP), which uses TCP port 25; Hypertext Transfer Protocol (HTTP), which
uses TCP port 80; and Post Office Protocol 3 (POP3), which uses TCP port 110. Neither FTP nor
Telnet uses UDP to communicate.
UDP is a Transport layer protocol that is used for unreliable, connectionless datagram transfer.
Transmitted datagrams can appear out of sequence or can be dropped without notice. Dynamic
Host Configuration Protocol (DHCP), which assigns IP addressing and default gateway
information to clients, uses UDP ports 67 and 68. Trivial File Transfer Protocol (TFTP), which is
used to transfer files over a network, uses UDP port 69. Other Application layer protocols that use
UDP include Network Time Protocol (NTP), which uses UDP port 123; Simple Network
Management Protocol (SNMP), which uses UDP ports 161 and 162; and Remote Authentication
Dial-In User Service (RADIUS), which uses UDP ports 1812 and 1813. Neither DHCP nor TFTP
uses TCP to communicate.

116
Q

QUESTION NO: 161
You connect a new, unconfigured switch to an existing switch’s FastEthernet 0/1 interface. That
interface was previously connected to an end user’s workstation. You notice that the FastEthernet
0/1 interface on the existing switch enters the error-disabled state.
Which of the following are the most likely causes of the problem? (Choose two.)
A.
The interface on the new switch is a statically configured trunk port.
B.
Root guard is enabled on the FastEthernet 0/1 interface.
C.
Loop guard is enabled on the FastEthernet 0/1 interface.
D.
PortFast is enabled on the FastEthernet 0/1 interface.
E.
BPDU guard is enabled on the FastEthernet 0/1 interface.

A

Answer: D,E
Explanation:
Of the available choices, the most likely cause of the problem is that the existing switch’s
FastEthernet 0/1 interface has PortFast and BPDU guard enabled. BPDU guard is used to disable
ports that erroneously receive bridge protocol data units (BPDUs). BPDU guard is typically applied
to edge ports that have PortFast enabled. Because PortFast automatically places ports into a
forwarding state, a switch that has been connected to a PortFast-enabled port could cause
switching loops. However, when BPDU guard is applied, the receipt of a BPDU on a port will result
in the port being placed into the error-disabled state, which prevents loops from occurring. When
such a port receives a BPDU, BPDU guard immediately puts that port into the error-disabled state
and shuts down the port. The port must then be manually re-enabled, or it can be recovered
automatically by configuring the errdisable recovery cause bpduguard command and the
errdisable recovery interval interval command.
BPDU guard should be enabled on ports that have been enabled with PortFast so that BPDU
guard can prevent a rogue switch from modifying the Spanning Tree Protocol (STP) topology.
PortFast is a feature that provides immediate accessibility to the network for edge ports, such as
access ports that are connected to end-user workstations. PortFast transitions the port into the
STP forwarding state without going through the STP listening and learning states.
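As a hedged sketch of a typical edge-port configuration that would produce the behavior in this scenario (the interface number comes from the scenario; the recovery timer value is assumed):

Switch(config)# interface FastEthernet 0/1
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable
Switch(config-if)# exit
Switch(config)# errdisable recovery cause bpduguard
Switch(config)# errdisable recovery interval 300

With this configuration, the port forwards immediately when an end-user workstation is connected, but it is placed into the error-disabled state as soon as a BPDU from the newly attached switch is received; the errdisable recovery commands allow the port to recover automatically after 300 seconds.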
It is not likely that the interface on the new switch is a statically configured trunk port. Depending
on the Cisco hardware model, it is possible that Dynamic Trunking Protocol (DTP) is by default
configured in dynamic desirable mode, which means that the new device would automatically send
out DTP frames in an attempt to negotiate trunking mode. If the FastEthernet 0/1 interface on the
existing switch is not statically configured as an access port with the switchport mode access
command, the ports could negotiate a trunk link.
It is not likely that loop guard is enabled on the FastEthernet 0/1 interface. The loop guard feature
prevents nondesignated ports from inadvertently forming bridging loops if the steady flow of
BPDUs is interrupted. When the port stops receiving BPDUs, loop guard puts the port into the
loop-inconsistent state, which keeps the port in a blocking state. After the port starts receiving
BPDUs again, loop guard automatically re-enables the port so that it transitions through the
normal STP states. You can enable loop guard for the entire switch by issuing the spanning-tree
loopguard default command in global configuration mode, or you can enable loop guard for
specific ports by issuing the spanning-tree guard loop command in interface configuration mode.
It is not likely that root guard is enabled on the FastEthernet 0/1 interface. Root guard is used to
prevent newly introduced switches from being elected as the new root switch. This allows
administrators to maintain control over which switch is the root. When STP is used, the device with
the lowest switch priority is elected the root. If a new device is added to the network with a lower
priority than the current root, it will become the new root. However, this could cause the network to
reconfigure in unintended ways. To prevent this, root guard can be applied. Root guard is applied
on a per-port basis by issuing the spanning-tree guard root command. If root guard is enabled
on a loop guard-enabled port, loop guard will be automatically disabled.

117
Q

QUESTION NO: 162
Which of the following is a Cisco-proprietary FHRP that elects an AVG and up to four primary
AVFs?
A.
GLBP
B.
VRRP
C.
HSRP
D.
LACP

A

Answer: A
Explanation:
Gateway Load Balancing Protocol (GLBP) is a Cisco-proprietary First-Hop Redundancy Protocol
(FHRP) that elects an active virtual gateway (AVG) and up to four primary active virtual forwarders
(AVFs). FHRPs are protocols that are used to provide Layer 3 gateway redundancy, such as
failover and load balancing. Providing Layer 3 redundancy ensures that hosts on a local area
network (LAN) will have a backup path to external networks should a primary path fail or become
too congested to forward traffic. Layer 3 devices in an FHRP configuration typically share a virtual
Internet Protocol (IP) address that is then configured as the default gateway on each host for
which the device is to forward traffic. The FHRP devices might also share a single virtual Media
Access Control (MAC) address or provide multiple virtual MAC addresses, depending on the
protocol. FHRPs typically use a priority system to elect a primary Layer 3 forwarding device, which
is known as an AVG, an active router, or a master router, depending on the protocol. The same
priority system elects either a single or multiple backup-forwarding devices.
Each GLBP group contains an AVG that is elected based on which router is configured with the
highest priority value or the highest IP address value if multiple routers are configured with the
highest priority value. The other routers in the GLBP group are configured as primary or secondary
AVFs. GLBP can support up to 1,024 virtual routers on a physical interface. The AVG in a GLBP
group assigns a virtual MAC address to a maximum of four primary AVFs; all other routers in the
group are considered secondary AVFs and are placed in the listen state. When the AVG receives
Address Resolution Protocol (ARP) requests that are sent to the virtual IP address for the GLBP
group, the AVG responds with different virtual MAC addresses. This provides load balancing,
because each of the primary AVFs will participate by forwarding a portion of the traffic sent to the
virtual IP address. The primary difference between GLBP and other FHRPs is that, by default,
GLBP load balances between every router in the GLBP group. Other protocols either cannot load
balance or require additional configuration in order to load balance.
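A minimal GLBP sketch (the interface, addressing, and priority values are assumed for illustration); the other routers in the group would use the same group number and virtual IP address:

RouterA(config)# interface GigabitEthernet0/0
RouterA(config-if)# ip address 10.0.0.2 255.255.255.0
RouterA(config-if)# glbp 1 ip 10.0.0.1
RouterA(config-if)# glbp 1 priority 110
RouterA(config-if)# glbp 1 preempt

Hosts on the LAN would then use the virtual IP address 10.0.0.1 as their default gateway, and the AVG would hand out different virtual MAC addresses in its ARP replies to balance traffic across the AVFs.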
Hot Standby Router Protocol (HSRP) is a Cisco-proprietary FHRP; however, HSRP elects only an
active router and a standby router. Based on priority value, HSRP elects a single active router and
a standby router. The active router is the router with the highest priority; it forwards packets,
responds to ARP requests with a virtual MAC address, and can be the only router that is explicitly
configured with the virtual IP address. The standby router is the router with the second-highest
priority. If multiple HSRP routers have the same priority, the router with the highest IP address is
elected as the active router. The router with the second-highest IP address is elected as the
standby router, which will assume the role of the active router if the active router fails. To
participate in the active and standby router election process, each HSRP router must be a
member of the same group. An HSRP group is identified by a group number from 0 through 255.
The default HSRP group value is 0. Unlike GLBP, HSRP does not load balance by default.
However, it is possible to load balance traffic between HSRP gateways by creating up to 255
HSRP groups on an interface and configuring each group so that it elects a different active router.
This is known as a multigroup HSRP configuration.
Virtual Router Redundancy Protocol (VRRP) is not a Cisco-proprietary FHRP. VRRP is an Internet
Engineering Task Force (IETF)-standard FHRP that is supported by both Cisco and non-Cisco
devices. Similar to HSRP, VRRP elects a master router that forwards packets, responds to ARP
requests, and can be the only router that is explicitly configured with the virtual IP address. It is
important to note that Cisco-specific enhancements to VRRP might not be available when
connecting a Cisco device that is using VRRP to a non-Cisco device that is using VRRP. If only
Cisco devices are used in the topology and a choice between HSRP and VRRP is available, Cisco
recommends using HSRP. Both HSRP and VRRP can be used to configure failover in case a
primary default gateway goes down.

118
Q

QUESTION NO: 163
Which of the following statements is not true regarding the IaaS service model?
A.
The consumer has control over the physical infrastructure in the cloud.
B.
The consumer has control over the allocation of processing, memory, storage, and network
resources within the cloud.
C.
The consumer has control over the configuration of the OS running on the physical infrastructure
in the cloud.
D.
The consumer has control over development tools or APIs in the cloud running on the physical
infrastructure in the cloud.

A

Answer: A
Explanation:
In the Infrastructure as a Service (IaaS) service model, the consumer does not have control over
the physical infrastructure in the cloud. The National Institute of Standards and Technology (NIST)
defines three service models in its definition of cloud computing: IaaS, Software as a Service
(SaaS), and Platform as a Service (PaaS).
The IaaS service model provides the greatest degree of freedom by enabling its consumer to
provision processing, memory, storage, and network resources within the cloud infrastructure. The
IaaS service model also enables its consumer to install applications, including operating systems
(OSs) and custom applications. However, with IaaS, the cloud infrastructure remains in control of
the service provider. A company that hires a service provider to deliver cloud-based processing
and storage that will house multiple physical or virtual hosts configured in a variety of ways is
using IaaS. For example, a company that wanted to establish a web server farm by configuring
multiple Linux Apache MySQL PHP (LAMP) servers could save hardware costs by virtualizing the
farm and using a provider’s cloud service to deliver the physical infrastructure and bandwidth for
the virtual farm. Control over the OS, software, and server configuration would remain the
responsibility of the organization, whereas the physical infrastructure and bandwidth would be the
responsibility of the service provider. Using a third party’s infrastructure to host corporate Domain
Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) servers is another
example of IaaS.
The SaaS service model enables its consumer to access applications running in the cloud
infrastructure but does not enable the consumer to manage the cloud infrastructure or the
configuration of the provided applications. Of the three service models, SaaS exposes the least
amount of the consumer’s network to the cloud and is the least likely to require changes to the
consumer’s network design. A company that licenses a service provider’s office suite and email
service that is delivered to end users through a web browser is using SaaS. SaaS providers use
an Internet-enabled licensing function, a streaming service, or a web application to provide end
users with software that they might otherwise install and activate locally. Web-based email clients,
such as Gmail and Outlook.com, are examples of SaaS.
The PaaS service model provides its consumer with slightly more freedom than the SaaS model
by enabling the consumer to install and possibly configure provider-supported applications in the
cloud infrastructure. A company that uses a service provider’s infrastructure, programming tools,
and programming languages to develop and serve cloud-based applications is using PaaS. PaaS
enables a consumer to use the service provider’s development tools or Application Programming
Interface (API) to develop and deploy specific cloud-based applications or services. Another
example of PaaS might be using a third party’s MySQL database and Apache services to build a
cloud-based customer relationship management (CRM) platform.

119
Q

QUESTION NO: 164
Which of the following do RED and WRED address?

A

Answer: A
Explanation:
Random early detection (RED) and weighted RED (WRED) are congestion avoidance
mechanisms that address tail drop, which occurs when new incoming packets are dropped
because a router’s queues are too full to accept them. Tail drop particularly affects Transmission
Control Protocol (TCP) traffic, because when TCP packets are dropped, the sources of the traffic
must retransmit the lost TCP packets. Additionally, the TCP traffic sources will detect the
congestion and will correspondingly slow down the rate at which they send data until the
congestion clears. When the congestion clears, the TCP sources speed up data transmission,
which again causes congestion; this ebb and flow of traffic is called global TCP synchronization.
RED mitigates the problems caused by global TCP synchronization by randomly dropping packets
as congestion increases and before the queue becomes full. As the average size of the queue
increases, RED will randomly drop packets at an increasingly faster rate. WRED improves upon
RED by employing different tail drop thresholds for each IP precedence or Differentiated Services
Code Point (DSCP) value, whereby lower-priority traffic is more likely to be dropped than higher-priority traffic.
RED and WRED do not address bandwidth starvation. Queuing methods, such as weighted fair
queuing (WFQ), class-based WFQ (CBWFQ), or low latency queuing (LLQ), mitigate bandwidth
starvation. Bandwidth starvation occurs when higher-priority queues monopolize an interface’s
bandwidth so that traffic from lower-priority queues is never sent.
RED and WRED do not address bandwidth guarantees. CBWFQ and LLQ provide bandwidth
guarantees by allowing the creation of up to 64 custom traffic classes, each with a guaranteed
minimum bandwidth. Bandwidth can be allocated as a value in Kbps, as a percentage of
bandwidth, or as a percentage of the remaining bandwidth.
RED and WRED do not address strict-priority queuing. LLQ improves upon CBWFQ through the
support of strict-priority queues that can be used for delay-sensitive traffic. The strict-priority
queues can use as much bandwidth as possible but can use only the guaranteed minimum
bandwidth when other queues have traffic to send, thereby avoiding bandwidth starvation for the
lower-priority queues.
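A minimal MQC sketch of DSCP-based WRED (the class name, DSCP values, bandwidth percentage, and interface are assumed for illustration):

Router(config)# class-map match-any BULK-DATA
Router(config-cmap)# match dscp af11 af12 af13
Router(config-cmap)# exit
Router(config)# policy-map WAN-EDGE
Router(config-pmap)# class BULK-DATA
Router(config-pmap-c)# bandwidth percent 20
Router(config-pmap-c)# random-detect dscp-based
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface Serial0/0
Router(config-if)# service-policy output WAN-EDGE

The random-detect dscp-based command enables WRED within the class so that, as the queue fills, packets with a higher drop precedence (for example, AF13) are discarded earlier than packets with a lower drop precedence.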

120
Q

QUESTION NO: 165
You want to configure an IP address for a router’s serial interface that will provide a point-to-point
connection to a branch office.
Which of the following IP addresses are you most likely to use?
A.
172.16.17.23/30
B.
172.16.17.19/30
C.
172.16.17.20/30
D.
172.16.17.24/30
E.
172.16.17.18/30

A

Answer: E
Explanation:
Of the choices available, the 172.16.17.18/30 Internet Protocol (IP) address is the most
appropriate address for the router’s serial interface in this scenario. To conserve IP addresses,
you should always use a /30 subnet mask for point-to-point links.
In this scenario, you should first determine whether each address is a network address, a
broadcast address, or a host address. A /30 subnet mask, which is equivalent to 255.255.255.252,
indicates that 30 bits are used for the network portion of the address and that 2 bits remain for the
host portion of the address, which allows for 2, or 2² − 2, host addresses. The first address is the
subnet address, the next two addresses are valid host addresses, and the last address is the
broadcast address for the subnet. Networks that are subnetted by using /30 masks are separated
into groups of four addresses each. For example, the 172.16.17.0 network can be divided into the
following subnets:
172.16.17.0/30
172.16.17.4/30
172.16.17.8/30
172.16.17.12/30
172.16.17.16/30
172.16.17.20/30
172.16.17.24/30
172.16.17.28/30
and so on
The addresses in the list above are considered network addresses. The two addresses after the
network address are host addresses, and the final address in the group of four is considered the
broadcast address. Network and broadcast addresses cannot be assigned to hosts.
The 172.16.17.18 address is a host address on the 172.16.17.16/30 network. The network
address is 172.16.17.16, the two available host addresses are 172.16.17.17 and 172.16.17.18,
and the broadcast address is 172.16.17.19.
The 172.16.17.19/30 and 172.16.17.23/30 IP addresses cannot be used as the router’s serial
interface address in this scenario, because they are broadcast addresses; 172.16.17.19/30 is the
broadcast address for the 172.16.17.16/30 subnet, whereas 172.16.17.23/30 is the broadcast
address for the 172.16.17.20/30 subnet.
The 172.16.17.20/30 and 172.16.17.24/30 IP addresses cannot be used as the router’s serial
interface address in this scenario, because they are network addresses; 172.16.17.20/30 is the
network address for the subnet containing the hosts 172.16.17.21/30 and 172.16.17.22/30,
whereas 172.16.17.24/30 is the network address for the subnet containing the hosts
172.16.17.25/30 and 172.16.17.26/30.
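A minimal sketch of the point-to-point addressing (the Serial0/0 interface numbering and the choice of which end receives which host address are assumed for illustration):

RouterA(config)# interface Serial0/0
RouterA(config-if)# ip address 172.16.17.18 255.255.255.252
RouterA(config-if)# no shutdown

RouterB(config)# interface Serial0/0
RouterB(config-if)# ip address 172.16.17.17 255.255.255.252
RouterB(config-if)# no shutdown

The addresses 172.16.17.17 and 172.16.17.18 are the only usable host addresses on the 172.16.17.16/30 subnet, which is exactly enough for a point-to-point link.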
Reference: https://www.cisco.com/c/en/us/support/docs/ip/routing-information-protocol-rip/13788-
3.html CCNA 200-301 Official Cert Guide, Volume 1, Chapter 13: Analyzing Subnet Masks,
Calculations Based on the IPv4 Address Format