CISSP-P3 Flashcards
- Regarding media sanitization, which of the following is the
correct order for fully and physically destroying hand-held
devices, such as cell phones?
1. Incinerate
2. Disintegrate
3. Pulverize
4. Shred
a. 3, 2, 1, and 4
b. 4, 2, 3, and 1
c. 1, 4, 3, and 2
d. 1, 2, 4, and 3
- b. The correct order for fully and physically destroying hand-held
devices such as cell phones is shred, disintegrate, pulverize, and
incinerate. This is the best recommended practice for both public and
private sector organizations.
Shredding is a method of sanitizing media and is the act of cutting or
tearing into small particles. Here, the shredding step comes first to
make the cell phone inoperable quickly. Disintegration is a method of
sanitizing media and is the act of separating the equipment into
component parts. Disintegration cannot be the first step because a
determined attacker could reassemble those parts and make the cell
phone work again. Pulverization is a method of sanitizing media and is
the act of grinding the media to a powder or dust. Incineration is a
method of sanitizing media and is the act of burning the media
completely to ashes in a licensed incinerator. Note that one does not
need to complete all these methods; the process can stop after any step
once the final goal has been reached, based on the sensitivity and
criticality of the data on the device.
- Which of the following commercial off-the-shelf integrity
mechanisms detect unauthorized changes to software and
information?
1. Tamper-evident system components
2. Parity checks
3. Cyclical redundancy checks
4. Cryptographic hashes
a. 2 only
b. 2 and 3
c. 3 and 4
d. 1, 2, 3, and 4
- d. Organizations employ integrity verification mechanisms to look
for evidence of tampering, errors, and omissions. Software engineering
techniques such as parity checks, cyclical redundancy checks, and
cryptographic hashes are applied to the information system. In
addition, tamper-evident system components are required during
shipment from software vendors to operational sites and during their
operation.
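As a rough illustration of how such integrity mechanisms can be applied, the following Python sketch computes both a CRC-32 checksum and a SHA-256 cryptographic hash of a file and compares them against previously recorded baseline values; the file name and baseline values are hypothetical, not taken from the flashcard.

```python
# Minimal sketch: detect unauthorized changes by comparing a CRC and a
# cryptographic hash against recorded baselines. Values below are hypothetical.
import hashlib
import zlib

def compute_integrity_values(path: str) -> tuple[int, str]:
    """Return (CRC-32 checksum, SHA-256 hex digest) for a file."""
    crc = 0
    sha = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            crc = zlib.crc32(chunk, crc)   # error-detecting checksum
            sha.update(chunk)              # tamper-resistant cryptographic hash
    return crc, sha.hexdigest()

EXPECTED_CRC = 0x3610A686                  # hypothetical recorded baseline
EXPECTED_SHA256 = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

crc, digest = compute_integrity_values("application.bin")
if crc != EXPECTED_CRC or digest != EXPECTED_SHA256:
    print("Integrity check failed: possible unauthorized change")
```

A parity check or CRC catches accidental corruption, while only the cryptographic hash offers meaningful protection against deliberate tampering.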
- Effective configuration change controls for hardware, software,
and firmware include:
1. Auditing the enforcement actions
2. Preventing the installation of software without a signed certificate
3. Enforcing the two-person rule for changes to systems
4. Limiting the system developer/integrator privileges
a. 1 only
b. 3 only
c. 2 and 4
d. 1, 2, 3, and 4
- d. All four items are effective in managing configuration changes to
hardware, software, and firmware components of a system.
- An information system can be protected against denial-of-service (DoS) attacks through:
1. Network perimeter devices
2. Increased capacity
3. Increased bandwidth
4. Service redundancy
a. 2 only
b. 3 only
c. 4 only
d. 1, 2, 3, and 4
- d. Network perimeter devices can filter certain types of packets to
protect devices on an organization’s internal network from being
directly affected by denial-of-service (DoS) attacks. Employing
increased capacity and increased bandwidth combined with service
redundancy may reduce susceptibility to some types of DoS attacks. A
side benefit of these measures is improved availability of data.
- What is the major purpose of conducting a post-incident analysis
for a computer security incident?
a. To determine how security threats and vulnerabilities were
addressed
b. To learn how the attack was done
c. To re-create the original attack
d. To execute the response to an attack
- a. The major reason for conducting a post-incident analysis is to
determine whether security weaknesses were properly and effectively
addressed. Security holes must be plugged to prevent recurrence. The
other three choices are minor reasons.
- Which of the following is an example of a reactive approach to
software security?
a. Patch-and-patch
b. Penetrate-and-patch
c. Patch-and-penetrate
d. Penetrate-and-penetrate
- b. Crackers and hackers attempt to break into computer systems by
finding flaws in software, and then system administrators apply
patches sent by vendors to fix the flaws. In this scenario of
penetrate-and-patch, patches are applied after penetration has occurred,
which is an example of a reactive approach. The scenario of patch-and-patch is
good because one is always patching, which is a proactive approach.
The scenario of patch-and-penetrate is a proactive approach in which
organizations apply vendor patches in a timely manner. There is not
much damage done when crackers and hackers penetrate (break) into
the computer system because all known flaws are fixed. In this
scenario, patches are applied before penetration occurs. The scenario
of penetrate-and-penetrate is bad because patches are not applied at all
or are not effective.
- Regarding a patch management program, which of the following
is an example of a vulnerability?
a. Misconfigurations
b. Rootkits
c. Trojan horses
d. Exploits
- a. Misconfiguration vulnerabilities cause a weakness in the security
of a system. Vulnerabilities can be exploited by a malicious entity to
violate policies such as gaining greater access or permission than is
authorized on a computer. Threats are capabilities or methods of attack
developed by malicious entities to exploit vulnerabilities and
potentially cause harm to a computer system or network. Threats
usually take the form of exploit scripts, worms, viruses, rootkits,
Trojan horses, and other exploits.
- An information system initiates session auditing work at system:
a. Restart
b. Shutdown
c. Startup
d. Abort
- The major reason for retaining older versions of the baseline
configuration is to support:
a. Roll forward
b. Rollback
c. Restart
d. Restore
- b. A rollback is restoring a database from one point in time to an
earlier point. A roll forward is restoring the database from a point in
time when it is known to be correct to a later time. A restart is the
resumption of the execution of a computer system using the data
recorded at a checkpoint. A restore is the process of retrieving a dataset
migrated to offline storage and restoring it to online storage.
- Which of the following updates the applications software and
the systems software with patches and new versions?
a. Preventive maintenance
b. Component maintenance
c. Hardware maintenance
d. Periodic maintenance
- a. The scope of preventive maintenance includes updating
applications software and systems software with patches and new
versions, replacing failed hardware components, and more.
The other three choices are incorrect because they can be a part of
corrective maintenance (fixing errors) or remedial maintenance (fixing
faults).
- Regarding incident handling, dynamic reconfiguration does not
include changes to which of the following?
a. Router rules
b. Access control lists
c. Filter rules
d. Software libraries
- d. Software libraries are covered by access restrictions for change, so
changes to them are controlled rather than made dynamically. Dynamic
reconfiguration (i.e., changes made on the fly) can include changes to
router rules, access control lists, intrusion detection and prevention
system (IDPS) parameters, and filter rules for firewalls and gateways.
- Prior to initiating maintenance work by maintenance vendor
personnel who do not have the needed security clearances and
access authorization to classified information, adequate controls
include:
1. Sanitize all volatile information storage components
2. Remove all nonvolatile storage media
3. Physically disconnect the storage media from the system
4. Properly secure the storage media with physical or logical access
controls
a. 1 only
b. 2 only
c. 2, 3, and 4
d. 1, 2, 3, and 4
- d. All four items are adequate controls to reduce the risk resulting
from maintenance vendor personnel’s access to classified information.
For handling classified information, maintenance personnel should
possess security clearance levels equal to the highest level of security
required for an information system.
- A security configuration checklist is referred to as which of the
following?
1. Lockdown guide
2. Hardening guide
3. Security guide
4. Benchmark guide
a. 1 and 2
b. 1 and 3
c. 2 and 3
d. 1, 2, 3, and 4
- d. A security configuration checklist is referred to by several
names, such as a lockdown guide, hardening guide, security technical
implementation guide, or benchmark guide. These guides provide a
series of instructions or procedures for configuring an information
system’s components to meet operational needs and regulatory
requirements.
- Regarding the verification of correct operation of security
functions, which of the following is the correct order of alternative
actions when anomalies are discovered?
1. Report the results.
2. Notify the system administrator.
3. Shut down the system.
4. Restart the system.
a. 1, 2, 3, and 4
b. 3, 4, 2, and 1
c. 2, 1, 3, and 4
d. 2, 3, 4, and 1
- d. The correct order of alternative actions is notify the system
administrator, shut down the system, restart the system, and report the
results of security function verification.
- The audit log does not include which of the following?
a. Timestamp
b. User’s identity
c. Object’s identity
d. The results of action taken
- d. The audit log includes a timestamp, user’s identity, object’s
identity, and type of action taken, but not the results from the action
taken. The person reviewing the audit log needs to verify that the
results of the action taken were appropriate.
- Which of the following fault tolerance metrics are most
applicable to the proper functioning of redundant array of
independent disks (RAID) systems?
1. Mean time between failures (MTBF)
2. Mean time to data loss (MTTDL)
3. Mean time to recovery (MTTR)
4. Mean time between outages (MTBO)
a. 1 and 2
b. 1 and 3
c. 2 and 3
d. 3 and 4
- c. Rapidly replacing a RAID array’s failed drives or disks and
quickly rebuilding the array is important, and this is facilitated
specifically and mostly through applying the MTTDL and MTTR
metrics. The MTTDL metric measures the average time before a loss of
data occurs in a given disk array. The MTTR metric measures the
amount of time it takes to resume normal operation, and includes the
time to replace a failed disk and the time to rebuild the disk array.
Thus, the MTTDL and MTTR metrics help prevent data loss and ensure
timely data recovery.
MTBF and MTBO metrics are incorrect because they are broad
measures of system reliability and availability, respectively, and are
not specifically applicable to RAID systems. The MTBF
metric measures the average time interval between system failures and
the MTBO metric measures the mean time between equipment
failures.
- All the following have redundancy built in except:
a. Fast Ethernet
b. Fiber distributed data interface
c. Normal Ethernet
d. Synchronous optical network
- c. Normal Ethernet does not have built-in redundancy. Fast
Ethernet has built-in redundancy with redundant cabling for file
servers and network switches. Fiber distributed data interface (FDDI)
offers an optional bypass switch at each node for addressing failures.
Synchronous optical network (SONET) is inherently redundant and
fault tolerant by design.
- Which of the following go hand-in-hand?
a. Zero-day warez and content delivery networks
b. Zero-day warez and ad-hoc networks
c. Zero-day warez and wireless sensor networks
d. Zero-day warez and converged networks
- a. Zero-day warez (negative day or zero-day) refers to software,
games, music, or movies (media) unlawfully released or obtained on
the day of public release. An internal employee of a content delivery
company or an external hacker obtains illegal copies on the day of the
official release. Content delivery networks distribute such media from
the content owner. The other three networks do not distribute such
media.
Bluetooth mobile devices use ad-hoc networks, wireless sensor
networks monitor security of a building perimeter and environmental
status in a building (temperature and humidity), and converged
networks combine two different networks such as voice and data.
- Which of the following provides total independence?
a. Single-person control
b. Dual-person control
c. Two physical keys
d. Two hardware tokens
- a. Single-person control means total independence because there is
only one person performing a task or activity. In the other three
choices, two individuals or two devices (for example, keys and tokens)
work together, which is difficult to bypass unless collusion is involved.
- The use of a no-trespassing warning banner at a computer
system’s initial logon screen is an example of which of the
following?
a. Correction tactic
b. Detection tactic
c. Compensating tactic
d. Deterrence tactic
- d. The use of no-trespassing warning banners on initial logon
screens is a deterrent tactic to scare system intruders and to provide
legal evidence. The other three choices come after the deterrence
tactic.
- Countermeasures applied when inappropriate and/or
unauthorized modifications have occurred to security functions
include:
1. Reversing the change
2. Halting the system
3. Triggering an audit alert
4. Reviewing the records of change
a. 1 only
b. 2 only
c. 3 only
d. 1, 2, 3, and 4
- d. Safeguards and countermeasures (controls) applied when
inappropriate and/or unauthorized modifications have occurred to
security functions and mechanisms include reversing the change,
halting the system, triggering an audit alert, and reviewing the records
of change. These countermeasures would reduce the risk to an
information system.
- Which of the following situations provides no security
protection?
a. Controls that are designed and implemented
b. Controls that are developed and implemented
c. Controls that are planned and implemented
d. Controls that are available, but not implemented
- d. Controls that are available in a computer system, but not
implemented, provide no protection.
- A computer system is clogged in which of the following
attacks?
a. Brute force attack
b. Denial-of-service attack
c. IP spoofing attack
d. Web spoofing attack
- b. The denial-of-service (DoS) type of attack denies services to
users by either clogging the system with a series of irrelevant messages
or sending disruptive commands to the system. It does not damage the
data. A brute force attack is trying every possible decryption key
combination to break into a computer system. In an Internet Protocol
(IP) spoofing attack, intruders create packets with spoofed source IP
addresses and then take over open terminal and login connections. In a
Web spoofing attack, the intruder sits between the
victim user and the Web, thereby making it a man-in-the-middle attack.
The user is duped into supplying the intruder with passwords, credit
card information, and other sensitive and useful data.
- Which of the following is not an effective, active, and
preventive technique to protect the integrity of audit information
and audit tools?
a. Backing up the audit records
b. Using a cryptographically signed hash
c. Protecting the key used to generate the hash
d. Using the public key to verify the hash
- a. Backing up the audit records is a passive and detective action,
and hence not effective in protecting integrity. In general, backups
provide availability of data, not integrity of data, and they are there
when needed. The other three choices, which are active and preventive,
use cryptographic mechanisms (for example, keys and hashes), and
therefore are effective in protecting the integrity of audit-related
information.
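As a sketch of the active, preventive approach described in the other three choices, the example below hashes an audit record, signs the hash with a private key, and verifies it with the corresponding public key. It assumes the third-party Python `cryptography` package and an Ed25519 key pair; the record content is made up.

```python
# Sketch: protect audit-record integrity with a signed hash (assumes the
# third-party "cryptography" package; the audit record below is hypothetical).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

audit_record = b"2024-01-01T00:00:00Z admin EXECUTE /usr/sbin/useradd outcome=success"

private_key = Ed25519PrivateKey.generate()   # in practice, kept tightly protected
public_key = private_key.public_key()

digest = hashlib.sha256(audit_record).digest()
signature = private_key.sign(digest)         # sign the hash of the record

try:
    public_key.verify(signature, digest)     # anyone with the public key can verify
    print("audit record intact")
except InvalidSignature:
    print("audit record or signature has been altered")
```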
- Regarding a patch management program, which of the
following should not be done to a compromised system?
a. Reformatting
b. Reinstalling
c. Restoring
d. Remigrating
- d. In most cases a compromised system should be reformatted and
reinstalled or restored from a known safe and trusted backup.
Remigrating, which deals with switching between automated and
manual patching tools and methods, should not be performed on a
compromised system.
- Which of the following is the most malicious Internet-based
attack?
a. Spoofing attack
b. Denial-of-service attack
c. Spamming attack
d. Locking attack
- b. A denial-of-service (DoS) attack is the most malicious Internet-based attack because it floods the target computer with hundreds of
incomplete Internet connections per second, effectively preventing any
other network connections from being made to the victim network
server. The result is a denial-of-service to users, consumption of
system resources, or a crash in the target computer. Spoofing attacks
use various techniques to subvert IP-based access control by
masquerading as another system by using its IP address. Spamming
attacks post identical messages to multiple unrelated newsgroups. They
are often used in cheap advertising to promote pyramid schemes or
simply to annoy people. A locking attack prevents users from accessing
and running shared programs such as those found in the Microsoft
Office product.
- Denial-of-service attacks can be prevented by which of the
following?
a. Redundancy
b. Isolation
c. Policies
d. Procedures
- a. Redundancy in data and/or equipment can be designed so that
service cannot be removed or denied. Isolation is just the opposite of
redundancy. Policies and procedures are not effective against
denial-of-service (DoS) attacks because they are examples of management
controls; DoS attacks require technical controls such as redundancy.
- Which of the following denial-of-service attacks in networks is
least common in occurrence?
a. Service overloading
b. Message flooding
c. Connection clogging
d. Signal grounding
- d. In denial-of-service (DoS) attacks, some users prevent other
legitimate users from using the network. Signal grounding, which can
be performed in wiring closets, can be used to disable a network. This can
prevent users from transmitting or receiving messages until the
problem is fixed. Signal grounding is the least common in occurrence
as compared to other choices because it requires physical access.
Service overloading occurs when floods of network requests are made
to a server daemon on a single computer, which then cannot process its
regular tasks in a timely manner.
Message flooding occurs when a user slows down the processing of a
system on the network, to prevent the system from processing its
normal workload, by “flooding” the machine with network messages
addressed to it. The system spends most of its time responding to these
messages.
Connection clogging occurs when users make connection requests with
forged source addresses that specify nonexistent or unreachable hosts
that cannot be contacted. Thus, there is no way to trace the connections
back; they remain until they time out or are reset. The goal is to use up
the limit of partially open connections.
- Smurf is an example of which of the following?
a. IP address spoofing attack
b. Denial-of-service attack
c. Redirect attack
d. TCP sequence number attack
- b. Smurf attacks use a network that accepts broadcast ping packets
to flood the target computer with ping reply packets. The goal of a
smurf attack is to deny service.
Internet Protocol (IP) address spoofing attack and transmission control
protocol (TCP) sequence number attack are examples of session
hijacking attacks. IP address spoofing falsifies the identity of a
computer system. In a redirect attack, a hacker redirects the TCP
stream through the hacker’s computer. The TCP sequence number
attack is a prediction of the sequence number needed to carry out an
unauthorized handshake.
- The demand for reliable computing is increasing. Reliable
computing has which of the following desired elements in
computer systems?
a. Data integrity and availability
b. Data security and privacy
c. Confidentiality and modularity
d. Portability and feasibility
- a. Data integrity and availability are two important elements of
reliable computing. Data integrity is the concept of ensuring that data
can be maintained in an unimpaired condition and is not subject to
unauthorized modification, whether intentional or inadvertent.
Products such as backup software, antivirus software, and disk repair
utility programs help protect data integrity in personal computers (PCs)
and workstations. Availability is the property that a given resource will
be usable during a given time period. PCs and servers are becoming an
integral part of complex networks with thousands of hardware and
software components (for example, hubs, routers, bridges, databases,
and directory services) and the complex nature of client/server
networks drives the demand for availability. System availability is
increased when system downtime or outages are decreased and when
fault tolerance hardware and software are used.
Data security, privacy, and confidentiality are incorrect because they
deal with ensuring that data is disclosed only to authorized individuals
and have nothing to do with reliable computing. Modularity deals with
the breaking down of a large system into small modules. Portability
deals with the ability of application software source code and data to
be transported without significant modification to more than one type
of computer platform or more than one type of operating system.
Portability has nothing to do with reliable computing. Feasibility deals
with the degree to which the requirements can be implemented under
existing constraints.
- Which of the following is not a part of implementation of
incident response support resources in an organization?
a. Help desk
b. Assistance group
c. Forensics services
d. Simulated events
- d. An organization incorporates simulated events into incident
response training to facilitate effective response by individuals in crisis
situations. The other three choices are possible implementations of
incident response support resources in an organization.
- Software flaw remediation is best when it is incorporated into
which of the following?
a. Configuration management process
b. Security assessments
c. Continuous monitoring
d. Incident response activities
- a. Software flaws result in potential vulnerabilities. The
configuration management process can track and verify the required or
anticipated flaw remediation actions.
Flaws discovered during security assessments, continuous monitoring,
incident-response activities, or system error handling activities become
inputs to the configuration management process. Automated patch
management tools should facilitate flaw remediation by promptly
installing security-relevant software updates (for example, patches,
service packs, and hot fixes).
- Audit trails establish which of the following information
security objectives?
a. Confidentiality
b. Integrity
c. Accountability
d. Availability
- c. Accountability is the existence of a record that permits the
identification of an individual who performed some specific activity so
that responsibility for that activity can be established through audit
trails. Audit trails do not establish the other three choices.
- Audit trails are least useful to which of the following?
a. Training
b. Deterrence
c. Detection
d. Prosecution
- a. Audit trails are useful in detecting unauthorized and illegal
activities. They also act as a deterrent and aid in prosecution of
transgressors. They are least useful in training because audit trails are
recorded after the fact. They show what was done, when, and by
whom.
- In terms of audit records, which of the following information is
most useful?
1. Timestamps
2. Source and destination addresses
3. Privileged commands
4. Group account users
a. 1 only
b. 1 and 2
c. 3 and 4
d. 1, 2, 3, and 4
- c. Audit records contain minimum information such as timestamps,
source and destination addresses, and the outcome of the event (i.e.,
success or failure). The most useful information, however, is the
recording of privileged commands and the individual identities of
group account users.
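For illustration only, a structured audit record that captures both the baseline fields and the more useful ones (the privileged command and the individual behind a group account) might look like the following Python sketch; the field names and values are assumptions, not a standard format.

```python
# Hypothetical audit record: baseline fields plus the higher-value details
# (privileged command and the individual identity behind a group account).
import json
from datetime import datetime, timezone

audit_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "source_ip": "10.0.0.15",
    "destination_ip": "10.0.0.200",
    "outcome": "success",
    "privileged_command": "/usr/bin/sudo systemctl stop auditd",
    "group_account": "dbadmin",
    "individual_identity": "jsmith",   # who actually used the shared account
}
print(json.dumps(audit_record, indent=2))
```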
- Which of the following is an example of improper separation of
duties?
a. Computer security is embedded into computer operations.
b. Security administrators are separate from security auditors.
c. Mission-critical functions and support functions are separate
from each other.
d. Quality assurance is separate from network security.
- a. A natural tension often exists between computer security and
computer operations functions. Some organizations embed a computer
security program in computer operations to resolve this tension. The
typical result of this organizational strategy is a computer security
program that lacks independence, has minimal authority, receives little
management attention, and has few resources to work with. The other
three choices are examples of proper separation of duties.
- What are labels used on internal data structures called?
a. Automated marking
b. Automated labeling
c. Hard-copy labeling
d. Output labeling
- b. Automated labeling refers to labels used on internal data
structures such as records and files within the information system.
Automated marking refers to labels used on external media such as
hard-copy documents and output from the information system (for
example, reports).
- Which of the following is not allowed when an information
system cannot be sanitized due to a system failure?
a. Periodic maintenance
b. Remote maintenance
c. Preventive maintenance
d. Detective maintenance
- b. Media sanitization (scrubbing) means removing information
from media such that information recovery is not possible.
Specifically, it removes all labels, markings, and activity logs. An
organization approves, controls, and monitors remotely executed
maintenance and diagnostic activities. If the information system cannot
be sanitized due to a system failure, remote maintenance is not allowed
because it is a high-risk situation. The other three types of maintenance
are low risk situations.
- Regarding configuration change management, organizations
should analyze new software in which of the following libraries
before installation?
a. Development library
b. Test library
c. Quarantine library
d. Operational library
- b. Organizations should analyze new software in a separate test
library before installation in an operational environment. They should
look for security impacts due to software flaws, security weaknesses,
data incompatibility, or intentional malice in the test library. The
development library is used solely for new development work or
maintenance work. Some organizations use a quarantine library, as an
intermediate library, before moving the software into the operational
library. The operational library is where the new software resides for
day-to-day use.
- Current operating systems are far more resistant to which of
the following types of denial-of-service attacks and have become
less of a threat?
a. Reflector attack
b. Amplified attack
c. Distributed attack
d. SYNflood attack
- d. Synchronized flood (SYNflood) attacks often target an application
or daemon, such as a Web server, rather than the operating
system (OS) itself, although the OS may be impacted by the resources
used by the attack. It is good to know that current operating systems
are far more resistant to SYNflood attacks, and many firewalls now
offer protections against such attacks, so they have become less of a
threat. Still, SYNfloods can occur if attackers initiate many thousands
of transmission control protocol (TCP) connections in a short time.
The other three types of attacks are more of a threat. In a reflector
attack, a host sends many requests with a spoofed source address to a
service on an intermediate host. Like a reflector attack, an amplified
attack involves sending requests with a spoofed source address to an
intermediate host. However, an amplified attack does not use a single
intermediate host; instead, its goal is to use a whole network of
intermediate hosts. Distributed attacks coordinate attacks among many
computers (i.e., zombies).
- Which of the following is the correct sequence of solutions for
containing a denial-of-service incident?
1. Relocate the target computer.
2. Have the Internet service provider implement filtering.
3. Implement filtering based on the characteristics of the attack.
4. Correct the vulnerability that is being exploited.
a. 2, 3, 1, and 4
b. 2, 4, 3, and 1
c. 3, 4, 2, and 1
d. 4, 3, 1, and 2
- c. The decision-making process for containing a denial-of-service
(DoS) incident should be easier if recommended actions are
predetermined. The containment strategy should include several
solutions in sequence as shown in the correct answer.
- Computer security incident handling can be considered that
portion of contingency planning that responds to malicious
technical threats (for example, a virus). Which of the following
best describes a secondary benefit of an incident handling
capability?
a. Containing and repairing damage from incidents
b. Preventing future damage
c. Using the incident data in enhancing the risk assessment process
d. Enhancing the training and awareness program
- c. An incident handling capability may be viewed as a component of
contingency planning because it provides the ability to react quickly
and efficiently to disruptions in normal processing. Incidents can be
logged and analyzed to determine whether there is a recurring problem,
which would not be noticed if each incident were viewed only in
isolation. Statistics on the numbers and types of incidents in the
organization can be used in the risk assessment process as an
indication of vulnerabilities and threats.
Containing and repairing damage from incidents and preventing future
damage are incorrect because they are examples of primary benefits
of an incident handling capability. An incident handling capability can
provide enormous benefits by responding quickly to suspicious activity
and coordinating incident handling with responsible offices and
individuals as necessary. Incidents can be studied internally to gain a
better understanding of the organization’s threats and vulnerabilities.
Enhancing the training and awareness program is an example of a
secondary benefit. Based on incidents reported, training personnel will
have a better understanding of users’ knowledge of security issues.
Training that is based on current threats and controls recommended by
incident handling staff provides users with information more
specifically directed to their current needs. Using the incident data in
enhancing the risk assessment process is the best answer when
compared to enhancing the training and awareness program.
- Automatic file restoration requires which of the following?
a. Log file and checkpoint information
b. Access file and check digit information
c. Transaction file and parity bit information
d. Backup file and checkpoint information
- a. Automatic file restoration requires log file and checkpoint
information to recover from a system crash. A backup file is different
from a log file in that it can be a simple copy of the original file
whereas a log file contains specific and limited information. The other
three choices do not have the log file capabilities.
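The idea can be illustrated with a minimal roll-forward sketch: restore the last checkpoint, then replay the updates recorded in the log after that checkpoint. The checkpoint state and log entries below are invented for illustration.

```python
# Minimal roll-forward sketch: start from the checkpointed state and replay
# the logged updates recorded after it. Data and structure are hypothetical.
checkpoint = {"balance_a": 100, "balance_b": 50}   # state saved at the checkpoint
log = [("balance_a", -30), ("balance_b", +30)]     # updates made after the checkpoint

def roll_forward(checkpoint_state: dict, log_entries: list) -> dict:
    state = dict(checkpoint_state)           # begin from the known-good checkpoint
    for key, delta in log_entries:           # replay each logged change in order
        state[key] = state.get(key, 0) + delta
    return state

print(roll_forward(checkpoint, log))         # {'balance_a': 70, 'balance_b': 80}
```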
- Which of the following is the most common type of
redundancy?
a. Cable backup
b. Server backup
c. Router backup
d. Data backup
- d. In general, redundancy means having extra, duplicate elements
to compensate for any malfunctions or emergencies that could occur
during normal, day-to-day operations. The most common type of
redundancy is the data backup, although the concept is often applied to
cabling, server hardware, and network connectivity devices such as
routers and switches.
- Increasing which one of the following items increases the other
three items?
a. Reliability
b. Availability
c. Redundancy
d. Serviceability
- c. Reliability minimizes the possibility of failure; availability is a
measurement of uptime; and serviceability is a measure of the amount
of time it takes to repair a problem or to restore a system following a
failure. Increasing redundancy increases reliability, availability, and
serviceability.
- Which of the following is often overlooked in building
redundancy?
a. Disks
b. Processors
c. Electrical power
d. Controllers
- c. Redundant electric power and cooling are important but often
overlooked parts of a contingency plan. Network administrators usually
plan for backup disks, processors, controllers, and system boards.
- Network availability is increased with which of the following?
a. Data redundancy
b. Link redundancy
c. Software redundancy
d. Power redundancy
- b. Link redundancy, due to redundant cabling, increases network
availability because it provides a parallel path that runs next to the
main data path and a routing methodology that can establish an
alternative path if the main path fails. In other words, there are two
paths: a main path and an alternative path. The other three
redundancies are useful in their own way, but they do not increase
network availability.
- What does an effective backup method for handling large
volumes of data in a local-area-network environment include?
a. Backing up at the workstation
b. Backing up at the file server
c. Using faster network connection
d. Using RAID technology
- b. Backing up at the file server is effective for a local-area network
due to the server’s greater storage capacity. Backing up at the
workstation lacks storage capacity, and redundant array of independent
disks (RAID) technology is mostly used for mainframes. Using a faster
network connection increases speed but does not by itself provide
backup.
- Network reliability is increased most with which of the
following?
a. Alternative cable
b. Alternative network carrier
c. Alternative supplies
d. Alternative controllers
- b. An alternative network carrier as a backup provides the highest
reliability. If the primary carrier goes down, the backup can still work.
The other three choices do provide some reliability, but not the
ultimate reliability as with the alternative network carrier.
- In a local-area network environment, which of the following
requires the least redundancy planning?
a. Cables
b. Servers
c. Power supplies
d. Hubs
- d. Many physical problems in local-area networks (LANs) are
related to cables because they can be broken or twisted. Servers can be
physically damaged due to disk head crash or power irregularities such
as over or under voltage conditions. An uninterruptible power supply
provides power redundancy and protection to servers and workstations.
Servers can be disk duplexed for redundancy. Redundant topologies
such as star, mesh, or ring can provide a duplicate path should a main
cable link fail. Hubs require physical controls such as lock and key
because they are stored in wiring closets, although they too can benefit
from redundancy, which can be expensive. Given the choices, it
is preferable to have redundant facilities for cables, servers, and power
supplies.
- System reliability controls for hardware include which of the
following?
a. Mechanisms to decrease mean time to repair and to increase
mean time between failures
b. Redundant computer hardware
c. Backup computer facilities
d. Contingency plans
- a. Mean time to repair (MTTR) is the amount of time it takes to
resume normal operation. It is expressed in minutes or hours taken to
repair computer equipment. The smaller the MTTR for hardware, the
more reliable it is. Mean time between failures (MTBF) is the average
length of time the hardware is functional. MTBF is expressed as the
average number of hours or days between failures. The larger the
MTBF for hardware, the more reliable it is.
Redundant computer hardware and backup computer facilities are
incorrect because they are examples of system availability controls.
They also address contingencies in case of a computer disaster.
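The relationship between these two metrics and availability is often summarized as Availability = MTBF / (MTBF + MTTR); the short sketch below uses assumed figures to show how a larger MTBF or a smaller MTTR raises availability.

```python
# Availability from reliability metrics using the commonly cited relationship
# Availability = MTBF / (MTBF + MTTR). The hour figures are assumptions.
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(f"{availability(1000, 4):.4%}")   # 99.6016% -- baseline
print(f"{availability(2000, 2):.4%}")   # 99.9001% -- larger MTBF, smaller MTTR
```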
- Fail-soft control is an example of which of the following?
a. Continuity controls
b. Accuracy controls
c. Completeness controls
d. Consistency controls
- a. As a part of the preventive control category, fail-soft is a
continuity control. It is the selective termination of affected
nonessential processing when a hardware or software failure is
detected in a computer system. A computer system continues to
function because of its resilience.
Accuracy controls are incorrect because they include data editing and
validation routines. Completeness controls are incorrect because they
look for the presence of all the required values or elements.
Consistency controls are incorrect because they ensure repeatability of
certain transactions with the same attributes.
- Information availability controls do not include which of the
following?
a. Backup and recovery
b. Storage media
c. Physical and logical security
d. Alternative computer equipment and facilities
- b. Storage media has nothing to do with information availability.
Data will be stored somewhere on some media. It is not a decision
criterion. Management’s goal is to gather useful information and to
make it available to authorized users. System backup and recovery
procedures and alternative computer equipment and facilities help
ensure that the recovery is as timely as possible. Both physical and
logical access controls are also important because system failures and
other interruptions are common.
- From an operations viewpoint, the first step in contingency
planning is to perform a(n):
a. Operating systems software backup
b. Applications software backup
c. Documentation backup
d. Hardware backup
- d. Hardware backup is the first step in contingency planning. All
computer installations must include formal arrangements for
alternative processing capability in the event their data center or any
portion of the work environment becomes disabled. These plans can
take several forms and involve the use of another data center. In
addition, hardware manufacturers and software vendors can be helpful
in locating an alternative processing site and in some cases provide
backup equipment under emergency conditions. The more common
plans are service bureaus, reciprocal arrangements, and hot sites.
After hardware is backed up, operating systems software is backed up
next, followed by applications software backup and documentation.
- The primary contingency strategy for application systems and
data is regular backup and secure offsite storage. From an
operations viewpoint, which of the following decisions is least
important to address?
a. How often is the backup performed?
b. How often is the backup stored offsite?
c. How often is the backup used?
d. How often is the backup transported?
- c. Normally, the primary contingency strategy for applications and
data is regular backup and secure offsite storage. Important decisions
to be addressed include how often the backup is performed, how often
it is stored offsite, and how it is transported to storage, to an alternative
processing site, or to support the resumption of normal operations.
How often the backup is used is not relevant because it is hoped that it
may never have to be used.
- Which of the following is not totally possible from a security
control viewpoint?
a. Detection
b. Prevention
c. Correction
d. Recovery
- b. Total prevention is not possible because of its high cost and
technical limitations. Under these conditions, detection becomes more
important, and it can be cheaper than prevention, although not all
attacks can be detected in time. Both correction and recovery come
after prevention or detection.
- The return on investment on quality is highest in which of the
following software defect prevention activities?
a. Code inspection
b. Reviews with users
c. Design reviews
d. Unit test
- b. It is possible to quantify the return on investment (ROI) for
various quality improvement activities. Studies have shown that
quality ROI is highest when software products are reviewed with user
customers. This is followed by code inspection by programmers,
design reviews with the project team, and unit testing by programmers.
- The IT operations management of KPT Corporation is
concerned about the reliability and availability data for its four
major, mission-critical information systems that are used by
business end-users. The KPT corporate management’s goal is to
improve the reliability and availability of these four systems in
order to increase customer satisfaction both internally and
externally. The IT operations management collected the following
data on percent reliability. Assume 365 operating days per year
and 24 hours per day for all these systems. The IT operations
management thinks that system reliability is important in
providing quality of service to end-users.
| System | Reliability % | Downtime (hours) | Availability % |
|--------|---------------|------------------|----------------|
| 1      | 99.50         | 44               | 99.50          |
| 2      | 97.50         | 219              | 97.50          |
| 3      | 98.25         | 153              | 98.25          |
| 4      | 95.25         | 416              | 95.25          |
Which of the following systems has the highest downtime in a year
expressed in hours and rounded up?
a. System 1
b. System 2
c. System 3
d. System 4
- d. System 4 has the highest downtime in hours. Theoretically
speaking, the higher the reliability of a system, the lower its downtime
(including scheduled maintenance) and the higher its availability, and
vice versa. In fact, this question requires no calculations because the
correct answer can be found just by looking at the reliability data
given: the lower the reliability, the higher the downtime, and vice
versa.
Calculations for System 1 are shown below; the other systems follow
the same pattern.
Downtime = (Total hours) × [(100 − Reliability%)/100] = 8,760 ×
0.005 = 44 hours
Availability for System 1 = [(Total time − Downtime)/Total time] ×
100 = [(8,760 − 44)/8,760] × 100 = 99.50%
Check: Availability for System 1 = [Uptime/(Uptime + Downtime)]
× 100 = (8,716/8,760) × 100 = 99.50%
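The same arithmetic can be reproduced for all four systems with a short sketch (8,760 hours per year); the reliability percentages are the ones given in the question.

```python
# Reproduces the downtime and availability figures for the four KPT systems.
TOTAL_HOURS = 365 * 24   # 8,760 operating hours per year

for system, reliability_pct in [(1, 99.50), (2, 97.50), (3, 98.25), (4, 95.25)]:
    downtime = TOTAL_HOURS * (100 - reliability_pct) / 100
    avail = (TOTAL_HOURS - downtime) / TOTAL_HOURS * 100
    print(f"System {system}: downtime ≈ {downtime:.0f} h, availability = {avail:.2f}%")
# System 4 shows the highest downtime (about 416 hours), matching answer d.
```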
- Which of the following is the most important requirement for a
software quality program to work effectively?
a. Quality metrics
b. Process improvement
c. Software reengineering
d. Commitment from all parties
- d. A software quality program should reduce defects, cut service
costs, increase customer satisfaction, and increase productivity and
revenues. To achieve these goals, commitment by all parties involved
is the most important factor. The other three factors such as quality
metrics, process improvement, and software reengineering have some
merit, but none is sufficient on its own.
- As the information system changes over time, which of the
following is required to maintain the baseline configuration?
a. Enterprise architecture
b. New baselines
c. Operating system
d. Network topology
- b. Maintaining the baseline configuration involves creating new
baselines as the information system changes over time. The other three
choices deal with information provided by the baseline configuration
as a part of standard operating procedure.
- Software quality is not measured by:
a. Defect levels
b. Customer satisfaction
c. Time-to-design
d. Continuous process improvement
- c. Quality is more than just defect levels. It should include
customer satisfaction, time-to-market, and a culture committed to
continuous process improvement. Time-to-design is not a complete
answer because it is a part of time-to-market, where the latter is
defined as the total time required for planning, designing, developing,
and delivering a product. It is the total time from concept to delivery.
These software quality values lead to quality education, process
assessments, and customer satisfaction.
- Which of the following responds to security incidents on an
emergency basis?
a. Tiger team
b. White team
c. Red team
d. Blue team
- b. A white team is an internal team that initiates actions to respond
to security incidents on an emergency basis. Both the red team and
blue team perform penetration testing of a system, and the tiger team is
an old name for the red team.
- Which of the following is the most important function of
software inventory tools in maintaining a consistent baseline
configuration?
a. Track operating system version numbers.
b. Track installed application systems.
c. Scan for unauthorized software.
d. Maintain current patch levels.
- c. Software inventory tools scan information systems for
unauthorized software to validate against the official list of authorized
and unauthorized software programs. The other three choices are
standard functions of software inventory tools.
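Conceptually, the unauthorized-software check reduces to comparing the installed inventory against an approved list, as in the sketch below; the package names and lists are made up.

```python
# Sketch: flag software that is installed but not on the authorized list.
# Package names are hypothetical examples.
authorized = {"openssh-server", "nginx", "postgresql", "auditd"}
installed = {"openssh-server", "nginx", "postgresql", "auditd",
             "netcat", "bittorrent-client"}

unauthorized = installed - authorized
if unauthorized:
    print("Unauthorized software found:", ", ".join(sorted(unauthorized)))
```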
- A user’s session auditing activities are performed in
consultation with which of the following?
a. Internal legal counsel and internal audit
b. Consultants and contractors
c. Public affairs or media relations
d. External law enforcement authorities and previous court cases
- a. An information system should provide the capability to
capture/record, log, and view all the content related to a user’s session
in real time. Session auditing activities are developed, integrated, and
used with internal legal counsel and internal audit departments. This is
because these auditing activities can have legal and audit implications.
Consultants and contractors should not be contacted at all. It is too
early to talk to the public affairs or media relations within the
organization. External law enforcement authorities should be contacted
only after the session auditing work is completed and only after there
is a discovery of high-risk incidents.
- Regarding access restrictions associated with changes to
information systems, which of the following makes it easy to
discover unauthorized changes?
a. Physical access controls
b. Logical access controls
c. Change windows
d. Software libraries
- c. Change windows mean that changes occur only during specified
times, so unauthorized changes made outside the window are easy to
discover. The other three choices are also examples of access
restrictions, but changes are not easy to discover with them.
- Which of the following is an example of software reliability
metrics?
a. Number of defects per million lines of source code with
comments
b. Number of defects per function point
c. Number of defects per million lines of source code without
comments
d. The probability of failure-free operation in a specified time
- d. Software quality can be expressed in two ways: defect rate and
reliability. Software quality means conformance to requirements. If the
software contains too many functional defects, the basic requirement
of providing the desired function is not met. Defect rate is the number
of defects per million lines of source code or per function point.
Reliability is expressed as the number of failures per “n” hours of
operation, mean time to failure, or the probability of failure-free
operation in a specified time. Reliability metrics deal with probabilities
and timeframes.
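As a worked illustration of the two ways of expressing quality, the sketch below computes a defect rate and, under a commonly assumed exponential failure model R(t) = e^(−t/MTTF), the probability of failure-free operation over a mission time; the defect counts, code size, and MTTF figures are invented.

```python
# Defect rate versus reliability, with made-up figures. The exponential model
# R(t) = exp(-t / MTTF) is an assumed (though common) reliability model.
import math

defects, kloc = 37, 250                      # defects found, thousand lines of code
print(f"Defect rate: {defects / kloc:.3f} defects per KLOC")

mttf_hours = 500                             # assumed mean time to failure
mission_hours = 24
reliability = math.exp(-mission_hours / mttf_hours)
print(f"P(failure-free for {mission_hours} h) = {reliability:.3f}")
```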
- From a Cleanroom software engineering viewpoint, software
quality is certified in terms of:
a. Mean-time between failures (MTBF)
b. Mean-time-to-failure (MTTF)
c. Mean-time-to-repair (MTTR)
d. Mean-time between outages (MTBO)
- b. Cleanroom operations are carried out by small independent
development and certification (test) teams. In Cleanroom, all testing is
based on anticipated customer usage. Test cases are designed to
exercise the more frequently used functions. Therefore, errors that are
likely to cause frequent failures to the users are found first. For
measurement, software quality is certified in terms of
mean-time-to-failure (MTTF). MTTF is most often used with safety-critical systems
such as airline traffic control systems because it measures the time
taken for a system to fail for the first time.
Mean-time between failures (MTBF) is incorrect because it is the
average length of time a system is functional. Mean-time-to-repair
(MTTR) is incorrect because it is the total corrective maintenance time
divided by the total number of corrective maintenance actions during a
given period of time. Mean-time-between outages (MTBO) is incorrect
because it is the mean time between equipment failures that result in
loss of system continuity or unacceptable degradation.
- In redundant array of independent disks (RAID) technology,
which of the following RAID levels does not require a hot spare
drive or disk?
a. RAID3
b. RAID4
c. RAID5
d. RAID6
- d. A hot spare drive is a physical drive resident on the disk array
which is active and connected but inactive until an active drive fails.
Then the system automatically replaces the failed drive with the spare
drive and rebuilds the disk array. A hot spare is a hot standby
providing a failover mechanism.
The RAID levels from 3 to 5 have only one disk of redundancy and
because of this a second failure would cause complete failure of the
disk array. On the other hand, the RAID6 level has two disks of
redundancy, providing a greater protection against simultaneous
failures. Hence, the RAID6 level does not need a hot spare drive whereas
the RAID 3 to 5 levels need a hot spare drive.
The RAID6 level without a spare uses the same number of drives (i.e.,
4 + 0 spare) as RAID3 to RAID 5 levels with a hot spare (i.e., 3 + 1
spare) thus protecting data against simultaneous failures. Note that a
hot spare can be shared by multiple RAID sets. On the other hand, a
cold spare drive or disk is not resident on the disk array and not
connected with the system. A cold spare requires a hot swap, which is
a physical (manual) replacement of the failed disk with a new disk
done by the computer operator.
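The drive-count comparison can be made concrete with a small sketch: with equal-size drives, a 3-drive RAID 5 set plus a hot spare and a 4-drive RAID 6 set with no spare consume the same four drives and yield the same usable capacity, but RAID 6 survives two simultaneous failures. The 2 TB drive size is an assumption.

```python
# Sketch comparing the configurations discussed above (assumed 2 TB drives).
DRIVE_TB = 2

def raid_summary(name: str, drives: int, parity_drives: int, spares: int) -> str:
    usable = (drives - parity_drives) * DRIVE_TB
    return (f"{name}: {drives} drives + {spares} spare, {usable} TB usable, "
            f"tolerates {parity_drives} simultaneous failure(s)")

print(raid_summary("RAID 5 with hot spare", 3, 1, 1))  # spare rebuilds after one failure
print(raid_summary("RAID 6 without spare", 4, 2, 0))   # double parity, survives two failures
```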
- An example of ill-defined software metrics is which of the
following?
a. Number of defects per thousand lines of code
b. Number of defects over the life of a software product
c. Number of customer problems reported to the size of the product
d. Number of customer problems reported per user month
- c. Software defects relate to source code instructions, and problems
encountered by users relate to usage of the product. If the numerator
and denominator are mixed up, poor metrics result. An example of an
ill-defined metric is the metric relating total customer problems to the
size of the product, where size is measured in millions of shipped
source instructions. This metric has no meaningful relation. On the
other hand, the other three choices are examples of meaningful
metrics. To improve customer satisfaction, you need to reduce defects
and overall problems.
- Which of the following information system component
inventory is difficult to monitor?
a. Hardware specifications
b. Software license information
c. Virtual machines
d. Network devices
- c. Virtual machines can be difficult to monitor because they are not
visible to the network when not in use. The other three choices are
easy to monitor.
- Regarding incident handling, which of the following deceptive
measures is used during incidents to represent a honeypot?
a. False data flows
b. False status measures
c. False state indicators
d. False production systems
- d. A honeypot is a fake (false) production system and acts as a decoy
to study how attackers do their work. The other three choices are also
acceptable deceptive measures, but they do not use honeypots. False
data flows include made up (fake) data, not real data. System-status
measures include active or inactive parameters. System-state indicators
include startup, restart, shutdown, and abort.
- For large software development projects, which of the
following models provides greater satisfactory results on software
reliability?
a. Fault count model
b. Mean-time-between-failures model
c. Simple ratio model
d. Simple regression model
- a. A fault (defect) is an incorrect step, process, or data definition in
a computer program, and it is an indication of reliability. Fault count
models give more satisfactory results than the mean-time-between-failures
(MTBF) model because the latter is used for hardware
reliability. Simple ratio and simple regression models handle few
variables and are used for small projects.
- The objective “To provide management with appropriate
visibility into the process being used by the software development
project and of the products being built” is addressed by which of
the following?
a. Software quality assurance management
b. Software configuration management
c. Software requirements management
d. Software project management
- a. The goals of software quality assurance management include (i)
software quality assurance activities are planned, (ii) adherence of
software products and activities to the applicable standards,
procedures, and requirements is verified objectively, and (iii)
noncompliance issues that cannot be resolved are addressed by higher
levels of management.
The objectives of software configuration management are to establish
and maintain the integrity of products of the software project
throughout the project’s software life cycle. The objectives of software
requirements management are to establish a common understanding
between the customer and the software project requirements that will
be addressed by the software project. The objectives of software
project management are to establish reasonable plans for performing
the software engineering activities and for managing the software
development project.
- Which of the following identifies required functionality to
protect against or mitigate failure of the application software?
a. Software safety analysis
b. Software hazard analysis
c. Software fault tree analysis
d. Software sneak circuit analysis
- a. Software needs to be developed using specific software
development and software assurance processes to protect against or
mitigate failure of the software. A complete software safety standard
references other standards that address these mechanisms and includes
a software safety policy identifying required functionality to protect
against or mitigate failure.
Software hazard analysis is incorrect because it is a part of software
safety. Hazard analysis is the process of identifying and evaluating the
hazards of a system, and then making change recommendations that
either eliminate the hazard or reduce its risk to an acceptable level.
Software hazard analysis makes recommendations to eliminate or
control software hazards and hazards related to interfaces between the
software and the system (includes hardware and human components).
It includes analyzing the requirements, design, code, user interfaces,
and changes. Software hazards may occur if the software is improperly
developed (designed), the software dispatches incorrect information, or
the software fails to transmit information when it should.
Software fault tree analysis is incorrect because its purpose is to
demonstrate that the software will not cause a system to reach an
unsafe state, and to discover what environmental conditions will allow
the system to reach an unsafe state. Software fault tree analysis is often
conducted on the program code but can also be applied at other stages
of the life cycle process (for example, requirements and design). This
analysis is not always applied to all the program code, only to the
portion that is safety critical.
Software sneak analysis is incorrect because it is based on sneak circuit
analysis, which is used to evaluate electrical circuitry—hence the name
software sneak circuit analysis. Sneaks are latent design conditions
or design flaws that have inadvertently been incorporated into
electrical, software, and integrated systems designs. They are not
caused by component failure.
- Which of the following provides an assessment of software
design quality?
a. Trace system requirements specifications to system requirements
in requirements definition documentation.
b. Trace design specifications to system requirements and system
requirements specifications to design.
c. Trace source code to design specifications and design
specifications to source code.
d. Trace system test cases and test data designs to system
requirements.
- b. The goal is to identify requirements with no design elements
(under-design) and design elements with no requirements
(over-design). It is too early to assess software design quality during system
requirements definition. It is too late to assess software design quality
during coding. The goal is to identify design elements with no source
code and source code with no design elements. It is too late to assess
software design quality during testing.
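In practice, the under-design/over-design check reduces to set differences between requirement identifiers and the requirements referenced by design elements, as in this sketch; the identifiers are hypothetical.

```python
# Sketch of a requirements-to-design traceability check. Identifiers are made up.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
design_trace = {                 # design element -> requirement it claims to satisfy
    "DES-A": "REQ-1",
    "DES-B": "REQ-2",
    "DES-C": "REQ-9",            # points at a requirement that does not exist
}

covered = set(design_trace.values())
print("Under-design (requirements with no design):", requirements - covered)
print("Over-design (design with no requirement):",
      sorted(d for d, r in design_trace.items() if r not in requirements))
```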
- When executed incorrectly, which of the following nonlocal
maintenance and diagnostic activities can expose an organization
to potential risks?
a. Using strong authenticators
b. Separating the maintenance sessions from other network
sessions
c. Performing remote disconnect verification feature
d. Using physically separated communications paths
- c. An organization should employ a remote disconnect verification
feature at the termination of nonlocal maintenance and diagnostic
sessions. If this feature is not checked or is performed incorrectly, it can
increase the potential risk of introducing malicious software or
intrusions due to open ports and protocols. The other three choices do
not increase risk exposure. Nonlocal maintenance work is conducted
through either an external network (mostly through the Internet) or an
internal network.
- Which of the following factors is an important consideration
during an application system design and development project?
a. Software safety
b. Completing the project on schedule
c. Spending less than budgeted
d. Documenting all critical work
- a. Software safety is more important than the other three choices
because a lack of safety considerations in a computer-based application
system can cause danger or injury to people and damage to equipment
and property.
- A software product has the least impact on:
a. Loss of life
b. Loss of property
c. Loss of physical attributes
d. Loss of quality
- c. Software is an intangible item with no physical attributes such as
color and size. Although software is not a physical product, software
products have a major impact on life, health, property, safety, and
quality of life. Failure of software can have a serious economic impact
such as loss of sales, revenues, and profits.
- A dangerous misconception about software quality is that:
a. It can be inspected after the system is developed.
b. It can be improved by establishing a formal quality assurance
function.
c. It can be improved by establishing a quality assurance library in
the system.
d. It is tantamount to testing the software.
- a. Quality should be designed in at the beginning of the software
development and maintenance process. Quality cannot be inspected or
tested into a system after it is developed. Many view final testing as
quality testing. At best, this is quality control instead of quality
assurance, hopefully preventing shipment of a defective product.
Quality in the process needs to be improved, and quality assurance is a
positive function.
A software product displays quality to the extent that all aspects of the
customer’s requirements are satisfied. This means that quality is built
into the product during its development process rather than inspected
at the end. It is too late to inspect the quality when the product is
already built. Most assurance is provided when the needs are fully
understood, captured, and transformed (designed) into a software
product.
- From a security risk viewpoint, the job duties of which one of
the following should be fully separated from the others?
a. System administrator
b. Security administrator
c. Computer operator
d. System programmer
- c. Separation of duties is a security principle that divides critical
functions among different employees in an attempt to ensure that no
one employee has enough information or access privileges to
perpetrate damaging fraud or conduct other irregularities such as
damaging data and/or programs.
The computer operator's job duties should be fully and clearly
separated from the others. Risk is concentrated in this one job: if the
computer operator's job duties are not fully separated from other
conflicting job duties (for example, system administrator, security
administrator, or system programmer), there is a potential risk that the
operator can issue damaging commands from his console to the
operating system, thus harming the integrity of the system
and its data. In other words, the operator has full access to the
computer in terms of running the operating system, application
systems, special programs, and utility programs, where the others do
not have such full access. It is good to limit the computer operator's
access to systems and their documentation, which would otherwise help
him understand the inner workings of the systems running on the
computer. At the same time, it is good to limit the others' access to
the computer systems to just enough to do their limited job duties.
- In maintenance, which of the following is most risky?
a. Local maintenance
b. Scheduled maintenance
c. Nonlocal maintenance
d. Unscheduled maintenance
- c. Nonlocal maintenance work is conducted through either an
external network (mostly through the Internet) or an internal network.
Because of communicating across a network connection, nonlocal
maintenance work is most risky. Local maintenance work is performed
without communicating across a network connection. For local
maintenance, the vendor brings the hardware and software into the IT
facility for diagnostic and repair work, which is less risky. Local or
nonlocal maintenance work can be either scheduled or unscheduled.
- The IT operations management of RDS Corporation is
concerned about how to increase its data storage capacity to meet
its increased growth in business systems. Based on a storage
management consultant’s report, the RDS management is
planning to install redundant array of independent disks 6
(RAID6), which is a block-level striping with double distributed
parity system to meet this growth. If four disks are arranged in
RAID6 where each disk has a storage capacity of 250GB, and if
space efficiency is computed as [1-(2/n)] where “n” is the number
of disks, how much of this capacity is available for data storage
purposes?
a. 125GB
b. 250GB
c. 375GB
d. 500GB
- d. The RAID6 storage system can provide a total of 500GB of
usable space for data storage purposes. Space efficiency represents the
fraction of the sum of the disks’ capacities that is available for use.
Space efficiency = [1−(2/n)] = [1−(2/4)] = 1−0.5= 0.5
Total available space for data storage = 0.5 × 4 × 250 = 500GB
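As a quick check of the arithmetic above, here is a minimal sketch in Python; the function name and the equal-capacity assumption are illustrative only, not part of the question.
    def raid6_usable_capacity(num_disks, disk_capacity_gb):
        # RAID6 reserves two disks' worth of capacity for double parity.
        if num_disks < 4:
            raise ValueError("RAID6 requires a minimum of four disks")
        space_efficiency = 1 - (2 / num_disks)   # [1 - (2/n)]
        return space_efficiency * num_disks * disk_capacity_gb
    print(raid6_usable_capacity(4, 250))   # 500.0, matching answer (d)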
- In redundant array of independent disks (RAID) technology,
when two drives or disks have a logical joining, it is called:
a. Disk concatenation
b. Disk striping
c. Disk mirroring
d. Disk replication
- a. Disk concatenation is a logical joining of two series of data or
disks. In data concatenation, two or more data elements or data files
are often concatenated to provide a unique name or reference. In disk
concatenation, several disk address spaces are concatenated to present
a single, larger address space.
The other three choices are incorrect. Disk striping has more than one
disk and more than one partition, and is the same as disk arrays. Disk
mirroring occurs when a file server contains two physical disks and
one channel, and all information is written to both disks
simultaneously. Disk replication occurs when data is written to two
different disks to ensure that two valid copies of the data are always
available.
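A minimal sketch, assuming fixed block counts per disk, of how concatenation presents one larger logical address space from disks laid end to end; the function and values are hypothetical, not any vendor's interface.
    def concat_lookup(logical_block, disk_sizes_in_blocks):
        # Map a block of the single, larger address space to (disk, offset).
        for disk_index, size in enumerate(disk_sizes_in_blocks):
            if logical_block < size:
                return disk_index, logical_block
            logical_block -= size
        raise IndexError("block beyond the end of the concatenated volume")
    print(concat_lookup(700, [500, 500]))   # (1, 200): second disk, block 200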
- All of the following are needed for timely and emergency
maintenance work to reduce the risk to an organization except:
a. Maintenance vendor service-level agreement
b. Spare parts inventory
c. Help-desk staff
d. Commercial courier delivery service agreement
- c. Information system components, when not operational, can
result in increased risk to organizations because the security
functionality intended by that component is not being provided.
Examples of security-critical components include firewalls,
hardware/software guards, gateways, intrusion detection and
prevention systems, audit repositories, and authentication servers. The
organizations need to have a maintenance vendor service-level
agreement, stock spare parts inventory, and a delivery service
agreement with a commercial transportation courier to deliver the
required parts on time to reduce the risk of running out of components
and parts. Help-desk staff, whether they are internal or external, are not needed for all types of maintenance work, whether it is scheduled or
unscheduled, or whether it is normal or emergency. Their job is to help
system users on routine matters (problems and issues) and escalate
them to the right party when they cannot resolve these matters.
- Which of the following is the basis for ensuring software
reliability?
a. Testing
b. Debugging
c. Design
d. Programming
- c. The basis for software reliability is design, not testing,
debugging, or programming. For example, using the top-down design
and development techniques and employing modular design principles,
software can be made more reliable than otherwise. Reliability is the
degree of confidence that a system will successfully function in a
certain environment during a specified time period.
Testing is incorrect because its purpose is to validate that the software
meets its stated requirements. Debugging is incorrect because its
purpose is to detect, locate, and correct faults in a computer program.
Programming is incorrect because its purpose is to convert the design
specifications into program instructions that the computer can
understand.
- In software configuration management, changes to software
should be subjected to which of the following types of testing prior
to software release and distribution?
a. Black-box testing
b. Regression testing
c. White-box testing
d. Gray-box testing
- b. Regression testing is a method to ensure that changes to one part
of the software system do not adversely impact other parts. The other
three choices do not have such capabilities. Black-box testing is a
functional analysis of a system, and known as generalized testing.
White-box testing is a structural analysis of a system, and known as
detailed testing or logic testing. Gray-box testing assumes some
knowledge of the internal structures and implementation details of the
assessment object, and known as focused testing.
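A minimal regression-testing sketch, assuming a pytest-style suite; the function and tests are hypothetical. The point is that existing tests are rerun unchanged after every software change to confirm that other parts still behave as before.
    def apply_discount(price, rate):
        # Hypothetical function under configuration management.
        return round(price * (1 - rate), 2)
    # Tests retained from earlier releases; rerunning them after a change to
    # apply_discount() is the regression check prior to release and distribution.
    def test_no_discount():
        assert apply_discount(100.0, 0.0) == 100.0
    def test_ten_percent_discount():
        assert apply_discount(100.0, 0.10) == 90.0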
- Which of the following software quality characteristics is
difficult to define and test?
a. Functionality
b. Reliability
c. Usability
d. Efficiency
- c. Usability is a set of attributes that bear on the effort needed for
use, and on the individual assessment of such use, by a stated or
implied set of users. In a way, usability means understandability and
ease of use. Because of its subjective nature, varying from person to
person, it is hard to define and test.
Functionality is incorrect because it can easily be defined and tested. It
is a set of attributes that bear on the existence of a set of functions and
their specified properties. The functions are those that satisfy stated or
implied needs. Reliability is incorrect because it can easily be defined
and tested. It is the ability of a component to perform its required
functions under stated conditions for a specified period of time.
Efficiency is incorrect because it can easily be defined and tested. It is
the degree to which a component performs its designated functions
with minimum consumption of resources.
- Portable and removable storage devices should be sanitized to
prevent the entry of malicious code to launch:
a. Man-in-the-middle attack
b. Meet-in-the-middle attack
c. Zero-day attack
d. Spoofing attack
- c. Malicious code is capable of initiating zero-day attacks when
portable and removable storage devices are not sanitized. The other
three attacks are network-based, not storage device-based. A
man-in-the-middle (MitM) attack takes advantage of the store-and-forward
mechanism used by insecure networks such as the Internet. A
meet-in-the-middle attack occurs when one end of the network is
encrypted and the other end is decrypted, and the results are matched
in the middle. A spoofing attack is an attempt to gain access to a
computer system by posing as an authorized user.
- Verification is an essential activity in ensuring quality software,
and it includes tracing. Which of the following tracing techniques
is not often used?
a. Forward tracing
b. Backward tracing
c. Cross tracing
d. Ad hoc tracing
- c. Traceability is the ease in retracing the complete history of a
software component from its current status to its requirements
specification. Cross tracing should be used more often because it cuts
through the functional boundaries, but it is not performed due to its
difficulty in execution. The other three choices are often used due to
their ease of use.
Forward tracing is incorrect because it focuses on matching inputs to
outputs to demonstrate their completeness. Similarly, backward tracing
is incorrect because it focuses on matching outputs to inputs to
demonstrate their completeness. Ad hoc tracing is incorrect because it
involves spot-checking of reconcilement procedures to ensure output
totals agree with input totals, less any rejects or spot checking of
accuracy of computer calculations such as interest on deposits, late
charges, service charges, and past-due loans.
During system development, it is important to verify the backward and
forward traceability of the following: (i) user requirements to software
requirements, (ii) software requirements to design specifications, (iii)
system tests to software requirements, and (iv) acceptance tests to user
requirements. Requirements or constraints can also be traced
downward and upward due to master-subordinate and predecessor-successor relationships to one another.
- Which of the following redundant array of independent disks
(RAID) data storage systems is used for high-availability systems?
a. RAID3
b. RAID4
c. RAID5
d. RAID6
- d. RAID6 is used for high-availability systems due to its high
tolerance for failure. Each RAID level (i.e., RAID0 to RAID6)
provides a different balance between increased data reliability through
redundancy and increased input/output performance. For example, in
levels from RAID3 to RAID5, a minimum of three disks is required
and only one disk provides a fault tolerance mechanism. In the RAID6
level, a minimum of four disks is required and two disks provide fault
tolerance mechanisms.
In the single disk fault tolerance mechanism, the failure of that single
disk will result in reduced performance of the entire system until the
failed disk has been replaced and rebuilt. On the other hand, the double
parity (two disks) fault tolerance mechanism gives time to rebuild the
array without the data being at risk if a single disk fails before the
rebuild is complete. Hence, RAID6 is suitable for high-availability
systems due to high fault tolerance mechanisms.
- Which of the following makes a computer system more
reliable?
a. N-version programming
b. Structured programming
c. Defensive programming
d. GOTO-less programming
- c. Defensive or robust programming has several attributes that
make a computer system more reliable. The major attribute is handling
the expected exception domain (i.e., errors and failures); when such
exceptions are anticipated and handled, the system remains reliable.
N-version programming is based on design or version diversity,
meaning different versions of the software are developed
independently with the thinking that these versions are independent in
their failure behavior. Structured programming and GOTO-less
programming are part of robust programming techniques to make
programs more readable and executable.
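A small sketch of the defensive-programming idea above, using a hypothetical input-parsing routine; the expected exception domain (bad or missing input) is anticipated and handled instead of being allowed to crash the program.
    def read_positive_int(text, default=0):
        # Anticipate the expected exception domain: non-numeric or missing input.
        try:
            value = int(str(text).strip())
        except (ValueError, TypeError):
            return default                    # expected failure handled safely
        return value if value > 0 else default
    print(read_positive_int("42"))    # 42
    print(read_positive_int("oops"))  # 0 (safe default instead of a crash)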
- Which of the following is an example of a static quality
attribute of a software product?
a. Mean-time-between-failure
b. Simplicity in functions
c. Mean-time-to-repair
d. Resource utilization statistics
- b. Software quality attributes can be classified as either dynamic or
static. Dynamic quality attributes are validated by examining the
dynamic behavior of software during its execution. Examples include
mean time between failures (MTBF), mean-time-to-repair (MTTR),
failure recovery time, and percent of available resources used (i.e.,
resource utilization statistics).
Static quality attributes are validated by inspecting nonexecuting
software products and include modularity, simplicity, and
completeness. Simplicity looks for straightforward implementation of
functions. It is the characteristic of software that ensures definition and
implementation of functions in the most direct and understandable
manner.
Reliability models can be used to predict software reliability (for
example, MTBF and MTTR) based on the rate of occurrence of defects
and errors. There is a trade-off between complexity and security,
meaning that complex systems are difficult to secure whereas simple
systems are easy to secure.
- Auditing an information system is not reliable under which of
the following situations?
a. When audit records are stored on hardware-enforced, write-once
media
b. When the user being audited has privileged access
c. When the audit activity is performed on a separate system
d. When the audit-related privileges are separated from nonaudit
privileges
- b. Auditing an information system is not reliable when performed
by the system to which the user being audited has privileged access.
This is because the privileged user can inhibit the auditing activity or
modify the audit records. The other three choices are control
enhancements that reduce the risk of audit compromises by the
privileged user.
- Software quality is based on user needs. Which of the following
software quality factors address the user’s need for performance?
a. Integrity and survivability
b. Verifiability and manageability
c. Correctness and interoperability
d. Expandability and flexibility
- c. Correctness asks, “Does it comply with requirements?” whereas
interoperability asks, “Does it interface easily?” Quality factors such as
efficiency, correctness, safety, and interoperability are part of the
performance need.
Integrity and survivability are incorrect because they are a part of
functional need. Integrity asks, “How secure is it?” whereas
survivability asks, “Can it survive during a failure?” Quality factors
such as integrity, reliability, survivability, and usability are part of the
functional need. Verifiability and manageability are incorrect because
they are a part of the management need. Verifiability asks, “Is
performance verification easy?” whereas manageability asks, “Is the
software easily managed?” Expandability and flexibility are incorrect
because they are a part of the changes needed. Expandability asks,
“How easy is it to expand?” whereas flexibility asks, “How easy is it to
change?”
- Developing safe software is crucial to prevent loss of life, property
damage, or liability. Which of the following practices is least useful
for ensuring a safe software product?
a. Use high coupling between critical functions and data from
noncritical ones.
b. Use low data coupling between critical units.
c. Implement a fail-safe recovery system.
d. Specify and test for unsafe conditions.
- a. “Critical” may be defined as pertaining to safety, efficiency, and
reliability. Each application system needs a clear definition of what
“critical” means to it. Software hazard analysis and fault tree analysis
can be performed to trace system-level hazards (for example, unsafe
conditions) through design or coding structures back to the software
requirements that could cause the hazards. Functions and features of
software that participate in avoiding unsafe conditions are termed
critical. Critical functions and data should be separated from
noncritical ones with low coupling, not with high coupling. Avoiding
unsafe conditions or ensuring safe conditions is achieved by separating
the critical units from noncritical units, by low data coupling between
critical units, by fail-safe recovery from unsafe conditions when they
occur, and by testing for unsafe conditions. Data coupling is the
sharing or passing of simple data between system modules via parameter
lists. Low data coupling is preferred at interfaces because it is less
error prone, helping ensure a safe product.
- Developing a superior quality or safe software product requires
special attention. Which of the following techniques to achieve
superior quality are based on mathematical theory?
a. Multiversion software
b. Proof-of-correctness
c. Software fault tree analysis
d. Software reliability models
- b. The proof-of-correctness (formal verification) involves the use
of theoretical and mathematical models to prove the correctness of a
program without executing it. Using this method, the program is
represented by a theorem and is proved with first-order predicate
calculus. The other three choices do not use mathematical theory.
Multiversion software is incorrect because its goal is to provide high
reliability, which is especially useful in applications where failures
could cause loss of life, property loss, or damage. The approach is to
develop more than one version of the same program to minimize the
detrimental effect of latent defects on reliability.
Software fault tree analysis is incorrect because it identifies and
analyzes software safety requirements. It is used to determine possible
causes of known hazards. This is done by creating a fault tree, whose
root is the hazard. The system fault tree is expanded until it contains at
its lowest level basic events that cannot be further analyzed.
Software reliability models are incorrect because they can predict the
future behavior of a software product, based on its past behavior,
usually in terms of failure rates.
- Predictable failure prevention means protecting an information
system from harm by considering which of the following?
a. Mean-time-to-repair (MTTR)
b. Mean-time-to-failure (MTTF)
c. Mean-time between failures (MTBF)
d. Mean-time between outages (MTBO)
- b. MTTF focuses on the potential failure of specific components of
the information system that provide security capability. MTTF is the
mean time to the next failure. MTTR is the amount of time
it takes to resume normal operation. MTBF is the average length of
time the system is functional. MTBO is the mean time between
equipment failures that result in a loss of system continuity or
unacceptable degradation.
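For illustration only, these terms are often combined into a steady-state availability estimate; the formula is a common convention rather than something stated in this question, and the figures are hypothetical.
    mtbf_hours = 1000.0   # average length of time the system is functional
    mttr_hours = 2.0      # time needed to resume normal operation after a failure
    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(round(availability, 4))   # 0.998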
- Regarding software installation, “All software is checked
against a list approved by the organization” refers to which of the
following?
a. Blacklisting
b. Black-box testing
c. White-box testing
d. Whitelisting
- d. Whitelisting is a method to control the installation of software to
ensure that all software is checked against a list approved by the
organization. It is a quality control check and is a part of software
configuration activity. An example of blacklisting is creating a list of
electronic-mail senders who have previously sent spam to a user.
Black-box testing is a functional analysis of a system, whereas white-box testing is a structural analysis of a system.
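A minimal whitelisting sketch, assuming the organization publishes a list of approved SHA-256 hashes; the hash value and file path are hypothetical.
    import hashlib
    APPROVED_HASHES = {
        # Hypothetical digest of an installer approved by the organization.
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }
    def is_whitelisted(path):
        # Installation proceeds only if the package hash is on the approved list.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in APPROVED_HASHES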
- Which of the following is not an example of the defect
prevention method in software development and maintenance
processes?
a. Documented standards
b. Cleanroom processes
c. Formal technical reviews
d. Documentation standards
- c. Formal technical reviews (for example, inspections and
walkthroughs) are used for defect detection, not prevention. If properly
conducted, formal technical reviews are the most effective way to
uncover and correct errors, especially early in the life cycle, where
they are relatively easy and inexpensive to correct.
Documented standards are incorrect because they are just one example
of defect prevention methods. Documented standards should be
succinct and possibly placed into a checklist format as a ready
application reference. A documented standard also permits audits for
adherence and compliance with the approved method.
Cleanroom processes are incorrect because they are just one example
of defect prevention methods. The Cleanroom process consists of (i)
defining a set of software increments that combine to form the required
system, (ii) using rigorous methods for specification, development, and
certification of each increment, (iii) applying strict statistical quality
control during the testing process, and (iv) enforcing a strict separation
of the specification and design tasks from testing activities.
Documentation standards are incorrect because they are just one
example of defect prevention methods. Standard methods can be
applied to the development of requirements and design documents.
- The scope of formal technical reviews conducted for software
defect removal would not include:
a. Configuration management specification
b. Requirements specification
c. Design specification
d. Test specification
- a. The formal technical review is a software quality assurance
activity that is performed by software developers. The objectives of
these reviews are to (i) uncover errors in function and logic, (ii) verify
that the software under review meets its requirements, and (iii) ensure
that the software conforms to predefined standards. Configuration
management specifications are a part of project planning documents,
not technical documents. The purpose is to establish the processes that
the project uses to manage the configuration items and changes to
them. Program development, quality, and configuration management
plans are subject to review but are not directly germane to the subject
of defect removal.
The other three choices are incorrect because they are part of technical
documents. The subject matter for formal technical reviews includes
requirements specifications, detailed design, and code and test
specifications. The objectives of reviewing the technical documents are
to verify that (i) the work reviewed is traceable to the requirements set
forth by the predecessor’s tasks, (ii) the work is complete, (iii) the
work has been completed to standards, and (iv) the work is correct.
- Patch management is a part of which of the following?
a. Directive controls
b. Preventive controls
c. Detective controls
d. Corrective controls
- d. Patch management is a part of corrective controls, as it fixes
software problems and errors. Corrective controls are procedures to
react to security incidents and to take remedial actions on a timely
basis. Corrective controls require proper planning and preparation as
they rely more on human judgment.
Directive controls are broad-based controls to handle security
incidents, and they include management’s policies, procedures, and
directives. Preventive controls deter security incidents from happening
in the first place. Detective controls enhance security by monitoring
the effectiveness of preventive controls and by detecting security
incidents where preventive controls were circumvented.
- Locking-based attacks result in which of the following?
- Denial-of-service
- Degradation-of-service
- Destruction-of-service
- Distribution-of-service
a. 1 and 2
b. 1 and 3
c. 2 and 3
d. 3 and 4
- a. A locking-based attack is used to hold a critical system locked
most of the time, releasing it only briefly and occasionally. The result
would be a slow-running browser without stopping it entirely:
degradation-of-service. Degradation-of-service is a mild form of
denial-of-service. Destruction-of-service and distribution-of-service
are not relevant here.
- Which of the following protects the information
confidentiality against a robust keyboard attack?
a. Disposal
b. Clearing
c. Purging
d. Destroying
- b. A keyboard attack is a data scavenging method using resources
available to normal system users with the help of advanced software
diagnostic tools. Clearing information is the level of media sanitization
that protects the confidentiality of information against a robust
keyboard attack. Clearing must be resistant to keystroke recovery
attempts executed from standard input devices and from data
scavenging tools.
The other three choices are incorrect. Disposal is the act of discarding
media by giving up control in a manner short of destruction. Purging is
removing obsolete data by erasure, by overwriting of storage, or by
resetting registers. Destroying is ensuring that media cannot be reused
as originally intended.
- Which of the following is the correct sequence of activities
involved in media sanitization? - Assess the risk to confidentiality.
- Determine the future plans for the media.
- Categorize the information to be disposed of.
- Assess the nature of the medium on which it is recorded.
a. 1, 2, 3, and 4
b. 2, 3, 4, and 1
c. 3, 4, 1, and 2
d. 4, 3, 2, and 1
- c. An information system user must first categorize the
information to be disposed of, assess the nature of the medium on
which it is recorded, assess the risk to confidentiality, and determine
the future plans for the media.
- All the following are examples of normal backup strategies
except:
a. Ad hoc backup
b. Full backup
c. Incremental backup
d. Differential backup
- a. Ad hoc means when needed and irregular. Ad hoc backup is not
a well-thought-out strategy because there is no systematic way of
backing up required data and programs. Full (normal) backup archives
all selected files and marks each as having been backed up.
Incremental backup archives only those files created or changed since
the last normal backup and marks each file. Differential backup
archives only those files that have been created or changed since the
last normal backup. It does not mark the files as backed up. The
backups mentioned in the other three choices have a systematic procedure.
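A rough sketch of the three selection rules, assuming each file record carries an archive flag similar to the attribute a real backup tool would consult; the names and structure are illustrative only.
    def select_files(files, backup_type):
        # files: list of dicts such as {"name": "a.doc", "archive": True}
        if backup_type == "full":
            selected = list(files)                         # everything
        else:
            selected = [f for f in files if f["archive"]]  # created/changed files
        # Full and incremental backups mark files as backed up (clear the flag);
        # a differential backup leaves the flag set.
        if backup_type in ("full", "incremental"):
            for f in selected:
                f["archive"] = False
        return [f["name"] for f in selected]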
- Regarding a patch management program, which of the
following is not a method of patch remediation?
a. Developing a remediation plan
b. Installing software patches
c. Adjusting configuration settings
d. Removing affected software
- a. Remediation is the act of correcting vulnerability or eliminating
a threat. A remediation plan includes remediation of one or more
threats or vulnerabilities facing an organization’s systems. The plan
typically covers options to remove threats and vulnerabilities and
priorities for performing the remediation.
Three types of remediation methods include installing a software
patch, adjusting a configuration setting, and removing affected
software. Removing affected software requires uninstalling a software
application. Developing a remediation plan does not by itself provide
actual remediation; it is actions, not plans on paper, that accomplish
the remediation work.
- For media sanitization, overwriting cannot be used for which
of the following? - Damaged media
- Nondamaged media
- Rewriteable media
- Non-rewriteable media
a. 1 only
b. 4 only
c. 1 or 4
d. 2 or 3
- c. Overwriting cannot be used for media that are damaged or not
rewriteable. The media type and size may also influence whether
overwriting is a suitable sanitization method.
- Regarding media sanitization, which of the following is the
correct sequence of fully and physically destroying magnetic disks,
such as hard drives? - Incinerate
- Disintegrate
- Pulverize
- Shred
a. 4, 1, 2, and 3
b. 3, 4, 2, and 1
c. 1, 4, 3, and 2
d. 2, 4, 3, and 1
- d. The correct sequence of fully and physically destroying
magnetic disks such as hard drives (for example, advanced technology
attachment (ATA) and serial ATA (SATA) hard drives), is disintegrate,
shred, pulverize, and incinerate. This is the best recommended practice
for both public and private sector organizations.
Disintegration is a method of sanitizing media and is the act of
separating the equipment into component parts. Here, the
disintegration step comes first to make the hard drive inoperable
quickly. Shredding is a method of sanitizing media and is the act of
cutting or tearing into small particles. Shredding cannot be the first
step because it is not practical to do for many companies. Pulverization
is a method of sanitizing media and is the act of grinding to a powder
or dust. Incineration is a method of sanitizing media and is the act of
burning completely to ashes done in a licensed incinerator.
Note that one does not need to complete all these methods, but can
stop after any specific method and after reaching the final goal based
on the sensitivity and criticality of data on the disk.
- Who initiates audit trails in computer systems?
a. Functional users
b. System auditors
c. System administrators
d. Security administrators
- a. Functional users have the primary responsibility for initiating
audit trails in their computer systems for tracing and accountability
purposes. Systems and security administrators help in designing and
developing these audit trails. System auditors review the adequacy and
completeness of audit trails and issue an opinion whether they are
effectively working. Auditors do not initiate, design, or develop audit
trails, due to their required independence in attitude and appearance,
as dictated by their professional standards.
- The automatic termination and protection of programs when
a failure is detected in a computer system are called a:
a. Fail-safe
b. Fail-soft
c. Fail-over
d. Fail-open
- a. The automatic termination and protection of programs when a
failure is detected in a computer system is called fail-safe. The
selective termination of affected nonessential processing when a failure
is detected in a computer system is called fail-soft. Fail-over means
switching to a backup mechanism. Fail-open means that a program has
failed to open due to errors or failures.
- An inexpensive security measure is which of the following?
a. Firewalls
b. Intrusion detection
c. Audit trails
d. Access controls
- c. Audit trails provide one of the best and most inexpensive means
for tracking possible hacker attacks, not only after attack, but also
during the attack. You can learn what the attacker did to enter a
computer system, and what he did after entering the system. Audit
trails also detect unauthorized but abusive user activity. Firewalls,
intrusion detection systems, and access controls are expensive when
compared to audit trails.
- What is the residual physical representation of data that has
been in some way erased called?
a. Clearing
b. Purging
c. Data remanence
d. Destruction
- c. Data remanence is the residual physical representation of data
that has been in some way erased. After storage media is erased, there
may be some physical characteristics that allow the data to be
reconstructed, which represents a security threat. Clearing, purging,
and destruction are all risks involved in storage media. In clearing and
purging, data is removed, but the media can be reused. The need for
destruction arises when the media reaches the end of its useful life.
- Which of the following methods used to safeguard against
disclosure of sensitive information is effective?
a. Degaussing
b. Overwriting
c. Encryption
d. Destruction
- c. Encryption makes the data unreadable without the proper
decryption key. Degaussing is a process whereby the magnetic media
is erased, i.e., returned to its initial virgin state. Overwriting is a
process whereby unclassified data are written to storage locations that
previously held sensitive data. The need for destruction arises when the
media reaches the end of its useful life.
- Magnetic storage media sanitization is important to protect
sensitive information. Which of the following is not a general
method of purging magnetic storage media?
a. Overwriting
b. Clearing
c. Degaussing
d. Destruction
- b. The removal of information from a storage medium such as a
hard disk or tape is called sanitization. Different kinds of sanitization
provide different levels of protection. Clearing information means
rendering it unrecoverable by keyboard attack, with the data remaining
on the storage media. There are three general methods of purging
magnetic storage media: overwriting, degaussing, and destruction.
Overwriting means obliterating recorded data by writing different data
on the same storage surface. Degaussing means applying a variable,
alternating-current field for the purpose of demagnetizing magnetic
recording media, usually tapes. Destruction means damaging the
contents of magnetic media through shredding, burning, or applying
chemicals.
- Which of the following redundant array of independent disks
(RAID) technology classifications increases disk overhead?
a. RAID-1
b. RAID-2
c. RAID-3
d. RAID-4
- a. Disk array technology uses several disks in a single logical
subsystem. To reduce or eliminate downtime from disk failure,
database servers may employ disk shadowing or data mirroring. A disk
shadowing, or RAID-1, subsystem includes two physical disks. User
data is written to both disks at once. If one disk fails, all the data is
immediately available from the other disk. Disk shadowing incurs
some performance overhead (during write operations) and increases
the cost of the disk subsystem because two disks are required. RAID
levels 2 through 4 are more complicated than RAID-1. Each involves
storage of data and error correction code information, rather than a
shadow copy. Because the error correction data requires less space than
the data, the subsystems have lower disk overhead.
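As a simplified illustration of the overhead comparison (disk counts are hypothetical and the model ignores implementation details), the fraction of raw capacity left for user data can be sketched as follows.
    def usable_fraction(level, n_disks):
        # Fraction of raw capacity available for user data in this simple model.
        if level == "RAID1":
            return 0.5                       # full shadow copy: highest overhead
        if level in ("RAID3", "RAID4"):
            return (n_disks - 1) / n_disks   # roughly one disk's worth of parity
        raise ValueError("level not covered by this sketch")
    print(usable_fraction("RAID1", 2))   # 0.5: half the raw capacity is overhead
    print(usable_fraction("RAID4", 4))   # 0.75: lower overhead than RAID-1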
- Indicate the correct sequence of degaussing procedures for
magnetic disk files. - Write zeros
- Write a special character
- Write ones
- Write nines
a. 1, 3, and 2
b. 3, 1, 4, and 2
c. 2, 1, 4, and 3
d. 1, 2, 3, and 4
- a. Disk files can be demagnetized by overwriting three times with
zeros, ones, and a special character, in that order, so that sensitive
information is completely deleted.
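A minimal sketch of that three-pass overwrite order; the file path is hypothetical, and a vetted sanitization tool should be used for real media.
    import os
    def overwrite_three_passes(path, special=b"\xA5"):
        # Overwrite with zeros, then ones, then a special character, in order.
        size = os.path.getsize(path)
        for pattern in (b"\x00", b"\xFF", special):
            with open(path, "r+b") as f:
                remaining = size
                while remaining > 0:
                    chunk = min(remaining, 1 << 20)   # write 1 MiB at a time
                    f.write(pattern * chunk)
                    remaining -= chunk
                f.flush()
                os.fsync(f.fileno())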
- Which of the following is the best control to prevent a new
user from accessing unauthorized file contents when a newly
recorded file is shorter than those previously written to a computer
tape?
a. Degaussing
b. Cleaning
c. Certifying
d. Overflowing
- a. If the new file is shorter than the old file, the new user could
have open access to the existing file. Degaussing is best used under
these conditions and is considered a sound and safe practice. Tape
cleaning functions clean the tape and then properly wind and create
tension in the computer magnetic tape. Recorded tapes are normally
not erased during the cleaning process. Tape certification is performed
to detect, count, and locate tape errors and then, if possible, repair the
underlying defects so that the tape can be placed back into active
status. Overflowing has nothing to do with computer tape contents.
Overflowing is a memory or file size issue where contents could be
lost due to size limitations.
- Which of the following data integrity problems can be caused
by multiple sources?
a. Disk failure
b. File corruption
c. Power failure
d. Memory failure
- b. Hardware malfunction, network failures, human error, logical
errors, and other disasters are possible threats to ensuring data
integrity. Files can be corrupted as a result of some physical
(hardware) or network problems. Files can also become corrupted by
some flaw in an application program’s logic. Users can contribute to
this problem due to inexperience, accidents, or missed
communications. Therefore, most data integrity problems are caused
by file corruption.
Disk failure is a hardware malfunction caused by physical wear and
tear. Power failure is a hardware malfunction that can be minimized by
installing power conditioning equipment and battery backup systems.
Memory failure is an example of hardware malfunction due to
exposure to strong electromagnetic fields. File corruption has many
problem sources to consider.
- Which of the following provides network redundancy in a
local-area-network (LAN) environment?
a. Mirroring
b. Shadowing
c. Dual backbones
d. Journaling
- c. A backbone is the high traffic density connectivity portion of
any communications network. Backbones are used to connect servers
and other service providing machines on the network. The use of dual
backbones means that if the primary network goes down, the
secondary network will carry the traffic.
In packet switched networks, a backbone consists of switches and
interswitch trunks. Switched networks can be managed with a network
management console. Network component failures can be identified on
the console and responded to quickly. Many switching devices are built
modularly with hot swappable circuit boards. If a chip fails on a board
in the device, it can be replaced relatively quickly just by removing the
failed card and sliding in a new one. If switching devices have dual
power supplies and battery backups, network uptime can be increased
as well.
Mirroring, shadowing, and journaling provide data or application system
redundancy and recoverability, not network redundancy. Mirroring refers to copying data
as it is written from one device or machine to another. Shadowing is
where information is written in two places, one shadowing the other,
for extra protection. Any changes made will be reflected in both
places. Journaling is a chronological description of transactions that
have taken place, either locally, centrally, or remotely.
- Which of the following controls prevents a loss of data
integrity in a local-area-network (LAN) environment?
a. Data mirroring and archiving
b. Data correction
c. Data vaulting
d. Data backup
- a. Data mirroring refers to copying data as it is written from one
device or machine to another. It prevents data loss. Data archiving is
where files are removed from network online storage by copying them
to long-term storage media such as optical disks, tapes, or cartridges. It
prevents accidental deletion of files.
Data correction is incorrect because it is an example of a corrective
control where bad data is fixed. Data vaulting is incorrect because it is
an example of corrective control. It is a way of storing critical data
offsite either electronically or manually. Data backup is incorrect
because it is an example of corrective control where a compromised
system can be restored.
- In general, a fail-over mechanism is an example of which of
the following?
a. Corrective control
b. Preventive control
c. Recovery control
d. Detective control
- c. Fail-over mechanism is a backup concept in that when the
primary system fails, the backup system is activated. This helps in
recovering the system from a failure or disaster.
- Which of the following does not trigger zero-day attacks?
a. Malware
b. Web browsers
c. Zombie programs
d. E-mail attachments
- c. A zombie is a computer program that is installed on a personal
computer to cause it to attack other computers. Attackers organize
zombies as botnets to launch denial-of-service (DoS) attacks and
distributed DoS attacks, not zero-day attacks. The other three choices
trigger zero-day attacks.
With zero-day (zero-hour) attacks, attackers try to exploit computer
application vulnerabilities that are unknown to system owners and
system administrators, undisclosed to software vendors, or for which
no security fix is available. Malware writers can exploit zero-day
vulnerabilities through several different attack vectors to compromise
attacked systems or steal confidential data. Web browsers are a major
target because of their widespread distribution and usage. Hackers send
e-mail attachments to exploit vulnerabilities in the application opening
the attachment and send other exploits to take advantage of
weaknesses in common file types.
- TEMPEST is used for which of the following?
a. To detect electromagnetic disclosures
b. To detect electronic dependencies
c. To detect electronic destructions
d. To detect electromagnetic emanations
- d. TEMPEST is a short name, and not an acronym. It is the study
and control of spurious electronic signals emitted by electrical
equipment. It is the unclassified name for the studies and investigations
of compromising electromagnetic emanations from equipment. It is
suggested that TEMPEST-shielded equipment be used to prevent
compromising emanations.
- Which of the following is an example of directive controls?
a. Passwords and firewalls
b. Key escrow and software escrow
c. Intrusion detection systems and antivirus software
d. Policies and standards
- d. Policies and standards are an example of directive controls.
Passwords and firewalls are an example of preventive controls. Key
escrow and software escrow are an example of recovery controls.
Intrusion detection systems and antivirus software are an example of
detective controls.
- Which of the following control terms can be used in a broad
sense?
a. Administrative controls
b. Operational controls
c. Technical controls
d. Management controls
- d. Management controls are actions taken to manage the
development, maintenance, and use of the system, including
system-specific policies, procedures, and rules of behavior, individual roles
and responsibilities, individual accountability, and personnel security
decisions.
Administrative controls include personnel practices, assignment of
responsibilities, and supervision and are part of management controls.
Operational controls are the day-to-day procedures and mechanisms
used to protect operational systems and applications. Operational
controls affect the system and application environment. Technical
controls are hardware and software controls used to provide automated
protection for the IT system or application. Technical controls operate
within the technical system and applications.
- A successful incident handling capability should serve which
of the following?
a. Internal users only
b. All computer platforms
c. All business units
d. Both internal and external users
- d. The focus of a computer security incident handling capability
may be external as well as internal. An incident that affects an
organization may also affect its trading partners, contractors, or clients.
In addition, an organization’s computer security incident handling
capability may help other organizations and, therefore, help protect the
industry as a whole.
- Which of the following encourages compliance with IT
security policies?
a. Use
b. Results
c. Monitoring
d. Reporting
- c. Monitoring encourages compliance with IT security policies.
Results can be used to hold managers accountable for their information
security responsibilities. Use for its own sake does not help here.
Reporting comes after monitoring.
- Who should measure the effectiveness of security-related
controls in an organization?
a. Local security specialist
b. Business manager
c. Systems auditor
d. Central security manager
- c. The effectiveness of security-related controls should be
measured by a person fully independent of the information systems
department. The systems auditor located within an internal audit
department of an organization is the right party to perform such
measurement.
- Which of the following corrects faults and returns a system to
operation in the event a system component fails?
a. Preventive maintenance
b. Remedial maintenance
c. Hardware maintenance
d. Software maintenance
- b. Remedial maintenance corrects faults and returns the system to
operation in the event a hardware or software component fails.
Preventive maintenance is incorrect because it is done to keep
hardware in good operating condition. Both hardware and software
maintenance are included in remedial maintenance.
- Which of the following statements is not true about audit
trails from a computer security viewpoint?
a. There is interdependency between audit trails and security
policy.
b. If a user is impersonated, the audit trail establishes events and
the identity of the user.
c. Audit trails can assist in contingency planning.
d. Audit trails can be used to identify breakdowns in logical access
controls.
- b. Audit trails have several benefits. They are tools often used to
help hold users accountable for their actions. To be held accountable,
the users must be known to the system (usually accomplished through
the identification and authentication process). However, audit trails
collect events and associate them with the perceived user (i.e., the user
ID provided). If a user is impersonated, the audit trail establishes
events but not the identity of the user.
It is true that there is interdependency between audit trails and security
policy. Policy dictates who has authorized access to particular system
resources. Therefore it specifies, directly or indirectly, what violations
of policy should be identified through audit trails.
It is true that audit trails can assist in contingency planning by leaving
a record of activities performed on the system or within a specific
application. In the event of a technical malfunction, this log can be
used to help reconstruct the state of the system (or specific files).
It is true that audit trails can be used to identify breakdowns in logical
access controls. Logical access controls restrict the use of system
resources to authorized users. Audit trails complement this activity by
identifying breakdowns in logical access controls or verifying that
access control restrictions are behaving as expected.
- Which of the following is a policy-driven storage media?
a. Hierarchical storage management
b. Tape management
c. Direct access storage device
d. Optical disk platters
- a. Hierarchical storage management follows a policy-driven
strategy in that the data is migrated from one storage medium to
another, based on a set of rules, including how frequently the file is
accessed. On the other hand, the management of tapes, direct access
storage devices, and optical disks is based on schedules, which is an
operational strategy.
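A minimal sketch of one such policy rule, assuming migration is driven by last-access time; the threshold and directory are hypothetical.
    import os
    import time
    def files_to_migrate(directory, days_unused=90):
        # Policy rule: candidates for migration to slower storage are files
        # not accessed within the threshold period.
        cutoff = time.time() - days_unused * 86400
        candidates = []
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            if os.path.isfile(path) and os.path.getatime(path) < cutoff:
                candidates.append(path)
        return candidates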
- In which of the following types of denial-of-service attacks
does a host send many requests with a spoofed source address to a
service on an intermediate host?
a. Reflector attack
b. Amplifier attack
c. Distributed attack
d. SYNflood attack
- a. Because the intermediate host unwittingly performs the attack,
that host is known as a reflector. During a reflector attack, a
denial-of-service (DoS) could occur to the host at the spoofed address, the
reflector itself, or both hosts. The amplifier attack does not use a single
intermediate host, like the reflector attack, but uses a whole network of
intermediate hosts. The distributed attack coordinates attacks among
several computers. A synchronous (SYN) flood attack is a stealth
attack because the attacker spoofs the source address of the SYN
packet, thus making it difficult to identify the perpetrator.
- Sometimes a combination of controls works better than a
single category of control, such as preventive, detective, or
corrective. Which of the following is an example of a combination
of controls?
a. Edit and limit checks, digital signatures, and access controls
b. Error reversals, automated error correction, and file recovery
c. Edit and limit checks, file recovery, and access controls
d. Edit and limit checks, reconciliation, and exception reports
- c. Edit and limit checks are an example of preventive or detective
control, file recovery is an example of corrective control, and access
controls are an example of preventive control. A combination of
controls is stronger than a single type of control.
Edit and limit checks, digital signatures, and access controls are
incorrect because they are an example of a preventive control.
Preventive controls keep undesirable events from occurring. In a
computing environment, preventive controls are accomplished by
implementing automated procedures to prohibit unauthorized system
access and to force appropriate and consistent action by users.
Error reversals, automated error correction, and file recovery are
incorrect because they are an example of a corrective control.
Corrective controls cause or encourage a desirable event or corrective
action to occur after an undesirable event has been detected. This type
of control takes effect after the undesirable event has occurred and
attempts to reverse the error or correct the mistake.
Edit and limit checks, reconciliation, and exception reports are
incorrect because they are an example of a detective control. Detective
controls identify errors or events that were not prevented and identify
undesirable events after they have occurred. Detective controls should
identify expected error types, as well as those that are not expected to
occur.
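As a concrete illustration of one of the elements named above, a minimal edit-and-limit-check sketch; the field and limit are hypothetical.
    def limit_check(hours_worked):
        # Edit/limit check: reject values outside the expected range (0-80).
        if not (0 <= hours_worked <= 80):
            raise ValueError("hours_worked outside the allowed limit")
        return hours_worked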
- What is an attack in which someone compels system users or
administrators into revealing information that can be used to gain
access to the system for personal gain called?
a. Social engineering
b. Electronic trashing
c. Electronic piggybacking
d. Electronic harassment
- a. Social engineering involves getting system users or
administrators to divulge information about computer systems,
including passwords, or to reveal weaknesses in systems. Personal gain
involves stealing data and subverting computer systems. Social
engineering involves trickery or coercion.
Electronic trashing is incorrect because it involves accessing residual
data after a file has been deleted. When a file is deleted, it does not
actually delete the data but simply rewrites a header record. The data is
still there for a skilled person to retrieve and benefit from.
Electronic piggybacking is incorrect because it involves gaining
unauthorized access to a computer system via another user’s legitimate
connection. Electronic harassment is incorrect because it involves
sending threatening electronic-mail messages and slandering people on
bulletin boards, news groups, and on the Internet. The other three
choices do not involve trickery or coercion.
- Indicate the correct sequence in which primary questions
must be addressed when an organization is determined to do a
security review for fraud. - How vulnerable is the organization?
- How can the organization detect fraud?
- How would someone go about defrauding the organization?
- What does the organization have that someone would want to
defraud?
a. 1, 2, 3, and 4
b. 3, 4, 2, and 1
c. 2, 4, 1, and 3
d. 4, 3, 1, and 2
- d. The question is asking for the correct sequence of activities that
should take place when reviewing for fraud. The review starts with what
the organization has that someone would want to defraud (something of
value to others), then how someone would go about defrauding it, then
how vulnerable the organization is, and finally how it can detect fraud.
Detection of fraud is least important; prevention is most important.
- Which of the following zero-day attack protection
mechanisms is not suitable to computing environments with a large
number of users?
a. Port knocking
b. Access control lists
c. Local server-based firewalls
d. Hardware-based firewalls
- a. The use of port knocking or single packet authorization
daemons can provide effective protection against zero-day attacks for a
small number of users. However, these techniques are not suitable for
computing environments with a large number of users. The other three
choices are effective protection mechanisms because they are a part of
multiple layers of security, providing the first line of defense. These
include implementing access control lists (one layer), restricting
network access via local server firewalling (i.e., IP tables) as another
layer, and protecting the entire network with a hardware-based firewall
(another layer). All three of these layers provide redundant protection
in case a compromise in any one of them is discovered.
- A computer fraud occurred using an online accounts
receivable database application system. Which of the following
logs is most useful in detecting which data files were accessed from
which terminals?
a. Database log
b. Access control security log
c. Telecommunications log
d. Application transaction log
- b. Access control security logs are detective controls. Access logs
show who accessed what data files, when, and from what terminal,
including the nature of the security violation. The other three choices
are incorrect because database logs, telecommunication logs, and
application transaction logs do not show who accessed what data files,
when, and from what terminal, including the nature of the security
violation.
- Audit trails should be reviewed. Which of the following
methods is not the best way to perform a query to generate reports
of selected information?
a. By a known damage or occurrence
b. By a known user identification
c. By a known terminal identification
d. By a known application system name
- a. Damage or the occurrence of an undesirable event cannot be
anticipated or predicted in advance, thus making it difficult to make a
query. The system design cannot handle unknown events. Audit trails
can be used to review what occurred after an event, for periodic
reviews, and for real-time analysis. Reviewers need to understand what
normal activity looks like. An audit trail review is easier if the audit
trail function can be queried by user ID, terminal ID, application
system name, date and time, or some other set of parameters to run
reports of selected information.
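A minimal sketch of such a parameterized query, assuming the audit records are available as dictionaries with fields like user_id and terminal_id; the record layout is an assumption.
    def query_audit_trail(records, **criteria):
        # Return only records matching every supplied parameter, for example
        # query_audit_trail(records, user_id="jdoe", terminal_id="T17").
        return [
            r for r in records
            if all(r.get(key) == value for key, value in criteria.items())
        ]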
- Which of the following can prevent dumpster diving?
a. Installing surveillance equipment
b. Using a data destruction process
c. Hiring additional staff to watch data destruction
d. Sending an e-mail message to all employees
- b. Dumpster diving can be avoided by using a high-quality data
destruction process on a regular basis. This should include paper
shredding and electrical disruption of data on magnetic media such as
tape, cartridge, or disk.
- Identify the computer-related crime and fraud method that
involves obtaining information that may be left in or around a
computer system after the execution of a job.
a. Data diddling
b. Salami technique
c. Scavenging
d. Piggybacking
- c. Scavenging is obtaining information that may be left in or
around a computer system after the execution of a job. Data diddling
involves changing data before or during input to computers or during
output from a computer system. The salami technique is theft of small
amounts of assets (primarily money) from a number of sources.
Piggybacking can be done physically or electronically. Both methods
involve gaining access to a controlled area without authorization.
- An exception-based security report is an example of which of
the following?
a. Preventive control
b. Detective control
c. Corrective control
d. Directive control
- c. Detecting an exception in a transaction or process is detective
in nature, but reporting it is an example of corrective control. Neither
preventive nor directive controls detect or correct an error; they
simply stop it if possible.
- There is a possibility that incompatible functions may be
performed by the same individual either in the IT department or
in the user department. One compensating control for this
situation is the use of:
a. Log
b. Hash totals
c. Batch totals
d. Check-digit control
- a. A log, preferably a computer log, records the actions or
inactions of an individual during his access to a computer system or a
data file. If any abnormal activities occur, the log can be used to trace
them. The purpose of a compensating control is balancing weak
controls with strong controls. The other three choices are examples of
application system-based specific controls not tied to an individual
action, as a log is.
- When an IT auditor becomes reasonably certain about a case
of fraud, what should the auditor do next?
a. Say nothing now because it should be kept secret.
b. Discuss it with the employee suspected of fraud.
c. Report it to law enforcement officials.
d. Report it to company management.
- d. In fraud situations, the auditor should proceed with caution.
When certain about a fraud, he should report it to company
management, not to external organizations. The auditor should not talk
to the employee suspected of fraud. When the auditor is not certain
about fraud, he should talk to the audit management.
- An effective relationship between risk level and internal
control level is which of the following?
a. Low risk and strong controls
b. High risk and weak controls
c. Medium risk and weak controls
d. High risk and strong controls
- d. There is a direct relationship between the risk level and the
control level. That is, high-risk situations require stronger controls,
low-risk situations require weaker controls, and medium-risk situations
require medium controls. A control is defined as the policies, practices,
and organizational structure designed to provide reasonable assurance
that business objectives will be achieved and that undesired events
would be prevented or detected and corrected. Controls should
facilitate accomplishment of an organization’s objectives.
- Incident handling is not closely related to which of the
following?
a. Contingency planning
b. System support
c. System operations
d. Strategic planning
- d. Strategic planning involves long-term, major issues such as
management of the computer security program and management of
risks within the organization; it is not closely related to incident
handling, which is a comparatively minor, operational issue.
Incident handling is closely related to contingency planning, system
support, and system operations. An incident handling capability may
be viewed as a component of contingency planning because it provides
the ability to react quickly and efficiently to disruptions in normal
processing. Broadly speaking, contingency planning addresses events
with the potential to interrupt system operations. Incident handling can
be considered that portion of contingency planning that responds to
malicious technical threats.
- In which of the following areas do the objectives of systems
auditors and information systems security officers overlap the
most?
a. Determining the effectiveness of security-related controls
b. Evaluating the effectiveness of communicating security policies
c. Determining the usefulness of raising security awareness levels
d. Assessing the effectiveness of reducing security incidents
- a. The auditor’s objective is to determine the effectiveness of
security-related controls. The auditor reviews documentation and tests
security controls. The other three choices are the sole responsibilities
of information systems security officers.
- Which of the following security control techniques assists
system administrators in protecting physical access of computer
systems by intruders?
a. Access control lists
b. Host-based authentication
c. Centralized security administration
d. Keystroke monitoring
- d. Keystroke monitoring is the process used to view or record
both the keystrokes entered by a computer user and the computer’s
response during an interactive session. It is usually considered a
special case of audit trails. Keystroke monitoring is conducted in an
effort to protect systems and data from intruders who access the
systems without authority or in excess of their assigned authority.
Monitoring keystrokes typed by intruders can help administrators
assess and repair any damage they may cause.
Access control lists refer to a register of users who have been given
permission to use a particular system resource and the types of access
they have been permitted. Host-based authentication grants access
based upon the identity of the host originating the request, instead of
the identity of the user making the request. Centralized security
administration allows control over information because the ability to
make changes resides with few individuals, as opposed to many in a
decentralized environment. The other three choices do not protect
computer systems from intruders in the way keystroke monitoring does.
- Which of the following is not essential to ensure operational
assurance of a computer system?
a. System audits
b. System changes
c. Policies and procedures
d. System monitoring
- b. Security is not perfect when a system is implemented. Changes
in the system or the environment can create new vulnerabilities. Strict
adherence to procedures is rare over time, and procedures become
outdated. Thinking risk is minimal, users may tend to bypass security
measures and procedures. Operational assurance is the process of
reviewing an operational system to see that security controls, both
automated and manual, are functioning correctly and effectively.
To maintain operational assurance, organizations use three basic
methods: system audits, policies and procedures, and system
monitoring. A system audit is a one-time or periodic event to evaluate
security. Monitoring refers to an ongoing activity that examines either
the system or the users. In general, the more real time an activity is, the
more it falls into the category of monitoring. Policies and procedures
are the backbone for both auditing and monitoring.
System changes also drive new security requirements. In response to
various events such as user complaints, availability of new features and
services, or the discovery of new threats and vulnerabilities, system
managers and users modify the system and incorporate new features,
new procedures, and software updates. System changes by themselves
do not assure that controls are working properly.
- What is an example of a security policy that can be legally
monitored?
a. Keystroke monitoring
b. Electronic mail monitoring
c. Web browser monitoring
d. Password monitoring
- d. Keystroke monitoring, e-mail monitoring, and Web browser
monitoring are controversial and intrusive. These kinds of efforts could
waste time and other resources due to their legal problems. On the
other hand, examples of effective security policy statements include (i)
passwords shall not be shared under any circumstances and (ii)
password usage and composition will be monitored.
- What is a common security problem?
a. Discarded storage media
b. Telephone wiretapping
c. Intelligence consultants
d. Electronic bugs
- a. Here, the keyword is common, and it is relative. Discarded
storage media, such as CDs/DVDs, paper documents, and reports, is a
major and common problem in every organization. Telephone
wiretapping and electronic bugs require expertise. Intelligence
consultants gather a company's proprietary data, business information,
and government trade strategies, and are therefore less common threats.
- When controlling access to information, an audit log provides
which of the following?
a. Review of security policy
b. Marking files for reporting
c. Identification of jobs run
d. Accountability for actions
- d. An audit log must be kept and protected so that any actions
impacting security can be traced. Accountability can be established
with the audit log. The audit log also helps in verifying the other three
choices indirectly.
- What is a detective control in a computer operations area?
a. Policy
b. Log
c. Procedure
d. Standard
- b. Logs, whether manual or automated, capture relevant data for
further analysis and tracing. Policy, procedure, and standard are
directive controls and are part of management controls because they
regulate human behavior.
- In terms of security functionality verification, which of the
following is the correct order of information system’s transitional
states? - Startup
- Restart
- Shutdown
- Abort
a. 1, 2, 3, and 4
b. 1, 3, 2, and 4
c. 3, 2, 1, and 4
d. 4, 3, 2, and 1
- b. The correct order of an information system's transitional states is
startup, shutdown, restart, and abort. Because the system is in a
transitional state, which is an unstable condition, if the restart
procedures are not performed correctly or technical recovery problems
arise, the system has no choice except to abort.
- Which of the following items is not related to the other items?
a. Keystroke monitoring
b. Penetration testing
c. Audit trails
d. Telephone wiretap
- b. Penetration testing is a test in which the evaluators attempt to
circumvent the security features of a computer system. It is unrelated
to the other three choices. Keystroke monitoring is the process used to
view or record both the keystrokes entered by a computer user and the
computer’s response during an interactive session. It is considered a
special case of audit trails. Some consider keystroke monitoring a
special case of unauthorized telephone wiretapping; others do not.
- All the following are tools that help both system intruders and
systems administrators except:
a. Network discovery tools
b. Intrusion detection tools
c. Port scanners
d. Denial-of-service test tools
- b. Intrusion detection tools detect computer attacks in several
ways: (i) outside of a network’s firewall, (ii) behind a network’s
firewall, or (iii) within a network to monitor insider attacks. Network
discovery tools and port scanners can be used both by intruders and
system administrators to find vulnerable hosts and network services.
Similarly, denial-of-service test tools can be used to determine how
much damage can be done to a computing site.
- Audit trail records contain vast amounts of data. Which of the
following review methods is best to review all records associated
with a particular user or application system?
a. Batch-mode analysis
b. Real-time audit analysis
c. Audit trail review after an event
d. Periodic review of audit trail data
- b. Audit trail data can be used to review what occurred after an
event, for periodic reviews, and for real-time analysis. Audit analysis
tools can be used in a real-time, or near real-time, fashion. Manual
review of audit records in real time is not feasible on large multi-user
systems due to the large volume of records generated. However, it
might be possible to select all records associated with a particular
user or application and view them in real time.
Batch-mode analysis is incorrect because it is a traditional method of
analyzing audit trails. The audit trail data are reviewed periodically.
Audit records are archived during that interval for later analysis. The
three incorrect choices do not provide the convenience of displaying
or reporting all records associated with a user or application, as
real-time audit analysis does.
- Many errors were discovered during application system file maintenance work. What is the best control?
a. File labels
b. Journaling
c. Run-to-run control
d. Before and after image reporting
- d. Before and after image reporting ensures data integrity by
reporting data field values both before and after the changes so that
functional users can detect data entry and update errors.
File labels are incorrect because they verify internal file labels for
tapes to ensure that the correct data file is used in the processing.
Journaling is incorrect because it captures system transactions on a
journal file so that recovery can be made should a system failure occur.
Run-to-run control is incorrect because it verifies control totals
resulting from one process or cycle to the subsequent process or cycle
to ensure their accuracy.
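As a rough sketch of how before and after images can be captured at
update time (the field names and in-memory log are hypothetical, for
illustration only):

    def log_before_after(audit_log, record_id, field, old_value, new_value, user_id):
        # Record both images so functional users can spot bad updates later.
        audit_log.append({
            "record_id": record_id,
            "field": field,
            "before": old_value,
            "after": new_value,
            "changed_by": user_id,
        })

    audit_log = []
    log_before_after(audit_log, record_id=1001, field="credit_limit",
                     old_value=5000, new_value=50000, user_id="clerk7")
    # A reviewer who sees 5000 -> 50000 can question the extra zero.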
- Which of the following is not an example of denial-of-service
attacks?
a. Flaw-based attacks
b. Information attacks
c. Flooding attacks
d. Distributed attacks
- b. An information attack is not relevant here because it is too
general. Flaw-based attacks take advantage of a flaw in the target
system’s software to cause a processing failure, escalate privileges, or
to cause it to exhaust system resources. Flooding attacks simply send a
system more information than it can handle. A distributed attack is a
subset of denial-of-service (DoS) attacks, where the attacker uses
multiple computers to launch the attack and flood the system.
- All the following are examples of technical controls for
ensuring information systems security except:
a. User identification and authentication
b. Assignment of security responsibility
c. Access controls
d. Data validation controls
- b. Assignment of security responsibility is a part of management
controls. Screening of personnel is another example of management
controls. The other three choices are part of technical controls.
- Which of the following individuals or items cause the highest
economic loss to organizations using computer-based information
systems?
a. Dishonest employees
b. Disgruntled employees
c. Errors and omissions
d. Outsiders
- c. Users, data entry clerks, system operators, and programmers
frequently make errors that contribute directly or indirectly to security
problems. In some cases, the error is the threat, such as a data entry
error or a programming error that crashes a system. In other cases, the
errors create vulnerabilities. Errors can occur during all phases of the
system life cycle. Many studies indicate that 65 percent of losses to
organizations are the result of errors and omissions followed by
dishonest employees (13%), disgruntled employees (6%), and
outsiders/hackers (3%).
- Which one of the following situations renders backing up
program and data files ineffective?
a. When catastrophic accidents happen
b. When disruption to the network occurs
c. When viruses are timed to activate at a later date
d. When backups are performed automatically
- c. Computer viruses that are timed to activate at a later date can
be copied onto the backup media thereby infecting backup copies as
well. This makes the backup copy ineffective, unusable, or risky.
Backups are useful and effective (i) in the event of a catastrophic
accident, (ii) in case of disruption to the network, and (iii) when they
are performed automatically, because automation eliminates human error.
- What does an ineffective local-area-network backup strategy
include?
a. Backing up servers daily
b. Securing the backup workstations
c. Scheduling backups during regular work hours
d. Using file recovery utility programs
- c. It is not a good operating practice to schedule backups during
regular work hours because it interrupts the business functions. It is
advised to schedule backups during off hours to avoid file contention
(when files are open and the backup program is scheduled to run). As
the size and complexity of local-area networks (LANs) increase,
backups have assumed greater importance with many options
available. It is a common practice to back up servers daily, taking
additional backups when extensive database changes occur. It is good
to secure the backup workstations to prevent interruption of backup
processes that can result in the loss of backup data. It is a better
practice to use the network operating system’s file recovery utility for
immediate restoration of accidentally deleted files before resorting to
the time-consuming process of file recovery from backup tapes.
- Which one of the following types of restores is used when
performing system upgrades and reorganizations?
a. Full restores
b. Individual file restores
c. Redirected restores
d. Group file restores
- a. Full restores are used to recover from catastrophic events or
when performing system upgrades and system reorganizations and
consolidations. All the data on media is fully restored.
Individual file restores, as the name implies, restore the last version
of a file written to media, typically because it was accidentally
deleted or ruined.
Redirected restores store files on a different location or system than the
one they were copied from during the backup operations. Group file
restores handle two or more files at a time.
- Which of the following file backup strategies is preferred
when a full snapshot of a server is required prior to upgrading it?
a. Full backups
b. Incremental backups
c. Differential backups
d. On-demand backups
- d. On-demand backups refer to the operations that are done
outside of the regular backup schedule. This backup method is most
useful when backing up a few files/directories or when taking a full
snapshot of a server prior to upgrading it. On-demand backups can act
as a backup for regular backup schedules.
Full backups are incorrect because they copy all data files and
programs. It is a brute-force method providing peace of mind at the
expense of valuable time. Incremental backups are incorrect because
they are an inefficient method and copy only those files that have
changed since the last backup. Differential backups are incorrect
because they copy all data files that have changed since the last full
backup. Only two files are needed to restore the entire system: the last
full backup and the last differential backup.
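The selection logic behind the three scheduled strategies can be
summarized in a minimal sketch (file modification times stand in for
the archive-bit or catalog mechanisms a real backup product would use):

    import os

    def files_to_back_up(paths, strategy, last_full, last_backup):
        # last_full / last_backup are epoch timestamps of the previous backups.
        selected = []
        for path in paths:
            mtime = os.path.getmtime(path)
            if strategy == "full":
                selected.append(path)                 # copy everything
            elif strategy == "differential" and mtime > last_full:
                selected.append(path)                 # changed since last full backup
            elif strategy == "incremental" and mtime > last_backup:
                selected.append(path)                 # changed since last backup of any kind
        return selected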
- Which one of the following database backup strategies is
executed when a database is running in a local-area-network
environment?
a. Cold backup
b. Hot backup
c. Logical backup
d. Offline backup
- b. Hot backups are taken when the database is running and
updates are being written to it. They depend heavily on the ability of
log files to stack up transaction instructions without actually writing
any data values into database records. While these transactions are
stacking up, the database tables are not being updated, and therefore
can be backed up with integrity. One major problem is that if the
system crashes in the middle of the backup, all the transactions
stacking up in the log file are lost.
The idea of cold backup is to shut down the database and back it up
while no end users are working on the system. This is the best
approach where data integrity is concerned, but it does not service the
customer (end user) well.
Logical backups use software techniques to extract data from the
database and write the results to an export file, which is an image file.
The logical backup approach is good for incremental backups. Offline
backup is another term for cold backup.
- Contrary to best practices, information systems’ security
training is usually not given to which of the following parties?
a. Information systems security staff
b. Functional users
c. Computer operations staff
d. Corporate internal audit staff
- c. The information systems’ security training program should be
specifically tailored to meet the needs of computer operations staff so
that they can deal with problems that have security implications.
However, the computer operations staff is usually either taken for
granted or completely left out of training plans.
The information systems’ security staff is provided with periodic
training to keep its knowledge current. Functional users will definitely
be given training so that they know how to practice security. Corporate
internal audit staff is given training because it needs to review the IT
security goals, policies, procedures, standards, and practices.
- Which one of the following is a direct example of social
engineering from a computer security viewpoint?
a. Computer fraud
b. Trickery or coercion techniques
c. Computer theft
d. Computer sabotage
- b. Social engineering is a process of tricking or coercing people
into divulging their passwords. Computer fraud involves deliberate
misrepresentation, alteration, or disclosure of data to obtain something
of value. Computer theft involves stealing of information, equipment,
or software for personal gain. Computer sabotage includes planting a
Trojan horse, trapdoor, time bomb, virus, or worm to perform
intentional harm or damage. The difference in the other three choices is
that there is no trickery or coercion involved.
- A fault-tolerant design feature for large distributed systems
considers all the following except:
a. Using multiple components to duplicate functionality
b. Using duplicated systems in separate locations
c. Using modular components
d. Providing backup power supplies
- d. A fault tolerant design should make a system resistant to failure
and able to operate continuously. Many ways exist to develop fault
tolerance in a system, including using two or more components to
duplicate functionality, duplicating systems in separate locations, or
using modular components in which failed components can be
replaced with new ones. It does not include providing backup power
supplies because it is a part of preventive maintenance, which should
be used with fault tolerant design. Preventive maintenance measures
reduce the likelihood of significant impairment to components.
- The process of degaussing involves which of the following?
a. Retrieving all stored information
b. Storing all recorded information
c. Removing all recorded information
d. Archiving all recorded information
- c. The purpose of degaussing is to remove all recorded
information from a computer-recorded magnetic tape. It does this by
demagnetizing the recording media, such as the tape or the hard
drive. After degaussing is done, the magnetic media is in a fully
demagnetized state. However, degaussing cannot retrieve, store, or
archive information.
- An audit trail record should include sufficient information to
trace a user’s actions and events. Which of the following
information in the audit trail record helps the most to determine if
the user was a masquerader or the actual person specified?
a. The user identification associated with the event
b. The date and time associated with the event
c. The program used to initiate the event
d. The command used to initiate the event
- b. An audit trail should include sufficient information to establish
what events occurred and who (or what) caused them. Date and
timestamps can help determine if the user was a masquerader or the
actual person specified. With date and time, one can determine whether
a specific user worked on that day and at that time.
The other three choices are incorrect because the masquerader could be
using a fake user identification (ID) number or calling for invalid and
inappropriate programs and commands.
In general, an event record should specify when the event occurred, the
user ID associated with the event, the program or command used to
initiate the event, and the result.
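A minimal example of an event record carrying those fields (the format
and values are illustrative only):

    import datetime
    import json

    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": "jdoe",                  # who (or what) caused the event
        "command": "print_report.exe",      # program or command used to initiate it
        "result": "denied",                 # outcome of the event
    }
    print(json.dumps(event))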
- Automated tools help in analyzing audit trail data. Which one
of the following tools looks for anomalies in user or system
behavior?
a. Trend analysis tools
b. Audit data reduction tools
c. Attack signature detection tools
d. Audit data-collection tools
- a. Many types of tools have been developed to help reduce the
amount of information contained in audit records, as well as to distill
useful information from the raw data. Especially on larger systems,
audit trail software can create large files, which can be extremely
difficult to analyze manually. The use of automated tools is likely to be
the difference between unused audit trail data and a robust program.
Trend analysis and variance detection tools look for anomalies in user
or system behavior.
Audit data reduction tools are preprocessors designed to reduce the
volume of audit records to facilitate manual review. These tools
generally remove records generated by specified classes of events,
such as records generated by nightly backups.
Attack signature detection tools look for an attack signature, which is a
specific sequence of events indicative of an unauthorized access
attempt. A simple example is repeated failed log-in attempts. Audit
data-collection tools simply gather data for analysis later.
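For the repeated failed log-in example, a simple attack-signature check
might look like the following sketch (the threshold and event format
are assumptions):

    from collections import Counter

    def flag_repeated_failures(events, threshold=5):
        # events: iterable of (user_id, outcome) pairs taken from audit data.
        failures = Counter(user for user, outcome in events if outcome == "failure")
        return [user for user, count in failures.items() if count >= threshold]

    # flag_repeated_failures([("alice", "failure")] * 6 + [("bob", "success")])
    # returns ["alice"]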
- Regarding a patch management program, which of the
following helps system administrators most in terms of monitoring
and remediating IT resources? - Supported equipment
- Supported applications software
- Unsupported hardware
- Unsupported operating systems
a. 1 only
b. 2 only
c. 1 and 2
d. 3 and 4
- d. Here, supported and unsupported mean whether company
management has approved the acquisition, installation, and operation
of the hardware and software: approved in the former case and not
approved in the latter. System administrators should be taught
how to independently monitor and remediate unsupported hardware,
operating systems, and applications software because unsupported
resources are vulnerable to exploitation. This is because non-compliant
employees could have purchased and installed the unsupported
hardware and software on their personal computers, which is riskier
than the supported ones. A potential risk is that the unsupported
systems could be incompatible with the supported systems and may
not have the required security controls.
A list of supported resources is needed to analyze the inventory and
identify those resources that are used within the organization. This
allows the system administrators to know for which hardware,
operating systems, and applications they will be checking for new
patches, vulnerabilities, and threats. Note that not patching the unsupported
systems can negatively impact the patching of the supported systems
as they both coexist and operate on the same computer or network.
- Which of the following is the best action to take when an
information system media cannot be sanitized?
a. Clearing
b. Purging
c. Destroying
d. Disposal
- c. An information system media that cannot be sanitized should
be destroyed. Destroying ensures that the media cannot be reused as
originally intended and that recovering its information is virtually
impossible or prohibitively expensive.
Sanitization techniques include disposal, clearing, purging, and
destruction. Disposal is the act of discarding media by giving up
control in a manner short of destruction and is not a strong protection.
Clearing is the overwriting of classified information such that the
media may be reused. Purging is the removal of obsolete data by
erasure, by overwriting of storage, or by resetting registers. Clearing
media would not suffice for purging.
- Regarding a patch management program, which of the
following benefits confirm that the remediations have been
conducted appropriately? - Avoiding an unstable website
- Avoiding an unusable website
- Avoiding a security incident
- Avoiding unplanned downtime
a. 1 only
b. 2 only
c. 1 and 2
d. 3 and 4
- d. There are understandable benefits in confirming that the
remediations have been conducted appropriately, possibly avoiding a
security incident or unplanned downtime. Central system
administrators can send remediation information on a disk to local
administrators as a safe alternative to an e-mail list if the network or
the website is unstable or unusable.
- Regarding a patch management program, which of the
following should be used when comparing the effectiveness of the
security programs of multiple systems? - Number of patches needed
- Number of vulnerabilities found
- Number of vulnerabilities per computer
- Number of unapplied patches per computer
a. 1 only
b. 2 only
c. 1 and 2
d. 3 and 4
- d. Ratios, not absolute numbers, should be used when comparing
the effectiveness of the security programs of multiple systems. Ratios
reveal better information than absolute numbers. In addition, ratios
allow effective comparison between systems. Number of patches
needed and number of vulnerabilities found are incorrect because they
deal with absolute numbers.
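For example, using hypothetical figures, a system with 30 unapplied
patches across 100 computers (0.3 per computer) is in better shape than
a system with 20 unapplied patches across 25 computers (0.8 per
computer), even though its absolute count is higher.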
- All the following are examples of denial-of-service attacks
except:
a. IP address spoofing
b. Smurf attack
c. SYNflood attack
d. Sendmail attack
- a. IP address spoofing is falsifying the identity of a computer
system on a network. It capitalizes on the packet address the Internet
Protocol (IP) uses for transmission. It is not an example of a
denial-of-service attack because it does not flood the host computer.
Smurf, synchronized flood (SYNflood), and sendmail attacks are
examples of denial-of-service attacks. Smurf attacks use a network that
accepts broadcast ping packets to flood the target computer with ping
reply packets. SYN flood attack is a method of overwhelming a host
computer on the Internet by sending the host a high volume of SYN
packets requesting a connection, but never responding to the
acknowledgment packets returned by the host. Recent attacks against
sendmail include remote penetration, local penetration, and remote
denial of service.
- Ping-of-death is an example of which of the following?
a. Keyboard attack
b. Stream attack
c. Piggyback attack
d. Buffer overflow attack
- d. The ping-of-death is an example of buffer overflow attack, a
part of a denial-of-service attack, where large packets are sent to
overfill the system buffers, causing the system to reboot or crash.
A keyboard attack is a resource starvation attack in that it consumes
system resources (for example, CPU utilization and memory),
depriving legitimate users. A stream attack sends TCP packets to a
series of ports with random sequence numbers and random source IP
addresses, resulting in high CPU usage. In a piggybacking attack, an
intruder can gain unauthorized access to a system by using a valid
user’s connection.
- Denial-of-service attacks compromise which one of the
following properties of information systems?
a. Integrity
b. Availability
c. Confidentiality
d. Reliability
- b. A denial-of-service (DoS) is an attack in which one user takes
up so much of the shared resource that none of the resource is left for
other users. It compromises the availability of system resources (for
example, disk space, CPU, print paper, and modems), resulting in
degradation or loss of service.
A DoS attack does not affect integrity because the latter is a property
that an object is changed only in a specified and authorized manner. A
DoS attack does not affect confidentiality because the latter is a
property ensuring that data is disclosed only to authorized subjects or
users. A DoS attack does not affect reliability because the latter is a
property defined as the probability that a given system is performing
its mission adequately for a specified period of time under the expected
operating conditions.
- Which of the following is the most complex phase of incident
response process for malware incidents?
a. Preparation
b. Detection
c. Recovery
d. Remediation
- c. Of all the malware incident-response life-cycle phases, the
recovery phase is the most complex. Recovery involves containment,
restore, and eradication. Containment addresses how to control an
incident before it spreads to avoid consuming excessive resources and
increasing damage caused by the incident. Restore addresses bringing
systems to normal operations and hardening systems to prevent similar
incidents. Eradication addresses eliminating the affected components
of the incident from the overall system to minimize further damage to
it. More tools and technologies are relevant to the recovery phase than to
any other phase; more technologies mean more complexity. The
technologies involved and the speed of malware spreading make it
more difficult to recover. The other three phases (preparation,
detection, and remediation) are less complex. The scope of the
preparation and prevention phase covers establishing plans, policies,
and procedures. The scope of the detection phase covers identifying
classes of incidents and defining appropriate actions to take. The
scope of the remediation phase covers tracking and documenting
security incidents on an ongoing basis to help in forensics analysis
and in establishing trends.
- Which of the following determines the system availability rate
for a computer-based application system?
a. (Available time / scheduled time) x 100
b. [(1 + available time) / (scheduled time)] x 100
c. [(Available time)/(1 – scheduled time)] x 100
d. [(Available time – scheduled time) / (scheduled time)] x 100
- a. System availability is expressed as a rate between the number
of hours the system is available to the users during a given period and
the scheduled hours of operation. Overall hours of operation also
include sufficient time for scheduled maintenance activities. Scheduled
time is the hours of operation, and available time is the time during
which the computer system is available to the users.
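For example, using hypothetical figures, a system scheduled to operate
720 hours in a month that was actually available for 702 of those hours
has an availability rate of (702 / 720) x 100, or 97.5 percent.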
- A computer security incident was detected. Which of the
following is the best reaction strategy for management to adopt?
a. Protect and preserve
b. Protect and recover
c. Trap and prosecute
d. Pursue and proceed
- b. If a computer site is vulnerable, management may favor the
protect-and-recover reaction strategy because it increases defenses
available to the victim organization. Also, this strategy brings
normalcy to the network’s users as quickly as possible. Management
can interfere with the intruder’s activities, prevent further access, and
begin damage assessment. This interference process may include
shutting down the computer center, closing of access to the network,
and initiating recovery efforts.
Protect-and-preserve strategy is a part of a protect-and-recover
strategy. Law enforcement authorities and prosecutors favor the
trap-and-prosecute strategy. It lets intruders continue their activities
until the security administrator can identify the intruder. In the meantime,
there could be system damage or data loss. Pursue-and-proceed
strategy is not relevant here.
- A computer security incident handling capability should meet
which of the following?
a. Users’ requirements
b. Auditors’ requirements
c. Security requirements
d. Safety requirements
- a. There are a number of start-up costs and funding issues to
consider when planning an incident handling capability. Because the
success of an incident handling capability relies so heavily on the
users’ perceptions of its worth and whether they use it, it is important
that the capability meets users’ requirements. Two important funding
issues are personnel and education and training.
- Which of the following is not a primary benefit of an incident
handling capability?
a. Containing the damage
b. Repairing the damage
c. Preventing the damage
d. Preparing for the damage
- d. The primary benefits of an incident handling capability are
containing and repairing damage from incidents and preventing future
damage. Preparing for the damage is a secondary and side benefit.
- All the following can co-exist with computer security incident
handling except:
a. Help-desk function
b. System backup schedules
c. System development activity
d. Risk management process
- c. System development activity is engaged in designing and
constructing a new computer application system, whereas incident
handling is needed during operation of the same application system.
For example, for purposes of efficiency and cost savings, an incident
handling capability is often combined with a user help desk. Also, backups
of system resources need to be used when recovering from an incident.
Similarly, the risk analysis process benefits from statistics and logs
showing the numbers and types of incidents that have occurred and the
types of controls that are effective in preventing such incidents. This
information can be used to help select appropriate security controls and
practices.
- Which of the following decreases the response time for
computer security incidents?
a. Electronic mail
b. Physical bulletin board
c. Terminal and modem
d. Electronic bulletin board
- a. With computer security incidents, rapid communication is
important. The incident team may need to send out security advisories
or collect information quickly; thus some convenient form of
communication, such as electronic mail (e-mail), is generally highly
desirable. With e-mail, the team can easily direct information to
various subgroups within the constituency, such as system managers or
network managers, and broadcast general alerts to the entire
constituency as needed. When connectivity already exists, e-mail has
low overhead and is easy to use.
Although there are substitutes for e-mail, they tend to increase
response time. An electronic bulletin board system (BBS) can work
well for distributing information, especially if it provides a convenient
user interface that encourages its use. A BBS connected to a network is
more convenient to access than one requiring a terminal and modem;
however, the latter may be the only alternative for organizations
without sufficient network connectivity. In addition, telephones,
physical bulletin boards, and flyers can be used, but they increase
response time.
- Which of the following incident response life-cycle phases is
most challenging for many organizations?
a. Preparation
b. Detection
c. Recovery
d. Reporting
- b. Detection, for many organizations, is the most challenging
aspect of the incident response process. Actually detecting and
assessing possible incidents is difficult. Determining whether an
incident has occurred and, if so, the type, extent, and magnitude of the
problem is not an easy task.
The other three phases such as preparation, recovery, and reporting are
not that challenging. The scope of preparation and prevention phase
covers establishing plans, policies, and procedures. The scope of
recovery phase includes containment, restore, and eradication. The
scope of reporting phase involves understanding the internal and
external reporting requirements in terms of the content and timeliness
of the reports.
- Regarding incident response data, nonperformance of which
one of the following items makes the other items less important?
a. Quality of data
b. Review of data
c. Standard format for data
d. Actionable data
- b. If the incident response data is not reviewed regularly, the
effectiveness of detection and analysis of incidents is questionable. It
does not matter whether the data is of high quality, in a standard
format, or actionable. Proper and efficient reviews of
incident-related data require people with extensive specialized
technical knowledge and experience.
- Which of the following statements about incident
management and response is not true?
a. Most incidents require containment.
b. Containment strategies vary based on the type of incident.
c. All incidents need eradication.
d. Eradication is performed during recovery for some incidents.
- c. For some incidents, eradication is either unnecessary or is
performed during recovery. Most incidents require containment, so it is
important to consider it early in the course of handling each incident.
Also, it is true that containment strategies vary based on the type of
incident.
- Which of the following is the correct sequence of events taking
place in the incident response life cycle process?
a. Prevention, detection, preparation, eradication, and recovery
b. Detection, response, reporting, recovery, and remediation
c. Preparation, containment, analysis, prevention, and detection
d. Containment, eradication, recovery, detection, and reporting
- b. The correct sequence of events taking place in the incident
response life cycle is detection, response, reporting, recovery, and
remediation. Although the correct sequence is started with detection,
there are some underlying activities that should be in place prior to
detection. These prior activities include preparation and prevention,
addressing the plans, policies, procedures, resources, support, metrics,
patch management processes, host hardening measures, and properly
configuring the network perimeter.
Detection involves the use of automated detection capabilities (for
example, log analyzers) and manual detection capabilities (for
example, user reports) to identify incidents. Response involves security
staff offering advice and assistance to system users for the handling
and reporting of security incidents (for example, help desk or forensic
services). Reporting involves understanding the internal and external
reporting requirements in terms of the content and timeliness of the
reports. Recovery involves containment, restore, and eradication.
Containment addresses how to control an incident before it spreads to
avoid consuming excessive resources and increasing damage caused
by the incident. Restore addresses bringing systems to normal
operations and hardening systems to prevent similar incidents.
Eradication addresses eliminating the affected components of the
incident from the overall system to minimize further damage to the
overall system. Remediation involves tracking and documenting
security incidents on an ongoing basis.
- Which of the following is not a recovery action after a
computer security incident was contained?
a. Rebuilding systems from scratch
b. Changing passwords
c. Preserving the evidence
d. Installing patches
- c. Preserving the evidence is a legal matter and part of the
containment strategy, not a recovery action, whereas all the other
choices are recovery actions. In recovery, administrators restore
systems to normal operation and harden systems to prevent similar
incidents, including the actions taken in the other three choices.
- Contrary to best practices, which of the following parties is
usually not notified at all or is notified last when a computer
security incident occurs?
a. System administrator
b. Legal counsel
c. Disaster recovery coordinator
d. Hardware and software vendors
- b. The first part of a response mechanism is notification, whether
automatic or manual. Besides technical staff, several others must be
notified, depending on the nature and scope of the incident.
Unfortunately, legal counsel is either not notified at all or is notified
last, on the assumption that its involvement is not required.
- Which of the following is not a viable option in the event of an
audit processing failure or audit storage capacity being reached?
a. Shut down the information system.
b. Overwrite the oldest-audit records.
c. Stop generating the audit records.
d. Continue processing after notification.
- d. In the event of an audit processing failure or audit storage
capacity being reached, the information system alerts appropriate
management officials and takes additional actions such as shutting
down the system, overwriting the oldest-audit records, and stopping
the generation of audit records. It should not continue processing,
either with or without notification because the audit-related data would
be lost.
- Which of the following surveillance techniques is passive in
nature?
a. Audit logs
b. Keyboard monitoring
c. Network sniffing
d. Online monitoring
- a. Audit logs collect data passively on computer journals or files
for later review and analysis followed by action. The other three
choices are examples of active surveillance techniques where
electronic (online) monitoring is done for immediate review and
analysis followed by action.
- A good computer security incident handling capability is
closely linked to which of the following?
a. Systems software
b. Applications software
c. Training and awareness program
d. Help desk
- c. A good incident handling capability is closely linked to an
organization’s training and awareness program. It will have educated
users about such incidents so users know what to do when they occur.
This can increase the likelihood that incidents will be reported early,
thus helping to minimize damage. The help desk is a tool to handle
incidents. Intruders can use both systems software and applications
software to create security incidents.
- System users seldom consider which of the following?
a. Internet security
b. Residual data security
c. Network security
d. Application system security
- b. System users seldom consider residual data security as part of
their job duties because they think it is the job of computer operations
or information security staff. Residual data security means data
remanence where corporate spies can scavenge discarded magnetic or
paper media to gain access to valuable data. Both system users and
system managers usually consider the measures mentioned in the other
three choices.
- Which of the following is not a special privileged user?
a. System administrator
b. Business end-user
c. Security administrator
d. Computer operator
- b. A special privileged user is defined as an individual who has
access to system control, monitoring, or administration functions. A
business end-user is a normal system user performing the day-to-day,
routine tasks required by his or her job duties, and should not have
the special privileges that a system administrator, security
administrator, computer operator, system programmer, system
maintainer, network administrator, or desktop administrator does.
Privileged users have access to a set of access rights on a given
system. Privileged access to privileged functions should be limited to
a few individuals in the IT department and should not be given to or
shared with the far larger population of business end-users.
- Which of the following is the major consideration when an
organization gives its incident response work to an outsourcer?
a. Division of responsibilities
b. Handling incidents at multiple locations
c. Current and future quality of work
d. Lack of organization-specific knowledge
- c. The quality of the outsourcer’s work remains an important
consideration. Organizations should consider not only the current
quality of work, but also the outsourcer’s efforts to ensure the quality
of future work, which are the major considerations. Organizations
should think about how they could audit or otherwise objectively
assess the quality of the outsourcer’s work. Lack of
organization-specific knowledge will be reflected in the current and
future quality of work. The other three choices are minor
considerations and are a part
of the major considerations.
- The incident response team should work with which of the
following when attempting to contain, eradicate, and recover from
large-scale incidents?
a. Advisory distribution team
b. Vulnerability assessment team
c. Technology watch team
d. Patch management team
- d. Patch management staff work is separate from that of the
incident response staff. Effective communication channels between the
patch management team and the incident response team are likely to
improve the success of a patch management program when containing,
eradicating, and recovering from large-scale incidents. The activities
listed in the other choices are the responsibility of the incident
response team.
- Which of the following is the foundation of the incident
response program?
a. Incident response policies
b. Incident response procedures
c. Incident response standards
d. Incident response guidelines
- a. The incident response policies are the foundation of the
incident response program. They define which events are considered as
incidents, establish the organizational structure for the incident
response program, define roles and responsibilities, and list the
requirements for reporting incidents.
- All the following can increase an information system’s
resilience except:
a. A system achieves a secure initial state.
b. A system reaches a secure failure state after failure.
c. A system’s recovery procedures take the system to a known
secure state after failure.
d. All of a system’s identified vulnerabilities are fixed.
- d. There are vulnerabilities in a system that cannot be fixed, those
that have not yet been fixed, those that are not known, and those that
are not practical to fix due to operational constraints. Therefore, a
statement that “all of a system’s identified vulnerabilities are fixed” is
not correct. The other three choices can increase a system’s resilience.
- Media sanitization ensures which of the following?
a. Data integrity
b. Data confidentiality
c. Data availability
d. Data accountability
- b. Media sanitization refers to the general process of removing
data from storage media, such that there is reasonable assurance, in
proportion to the confidentiality of the data, that the data may not be
retrieved and reconstructed. The other three choices are not relevant
here.
- Regarding media sanitization, degaussing is the same as:
a. Incinerating
b. Melting
c. Demagnetizing
d. Smelting
- c. Degaussing reduces the magnetic flux to virtual zero by
applying a reverse magnetizing field. It is also called demagnetizing.
- Regarding media sanitization, what is residual information
remaining on storage media after clearing called?
a. Residue
b. Remanence
c. Leftover data
d. Leftover information
- b. Remanence is residual information remaining on storage media
after clearing. Choice (a) is incorrect because residue is data left in
storage after information-processing operations are complete but
before degaussing or overwriting (clearing) has taken place. Leftover
data and leftover information are too general as terms to be of any use
here.
- What is the security goal of the media sanitization requiring
an overwriting process?
a. To replace random data with written data.
b. To replace test data with written data.
c. To replace written data with random data.
d. To replace written data with statistical data.
- c. The security goal of the overwriting process is to replace
written data with random data. The process may include overwriting
not only the logical storage of a file (for example, file allocation table)
but also may include all addressable locations.
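As a rough illustration of the overwriting idea, the following sketch
replaces a file's contents with random bytes before deleting it; note
that overwriting a single file in this way addresses logical storage
only and, as stated above, a thorough process may need to cover all
addressable locations:

    import os

    def overwrite_file(path, passes=1):
        # Overwrite the file's contents with random data, then delete it.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())    # push the random data to the device
        os.remove(path)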
- Which of the following protects the confidentiality of
information against a laboratory attack?
a. Disposal
b. Clearing
c. Purging
d. Disinfecting
- c. A laboratory attack is a data scavenging method carried out with
the aid of precise, elaborate, and powerful equipment. This
attack involves using signal-processing equipment and specially
trained personnel. Purging information is a media sanitization process
that protects the confidentiality of information against a laboratory
attack and renders the sanitized data unrecoverable. This is
accomplished through the removal of obsolete data by erasure, by
overwriting of storage, or by resetting registers.
The other three choices are incorrect. Disposal is the act of discarding
media by giving up control in a manner short of destruction, and is not
a strong protection. Clearing is the overwriting of classified
information such that the media may be reused. Clearing media would
not suffice for purging. Disinfecting is a process of removing malware
within a file.
- Computer fraud is increased when:
a. Employees are not trained.
b. Documentation is not available.
c. Audit trails are not available.
d. Employee performance appraisals are not given.
- c. Audit trails indicate what actions are taken by the system.
A system with adequate and clear audit trails deters fraud
perpetrators because of the fear of getting caught. For example, the fact that
employees are trained, documentation is available, and employee
performance appraisals are given (preventive measures) does not
necessarily mean that employees act with due diligence at all times.
Hence, the availability of audit trails (detection measures) is very
important because they provide concrete evidence of actions and
inactions.
- Which of the following is not a prerequisite for system
monitoring?
a. System logs and audit trails
b. Software patches and fixes
c. Exception reports
d. Security policies and procedures
- c. Exception reports are the result of a system monitoring activity.
Deviations from standards or policies will be shown in exception
reports. The other three choices are needed before the monitoring
process starts.
- What is the selective termination of affected nonessential
processing when a failure is detected in a computer system called?
a. Fail-safe
b. Fail-soft
c. Fail-over
d. Fail-under
- b. The selective termination of affected nonessential processing
when a failure is detected in a computer system is called fail-soft. The
automatic termination and protection of programs when a failure is
detected in a computer system is called a fail-safe. Fail-over means
switching to a backup mechanism. Fail-under is a meaningless phrase.
- What is an audit trail an example of?
a. Recovery control
b. Corrective control
c. Preventive control
d. Detective control
- d. Audit trails record an attacker’s actions so they can be detected
after the fact; hence they are an example of detective controls.
Recovery controls facilitate
the recovery of lost or damaged files. Corrective controls fix a problem
or an error. Preventive controls do not detect or correct an error; they
simply stop it if possible.
- From a best security practices viewpoint, which of the
following falls under the ounce-of-prevention category?
a. Patch and vulnerability management
b. Incident response
c. Symmetric cryptography
d. Key rollover
- a. It has been said that “An ounce of prevention equals a pound of
cure.” Patch and vulnerability management is the “ounce of
prevention” compared to the “pound of cure” in the incident response,
in that timely patches to software reduce the chances of computer
incidents.
Symmetric cryptography uses the same key for both encryption and
decryption, whereas asymmetric cryptography uses separate keys for
encryption and decryption, or to digitally sign and verify a signature.
Key rollover is the process of generating and using a new key
(symmetric or asymmetric key pair) to replace one already in use.
- Which of the following must be manually keyed into an
automated IT resources inventory tool used in patch management
to respond quickly and effectively?
a. Connected network port
b. Physical location
c. Software configuration
d. Hardware configuration
- b. Although most information can be taken automatically from the
system data, the physical location of an IT resource must be manually
entered. Connected network port numbers can be taken automatically
from the system data. Software and hardware configuration
information can be taken automatically from the system data.
- Regarding a patch management program, which of the
following is not an example of a threat?
a. Exploit scripts
b. Worms
c. Software flaws
d. Viruses
- c. Software flaw vulnerabilities cause a weakness in the security
of a system. Threats are capabilities or methods of attack developed by
malicious entities to exploit vulnerabilities and potentially cause harm
to a computer system or network. Threats usually take the form of
exploit scripts, worms, viruses, rootkits, exploits, and Trojan horses.
- Regarding a patch management program, which of the
following does not always return the system to its previous state?
a. Disable
b. Uninstall
c. Enable
d. Install
- b. There are many options available to a system administrator in
remediation testing. The ability to “undo” or uninstall a patch should
be considered; however, even when this option is provided, the
uninstall process does not always return the system to its previous
state. Disable temporarily disconnects a service. Enable or install is not
relevant here.
- Regarding media sanitization, degaussing is not effective for
which of the following?
a. Nonmagnetic media
b. Damaged media
c. Media with large storage capacity
d. Quickly purging diskettes
- a. Degaussing is exposing the magnetic media to a strong
magnetic field in order to disrupt the recorded magnetic domains. It is
not effective for purging nonmagnetic media (i.e., optical media), such
as compact discs (CD) and digital versatile discs (DVD). However,
degaussing can be an effective method for purging damaged media, for
purging media with exceptionally large storage capacities, or for
quickly purging diskettes.
- Which of the following is the ultimate form of media
sanitization?
a. Disposal
b. Clearing
c. Purging
d. Destroying
- d. Media destruction is the ultimate form of sanitization. After
media are destroyed, they cannot be reused as originally intended, and
recovering information from them is virtually impossible or
prohibitively expensive. Physical destruction can be accomplished
using a variety of methods, including disintegration, incineration,
pulverization, shredding, melting, sanding, and chemical treatment.
- Organizations that outsource media sanitization work should
exercise:
a. Due process
b. Due law
c. Due care
d. Due diligence
- d. Organizations can outsource media sanitization and destruction
if business and security management decide this would be the most
reasonable option for maintaining confidentiality while optimizing
available resources. When choosing this option, organizations exercise
due diligence when entering into a contract with another party engaged
in media sanitization. Due diligence requires organizations to develop
and implement an effective security program to prevent and detect
violation of policies and laws.
Due process means each person is given an equal and a fair chance of
being represented or heard and that everybody goes through the same
process for consideration and approval. It means all are equal in the
eyes of the law. Due law covers due process and due care. Due care
means reasonable care in promoting the common good and
maintaining the minimal and customary practices.
- Redundant arrays of independent disks (RAID) provide
which of the following security services most?
a. Data confidentiality
b. Data reliability
c. Data availability
d. Data integrity
- b. Forensic investigators are encountering redundant arrays of
independent disks (RAID) systems with increasing frequency as
businesses elect to utilize systems that provide greater data reliability.
RAID provides data confidentiality, data availability, and data integrity
security services to a lesser degree than data reliability.
- The fraud triangle includes which of the following elements?
a. Pressure, opportunity, and rationalization
b. Technique, target, and time
c. Intent, means, and environment
d. Place, ability, and need
- a. Pressure includes financial and nonfinancial types, and it could
be real or perceived. Opportunity includes real or perceived categories
in terms of time and place. Rationalization means the illegal actions
are consistent with the perpetrator’s personal code of conduct or state
of mind.
- When a system preserves a secure state during and after a
failure, it is called:
a. System failure
b. Fail-secure
c. Fail-access
d. System fault
- b. In fail-secure, the system preserves a secure condition during
and after an identified failure. System failure and fault are generic and
do not preserve a secure condition like fail-secure. Fail-access is a
meaningless term here.
- Fault-tolerant systems provide which of the following
security services?
a. Confidentiality and integrity
b. Integrity and availability
c. Availability and accountability
d. Accountability and confidentiality
- b. The goal of fault-tolerant systems is to detect and correct a
fault and to maintain the availability of a computer system. Fault-tolerant
systems play an important role in maintaining high data and
system integrity and in ensuring high availability of systems.
Examples include disk mirroring and server mirroring techniques.
- What do fault-tolerant hardware control devices include?
a. Disk duplexing and mirroring
b. Server consolidation
c. LAN consolidation
d. Disk distribution
- a. Disk duplexing means that the disk controller is duplicated.
When one disk controller fails, the other one is ready to operate. Disk
mirroring means the file server contains duplicate disks, and that all
information is written to both disks simultaneously. Server
consolidation, local-area network (LAN) consolidation, and disk
distribution are not fault-tolerance techniques, although they may
have their own uses.
- Performing automated deployment of patches is difficult for
which of the following?
a. Homogeneous computing platforms
b. Legacy systems
c. Standardized desktop systems
d. Similarly configured servers
- b. Manual patching is useful and necessary for many legacy and
specialized systems due to their nature. Automated patching tools
allow an administrator to update hundreds or even thousands of
systems from a single console. Deployment is fairly simple when there
are homogeneous computing platforms, with standardized desktop
systems, and similarly configured servers.
- Regarding media sanitization, degaussing is an acceptable
method for which of the following?
a. Disposal
b. Clearing
c. Purging
d. Disinfecting
- c. Degaussing is demagnetizing magnetic media to remove
magnetic memory and to erase the contents of media. Purging is the
removal of obsolete data by erasure, by overwriting of storage, or by
resetting registers. Thus, degaussing and executing the firmware
Secure Purge command (for serial advanced technology attachment
(SATA) drives only) are acceptable methods for purging.
The other three choices are incorrect. Disposal is the act of discarding
media by giving up control in a manner short of destruction and is not
a strong protection. Clearing is the overwriting of classified
information such that the media may be reused. Clearing media
would not suffice for purging. Disinfecting is a process of removing
malware within a file.
- Regarding a patch management program, which of the
following should be done before performing the patch
remediation?
a. Test on a nonproduction system.
b. Check software for proper operation.
c. Conduct a full backup of the system.
d. Consider all implementation differences.
- c. Before performing the remediation, the system administrator
may want to conduct a full backup of the system to be patched. This
allows for a timely system restoration to its previous state if the patch
has an unintended or unexpected impact on the host. The other three
choices are part of the patch remediation testing procedures.
- Regarding a patch management program, an experienced
administrator or security officer should perform which of the
following?
a. Test file settings.
b. Test configuration settings.
c. Review patch logs.
d. Conduct exploit tests.
- d. Conducting an exploit test means performing a penetration test
to exploit the vulnerability. Only an experienced administrator or
security officer should perform exploit tests because this involves
launching actual attacks within a network or on a host. Generally, this
type of testing should be performed only on nonproduction equipment
and only for certain vulnerabilities. Only qualified staff who are
thoroughly aware of the risk and who are fully trained should conduct
the tests. Testing file settings, testing configuration settings, and reviewing patch logs are routine tasks a less experienced administrator or security
officer can perform.
- Which of the following best describes operations security?
A. Continual vigilance about hacker activity and possible vulnerabilities
B. Enforcing access control and physical security
C. Taking steps to make sure an environment, and the things within it, stay at a certain level of protection
D. Doing strategy planning to develop a secure environment and then
implementing it properly
- C. All of these are necessary security activities and procedures—they just don’t all fall under the operations umbrella. Operations is about keeping production up and running in a healthy and secure manner. Operations is not usually the entity that carries out strategic planning. It works at an operational, day-to-day level, not at the higher strategic level.
- Which of the following describes why operations security is important?
A. An environment continually changes and has the potential of lowering its level of protection.
B. It helps an environment be functionally sound and productive.
C. It ensures there will be no unauthorized access to the facility or its
resources.
D. It continually raises a company’s level of protection.
- A. This is the best answer because operations has the goal of keeping
everything running smoothly each and every day. Operations implements
new software and hardware and carries out the necessary security tasks passed down to it. As the environment changes and security is kept in the loop with these changes, there is a smaller likelihood of opening up vulnerabilities.
- Why should employers make sure employees take their vacations?
A. They have a legal obligation.
B. It is part of due diligence.
C. It is a way for fraud to be uncovered.
D. To ensure the employee does not get burnt out.
- C. Many times, employees who are carrying out fraudulent activities do not take the vacation they have earned because they do not want anyone to find out what they have been doing. Forcing employees to take vacations means that someone else has to do that person’s job and can possibly uncover any misdeeds.
- What is the difference between due care and due diligence?
A. Due care is the continual effort of ensuring that the right thing takes place, and due diligence is the continual effort to stay compliant with regulations.
B. Due care and due diligence are in contrast to the “prudent person” concept.
C. They mean the same thing.
D. Due diligence involves investigating the risks, while due care involves
carrying out the necessary steps to mitigate these risks.
- D. Due care and due diligence are legal terms that do not just pertain to
security. Due diligence involves going through the necessary steps to know
what a company’s or individual’s actual risks are, while due care involves
carrying out responsible actions to reduce those risks. These concepts
correspond with the “prudent person” concept.
- Which of the following best describes separation of duties and job rotation?
A. Separation of duties ensures that more than one employee knows how to perform the tasks of a position, and job rotation ensures that one person cannot perform a high-risk task alone.
B. Separation of duties ensures that one person cannot perform a high-risk task alone, and job rotation can uncover fraud and ensure that more than one person knows the tasks of a position.
C. They are the same thing, but with different titles.
D. They are administrative controls that enforce access control and protect the company’s resources.
- B. Rotation of duties enables a company to have more than one person trained in a position and can uncover fraudulent activities. Separation of duties is put into place to ensure that one entity cannot carry out a critical task alone.
- If a programmer is restricted from updating and modifying production code, what is this an example of?
A. Rotation of duties
B. Due diligence
C. Separation of duties
D. Controlling input values
- C. This is just one of several examples of separation of duties. A system must be set up for proper code maintenance to take place when necessary, instead of allowing a programmer to make changes arbitrarily. These types of changes should go through a change control process and should have more entities involved than just one programmer.
- Why is it important to control and audit input and output values?
A. Incorrect values can cause mistakes in data processing and be evidence of fraud.
B. Incorrect values can be the fault of the programmer and do not comply
with the due care clause.
C. Incorrect values can be caused by brute force attacks.
D. Incorrect values are not security issues.
- A. There should be controls in place to make sure the data input into a system and the results generated are in the proper format and have expected values. Improper data being put into an application or system could cause bad output and security issues, such as buffer overflows.
- What is the difference between least privilege and need to know?
A. A user should have least privilege that restricts her need to know.
B. A user should have a security clearance to access resources, a need to know about those resources, and least privilege to give her full control of all resources.
C. A user should have a need to know to access particular resources, and least privilege should be implemented to ensure she only accesses the resources she has a need to know.
D. They are two different terms for the same issue.
- C. Users should be able to access only the resources they need to fulfill the
duties of their positions. They also should only have the level of permissions
and rights for those resources that are required to carry out the exact operations
they need for their jobs, and no more. This second concept is more granular
than the first, but they have a symbiotic relationship.
- Which of the following would not require updated documentation?
A. An antivirus signature update
B. Reconfiguration of a server
C. A change in security policy
D. The installation of a patch to a production server
- A. Documentation is very important for data processing and networked
environments. This task often gets pushed to the back burner or is totally
ignored. If things are not properly documented, employees will forget
what actually took place with each device. If the environment needs to be
rebuilt, for example, it may be done incorrectly if the procedure was poorly or improperly documented. When new changes need to be implemented, the current infrastructure may not be totally understood. Continually documenting when virus signatures are updated would be overkill. The other answers contain events that certainly require documentation.
- If sensitive data are stored on a CD-ROM and are no longer needed, which would be the proper way of disposing of the data?
A. Degaussing
B. Erasing
C. Purging
D. Physical destruction
- D. One cannot properly erase data held on a CD-ROM. If the data are
sensitive and you need to ensure no one has access to it, the media should be physically destroyed.
- If SSL is being used to encrypt messages that are transmitted over the network, what is a major concern of the security professional?
A. The network segments have systems that use different versions of SSL.
B. The user may have encrypted the message with an application-layer
product that is incompatible with SSL.
C. Network tapping and wiretapping.
D. The networks that the message will travel that the company does not
control.
- D. This is not a great question, but could be something that you run into on the exam. Let’s look at the answers. Different SSL versions are usually not a concern, because the two communicating systems will negotiate and agree upon the necessary version. There is no security violation issue here. SSL works at the transport layer; thus, it will not be affected by what the user does, as stated in answer B. SSL protects against network tapping and wiretapping. Answer D talks about the network segments the company does not own. You do not know at what point the other company will decrypt the SSL connection because you do not have control of that environment. Your data could be traveling unencrypted and unprotected on another network.
- What is the purpose of SMTP?
A. To enable users to decrypt mail messages from a server
B. To enable users to view and modify mail messages from a server
C. To transmit mail messages from the client to the mail server
D. To encrypt mail messages before being transmitted
- C. Simple Mail Transfer Protocol (SMTP) is the protocol used to transmit e-mail messages from a client to a mail server. It also lets different mail servers exchange messages.
- If a company has been contacted because its mail server has been used to spread spam, what is most likely the problem?
A. The internal mail server has been compromised by an internal hacker.
B. The mail server in the DMZ has private and public resource records.
C. The mail server has e-mail relaying misconfigured.
D. The mail server has SMTP enabled.
- C. Spammers will identify the mail servers on the Internet that have relaying enabled and are “wide open,” meaning the servers will forward any e-mail messages they receive. These servers can be put on a black list, which means other mail servers will not accept mail from them.
- Which of the following is not a reason fax servers are used in many companies?
A. They save money by not needing individual fax devices and the constant use of fax paper.
B. They provide a secure way of faxing instead of having faxed papers sitting in bins waiting to be picked up.
C. Faxes can be routed to employees’ electronic mailboxes.
D. They increase the need for other communication security mechanisms.
- D. The other three answers provide reasons why fax servers would be used instead of individual fax machines: ease of use, they provide more protection, and their supplies may be cheaper.
- If a company wants to protect fax data while it is in transmission, which of the following are valid mechanisms?
A. PGP and MIME
B. PEM and TSL
C. Data link encryption or fax encryptor
D. Data link encryption and MIME
- C. This is the best answer for this question. The other components could provide different levels of protection, but a fax encryptor (which is a data link encryptor) provides a higher level of protection across the board because everything is encrypted. Even if a user does not choose to encrypt something, it will be encrypted anyway before it is sent out the fax server.
- What is the purpose of TCP wrappers?
A. To monitor requests for certain ports and control access to sensitive files
B. To monitor requests for certain services and control access to password files
C. To monitor requests for certain services and control access to those services
D. To monitor requests to system files and ensure they are not modified
- C. This is a technology that wraps the different services available on a system. What this means is that if a remote user makes a request to access a service, this product will intercept this request and determine whether it is valid and legal before allowing the interaction to take place.
- How do network sniffers work?
A. They probe systems on a network segment.
B. They listen for ARP requests and ICMP packets.
C. They require an extra NIC to be installed and configured.
D. They put the NIC into promiscuous mode.
- D. A sniffer is a device or software component that puts the NIC in promiscuous mode, meaning the NIC will pick up all frames it “sees” instead of just the frames addressed to that individual computer. The sniffer then shows the output to the user. It can have capture and filtering capabilities.
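As a hedged illustration only (assuming the third-party Scapy library is installed and the script runs with the privileges needed to open the network interface), a minimal sniffer sketch might look like this:

    # Minimal packet-sniffer sketch using Scapy (third-party library).
    # Scapy places the NIC in promiscuous mode by default when sniffing,
    # so frames not addressed to this host are also captured.
    from scapy.all import sniff

    def show(packet):
        # Print a one-line summary of each captured frame.
        print(packet.summary())

    # Capture five frames from the default interface and summarize them.
    sniff(prn=show, count=5)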
- Which of the following is not an attack against operations?
A. Brute force
B. Denial-of-service
C. Buffer overflow
D. ICMP sting
- D. The first three choices are attacks that can directly affect security
operations. There is no such attack as an ICMP sting.
- Why should user IDs be included in data captured by auditing procedures?
A. They show what files were attacked.
B. They establish individual accountability.
C. They are needed to detect a denial-of-service attack.
D. They activate corrective measures.
- B. For auditing purposes, the procedure should capture the user ID, time of event, type of event, and the source workstation. Capturing the user ID allows the company to hold individuals accountable for their actions.
- Which of the following controls requires separate entities, operating together, to complete a task?
A. Least privilege
B. Data hiding
C. Dual control
D. Administrative
- C. Dual control requires two or more entities working together to complete a task. An example is key recovery. If a key must be recovered, and key recovery requires two or more people to authenticate to a system, the act of them coming together and carrying out these activities is known as dual control. This reduces the possibility of fraud.
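To make the idea concrete, here is a minimal, hypothetical Python sketch (the approver names and the recover_key routine are invented for illustration, not taken from any product) showing a key-recovery task that refuses to run unless two distinct approvers are present:

    # Dual-control sketch: a sensitive task requires two distinct approvers.
    def recover_key(approvers):
        # Require at least two different people to authorize the operation.
        if len(set(approvers)) < 2:
            raise PermissionError("Dual control: two distinct approvers required")
        return "key material released"

    # Succeeds: two different approvers acting together.
    print(recover_key({"alice", "bob"}))

    # Fails: a single person cannot complete the task alone.
    try:
        recover_key({"alice"})
    except PermissionError as err:
        print(err)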
- Which of the following would not be considered an operations media
control task?
A. Compressing and decompressing storage materials
B. Erasing data when its retention period is over
C. Storing backup information in a protected area
D. Controlling access to media and logging activities
- A. The last three tasks fall under the job functions of an individual or
department responsible for controlling access to media. Compressing and
decompressing data does not.
- How is the use of clipping levels a way to track violations?
A. They set a baseline for normal user errors, and any violations that exceed that threshold should be recorded and reviewed to understand why they are happening.
B. They enable the administrator to view all reduction levels that have been made to user codes and that have incurred violations.
C. They disallow the administrator to customize the audit trail to record only those violations deemed security related.
D. They enable the administrator to customize the audit trail to capture only access violations and denial-of-service attacks.
- A. Clipping levels are thresholds of acceptable user errors and suspicious activities. If the threshold is exceeded, it should be logged and the administrator should decide if malicious activities are taking place or if the user needs more training.
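A simple way to picture a clipping level is a counter with a threshold; the sketch below (the threshold value and event names are hypothetical) flags a user only once failed logins exceed the baseline:

    # Clipping-level sketch: only activity above a baseline threshold is flagged.
    from collections import Counter

    CLIPPING_LEVEL = 3  # acceptable number of failed logins before review

    failed_logins = Counter()

    def record_failed_login(user_id):
        failed_logins[user_id] += 1
        if failed_logins[user_id] > CLIPPING_LEVEL:
            # Exceeding the clipping level triggers logging and review.
            print(f"ALERT: {user_id} exceeded clipping level "
                  f"({failed_logins[user_id]} failures); review required")

    for attempt in ["carol"] * 5:
        record_failed_login(attempt)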
- Tape library management is an example of operations security through which of the following?
A. Archival retention
B. The review of clipping levels
C. Resource protection
D. Change management
- C. The reason to have tape library management is to have a centralized and standard way of protecting how media is stored, accessed, and destroyed.
- A device that generates coercive magnetic force for the purpose of reducing magnetic flux density to zero on media is called
A. Magnetic saturation
B. Magnetic field
C. Physical destruction
D. Degausser
- D. A degausser is a device that generates a magnetic field (coercive magnetic force) that changes the orientation of the bits held on the media (reducing magnetic flux density to zero).
- Which of the following controls might force a person in operations into collusion with personnel assigned organizationally within a different function for the sole purpose of gaining access to data he is not authorized to access?
A. Limiting the local access of operations personnel
B. Enforcing auditing
C. Enforcing job rotation
D. Limiting control of management personnel
- A. If operations personnel are limited in what they can access, they would need to collude with someone who actually has access to the resource. This question is not very clear, but it is very close to the way many CISSP exam questions are formatted.
- Christine is helping her organization implement a DevOps approach to deploying code. Which one of the following is not a component of the DevOps model?
A. Information security
B. Software development
C. Quality assurance
D. IT operations
- A. The three elements of the DevOps model are software development, quality assurance, and IT operations. Information security is only introduced in the DevSecOps model.
- Bob is developing a software application and has a field where users may enter a date. He wants to ensure that the values provided by the users are accurate dates to prevent security issues. What technique should Bob use?
A. Polyinstantiation
B. Input validation
C. Contamination
D. Screening
- B. Input validation ensures that the input provided by users matches the design parameters. Polyinstantiation includes additional records in a database for presentation to users with differing security levels as a defense against inference attacks. Contamination is the mixing
of data from a higher classification level and/or need-to-know requirement with data from a lower classification level and/or need-to-know requirement. Screening is a generic term and does not represent any specific security technique in this context.
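As a rough sketch of the technique for Bob's date field (assuming a simple ISO date format; not tied to any particular framework), input validation in Python could be as small as:

    # Input-validation sketch: accept only well-formed ISO dates (YYYY-MM-DD).
    from datetime import datetime

    def validate_date(value):
        try:
            # strptime raises ValueError for anything that is not a real date.
            return datetime.strptime(value, "%Y-%m-%d").date()
        except ValueError:
            raise ValueError(f"Invalid date input: {value!r}")

    print(validate_date("2024-02-29"))   # valid leap-day date

    try:
        validate_date("2024-02-30")      # rejected: not a real date
    except ValueError as err:
        print(err)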
- Frank is conducting a risk analysis of his software development environment and, as a mitigation measure, would like to introduce an approach to failure management that places the system in a high level of security in the event of a failure. What approach should he use?
A. Fail-open
B. Fail mitigation
C. Fail-secure
D. Fail clear
- C. In a fail-secure state, the system remains in a high level of security until an administrator intervenes. In a fail-open state, the system defaults to a low level of security, disabling controls until the failure is resolved. Failure mitigation seeks to reduce the impact of a failure.
Fail clear is not a valid approach.
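A minimal sketch of the fail-secure idea (the check_access function is hypothetical; the point is denying by default when the control itself fails):

    # Fail-secure sketch: if the access-control check itself fails,
    # default to denying access rather than allowing it (fail-open).
    def check_access(user, resource):
        # Placeholder for a real policy lookup; may raise on error.
        raise ConnectionError("policy server unreachable")

    def is_allowed(user, resource):
        try:
            return check_access(user, resource)
        except Exception:
            # Fail-secure: any failure results in denial until an
            # administrator intervenes.
            return False

    print(is_allowed("alice", "payroll"))  # False: system stays secure on failure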
- Vincent is a software developer who is working through a backlog of change tasks. He is not sure which tasks should have the highest priority. What portion of the change management process would help him to prioritize tasks?
A. Release control
B. Configuration control
C. Request control
D. Change audit
- C. Request control provides users with a framework to request changes and developers with the opportunity to prioritize those requests. Configuration control ensures that changes to software versions are made in accordance with the change and configuration management policies. Change auditing is used to ensure that the production environment is consistent with the change accounting records.
- What software development model uses a seven-stage approach with a feedback loop that allows progress one step backward?
A. Boyce-Codd
B. Iterative waterfall
C. Spiral
D. Agile
- B. The iterative waterfall model uses a seven-stage approach to software development and includes a feedback loop that allows development to return to the previous phase to correct
defects discovered during the subsequent phase.
- Jane is conducting a threat assessment using threat modeling techniques as she develops security requirements for a software package her team is developing. Which business function is she engaging in under the Software Assurance Maturity Model (SAMM)?
A. Governance
B. Design
C. Implementation
D. Verification
- B. The activities of threat assessment, threat modeling, and security
requirements are all part of the Design function under SAMM.
- Which one of the following key types is used to enforce referential integrity between database tables?
A. Candidate key
B. Primary key
C. Foreign key
D. Alternate key
- C. Foreign keys are used to enforce referential integrity constraints between tables that participate in a relationship. Candidate keys are sets of fields that may potentially serve as the primary key, the key used to uniquely identify database records. Alternate keys are candidate
keys that are not selected as the primary key.
- Richard believes that a database user is misusing his privileges to gain information about the company’s overall business trends by issuing queries that combine data from a large number of records. What process is the database user taking advantage of?
A. Inference
B. Contamination
C. Polyinstantiation
D. Aggregation
- D. In this case, the process the database user is taking advantage of is aggregation. Aggregation attacks involve the use of specialized database functions to combine information from a large number of database records to reveal information that may be more sensitive
than the information in individual records would reveal. Inference attacks use deductive reasoning to reach conclusions from existing data. Contamination is the mixing of data from a higher classification level and/or need-to-know requirement with data from a lower classification level and/or need-to-know requirement. Polyinstantiation is the creation of different database records for users of differing security levels.
- What database technique can be used to prevent unauthorized users from determining classified information by noticing the absence of information normally available to them?
A. Inference
B. Manipulation
C. Polyinstantiation
D. Aggregation
- C. Polyinstantiation allows the insertion of multiple records that appear to have the same primary key values into a database at different classification levels. Aggregation attacks involve the use of specialized database functions to combine information from a large number of database records to reveal information that may be more sensitive than the information in individual records would reveal. Inference attacks use deductive reasoning to reach conclusions from existing data. Manipulation is the authorized or unauthorized alteration of data in a database.
- Which one of the following is not a principle of Agile development?
A. Satisfy the customer through early and continuous delivery.
B. Businesspeople and developers work together.
C. Pay continuous attention to technical excellence.
D. Prioritize security over other requirements.
- D. In Agile, the highest priority is to satisfy the customer through early and continuous delivery of valuable software. It is not to prioritize security over other requirements. The Agile principles also include satisfying the customer through early and continuous delivery, businesspeople and
developers working together, and paying continuous attention to technical excellence.
- What type of information is used to form the basis of an expert system’s decision making process?
A. A series of weighted layered computations
B. Combined input from a number of human experts, weighted according to past performance
C. A series of “if/then” rules codified in a knowledge base
D. A biological decision-making process that simulates the reasoning process used by the human mind
- C. Expert systems use a knowledge base consisting of a series of “if/then” statements to form decisions based on the previous experience of human experts.
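To illustrate, a toy knowledge base of “if/then” rules might be evaluated like this (the rules and facts are invented purely for the example):

    # Expert-system sketch: a knowledge base of if/then rules applied to facts.
    rules = [
        # (condition over known facts, conclusion to add)
        (lambda f: f.get("fever") and f.get("cough"), "possible_flu"),
        (lambda f: f.get("possible_flu"), "recommend_rest"),
    ]

    facts = {"fever": True, "cough": True}

    # Forward-chaining: keep firing rules until no new conclusions appear.
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and not facts.get(conclusion):
                facts[conclusion] = True
                changed = True

    print(facts)  # includes 'possible_flu' and 'recommend_rest'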
- In which phase of the SW-CMM does an organization use quantitative measures to gain a detailed understanding of the development process?
A. Initial
B. Repeatable
C. Defined
D. Managed
- D. In the Managed phase, level 4 of the SW-CMM, the organization uses quantitative measures to gain a detailed understanding of the development process.
- Which of the following acts as a proxy between an application and a database to support interaction and simplify the work of programmers?
A. SDLC
B. ODBC
C. PCI DSS
D. Abstraction
- B. Open Database Connectivity (ODBC) acts as a proxy between applications and the back-end DBMS. The software development lifecycle (SDLC) is a model for the software development process that incorporates all necessary activities. The Payment Card Industry Data Security Standard (PCI DSS) is a regulatory framework for credit card processing.
Abstraction is a software development concept that generalizes common behaviors of software objects into more abstract classes.
- In what type of software testing does the tester have access to the underlying source code?
A. Static testing
B. Dynamic testing
C. Cross-site scripting testing
D. Black-box testing
- A. In order to conduct a static test, the tester must have access to the underlying source code. Black-box testing does not require access to source code. Dynamic testing is an example of black-box testing. Cross-site scripting is a specific type of vulnerability, and it may be discovered using both static and dynamic techniques, with or without access to the source code.
- What type of chart provides a graphical illustration of a schedule that helps to plan, coordinate, and track project tasks?
A. Gantt
B. Venn
C. Bar
D. PERT
- A. A Gantt chart is a type of bar chart that shows the interrelationships over time between projects and schedules. It provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific project tasks.
- Which database security risk occurs when data from a higher classification level is mixed with data from a lower classification level?
A. Aggregation
B. Inference
C. Contamination
D. Polyinstantiation
- C. Contamination is the mixing of data from a higher classification level and/or need-to know requirement with data from a lower classification level and/or need-to-know requirement. Aggregation attacks involve the use of specialized database functions to combine information from a large number of database records to reveal information that may be
more sensitive than the information in individual records would reveal. Inference attacks use deductive reasoning to reach conclusions from existing data. Polyinstantiation includes additional records in a database for presentation to users with differing security levels as a defense against inference attacks.
- Tonya is performing a risk assessment of a third-party software package for use within her organization. She plans to purchase a product from a vendor that is very popular in her industry. What term best describes this software?
A. Open source
B. Custom-developed
C. ERP
D. COTS
- D. Tonya is purchasing the software, so it is not open source. It is used widely in her industry, so it is not custom developed for her organization. There is no indication in the question that the software is an enterprise resource planning (ERP) system. The best answer here is
commercial-off-the-shelf software (COTS).
- Which one of the following is not part of the change management process?
A. Request control
B. Release control
C. Configuration audit
D. Change control
- C. Configuration audit is part of the configuration management process rather than the change management process. Request control, release control, and change control are all components of the change management process.
- What transaction management principle ensures that two transactions do not interfere with each other as they operate on the same data?
A. Atomicity
B. Consistency
C. Isolation
D. Durability
- C. The isolation principle states that two transactions operating on the same data must be temporarily separated from each other so that one does not interfere with the other. The atomicity principle says that if any part of the transaction fails, the entire transaction must be rolled back. The consistency principle says that the database must always be in a state that complies with the database model’s rules. The durability principle says that transactions committed to the database must be preserved.
- Tom built a database table consisting of the names, telephone numbers, and customer IDs for his business. The table contains information on 30 customers. What is the degree of this table?
A. Two
B. Three
C. Thirty
D. Undefined
- B. The cardinality of a table refers to the number of rows in the table, whereas the degree of a table is the number of columns. In this case, the table has three columns (name, telephone number, and customer ID), so it has a degree of three.
- What describes a more agile development and support model, where developers directly support operations?
A. DevOps
B. Sashimi
C. Spiral
D. Waterfall
- Correct answer and explanation: A. DevOps is a more agile development and support model, where developers directly support operations.
Incorrect answers and explanations: Answers B, C, and D are incorrect.
Sashimi, spiral, and waterfall are software development methodologies that do not describe a model for developers directly supporting operations.
- Two objects with the same name have different data. What OOP concept does this illustrate?
A. Delegation
B. Inheritance
C. Polyinstantiation
D. Polymorphism
- Correct answer and explanation: C. Polyinstantiation means “many instances,” such as two objects with the same names that have different data. Incorrect answers and explanations: Answers A, B, and D are incorrect. Delegation allows objects to delegate messages to other objects. Inheritance means an object inherits capabilities from its parent class. Polymorphism allows the ability to overload operators, performing different methods depending on the context of the input message.
- What type of testing determines whether software meets various end-state requirements from a user or customer, contract, or compliance perspective?
A. Acceptance testing
B. Integration testing
C. Regression testing
D. Unit testing
- Correct answer and explanation: Answer A is correct; acceptance testing
determines whether software meets various end-state requirements from a user or customer, contract, or compliance perspective.
Incorrect answers and explanations: Answers B, C, and D are incorrect.
Integration testing tests multiple software components as they are combined into a working system. Regression testing tests software after updates, modifications, or patches. Unit testing consists of low-level tests of software components, such as functions, procedures, or objects.
- A database contains an entry with an empty primary key. What database concept has been violated?
A. Entity integrity
B. Normalization
C. Referential integrity
D. Semantic integrity
- Correct answer and explanation: A. Entity integrity means each tuple has a unique primary key that is not null.
Incorrect answers and explanations: Answers B, C, and D are incorrect.
Normalization seeks to make the data in a database table logically concise,
organized, and consistent. Referential integrity means that every foreign key in a secondary table matches a primary key in the parent table; if this is not true, referential integrity has been broken. Semantic integrity means each attribute (column) value is consistent with the attribute data type.
- Which vulnerability allows a third party to redirect static content within the security context of a trusted site?
A. Cross-site request forgery (CSRF)
B. Cross-site scripting (XSS)
C. PHP remote file inclusion (RFI)
D. SQL injection
- Correct answer and explanation: A. Cross-site request forgery (CSRF) allows a third party to redirect static content within the security context of a trusted site. Incorrect answers and explanations: Answers B, C, and D are incorrect. XSS is a third-party execution of web scripting languages, such as Javascript, within the security context of a trusted site. XSS is similar to CSRF; the difference is XSS uses active code. PHP RFI alters normal PHP variables to reference remote content, which can lead to execution of malicious PHP code. SQL injection manipulates a back-end SQL server via a front-end web server.
- Which phase of the Software Development Life Cycle (SDLC) emphasizes the importance of risk analysis and threat modeling?
A. Deployment
B. Maintenance
C. Early phases
D. Decommissioning
- Answer: C. Early phases
Explanation: Risk analysis and threat modeling
are critical components of the early phases of the
SDLC. They continue through to the architecture
and design phase.
- Which development methodology does not allow revisiting a previous phase?
A. Agile
B. Spiral Method
C. Waterfall
D. Cleanroom
- Answer: C. Waterfall
Explanation: The Waterfall model requires the
completion of each development phase before
moving to the next. It does not allow revisiting a
previous phase.
- What does DevOps ideally incorporate to make security an integral part of the development process?
A. DevSecOps
B. DevTestOps
C. DevNetOps
D. DevSysOps
- Answer: A. DevSecOps
Explanation: DevOps should ideally be referred to
as DevSecOps, where security is an integral part of
the development process.
- Which maturity model is described as “the prime maturity model for software assurance” by OWASP?
A. Capability Maturity Model (CMM)
B. Software Assurance Maturity Model (SAMM)
C. Development Maturity Model (DMM)
D. Application Maturity Model (AMM)
- Answer: B. Software Assurance Maturity Model
(SAMM)
Explanation: OWASP’s Software Assurance
Maturity Model (SAMM) is described as the prime
maturity model for software assurance.
- Which type of testing focuses on quick preliminary testing after a change to identify any simple failures of the most important existing functionality?
A. Regression testing
B. Canary testing
C. Smoke testing
D. Black box testing
- Answer: C. Smoke testing
Explanation: Smoke testing focuses on quick
preliminary testing after a change to identify any
simple failures of the most important existing
functionality that worked before the change was
made.
- Which of the following refers to a storage location for software and application source code?
A. Integrated Development Environment (IDE)
B. Code repository
C. Software Development Kit (SDK)
D. Application Programming Interface (API)
- Answer: B. Code repository
Explanation: A code repository is a storage
location for software and application source code.
- What does the term polyinstantiation refer to in the context of software development?
A. Code that can vary based on requirements
B. Instantiating into multiple separate or independent instances
C. Code that can be placed inside another
D. Code that can inherit characteristics of previously created objects
- Answer: B. Instantiating into multiple separate or
independent instances
Explanation: Polyinstantiation refers to something
being instantiated into multiple separate or
independent instances.
- Which of the following is a common software vulnerability arising from the use of insecure coding practices?
A. Buffer overflow
B. Code encapsulation
C. Code inheritance
D. Code polymorphism
- Answer: A. Buffer overflow
Explanation: Buffer overflow is a common
problem with applications and occurs when
information sent to a storage buffer exceeds the
buffer’s capacity.
- Which of the following APIs is XML based?
A. Representational State Transfer (REST)
B. Simple Object Access Protocol (SOAP)
C. Code Repository API
D. Integrated Development Environment (IDE) API
- Answer: B. Simple Object Access Protocol (SOAP)
Explanation: Simple Object Access Protocol
(SOAP) is an XML-based API.
- In the context of software development, what does the term “encapsulation” refer to?
A. The ability of an object to inherit characteristics of other objects
B. Code that can vary based on requirements
C. The idea that an object can be placed inside another, protecting it by wrapping it in other objects
D. Hiding or obscuring code to protect it from unauthorized viewing
- Answer: C. The idea that an object can be placed
inside another, protecting it by wrapping it in other
objects
Explanation: Encapsulation refers to the idea that
an object – a piece of code – can be placed inside
another. Other objects can be called by doing this,
and objects can be protected by encapsulating or
wrapping them in other objects.
- Which of the following best describes “code obfuscation”?
A. The process of making code more efficient
B. The practice of writing code in multiple programming languages
C. Intentionally creating source code that is difficult for humans to understand
D. The process of documenting code for better readability
- Answer: C. Intentionally creating source code that is
difficult for humans to understand
Explanation: Code obfuscation refers to hiding or
obscuring code to protect it from unauthorized
viewing. It intentionally makes source code difficult
for humans to understand.
- Which software development approach is risk-driven and follows an iterative model while also including waterfall elements?
A. Agile
B. Spiral Method
C. Waterfall
D. Cleanroom
- Answer: B. Spiral Method
Explanation: The Spiral Method is a risk-driven
development process that follows an iterative model
while also including waterfall elements.
- What is the primary purpose of “software configuration management (SCM)” in the software development process?
A. To accelerate the development process
B. To manage changes in software
C. To integrate security into the development process
D. To facilitate communication between development teams
- Answer: B. To manage changes in software
Explanation: Software configuration management
focuses explicitly on managing changes in software
and is part of the overall configuration/change
management.
- Which of the following is NOT a characteristic of a Relational Database Management System (RDBMS)?
A. Allows objects and data to be stored and linked together.
B. Data is stored in two-dimensional tables composed of rows and columns.
C. Data is stored hierarchically with parent-child relationships.
D. Information can be related to other information, driving inference and deeper understanding.
- Answer: C. Data is stored hierarchically with parent-child relationships.
Explanation: RDBMS systems store data in tables,
not in hierarchical structures.
- What does the term “ACID” stand for in the context of an RDBMS environment?
A. Atomicity, Clarity, Isolation, Durability
B. Accuracy, Consistency, Integrity, Durability
C. Atomicity, Consistency, Isolation, Durability
D. Accuracy, Clarity, Integrity, Durability
- Answer: C. Atomicity, Consistency, Isolation, Durability
Explanation: ACID stands for atomicity,
consistency, isolation, and durability and relates to
how information and transactions in an RDBMS
environment should be treated.
- Which of the following is a primary concern when citizen developers write code?
A. They often produce highly optimized code.
B. They typically follow best practices for secure coding.
C. They often have access to powerful programming tools but may lack secure coding practices.
D. They always rely on open source software.
- Answer: C. They often have access to powerful
programming tools but may lack secure coding
practices.
Explanation: Citizen developers often have access
to powerful programming tools. Still, they’re
typically self-taught and unskilled regarding secure
coding practices, leading to insecure and unreliable
application development.
- Which of the following APIs provides a way for applications to communicate using HTTP?
A. Representational State Transfer (REST)
B. Simple Object Access Protocol (SOAP)
C. Code Repository API
D. Integrated Development Environment (IDE) API
- Answer: A. Representational State Transfer (REST)
Explanation: Representational State Transfer
(REST) is an HTTP-based API.
- In software development, what does “coupling” refer
to?
A. The level of relatedness between units of a codebase
B. The process of making code more efficient
C. The practice of writing code in multiple programming languages
D. The process of documenting code for better readability
- Answer: A. The level of relatedness between units of
a codebase
Explanation: Coupling and cohesion are relational
terms that indicate the level of relatedness between
units of a codebase (coupling) and the level of
relatedness between the code that makes up a unit
of code (cohesion).
- In the context of software development, what does
“cohesion” refer to?
A. The level of relatedness between different units of a codebase
B. The level of relatedness between the code that makes up a unit of code
C. The process of making code more efficient
D. The practice of writing code in multiple programming languages
- Answer: B. The level of relatedness between the
code that makes up a unit of code
Explanation: Cohesion refers to the level of
relatedness between the code that makes up a unit
of code. High cohesion means that the code within a
module or class is closely related.
- Which of the following best describes “sandboxing” in software development?
A. A method to test new code in isolation
B. The process of documenting code for better readability
C. A technique to optimize code performance
D. The practice of writing code in a collaborative environment
- Answer: A. A method to test new code in isolation
Explanation: Sandboxing refers to a method
where new or untested code is run in a separate
environment (a “sandbox”) to ensure it doesn’t affect
the functioning of existing systems.
- Which of the following is NOT a characteristic of “object-oriented programming (OOP)”?
A. Polymorphism
B. Encapsulation
C. Cohesion
D. Inheritance
- Answer: C. Cohesion
Explanation: While cohesion is an important
concept in software design, it is not a specific
characteristic of object-oriented programming. OOP
is characterized by concepts like polymorphism,
encapsulation, and inheritance.
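A compact sketch of those three OOP traits in Python (the class names are invented for the example):

    # OOP sketch: encapsulation, inheritance, and polymorphism.
    class Account:
        def __init__(self, balance):
            self._balance = balance          # encapsulated state (by convention)

        def describe(self):
            return f"Account with balance {self._balance}"

    class SavingsAccount(Account):           # inheritance from Account
        def describe(self):                  # polymorphism: overridden behavior
            return f"Savings account with balance {self._balance}"

    for acct in (Account(100), SavingsAccount(200)):
        # The same call behaves differently depending on the object's class.
        print(acct.describe())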
- What is the primary purpose of “code signing” in the software development process?
A. To optimize the performance of the code
B. To verify the authenticity and integrity of the code
C. To document the changes made in the code
D. To make the code more readable
- Answer: B. To verify the authenticity and integrity of
the code
Explanation: Code signing is a technique used to
verify the authenticity and integrity of code. It
ensures that the code has not been altered since it
was signed.
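Under the hood this is a digital signature over the code bytes; a rough sketch using the third-party cryptography package's Ed25519 keys (assuming that package is installed) might look like:

    # Code-signing sketch: sign code bytes, then verify authenticity/integrity.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    code = b"print('hello from a signed release')"

    private_key = Ed25519PrivateKey.generate()   # held by the publisher
    public_key = private_key.public_key()        # distributed to users

    signature = private_key.sign(code)

    try:
        public_key.verify(signature, code)                  # unmodified: passes
        public_key.verify(signature, code + b" tampered")   # altered: fails
    except InvalidSignature:
        print("code has been altered since it was signed")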
- What is the primary concern of “secure coding practices”?
A. To accelerate the development process
B. To ensure the code is optimized for performance
C. To ensure the software is free from vulnerabilities
D. To make the code more readable and maintainable
- Answer: C. To ensure the software is free from
vulnerabilities
Explanation: Secure coding practices aim to
ensure that software is developed in a way that it is
free from vulnerabilities that could be exploited by
malicious actors.
- Which of the following best describes “race conditions” in software development?
A. Conditions where two or more threads access shared data simultaneously
B. Conditions where the software runs faster than expected
C. Conditions where the software is tested for speed and performance
D. Conditions where the software is developed in a competitive environment
- Answer: A. Conditions where two or more threads
access shared data simultaneously
Explanation: Race conditions occur when two or
more threads access shared data at the same time
and at least one of them modifies the data, leading to
unpredictable outcomes.
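The sketch below (the counter and thread counts are arbitrary) shows the classic shared-counter race and the lock that removes it:

    # Race-condition sketch: unsynchronized threads updating shared data.
    import threading

    counter = 0
    lock = threading.Lock()

    def increment(times, use_lock):
        global counter
        for _ in range(times):
            if use_lock:
                with lock:               # serialize access: no race
                    counter += 1
            else:
                counter += 1             # unsynchronized read-modify-write race

    threads = [threading.Thread(target=increment, args=(100_000, True))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # with the lock, reliably 400000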
- In the context of databases, what does “normalization” refer to?
A. The process of optimizing database performance
B. The process of ensuring data integrity and reducing data redundancy
C. The process of backing up the database regularly
D. The process of encrypting the database for security purposes
- Answer: B. The process of ensuring data integrity
and reducing data redundancy
Explanation: Normalization is a process in
database design to ensure data integrity and reduce
data redundancy by organizing data in tables and
establishing relationships between them.
- Which of the following is a common method to prevent SQL injection attacks?
A. Using regular expressions to validate input
B. Encrypting the database
C. Using parameterized queries
D. Increasing the database’s storage capacity
- Answer: C. Using parameterized queries
Explanation: Parameterized queries ensure that
input is always treated as data and not executable
code, thus preventing SQL injection attacks.
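A minimal sketch with Python's built-in sqlite3 module (the table and values are illustrative only):

    # Parameterized-query sketch: user input is bound as data, never as SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # classic injection attempt

    # The ? placeholder ensures the input is treated strictly as a value.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print(rows)  # [] -- the injection string matches no real user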
- What is the primary purpose of “version control” in the software development process?
A. To optimize the performance of the software
B. To ensure the software is free from vulnerabilities
C. To track and manage changes to the codebase
D. To make the code more readable
- Answer: C. To track and manage changes to the
codebase
Explanation: Version control systems track and
manage changes to the codebase, allowing
developers to revert to previous versions,
collaborate, and understand the history of changes.
- Which of the following best describes “fuzz testing” in software development?
A. Testing the software’s user interface for usability
B. Testing the software by providing random and unexpected inputs
C. Testing the software for speed and performance
D. Testing the software in a real-world environment
- Answer: B. Testing the software by providing
random and unexpected inputs
Explanation: Fuzz testing, or fuzzing, involves
testing software by providing random and
unexpected inputs to identify potential
vulnerabilities and crashes.
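A toy fuzzing loop (the target function and input sizes are invented for the example) that throws random bytes at a parser and records crashes:

    # Fuzz-testing sketch: feed random, unexpected inputs and watch for crashes.
    import random

    def fragile_parser(data: bytes):
        # Toy target: chokes on inputs that start with a null byte.
        if data[:1] == b"\x00":
            raise ValueError("parser crashed on malformed input")
        return len(data)

    random.seed(0)
    crashes = []
    for _ in range(1000):
        blob = bytes(random.getrandbits(8) for _ in range(random.randint(0, 16)))
        try:
            fragile_parser(blob)
        except Exception as err:
            crashes.append((blob, err))

    print(f"{len(crashes)} crashing inputs found")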
- Which of the following best describes the “principle of least privilege” in software development?
A. Granting users only the permissions they need to perform their tasks
B. Encrypting sensitive data to prevent unauthorized access
C. Ensuring that software is updated regularly
D. Making the codebase open source for transparency
- Answer: A. Granting users only the permissions they
need to perform their tasks
Explanation: The principle of least privilege
emphasizes that users should be granted only the
permissions they absolutely need, reducing the risk
of unauthorized access or actions.
- What is the primary goal of “threat modeling” in the software development process?
A. To identify potential threats and vulnerabilities in the software
B. To optimize the performance of the software
C. To document the software development process
D. To ensure code readability and maintainability
- Answer: A. To identify potential threats and
vulnerabilities in the software
Explanation: Threat modeling is a structured
approach used to identify and evaluate potential
threats and vulnerabilities in a software system,
helping developers address them proactively.
- Which of the following is NOT a type of software testing?
A. Canary testing
B. Waterfall testing
C. Regression testing
D. Penetration testing
- Answer: B. Waterfall testing
Explanation: While “Waterfall” is a software
development methodology, there isn’t a specific type
of testing called “Waterfall testing.”
- In the context of software development, what does “refactoring” refer to?
A. Adding new features to the software
B. Testing the software for vulnerabilities
C. Rewriting certain parts of the code to improve its structure without changing its functionality
D. Changing the user interface of the software
- Answer: C. Rewriting certain parts of the code to
improve its structure without changing its
functionality
Explanation: Refactoring involves restructuring
existing code without changing its external behavior,
aiming to improve the nonfunctional attributes of the
software.
- Which of the following best describes “static code analysis”?
A. Analyzing the software’s performance during runtime
B. Reviewing the codebase without executing the program
C. Testing the software in a production environment
D. Analyzing user feedback about the software
- Answer: B. Reviewing the codebase without
executing the program
Explanation: Static code analysis involves
examining the code without executing the program,
aiming to find vulnerabilities, errors, or areas of
improvement.
- What is the primary purpose of “code reviews” in the software development process?
A. To optimize the software’s performance
B. To ensure the software is free from vulnerabilities
C. To ensure the quality and correctness of the code
D. To make the codebase open source
- Answer: C. To ensure the quality and correctness of
the code
Explanation: Code reviews involve systematically
examining the source code of a program with the
primary goal of finding and fixing mistakes
overlooked during the initial development phase,
ensuring the code’s quality and correctness.
- Which of the following is a common method to ensure data confidentiality in software applications?
A. Data normalization
B. Data encryption
C. Data refactoring
D. Data versioning
- Answer: B. Data encryption
Explanation: Data encryption is a method used to
protect data by converting it into a code to prevent
unauthorized access, ensuring data confidentiality.
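As one concrete (hedged) illustration, symmetric encryption with the third-party cryptography package's Fernet recipe, assuming that package is available:

    # Encryption sketch: protect the confidentiality of data at rest or in transit.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the key itself must be stored and managed securely
    cipher = Fernet(key)

    token = cipher.encrypt(b"customer card number: 4111-xxxx")
    print(token)                     # unreadable ciphertext

    print(cipher.decrypt(token))     # original plaintext, recoverable only with the key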
- In the context of software development, what does “integrity” refer to?
A. Ensuring the software is free from vulnerabilities
B. Ensuring the data is accurate and has not been tampered with
C. Ensuring the software performs optimally
D. Ensuring the software is user-friendly
- Answer: B. Ensuring the data is accurate and has not
been tampered with
Explanation: In software development, integrity
refers to the assurance that data is accurate and
reliable and has not been tampered with or altered
without authorization.
- Which of the following best describes “runtime application self-protection (RASP)”?
A. A method to optimize software performance during runtime
B. A tool that detects and prevents real-time application attacks
C. A technique to refactor code during runtime
D. A tool for static code analysis
- Answer: B. A tool that detects and prevents real-time
application attacks
Explanation: Runtime application self-protection
(RASP) is a security technology that uses runtime
instrumentation to detect and block attacks by
taking advantage of information from inside the
running software.
- Which of the following is a primary concern when using third-party libraries or components in software development?
A. The size of the library or component
B. The popularity of the library or component
C. Potential vulnerabilities or security risks associated with the library or component
D. The cost of the library or component
- Answer: C. Potential vulnerabilities or security risks
associated with the library or component
Explanation: When using third-party libraries or
components, a primary concern is potential
vulnerabilities or security risks that they might
introduce into the software.
- Which of the following best describes the “security by design” principle in software development?
A. Implementing security measures after the software is developed
B. Designing the software with security considerations from the outset
C. Relying solely on third-party security tools
D. Focusing only on the user interface security
- Answer: B. Designing the software with security
considerations from the outset
Explanation: “Security by design” means that the
software has been designed from the ground up to
be secure, ensuring that security is integrated into
every part of the software development process.
- In the context of software development, what is the primary goal of “input validation”?
A. To optimize the software’s performance
B. To ensure the software’s user interface is intuitive
C. To verify that the input meets the specified criteria before it’s processed
D. To ensure the software is compatible with various devices
- Answer: C. To verify that the input meets the
specified criteria before it’s processed
Explanation: Input validation is the process of
checking that incoming data conforms to the expected
type, format, and range before it is processed, which
prevents malformed or malicious data from harming the
system.
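A minimal Python sketch of the idea (the field names and rules are hypothetical, chosen only for illustration):

# Reject input that does not meet the specified criteria before it is processed.
import re

def validate_registration(username: str, age: str) -> list[str]:
    errors = []
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        errors.append("username must be 3-20 letters, digits, or underscores")
    if not age.isdigit() or not (13 <= int(age) <= 120):
        errors.append("age must be an integer between 13 and 120")
    return errors

print(validate_registration("alice_01", "29"))   # [] -> safe to process
print(validate_registration("<script>", "-5"))   # two validation errors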
- Which of the following is NOT a type of “authentication” method in software development?
A. Something you know
B. Something you have
C. Something you are
D. Something you dislike
- Answer: D. Something you dislike
Explanation: Authentication methods typically
revolve around something you know, something you
have, or something you are. “Something you dislike”
is not a recognized authentication factor.
- What is the primary purpose of “penetration testing” in the software development process?
A. To document the software development process
B. To ensure the software’s user interface is user-friendly
C. To identify vulnerabilities by simulating cyberattacks on the software
D. To verify the software’s compatibility with various operating systems
- Answer: C. To identify vulnerabilities by simulating
cyberattacks on the software
Explanation: Penetration testing involves
simulating cyberattacks on software to identify
vulnerabilities that could be exploited in real-world
attacks.
- Which of the following best describes “two-factor authentication (2FA)” in software development?
A. Using two different passwords for authentication
B. Verifying the user’s identity using two different methods or factors
C. Using biometric authentication twice for added security
D. Asking the user to input their password at two different stages of login
- Answer: B. Verifying the user’s identity using two
different methods or factors
Explanation: Two-factor authentication (2FA)
requires users to verify their identity using two
different methods or factors, enhancing security.
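One common second factor is a time-based one-time password (TOTP) generated by an authenticator app. The sketch below implements the standard HOTP/TOTP construction (RFC 4226/6238) with the Python standard library; treating TOTP as the second factor is an illustrative assumption, since the card names no specific method.

import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the counter, then dynamic truncation (RFC 4226).
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # The moving factor is the current 30-second time window (RFC 6238).
    return hotp(secret, int(time.time()) // step, digits)

shared_secret = b"secret-provisioned-to-the-authenticator-app"
print(totp(shared_secret))   # the code the user enters in addition to a password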
- In software development, what does “availability” in the context of the CIA triad refer to?
A. Ensuring that software is free from vulnerabilities
B. Ensuring that software is accessible and usable when needed
C. Ensuring that software data remains confidential
D. Ensuring that software data is accurate and trustworthy
- Answer: B. Ensuring that software is accessible and
usable when needed
Explanation: In the CIA (confidentiality, integrity,
availability) triad, “availability” refers to ensuring
that resources are accessible and usable when
needed.
- Which of the following is a common method to ensure “data integrity” in software applications?
A. Data compression
B. Data encryption
C. Data hashing
D. Data visualization
- Answer: C. Data hashing
Explanation: Data hashing involves creating a
fixed-size string of bytes from input data of any size,
ensuring data integrity by verifying that data has not
been altered.
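A minimal Python sketch of an integrity check: the stored digest no longer matches if even one byte of the data is altered.

import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"amount=100.00;payee=ACME"
stored_digest = digest(original)            # recorded when the data was written

received = b"amount=900.00;payee=ACME"      # altered in transit or at rest
print(digest(received) == stored_digest)    # False -> integrity check fails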
- What is the primary concern of “defense in depth” in software security?
A. Relying on a single layer of security
B. Implementing multiple layers of security measures
C. Focusing solely on external threats
D. Prioritizing speed over security
- Answer: B. Implementing multiple layers of security
measures
Explanation: “Defense in depth” is a strategy that
employs a series of mechanisms to slow the advance
of an attack aimed at acquiring unauthorized access
to information.
- In the context of software development, what does “confidentiality” in the CIA triad refer to?
A. Ensuring that software is free from vulnerabilities
B. Ensuring that software data remains private and restricted to authorized individuals
C. Ensuring that software is accessible and usable when needed
D. Ensuring that software data is accurate and trustworthy
- Answer: B. Ensuring that software data remains
private and restricted to authorized individuals
Explanation: In the CIA (confidentiality, integrity,
availability) triad, “confidentiality” refers to
ensuring that data remains private and is only
accessible to those with the proper authorization.
- Which of the following best describes the “principle of non-repudiation” in software security?
A. Ensuring that users cannot deny their actions
B. Ensuring that software is free from vulnerabilities
C. Verifying the user’s identity using multiple authentication methods
D. Ensuring that data remains confidential
- Answer: A. Ensuring that users cannot deny their
actions
Explanation: Non-repudiation ensures that a user
cannot deny having performed a particular action,
providing proof of origin or delivery.
- In the context of software security, which of the following best describes “data at rest”?
A. Data that is being transmitted over a network
B. Data that is stored and not actively being used or processed
C. Data that is currently being processed by an application
D. Data that is temporarily stored in memory
- Answer: B. Data that is stored and not actively being
used or processed
Explanation: “Data at rest” refers to data that is
stored in persistent storage (like hard drives) and is
not actively being used, processed, or transmitted.
- Which of the following is a primary concern when considering “data in transit” in software security?
A. Ensuring data storage optimization
B. Ensuring data remains confidential while being transmitted
C. Ensuring data is regularly backed up
D. Ensuring data is indexed for faster retrieval
- Answer: B. Ensuring data remains confidential while
being transmitted
Explanation: “Data in transit” refers to data that
is being transferred over a network. The primary
concern is to ensure its confidentiality and integrity
during transmission.
- What is the main goal of “security patches” in the software development process?
A. To add new features to the software
B. To improve the software’s user interface
C. To fix known security vulnerabilities in the software
D. To optimize the software’s performance
- Answer: C. To fix known security vulnerabilities in
the software
Explanation: Security patches are updates
released by software developers to address known
security vulnerabilities in the software.
- Which of the following best describes “zero-day vulnerabilities” in software security?
A. Vulnerabilities that are discovered and patched within a day
B. Vulnerabilities that have no impact on the software’s functionality
C. Vulnerabilities that are unknown to the software developer and have no available patches
D. Vulnerabilities that are discovered during the software’s first day of release
- Answer: C. Vulnerabilities that are unknown to the
software developer and have no available patches
Explanation: Zero-day vulnerabilities refer to
software vulnerabilities that are unknown to the
vendor. This security risk is called a “zero-day”
because the developer has had zero days to fix it.
- In the context of software security, what is the primary purpose of “intrusion detection systems (IDS)”?
A. To detect and prevent unauthorized access to the software
B. To back up the software’s data
C. To optimize the software’s performance
D. To manage user permissions and roles
- Answer: A. To detect and prevent unauthorized
access to the software
Explanation: Intrusion detection systems (IDS)
monitor network traffic or system activities for
malicious activities or policy violations and produce
reports to a management station.
- Which of the following is NOT a type of “malware”?
A. Ransomware
B. Adware
C. Debugger
D. Trojan
- Answer: C. Debugger
Explanation: While ransomware, adware, and
trojans are types of malicious software, a debugger
is a tool used by developers to test and debug their
code.
- What is the primary goal of “allow listing” in software security?
A. To list all known vulnerabilities in the software
B. To specify which users have administrative privileges
C. To define a list of approved software or processes that are allowed to run
D. To list all outdated components of the software
- Answer: C. To define a list of approved software or
processes that are allowed to run
Explanation: Allow listing is a security approach
where a list of approved software applications or
processes is created, and only those on the list are
allowed to run.
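A minimal Python sketch of one way to enforce an allow list, keyed on executable hashes; the APPROVED_SHA256 set and the file path are hypothetical.

import hashlib
from pathlib import Path

# Hypothetical, locally maintained set of known-good executable digests.
APPROVED_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(executable: Path) -> bool:
    return hashlib.sha256(executable.read_bytes()).hexdigest() in APPROVED_SHA256

# Usage: check before launching, e.g.
#   if is_allowed(Path("/opt/tools/backup.sh")): subprocess.run([...])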
- Which of the following best describes “phishing” in the context of software security threats?
A. An attack where the attacker floods the network with excessive requests
B. An attack where the attacker tricks users into revealing sensitive information
C. An attack where the attacker exploits a zero-day vulnerability
D. An attack where the attacker uses brute force to crack passwords
- Answer: B. An attack where the attacker tricks users
into revealing sensitive information
Explanation: Phishing is a type of social
engineering attack where the attacker tricks users
into revealing sensitive information, often by
masquerading as a trustworthy entity.
- In software security, what is the primary purpose of “firewalls”?
A. To detect software bugs and errors
B. To manage user permissions and roles
C. To monitor and control incoming and outgoing network traffic
D. To back up the software’s data
- Answer: C. To monitor and control incoming and
outgoing network traffic
Explanation: Firewalls are network security
devices that monitor and filter incoming and
outgoing network traffic based on an organization’s
previously established security policies.
- Which of the following is a common method to ensure “data redundancy” in software applications?
A. Data encryption
B. Data compression
C. Data replication
D. Data hashing
- Answer: C. Data replication
Explanation: Data replication involves creating
copies of data so that this duplicate data can be used
to restore the original data in case of data loss.
- In the context of software security, which of the following best describes “heuristic analysis”?
A. A method of detecting malware based on known signatures
B. A method of analyzing software performance metrics
C. A method of detecting potential threats based on behavioral patterns
D. A method of encrypting data for secure transmission
- Answer: C. A method of detecting potential threats
based on behavioral patterns
Explanation: Heuristic analysis involves
identifying malicious activities or threats based on
behavioral patterns rather than relying on specific
signatures.
- Which of the following is a primary concern when considering “data disposal” in software security?
A. Ensuring data is transmitted securely
B. Ensuring data is stored in an optimized format
C. Ensuring data is permanently deleted and cannot be recovered
D. Ensuring data is regularly backed up
- Answer: C. Ensuring data is permanently deleted
and cannot be recovered
Explanation: Proper data disposal ensures that
data is not only deleted but also cannot be
recovered, preventing unauthorized access or data
breaches.
- What is the main goal of “security awareness training” in the context of software security?
A. To teach developers how to write code
B. To inform users about the latest software features
C. To educate employees about security threats and best practices
D. To introduce new security tools and technologies
- Answer: C. To educate employees about security
threats and best practices
Explanation: Security awareness training aims to
educate employees about various security threats
and the best practices to prevent potential breaches.
- Which of the following best describes “brute-force attacks” in software security?
A. Exploiting software vulnerabilities using advanced tools
B. Attempting to guess passwords or encryption keys through trial and error
C. Sending large volumes of data to crash a system
D. Tricking users into revealing their credentials
- Answer: B. Attempting to guess passwords or encryption keys through trial and error
Explanation: A brute-force attack involves trying
multiple combinations to guess a password or
encryption key, relying on trial and error.
- In the context of software security, what does
“hardening” refer to?
A. Making the software’s user interface more intuitive
B. Strengthening the software against potential attacks or vulnerabilities
C. Compressing the software’s data for optimized storage
D. Upgrading the software to the latest version
- Answer: B. Strengthening the software against potential attacks or vulnerabilities
Explanation: Hardening involves configuring a
system to reduce its surface of vulnerability, making
it more secure against potential threats.
- Which of the following is NOT a type of “intrusion detection system (IDS)”?
A. Network-based IDS
B. Host-based IDS
C. Signature-based IDS
D. Encryption-based IDS
- Answer: D. Encryption-based IDS
Explanation: While network-based, host-based,
and signature-based are types of intrusion detection
systems, there isn’t a specific type called
“encryption-based IDS.”
- What is the primary purpose of “role-based access control (RBAC)” in software security?
A. To define user roles based on their job functions
B. To encrypt user data based on their roles
C. To monitor user activities in real time
D. To back up user data based on their roles
- Answer: A. To define user roles based on their job
functions
Explanation: Role-based access control (RBAC) is a method where roles are created based on job functions, and permissions to access resources are
assigned to specific roles.
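A minimal Python sketch of the idea; the role, permission, and user names are hypothetical.

# Permissions attach to roles; roles attach to users.
ROLE_PERMISSIONS = {
    "accountant": {"invoice:read", "invoice:create"},
    "auditor":    {"invoice:read", "audit_log:read"},
}
USER_ROLES = {"alice": {"accountant"}, "bob": {"auditor"}}

def is_authorized(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "invoice:create"))   # True, via the accountant role
print(is_authorized("bob", "invoice:create"))     # False, auditors cannot create invoices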
- In software security, which of the following best describes “honeypots”?
A. Software tools to detect vulnerabilities in the code
B. Decoy systems designed to attract potential attackers
C. Systems designed to store sensitive data securely
D. Tools to optimize the performance of the software
- Answer: B. Decoy systems designed to attract potential attackers
Explanation: Honeypots are decoy systems set up to lure potential attackers, allowing security professionals to study their behaviors and tactics.
- Which of the following best describes “cross-site scripting (XSS)” in the context of software security threats?
A. An attack where malicious scripts are injected into trusted websites
B. An attack where the attacker floods the network with excessive requests
C. An attack where the attacker gains unauthorized access to the database
D. An attack where the attacker redirects users to a fake website
- Answer: A. An attack where malicious scripts are injected into trusted websites
Explanation: Cross-site scripting (XSS) is a type of attack where malicious scripts are injected into otherwise benign and trusted websites.
- What is the primary goal of “input sanitization” in the software development process?
A. To optimize the software’s performance
B. To ensure the software’s user interface is user-friendly
C. To clean user input to prevent malicious data from harming the system
D. To compress user input data for optimized storage
- Answer: C. To clean user input to prevent malicious
data from harming the system
Explanation: Input sanitization involves cleaning or filtering user input to ensure that potentially harmful or malicious data doesn’t harm or
compromise the system.
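As one concrete form of sanitization (an added illustration, not the only technique), HTML-escaping user input before echoing it into a page neutralizes embedded markup:

import html

user_comment = '<script>alert("xss")</script>Nice article!'
safe_comment = html.escape(user_comment)

print(safe_comment)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;Nice article!
# The browser now displays the text instead of executing the script.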
- In the context of software security, which of the following best describes “tokenization”?
A. The process of converting sensitive data into non-sensitive tokens
B. The process of authenticating users based on tokens
C. The process of optimizing software tokens for better performance
D. The process of distributing software tokens to users
- Answer: A. The process of converting sensitive data
into non-sensitive tokens
Explanation: Tokenization involves replacing sensitive data with non-sensitive tokens, which can’t be reversed to the original data without a specific key.
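A minimal Python sketch of a vault-based form of tokenization (an assumption for illustration: the in-memory dict stands in for a real, access-controlled token vault, whereas some schemes are key-based and vaultless):

import secrets

_vault: dict[str, str] = {}

def tokenize(sensitive_value: str) -> str:
    token = secrets.token_urlsafe(16)     # random token, reveals nothing about the value
    _vault[token] = sensitive_value
    return token

def detokenize(token: str) -> str:
    return _vault[token]                  # only callable inside the trusted boundary

token = tokenize("4111-1111-1111-1111")
print(token)                              # safe to store or log downstream
print(detokenize(token))                  # original value, recoverable only via the vault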
- Which of the following is a primary concern when considering “secure software deployment”?
A. Ensuring the software is compatible with all devices
B. Ensuring the software is free from known vulnerabilities before deployment
C. Ensuring the software has the latest features
D. Ensuring the software is available in multiple languages
- Answer: B. Ensuring the software is free from known vulnerabilities before deployment
Explanation: Secure software deployment focuses on ensuring that the software is free from known vulnerabilities and is securely configured before it’s deployed to a live environment.
- What is the main goal of “digital signatures” in the context of software security?
A. To optimize the software’s performance
B. To verify the authenticity and integrity of a message or document
C. To encrypt data for secure storage
D. To provide a unique identifier for each user
- Answer: B. To verify the authenticity and integrity of a message or document
Explanation: Digital signatures are cryptographic equivalents of handwritten signatures, used to verify the authenticity and integrity of a message or document.
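A minimal Python sketch of signing and verification; the third-party cryptography package and the Ed25519 algorithm are illustrative assumptions.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"release-1.4.2 manifest"
signature = private_key.sign(document)

try:
    public_key.verify(signature, document)          # passes: authentic and unaltered
    public_key.verify(signature, document + b"!")   # raises: content was tampered with
except InvalidSignature:
    print("verification failed: document altered or signed by someone else")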
- In software security, which of the following best describes “cross-site request forgery (CSRF)”?
A. An attack where the attacker tricks a user into executing unwanted actions on a web application
B. An attack where the attacker injects malicious scripts into trusted websites
C. An attack where the attacker gains unauthorized access to user accounts
D. An attack where the attacker redirects users to malicious websites
- Answer: A. An attack where the attacker tricks a user into executing unwanted actions on a web application
Explanation: CSRF is an attack that tricks the victim into submitting a malicious request, exploiting the trust that a website has in the user’s browser.
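A minimal Python sketch of the synchronizer-token defense against CSRF; session handling is deliberately simplified and the names are hypothetical.

import hmac
import secrets

session = {}                                   # stands in for server-side session storage

def issue_csrf_token() -> str:
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]               # embedded in the legitimate form

def is_request_legitimate(submitted_token: str) -> bool:
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted_token)

form_token = issue_csrf_token()
print(is_request_legitimate(form_token))       # True: came from our own form
print(is_request_legitimate("forged-value"))   # False: forged cross-site request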
- Which of the following is NOT a primary component of “public key infrastructure (PKI)”?
A. Digital certificate
B. Certificate authority (CA)
C. Key exchange protocol
D. Private key
- Answer: C. Key exchange protocol
Explanation: While digital certificate, certificate authority (CA), and private key are components of PKI, a key exchange protocol is not a primary
component of PKI.
- What is the primary purpose of “secure boot” in the context of software security?
A. To ensure faster booting of the system
B. To ensure that only signed and trusted software can run during the system startup
C. To encrypt data during the boot process
D. To provide a user-friendly interface during booting
- Answer: B. To ensure that only signed and trusted software can run during the system startup
Explanation: Secure boot is a security standard that ensures that a device boots using only software that is trusted by the manufacturer.
- In the context of software security, what does “chain of trust” refer to?
A. A sequence of trusted entities ensuring overall system security
B. A sequence of software patches applied to the system
C. A sequence of user authentication methods
D. A sequence of encryption algorithms used in the system
- Answer: A. A sequence of trusted entities ensuring overall system security
Explanation: The chain of trust refers to a series of trusted entities or components in a system where each component can vouch for the integrity and trustworthiness of the next component.
- Which of the following best describes “containerization” in software security?
A. The process of segmenting software into isolated environments
B. The process of encrypting software containers
C. The process of optimizing software containers for better performance
D. The process of distributing software containers to users
- Answer: A. The process of segmenting software into isolated environments
Explanation: Containerization involves encapsulating an application and its dependencies into a “container.” This ensures that it runs consistently across various environments.
- What is the primary goal of “anomaly-based intrusion detection” in software security?
A. To detect intrusions based on known attack signatures
B. To detect intrusions based on deviations from a baseline of normal behavior
C. To detect intrusions based on user feedback
D. To detect intrusions based on system performance metrics
- Answer: B. To detect intrusions based on deviations from a baseline of normal behavior
Explanation: Anomaly-based intrusion detection systems monitor network traffic and compare it against an established baseline to detect any
deviations, which could indicate a potential intrusion.
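A minimal Python sketch of the baseline-and-deviation idea; the request-rate numbers and the 3-sigma threshold are illustrative assumptions.

import statistics

baseline_rpm = [118, 124, 121, 130, 119, 125, 122, 127]   # observed "normal" requests/minute
mean = statistics.mean(baseline_rpm)
stdev = statistics.stdev(baseline_rpm)

def is_anomalous(observed_rpm: float, threshold_sigmas: float = 3.0) -> bool:
    return abs(observed_rpm - mean) > threshold_sigmas * stdev

print(is_anomalous(126))   # False: within normal variation
print(is_anomalous(910))   # True: large deviation, possible intrusion or DoS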
- Which of the following is NOT a type of “access control” in software security?
A. Mandatory access control (MAC)
B. Role-based access control (RBAC)
C. Discretionary access control (DAC)
D. Performance-based access control (PBAC)
- Answer: D. Performance-based access control (PBAC)
Explanation: While MAC, RBAC, and DAC are recognized types of access control methods, there isn’t a specific type called “performance-based
access control (PBAC).”
- In the context of software security, which of the following best describes “sandboxing”?
A. The process of testing software in a controlled environment
B. The process of isolating applications in a restricted environment to prevent malicious activities
C. The process of optimizing software for better performance
D. The process of backing up software data
- Answer: B. The process of isolating applications in a restricted environment to prevent malicious activities
Explanation: Sandboxing involves running applications in a controlled environment to restrict what actions they can perform, preventing potential malicious activities.
- Which of the following is a primary concern when considering “secure coding practices”?
A. Ensuring the software has a user-friendly interface
B. Ensuring the software is developed without introducing vulnerabilities
C. Ensuring the software is compatible with all devices
D. Ensuring the software has the latest features
- Answer: B. Ensuring the software is developed without introducing vulnerabilities
Explanation: Secure coding practices focus on writing code in a way that prevents the introduction of vulnerabilities and security flaws.
- What is the main goal of “data loss prevention (DLP)” tools in the context of software security?
A. To optimize the software’s performance
B. To prevent unauthorized access and data breaches
C. To prevent the unintentional loss or exposure of sensitive data
D. To ensure data is stored in an optimized format
- Answer: C. To prevent the unintentional loss or exposure of sensitive data
Explanation: Data loss prevention (DLP) tools are designed to detect and prevent the unauthorized transmission or loss of sensitive data.
- Which of the following is NOT a primary component of “Identity and Access Management (IAM)”?
A. User authentication
B. User authorization
C. User profiling
D. Role-based access
- Answer: C. User profiling
Explanation: While user authentication, user authorization, and role-based access are components of IAM, user profiling is not a primary component of IAM.
- In software security, which of the following best describes “session management”?
A. The process of managing user access to software features
B. The process of managing and maintaining the state of a user’s interaction with software
C. The process of managing software updates
D. The process of managing software backups
- Answer: B. The process of managing and maintaining the state of a user’s interaction with software
Explanation: Session management involves maintaining and tracking a user’s state and data as they interact with an application, ensuring that the
session remains secure and consistent.
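A minimal Python sketch of the core mechanics (unguessable identifiers, server-side state, an expiry check on every use); the 15-minute timeout is an illustrative assumption.

import secrets
import time

SESSION_TTL_SECONDS = 15 * 60
_sessions: dict[str, dict] = {}                 # stands in for a real session store

def create_session(user: str) -> str:
    session_id = secrets.token_urlsafe(32)      # unpredictable, so it cannot be guessed
    _sessions[session_id] = {"user": user, "expires": time.time() + SESSION_TTL_SECONDS}
    return session_id

def get_user(session_id: str):
    record = _sessions.get(session_id)
    if record is None or record["expires"] < time.time():
        _sessions.pop(session_id, None)         # expired or unknown: force re-login
        return None
    return record["user"]

sid = create_session("alice")
print(get_user(sid))                            # "alice" while the session is valid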
- What is the primary purpose of “cryptographic hashing” in software security?
A. To create a unique fixed-size output from input data
B. To encrypt data for secure transmission
C. To optimize data storage
D. To create a backup of data
- Answer: A. To create a unique fixed-size output from input data
Explanation: Cryptographic hashing functions take input data and produce a fixed-size string of characters, which is typically a sequence of numbers
and letters. The output, called the hash value, should be the same length regardless of the length of the input.
- Which of the following best describes “security orchestration, automation, and response (SOAR)” in software security?
A. A platform for managing and automating security operations
B. A tool for static code analysis
C. A method for optimizing software performance
D. A tool for user authentication
- Answer: A. A platform for managing and automating security operations
Explanation: SOAR platforms allow organizations to collect data about security threats and respond to low-level security events without human
intervention.
- Which of the following is a common method to ensure “data authenticity” in software applications?
A. Data compression
B. Data encryption
C. Digital signatures
D. Data replication
- Answer: C. Digital signatures
Explanation: Digital signatures are used to verify the authenticity of data, ensuring that it has not been tampered with and comes from a verified
source.
- In the context of software security, what does “endpoint protection” refer to?
A. Protecting the software’s database endpoints
B. Protecting the user interface of the software
C. Protecting devices like computers and mobile devices that connect to the network
D. Protecting the software’s API endpoints
- Answer: C. Protecting devices like computers and mobile devices that connect to the network
Explanation: Endpoint protection focuses on ensuring that devices such as computers, mobile devices, and other endpoints that connect to a network are secure from potential threats.
- What is the primary goal of “security information and event management (SIEM)” systems in software security?
A. To manage user permissions and roles
B. To provide real-time analysis of security alerts generated by applications and network hardware
C. To back up and restore software data
D. To manage software updates and patches
- Answer: B. To provide real-time analysis of security alerts generated by applications and network hardware
Explanation: SIEM systems provide real-time analysis of security alerts generated by various hardware and software resources in an organization.
- In the context of software security, which of the
following best describes “threat modeling”?
A. The process of designing user-friendly interfaces
B. The process of predicting software performance under various conditions
C. The systematic identification and evaluation of potential threats to the software
D. The process of simulating user interactions with software
- Answer: C. The systematic identification and evaluation of potential threats to the software
Explanation: Threat modeling involves identifying, understanding, and addressing potential threats in the early stages of software development.
- Which of the following is a primary concern when considering “secure software design”?
A. Ensuring the software has the latest features
B. Ensuring the software’s user interface is visually appealing
C. Ensuring the software architecture is designed with security principles in mind
D. Ensuring the software is compatible with all devices
- Answer: C. Ensuring the software architecture is designed with security principles in mind
Explanation: Secure software design focuses on building software that is resilient to threats by incorporating security principles into its architecture.
- What is the main goal of “application allow listing” in the context of software security?
A. To create a list of users authorized to access the application
B. To specify which applications are allowed to run on a system
C. To identify and block malicious applications
D. To optimize the performance of authorized applications
- Answer: B. To specify which applications are allowed to run on a system
Explanation: Application allow listing is a security approach where only specified applications are permitted to run, preventing unauthorized or
malicious software from executing.
- In software security, which of the following best describes “security misconfiguration”?
A. A situation where security settings are left at their default values
B. A situation where security software is not updated regularly
C. A situation where security protocols are overly complex
D. A situation where security measures are redundant
- Answer: A. A situation where security settings are left at their default values
Explanation: Security misconfiguration occurs when security settings are not appropriately configured, often left at default, making the system
vulnerable.
- Which of the following is NOT a primary component of “incident response” in software security?
A. Identification of the incident
B. Containment of the incident
C. Resolution of the software bug
D. Recovery and lessons learned
- Answer: C. Resolution of the software bug
Explanation: While identification, containment, and recovery are stages of incident response, the resolution of software bugs is a part of the software
development and maintenance process, not specifically incident response.
- In the context of software security, what does “patch management” refer to?
A. The process of designing user interfaces
B. The process of regularly updating and managing patches for software vulnerabilities
C. The process of managing user feedback and reviews
D. The process of optimizing software code
- Answer: B. The process of regularly updating and managing patches for software vulnerabilities
Explanation: Patch management involves the systematic acquisition, testing, and installation of updates and patches to software to address vulnerabilities and improve security.
- What is the primary purpose of “security audits” in software security?
A. To identify and fix performance issues in the software
B. To verify that the software meets user requirements
C. To assess and ensure the software adheres to security standards and policies
D. To introduce new features to the software
- Answer: C. To assess and ensure the software adheres to security standards and policies
Explanation: Security audits are systematic evaluations of the security of a system or application to ensure compliance with security standards and
policies.
- Which of the following best describes “man-in-the-middle (MITM)” attacks in software security?
A. Attacks where the attacker directly communicates with the victim
B. Attacks where the attacker intercepts and possibly alters the communication between two parties
C. Attacks where the attacker impersonates a software application
D. Attacks where the attacker floods a system with traffic
- Answer: B. Attacks where the attacker intercepts and possibly alters the communication between two parties
Explanation: In a man-in-the-middle attack, the attacker secretly intercepts and potentially alters the communication between two parties without their knowledge.
- What is the primary goal of “multifactor authentication (MFA)” in software security?
A. To provide multiple layers of encryption
B. To verify user identity using multiple methods or factors
C. To allow multiple users to access the same account
D. To optimize the user login process
- Answer: B. To verify user identity using multiple methods or factors
Explanation: Multifactor authentication (MFA) enhances security by requiring users to provide multiple forms of identification before granting
access.
- In the context of software security, which of the following best describes “risk assessment”?
A. The process of designing secure software architectures
B. The process of evaluating the potential risks associated with software vulnerabilities
C. The process of training users on software features
D. The process of updating software to the latest version
- Answer: B. The process of evaluating the potential risks associated with software vulnerabilities
Explanation: Risk assessment involves identifying, evaluating, and prioritizing risks to determine the potential impact of software vulnerabilities and to decide on mitigation strategies.
- What service can integrate an app with a social media site that provides software libraries and tools?
A. Software Development Kit (SDK)
B. Data Loss Prevention (DLP)
C. Integrated Development Environment (IDE)
D. Application Programming Interface (API)
- Answer: A. Software Development Kit (SDK)
Explanation: A Software Development Kit (SDK) typically includes a set of software libraries, development tools, and documentation that developers can use to create or enhance software. In this case, the social media site provides software libraries and other tools to help developers integrate their applications, which is characteristic of an SDK.
- To overcome resistance to a change, which of the following approaches provides the best solution?
a. The change is well planned.
b. The change is fully communicated.
c. The change is implemented in a timely way.
d. The change is fully institutionalized.
- d. Managing change is a difficult process. People resist change due to a certain amount of discomfort that a change may bring. It does not matter how well the change is planned, communicated, or implemented if it is not spread throughout the organization evenly. Institutionalizing
the change means changing the climate of the company. This needs to be done in a consistent and orderly manner. Any major change should be done using a pilot approach. After a number of pilots have been successfully completed, it is time to use these success stories as leverage to change the entire company.
- During the system design of data input control procedures, the least consideration should be given to which of the following items?
a. Authorization
b. Validation
c. Configuration
d. Error notification
- c. Configuration management is a procedure for applying technical and administrative direction and monitoring to (i) identify and document the functional and physical characteristics of an item or system, (ii) control any changes made to such characteristics, and (iii) record and report the change, process, and implementation status. The
authorization process may be manual or automated. All authorized transactions should be recorded and entered into the system for processing. Validation ensures that the data entered meets predefined criteria in terms of its attributes. Error notification is as important as
error correction.
- Software configuration management (SCM) should primarily address which of the following questions?
a. How does software evolve during system development?
b. How does software evolve during system maintenance?
c. What constitutes a software product at any point in time?
d. How is a software product planned?
- c. Software configuration management (SCM) is a discipline for managing the evolution of computer products, both during the initial stages of development and through to maintenance and final product termination. Visibility into the status of the evolving software product is provided through the adoption of SCM on a software project.
Software developers, testers, project managers, quality assurance staff, and the customer benefit from SCM information. SCM answers questions such as (i) what constitutes the software product at any point in time? (ii) What changes have been made to the software product?
How a software product is planned, developed, or maintained, as described in the other choices, merely describes the history of the product’s evolution; it is not the primary question that SCM addresses.
- What is the main feature of software configuration management (SCM)?
a. Tracing of all software changes
b. Identifying individual components
c. Using computer-assisted software engineering tools
d. Using compilers and assemblers
- a. Software configuration management (SCM) is practiced and integrated into the software development process throughout the entire life cycle of the product. One of the main features of SCM is the tracing of all software changes.
Identifying individual components is incorrect because it is part of the configuration identification function. The goals of configuration identification are to create the ability to identify the components of the system throughout its life cycle and to provide traceability between the
software and related configuration identification items.
Computer-assisted software engineering (CASE) tools, compilers, and assemblers are incorrect because they are examples of technical factors. SCM is essentially a discipline applying technical and administrative direction and surveillance for managing the evolution of
computer program products during all stages of development and maintenance. Some examples of technical factors include use of CASE tools, compilers, and assemblers.
- Which of the following areas of software configuration
management (SCM) is executed last?
a. Identification
b. Change control
c. Status accounting
d. Audit
- d. There are four elements of configuration management. The first element is configuration identification, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. The second element is configuration change control, consisting of
evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.
The third element is configuration status accounting, consisting of recording and reporting of information that is needed to manage a configuration effectively.
The fourth element is software configuration audit, consisting of periodically performing a review to ensure that the SCM practices and procedures are rigorously followed. Auditing is performed last after all the elements are in place to determine whether they are properly
working.
- Which of the following is an example of input validation error?
a. Access validation error
b. Configuration error
c. Buffer overflow error
d. Race condition error
- c. In an input validation error, the input received by a system is not properly checked, resulting in a vulnerability that can be exploited by sending a certain input sequence. In a buffer overflow, the input received by a system is longer than the expected input length, but the system does not check for this condition. In an access validation error, the system is vulnerable because the access control mechanism is faulty. A configuration error occurs when user controllable settings in a system are set so that the system is vulnerable. Race condition error occurs when there is a delay between the time when a system checks to see if an operation is allowed by the security model and the time when the system actually performs the operation.
- From a risk management viewpoint, new system interfaces are addressed in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Development/acquisition
c. Implementation
d. Operation/maintenance
- d. In the operation/maintenance phase of the SDLC, risk
management activities are performed whenever major changes are made to an IT system in its operational (production) environment (for example, new system interfaces).
- The initiation phase of the security certification and
accreditation process does not contain which of the following?
a. Preparation
b. Resource identification
c. Action plan and milestones
d. Security plan acceptance
- c. The action plan and milestones document is produced in the later phases of the security certification and accreditation process; it describes the measures that have been implemented or planned to correct any deficiencies noted during the assessment of the security controls and to reduce or eliminate known system vulnerabilities.
The other three choices are part of the initiation phase, which is the first phase, where it is too early to develop the action plan and milestones.
- Which of the following comes first in the security certification and accreditation process of an information system?
a. Security certification
b. Security recertification
c. Security accreditation
d. Security reaccreditation
- a. The security certification work comes first as it determines the extent to which the security controls in the information system are implemented correctly, operating as intended, and producing the desired system security posture. This assurance is achieved through system security assessments. The security accreditation package
documents the results of the security certification.
Recertification and reaccreditation occur periodically and sequentially whenever there is a significant change to the system or its operational environment as part of ongoing monitoring of security controls.
- In the continuous monitoring phase of the security certification and accreditation process, ongoing assessment of security controls is based on which of the following?
a. Configuration management documents
b. Action plan and milestone documents
c. Configuration control documents
d. Security impact analyses documents
- b. To determine what security controls to select for ongoing review, organizations should first prioritize testing on “action plan and milestones” items that have been closed. These newly implemented controls should be validated first.
The other three documents are part of the continuous monitoring phase and come into play when there are major changes or modifications to the operational system.
- What is the major purpose of configuration management?
a. To reduce risks from system insertions
b. To reduce risks from system installations
c. To reduce risks from modifications
d. To minimize the effects of negative changes
- d. The purpose of configuration management is to minimize the effects of negative changes or differences in configurations on an information system or network. The other three choices are examples of minor purposes, all leading to the major purpose. Note that modifications could be proper or improper where the latter leads to a
negative effect and the former leads to a positive effect.
- The primary implementation of the configuration management process is performed in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Acquisition/development
c. Implementation
d. Operation/maintenance
- d. The primary implementation of the configuration management process is performed during the operation/maintenance phase of the SDLC. The other phases are too
early for this process to take place.
- Which of the following phases of the security certification and accreditation process primarily deals with configuration management?
a. Initiation
b. Security certification
c. Security accreditation
d. Continuous monitoring
- d. The fourth phase of the security certification and accreditation process, continuous monitoring, primarily deals with configuration management. Documenting information system changes and assessing
the potential impact those changes may have on the security of the system is an essential part of continuous monitoring and maintaining the security accreditation.
- An impact analysis of changes is conducted in which of the following configuration management process steps?
a. Identify changes.
b. Evaluate change request.
c. Implement decisions.
d. Implement approved change requests.
- b. After initiating a change request, the effects that the change may have on a specific system or other interrelated systems must be evaluated. An impact analysis of the change is conducted in the “evaluate change request” step. Evaluation follows the identification of a change and informs the subsequent decisions on whether to approve it, how to implement it, and, ultimately, the implementation of the approved change.
- Additional testing or analysis may be needed in which of the following operational decision choices of the
configuration management process?
a. Approve
b. Implement
c. Deny
d. Defer
- d. In the “defer” choice, immediate decision is postponed until further notice. In this situation, additional testing or analysis may be needed before a final decision can be made later. On the other hand, approve, implement, and deny choices do not require additional testing and analysis because management is already satisfied with the testing and analysis.
- During the initiation phase of a system development life cycle (SDLC) process, which of the following tasks is not typically performed?
a. Preliminary risk assessment
b. Preliminary system security plans
c. High-level security test plans
d. High-level security system architecture
- c. A security test plan, whether high level or low level, is
developed in the development/acquisition phase. The other three choices are performed in the initiation phase.
- Security controls are designed and implemented in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Development/acquisition
c. Implementation
d. Disposal
- b. Security controls are developed, designed, and implemented in the development/acquisition phase. Additional controls may be developed to support the controls already in place or planned.
- Product acquisition and integration costs are determined in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Development/acquisition
c. Implementation
d. Disposal
- b. Product acquisition and integration costs that can be attributed to information security over the life cycle of the system are determined in the development/acquisition phase. These costs include hardware, software, personnel, and training.
- A formal authorization to operate an information system is obtained in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Development/acquisition
c. Implementation
d. Disposal
- c. In the implementation phase, the organization configures and enables system security features, tests the functionality of these features, installs or implements the system, and finally, obtains a formal authorization to operate the system.
- Which of the following gives assurance as part of system’s security and functional requirements defined for an information system?
a. Access controls
b. Background checks for system developers
c. Awareness
d. Training
- b. Security and functional requirements can be expressed as technical (for example, access controls), assurances (for example, background checks for system developers), or operational practices (for example, awareness and training).
- System users must perform which of the following when new security controls are added to an existing application system?
a. Unit testing
b. Subsystem testing
c. Full system testing
d. Acceptance testing
- d. If new security controls are added to an existing application system or to a support system, system users must perform additional acceptance tests of these new controls. This approach ensures that new controls meet security specifications and do not conflict with or invalidate existing controls.
- Periodic reaccreditation of a system is done in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Development/acquisition
c. Implementation
d. Operation/maintenance
- d. Documenting information system changes and assessing the potential impact of these changes on the security of a system is an essential part of continuous monitoring and key to avoiding a lapse in the system security reaccreditation. Periodic reaccreditation is done in
the operation phase.
- Which of the following tests is driven by system requirements?
a. Black-box testing
b. White-box testing
c. Gray-box testing
d. Integration testing
- a. Black-box testing, also known as functional testing, executes part or all of the system to validate that the user requirements are satisfied. White-box testing, also known as structural testing, examines the logic of the units and may be used to support test
coverage requirements, i.e., how much of the program has been executed.
Gray-box testing falls between the two, combining black-box techniques with partial knowledge of the system’s internals. Integration testing is performed to examine how units interface and interact with each other, with the assumption that the units and the objects (for example, data) they manipulate have all passed their unit tests.
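As a small illustration of requirements-driven (black-box) testing, the sketch below exercises a hypothetical password-policy function purely through its inputs and outputs, with no knowledge of its internal logic.

def meets_password_policy(password: str) -> bool:
    # Implementation under test; its internals are irrelevant to a black-box tester.
    return len(password) >= 12 and any(c.isdigit() for c in password)

def test_minimum_length_and_digit_requirement():
    assert meets_password_policy("correct-horse-9")       # satisfies the stated requirement
    assert not meets_password_policy("short1")            # too short, must be rejected
    assert not meets_password_policy("longbutnodigits!")  # no digit, must be rejected

test_minimum_length_and_digit_requirement()
print("requirement-driven tests passed")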
- System integration is performed in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Development/acquisition
c. Implementation
d. Operation/maintenance
- c. The new system is integrated at the operational site where it is to be deployed for operation. Security control settings and switches are enabled.
- Formal risk assessment is conducted in which of the following system development life cycle (SDLC) phases?
a. Initiation
b. Development/acquisition
c. Implementation
d. Operation/maintenance
- b. Formal risk assessment is conducted in the
development/acquisition phase to identify system protection requirements. This analysis builds on the initial (preliminary or informal) risk assessment performed during the initiation phase, but will be more in-depth and specific.