Chapter 04: Vulnerability Scanning Flashcards
Vulnerability Management Programs
Seek to identify, prioritize, and remediate vulnerabilities before an attacker exploits them to undermine the CIA of enterprise information assets
PCI-DSS Vulnerability Scans
- Orgs must run both internal and external scans
- Orgs must run scans on at least a quarterly basis and after any significant change in the network, like new system component installations, changes in network topology, firewall rule modifications, product upgrades
- Internal scans must be conducted by qualified personnel
- Orgs must remediate any high-risk vulnerabilities and repeat scans to confirm they're resolved, until they receive a clean scan report
- External scans must be conducted by an ASV (approved scanning vendor) certified by the PCI Security Standards Council
FISMA Vulnerability Scans
Federal Information Security Management Act of 2002
Requires that government agencies and other orgs operating systems on behalf of the government comply with a series of standards
- Scans for vulnerabilities in the information system and hosted apps, and when new vulnerabilities potentially affecting the system/application are identified and reported
- Employs vulnerability scanning tools and techniques that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for:
1. Enumerating platforms, software flaws, and improper configs
2. Formatting checklists and test procedures
3. Measuring vulnerability impact
- Analyzes vulnerability scan reports and results from security control assessments
- Remediates legitimate vulnerabilities in accordance with an organizational assessment of risk
- Shares information obtained from the vulnerability scanning process and security control assessments to help eliminate similar vulnerabilities in other information systems (EX: systemic weaknesses or deficiencies)
Page 114-115
NIST 800-53
Security and Privacy Controls for Federal Information Systems and Organizations
Describes eight control enhancements that may be required depending on an organization's circumstances:
1. The org employs vulnerability scanning tools that include the capability to readily update the list of information system vulnerabilities to be scanned
2. The org updates the information system vulnerabilities scanned prior to a new scan and/or when new vulnerabilities are identified and reported
3. The org employs vulnerability scanning procedures that can identify the breadth and depth of coverage (EX: information system components scanned and vulnerabilities checked)
4. The org determines what information about the information systems is discoverable by adversaries and subsequently takes org-defined corrective actions
5. The information system implements privileged access authorization to information system components for select vulnerability scanning activities
6. The org employs automated mechanisms to compare the results of vulnerability scans over time to determine trends in information system vulnerabilities
7. Withdrawn by NIST
8. The org reviews historic audit logs to determine if a vulnerability identified in the information system has been previously exploited
9. Withdrawn by NIST
10. The org correlates the output from vulnerability scanning tools to determine the presence of multi-vulnerability or multi-hop attack vectors
Page 115-116
Identifying Scan Targets
Some orgs choose to cover all systems in their scanning process, while others select only certain systems depending on the answers to questions like:
* What is the data classification of the information stored, processed, or transmitted by the system?
* Is the system exposed to the internet or other public or semipublic networks?
* What services are offered by the system?
* Is the system a production, test, or development system?
Scanning tools can automate the identification of systems to be scanned and build an asset inventory
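The questions above can be folded into a simple prioritization sketch. This is illustrative only; the scoring weights, category names, and the `target_priority` helper are assumptions, not anything from the text:

```python
# Illustrative scoring of scan candidates based on the selection questions above.
# Weights and classification labels are assumptions for the sketch, not a standard.
CLASSIFICATION_WEIGHTS = {"public": 1, "internal": 2, "confidential": 3, "restricted": 4}

def target_priority(data_classification, internet_facing, environment):
    """Higher score = scan this system sooner and more often."""
    score = CLASSIFICATION_WEIGHTS[data_classification]
    if internet_facing:
        score += 2          # exposed to the internet or semipublic networks
    if environment == "production":
        score += 1          # production outranks test/development
    return score
```

Under this toy scheme, a restricted, internet-facing production system scores highest and gets scanned first.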
Determining Scan Frequency
Factors that influence how an org decides this:
* Risk appetite
* Regulatory requirements
* Technical constraints
* Business constraints
* Licensing limitations
* Operational constraints
Pentesters need to know where the trade-offs are made for orgs in this decision process
These limitations can point to areas where pentesters should supplement the org’s existing scans with custom scans designed specifically for pentesting
Page 119, but you know this from CySA
Active vs Passive Scanning
Active Scans
Interact with the host to identify open services and check for possible vulnerabilities
Active provides high-quality results, but it comes with drawbacks:
* Noise and will likely be detected by admins of the scanned systems—not an issue in environments where admins have knowledge of the scan, but problematic for stealth
* Potential to accidentally exploit vulnerabilities and interfere with the functioning of production systems
* May miss some systems if they’re blocked by firewalls, IPS, segmentation, etc
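The active approach can be sketched as a full TCP connect against each target port; the `tcp_connect_scan` helper below is hypothetical, not a tool named in the text:

```python
import socket

def tcp_connect_scan(host, ports, timeout=1.0):
    """Complete a full TCP three-way handshake against each port.

    Returns the set of ports that accepted the connection. This is the
    noisy 'TCP connect' technique: every probe is a real connection and
    will show up in the target's logs.
    """
    open_ports = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.add(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

A real scanner would follow the connect with service and version detection against each open port; this sketch stops at open/closed.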
Passive Scans
Take a different approach and supplement active scans
Instead of probing for vulnerabilities, passive scanners monitor the network similar to an IDS
Instead of watching for intrusion attempts though, they look for signatures of outdated systems and apps, reporting results to admins
Only capable of detecting vulnerabilities that are reflected in network traffic and aren’t a replacement for active scans
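The passive scanner's core matching step can be sketched as signature lookups against banners captured from monitored traffic. The two-entry signature table below is a made-up example; real products ship large, maintained signature databases:

```python
import re

# Illustrative signature table (pattern -> advisory note). Example entries only.
OUTDATED_SIGNATURES = {
    r"Apache/2\.2\.\d+": "Apache httpd 2.2.x reached end of life in 2017",
    r"OpenSSH_5\.\d+": "OpenSSH 5.x is long past end of support",
}

def match_banner(banner):
    """Return advisory notes for outdated-software signatures in a captured banner."""
    return [note for pattern, note in OUTDATED_SIGNATURES.items()
            if re.search(pattern, banner)]
```

Note the limitation from above: if an outdated service never sends a banner across the monitored network segment, a passive scanner like this sees nothing.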
Scoping Vulnerability Scans
Figuring out the extent of the scan by answering questions like:
* What systems, networks, services, apps, and protocols will be included in the scan?
* What technical measures will be used to test whether systems are present on the network?
* What tests will be performed against systems discovered by a vulnerability scanner?
* When will you run the scans?
When scans are taking place for pentests, avoid business interruptions as much as is possible
But the invasiveness of the test and the degree of coordination with management should be guided by the agreed-upon SOW for the pentest
DION NOTES
* You need to know what you’re going to add to the scope of your vuln scan, because it will consume more resources
* But the more you add, the more likely you’ll be seen by network defenders and blocked
* This is why conducting recon up front to determine what OS, services, and versions are running on a given target matters: if you can scan for just the vulns associated with your findings, you drastically reduce your time and your chances of being stopped
* If you’re doing web app scanning, those can take a long time too so know what you’re doing there with regard to the amount of code an app might be built on
* Know your protocols, because scanning all 65,535 ports is not ideal and will kill your timing. Will you instead just look at the web server on ports 80 and 443?
* This is all based on your initial scoping documentation
Scan Sensitivity Levels
Often it’s more productive to adjust the scan settings to the specific needs of an assessment vs conducting a full scan using all available vulnerability tests
Sensitivity settings determine the types of checks that the scanner will perform and should be customized to ensure that the scan meets its objectives while minimizing the possibility of disruption to the environment
Pentesters don’t want to cause issues in assets, especially ICS, IoT, and specialized medical equipment
The best way to avoid that is to maintain a test environment containing copies of the same systems running on the production network and run scans against the test environment; anything found there can then be fixed in production without worrying about a scan breaking live systems
Stealth Scans
The default for most scanners is a TCP connect scan, which is noisy and will attract immediate attention
Stealth scans are a good workaround for this, especially if you’re simulating how an attacker might actually approach a target
Scan Perspective
External Scans
Run from the perspective of the internet, giving admins a view of what an attacker located outside the org would see as potential vulnerabilities
Internal Scans
Might run from a scanner on the general corporate network, providing the view that a malicious insider might encounter
Data Center Scans
Scanners located inside the data center and agents installed on servers offer the most accurate view of the real state of the server by showing vulnerabilities that might be blocked by other security controls on the network
DION NOTES
* Important to understand the network topology here, because this will impact where you scan from
SCAP
Security Content Automation Protocol
An effort by the security community, led by NIST, to create a standardized approach for communicating security-related information
This standardization is important to the automation of interactions between security components
SCAP standards include the following:
* CCE: Common Configuration Enumeration—Discussing system config issues
* CPE: Common Platform Enumeration—Describing product names and versions
* CVE: Common Vulnerabilities and Exposures—Security-related software flaws
* CVSS: Common Vulnerability Scoring System—Measuring and describing the severity of security-related software flaws
* XCCDF: Extensible Configuration Checklist Description Format—Language for specifying checklists and reporting checklist results
* OVAL: Open Vulnerability and Assessment Language—Language for specifying low-level testing procedures used by checklists
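Of these, CVSS is the one with arithmetic behind it. Here is a sketch of the CVSS v3.1 base-score calculation for scope-unchanged vectors, following the published FIRST.org formula; the helper and table names are mine:

```python
import math

# CVSS v3.1 base-metric weights (scope unchanged), from the FIRST.org specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality/Integrity/Availability

def roundup(x):
    """CVSS v3.1 Roundup: smallest one-decimal value >= x (spec's integer trick)."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a scope-unchanged CVSS v3.1 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

For the common worst-case vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H this yields 9.8 (Critical). Scope-changed vectors use different PR weights and a different impact formula, which this sketch omits.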
Static Code Analysis
Conducted by reviewing the code for an application
Uses the source code for an app, which makes it white-box testing with full visibility to the testers
Allows testers to find problems that other tests miss, such as internal business logic flaws that aren't exposed to other testing methods
Static analysis doesn’t run the program being analyzed, but instead focuses on understanding how it’s written and what the code is intended to do
EXAM NOTE: Brakeman is a static code analysis tool used only for Ruby on Rails
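The core idea—walking a program's structure without ever running it—can be shown with a toy checker that parses Python source and flags calls to eval(). The `find_eval_calls` helper is an illustrative name, not a real tool:

```python
import ast

def find_eval_calls(source):
    """Toy static analyzer: report line numbers of calls to eval() in Python source.

    The code is parsed into an abstract syntax tree and inspected; nothing
    is executed, which is what distinguishes static from dynamic analysis.
    """
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"]
```

Real static analyzers apply hundreds of such rules plus data-flow analysis, but the parse-and-inspect pattern is the same.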
Dynamic Code Analysis
Relies on the execution of the code while providing it with input to test the software
Pentesters will likely find themselves conducting dynamic analysis rather than static, because the RoE and SOW often restrict access to source code
Fuzzing
Sending invalid or random data to an app to test its ability to handle unexpected data
App is monitored to determine if it crashes, fails, or responds in an incorrect manner when stressed
Can usually be performed externally without any privileged access to systems and is therefore a popular technique with pentesters
Very noisy and can attract attention from cybersecurity teams
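A bare-bones mutation fuzzer fits in a few lines. The `mutate`/`fuzz` helpers and the bit-flip strategy below are illustrative assumptions, not a specific tool:

```python
import random

def mutate(seed_input, flips=4, rng=None):
    """Produce a fuzzed variant of seed_input by flipping a few random bits."""
    rng = rng or random.Random()
    data = bytearray(seed_input)
    for _ in range(flips):
        pos = rng.randrange(len(data))
        data[pos] ^= 1 << rng.randrange(8)
    return bytes(data)

def fuzz(target, seed_input, iterations=100, seed=0):
    """Feed mutated inputs to target() and collect the inputs that crash it."""
    rng = random.Random(seed)          # fixed seed makes runs reproducible
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_input, rng=rng)
        try:
            target(candidate)
        except Exception:              # treat any unhandled exception as a "crash"
            crashes.append(candidate)
    return crashes
```

Production fuzzers add coverage feedback, corpus management, and crash triage, but the mutate-run-observe loop is the essence of the technique described above.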
Vulnerability Lifecycle
Discover
* Identify the vulnerability
* Create an exploit, often done first as a proof of concept (POC) to prove that the vuln can be exploited
Coordinate
* Report the vulnerability
* Generate a CVE
Mitigate
* Release the CVE
* Create a patch, release security config, etc
Manage
* Deploy a patch
* Test the system
Document
* Record the results
* Record the lessons learned
DION NOTES
* The exam lists Discover as the first stage
* But there really is a stage zero, and that’s with unknown vulns aka zero-days
* When a zero-day comes out, it becomes a new tool or tactic in a pentester's arsenal for breaking into a system