MODULE 27 - CERTIFICATION EXAM PREPARATION Flashcards
A Common Data Platform
ELK - Elasticsearch, Logstash, and Kibana (ELK).
ELK:
A typical network has a multitude of different logs to keep track of, and most of those logs are in different formats.
With huge amounts of disparate data, how is it possible to get an overview of network operations while also getting a sense of subtle anomalies or changes in the network?
ELK PART 2:
The Elastic Stack attempts to solve this problem by providing a single interface view into a heterogeneous network.
The Elastic Stack consists of Elasticsearch, Logstash, and Kibana (ELK).
It is a highly scalable and modular framework for ingesting, analyzing, storing, and visualizing data.
Elasticsearch is an open-core platform (open source in the core components) for searching and analyzing an organization’s data in near real time.
It can be used in many different contexts but has gained popularity in network security as a SIEM tool.
ELK:
Security Onion includes ELK and other components from Elastic, including:
– Beats
– ElastAlert
– Curator
– Beats
This is a series of software plugins that send different types of data to the Elasticsearch data stores.
Security Onion includes ELK and other components from Elastic including:
ElastAlert
ELK:
Security Onion includes ELK and other components from Elastic including:
–Beats
–ElastAlert
–Curator
– ElastAlert
This provides queries and security alerts based on user-defined criteria and other information from data in Elasticsearch.
Alert notifications can be sent to a console, to email, or to other notification systems such as the TheHive security incident response platform.
– Curator
This provides actions to manage Elasticsearch data indices.
ELK:
Elasticsearch, which is the search engine component, uses RESTful web services and APIs, a distributed computing cluster with multiple server nodes, and a distributed NoSQL database made up of JSON documents.
Additional functionality can be added through custom-created extensions.
The Elasticsearch company offers a commercial extension called X-Pack, which adds security, alerting, monitoring, reporting, and graphs.
The company also offers a machine-learning add-on, as well as its own Elastic SIEM product.
ELK:
Logstash enables the collection and normalization of network data into data indexes that can be efficiently searched by Elasticsearch.
Logstash and Beats modules are used to ingest data into the Elasticsearch cluster.
ELK:
Kibana provides a graphical interface to data that is compiled by Elasticsearch.
It enables visualization of network data and provides tools and shortcuts for querying that data in order to isolate potential security breaches.
ELK:
The core open source components of the Elastic Stack are Logstash, Beats, Elasticsearch, and Kibana, as shown in the figure.
Elastic Stack Core Components:
https://snipboard.io/0IMdHL.jpg
ELK: Logstash
Logstash is an extract, transform, and load (ETL) system with the ability to take in various sources of log data and transform or parse the data through translation, sorting, aggregating, splitting, and validation.
After transformation, the data is loaded into the Elasticsearch database in the proper file format.
The figure shows some of the fields that are available in Logstash as shown in the Kibana Management interface.
https://snipboard.io/jdEBIf.jpg
ELK:
BEATS
Beats agents are open source software clients used to send operational data directly into Elasticsearch or through Logstash.
Elastic, as well as the open source community, actively develop Beats agents, so there are a huge variety of Beats agents for sending data to Elasticsearch in near real time.
Some of the Beats agents provided by Elastic are Auditbeat for audit data, Metricbeat for metric data, Heartbeat for availability, Packetbeat for network traffic, Journalbeat for Systemd journals, and Winlogbeat for Windows event logs.
Some community-sourced Beats are Amazonbeat, Apachebeat, Dockbeat, Nginxbeat, and Mqttbeat, to name a few.
ELK: Elasticsearch
Elasticsearch is a cross-platform enterprise search engine written in Java.
The core components are open source, with commercial add-ons called X-Packs that give additional functionality.
Elasticsearch supports near real-time search using simple REST APIs to create or update JavaScript Object Notation (JSON) documents using HTTP requests.
Searches can be made using any program capable of making HTTP requests such as a web browser, Postman, cURL, etc.
These APIs can also be accessed by Python or other programming language scripts for automated operations.
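Because searches are just HTTP requests, a short script can query Elasticsearch directly. Below is a minimal sketch in Python using only the standard library; the host, index pattern, and search field are hypothetical placeholders (port 9200 is the usual Elasticsearch default), not values from this course.

```python
# Minimal sketch: full-text search against an Elasticsearch REST endpoint.
import json
import urllib.request

query = {
    "query": {
        "match": {
            "message": "authentication failure"  # hypothetical search term
        }
    }
}

req = urllib.request.Request(
    url="http://localhost:9200/logstash-*/_search",  # hypothetical host/index
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

# Each hit is a JSON document stored in the index.
for hit in results["hits"]["hits"]:
    print(hit["_source"])
```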
ELK: Elasticsearch PART 2:
The Elasticsearch data structure is called an inverted index, which is designed to allow very fast full-text searches.
An index is like a database: it is a namespace for a collection of documents that are related to each other.
An index can be partitioned or mapped into different types.
If you compare an Elasticsearch index to a traditional relational database, the index is like the database, the types are like the tables, and the documents are like the columns and rows, as shown in the table.
https://snipboard.io/iOMTqt.jpg
Elasticsearch stores data in JSON-formatted documents.
A JSON document is organized into hierarchies of key/value pairs, with a key being a name and the corresponding value being either a string, number, Boolean, date, array, or other type of data.
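To make the document structure concrete, here is a hedged sketch of a JSON document (showing string, number, Boolean, date, array, and nested values) and the kind of HTTP PUT that stores it in an index. The index name, document ID, and fields are invented for illustration.

```python
# Minimal sketch: store one JSON document in a hypothetical index "nsm-demo".
import json
import urllib.request

document = {
    "timestamp": "2017-07-24T19:39:35+00:00",  # date value
    "source_ip": "192.168.1.10",               # string value
    "event": {                                 # nested key/value hierarchy
        "severity": 3,                         # number value
        "allowed": False,                      # Boolean value
        "tags": ["firewall", "deny"],          # array value
    },
}

req = urllib.request.Request(
    url="http://localhost:9200/nsm-demo/_doc/1",  # index/doc ID are examples
    data=json.dumps(document).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # Elasticsearch echoes back the indexing result
```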
ELK: Kibana
Kibana provides an easy-to-use graphical user interface for managing Elasticsearch.
By using a web browser, an analyst can use the Kibana interface to search and view indices.
The Management tab allows you to create and manage indices and their types and formats.
The Discover tab is a quick and powerful way to view your data and search it using the search tools.
The Visualize tab allows you to create custom visualizations like bar charts, line charts, pie charts, heat maps, and more.
The visualizations you create can be organized into customized dashboards for monitoring and analyzing your data.
A Kibana dashboard is shown in the figure.
ELK: A Kibana Dashboard:
https://snipboard.io/rnSfst.jpg
Data Reduction
The amount of network traffic that is collected by packet captures and the number of log file entries and alerts that are generated by network and security devices can be enormous.
Even with recent advances in Big Data, processing, storing, accessing, and archiving NSM-related data is a daunting task.
For this reason, it is important to identify the network data that should be gathered.
Not every log file entry, packet, and alert needs to be gathered.
By limiting the volume of data, tools like Elasticsearch will be far more useful, as shown in the figure.
Data Reduction PART 2
Some network traffic has little value to NSM.
Encrypted data, such as IPsec or SSL traffic, is largely unreadable.
Some traffic, such as that generated by routing protocols or spanning-tree protocol, is routine and can be excluded.
Other broadcast and multicast protocols can usually be eliminated from packet captures, as can traffic from other protocols that generate a lot of routine traffic.
Data Reduction PART 3
In addition, alerts that are generated by a HIDS, such as Windows security auditing or OSSEC, should be evaluated for relevance.
Some are informational or of low potential security impact.
These messages can be filtered from NSM data.
Similarly, syslog may store messages of very low severity that could be disregarded to diminish the quantity of NSM data to be handled.
https://snipboard.io/4JLfh7.jpg
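As an illustration of this kind of filtering, the sketch below drops low-severity records before they are retained. The record format and threshold are hypothetical; only the syslog severity numbering is standard.

```python
# Minimal sketch of data reduction by severity: discard syslog-style records
# below a chosen severity before they enter the NSM data store.
# Syslog severities run 0 (Emergency) to 7 (Debug); larger = less severe.
SEVERITY_THRESHOLD = 4  # keep Warning (4) and anything more severe

def keep_record(record: dict) -> bool:
    """Return True if the record is severe enough to retain."""
    return record.get("severity", 7) <= SEVERITY_THRESHOLD

records = [
    {"severity": 2, "message": "auth failure for root"},        # kept
    {"severity": 6, "message": "interface statistics updated"}, # dropped
]

retained = [r for r in records if keep_record(r)]
print(retained)  # only the severity-2 record survives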
Data Normalization
Data normalization is the process of combining data from a number of data sources into a common format.
Logstash provides a series of transformations that process security data and transform it before adding it to Elasticsearch.
Additional plugins can be created to suit the needs of the organization.
Data Normalization PART 2
A common schema will specify the names and formats for the required data fields.
Formatting of the data fields can vary widely between sources.
However, if searching is to be effective, the data fields must be consistent.
For example, IPv6 addresses, MAC addresses, and date and time information can be represented in varying formats.
Similarly, subnet masks, DNS records, and so on can vary in format between data sources.
Logstash transformations accept the data in its native format and make elements of the data consistent across all sources.
For example, a single format will be used for addresses and timestamps for data from all sources.
Data Normalization PART 3 IPv6 Address Formats
2001:db8:acad:1111:2222::33
2001:DB8:ACAD:1111:2222::33
2001:DB8:ACAD:1111:2222:0:0:33
2001:DB8:ACAD:1111:2222:0000:0000:0033
Data Normalization PART 4 MAC Formats
A7:03:DB:7C:91:AA
A7-03-DB-7C-91-AA
A703.DB7C.91AA
Data Normalization PART 5 Date Formats
Monday, July 24, 2017 7:39:35pm
Mon, 24 Jul 2017 19:39:35 +0000
2017-07-24T19:39:35+00:00
1500925175
Data normalization is required to simplify searching for correlated events.
If differently formatted values exist in the NSM data for IPv6 addresses, for example, a separate query term would need to be created for every variation in order for correlated events to be returned by the query.
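The following Python sketch illustrates the idea behind these transformations (it is not Logstash itself): each function maps the varying formats listed above to a single canonical form, so one query term matches all sources.

```python
# Minimal sketch of normalization: canonical forms for IPv6 addresses,
# MAC addresses, and timestamps, using only the Python standard library.
import ipaddress
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def normalize_ipv6(addr: str) -> str:
    # ipaddress compresses zero runs and lowercases hex digits, so every
    # IPv6 spelling above becomes 2001:db8:acad:1111:2222::33.
    return str(ipaddress.ip_address(addr))

def normalize_mac(mac: str) -> str:
    # Strip separators, then regroup as lowercase colon-separated pairs.
    digits = "".join(c for c in mac if c.isalnum()).lower()
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

def normalize_time(value) -> str:
    # Accept an RFC 2822 date string or a Unix timestamp; emit ISO 8601 UTC.
    if isinstance(value, (int, float)):
        dt = datetime.fromtimestamp(value, tz=timezone.utc)
    else:
        dt = parsedate_to_datetime(value)
    return dt.isoformat()

print(normalize_ipv6("2001:DB8:ACAD:1111:2222:0000:0000:0033"))
print(normalize_mac("A7-03-DB-7C-91-AA"))
print(normalize_time("Mon, 24 Jul 2017 19:39:35 +0000"))
print(normalize_time(1500925175))  # same instant as the string above
```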
Data Archiving
Everyone would love the security of collecting and saving everything, just in case.
However, retaining NSM data indefinitely is not feasible due to storage and access issues.
It should be noted that the retention period for certain types of network security information may be specified by compliance frameworks.
For example, the Payment Card Industry Data Security Standard (PCI DSS) requires that an audit trail of user activities related to protected information be retained for one year.
Data Archiving PART 2
Security Onion has different data retention periods for different types of NSM data.
For pcaps and raw Bro logs, a value assigned in the securityonion.conf file controls the percentage of disk space that can be used by log files.
By default, this value is set to 90%.
Data Archiving PART 3
For Elasticsearch, retention of data indices is controlled by the Elasticsearch Curator.
Curator runs in a Docker container and executes every minute according to cron jobs.
Curator logs its activity to curator.log. Curator defaults to closing indices older than 30 days.
To modify this, change CURATOR_CLOSE_DAYS in /etc/nsm/securityonion.conf.
As a disk reaches capacity, Curator deletes old indices to prevent your disk from filling up.
To change the limit, modify LOG_SIZE_LIMIT in /etc/nsm/securityonion.conf.
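As an illustration, a hypothetical excerpt of those settings in /etc/nsm/securityonion.conf might look like the following; the variable names come from the text above, but the LOG_SIZE_LIMIT value is only a placeholder.

```
# /etc/nsm/securityonion.conf (hypothetical excerpt)
CURATOR_CLOSE_DAYS=30   # close Elasticsearch indices older than 30 days
LOG_SIZE_LIMIT=1000     # placeholder value; set according to available disk
```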
Data Archiving PART 4
Sguil alert data is retained for 30 days by default.
This value is set in the securityonion.conf file.
Security Onion is known to require a lot of storage and RAM to run properly.
Depending on the size of the network, multiple terabytes of storage may be required.
Of course, Security Onion data can always be archived to external storage by a data archive system, depending on the needs and capabilities of the organization.
Working in Sguil
The primary duty of a cybersecurity analyst is the verification of security alerts.
Depending on the organization, the tools used to do this will vary.
For example, a ticketing system may be used to manage task assignment and documentation.
In Security Onion, the first place that a cybersecurity analyst will go to verify alerts is Sguil.