Domain 8. Chapter 20 Flashcards

1
Q

Domain 8. Chapter 20
1. Introducing Systems Development Controls
1.1 Software Development
1.2 Systems Development Lifecycle
1.3 Lifecycle Models
1.4 Change and Configuration Management
1.5 The DevOps Approach
1.6 Application Programming Interfaces
1.7 Software Testing
1.8 Code Repositories
1.9 Service-Level Agreements
1.10 Third-Party Software Acquisition
2. Establishing Databases and Data Warehousing
2.1 Database Management System Architecture
2.2 Database Transactions
2.3 Security for Multilevel Databases
2.4 Concurrency
2.5 Aggregation
2.6 Inference
2.7 Other Security Mechanisms
2.8 Open Database Connectivity
2.9 NoSQL
3. Storage Threats
4. Understanding Knowledge-Based Systems
4.1 Expert Systems
4.2 Machine Learning
4.3 Neural Networks

A
2
Q
  1. Introducing Systems Development Controls
    1.1 Software Development
    1.1.1 Programming Languages
    You might not know that several types of languages can be used simultaneously by the same system.
    The instructions that a computer follows consist of a long series of binary digits in a language known as machine language.
    Assembly language is a higher-level alternative that uses mnemonics to represent the basic instruction set of a CPU. A task as simple as adding two numbers together could take five or six lines of assembly code!
    Programmers therefore prefer to use high-level languages, such as Python, C++, Ruby, R, Java, and Visual Basic.

Once programmers are ready to execute their programs, two options are available to them: compilation and interpretation.
Some languages (such as C, Java, and Fortran) are compiled languages. A compiler converts source code written in a higher-level language into an executable file designed for use on a specific operating system.
It’s not possible to directly view or modify the software instructions in an executable file. However, specialists in the field of reverse engineering may be able to reverse the compilation process with the assistance of tools known as decompilers and disassemblers.
Code protection techniques seek to either prevent or impede the use of decompilers and disassemblers through a variety of techniques. For example, obfuscation techniques seek to modify executables to make it more difficult to retrieve intelligible code from them.
In some cases, languages rely on runtime environments to allow the portable execution of code across different operating systems. The Java virtual machine (JVM) is a well-known example of this type of runtime. Users install the JVM runtime on their systems and may then rely on that runtime to execute compiled Java code.
Other languages (such as Python, R, JavaScript, and VBScript) are interpreted languages. When these languages are used, the programmer distributes the source code, which contains instructions in the higher-level language. When end users execute the program on their systems, that automatically triggers the use of an interpreter to execute the source code stored on the system. If the user opens the source code file, they’re able to view the original instructions written by the programmer.
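A minimal sketch using Python's standard library (not from the source text) to make these ideas concrete: source code is human-readable, its compiled form is not, and a disassembler can partially recover the instructions, much as the reverse engineering tools described above do.

    import dis

    source = "total = 2 + 3"                      # human-readable source code
    bytecode = compile(source, "<demo>", "exec")  # compiled, non-readable form

    # A disassembler's view of the compiled instructions, analogous to the
    # disassemblers used in reverse engineering:
    dis.dis(bytecode)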

A


3
Q

1.1.2 Libraries
Developers often rely on shared software libraries that contain reusable code. Many of these libraries are available as open source projects, whereas others may be commercially sold or maintained internally by a company.

To protect against vulnerabilities introduced through shared code, developers should be aware of the origins of their shared code and keep abreast of any security vulnerabilities that might be discovered in the libraries they use.

A


4
Q

1.1.3 Development Toolsets
Developers often use an integrated development environment (IDE). IDEs provide programmers with a single environment where they can write their code, test it, debug it, and compile it (if applicable).

A
5
Q

1.1.4 Object-Oriented Programming
Many modern programming languages, such as C++, Java, and the .NET languages, support the concept of object-oriented programming (OOP).
You can think of it as a group of objects that can be requested to perform certain operations or exhibit certain behaviors.
Objects work together to provide a system’s functionality or capabilities. For example, a banking program might have three object classes that correspond to accounts, account holders, and employees, respectively. When a new account is added to the system, a new instance, or copy, of the appropriate object is created to contain the details of that account.

Each object in the OOP model has methods that correspond to specific actions that can be taken on the object. For example, the account object can have methods to add funds, deduct funds, close the account, and transfer ownership.

Here are some common object-oriented programming terms you might come across in your work:
- Message
- Method
- Behavior The results or output exhibited by an object is a behavior.
- Class A collection of the common methods from a set of objects that defines the behavior of those objects is a class.
- Instance Objects are instances of or examples of classes that contain their methods.
- Inheritance Inheritance occurs when methods from a class (parent or superclass) are inherited by another subclass (child) or object.
- Delegation Delegation is the forwarding of a request by an object to another object or delegate. An object delegates if it does not have a method to handle the message.
- Polymorphism A polymorphism is the characteristic of an object that allows it to respond with different behaviors to the same message or method because of changes in external conditions.
- Cohesion Cohesion describes the strength of the relationship between the purposes of the methods within the same class. When all the methods have similar purposes, there is high cohesion, a desirable condition that promotes good software design principles. When the methods of a class have low cohesion, this is a sign that the system is not well designed.
- Coupling Coupling is the level of interaction between objects. Lower coupling means less interaction. Lower coupling provides better software design because objects are more independent and are easier to troubleshoot and update. Objects with low cohesion require lots of assistance from other objects to perform tasks and thus have high coupling. (A short sketch tying several of these terms together follows this list.)
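A minimal, hypothetical sketch in Python tying several of these terms together, continuing the banking example from earlier in this section:

    class Account:                            # class: defines common methods
        def __init__(self, balance=0):
            self.balance = balance            # each instance holds its own data

        def add_funds(self, amount):          # method: an action the object supports
            self.balance += amount

        def describe(self):                   # behavior: output exhibited by the object
            return f"Account balance: {self.balance}"

    class SavingsAccount(Account):            # inheritance: child reuses parent methods
        def describe(self):                   # polymorphism: same message, new behavior
            return f"Savings balance: {self.balance}"

    acct = Account(100)                       # instance: a copy created from the class
    acct.add_funds(50)                        # message: a request sent to the object
    print(acct.describe())                    # Account balance: 150
    print(SavingsAccount(200).describe())     # Savings balance: 200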

A
6
Q

1.1.5 Assurance
Assurance procedures are simply formalized processes by which trust is built into the lifecycle of a system. The Common Criteria provide a standardized approach to assurance used in government settings.

A
7
Q

1.1.6 Avoiding and Mitigating System Failure
You can employ many methods to avoid failure, including using input validation and creating fail-secure or fail-open procedures.
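A minimal sketch of input validation as a failure-avoidance technique; the field name and limits are hypothetical:

    def parse_quantity(raw):
        # Reject anything that is not a non-negative integer string.
        if not raw.isdigit():
            raise ValueError(f"invalid quantity: {raw!r}")
        qty = int(raw)
        # Enforce an expected business range before further processing.
        if not 1 <= qty <= 100:
            raise ValueError("quantity out of range")
        return qty

    print(parse_quantity("42"))   # 42; malformed input raises ValueError instead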

A
8
Q

1.1.6.1 Authentication and Session Management
The level of authentication required by an application should be tied directly to the level of sensitivity of that application.
In most cases, developers should seek to integrate their applications with the organization’s existing authentication systems. It is generally more secure to make use of an existing, hardened authentication system than to try to develop an authentication system for a specific application.

This includes ensuring that any cookies used for web session management be transmitted only over secure, encrypted channels and that the identifiers used in those cookies be long and randomly generated. Session tokens should expire after a specified period of time and require that the user reauthenticate.
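A minimal sketch of those token properties using only Python's standard library; the in-memory store and the 15-minute lifetime are illustrative assumptions:

    import secrets
    import time

    SESSION_LIFETIME = 15 * 60              # assumed policy: 15-minute expiry
    sessions = {}                           # in-memory store, for illustration only

    def create_session(user_id):
        token = secrets.token_urlsafe(32)   # long, randomly generated identifier
        sessions[token] = (user_id, time.time() + SESSION_LIFETIME)
        return token

    def validate_session(token):
        record = sessions.get(token)
        if record is None or time.time() > record[1]:
            sessions.pop(token, None)       # expired: the user must reauthenticate
            return None
        return record[0]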

A
9
Q

1.1.6.2 Error Handling
Developers love detailed error messages for debugging.
Developers should disable detailed error messages (also known as debugging mode) on any servers and applications that are publicly accessible.
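A minimal sketch of the distinction; the failing process() function is a hypothetical stand-in for real business logic:

    import logging

    DEBUG = False  # debugging mode stays disabled on publicly accessible systems

    def process(request):                   # hypothetical business logic
        raise RuntimeError("connection to db01 refused")

    def handle_request(request):
        try:
            return process(request)
        except Exception:
            logging.exception("request failed")     # full detail goes to the log
            if DEBUG:
                raise                               # detailed errors in development only
            return "An internal error occurred."    # generic message for end users

    print(handle_request("GET /"))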

A
10
Q

1.1.6.3 Logging
Applications should be configured to send detailed logging of errors and other security events to a centralized log repository where cybersecurity analysts can review them.
The Open Web Application Security Project (OWASP) Secure Coding Practices suggest logging the following events (a brief logging sketch follows the list):

- Input validation failures
- Authentication attempts, especially failures
- Access control failures
- Tampering attempts
- Use of invalid or expired session tokens
- Exceptions raised by the operating system or applications
- Use of administrative privileges
- Transport Layer Security (TLS) failures
- Cryptographic errors
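A minimal sketch of forwarding two such events to a central repository with Python's standard logging module; the syslog host and port are hypothetical:

    import logging
    from logging.handlers import SysLogHandler

    security_log = logging.getLogger("app.security")
    security_log.setLevel(logging.INFO)
    # Forward events over UDP syslog to an assumed central log repository.
    security_log.addHandler(SysLogHandler(address=("logs.example.com", 514)))

    security_log.warning("Input validation failure: field=%s value=%r",
                         "email", "<script>")
    security_log.warning("Authentication failure: user=%s ip=%s",
                         "alice", "203.0.113.9")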

A
11
Q

1.1.6.4 Fail-Secure and Fail-Open
- The fail-secure failure state puts the system into a high level of security (and possibly even disables it entirely) until an administrator can diagnose the problem and restore the system to normal operation.
- The fail-open state allows users to bypass failed security controls, erring on the side of permissiveness.
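A hypothetical sketch contrasting the two failure states above; the raising authorization check simulates a security control that has just failed:

    def authorization_check(user, resource):
        # Stand-in for a real security control that is currently failing.
        raise ConnectionError("security control unavailable")

    def check_access(user, resource, fail_open=False):
        try:
            return authorization_check(user, resource)
        except ConnectionError:
            # Fail-secure: deny access until an administrator restores the control.
            # Fail-open: allow access, erring on the side of permissiveness.
            return fail_open

    print(check_access("alice", "report.pdf"))                  # False (fail-secure)
    print(check_access("alice", "report.pdf", fail_open=True))  # True (fail-open)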

A
12
Q

1.2 Systems Development Lifecycle
These core activities are essential to the development of sound, secure systems:
- Conceptual definition
- Functional requirements determination
- Control specifications development
- Design review
- Coding
- Code review walk-through
- System test review
- Maintenance and change management

A
13
Q

1.2.1 Conceptual Definition
The conceptual definition phase of systems development involves creating the basic concept statement for a system. It’s a simple statement agreed on by all interested stakeholders (the developers, customers, and management) that states the purpose of the project as well as the general system requirements.
At this point in the process, designers commonly identify the classification(s) of data that will be processed by the system and the applicable handling requirements.

A
14
Q

1.2.2 Functional Requirements Determination
In this phase, specific system functionalities are listed, and developers begin to think about how the parts of the system should interoperate to meet the functional requirements.

The three major characteristics of a functional requirement:
- Input(s) The data provided to a function
- Behavior The business logic describing what actions the system should take in response to different inputs
- Output(s) The data provided from a function

A
15
Q

1.2.3 Control Specifications Development
During the development of control specifications, you should analyze the system from a number of security perspectives.
1) First, adequate access controls must be designed into every system to ensure that only authorized users are allowed to access the system and that they are not permitted to exceed their level of authorization.
2) Second, the system must maintain the confidentiality of vital data through the use of appropriate encryption and data protection technologies.
3) Next, the system should provide both an audit trail to enforce individual accountability and a detective mechanism for illegitimate activity.
4) Finally, depending on the criticality of the system, availability and fault-tolerance issues should be addressed as corrective actions.

A
16
Q

1.2.4 Design Review
After the design team completes the formal design documents, a review meeting with the stakeholders should be held to ensure that everyone is in agreement that the process is still on track for the successful development of a system with the desired functionality. This design review meeting should include security professionals who can validate that the proposed design meets the control specifications developed in the previous phase.

A
17
Q

1.2.5 Coding
Developers should use the secure software coding principles discussed in this chapter to craft code that is consistent with the agreed-upon design and meets user requirements.

A
18
Q

1.2.6 Code Review Walk-Through
Project managers should schedule several code review walk-through meetings at various milestones throughout the coding process.

A
19
Q

1.2.7 Testing
Initial system testing uses development personnel to seek out any obvious errors.

Regression testing formalizes the process of verifying that the new code performs in the same manner as the old code, other than any changes expected as part of the new release.

The testing process typically culminates in user acceptance testing (UAT), where users verify that the code meets their requirements and formally accept it as ready to move into production use.

Once this phase is complete, the code may move to deployment.

A
20
Q

1.2.8 Maintenance and Change Management
Once a system is operational, a variety of maintenance tasks are necessary to ensure continued operation in the face of changing operational, data processing, storage, and environmental requirements. It’s essential that you have a skilled support team in place to handle any routine or unexpected maintenance.

A
21
Q

1.3 Lifecycle Models
Pioneers like Winston Royce and Barry Boehm proposed several software development lifecycle (SDLC) models to help guide the practice toward formalized processes.
Cybersecurity professionals should ensure that security principles are interwoven into the implementation of whatever model(s) the organization uses for software development.

A
22
Q

1.3.1 Waterfall Model
The original, traditional waterfall model was a simple design intended to proceed through sequential steps from inception to conclusion. As each stage is completed, the project moves into the next phase. The feedback loop characteristic allows the development team to return to the previous phase to correct defects discovered during the subsequent phase.

However, one of the major criticisms of this model is that it allows the developers to step back only one phase in the process. It does not make provisions for the discovery of errors at a later phase in the development cycle.

A
23
Q

1.3.2 Spiral Model
Allows for multiple iterations of a waterfall-style process
It is known as a metamodel, or a “model of models.”

The waterfall model focuses on a large-scale effort to deliver a finished system, whereas the spiral model focuses on iterating through a series of increasingly “finished” prototypes that allow for enhanced quality control.

A
24
Q

1.3.3 Agile Software Development
Developers increasingly embraced approaches to software development that eschewed the rigid models of the past in favor of approaches that placed an emphasis on the needs of the customer and on quickly developing new functionality that meets those needs in an iterative fashion. This philosophy was codified in the Manifesto for Agile Software Development.

Through this work we have come to value:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan.

The 12 principles, as stated in the Agile Manifesto, are as follows:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity—the art of maximizing the amount of work not done—is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

It’s important to note, however, that Agile is a philosophy and not a specific methodology. Several specific methodologies have emerged that take these Agile principles and define specific processes that implement them. These include Scrum, Kanban, Rapid Application Development (RAD), Agile Unified Process (AUP), the Dynamic Systems Development Model (DSDM), and Extreme Programming (XP).

Of these, the Scrum approach is the most popular.

A


25
Q

1.3.4 Capability Maturity Model (CMM)
The stages of the Software Capability Maturity Model (SW-CMM), developed by the Software Engineering Institute (SEI), are as follows:
Level 1: Initial. In this phase, you’ll often find hardworking people charging ahead in a disorganized fashion. There is usually little or no defined software development process.
Level 2: Repeatable. In this phase, basic lifecycle management processes are introduced. Reuse of code in an organized fashion begins to enter the picture, and repeatable results are expected from similar projects. SEI defines the key process areas for this level as Requirements Management, Software Project Planning, Software Project Tracking and Oversight, Software Subcontract Management, Software Quality Assurance, and Software Configuration Management.
Level 3: Defined. In this phase, software developers operate according to a set of formal, documented software development processes. All development projects take place within the constraints of the new standardized management model. SEI defines the key process areas for this level as Organization Process Focus, Organization Process Definition, Training Program, Integrated Software Management, Software Product Engineering, Intergroup Coordination, and Peer Reviews.
Level 4: Managed. In this phase, management of the software process proceeds to the next level. Quantitative measures are used to gain a detailed understanding of the development process. SEI defines the key process areas for this level as Quantitative Process Management and Software Quality Management.
Level 5: Optimizing. In the optimized organization, a process of continuous improvement occurs. Sophisticated software development processes are in place that ensure that feedback from one phase reaches to the previous phase to improve future results. SEI defines the key process areas for this level as Defect Prevention, Technology Change Management, and Process Change Management.

A
26
Q

1.3.5 Software Assurance Maturity Model (SAMM)

The Software Assurance Maturity Model (SAMM) is an open source project maintained by the Open Web Application Security Project (OWASP). It seeks to provide a framework for integrating security activities into the software development and maintenance process and to offer organizations the ability to assess their maturity.

SAMM divides the software development process into five business functions:
1) Governance. The activities an organization undertakes to manage its software development process. This function includes practices for strategy, metrics, policy, compliance, education, and guidance.
2) Design. The process used by the organization to define software requirements and create software. This function includes practices for threat modeling, threat assessment, security requirements, and security architecture.
3) Implementation. The process of building and deploying software components and managing flaws in those components. This function includes the secure build, secure deployment, and defect management practices.
4) Verification. The set of activities undertaken by the organization to confirm that code meets business and security requirements. This function includes architecture assessment, requirements-driven testing, and security testing.
5) Operations. The actions taken by an organization to maintain security throughout the software lifecycle after code is released. This function includes incident management, environment management, and operational management.

A
27
Q

1.3.6 IDEAL Model
The Software Engineering Institute also developed the IDEAL model for software development, which implements many of the SW-CMM attributes. The IDEAL model has five phases:

1: Initiating. In the initiating phase of the IDEAL model, the business reasons behind the change are outlined, support is built for the initiative, and the appropriate infrastructure is put in place.
2: Diagnosing. During the diagnosing phase, engineers analyze the current state of the organization and make general recommendations for change.
3: Establishing. In the establishing phase, the organization takes the general recommendations from the diagnosing phase and develops a specific plan of action that helps achieve those changes.
4: Acting. In the acting phase, it’s time to stop “talking the talk” and “walk the walk.” The organization develops solutions and then tests, refines, and implements them.
5: Learning. As with any quality improvement process, the organization must continuously analyze its efforts to determine whether it has achieved the desired goals, and when necessary, propose new actions to put the organization back on course.

A
28
Q

1.3.7 Gantt Charts and PERT
A Gantt chart is a type of bar chart that shows the interrelationships over time between projects and schedules. It provides a graphical illustration of a schedule that helps you plan, coordinate, and track specific tasks in a project.
Program Evaluation Review Technique (PERT) is a project-scheduling tool used to judge the size of a software product in development and calculate the standard deviation (SD) for risk assessment. PERT relates the estimated lowest possible size, the most likely size, and the highest possible size of each component. The PERT chart clearly shows the dependencies between different project tasks. Project managers can use these size estimates and dependencies to better manage the time of team members and perform task scheduling. PERT is used to direct improvements to project management and software coding in order to produce more efficient software.

A
29
Q

1.4 Change and Configuration Management
The change management process has three basic components:

  • Request Control. The request control process provides an organized framework within which users can request modifications, managers can conduct cost/benefit analysis, and developers can prioritize tasks.
  • Change Control. The change control process is used by developers to re-create the situation encountered by the user and to analyze the appropriate changes to remedy the situation. It also provides an organized framework within which multiple developers can create and test a solution prior to rolling it out into a production environment. Change control includes conforming to quality control restrictions, developing tools for update or change deployment, properly documenting any coded changes, and restricting the effects of new code to minimize diminishment of security.
  • Release Control. Once the changes are finalized, they must be approved for release through the release control procedure. An essential step of the release control process is to double-check and ensure that any code inserted as a programming aid during the change process (such as debugging code and/or backdoors) is removed before releasing the new software to production. This process also ensures that only approved changes are made to production systems. Release control should also include acceptance testing to ensure that any alterations to end-user work tasks are understood and functional.

Software configuration management (SCM) has four main components:
- Configuration Identification. During the configuration identification process, administrators document the configuration of covered software products throughout the organization.
- Configuration Control. The configuration control process ensures that changes to software versions are made in accordance with the change control and configuration management policies. Updates can be made only from authorized distributions in accordance with those policies.
- Configuration Status Accounting. Formalized procedures are used to keep track of all authorized changes that take place.
- Configuration Audit. A periodic configuration audit should be conducted to ensure that the actual production environment is consistent with the accounting records and that no unauthorized configuration changes have taken place.

A
30
Q

1.5 The DevOps Approach
The word DevOps is a combination of Development and Operations, symbolizing that these functions must merge and cooperate to meet business requirements.

The DevOps model is closely aligned with the Agile development approach and aims to dramatically decrease the time required to develop, test, and deploy software changes.

Some organizations even strive to reach the goal of continuous integration/continuous delivery (CI/CD), where code may roll out dozens or even hundreds of times per day.
This requires a high degree of automation, including integrating code repositories, the software configuration management process, and the movement of code between development, testing, and production environments.

For this reason, many people prefer to use the term DevSecOps to refer to the integration of development, security, and operations. The DevSecOps approach also supports the concept of software-defined security, where security controls are actively managed by code, allowing them to be directly integrated into the CI/CD pipeline.

A
31
Q

1.6 Application Programming Interfaces (API)
For cross-site functions to work properly, websites must interact with one another. Many organizations offer application programming interfaces (APIs) for this purpose.
APIs allow application developers to bypass traditional web pages and interact directly with the underlying service through function calls.

API developers must know when to require authentication and ensure that they verify credentials and authorization for every API call. This authentication is typically done by providing authorized API users with a complex API key that is passed with each API call. The back-end system validates this API key before processing a request, ensuring that the system making the request is authorized to make the specific API call.
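A minimal, hypothetical sketch of that validation step; the provisioned key set and response format are illustrative only:

    import hmac

    PROVISIONED_KEYS = {"hypothetical-api-key-12345"}   # assumed authorized keys

    def handle_api_call(api_key, action):
        # Verify the credential on every call, before any processing occurs.
        if not any(hmac.compare_digest(api_key, k) for k in PROVISIONED_KEYS):
            return {"status": 401, "error": "invalid API key"}
        return {"status": 200, "result": f"performed {action}"}

    print(handle_api_call("wrong-key", "list_accounts"))   # {'status': 401, ...}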

APIs must also be tested thoroughly for security flaws, just like any web application. You’ll learn more about this in the next section.

A
32
Q

1.7 Software Testing
Software testing should be performed before the software is released to the market.
The third-party test allows for a broader and more thorough test and prevents the bias and inclinations of the programmers from affecting the results of the test.

There are three different philosophies that you can adopt when applying software security testing techniques:

  • White-Box Testing White-box testing examines the internal logical structures of a program and steps through the code line by line, analyzing the program for potential errors. The key attribute of a white-box test is that the testers have access to the source code.
  • Black-Box Testing Black-box testing examines the program from a user perspective by providing a wide variety of input scenarios and inspecting the output. Black-box testers do not have access to the internal code. Final acceptance testing that occurs prior to system delivery is a common example of black-box testing.
  • Gray-Box Testing Gray-box testing combines the two approaches and is popular for software validation. In this approach, testers examine the software from a user perspective, analyzing inputs and outputs. They also have access to the source code and use it to help design their tests. They do not, however, analyze the inner workings of the program during their testing.
A
33
Q

1.8 Code Repositories
Code repositories act as a central storage point for developers to place their source code.
In addition, code repositories such as GitHub, Bitbucket, and SourceForge also provide version control, bug tracking, web hosting, release management, and communications functions that support software development.
Code repositories are often integrated with popular code management tools. For example, the git tool is popular among many software developers, and it is tightly integrated with GitHub and other repositories.

Developers must take care not to include sensitive information in public code repositories. This is particularly true of API keys.

Further worsening the situation, malicious hackers have written bots that scour public code repositories searching for exposed API keys. These bots may detect an inadvertently posted key in seconds, allowing the hacker to quickly provision massive computing resources before the developer even knows of their mistake!
Similarly, developers should also be careful to avoid placing passwords, internal server names, database names, and other sensitive information in code repositories.
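A hypothetical sketch of a pre-commit scan for such mistakes; the regular expression is illustrative and far from exhaustive:

    import re
    import sys

    # Matches assignments such as api_key = "AbC123xyz" (illustrative only).
    KEY_PATTERN = re.compile(
        r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*['\"][A-Za-z0-9/+_-]{12,}['\"]")

    def scan(path):
        with open(path, encoding="utf-8", errors="ignore") as f:
            for lineno, line in enumerate(f, start=1):
                if KEY_PATTERN.search(line):
                    print(f"{path}:{lineno}: possible exposed credential")
                    return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if all(scan(p) for p in sys.argv[1:]) else 1)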

A
34
Q

1.9 Service-Level Agreements
Using service-level agreements (SLAs) is an increasingly popular way to ensure that organizations providing services to internal and/or external customers maintain an appropriate level of service agreed on by both the service provider and the customer.
The following issues are commonly addressed in SLAs:
- System uptime (as a percentage of overall operating time);
- Maximum consecutive downtime (in seconds/minutes/and so on);
- Peak load;
- Average load;
- Responsibility for diagnostics;
- Failover time (if redundancy is in place).

A
35
Q

1.10 Third-Party Software Acquisition
Most of the software used by enterprises is not developed internally but purchased from third-party vendors. Commercial off-the-shelf (COTS) software is purchased to run on servers managed by the organization, either on premises or in an IaaS environment.
Other software is purchased and delivered over the internet through web browsers, in a software-as-a-service (SaaS) approach. Still more software is created and maintained by community-based open source software (OSS) projects. These open source projects are freely available for anyone to download and use, either directly or as a component of a larger system.
In the case of SaaS environments, security staff take on responsibility for monitoring the vendor’s security: audits, assessments, vulnerability scans, and other measures designed to verify that the vendor maintains proper controls.
The organization may also retain full or partial responsibility for legal compliance obligations, depending on the nature of the regulation and the agreement that is in place with the service provider.

Whenever an organization acquires any type of software, be it COTS or OSS, run on-premises or in the cloud, that software should be tested for security vulnerabilities. Organizations may conduct their own testing, rely on the results of tests provided by vendors, and/or hire third parties to conduct independent testing.

A
36
Q
  2. Establishing Databases and Data Warehousing
    2.1 Database Management System Architecture
    DBMS - database management system.
    RDBMSs - relational database management systems.
    2.1.1 Hierarchical and Distributed Databases
    A hierarchical data model combines records and fields that are related in a logical tree structure. This results in a one-to-many data model, where each node may have zero, one, or many children but only one parent.
    Examples: a corporate organization chart and the Domain Name System (DNS) records used on the internet.

The distributed data model has data stored in more than one database, but those databases are logically connected. The user perceives the database as a single entity, even though it consists of numerous parts interconnected over a network. Each field can have numerous children as well as numerous parents. Thus, the data mapping relationship for distributed databases is many-to-many.

A
37
Q

2.1.2 Relational Databases

A relational database consists of flat two-dimensional tables made up of rows and columns. In fact, each table looks similar to a spreadsheet file. The row and column structure provides for one-to-one data mapping relationships. The main building block of the relational database is the table (also known as a relation).
Each table contains a number of attributes, or fields. Each attribute corresponds to a column in the table.
Each record, or tuple, is represented by a row in the table.
The number of rows in the relation is referred to as cardinality, and the number of columns is the degree. The domain of an attribute is the set of allowable values that the attribute can take.
Records are identified using a variety of keys. Quite simply, keys are a subset of the fields of a table and are used to uniquely identify records. They are also used to join tables when you wish to cross-reference information. You should be familiar with three types of keys:
- Candidate Keys A candidate key is a subset of attributes that can be used to uniquely identify any record in a table.
- Primary Keys A primary key is selected from the set of candidate keys for a table to be used to uniquely identify the records in a table.
- Alternate Keys Any candidate key that is not selected as the primary key is referred to as an alternate key.
- Foreign Keys A foreign key is used to enforce relationships between two tables, also known as referential integrity.

All relational databases use a standard language, SQL, to provide users with a consistent interface for the storage, retrieval, and modification of data and for administrative control of the DBMS. SQL’s primary security feature is its granularity of authorization. This means that SQL allows you to set permissions at a very fine level of detail. You can limit user access by table, row, column, or even by individual cell in some cases.
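A minimal sketch of these structures using Python's built-in sqlite3 module; the table and column names continue the banking example, and GRANT-style permissions (which vary by DBMS) are omitted:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("PRAGMA foreign_keys = ON")  # enable referential integrity checks
    db.execute("""
        CREATE TABLE account_holder (
            holder_id INTEGER PRIMARY KEY,  -- primary key: uniquely identifies a tuple
            name      TEXT NOT NULL
        )""")
    db.execute("""
        CREATE TABLE account (
            account_id INTEGER PRIMARY KEY,
            holder_id  INTEGER NOT NULL
                REFERENCES account_holder (holder_id),  -- foreign key
            balance    REAL NOT NULL
        )""")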

A
38
Q

2.2 Database Transactions
Relational databases support the explicit and implicit use of transactions to ensure data integrity. Each transaction is a discrete set of SQL instructions that should either succeed or fail as a group. It’s not possible for one part of a transaction to succeed while another part fails.
When a transaction is rolled back, the database restores itself to the condition it was in before the transaction began.
Relational database transactions have four required characteristics: atomicity, consistency, isolation, and durability. Together, these attributes are known as the ACID model (a sketch of atomicity follows the list):
- Atomicity Database transactions must be atomic—that is, they must be an “all-or-nothing” affair. If any part of the transaction fails, the entire transaction must be rolled back as if it never occurred.
- Consistency All transactions must begin operating in an environment that is consistent with all of the database’s rules (for example, all records have a unique primary key). When the transaction is complete, the database must again be consistent with the rules, regardless of whether those rules were violated during the processing of the transaction itself. No other transaction should ever be able to use any inconsistent data that might be generated during the execution of another transaction.
- Isolation The isolation principle requires that transactions operate separately from each other. If a database receives two SQL transactions that modify the same data, one transaction must be completed in its entirety before the other transaction is allowed to modify the same data. This prevents one transaction from working with invalid data generated as an intermediate step by another transaction.
- Durability Database transactions must be durable. That is, once they are committed to the database, they must be preserved. Databases ensure durability through the use of backup mechanisms, such as transaction logs.
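A minimal sqlite3 sketch of atomicity, continuing the banking example: the debit and the credit succeed or fail as a group.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE account (account_id INTEGER PRIMARY KEY, balance REAL)")
    db.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100.0), (2, 50.0)])

    try:
        with db:  # opens a transaction; commits on success, rolls back on error
            db.execute("UPDATE account SET balance = balance - 75 WHERE account_id = 1")
            cur = db.execute("UPDATE account SET balance = balance + 75 "
                             "WHERE account_id = 99")  # nonexistent account
            if cur.rowcount == 0:
                raise ValueError("destination account not found")  # forces rollback
    except ValueError:
        pass

    print(db.execute("SELECT balance FROM account WHERE account_id = 1").fetchone())
    # (100.0,) -- the debit was rolled back together with the failed credit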

A
39
Q

2.3 Security for Multilevel Databases
The security labels assigned to data objects and individual users in the organization should be extended to the organization’s databases.
Mixing data with different classification levels and/or need-to-know requirements, known as database contamination, is a significant security challenge. Often, administrators will deploy a trusted front end to add multilevel security to a legacy or insecure DBMS.
Another way to implement multilevel security in a database is through the use of database views. Views are simply SQL statements that present data to the user as if the views were tables themselves. Because views are so flexible, many database administrators use them as a security tool, allowing users to interact only with limited views rather than with the raw tables of data underlying them.
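A minimal sqlite3 sketch of a view used as a security tool; the sensitive column is hypothetical:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE account (
            account_id INTEGER PRIMARY KEY,
            holder_ssn TEXT,   -- sensitive field hidden from ordinary users
            balance    REAL
        )""")
    db.execute("INSERT INTO account VALUES (1, '078-05-1120', 100.0)")

    # Ordinary users interact only with the view, never with the raw table.
    db.execute("CREATE VIEW account_public AS "
               "SELECT account_id, balance FROM account")
    print(db.execute("SELECT * FROM account_public").fetchall())   # [(1, 100.0)]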

A


40
Q

2.4 Concurrency
Concurrency, or edit control, is a preventive security mechanism that endeavors to make certain that the information stored in the database is always correct or at least has its integrity and availability protected.
Databases that fail to implement concurrency correctly may suffer from the following issues:
- Lost Updates Occur when two different processes make updates to a database, unaware of each other’s activity.
- Dirty Reads Occur when a process reads a record from a transaction that did not successfully commit.

Concurrency uses a “lock” feature to allow one user to make changes while denying other users the ability to view or change the same data elements at the same time.
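A minimal sketch of the lock concept using Python threads: without the lock, two writers could each read the same balance and overwrite the other's deposit, producing a lost update.

    import threading

    balance = 0
    lock = threading.Lock()

    def deposit(amount):
        global balance
        with lock:                          # one writer at a time; others wait
            current = balance               # read
            balance = current + amount      # write based on the value just read

    threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(1000)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(balance)                          # 1000 -- no lost updates under the lock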

A


41
Q

2.5 Aggregation
SQL provides a number of functions that combine records from one or more tables to produce potentially useful information. This process is called aggregation.

Aggregation attacks are used to collect numerous low-level security items or low-value items and combine them to create something of a higher security level or value. In other words, a person or group may be able to collect multiple facts about or from a system and then use these facts to launch an attack.
For this reason, it’s especially important for database security administrators to strictly control access to aggregate functions and adequately assess the potential information they may reveal to unauthorized individuals.
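A minimal sqlite3 sketch of aggregation: each salary row alone is low-value, but an aggregate function combines them into more sensitive information (an average over a two-person department reveals a great deal about each member).

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE salary (employee TEXT, dept TEXT, amount REAL)")
    db.executemany("INSERT INTO salary VALUES (?, ?, ?)",
                   [("alice", "security", 95000.0), ("bob", "security", 105000.0)])

    # An aggregate function deriving new information from low-level records.
    print(db.execute("SELECT dept, AVG(amount) FROM salary GROUP BY dept").fetchall())
    # [('security', 100000.0)]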

A
42
Q

2.6 Inference
The database security issues posed by inference attacks are similar to those posed by the threat of data aggregation. However, inference makes use of the human mind’s deductive capacity rather than the raw mathematical ability of modern database platforms.
As with aggregation, the best defense against inference attacks is to maintain constant vigilance over the permissions granted to individual users. Furthermore, intentional blurring of data may be used to prevent the inference of sensitive information. Finally, you can use database partitioning (discussed in the next section) to help subvert these attacks.

A
43
Q

2.7 Other Security Mechanisms
2.7.1 Semantic integrity ensures that user actions don’t violate any structural rules. It also checks that all stored data types are within valid domain ranges, ensures that only logical values exist, and confirms that the system complies with any and all uniqueness constraints.
2.7.2 Employing time and date stamps helps maintain data integrity and availability, especially in distributed database systems. All changes are applied to all members, but they are implemented in the correct chronological order.
2.7.3 Content-dependent access control is an example of granular object control. Because decisions must be made on an object-by-object basis, content-dependent control increases processing overhead. Another form of granular control is cell suppression. Cell suppression is the concept of hiding individual database fields or cells or imposing more security restrictions on them.
2.7.4 Context-dependent access control evaluates the big picture to make access control decisions. The key factor in context-dependent access control is how each object or packet or field relates to the overall activity or communication.
2.7.5 Database partitioning helps subvert aggregation and inference vulnerabilities. Database partitioning is the process of splitting a single database into multiple parts, each with a unique and distinct security level or type of content.
2.7.6 Polyinstantiation, in the context of databases, occurs when two or more rows in the same relational database table appear to have identical primary key elements but contain different data for use at differing classification levels.
2.7.7 Finally, administrators can insert false or misleading data into a DBMS in order to redirect or thwart information confidentiality attacks. This is a concept known as noise and perturbation.

A
44
Q
  2.8 Open Database Connectivity
    Open Database Connectivity (ODBC) is a database feature that allows applications to communicate with different types of databases without having to be directly programmed for interaction with each type. ODBC acts as a proxy between applications and back-end database drivers, giving application programmers greater freedom in creating solutions without having to worry about the back-end database system.
    2.9 NoSQL
    As database technology evolves, many organizations are turning away from the relational model for cases where they require increased speed or their data does not neatly fit into tabular form. NoSQL databases are a class of databases that use models other than the relational model to store data.
    There are many different types of NoSQL database (see the sketch after this list):
    - Key/value stores are perhaps the simplest possible form of database. They store information in key/value pairs, where the key is essentially an index used to uniquely identify a record, which consists of a data value. Key/value stores are useful for high-speed applications and very large datasets where the rigid structure of a relational model would require significant, and perhaps unnecessary, overhead.
    - Graph databases store data in graph format, using nodes to represent objects and edges to represent relationships. They are useful for representing any type of network, such as social networks, geographic locations, and other datasets that lend themselves to graph representations.
    - Document stores are similar to key/value stores in that they store information using keys, but the type of information they store is typically more complex than that in a key/value store and is in the form of a document. Common document types used in document stores include XML and JSON.
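    A minimal sketch contrasting a key/value record with a document-store record; plain Python dictionaries stand in for real stores here:

        import json

        # Key/value store: the key indexes a simple value.
        kv_store = {"user:1001": "alice"}

        # Document store: the key indexes a structured document (JSON here).
        doc_store = {"user:1001": json.dumps({
            "user_id": 1001,
            "name": "alice",
            "roles": ["analyst", "developer"],
        })}

        print(kv_store["user:1001"])                        # alice
        print(json.loads(doc_store["user:1001"])["roles"])  # ['analyst', 'developer']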
A
45
Q
  3. Storage Threats
  First, the threat of illegitimate access to storage resources exists no matter what type of storage is in use. If administrators do not implement adequate file system access controls, an intruder might stumble across sensitive data simply by browsing the file system. In more sensitive environments, administrators should also protect against attacks that involve bypassing operating system controls and directly accessing the physical storage media to retrieve data. This is best accomplished through the use of an encrypted file system, which is accessible only through the primary operating system.
    Furthermore, systems that operate in a multilevel security environment should provide adequate controls to ensure that shared memory and storage resources are set up with appropriate controls so that data from one classification level is not readable at a lower classification level.
  Covert channel attacks pose the second primary threat against data storage resources. Covert storage channels allow the transmission of sensitive data between classification levels through the direct or indirect manipulation of shared storage media. This may be as simple as writing sensitive data to an inadvertently shared portion of memory or physical storage. More complex covert storage channels might be used to manipulate the amount of free space available on a disk or the size of a file to covertly convey information between security levels.
A
46
Q
  4. Understanding Knowledge-Based Systems
    4.1 Expert Systems
    Expert systems seek to embody the accumulated knowledge of experts on a particular subject and apply it in a consistent fashion to future decisions.
    Every expert system has two main components: the knowledge base and the inference engine.
    The knowledge base contains the rules known by an expert system. It seeks to codify the knowledge of human experts in a series of “if/then” statements.
    The second major component of an expert system—the inference engine—analyzes information in the knowledge base to arrive at the appropriate decision. The expert system user employs some sort of user interface to provide the inference engine with details about the current situation, and the inference engine uses a combination of logical reasoning and fuzzy logic techniques to draw a conclusion based on past experience.
A


47
Q

4.2 Machine Learning
Machine learning techniques use analytic capabilities to develop knowledge from datasets without the direct application of human insight. The core approach of machine learning is to allow the computer to analyze and learn directly from data, developing and updating models of activity.
Machine learning techniques fall into two major categories (see the sketch after this list):
- Supervised learning techniques use labeled data for training. The analyst creating a machine learning model provides a dataset along with the correct answers and allows the algorithm to develop a model that may then be applied to future cases. For example, if an analyst would like to develop a model of malicious system logins, the analyst would provide a dataset containing information about logins to the system over a period of time and indicate which were malicious. The algorithm would use this information to develop a model of malicious logins.
- Unsupervised learning techniques use unlabeled data for training. The dataset provided to the algorithm does not contain the “correct” answers; instead, the algorithm is asked to develop a model independently. In the case of logins, the algorithm might be asked to identify groups of similar logins. An analyst could then look at the groups developed by the algorithm and attempt to identify groups that may be malicious.
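A minimal sketch of both categories using scikit-learn (an assumed third-party dependency); the feature vectors are hypothetical login attributes such as [failed_attempts, hour_of_day]:

    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    logins = [[0, 9], [1, 10], [9, 3], [8, 2]]      # hypothetical login records

    # Supervised: labeled data (0 = benign, 1 = malicious) trains the model.
    labels = [0, 0, 1, 1]
    model = LogisticRegression().fit(logins, labels)
    print(model.predict([[7, 4]]))                   # likely [1] (malicious)

    # Unsupervised: no labels; the algorithm groups similar logins on its own,
    # leaving an analyst to judge which group looks malicious.
    print(KMeans(n_clusters=2, n_init=10).fit_predict(logins))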

A
48
Q

4.3 Neural Networks
In neural networks, chains of computational units are used in an attempt to imitate the biological reasoning process of the human mind. In an expert system, a series of rules is stored in a knowledge base, whereas in a neural network, a long chain of computational decisions that feed into each other and eventually sum to produce the desired output is set up. Neural networks are an extension of machine learning techniques and are also commonly referred to as deep learning or cognitive systems.

Benefits of neural networks include linearity, input-output mapping, and adaptivity. These benefits are evident in the implementations of neural networks for voice recognition, face recognition, weather prediction, and the exploration of models of thinking and consciousness.

Typical neural networks involve many layers of summation, each of which requires weighting information to reflect the relative importance of the calculation in the overall decision-making process. The weights must be custom-tailored for each type of decision the neural network is expected to make. This is accomplished through the use of a training period during which the network is provided with inputs for which the proper decision is known. The algorithm then works backward from these decisions to determine the proper weights for each node in the computational chain. This activity is performed using what is known as the Delta rule or learning rule. Through the use of the Delta rule, neural networks are able to learn from experience.
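A minimal sketch of delta-rule training for a single linear unit; the data and learning rate are illustrative. The weights are repeatedly nudged in proportion to the error between the known-correct output and the unit's current output:

    inputs  = [(0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]  # hypothetical training inputs
    targets = [0.0, 1.0, 1.0]                       # known-correct decisions
    w = [0.1, 0.1]                                  # initial weights
    rate = 0.5                                      # learning rate

    for _ in range(100):                            # the training period
        for (x1, x2), t in zip(inputs, targets):
            y = w[0] * x1 + w[1] * x2               # weighted summation
            error = t - y
            w[0] += rate * error * x1               # delta rule: adjust each weight
            w[1] += rate * error * x2               # in proportion to its input

    print([round(v, 2) for v in w])                 # approximately [1.0, 0.0]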

Knowledge-based analytic techniques have great applications in the field of computer security. One of the major advantages offered by these systems is their capability to rapidly make consistent decisions. One of the major problems in computer security is the inability of system administrators to consistently and thoroughly analyze massive amounts of log and audit trail data to look for anomalies.

A