Chapter 20 - Software Development Security Flashcards
In which phase of the SW-CMM does an organization use quantitative measures to gain a detailed understanding of the development process?
Level 4: Managed. In this phase, management of the software process proceeds to the next level. Quantitative measures are utilized to gain a detailed understanding of the development process. SEI defines the key process areas for this level as Quantitative Process Management and Software Quality Management.
Differences between content-dependent and context-dependent access control
Context-dependent access control is often discussed alongside content-dependent access control because of the similarity of the terms. Context-dependent access control evaluates the big picture to make access control decisions. The key factor in context-dependent access control is how each object or packet or field relates to the overall activity or communication. Any single element may look innocuous by itself, but in a larger context that element may be revealed to be benign or malign. Content-dependent control is concerned primarily with the data stored by a field.
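A minimal sketch (hypothetical Python; the record fields, users, and thresholds are invented) contrasting the two ideas: the content-dependent check looks at the value of the data itself, while the context-dependent check looks at the bigger picture around the request, such as time of day and recent activity.

```python
from datetime import datetime

# Hypothetical record and request, used only for illustration.
record = {"owner": "alice", "classification": "confidential", "balance": 12_500}

def content_dependent_allow(user, field, record):
    """Decision driven by the data itself: e.g., reveal the balance
    field only to the owner of the record."""
    if field == "balance":
        return user == record["owner"]
    return True

def context_dependent_allow(user, request_time, recent_requests):
    """Decision driven by the bigger picture: e.g., deny access outside
    business hours or after a burst of prior requests."""
    business_hours = 8 <= request_time.hour < 18
    not_scraping = recent_requests < 100
    return business_hours and not_scraping

print(content_dependent_allow("bob", "balance", record))           # False
print(context_dependent_allow("bob", datetime(2024, 1, 2, 9), 3))  # True
```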
What type of information is used to form the basis of an expert system’s decision-making process?
Expert systems use a knowledge base consisting of a series of “if/then” statements to form decisions based on the previous experience of human experts.
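A minimal forward-chaining sketch (hypothetical Python; the rules and facts are invented) of how a knowledge base of if/then statements can infer new conclusions from known facts:

```python
# Toy rule engine: each rule is an if/then pair over known facts.
# The rule conditions and facts below are invented for illustration.
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "high_risk_patient"}, "recommend_antiviral"),
]

def infer(facts, rules):
    """Forward-chain: keep firing rules whose conditions are satisfied
    until no new facts can be derived (the inference engine)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "high_risk_patient"}, rules))
```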
Who is responsible for reviewing the result and deliverables within and at the end of each phase, as well as confirming compliance with requirements?
Quality Assurance personnel review results and deliverables within each phase and at the end of each phase, and confirm compliance with requirements. Their objective is to ensure the quality of the project by measuring the adherence of the project staff to the organization's software development life cycle (SDLC), advise on deviations, and propose recommendations for process improvement or greater control points when deviations occur.
Common database model
Common logical data models for databases include:
- Hierarchical database model
- Network model
- Relational model
- Object-relational database model
Code Signing
Code Signing is the process of digitally signing executables and scripts to confirm that the code is authentic and has not changed since you digitally signed it.
Code Signing is common practice among all major software vendors and is important to maintaining a trusted computing platform.
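A minimal sketch of the sign/verify step, assuming the third-party Python cryptography package and a raw Ed25519 key pair; a real code-signing scheme would use a publisher certificate chained to a trusted CA, but the underlying idea is the same:

```python
# Illustration only: real code signing distributes a certificate chained to a
# trusted CA; here a raw Ed25519 key pair stands in for that.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

executable = b"#!/bin/sh\necho 'hello world'\n"   # stand-in for a binary or script

private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(executable)          # publisher signs the code

public_key = private_key.public_key()             # distributed with the software
try:
    public_key.verify(signature, executable)      # raises if the code was altered
    print("signature valid: code is unmodified")
except InvalidSignature:
    print("signature invalid: code has been tampered with")
```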
Neural Network
A Neural Network based IDS monitors the general patterns of activity and traffic on the network and creates a database of normal activities within the system. This is similar to the statistical model but with added self-learning functionality.
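The baseline-learning idea can be illustrated without an actual neural network; the hypothetical sketch below (invented traffic figures) simply learns the statistics of normal activity and flags large deviations, which is the self-learning-baseline behavior the statistical and neural approaches share:

```python
import statistics

# Training: packets-per-second observed during known-normal operation.
normal_rates = [95, 102, 99, 101, 98, 97, 103, 100]
mean = statistics.mean(normal_rates)
stdev = statistics.stdev(normal_rates)

def is_anomalous(rate, threshold=3.0):
    """Flag traffic that deviates from the learned baseline by more
    than `threshold` standard deviations."""
    return abs(rate - mean) > threshold * stdev

print(is_anomalous(101))  # False: within the learned pattern of normal activity
print(is_anomalous(450))  # True: possible attack or misuse
```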
Final Acceptance Testing
Final Acceptance Testing has two major parts: Quality Assurance Testing (QAT), focusing on the technical aspects of the application, and User Acceptance Testing (UAT), focusing on the functional aspects of the application.
QAT focuses on documented specifications and the technology employed. It verifies that the application works as documented by testing the logical design and the technology itself. It also ensures that the application meets the documented technical specifications and deliverables. QAT is performed primarily by the IS department; the participation of end users is minimal and on request. QAT does not focus on functionality testing.
UAT supports the process of ensuring that the system is production ready and satisfies all documented requirements. The methods include:
- Definition of test strategies and procedures.
- Design of test cases and scenarios.
- Execution of the tests.
- Utilization of the result to verify system readiness.
Acceptance criteria are defined criteria that a deliverable must meet to satisfy the predefined needs of the user. A UAT plan must be documented for the final test of the completed system. The tests are written from a user's perspective and should test the system in a manner as close to production as possible.
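As an illustration, acceptance criteria can be captured as automated test cases; the sketch below uses Python's unittest and a hypothetical calculate_invoice_total function standing in for a real feature under test:

```python
import unittest

def calculate_invoice_total(items, tax_rate):
    """Hypothetical function under test, standing in for a real feature."""
    subtotal = sum(price * qty for price, qty in items)
    return round(subtotal * (1 + tax_rate), 2)

class InvoiceAcceptanceTests(unittest.TestCase):
    """Each test encodes one acceptance criterion from the UAT plan."""

    def test_total_includes_tax(self):
        self.assertEqual(calculate_invoice_total([(10.0, 2)], 0.08), 21.6)

    def test_empty_invoice_is_zero(self):
        self.assertEqual(calculate_invoice_total([], 0.08), 0.0)

if __name__ == "__main__":
    unittest.main()
```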
Which software development methodology uses minimal planning in favor of rapid prototyping?
Rapid application development (RAD) is a software development methodology that uses minimal planning in favor of rapid prototyping. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster and makes it easier to change requirements.
Rapid Application Development
CMMI five maturity levels
Level 1 - Initial (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process Capability is established from this level.
Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements.
Trusted front end database
If you are "retrofitting," you are adding to an existing database management system (DBMS). You could go back and redesign the entire DBMS, but that could be expensive and there is no telling what the effect would be on existing applications; in any case, that is redesigning, and the question states retrofitting. The most cost-effective way to add a layer of security on top, with the least effect on existing applications, is through a trusted front end.
Clark-Wilson is a synonym for this model as well. It was used to add more granular control to databases that provided inadequate controls or no controls at all. It is one of the most popular models today; any dynamic website with a back-end database is an example of this approach.
Such a model also introduces separation of duties by allowing subjects only the specific rights they need on the objects they access.
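A rough sketch of the idea (hypothetical Python; the roles, permission matrix, and table are invented): every query passes through a trusted front-end layer that grants each subject only the operations it needs before the statement ever reaches the underlying DBMS:

```python
import sqlite3

# Hypothetical permission matrix: subject -> operations allowed on the table.
PERMISSIONS = {
    "clerk":   {"SELECT"},
    "manager": {"SELECT", "INSERT"},
}

class TrustedFrontEnd:
    """Mediates all access to an existing DBMS that lacks granular controls."""

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")

    def execute(self, subject, operation, sql, params=()):
        # Enforce the subject's rights before the query touches the database.
        if operation not in PERMISSIONS.get(subject, set()):
            raise PermissionError(f"{subject} may not perform {operation}")
        return self.conn.execute(sql, params)

fe = TrustedFrontEnd()
fe.execute("manager", "INSERT", "INSERT INTO orders VALUES (?, ?)", (1, 99.5))
print(fe.execute("clerk", "SELECT", "SELECT * FROM orders").fetchall())
try:
    fe.execute("clerk", "INSERT", "INSERT INTO orders VALUES (?, ?)", (2, 10.0))
except PermissionError as err:
    print(err)  # clerk may not perform INSERT
```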
High-level language
High-level languages may be said to be beneficial because they enforce coding standards and can provide more security. On the other hand, higher-level languages automate certain functions and provide complicated operations for the program, implemented by the programming environment or tool, whose internal details may be poorly understood by the programmer. Therefore, it is possible that high-level languages may introduce security vulnerabilities in ways that are not apparent to the developer.
Software Development Verification vs. Validation:
Verification determines if the product accurately represents and meets the design specifications given to the developers. A product can be developed that does not match the original specifications. This step ensures that the specifications are properly met and closely followed by the development team.
Validation determines if the product provides the necessary solution for the intended real-world problem. It validates whether or not the final product is what the user expected in the first place and whether or not it solves the problem it was intended to solve. In large projects, it is easy to lose sight of the overall goal. This exercise ensures that the main goal of the project is met.
The object-relational and object-oriented models are better suited to managing complex data such as required for which of the following?
The object-relational and object-oriented models are better suited to managing complex data such as required for computer-aided design and imaging.
Database view
A view is a virtual table derived from other tables. It can be used to restrict access to certain information within the database, to hide attributes, and to implement content-dependent access restrictions. It does not implement referential integrity.
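A minimal sketch using Python's built-in sqlite3 module (the table and columns are invented): the view hides the salary attribute and restricts rows to one department, a content-dependent restriction implemented without touching the base table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Alice", "IT", 90000), ("Bob", "HR", 70000)])

# The view hides the salary attribute and restricts rows to the IT department,
# so users granted access to the view never see the underlying sensitive data.
conn.execute("""
    CREATE VIEW it_staff AS
    SELECT name, department FROM employees WHERE department = 'IT'
""")

print(conn.execute("SELECT * FROM it_staff").fetchall())  # [('Alice', 'IT')]
```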
In which phase of the System Development Lifecycle (SDLC) is Security Accreditation Obtained?
Testing and evaluation control
The main approaches of KDD (Knowledge Discovery in Databases)
- Probabilistic approach: uses graphical representation models to compare different knowledge representations. The models are based on probabilities and data independencies. The probabilistic models are useful for applications involving uncertainty, such as those used in planning and control systems.
- Statistical approach: uses rule discovery and is based on data relationships. Learning algorithm can automatically select useful data relationship paths and attributes. These paths and attributes are then used to construct rules for discovering meaningful information. This approach is used to generalize patterns in the data and to construct rules from the noted patterns. An example of the statistical approach is OLAP.
- Classification approach: groups data according to similarities. One example is a pattern discovery and data-cleaning model that reduces a large database to only a few specific records. By eliminating redundant and non-important data, the discovery of patterns in the data is simplified.
- Deviation and trend analysis: uses filtering techniques to detect patterns. An example is an intrusion detection system that filters a large volume of data so that only the pertinent data is analyzed.
- Neural networks: methods used to develop classification, regression, association, and segmentation models. A neural net method organizes data into nodes that are arranged in layers, and links between the nodes have specific weighting classifications. The neural net is helpful in detecting the associations among the input patterns or relationships. It is also considered a learning system because new information is automatically incorporated into the system. However, the value and relevance of the decisions made by the neural network are only as good as the experience it is given. The greater the experience, the better the decision. Note that neural nets have a specific problem in terms of an individual's ability to substantiate processing in that they are subject to superstitious knowledge, which is a tendency to identify relations when no relations actually exist. More sophisticated neural nets are less subject to this problem.
- Expert system approach: uses a knowledge base (a collection of all the data, or knowledge, on a particular matter) and a set of algorithms and/or rules that infer new facts from knowledge and incoming data. The knowledge base could be the human experience that is available in an organization. Because the system reacts to a set of rules, if the rules are faulty, the response will also be faulty. Also, because human decision making is removed from the point of action, if an error were to occur, the reaction time from a human would be longer.
- Hybrid approach: a combination of more than one approach that provides a more powerful and useful system.
ACID
Atomicity - Atomicity requires that each transaction is “all or nothing”: if one part of the transaction fails, the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. To the outside world, a committed transaction appears (by its effects on the database) to be indivisible (“atomic”), and an aborted transaction does not happen.
Consistency - The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including but not limited to constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors do not violate any defined rules.
Isolation - The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed serially, i.e., one after the other. Providing isolation is the main goal of concurrency control. Depending on the concurrency control method, the effects of an incomplete transaction might not even be visible to other transactions.
Durability - Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in a non-volatile memory.
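A small sketch of atomicity and consistency using Python's built-in sqlite3 module (the accounts and amounts are invented): the transfer either applies both updates or neither, and a CHECK constraint stands in for a consistency rule:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100.0), ("bob", 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Both updates succeed or neither does (atomicity); the CHECK
    constraint enforces a consistency rule (no negative balances)."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))
    except sqlite3.IntegrityError:
        print("transfer aborted, database state unchanged")

transfer(conn, "alice", "bob", 500.0)  # would overdraw alice: entire transaction rolls back
print(conn.execute("SELECT * FROM accounts").fetchall())  # balances unchanged
```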