Lesson 15: Summarizing Secure Application Development Concepts Flashcards

1
Q

Software exploitation

A

means an attack that targets a vulnerability in OS or application software. Applications such as web servers, web browsers, email clients, and databases are often targeted. An application vulnerability is a design flaw that can cause the application security system to be circumvented or that will cause the application to crash. Typically, vulnerabilities can only be exploited in quite specific circumstances but because of the complexity of modern software and the speed with which new versions must be released to market, almost no software is free from vulnerabilities.

2
Q

zero-day exploit

A

Most vulnerabilities are discovered by software and security researchers, who notify the vendor to give them time to patch the vulnerability before releasing details to the wider public. A vulnerability that is exploited before the developer knows about it or can release a patch is called a zero-day exploit. These can be extremely destructive, as it can take the vendor a lot of time to develop a patch, leaving systems vulnerable for days, weeks, or even years.

3
Q

input validation

A

Most software accepts user input of some kind, whether the input is typed manually or passed to the program by another program (such as a browser passing a URL to a web server). Good programming practice dictates that input should be tested to ensure that it is valid (that is, the sort of data expected by the program). An input validation attack passes invalid data to the application, and because the routine's input handling is inadequate, it causes the application or even the OS to behave in an unexpected way.

As discussed earlier, the primary vector for attacking applications is to exploit faulty input validation. Input could include user data entered into a form or URL passed by another application or link. Malicious input could be crafted to perform an overflow attack or some type of injection attack. To mitigate this risk, all input methods should be documented with a view to reducing the potential attack surface exposed by the application. There must be routines to check user input, and anything that does not conform to what is required must be rejected.
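
As a minimal illustration, the following Python sketch applies an allow-list check to a hypothetical username field; the field name and pattern are assumptions for the example, not part of the lesson text.

```python
import re

# Allow-list pattern for a hypothetical username field (assumed rule:
# 3-20 letters, digits, or underscores). Anything else is rejected.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value: str) -> str:
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")   # reject non-conforming input
    return value

print(validate_username("alice_01"))           # accepted
# validate_username("alice'; DROP TABLE users") would raise ValueError
```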

4
Q

There are many ways of exploiting improper input handling, but many attacks can be described as either overflow-type attacks or injection-type attacks:

A
  • Overflow—the attacker submits input that is too large for the variable the application has assigned to store it.
  • Injection—the attacker embeds code within the input or appends code to it that executes when the server processes the submission.

When an attacker tries to exploit improper input handling, the result might simply be to crash the process hosting the code or even the OS (performing Denial of Service). The attacker may be able to use the exploit to obtain sufficient privileges to run whatever malware (or arbitrary code) he or she chooses. A successful exploit can also facilitate data exfiltration from applications, databases, and operating systems if it allows the adversary to obtain privileges over the data that they would not normally have.

5
Q

buffer overflow

A

To exploit a buffer overflow vulnerability, the attacker passes data that deliberately overfills the buffer (an area of memory) that the application reserves to store the expected data.
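
Buffer overflows occur in native code, but the mitigation—checking input length against the size of the reserved buffer before copying—can be sketched in Python using ctypes. The buffer size and copy routine here are illustrative assumptions.

```python
import ctypes

BUF_SIZE = 16
buf = ctypes.create_string_buffer(BUF_SIZE)    # fixed-size reserved buffer

def safe_copy(data: bytes) -> None:
    # Refuse input that would overfill the reserved buffer
    # (leaving room for the terminating NUL byte).
    if len(data) >= BUF_SIZE:
        raise ValueError("input larger than reserved buffer")
    ctypes.memmove(buf, data, len(data))

safe_copy(b"short input")     # fits
# safe_copy(b"A" * 64)        # rejected rather than overflowing the buffer
```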

6
Q

There are three principal exploits of buffer overflow:

A
  • Stack overflow—the stack is an area of memory used by a program subroutine. It includes a return address, which is the location of the program that called the subroutine. An attacker could use a buffer overflow to change the return address, allowing the attacker to run arbitrary code on the system. Two examples of this are the Code Red worm, which targeted Microsoft’s IIS web server (version 5) and the SQLSlammer worm, which targeted Microsoft SQL Server® 2000.
  • Heap overflow—a heap is an area of memory allocated by the application during execution to store a variable of some sort. A heap overflow can overwrite those variables, with unexpected effects. An example is a known vulnerability in Microsoft’s GDI+ processing of JPEG images.
  • Array index overflow—an array is a type of variable designed to store multiple values. It is possible to exploit unsecure code to load the array with more values than it expects, creating an exception that could be exploited.
7
Q

integer overflow

A

An integer is a positive or negative number with no fractional component (a whole number). Integers are widely used as a data type, where they are commonly defined with fixed lower and upper bounds. An integer overflow attack causes the target software to calculate a value that exceeds these bounds. This may cause a positive number to become negative (changing a bank debit to a credit, for instance). It could also be used where the software is calculating a buffer size; if the attacker is able to make the buffer smaller than it should be, he or she may then be able to launch a buffer overflow attack.
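
A sketch of the wraparound effect, assuming a signed 32-bit integer as commonly used in C code or database schemas; ctypes.c_int32 is used here only to model that fixed width.

```python
import ctypes

balance = ctypes.c_int32(2_147_483_647)   # maximum signed 32-bit value
balance.value += 1                        # fixed-width arithmetic wraps around
print(balance.value)                      # -2147483648: positive became negative
```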

8
Q

Race conditions

A

occur when the outcome from execution processes is directly dependent on the order and timing of certain events, and those events fail to execute in the order and timing intended by the developer. A race condition vulnerability is typically found where multiple threads are attempting to write a variable or object at the same memory location. Race conditions have been used as an anti-virus evasion technique. In 2016, the Linux® kernel was discovered to have an exploitable race condition vulnerability, known as Dirty COW (https://www.theregister.co.uk/2016/10/21/linux_privilege_escalation_hole).

This type of vulnerability is mitigated by ensuring that a memory object is locked when one thread is manipulating it.
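
A minimal Python sketch of that mitigation: multiple threads update the same variable, and a lock ensures the read-modify-write sequence cannot interleave. The counter and thread counts are arbitrary.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        with lock:            # lock the shared object while manipulating it
            counter += 1      # read-modify-write can no longer interleave

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 400000 every run; without the lock, updates can be lost
```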

9
Q

pointer

A

reference to an object at a particular memory location. Attempting to access that memory address is called dereferencing. If the pointer has been set to a null value (perhaps by some malicious process altering the execution environment), this creates a null pointer type of exception and the process will crash. Programmers can use logic statements to test that a pointer is not null before trying to use it.
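
In a managed language, the equivalent guard is a check for a null reference before use; the sketch below assumes a hypothetical user object with a display_name attribute.

```python
def get_display_name(user) -> str:
    # Test that the reference is not null (None) before dereferencing it.
    if user is None:
        return "guest"
    return user.display_name

print(get_display_name(None))   # "guest" instead of a crash (AttributeError)
```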

10
Q

Memory leaks

A

If a process is operating correctly, when it no longer requires a block of memory, it should release it. If the program code does not do this, it could create a situation where the system continually leaks memory to the faulty process. This means less memory is available to other processes and the system could crash. Memory leaks are particularly serious in service/background applications, as they will continue to consume memory over an extended period. Memory leaks in the OS kernel are also extremely serious. A memory leak may itself be a sign of a malicious or corrupted process.

11
Q

Dynamic Link Library (DLL)

A

A Dynamic Link Library (DLL) is a binary package that implements some sort of standard functionality, such as establishing a network connection or performing cryptography. The main process of a software application is likely to load (or call) several DLLs during the normal course of operations.

12
Q

DLL injection

A

not a vulnerability of an application but of the way the operating system allows one process to attach to another. This functionality can be abused by malware to force a legitimate process to load a malicious link library. The link library will contain whatever functions the malware author wants to be able to run. Malware uses this technique to move from one host process to another to avoid detection.

To perform DLL injection, the malware must already be operating with sufficient privileges (typically, local administrator or system privileges). It must also evade detection by anti-virus software. One means of doing this is code refactoring. Refactoring means that the code performs the same function by using different methods (control blocks, variable types, and so on). This might be done legitimately to improve the code in some way, such as making it run more efficiently or making it easier to maintain and update. Refactoring can also be used by malware authors to evade detection by A-V scanners because the different code syntax means that the malware must be identified by a new signature, or be caught by heuristic analysis.

OS function calls to allow DLL injection are legitimately used for operations such as debugging and monitoring. Another opportunity for malware authors to exploit these calls is the Windows Application Compatibility framework. This allows legacy applications written for an OS, such as Windows® XP, to run on Windows 10. The code library that intercepts and redirects calls to enable legacy mode functionality is called a shim. The shim must be added to the registry and its files (packed in a shim database/.SDB file) added to the system folder. The shim database represents another way that malware with local administrator privileges can run on reboot (persistence).

13
Q

arbitrary code execution

A

The purpose of the attacks against application or coding vulnerabilities is to allow the attacker to run his or her own code on the system. This is referred to as arbitrary code execution.

14
Q

remote code execution

A

Where the code is transmitted from one machine to another, it is sometimes referred to as remote code execution. The code would typically be designed to install some sort of Trojan or to disable the system in some way (Denial of Service).

15
Q

privilege escalation

A

An application or process must have privileges to read and write data and execute functions. Depending on how the software is written, a process may run using a system account, the account of the logged-on user, or a nominated account. If a software exploit works, the attacker may be able to execute his or her own process (a worm or Trojan, for instance) with the same privilege level as the exploited process.

16
Q

There are two main types of privilege escalation:

A
  • Vertical privilege escalation (or elevation) is where a user or application can access functionality or data that should not be available to them. For instance, a user might have been originally assigned read-only access (or even no access) to certain files, but after vertical escalation, the user can edit or even delete the files in question.
  • Horizontal privilege escalation is where a user accesses functionality or data that is intended for another user. For instance, a user might have the means to access another user’s online bank account.
17
Q

SQL injection

A

As the name suggests, an SQL injection attack attempts to insert an SQL query as part of user input. The attack can either exploit poor input validation or unpatched vulnerabilities in the database application. If successful, this could allow the attacker to extract or insert information into the database or execute arbitrary code on the remote system using the same privileges as the database application.

XML injection is fundamentally the same thing but targeted against web services using XML data formats, rather than SQL.
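
The standard mitigation is to keep query structure and user data separate. A minimal sketch using Python's built-in sqlite3 module (the table and sample data are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Parameterized query: the driver treats `name` strictly as data, so
    # input such as "' OR '1'='1" cannot change the query structure.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))          # [('alice', 'admin')]
print(find_user("' OR '1'='1"))    # [] -- the injection attempt matches nothing
```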

18
Q

Directory traversal

A

another common input validation attack. The attacker submits a request for a file outside the web server’s root directory by using the command to navigate to the parent directory (../). This attack can succeed if the input is not filtered properly and access permissions on the file are the same as those on the web server root.
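
A sketch of the filtering step, assuming a hypothetical document root: resolve the requested path and confirm it remains inside the web root before serving it.

```python
import os

WEB_ROOT = os.path.realpath("/var/www/html")    # assumed document root

def safe_path(requested: str) -> str:
    # Resolve ../ sequences, then confirm the result stays under the root.
    full = os.path.realpath(os.path.join(WEB_ROOT, requested))
    if not full.startswith(WEB_ROOT + os.sep):
        raise PermissionError("directory traversal attempt blocked")
    return full

print(safe_path("index.html"))          # /var/www/html/index.html
# safe_path("../../etc/passwd")         # raises PermissionError
```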

19
Q

command injection

A

A command injection attack attempts to run OS shell commands from the browser. As with directory traversal, the web server should normally be able to prevent commands from operating outside of the server’s directory root and to prevent commands from running with any other privilege level than the web “guest” user (who is normally granted only very restricted privileges). A successful command injection attack would find some way of circumventing this security (or find a web server that is not properly configured).
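
Where an OS command must be run with user-supplied values, passing an argument list without a shell prevents metacharacters from being interpreted. A sketch assuming a hypothetical ping lookup feature:

```python
import subprocess

def ping(host: str) -> str:
    # No shell is invoked, so ";", "&&", and "|" in `host` are not
    # interpreted as command separators -- just as an (invalid) hostname.
    result = subprocess.run(
        ["ping", "-c", "1", host],
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout

# ping("127.0.0.1; cat /etc/passwd")   # one bogus hostname, not two commands
```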

20
Q

Transitive access

A

describes the problem of authorizing a request for a service that depends on an intermediate service. For example, say a user orders an ebook through some e‑commerce application on a merchant site. The merchant site processes the order and then places a request to a publisher site to fulfill the ebook to the user. Designing the trust relationships between these three parties is complicated:

  • The merchant site could impersonate the end user to obtain publisher site services fraudulently.
  • The end user could exploit weaknesses in the merchant site to obtain unauthorized services from the publisher site.
21
Q

Cross-Site Scripting (XSS)

A

The attacks just described mostly target weaknesses of server-side application code or security measures. There are also many attacks against the browser (client-side code and security measures). Cross-Site Scripting (XSS) is one of the most powerful input validation exploits. XSS involves a trusted site, a client browsing the trusted site, and the attacker’s site.

A typical attack would proceed as follows:

  1. The attacker identifies an input validation vulnerability in the trusted site.
  2. The attacker crafts a URL to perform a code injection against the trusted site. This could be coded in a link from the attacker’s site to the trusted site or a link in an email message.

Note: The key to a successful XSS attack is making the link seem innocuous or trustworthy to the user. There are various ways of encoding a link to conceal its true nature.

  3. When the user clicks the link, the trusted site returns a page containing the malicious code injected by the attacker. As the browser is likely to be configured to allow the site to run scripts, the malicious code will execute.
  4. The malicious code could be used to deface the trusted site (by adding any sort of arbitrary HTML code), steal data from the user’s cookies, try to intercept information entered into a form, or try to install malware. The crucial point is that the malicious code runs in the client’s browser with the same permission level as the trusted site.

Note: A common technique is to leverage iFrames to disguise the presence of malicious code. An iFrame is a legitimate HTML coding technique that can be used to embed one site within another. A malicious iFrame could either overlay a site with a fake login or host malicious code in an “invisible” 1x1 pixel frame. iFrame attacks can also be launched simply by compromising the web server security and uploading compromised code.

The attack is particularly effective not only because it breaks the browser’s security model, but also because it relies only on scripting, which is generally assumed by browsers to be safe. The vast majority of sites use some sort of scripting and so will not display correctly without it.

The attack described is a reflected or non-persistent XSS attack. A stored (or persistent) XSS attack aims to insert code into a back-end database used by the trusted site. For example, the attacker may submit a post to a bulletin board with a malicious script embedded in the message. When other users view the message, the malicious script is executed.

Both the attacks described exploit server-side scripts. A third type of XSS attack exploits vulnerabilities in client-side scripts. Such scripts often use the Document Object Model (DOM) to modify the content and layout of a web page. For example, the “document.write” method enables a page to take some user input and modify the page accordingly. An attacker could submit a malicious script as input and have the page execute the script. Such exploits can be very powerful as they run with the logged-in user’s privileges on the local system.
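
The core server-side defense against reflected and stored XSS is output encoding. A minimal sketch using Python's html module (the comment-rendering function is an assumption for the example):

```python
import html

def render_comment(user_input: str) -> str:
    # Encode user-supplied text before writing it into the page, so any
    # embedded markup is displayed as text rather than executed.
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment('<script>alert("xss")</script>'))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```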

22
Q

cookie

A

HTTP is a stateless protocol, meaning that the server preserves no information about the client. As most web applications depend on retaining information about clients, various mechanisms have been used to preserve this sort of stateful information. A cookie is one of those methods. A cookie is created when the server sends an HTTP response header with the cookie. Subsequent request headers sent by the client will usually include the cookie. Cookies are either non-persistent (or session) cookies, in which case they are stored in memory and deleted when the browser instance is closed, or persistent, in which case they are stored on the hard drive until deleted by the user or until a defined expiration date passes. For example, if, when logging in, the user selects the Remember Me option, then a cookie is saved and accessed the next time they visit that web page.

Normally, a cookie can only be used by the server or domain that created it, but this can be subverted by a Cross-Site Scripting attack. Another weakness is where cookies are used to establish sessions in an application or for user authentication. Session IDs are often generated using predictable patterns (such as IP address with the date and time), making the session vulnerable to eavesdropping and possibly hijacking, by replaying the cookie to re-establish the session.
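
The session-prediction weakness can be illustrated by contrast: a sketch that generates session identifiers from a cryptographically secure random source rather than from predictable values such as IP address plus timestamp.

```python
import secrets

def new_session_id() -> str:
    # 32 bytes of CSPRNG output, URL-safe encoded -- infeasible to predict
    # or enumerate, unlike an "IP address + date/time" scheme.
    return secrets.token_urlsafe(32)

print(new_session_id())   # unique and unpredictable per session
```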

23
Q

Cross-Site Request Forgery (XSRF)

A

exploits applications that use cookies to authenticate users and track sessions. To work, the attacker must convince the victim to start a session with the target site. The attacker must then pass an HTTP request to the victim’s browser that spoofs an action on the target site, such as changing a password or an email address. This request could be disguised in a few ways (as an image tag, for instance) and so could be accomplished without the victim necessarily having to click a link. If the target site assumes that the browser is authenticated because there is a valid session cookie and doesn’t complete any additional authorization process on the attacker’s input (or if the attacker is able to spoof the authorization), it will accept the input as genuine. This is also referred to as a confused deputy attack (the point being that the user and the user’s browser are not necessarily the same thing).
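
The usual additional authorization step is an anti-CSRF token. A minimal sketch, with the session store modeled as a plain dictionary:

```python
import secrets

def issue_csrf_token(session: dict) -> str:
    # Random per-session token, stored server side and embedded in each form.
    session["csrf_token"] = secrets.token_hex(32)
    return session["csrf_token"]

def verify_csrf_token(session: dict, submitted: str) -> bool:
    # A forged cross-site request carries the victim's cookie but cannot
    # know this token; compare in constant time.
    return secrets.compare_digest(session.get("csrf_token", ""), submitted)

session = {}
token = issue_csrf_token(session)
print(verify_csrf_token(session, token))       # True: genuine form submission
print(verify_csrf_token(session, "guessed"))   # False: forged request rejected
```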

24
Q

Locally Shared Objects (LSOs)

A

also known as Flash cookies, are data stored on a user’s computer by websites that use Adobe® Flash® Player. A site may be able to track a user’s browsing behavior through LSOs, causing a breach of privacy. Even if a user wipes tracking objects from their browser, LSOs may remain on their system.

25
Q

HTTP headers

A

information processed by the server and browser but not necessarily displayed to the user. One of the headers is the action (GET or POST, for instance). Other headers may contain the user-agent (the type of browser) or custom information. Some applications may use headers to encode some user data, such as setting a cookie or returning the value of a cookie. If this is the case, as with forms and URLs, an attacker could try to inject code to perform a malicious action on the target server or client if the web application does not process the header correctly.

26
Q

HTTP Response Splitting or CRLF injection

A

The best-known HTTP header manipulation attack is HTTP Response Splitting or CRLF injection. The attacker would craft a malicious URL and convince the victim to submit it to the web server. This could be encoded in something like an image tag, so the user may not have to choose to click a link. The URL contains extra line feeds, which may be coded in some non-obvious way. Unless the web server strips these out when processing the URL, it will be tricked into displaying a second HTTP response, containing content crafted by the attacker. This content could deface the genuine page, overlay a fake authentication form, perform some sort of XSS injection attack, and so on.

27
Q

Man-in-the-Browser (MitB)

A

attack is where the web browser is compromised by installing malicious plug-ins or scripts or intercepting API calls between the browser process and DLLs. The Browser Exploitation Framework (BeEF) (https://beefproject.com) is one well-known MitB tool. There are various vulnerability exploit kits that can be installed to a website and actively try to exploit vulnerabilities in clients browsing the site (https://www.trendmicro.com/vinfo/ie/security/definition/exploit-kit). These kits may either be installed to a legitimate site without the owner’s knowledge (by compromising access control on the web server) and load in an iFrame (invisible to the user), or the attacker may use phishing/social engineering techniques to trick users into visiting the site, using Google™ search results, ads, typosquatting, or clicking an email link.

28
Q

Clickjacking

A

an attack where a web application that the user sees and trusts (some sort of login page or form) contains a malicious layer or invisible iFrame that allows an attacker to intercept or redirect user input. Clickjacking can be launched using any type of compromise that allows the adversary to run JavaScript (XSS, CSRF, or MitB, for instance). Clickjacking can be mitigated by using HTTP response headers that instruct the browser not to open frames from different origins (domains) and by ensuring that any buttons or input boxes on a page are positioned on the top-most layer.

29
Q

Software Development Lifecycle (SDLC)

A

Security must be a key component of the application design process. Even a simple form and script combination can make a web server vulnerable if the script is not well written. A Software Development Lifecycle (SDLC) divides the creation and maintenance of software into discrete phases. There are two principal SDLCs: the waterfall model and Agile development.

30
Q

The waterfall model includes the following phases:

A
  • Requirements—capture everything that the system must do and the levels to which it must perform.
  • Design—develop a system architecture and unit structure that fulfills the requirements.
  • Implementation—develop the system units as programming code.
  • Verification—ensure the implementation meets the requirements and design goals.
  • Testing—integrate the units and ensure they work as expected.
  • Maintenance—deploy the system to its target environment and ensure that it is operated correctly.
  • Retirement—remove (deprovision) the system and any dependencies if they are no longer used.

In the waterfall framework, each phase must be completed and signed off before the next phase can begin. In this model, it can be hard to go back and make changes to the original specification, whether because of changed customer requirements or because of requirements or design problems discovered during implementation, testing, and deployment.

31
Q

Agile development flips the waterfall model by iterating through phases concurrently on smaller modules of code or sub-projects. The phases of the Agile model are:

A
  • Concept—devise the initial scope and vision for the project and determine its feasibility.
  • Inception—identify stakeholders and support for the project and start to provision resources and determine requirements.
  • Iteration—prioritize requirements and work through cycles of designing, developing, testing, and test deploying solutions to the project goals, adapting to changing requirements, priorities, and resources as needed.
  • Transition—perform final integration and testing of the solution and prepare for deployment in the user environment.
  • Production—ensure that the solution operates effectively.
  • Retirement—deprovision the solution and any environmental dependencies.

This piecemeal approach can react to change better, but has the disadvantage of lacking overall focus and can become somewhat open-ended.

32
Q

security requirements definition

A

A legacy software design process might be heavily focused on highly visible elements, such as functionality, performance, and cost. You can also envisage a Security Development Lifecycle (SDL) running in parallel or integrated with the focus on software functionality and usability. Examples include Microsoft’s SDL (https://www.microsoft.com/en-us/securityengineering/sdl) and the OWASP Software Security Assurance Process (https://www.owasp.org/index.php/OWASP_Software_Security_Assurance_Process).

33
Q

Secure development means that at each phase, security considerations are accounted for:

A
  • Planning—train developers and testers in security issues, acquire security analysis tools, and ensure the security of the development environment.
  • Requirements—determine needs for security and privacy in terms of data processing and access controls.
  • Design—identify threats and controls or secure coding practices to meet the requirements.
  • Implementation—perform “white box” source code analysis and code review to identify and resolve vulnerabilities.
  • Testing—perform “black box” or “gray box” analysis to test for vulnerabilities in the published application (and its publication environment).
  • Deployment—ensure source authenticity of installer packages and publish best practice configuration guides.
  • Maintenance—ongoing security monitoring and incident response procedures, patch development and management, and other security controls.

Note: Black box (or blind) testing means that the analyst is given no privileged information about the software, whereas white box (or full disclosure) means that the analyst is given the source code. Gray box testing would mean some partial disclosure or more privileged access than an external party would have.

34
Q

During development, the code is normally passed through several different environments:

A
  • Development—The code will be hosted on a secure server. Each developer will check out a portion of code for editing on his or her local machine. The local machine will normally be configured with a sandbox for local testing. This ensures that whatever other processes are being run locally do not interfere with or compromise the application being developed.
  • Test/integration—In this environment, code from multiple developers is merged to a single master copy and subjected to basic unit and functional tests (either automated or by human testers). These tests aim to ensure that the code builds correctly and fulfills the functions required by the design.
  • Staging—This is a mirror of the production environment but may use test or sample data and will have additional access controls so that it is only accessible to test users. Testing at this stage will focus more on usability and performance.
  • Production—The application is released to end users.
35
Q

It is important to be able to validate the integrity of each coding environment. Compromise in any environment could lead to the release of compromised code.

A
  • Sandboxing—Each development environment should be segmented from the others. No processes should be able to connect to anything outside the sandbox. Only the minimum tools and services necessary to perform code development and testing should be allowed in each sandbox.
  • Secure baseline—Each development environment should be built to the same specification, possibly using automated provisioning.
  • Integrity measurement—This process determines whether the development environment varies from the secure baseline. Perhaps a developer added an unauthorized tool to solve some programming issue. Integrity measurement may be performed by scanning for unsigned files or files that do not otherwise match the baseline.
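
Integrity measurement can be as simple as comparing file hashes against a recorded baseline. A sketch assuming a baseline stored as a JSON map of relative path to SHA-256 digest:

```python
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_against_baseline(root: str, baseline_file: str) -> list:
    """Report files that are missing or differ from the recorded baseline."""
    baseline = json.loads(Path(baseline_file).read_text())  # {path: sha256}
    drift = []
    for rel_path, expected in baseline.items():
        target = Path(root) / rel_path
        if not target.exists() or file_hash(target) != expected:
            drift.append(rel_path)
    return drift

# drift = check_against_baseline("/srv/dev-env", "baseline.json")
```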
36
Q

Provisioning

A

process of deploying an application to the target environment, such as enterprise desktops, mobile devices, or cloud infrastructure. An enterprise provisioning manager might assemble multiple applications in a package. Alternatively, the OS and applications might be defined as a single instance for deployment on a virtualized platform. The provisioning process must account for changes to any of these applications so that packages or instances are updated with the latest version.

37
Q

Deprovisioning

A

process of removing an application from packages or instances. This might be necessary if software has to be completely rewritten or no longer satisfies its purpose. As well as removing the application itself, it is also important to make appropriate environment changes to remove any configurations (such as open firewall ports) that were made just to support that application.

38
Q

version control

A

Software version control is an ID system for each iteration of a software product. Most version control numbers represent both the version, as made known to the customer or end user, and internal build numbers for use in the development process.

39
Q

change management

A

Version control supports the change management process for software development projects. Most software development environments use a build server to maintain a repository of previous versions of the source code. When a developer commits new or changed code to the repository, the new source code is tagged with an updated version number and the old version archived. This allows changes to be rolled back if a problem is discovered.

40
Q

Continuous integration

A

the principle that developers should commit updates often (every day or sometimes even more frequently). This is designed to reduce the chances of two developers spending time on code changes that are later found to conflict with one another.

41
Q

normalization

A

Where an application accepts string input, the input should be subjected to normalization procedures before being accepted. Normalization means that a string is stripped of illegal characters or substrings and converted to the accepted character set. This ensures that the string is in a format that can be processed correctly by the input validation routines.
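
A sketch of a normalization step: convert to a canonical Unicode form, then strip characters outside an assumed accepted set before the validation routines run.

```python
import re
import unicodedata

DISALLOWED = re.compile(r"[^A-Za-z0-9 _.-]")      # assumed accepted character set

def normalize(value: str) -> str:
    value = unicodedata.normalize("NFKC", value)  # single canonical Unicode form
    return DISALLOWED.sub("", value).strip()      # strip illegal characters

print(normalize("Report\u200b 2024\uFF0Efinal"))  # 'Report 2024.final'
```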

42
Q

canonicalization attack

A

An attacker might use a canonicalization attack to disguise the nature of the malicious input. Canonicalization refers to the way the server converts between the different methods by which a resource such as a file path or URL may be represented and submitted to the simplest (or canonical) method used by the server to process the input. Examples of encoding schemes include HTML entities and character set encoding (ASCII and Unicode). An attacker might be able to exploit vulnerabilities in this process to perform code injection or facilitate directory traversal.
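
One common defense is to reduce input to its canonical form before validating it, for example by percent-decoding repeatedly until the value stops changing so that double-encoded traversal sequences cannot slip past a single-pass filter. A sketch:

```python
from urllib.parse import unquote

def canonical_form(value: str, max_rounds: int = 5) -> str:
    # Decode until stable so that %252e%252e%252f (double-encoded ../)
    # cannot bypass a filter that only decodes once.
    for _ in range(max_rounds):
        decoded = unquote(value)
        if decoded == value:
            return value
        value = decoded
    raise ValueError("too many encoding layers")

print(canonical_form("%252e%252e%252fetc%252fpasswd"))   # ../etc/passwd
```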

43
Q

Fuzzing

A

a means of testing that an application’s input validation routines work well. Fuzzing means that the test or vulnerability scanner generates large amounts of deliberately invalid and/or random input and records the responses made by the application. This is a form of “stress testing” that can reveal how robust the application is.
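
A toy fuzzing harness to show the idea: generate large amounts of semi-random input, feed it to a routine under test, and record any input that triggers an unhandled exception. Real fuzzers are far more sophisticated; the target function named here is a placeholder.

```python
import random
import string

def fuzz_case(max_len: int = 200) -> str:
    # Semi-random input mixed with characters that often expose parsing bugs.
    chars = string.printable + "\x00'\"<>%;"
    return "".join(random.choice(chars) for _ in range(random.randint(0, max_len)))

def fuzz(target, rounds: int = 1000) -> list:
    failures = []
    for _ in range(rounds):
        case = fuzz_case()
        try:
            target(case)
        except Exception as exc:         # an unhandled exception is a finding
            failures.append((case, repr(exc)))
    return failures

# failures = fuzz(parse_request_line)   # hypothetical routine under test
```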

44
Q

server-side versus client-side validation

A

A web application (or any other client-server application) can be designed to perform input validation locally (on the client) or remotely (on the server). Applications may use both techniques for different functions. The main issue with client-side validation is that the client will always be more vulnerable to some sort of malware interfering with the validation process. The main issue with server-side validation is that it can be time-consuming, as it may involve multiple transactions between the server and client. Consequently, client-side validation is usually restricted to informing the user that there is some sort of problem with the input before submitting it to the server. Even after passing client-side validation, the input will still undergo server-side validation before it can be posted (accepted). Relying on client-side validation only is poor programming practice.

45
Q

XSS/XSRF prevention

A

Input validation should be enough to defeat most cross-site style attacks. The other consideration is for the application to use secure authentication and authorization procedures. Naïve methods of recording sessions, such as unencrypted cookies, should be deprecated. Even if a user has authenticated, any actions the user attempts to perform should be properly authorized using some sort of secure token that an attacker cannot spoof or replay.

46
Q

handler

A

A well-written application must be able to handle errors and exceptions gracefully. This means that the application performs in a more-or-less expected way when something unexpected happens. An exception means that the current procedure cannot continue. An exception could be caused by invalid user input, a loss of network connectivity, another server or process failing, and so on. Ideally, the programmer will have written an error or exception handler to dictate what the application should then do. Each procedure can have multiple error handlers. Some handlers will deal with anticipated errors and exceptions; there should also be a catch-all handler that will deal with the unexpected.

The main goal must be for the application not to fail in a way that allows the attacker to execute code or perform some sort of injection attack. Another issue is that an application’s interpreter will default to a standard handler and display default error messages when something goes wrong. These may reveal the inner workings of code to an attacker. It is better for an application to use custom error handlers so that the developer can choose the amount of information shown when an error is caused.
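
A sketch of the custom-handler approach: anticipated errors get a specific message, while everything else falls through to a catch-all that logs the detail server side and returns only a generic message. The transfer function is hypothetical.

```python
import logging

def transfer_funds(amount: str) -> str:
    try:
        value = int(amount)                 # may raise ValueError
        # ... perform the transfer using `value` (omitted) ...
        return "Transfer complete."
    except ValueError:
        return "Amount must be a whole number."    # anticipated error
    except Exception:
        # Catch-all handler: log the detail internally, but never expose
        # stack traces or inner workings to the user.
        logging.exception("unexpected error in transfer_funds")
        return "An unexpected error occurred. Please try again later."
```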

47
Q

memory management

A

Many arbitrary code attacks depend on the target application having faulty memory management procedures. This allows the attacker to execute his or her own code in the space marked out by the target application. There are known unsecure memory management practices that should be avoided, and untrusted input, such as strings, should be checked to ensure that it cannot overwrite areas of memory.

48
Q

Developing code to perform some function is always hard work, so developers will often look to see if someone else has done that work already. A program may make use of existing code in the following ways:

A
  • Code reuse—using a block of code from elsewhere in the same application or from another application to perform a different function (or perform the same function in a different context). The risk here is that the copy and paste approach causes the developer to overlook potential vulnerabilities (perhaps the function’s input parameters are no longer validated in the new context).
  • Third-party library—a binary package (such as a Dynamic Link Library) that implements some sort of standard functionality, such as establishing a network connection or performing cryptography. Each library must be monitored for vulnerabilities and patched promptly.
  • Software Development Kit (SDK)—the programming environment used to create the software might provide sample code or libraries of pre-built functions. As with other third-party libraries or code, it is imperative to monitor for vulnerabilities.
49
Q

stored procedure

A

a part of a database that executes a custom query. The procedure is supplied an input by the calling program and returns a pre-defined output for matched records. This can provide a more secure means of querying the database. Any stored procedures that are part of the database but not required by the application should be disabled.

50
Q

Code signing

A

principal means of proving the authenticity and integrity of code (an executable or a script). The developer creates a cryptographic hash of the file then signs the hash using his or her private key. The program is shipped with a copy of the developer’s code signing certificate, which contains a public key that the destination computer uses to read and verify the signature. The OS then prompts the user to choose whether to accept the signature and run the program.

51
Q

Unreachable code

A

a part of application source code that can never be executed. For example, there may be a routine within a logic statement (If … Then) that can never be called because the conditions that would call it can never be met. Dead code is executed but has no effect on the program flow. For example, there may be code to perform a calculation, but the result is never stored as a variable or used to evaluate a condition. Unreachable and dead code should be removed from the application to forestall the possibility that it could be misused in some way. The presence of unreachable/dead code can indicate that the application is not being well maintained.

52
Q

Data exposure

A

a fault that allows privileged information (such as a token, password, or PII) to be read without being subject to the appropriate access controls. Applications must only transmit such data between authenticated hosts, using cryptography to protect the session. When incorporating encryption in your code, it’s important to use encryption algorithms and techniques that are known to be strong, rather than creating your own.

53
Q

obfuscation/camouflage

A

In development, it is important that code be well documented, to assist the efforts of multiple programmers working on the same project. Well-documented code is also easier to analyze. Code can be made difficult to analyze by using an obfuscator, which is software that randomizes the names of variables, constants, functions, and procedures, removes comments and white space, and performs other operations to make the compiled code physically and mentally difficult to read and follow. This sort of technique might be used to make reverse engineering an application more difficult and as a way of disguising malware code.

Another option is to encrypt the code, but if the code is to run, the encryption key must be made available on the host at some point. This gives a malicious process on the same host the chance of recovering the key from memory.

54
Q

application auditing

A

A new application should be audited to ensure that it meets the goals of confidentiality, integrity, and availability critical to any secure computer system. Test any new or updated applications thoroughly before deploying them to a production server. Use pen test methods to try to discover and exploit any weaknesses in the application’s design or implementation. Application vulnerability scanners automate the process of testing for known vulnerabilities and unsecure coding practices. Monitoring typical user behavior (beta testers) can reveal whether the application could be used in ways the developers might not have expected. As well as testing the application in production, submit new applications for architecture, design, and code reviews. These should take place when the application is first commissioned and when it is upgraded, or at regular intervals thereafter, to ensure that the application is not vulnerable to new threats.

55
Q

design review

A

A design review will ensure that security is a requirement for the application. One of the design goals of a secure application should be to reduce the attack surface. The attack surface is all the ways that a user (including malicious users) can interact with the application. This includes ways that the application designer has foreseen, such as form fields and Application Programming Interfaces (API)—methods other applications can call—and those that they have not. As well as simplifying the application, it is also important to reduce the attack surface of the host OS and network. These should be set at the minimum configuration required to run the application.

56
Q

code review

A

an in-depth examination of the way the application is written to ensure that it is well written and does not expose the application to known input validation or injection attacks.

57
Q

architecture review

A

analyzes the systems on which the application depends. This could include the underlying OS and database application, programming language and development environment, client platform (PC and/or mobile), browsers and plug-ins, and so on.

58
Q

An application model is a statement of the requirements driving the software development project. The requirements model is tested using processes of Verification and Validation (V&V):

A
  • Verification is a compliance testing process to ensure that the product or system meets its design goals.
  • Validation is the process of determining whether the application is fit-for-purpose (so for instance, its design goals meet the user requirements).
59
Q

compiled

A

When an application is compiled, the compiler tests that the code is well-formed. Well-formed does not mean that the code will execute without errors, just that its syntax is compliant with the requirements of the programming language.

60
Q

runtime environment

A

For functional testing, code must be executed in its runtime environment.

61
Q

A runtime environment will use one of two approaches for execution on a host system:

A
  • Compiled code is converted to binary machine language that can run independently on the target OS.
  • Interpreted code is packaged pretty much as is but is compiled line-by-line by an interpreter. This offers a solution that is platform independent because the interpreter resolves the differences between OS types and versions.

As well as the OS/interpreter, the runtime environment will include any additional libraries containing functions called by the main program.

62
Q

Static code analysis (or source code analysis)

A

performed against the application code before it is packaged as an executable process. The analysis software must support the programming language used by the source code. The software will scan the source code for signatures of known issues, such as OWASP Top 10 Most Critical Web Application Security Risks or injection vulnerabilities generally. NIST maintains a list of source code analyzers and their key features (https://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html).

Human analysis of software source code is described as a code review or as a manual peer review. It is important that the code be reviewed by developers (peers) other than the original coders to try to identify oversights, mistaken assumptions, or a lack of knowledge or experience. It is important to establish a collaborative environment in which reviews can take place effectively.

63
Q

Dynamic analysis

A

Static code review techniques will not reveal vulnerabilities that might exist in the runtime environment, such as exposure to race conditions. Dynamic analysis means that the application is tested under “real world” conditions using a staging environment.

64
Q

Fuzzing

A

a technique designed to test software for bugs and vulnerabilities

65
Q

There are generally three types of fuzzers, representing different ways of injecting manipulated input into the application:

A
  • Application UI—identify input streams accepted by the application, such as input boxes, command line switches, or import/export functions.
  • Protocol—transmit manipulated packets to the application, perhaps using unexpected values in the headers or payload.
  • File format—attempt to open files whose format has been manipulated, perhaps manipulating specific features of the file.

Fuzzers are also distinguished by the way in which they craft each input (or test case). The fuzzer may use semi-random input (dumb fuzzer) or might craft specific input based around known exploit vectors, such as escaped command sequences or character literals, or by mutating intercepted inputs.

Associated with fuzzing is the concept of stress testing an application to see how an application performs under extreme performance or usage scenarios.

Finally, the fuzzer needs some means of detecting an application crash and recording which input sequence generated the crash.

66
Q

Agile operations

A

Agile development principles can also be applied to system administration/operations tasks (Agile operations). Amongst other principles, Agile addresses the idea that resiliency, the ability to sustain performance despite failures, is a better and more achievable goal than the elimination of faults. This principle is referred to as fail fast (and learn quickly). The concept is that faults are much better identified in a production environment and that this is a more effective way to improve an application, as long as developers are able to respond quickly.

67
Q

DevOps

A

Consequently, there is also growing opinion that development and operations functions should be more closely tied together. This model is referred to as software development and operations (DevOps). DevOps means that there is much more collaboration between developers and system administrators.

68
Q

The concepts of Agile operations and DevOps support a few new approaches to deploying code:

A
  • Immutable infrastructure—This approach first strictly divides data from the components processing data. Once designed and provisioned as instances, the components are never changed or patched in place. Deploying a patch or adding a new application means building a new instance and deploying that.
  • Infrastructure as Code—This is the principle that when deploying an application, the server instance supporting the application can be defined and provisioned through the software code. Imagine a setup program that not only installs the application but also creates a VM and OS on which to run the application.
  • Security automation—The concept of scripted or programmed infrastructure can also be applied to security infrastructure (firewalls, IDS, SIEM, and privilege management). For example, security automation might mean that a user account is provisioned by running a script for the appropriate role rather than relying on a human administrator to select the appropriate security groups and policy settings.
69
Q

Follow these guidelines when incorporating security in the software development lifecycle:

A
  • Integrate security into each phase of the software development lifecycle.
  • Choose a software development model that most suits your security and business needs.
  • Incorporate a version control system in the development process to better manage changes to your project.
  • Incorporate secure coding techniques like input validation and stored procedures to avoid vulnerabilities in code.
  • Put your software project through various testing methods to evaluate its security, stability, and functionality.
  • Consider adopting a DevOps culture in order to integrate software development with systems operations.
  • Take advantage of software automation and infrastructure as code in a DevOps culture.