Chapter 9 Security Vulnerabilities, Threats, and Countermeasures Flashcards
What are the hardware execution types?
Multitasking means handling two or more tasks simultaneously. A single-core multitasking system can juggle more than one task, but like a juggler it is only touching one thing at a time; coordination keeps all the balls moving.
Multicore means having more than one execution core. Two, four, eight, dozens, or thousands of cores acting simultaneously and independently.
Multiprocessing means harnessing the power of more than one processor to complete the execution of a multithreaded application. Some multiprocessor systems dedicate a process thread to a specific core; this is known as affinity.
Multiprogramming involves the pseudo-execution of two tasks on a single processor, coordinated by the OS. It is a way to batch or serialize multiple processes so that when one stops to wait on a peripheral, its state is saved and the next process in line begins. The first program does not return to processing until all other processes in the batch have had their chance to execute and wait for the peripheral. Although this delays a single program, the overall time to complete all tasks is reduced.
Multithreading permits multiple concurrent tasks to be performed within a single process. Unlike multitasking, where the tasks are separate processes, here the tasks all belong to one parent process. A thread is a self-contained sequence of instructions that can execute in parallel with other threads of the same parent process. Multithreading is often used in applications where frequent context switching between multiple active processes would cause excessive overhead. A minimal code sketch of multithreading follows this list.
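The sketch referenced above is a minimal C example using POSIX threads, not from the source material: two threads run concurrently inside one parent process and share its memory. The file name, thread names, and counter are purely illustrative.

```c
/* Minimal multithreading sketch with POSIX threads: two threads execute
 * concurrently inside one process and share the same address space.
 * Compile with something like: cc threads.c -o threads -lpthread */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                 /* shared by both threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 5; i++) {
        pthread_mutex_lock(&lock);             /* serialize access to shared data */
        shared_counter++;
        printf("%s incremented counter to %d\n", name, shared_counter);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);                    /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("final counter: %d\n", shared_counter);
    return 0;
}
```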
What are the protection rings?
Protection rings organize the code and components of an OS, as well as applications, utilities, or other code running under the OS's control, into concentric rings. The deeper into the circle, the higher the privilege. The original Multics implementation had seven rings, but most modern OSes use four, numbered 0 through 3.
Ring 0 is the innermost ring, has the highest level of privilege, and can access any resource, file, or memory location. The kernel is the part of the OS that always remains resident in memory (so that it can run on demand) and resides in Ring 0. The kernel can override any other code.
Ring 1 contains the remaining parts of the OS–those that can come and go as various tasks are requested.
Ring 2 contains the drivers and system utilities. They can access peripheral devices, special files, and so forth that applications and other programs cannot access.
Ring 3 contains the applications.
Processes associated with lower-numbered rings run first, can access more resources, and interact with the OS more directly. Processes in the higher-numbered rings must ask a handler or driver for services they need (known as a system call). This is sometimes known as a mediated-access model.
In practice, many modern OSes use only two rings: one for system-level access (rings 0 through 2), called kernel mode or privileged mode, and one for user-level programs and applications, called user mode.
This model allows an OS to protect and insulate itself from users and applications. It permits the enforcement of strict boundaries between highly privileged OS components and less privileged parts.
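As a concrete illustration of the mediated-access idea, the following Linux-specific C sketch (an illustration, not something from the source text) shows a user-mode program asking the kernel for I/O through a system call rather than touching hardware directly.

```c
/* Illustrative, Linux-specific sketch: a user-mode (ring 3) program cannot
 * touch hardware directly, so it asks the kernel (ring 0) to perform I/O
 * on its behalf via a system call. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void)
{
    const char msg[] = "hello from user mode\n";

    /* Both the libc wrapper write() and the raw syscall below trap into
     * kernel mode; the kernel validates the request, performs the I/O,
     * and returns control to the unprivileged caller. */
    write(STDOUT_FILENO, msg, strlen(msg));
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```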
What are the process states?
Aka operating states. These are the various forms of execution in which a process may run. Where the OS is concerned, a process is either in the supervisor state, a privileged all-access mode, or in the problem state, where privileges are low and all access requests must be checked against credentials. It is called problem state not because problems will occur but because the unprivileged access means they can occur, and the system must protect itself.
Processes line up for execution in a queue, where they are scheduled to run as a processor becomes available. Most OSes allow processes to only consume processor time in fixed increments or chunks.
A process can operate in one of several states (a rough code sketch of these states follows the list):
–Ready means a process is ready to resume or begin processing as soon as it is scheduled for execution.
–Running, aka problem state, refers to when a process is executing on the CPU. It keeps going until it finishes, its time slice expires, or it is blocked (such as when it has generated an interrupt for I/O). If the time slice ends before the process completes, it returns to the ready state; if it is blocked, it goes to waiting.
–Waiting is when a process is ready for continued execution but is waiting for I/O to be serviced. Once I/O is complete, it typically returns to the ready state.
–Supervisory is used when the process must perform an action that requires privileges greater than the problem state's set of privileges, such as modifying system configuration, installing device drivers, or modifying security settings. Essentially, anything not executing in Ring 3 (problem state) runs in supervisory mode.
–Stopped is when a process finishes or must be terminated (because of an error, a required resource is not available, or a resource request cannot be met). At this point, the OS recovers all memory and other resources allocated to the process.
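The sketch referenced above models these states as a simple C enum with a few of the described transitions. It is purely conceptual, not how any real scheduler is implemented, and the event names are made up.

```c
/* Conceptual sketch only: the process states described above and a few of
 * the legal transitions between them, modeled as a simple enum. */
#include <stdio.h>
#include <string.h>

typedef enum { READY, RUNNING, WAITING, SUPERVISORY, STOPPED } proc_state;

/* Hypothetical transitions: what happens when a running process's time
 * slice expires, it blocks on I/O, or it finishes. */
static proc_state on_event(proc_state s, const char *event)
{
    if (s == RUNNING) {
        if (strcmp(event, "time_slice_expired") == 0) return READY;
        if (strcmp(event, "blocked_on_io") == 0)      return WAITING;
        if (strcmp(event, "finished") == 0)           return STOPPED;
    }
    if (s == WAITING && strcmp(event, "io_complete") == 0) return READY;
    if (s == READY   && strcmp(event, "scheduled") == 0)   return RUNNING;
    return s; /* no change for events that do not apply */
}

int main(void)
{
    proc_state s = READY;
    s = on_event(s, "scheduled");            /* READY -> RUNNING */
    s = on_event(s, "blocked_on_io");        /* RUNNING -> WAITING */
    s = on_event(s, "io_complete");          /* WAITING -> READY */
    printf("state is %d (0 = READY)\n", s);
    return 0;
}
```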
What is read only memory, and what are the types?
Memory the system can read but can't change. The contents of a standard ROM chip are burned in at the factory. They often contain "bootstrap" information that is used to start up prior to loading an OS from disk. This includes the "power-on self-test" (POST) series of diagnostics run each time you boot a PC.
There is a type of ROM that can be altered to some extent, known as programmable read-only memory (PROM). It has several subtypes:
–A basic PROM chip does not have its contents burned in, but instead allows an end user to burn in the contents later. However, once they are burned in, they cannot be changed.
–An erasable PROM chip, or EPROM, is intended to allow the code to be changed. The ultraviolet EPROM, or UVEPROM, can be erased with ultraviolet light. Once erased, it is as if the chip was never programmed.
–An electronically erasable PROM, or EEPROM, uses electric voltage to force erasure of the chip.
–Flash memory is a derivative of EEPROM. It is a nonvolatile form of storage. The big difference is that EEPROM must be fully erased, while flash memory can be erased and written in blocks. The most common type is NAND flash.
What is random access memory?
Readable and writable; contains information a computer uses during processing. Only retains its contents when power is supplied to it. Only used for temporary storage. Critical data should never be stored only in RAM.
Real memory, aka main memory or primary memory, is typically the largest RAM storage resource available to a computer. It is normally composed of a number of dynamic RAM chips and must be refreshed periodically.
Cache RAM is faster and holds data that is likely to be used repeatedly. There are usually multiple caches, referred to as L1, L2, L3, and sometimes L4; three levels are typical. L1 and L2 may be reserved for a single processor core, while L3 is usually shared. L4, if present, may be located on the motherboard or a graphics processing unit.
Many peripherals also include onboard caches to speed access to slower media, as do many storage devices such as HDDs and SSDs. These caches must be flushed to permanent storage before power loss in order to avoid losing the cached data.
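The following C sketch (illustrative only; the array size is arbitrary) shows why CPU caches help: the row-by-row loop reuses cache lines that were just loaded, while the column-by-column loop keeps jumping past them and missing the cache.

```c
/* Illustrative sketch of cache locality: walking a 2-D array row by row
 * reuses recently loaded cache lines; walking it column by column does not. */
#include <stdio.h>

#define N 1024
static int grid[N][N];

int main(void)
{
    long sum = 0;

    /* Cache-friendly: consecutive memory addresses, good locality. */
    for (int row = 0; row < N; row++)
        for (int col = 0; col < N; col++)
            sum += grid[row][col];

    /* Cache-unfriendly: each access jumps N * sizeof(int) bytes ahead,
     * so previously loaded cache lines are rarely reused. */
    for (int col = 0; col < N; col++)
        for (int row = 0; row < N; row++)
            sum += grid[row][col];

    printf("sum = %ld\n", sum);
    return 0;
}
```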
What is the difference between dynamic and static RAM?
Most computers contain both.
Dynamic RAM uses a series of capacitors, which are tiny electrical devices that hold a charge. If a capacitor holds a charge, it represents a 1; if not, a 0. Because capacitors lose their charge over time, the memory must be periodically refreshed so that its contents are not lost.
Static RAM uses a logical device called the flip-flop. It maintains its contents as long as it has power, so no refresh is needed.
Dynamic RAM is cheaper; static RAM is faster.
What are registers?
The CPU includes a limited amount of onboard memory, known as registers, that provides it with directly accessible memory locations. The arithmetic-logic unit (ALU) uses these registers when performing calculations. Typical CPUs have 8 to 32 registers, each 32 to 64 bits in size. The main advantage of this memory is that it operates at CPU speed.
Describe memory addressing?
The processor must have some way of referring to locations in memory.
Register addressing: when the CPU needs information from one of its registers, it uses a register address (for example, "register 1") to access the contents.
Immediate addressing: not a memory addressing scheme per se, but a way of referring to data supplied to the CPU as part of an instruction. Example: add 2 to the value of register 1.
Direct addressing: the CPU is provided with an actual address of the memory location. The address must be located on the same memory page as the instruction being executed. More flexible than immediate addressing, since the contents of the memory location can be changed more readily than reprogramming the immediate addressing’s hard-coded data.
Indirect addressing: similar to direct addressing, but the memory location supplied with the instruction doesn't contain the actual value the CPU is to use as an operand. Instead, it contains another memory address. The CPU reads the indirect address to learn the address where the desired data resides.
Base+offset addressing: uses a value stored in one of the CPU's registers or pointers as the base location from which to begin counting. The CPU then adds the offset supplied with the instruction to that base address.
A pointer is a basic element in many programming languages used to store a memory address. The act of following a pointer to read the memory location it refers to is known as dereferencing (see the sketch below).
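The C sketch referenced above is only an analogy for these concepts, since the addressing modes themselves are instruction-set features rather than C constructs. It illustrates an immediate-style operand, dereferencing, indirection (an address that holds another address), and base+offset access; all values are arbitrary.

```c
/* Illustrative C analogy for the addressing concepts described above. */
#include <stdio.h>

int main(void)
{
    int value = 42;
    int *ptr = &value;         /* a pointer stores a memory address               */
    int **indirect = &ptr;     /* indirect: an address that holds another address */
    int array[4] = {10, 20, 30, 40};
    int offset = 2;

    printf("%d\n", value + 2);         /* immediate-style: operand is part of the instruction */
    printf("%d\n", *ptr);              /* dereferencing: follow the address to the data       */
    printf("%d\n", **indirect);        /* indirect: follow an address to another address      */
    printf("%d\n", *(array + offset)); /* base+offset: base address plus an offset (prints 30) */
    return 0;
}
```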
What is secondary memory?
Refers to magnetic, optical, or flash-based media that contains data not immediately available to the CPU. The OS must first read the data and store it in real memory before the CPU can use it.
What is virtual memory?
A special type of secondary memory that is used to expand the addressable space of real memory. The most common type is the pagefile or swapfile that most OSes manage as part of memory management. It contains data previously stored in real memory but not recently used. When the OS needs to access that data, it checks whether the page is memory-resident (and can be accessed immediately) or has been swapped to disk (meaning it must be paged back into real memory).
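As a rough, Linux-specific illustration (not from the source text; the region size is arbitrary), the sketch below maps some anonymous memory and uses mincore() to ask the kernel which of its pages are currently resident in real memory.

```c
/* Linux-specific sketch: mmap a region and use mincore() to ask the kernel
 * which pages are resident in real memory versus not yet faulted in or
 * paged out. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t len = 4 * (size_t)page;

    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) return 1;

    memset(region, 0xA5, (size_t)page);   /* touch only the first page */

    unsigned char vec[4];
    if (mincore(region, len, vec) == 0)
        for (int i = 0; i < 4; i++)
            printf("page %d resident: %s\n", i, (vec[i] & 1) ? "yes" : "no");

    munmap(region, len);
    return 0;
}
```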
Describe key considerations for data storage devices.
Primary vs secondary: Primary memory is the RAM a computer uses to keep necessary information available to the CPU. Secondary memory includes HDDs, SSDs, flash drives, magnetic tapes, CDs, DVDs, and flash memory cards.
Volatile vs nonvolatile: Volatility is a measure of how likely the memory is to lose its data when the power is turned off. Dynamic and static RAM are volatile, while ROM, optical media, and similar storage are nonvolatile.
Random vs sequential: Random access storage devices allow an OS to read and write from any point within the device by using some sort of memory addressing. Almost all primary storage devices are random access, and most secondary storage devices are as well. Sequential storage requires reading (or skipping past) all the data stored before the desired location; a magnetic tape drive is an example.
Memory security issues: It is important to purge secondary memory and ROM/PROM/EPROM/EEPROM before allowing media to leave your organization. It is also technically possible for the electrical components in volatile storage to retain a charge for a limited period; a cold boot attack freezes memory chips to delay the decay of resident data. Data remanence is when data remains on secondary storage devices even after being erased. You must overwrite all traces of the data to truly sanitize a device, or destroy the media beyond repair. A traditional zeroization wipe is less effective for SSDs. Theft and loss of removable media are also concerns.
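The sketch below is a minimal, single-pass zero-overwrite of a file before deletion; the helper and file name are hypothetical, and, as noted above, this kind of overwrite is less reliable on SSDs, where wear leveling can leave residual copies behind.

```c
/* Illustrative sketch of a single-pass zeroization-style overwrite of a file
 * before deletion. Not sufficient for SSDs, where wear leveling and spare
 * blocks can retain old data; drive-level secure erase or destruction is
 * preferred there. */
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: overwrite every byte of `path` with zeros, then remove it. */
static int zero_wipe(const char *path)
{
    FILE *f = fopen(path, "r+b");
    if (!f) return -1;

    if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return -1; }
    long size = ftell(f);
    rewind(f);

    char zeros[4096];
    memset(zeros, 0, sizeof zeros);
    for (long written = 0; written < size; ) {
        long chunk = size - written;
        if (chunk > (long)sizeof zeros) chunk = (long)sizeof zeros;
        if (fwrite(zeros, 1, (size_t)chunk, f) != (size_t)chunk) { fclose(f); return -1; }
        written += chunk;
    }

    fflush(f);
    fclose(f);
    return remove(path);
}

int main(void)
{
    return zero_wipe("secret.dat");   /* file name is illustrative */
}
```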
What is firmware?
Also known as microcode. Firmware is the software stored in ROM or EEPROM chips. It is never changed (ROM) or seldom changed (EEPROM/flash) and often drives the basic operation of a computing device. Many hardware devices need some limited set of instructions and processing power to complete their tasks without burdening the OS; this "mini OS" is contained in the firmware chips inside these devices. Firmware is often found in mobile devices, IoT equipment, edge computing devices, fog computing devices, and industrial control systems (ICS).
BIOS (basic input/output system) is the legacy low-end firmware embedded in an EEPROM or flash chip. It contains the OS-independent primitive instructions the computer needs to start up and load the OS from disk. UEFI (Unified Extensible Firmware Interface) has replaced BIOS in most modern systems. It supports larger hard drives, faster boot times, enhanced security features, and even the ability to use a mouse when making system changes.
Updating the UEFI, BIOS, or firmware is known as flashing. If hackers can do this, they may be able to bypass security features. Phlashing is when a malicious version of the official BIOS or firmware is installed.
Boot attestation, or secure boot, is a feature of UEFI that protects the local OS by preventing the loading or installation of device drivers or an OS that is not signed by a preapproved digital certificate. It protects against certain rootkits and backdoors. Measured boot is an optional feature that takes a hash calculation of every element involved in the booting process and compares the results to known-good values. It does not stop the boot; it records what is happening.
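The following C sketch is conceptual only; it is not real UEFI or TPM code and assumes OpenSSL's SHA256() is available (compile with -lcrypto). It mirrors the measured-boot idea of hashing a boot component, comparing against a known-good digest, and recording rather than blocking on a mismatch; the component data and expected digest are placeholders.

```c
/* Conceptual measured-boot-style sketch: hash a component, compare to a
 * known-good value, and log the result without halting the boot. */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

static void measure(const char *name, const unsigned char *image, size_t len,
                    const unsigned char expected[SHA256_DIGEST_LENGTH])
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(image, len, digest);

    if (memcmp(digest, expected, SHA256_DIGEST_LENGTH) == 0)
        printf("[measured boot] %s: hash matches known-good value\n", name);
    else
        printf("[measured boot] %s: HASH MISMATCH recorded (boot continues)\n", name);
}

int main(void)
{
    unsigned char bootloader[] = "pretend-bootloader-image";   /* placeholder data   */
    unsigned char known_good[SHA256_DIGEST_LENGTH] = {0};      /* placeholder digest */

    measure("bootloader", bootloader, sizeof bootloader, known_good);
    return 0;
}
```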
What is an applet?
Applets are code objects sent from a server to a client to perform some action. They are self-contained miniature programs that execute independently of the server that sent them. They are not as common as they used to be, and most modern browsers no longer support them. Example: a mortgage calculator that performs the calculations on the client side. This shifts the processing burden to the client, can be faster, and maintains the privacy of the user's data. However, security admins must take steps to make sure code sent to their network is safe.
Two examples were Java and ActiveX. Now JavaScript is the most widely used mobile code scripting language. It is embedded into HTML documents using <script></script> tags. It is dependent on its HTML document and is not standalone; therefore, it is not an applet. However, it is automatically downloaded along with the HTML document, and roughly 95 percent of websites use it. It enables dynamic web pages and supports web applications as well as client-side activities and page behaviors. Most browsers support JS via a dedicated JS engine. Most implementations use sandbox isolation to restrict JS to web-related activities while minimizing its ability to perform general-purpose programming tasks. Also, most browsers prohibit JS code from accessing content from another origin, where an origin is typically defined by the combination of protocol, domain/IP address, and port number.
However, there are ways to abuse JS. Attackers can create believable fake sites that duplicate a site's JS dynamic elements, and they have found ways to defeat sandbox isolation and same-origin protection. XSS and XSRF/CSRF can be used to exploit JS. The best protections are to keep browsers updated, implement JavaScript subsets server-side, and use a content security policy (CSP) that rigorously enforces same-origin restrictions. A web application firewall (WAF) or next-generation firewall (NGFW) can also help, as can the use of browser helper objects (BHOs), add-ons, and extensions.
Flash was another example of a legacy applet technology.
What are large-scale parallel data systems?
Aka parallel computing. These systems are designed to perform numerous calculations simultaneously, often dividing a large task into smaller elements and distributing each subelement to a different processing system. This is based on the idea that some problems can be solved efficiently if broken into smaller tasks that are worked on concurrently. The work can be done by distinct CPUs, multicore CPUs, virtual systems, or a combination.
A division is made between symmetric multiprocessing (SMP) and asymmetric multiprocessing (AMP). SMP is a single computer containing multiple processors that are treated equally and controlled by a single OS. In AMP, each processor usually has its own OS, dedicated data bus, and memory resources, and processors can be configured to execute only specific code or operate on specific tasks. A variation of AMP is massive parallel processing (MPP), where numerous AMP systems are linked together to work on a single primary task across multiple processes in multiple linked systems. This is used for computationally intensive operations that would overwhelm a single OS. Some MPP systems have over 10 million execution cores; a single processor breaks up the task and assigns the pieces to the other processors.
The advantage of SMP systems is that they are adept at handling simple operations at extremely high rates, whereas MPP systems are uniquely suited for processing very large, complex tasks.
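As a small SMP-style sketch (illustrative only; the thread count and data size are arbitrary), the following C program splits one large task, summing an array, into slices that worker threads process in parallel under a single OS.

```c
/* SMP-style sketch: one OS, multiple cores, a large task split into slices
 * that worker threads process in parallel. Compile with -lpthread. */
#include <pthread.h>
#include <stdio.h>

#define N_ITEMS   1000000
#define N_WORKERS 4

static int data[N_ITEMS];
static long partial[N_WORKERS];

static void *sum_slice(void *arg)
{
    long w = (long)arg;
    long start = w * (N_ITEMS / N_WORKERS);
    long end   = (w == N_WORKERS - 1) ? N_ITEMS : start + N_ITEMS / N_WORKERS;

    for (long i = start; i < end; i++)
        partial[w] += data[i];            /* each worker handles its own slice */
    return NULL;
}

int main(void)
{
    for (long i = 0; i < N_ITEMS; i++) data[i] = 1;

    pthread_t workers[N_WORKERS];
    for (long w = 0; w < N_WORKERS; w++)
        pthread_create(&workers[w], NULL, sum_slice, (void *)w);

    long total = 0;
    for (long w = 0; w < N_WORKERS; w++) {
        pthread_join(workers[w], NULL);
        total += partial[w];              /* combine the partial results */
    }
    printf("total = %ld (expected %d)\n", total, N_ITEMS);
    return 0;
}
```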
What is grid computing?
A form of parallel distributed processing that loosely groups a significant number of processing nodes to work toward a specific processing goal. Members of the grid can enter and leave the grid at random intervals. Often, grid members only join when their processing capacities are not being taxed for local workloads.
The concern with grid computing is that the content of each work packet is potentially exposed to the world. Grid members could keep copies of the work, so you would not want to use a grid for anything confidential. Also, grid members can vary greatly in terms of computational capacity, and packets are sometimes not returned, returned late, or returned corrupted. Grid computing often uses a central primary core of servers to manage the project; if those servers are overloaded or go offline, failure or crashing of the grid can occur. Compromise of the central servers could also be leveraged to attack grid members.