Architecture and Design Flashcards
What is data masking?
Data masking refers to a technique used to protect sensitive or confidential information by partially or fully concealing certain portions of the data. The purpose of data masking is to preserve the usability of the data for testing, development, or analysis purposes while ensuring that sensitive information is not exposed to unauthorized individuals.
Partial Masking
Partial masking replaces part of a data value, such as replacing every digit in a credit card number with “X” except the last four. It is useful when you don’t need full access to the entire data value, but unlike encrypted data, masked data is impossible to unmask. It prevents unauthorized users from viewing personal data and blocks spam bots from gaining access to information.
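A minimal sketch of partial masking in Python; the function name and output format are illustrative, not from any particular library:

```python
def mask_card_number(pan: str, visible_digits: int = 4) -> str:
    """Replace every digit except the last `visible_digits` with 'X'.

    Non-digit characters (spaces, dashes) are preserved so the
    masked value keeps the original formatting.
    """
    total_digits = sum(ch.isdigit() for ch in pan)
    digits_seen = 0
    out = []
    for ch in pan:
        if ch.isdigit():
            digits_seen += 1
            # Keep only the trailing `visible_digits` digits visible.
            out.append(ch if digits_seen > total_digits - visible_digits else "X")
        else:
            out.append(ch)
    return "".join(out)

print(mask_card_number("1111-2222-3333-4444"))  # XXXX-XXXX-XXXX-4444
```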
Dynamic Masking
Dynamic data masking is a data anonymization technique that limits sensitive data exposure by masking it for all non-authorized users, including QA and development teams that need data for testing.
Dynamic data masking allows data teams to specify the type and extent of sensitive data non-authorized users can access.
Dynamic data masking (DDM) aims to replace sensitive data in transit, leaving the original at-rest data intact and unaltered.
Unlike static data masking (SDM), DDM applies masking techniques at query time and does not involve moving, copying, or separating the data from its original source. This helps teams avoid confusion and silos around data copies that have been scrubbed and masked for different reasons. The data also remains updated and “live,” which is critical for analytics.
Since DDM is not tied to where the data is copied or stored, it is often considered the most widely applicable type of masking. It also scales easily to more complex policy scenarios and use cases, making compliance much easier to manage.
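As a toy illustration of the query-time idea, the sketch below masks fields when results are read, based on the caller’s role. In practice DDM is enforced by the database or a proxy layer; the role names and fields here are hypothetical:

```python
ROLES_WITH_FULL_ACCESS = {"dba", "compliance"}

def fetch_customer(row: dict, role: str) -> dict:
    """Return a copy of the row, masking sensitive fields at read time.

    The stored row is never modified -- masking happens only on the
    data returned to the caller, which is the core idea of DDM.
    """
    if role in ROLES_WITH_FULL_ACCESS:
        return dict(row)
    masked = dict(row)
    masked["ssn"] = "XXX-XX-" + row["ssn"][-4:]
    masked["email"] = row["email"][0] + "***@***"
    return masked

row = {"name": "Jane Doe", "ssn": "123-45-6789", "email": "jane@example.com"}
print(fetch_customer(row, role="developer"))  # masked view
print(fetch_customer(row, role="dba"))        # full view
```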
Encryption
Encryption is the process of converting plaintext data into ciphertext using an algorithm and a cryptographic key. The ciphertext can only be decrypted and read by individuals or systems with the appropriate decryption key.
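A minimal symmetric-encryption sketch in Python, assuming the third-party cryptography package is installed (its Fernet recipe uses AES with an integrity check under the hood):

```python
from cryptography.fernet import Fernet

# Generate a random symmetric key; in practice this key must be
# stored and protected, since it is also required for decryption.
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"account number: 1111-2222-3333-4444")
print(ciphertext)                  # unreadable without the key
print(cipher.decrypt(ciphertext))  # original plaintext bytes
```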
At rest
The term “at rest” refers to the state of data when it is not actively being used or transmitted. Data at rest is typically stored on physical or digital media, such as hard drives, solid-state drives, magnetic tapes, or any other storage device. The concept of data at rest is important in the context of information security, and securing data at rest is a key aspect of safeguarding sensitive information.
What is data in transit/motion?
Data in transit refers to the state of data when it is actively being transmitted or moved from one location to another. Data in transit is in motion, traveling over a network or communication channel between two or more devices.
Securing data in transit is crucial to maintaining the confidentiality and integrity of information as it traverses networks. This can include data transferred over the internet, intranets, extranets, or any other network infrastructure.
Unsecured data in transit is susceptible to various forms of attacks, such as man-in-the-middle attacks. One of the primary methods for securing data in transit is the use of encryption.
The use of secure communication protocols, such as HTTPS (Hypertext Transfer Protocol Secure) for web traffic or VPN (Virtual Private Network) connections, enhances the security of data in transit. These protocols incorporate encryption and authentication mechanisms. Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communication over a computer network; they are widely used to secure data in transit, especially in web applications.
Digital Signatures
A type of electronic signature that uses cryptography: a hash of the document is signed with the sender’s private key, so the signature can be verified with the corresponding public key.
Digital signatures can be used to verify the authenticity and integrity of data in transit. A digital signature ensures that the data has not been altered during transmission and that it originates from a trusted source.
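A hedged sketch of signing and verification using the cryptography package’s Ed25519 keys; real systems distribute the public key via certificates, and the message here is illustrative:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"wire $100 to account 42"
signature = private_key.sign(message)  # signed with the private key

try:
    # Anyone holding the public key can verify origin and integrity.
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("data was altered or not signed by this key")
```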
Tokenization
Tokenization replaces a sensitive data element, for example a bank account number, with a non-sensitive substitute known as a token. The token is a randomized data string that has no essential or exploitable value or meaning.
When a retailer or merchant processes a customer’s credit card, the PAN is replaced with a token: 1111-2222-3333-4444 is substituted by an alternative such as Gb&t23d%kl0U. The merchant may use the token to maintain client records; for example, Gb&t23d%kl0U is associated with Jane Doe. The token then goes to the payment processor, who de-tokenizes it and verifies the payment: Gb&t23d%kl0U maps back to 1111-2222-3333-4444. The token is readable only by the payment processor; it has no value to anybody else. Additionally, the token may only be used with that specific merchant.
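A toy token-vault sketch of this flow in Python; production tokenization uses a hardened vault service, and the token format here is illustrative:

```python
import secrets

class TokenVault:
    """Maps random tokens to original PANs. Only the vault holder
    (the payment processor in the example above) can de-tokenize."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        token = secrets.token_urlsafe(9)  # random string, carries no meaning
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("1111-2222-3333-4444")
print(token)                    # e.g. 'mJ3kQ9xLrT2w'
print(vault.detokenize(token))  # '1111-2222-3333-4444'
```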
Rights management
In the context of the Security+ exam, “rights management” typically refers to Digital Rights Management (DRM) or Information Rights Management (IRM). Digital Rights Management is a broader term used to control access to digital content, whereas Information Rights Management is more specific to securing and controlling access to documents and sensitive information within organizations. Encryption is often a key component of rights management: it ensures that even if unauthorized access occurs, the content remains unreadable without the proper decryption keys. This is crucial for protecting sensitive information.
DRM
DRM is commonly used to protect digital media, such as music, videos, e-books, and software. It helps prevent unauthorized copying, distribution, and use of copyrighted materials. Apple’s iTunes store uses DRM to limit how many devices customers can use to listen to songs. Audio files that users download from iTunes include data about their purchase and usage of songs. This prevents the files from being accessed on unauthorized devices. Apple also protects the content in its iBooks store with FairPlay technology, which ensures books can only be read on iOS devices.
What does IRM apply to?
Unlike traditional Digital Rights Management (DRM) that applies to mass-produced media like songs and movies, IRM applies to documents, spreadsheets, and presentations created by individuals.
Geographical considerations
Geographical considerations refer to factors and practices related to the physical locations and environments in which information technology (IT) systems and assets are deployed.
Geographical considerations encompass a range of aspects, including:
physical security,
environmental conditions,
regulatory requirements,
and disaster recovery planning.
Evaluating and securing physical locations where IT equipment is housed, such as data centers, server rooms, or network closets.
Maintaining optimal environmental conditions within data centers ensures that IT equipment operates reliably; extreme temperatures or humidity can adversely affect hardware performance. Implementing fire suppression systems protects IT equipment from potential fire hazards and minimizes the risk of data loss or hardware damage. Implementing redundancy across geographically dispersed locations ensures business continuity and high availability in the event of a localized disaster or disruption.
Response and recovery controls
Measures and processes designed to address and mitigate the impact of security incidents and disruptions to normal business operations.
Incident response controls are proactive measures and plans put in place to detect, respond to, and recover from security incidents promptly. They include the establishment of an incident response team, incident response plans, communication procedures, and tools for monitoring and detecting security incidents. This may involve intrusion detection systems, log analysis, and other monitoring mechanisms.
Business continuity and recovery controls focus on ensuring that critical business functions can continue or resume after a disruptive event. They include disaster recovery plans, backup and recovery procedures, redundant systems, and off-site data storage. Examples are implementing redundancy in critical systems to ensure continuous operation in the event of hardware or software failures, and designing systems with high-availability architectures to minimize downtime and ensure rapid recovery.
SSL
The primary purpose of SSL is to establish a secure and encrypted connection between a client (such as a web browser) and a server. SSL uses encryption algorithms to encrypt the data transmitted between the client and server. This encryption helps prevent unauthorized parties from intercepting and understanding the information being exchanged. While SSL was widely used, it has been succeeded by TLS, which is an updated and more secure version of the protocol. The terms SSL and TLS are often used interchangeably, and the Security+ exam may reference both.
TLS
TLS stands for Transport Layer Security. TLS is a cryptographic protocol designed to secure communication over a computer network, ensuring the privacy and integrity of data exchanged between systems. It is the successor to the earlier Secure Sockets Layer (SSL) protocol, and the terms TLS and SSL are often used interchangeably. TLS employs encryption algorithms to secure the data transmitted between the client and server. This encryption helps prevent unauthorized interception and ensures that even if the data is intercepted, it remains unreadable without the appropriate decryption key.
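A minimal TLS client sketch using Python’s standard ssl module; it performs the handshake, verifies the server certificate against the system’s trusted CAs, and reports the negotiated protocol version (the hostname is illustrative):

```python
import socket
import ssl

hostname = "example.com"  # illustrative host
context = ssl.create_default_context()  # verifies certs against system CAs

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # negotiated cipher suite
        # All data sent through `tls` from here on is encrypted.
```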
Hashing
Hashing applies a cryptographic hash function, such as MD5 or SHA-256, to convert an input string into a fixed-size hash value.
It is a one-way function, meaning it is computationally infeasible to reverse the process and obtain the original input from the hash value.
Hashing is used to verify the integrity of data. By comparing the hash value of the original data with the hash value of the received or stored data, it can be determined whether the data has been altered or tampered with. Hashing is commonly used to store passwords securely. Hashing is a key component of digital signatures. In digital signatures, a hash value of a message is signed using a private key, and the recipient can use the corresponding public key to verify the authenticity and integrity of the message. To enhance security, a random value called a “salt” can be added to the input before hashing. Salting prevents attackers from using precomputed tables (rainbow tables) to quickly determine the original input.
Common cryptographic hash functions include MD5, SHA-1, SHA-256, and SHA-3. However, MD5 and SHA-1 are considered weak for security purposes, and SHA-256 and SHA-3 are commonly recommended for stronger security.
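A short Python illustration using the standard hashlib module: a plain SHA-256 digest of the kind used for integrity checks, and a salted, slow PBKDF2 hash of the kind used for password storage (the iteration count is an illustrative choice):

```python
import hashlib
import os

# Integrity: the same input always yields the same digest,
# so any alteration of the data changes the hash.
data = b"important message"
print(hashlib.sha256(data).hexdigest())

# Password storage: add a random salt and use a slow key-derivation
# function so rainbow tables and brute force become impractical.
salt = os.urandom(16)
pw_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
print(salt.hex(), pw_hash.hex())
```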
API considerations
API considerations refer to the security-related factors that should be considered when developing, implementing, or interacting with APIs.
Use strong authentication methods such as API keys, OAuth tokens, or other secure authentication protocols. Implement input validation to prevent common vulnerabilities like SQL injection and cross-site scripting (XSS).
Validate and sanitize inputs received from clients before processing them to ensure data integrity.
Implement rate limiting and throttling mechanisms to prevent abuse, limit the number of requests a client can make in a given timeframe, and protect against denial-of-service (DoS) attacks (see the sketch after this list).
Design secure error-handling mechanisms. Implement robust logging mechanisms to record API activities and monitor for any abnormal or potentially malicious behavior.
Use versioning to manage changes to APIs.
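A hedged sketch of the rate-limiting idea from the list above: a simple per-client token bucket in Python (the capacity, refill rate, and function names are illustrative):

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests, refilled at `rate` tokens/second."""

    def __init__(self, capacity: float = 10, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond with HTTP 429 Too Many Requests

buckets = {}  # one bucket per client, keyed by API key or IP address

def handle_request(client_id: str) -> str:
    bucket = buckets.setdefault(client_id, TokenBucket())
    return "OK" if bucket.allow() else "429 Too Many Requests"
```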
Configuration Management
Configuration management refers to the practice of managing and controlling changes to the configuration of software, networks, or the company’s information systems.
Baseline configuration
A known and stable state of the system. It serves as a reference point.
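A small sketch of how a baseline can be used in practice: compare the current configuration against the stored baseline and report drift (the setting names and values are illustrative):

```python
BASELINE = {"ssh_root_login": "no", "firewall": "enabled", "tls_min_version": "1.2"}

def config_drift(current: dict) -> dict:
    """Return settings that deviate from the baseline reference point."""
    return {
        key: {"expected": expected, "actual": current.get(key)}
        for key, expected in BASELINE.items()
        if current.get(key) != expected
    }

current = {"ssh_root_login": "yes", "firewall": "enabled", "tls_min_version": "1.2"}
print(config_drift(current))
# {'ssh_root_login': {'expected': 'no', 'actual': 'yes'}}
```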
Standard naming convention
There must be a consistent naming convention when labeling entities within an organization, because it helps ensure consistency, clarity, and easier management of the company’s assets.
Data loss prevention (DLP)
Data Loss Prevention (DLP) is the practice of detecting and preventing data breaches, exfiltration, or unwanted destruction of sensitive data. Organizations use DLP to protect and secure their data and comply with regulations.
For example, an Intrusion Detection System (IDS) can alert about attacker attempts to access to sensitive data. Antivirus software can prevent attackers from compromising sensitive systems. A firewall can block access from any unauthorized party to systems storing sensitive data.
Masking
In the context of data protection, “masking” refers to a technique used to protect sensitive or confidential information by partially or fully concealing certain portions of the data.
Hoax
Hoaxes are security threats that seem like they could exist but are in fact not real at all; even so, they consume a lot of time. A hoax is a deceptive act or piece of false information intended to deceive or trick somebody into giving up sensitive information. Spam filters can help, especially cloud spam filters, where other people who have already received the same email can mark it as malicious.
Influence campaigns
A coordinated effort across different communication channels with the aim of shaping perception and opinion, spreading propaganda, or disinforming the target audience. These campaigns typically come from political groups or nation-states, and can even be conducted in person.
OSINT
Open-Source Intelligence refers to the practice of collecting and analyzing information from publicly available sources to gather intelligence. It does not involve unauthorized access; the information is public. Analysts collect data from websites, social platforms, government records, online forums, and more, and perform WHOIS and DNS record lookups. They adhere to ethical standards.