CompTIA Security+ Acronyms Revision - Diamond Flashcards

1
Q

ICMP

A

Many network devices communicate with each other.
But how do they report a connection problem, a data-transmission error, or the need to use a particular path to reach a network device?
This is where the ICMP protocol (Internet Control Message Protocol) comes in, thanks to its ability to send messages.
Together with TCP and UDP, ICMP is one of the fundamental protocols that allow a network to function.

What is the ICMP protocol?

ICMP is a network-level protocol (layer 3 of the OSI model).
ICMP stands for Internet Control Message Protocol and, as its name suggests, it is a message-oriented protocol.

Devices on a network use ICMP messages to report data-transmission problems.
It is therefore used to send information about network connectivity problems back to the source of the compromised transmission. It carries control messages such as "destination network unreachable", "source route failed" and "source quench".
For example:

Reporting network errors: for example, a host or an entire portion of the network is unreachable due to some failure. A TCP or UDP packet directed at a port number with no listener attached is also reported via ICMP
Reporting network congestion: when a router receives packets faster than it can forward them, it generates ICMP Source Quench messages. Directed at the sender, these messages should cause the rate of packet transmission to slow down

One of the main ways ICMP is used is therefore to determine whether data is reaching its destination in a timely manner. ICMP is thus an important part of error reporting and of the tests that check whether a network is delivering data correctly.

ICMP is also used by tools such as ping and traceroute.

Ping relies on ICMP Echo messages: it sends an Echo Request (type 8) and checks whether the host answers with an Echo Reply (type 0) to verify it is online. This also measures the response time: the latency.
Traceroute relies on the TTL: when it reaches 0, an ICMP message is sent back. Traceroute analyses these replies to build the path, or map, of the connection
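The traceroute principle described above can be simulated in a few lines of Python. This is a sketch, not a real probe sender: the router addresses are hypothetical, and `send_probe` stands in for actually emitting a packet with a given TTL and receiving the ICMP Time Exceeded reply.

```python
# Simulation of the traceroute principle: probes are sent with an
# increasing TTL; the router at which the TTL hits 0 answers with an
# ICMP Time Exceeded, revealing one hop of the path.
# The router addresses below are made up for illustration.
PATH = ["192.168.1.1", "10.0.0.1", "203.0.113.7", "198.51.100.9"]

def send_probe(ttl: int):
    """Return (address where the probe stopped, True if its TTL expired
    in transit / False if it reached the final destination)."""
    hop = min(ttl, len(PATH)) - 1
    expired_in_transit = ttl < len(PATH)
    return PATH[hop], expired_in_transit

def traceroute():
    route = []
    ttl = 1
    while True:
        hop, expired = send_probe(ttl)
        route.append(hop)
        if not expired:   # destination reached: Echo Reply, not Time Exceeded
            return route
        ttl += 1          # the next probe survives one router further

assert traceroute() == PATH
```

Real traceroute works exactly like this loop, except that each probe is an actual packet and each "Time Exceeded" is a real ICMP message from the router at that hop.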

What are the ICMP error messages?
List of error codes and error messages

Type | Code | Description
3 | 0-15 | Destination unreachable - notification of a packet that could not be delivered. The packet is dropped; the code field provides the explanation.
5 | 0-3 | Redirect - announces an alternative route for the datagram and should trigger a routing-table update. The code field explains the reason for the route change.
11 | 0,1 | Time exceeded - sent when the TTL field has reached zero (code 0) or when there is a timeout during fragment reassembly (code 1).
12 | 0,1 | Parameter problem - sent when the IP header is invalid (code 0) or when a required option of the IP header is missing (code 1).
The ICMP types and messages
The Destination unreachable type

This type reports network errors when one network device cannot communicate with another.
Here are the error codes and messages for the Destination unreachable type (type 3).
Code | ICMP message
0 Destination network unreachable
1 Destination host unreachable
2 Destination protocol unreachable
3 Destination port unreachable
4 Fragmentation required, and DF flag set
5 Source route failed
6 Destination network unknown
7 Destination host unknown
8 Source host isolated
9 Network administratively prohibited
10 Host administratively prohibited
11 Network unreachable for ToS
12 Host unreachable for ToS
13 Communication administratively prohibited
14 Host Precedence Violation
15 Precedence cutoff in effect
The ICMP messages for the Destination unreachable type
Error message | Description
Destination Unreachable | Generated when a data packet cannot reach its final destination for some other reason: hardware failures, port failures, network disconnections, and so on.
Redirection Error | Generated by a router to request that the flow of data packets be sent over a different route than originally planned. This is often done to optimize network traffic, in particular when another path lets the packets reach their destination in less time, and it implies updating the routing tables of the routers concerned.
Source Quench | A message sent back towards the source to reduce the flow of network traffic it is sending. In other words, the congested device detects that the rate of data-packet transmission is too high and asks the sender to slow down, so that the destination computer receives all the packets it is supposed to receive.
Time Exceeded | Reports the expiry of the network-based Time to Live (TTL).
Descriptions of the main ICMP error messages
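The type/code pairs in the tables can be turned into a small lookup table. A minimal sketch, covering only a handful of the codes listed above:

```python
# Partial map of (ICMP type, code) -> human-readable message,
# taken from the tables in this card.
ICMP_MESSAGES = {
    (3, 0): "Destination network unreachable",
    (3, 1): "Destination host unreachable",
    (3, 3): "Destination port unreachable",
    (5, 0): "Redirect Datagram for the Network",
    (11, 0): "TTL expired in transit",
    (11, 1): "Fragment reassembly time exceeded",
    (12, 2): "Bad length",
}

def describe(icmp_type: int, code: int) -> str:
    # Fall back to a generic label for codes not in the partial map.
    return ICMP_MESSAGES.get(
        (icmp_type, code), f"Unknown (type {icmp_type}, code {code})"
    )

assert describe(3, 3) == "Destination port unreachable"
assert describe(11, 0) == "TTL expired in transit"
```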
The Redirect type

This type is used by routers to announce an alternative route for the datagram.
It is designed to inform a host of a more optimal route through a network, which may trigger a routing-table update.
Code | ICMP message
0 Redirect Datagram for the Network
1 Redirect Datagram for the Host
2 Redirect Datagram for the ToS & network
3 Redirect Datagram for the ToS & host
The ICMP messages for the Redirect type
The Time Exceeded type

The ICMP Time Exceeded message is generated when the gateway processing the datagram (or packet, depending on how you look at it) finds that the Time To Live field (present in the IP header of every packet) has reached zero, so the packet must be discarded. The same gateway can also notify the source host via the Time Exceeded message.

To "fragment" means to cut into pieces. When the data is too large to fit in a single packet, it is cut into smaller pieces and sent to the destination. At the other end, the destination host receives the fragments and reassembles them to recreate the original large packet that was fragmented at the source.
Code | ICMP message
0 TTL expired in transit
1 Fragment reassembly time exceeded
The ICMP messages for the Time Exceeded type
The Parameter Problem type

ICMP Parameter Problem messages indicate that a host or router could not interpret an invalid parameter in an IPv4 datagram header.
When a host or router on the network finds a bad parameter in an IPv4 datagram header, it drops the packet and sends an ICMP Parameter Problem message to the original sender.
The ICMP Parameter Problem message also carries an optional pointer that tells the sender where in the original IPv4 header the error occurred.
Code | ICMP message
0 Pointer indicates the error
1 Missing a required option
2 Bad length
The ICMP messages for the Parameter Problem type
What is the structure of an ICMP packet (datagram)?

ICMP uses a data-packet structure with an 8-byte header and a variable-size data section.
The structure of an ICMP packet (datagram)

Here is a description of the fields of the ICMP header:

Type: the type of ICMP message
Code: the subtype of the ICMP message
Checksum: similar to the IP header checksum; it is computed over the entire ICMP message
Rest of header: additional data, which may be zero when unused

ICMP error messages contain a data section that includes a copy of the full IPv4 header, plus at least the first eight bytes of data from the IPv4 packet that triggered the error message.
The length of ICMP error messages must not exceed 576 bytes.
The host uses this data to match the message to the appropriate process. If a higher-level protocol uses port numbers, they are assumed to be in the first eight bytes of the original datagram's data.[6]
ICMP-based attacks

The ICMP protocol can be abused to carry out attacks.
Here are a few examples.

The Redirect type can be used maliciously in attacks that aim to redirect traffic towards a specific system. In this kind of attack the attacker, posing as a router, sends an ICMP (Internet Control Message Protocol) Redirect message to a host, telling it that all future traffic should be directed to a specific system as the more optimal route to the destination.
An IDS can be used to raise an alert when these ICMP Redirect messages occur, or the host can be configured to ignore them.

Then there are DoS and ICMP flood attacks:

Ping of death: this attack exploits the variable size of the data section of the ICMP packet.
In the ping of death, oversized or fragmented ICMP packets are used for denial-of-service attacks. ICMP data can also be used to create covert communication channels, known as ICMP tunnels.

Smurf attack: the attacker sends an ICMP packet with a spoofed (forged) source IP address. When the network equipment replies, each reply is sent to the spoofed IP address, and the target is flooded with a ton of ICMP packets. This type of attack is generally only a problem for legacy equipment.

Twinge attack: this attack is similar to the ping flood attack, but the ICMP echo requests come from multiple computers rather than a single one. They also carry a fake source IP address in the packet header.

Related reading:

Should you block ICMP? Pros and cons
Iptables: blocking ping (ICMP)
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

SLE

A

Single Loss Expectancy = AV x EF

3
Q

AV

A

asset value

4
Q

EF

A

exposure factor

5
Q

NDP

A

Neighbor Discovery Protocol - a protocol used by IPv6. It operates at layer 3 and is responsible for discovering other hosts on the same link, determining their addresses, and identifying the routers present.

6
Q

ARO

A

Annualized Rate of Occurrence

7
Q

ALE

A

Annual Loss Expectancy = SLE x ARO
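The two risk formulas (SLE = AV × EF and ALE = SLE × ARO) combine as in this small worked example; the asset value, exposure factor and ARO below are made-up illustrative numbers:

```python
def sle(asset_value: float, exposure_factor: float) -> float:
    # Single Loss Expectancy: expected cost of one occurrence of the risk.
    return asset_value * exposure_factor

def ale(single_loss_expectancy: float, aro: float) -> float:
    # Annual Loss Expectancy: expected yearly cost, given the
    # Annualized Rate of Occurrence.
    return single_loss_expectancy * aro

# Hypothetical example: a $100,000 server, a fire would destroy 25% of it
# (EF = 0.25), and such a fire is expected once every two years (ARO = 0.5).
loss = sle(100_000, 0.25)
assert loss == 25_000
assert ale(loss, 0.5) == 12_500
```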

8
Q

qualitative risk analysis

A

subjective, experience-based assessment (essentially educated guessing)

9
Q

quantitative risk analysis

A

numbers and costs

10
Q

FISMA

A

Federal Information Security Management Act - a framework law intended to protect the US government against cybercrime and natural disasters that pose a risk to sensitive data.

11
Q

PCI-DSS

A

Payment Card Industry Data Security Standard

12
Q

(PTA)

A

physical (fences, door locks, alarm systems, security guards), technical (safeguards, countermeasures), and administrative (changing the behavior of people) - the 1st way of classifying security controls

13
Q

NIST

A

National Institute of Standards and Technology

14
Q

DSA

A

Digital Signature Algorithm

The Digital Signature Algorithm (DSA) is one of the Federal Information Processing Standards for making digital signatures. It relies on the mathematical concepts of modular exponentiation and the discrete logarithm problem.

Digital signatures are the public-key primitives of message authentication in cryptography. In the physical world, it is common to use handwritten signatures on handwritten or typed messages; they are used to bind the signatory to the message.

Similarly, a digital signature is a technique that binds a person or entity to digital data. This binding can be independently verified by the receiver as well as by any third party.

A digital signature is a cryptographic value calculated from the data and a secret key known only to the signer.

In the real world, the receiver of a message needs assurance that the message belongs to the sender, and the sender should not be able to repudiate the origination of that message. This requirement is crucial in business applications, since the likelihood of a dispute over exchanged data is high.

Block diagram of the digital signature scheme
The digital signature scheme is based on public-key cryptography.
Explanation of the block diagram
Each person adopting this scheme has a public-private key pair.
The key pairs used for encryption/decryption and for signing/verifying are different. The private key used for signing is referred to as the signature key and the public key as the verification key.
The signer feeds the data to a hash function, which generates a hash of the data.
The hash value and the signature key are then fed to the signature algorithm, which produces the digital signature for the given hash. The signature is appended to the data, and both are sent to the verifier.
The verifier feeds the digital signature and the verification key into the verification algorithm, which produces some value as output.
The verifier also runs the same hash function on the received data to generate its own hash value.
For verification, this hash value and the output of the verification algorithm are compared. Based on the result of the comparison, the verifier decides whether the digital signature is valid or invalid.
Since the digital signature is created with the private key of the signer, which no one else can possess, the signer cannot later repudiate having signed the data.
Importance of digital signatures
Digital signatures based on public-key cryptography are considered a very important and useful tool for achieving information security.

Apart from providing non-repudiation of the message, a digital signature also provides message authentication and data integrity.

These are achieved as follows:

Message authentication: when the verifier validates the digital signature using the public key of the sender, he is assured that the signature was created only by the sender, who alone possesses the corresponding private key.
Data integrity: if an attacker gains access to the data and modifies it, signature verification at the receiver's end fails, because the hash of the modified data will not match the output of the verification algorithm. The receiver can then safely reject the message, assuming that data integrity has been breached.
Non-repudiation: since only the signer knows the signature key, only he can create that signature on the given data. The receiver can therefore present the data and the digital signature to a third party as evidence if any dispute arises in the future.
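The hash → sign → verify flow described above can be sketched with textbook RSA and deliberately tiny numbers. This is a toy, not DSA itself and not secure; the key values (p=61, q=53, e=17, d=2753) are the classic small-example parameters, used purely to show the mechanics:

```python
import hashlib

# Toy "textbook RSA" signature sketch (NOT DSA, NOT secure):
# illustrates hashing the data, signing the hash with the private key,
# and verifying the signature with the public key.
n = 61 * 53            # public modulus (3233)
e = 17                 # public verification exponent
d = 2753               # private signature exponent (kept secret by the signer)

def h(msg: bytes) -> int:
    # Hash the message, then reduce modulo n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Signature = hash raised to the private exponent, modulo n.
    return pow(h(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Raising the signature to the public exponent must recover the hash.
    return pow(sig, e, n) == h(msg)

msg = b"pay Alice 100"
sig = sign(msg)
assert verify(msg, sig)            # authentic message verifies

tampered = b"pay Mallory 100"
if h(tampered) != h(msg):          # true except for a 1-in-3233 toy-hash clash
    assert not verify(tampered, sig)
```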

15
Q

Public Key Cryptography

A

Asymmetric algorithms are also known as Public Key Cryptography

▪ Confidentiality
▪ Integrity
▪ Authentication
▪ Non-repudiation

Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key.[1][2] Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security.[3]

In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message.[4]

For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources’ messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message—it just conceals the content of a message in a ciphertext that can only be decrypted with the private key.

In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key.[5][6]

For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.

Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols which offer assurance of the confidentiality, authenticity and non-repudiability of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME and PGP. Some public key algorithms provide key distribution and secrecy (e.g., Diffie–Hellman key exchange), some provide digital signatures (e.g., Digital Signature Algorithm), and some provide both (e.g., RSA). Asymmetric encryption is considerably slower than good symmetric encryption, too slow for many purposes.[7] Today's cryptosystems (such as TLS, Secure Shell) use both symmetric encryption and asymmetric encryption, often by using asymmetric encryption to securely exchange a secret key which is then used for symmetric encryption.
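The encrypt-with-public-key / decrypt-with-private-key flow can be sketched with textbook RSA and deliberately tiny primes. These are insecure toy numbers (the classic p=61, q=53 example), used purely to show the mechanics:

```python
# Classic toy RSA parameters.
p, q = 61, 53
n = p * q                  # public modulus (3233)
e = 17                     # public exponent
d = 2753                   # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def encrypt(m: int) -> int:
    # Anyone holding the public key (n, e) can produce the ciphertext.
    return pow(m, e, n)

def decrypt(c: int) -> int:
    # Only the private-key holder (d) can recover the plaintext.
    return pow(c, d, n)

m = 65                     # in textbook RSA the message must be < n
c = encrypt(m)
assert c != m              # the ciphertext hides the plaintext value
assert decrypt(c) == m     # the private key recovers it exactly
```

Real systems never encrypt raw message integers like this; as the card notes, asymmetric encryption is typically used to exchange a symmetric key, with padding schemes on top.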

16
Q

MOT

A

management (decision-making and the management of risk), operational (things that are done by people) and technical (controls) - the NIST MOT classification (2nd way of classifying controls). Management controls are about how your system's security is managed and overseen: policies, procedures, legal compliance, software development methodologies. Operational controls are focused on things done by people: user training, configuration management, testing disaster recovery plans, and conducting incident handling. Technical controls are put into a system to help secure it: AAA (authentication, authorization, and accounting), access control, encryption technology, passwords, and the configuration of security devices. Anything technical performed by the computer can be put into this category.

17
Q

PDC

A

preventive, detective, and corrective - the 3rd way of classifying controls; some controls fall into multiple categories. Preventive controls are installed before an event happens and are designed to stop something from occurring. Detective controls are used during an event to find out whether something bad may have happened. Corrective controls are used after an event occurs. A closed-circuit TV system is both a detective control and a physical control; a password policy is a management control but also an administrative control (policies…). A compensating control is used whenever you cannot meet the requirements of a normal control.

18
Q

IP

A

Intellectual Property

19
Q

DLP

A

data loss prevention systems - to fight IP theft…

20
Q

SMB

A

Server Message Block is a service for file sharing on port 445

21
Q

TTX

A

Table-top exercises - exercises that use an incident scenario against a framework of controls or a red team.

Tabletop Exercise (TTX): A security incident preparedness activity, taking participants through the process of dealing with a simulated incident scenario and providing hands-on training for participants that can then highlight flaws in incident response planning.

The exercise begins with the Incident Response Plan and gauges team performance against the following questions:

What happens when you encounter a breach?
Who does what, when, how, and why?
What roles will legal, IT, law enforcement, marketing, and company officers play?
Who is spearheading the effort and what authority do they have?
What resources are available when you need them?

Since most companies are unprepared when a cyber attack occurs, every company needs a well-executed Incident Response Plan. You do not want to wait until a cyber attack occurs to figure out what you need to do.

https://www.redlegg.com/solutions/advisory-services/tabletop-exercise-pretty-much-everything-you-need-to-know

22
Q

pentest

A

penetration test

23
Q

OVAL (pentest)

A

Open Vulnerability and Assessment Language - OVAL is an attempt to create a standard way for vulnerability management software, scanners, and other tools to share their data with each other and with other programs.

24
Q

(KOCLA)

A

Knowledge, ownership, characteristic, location, and action - the five basic factors of authentication you can consider when determining whether somebody is who they say they are. Because a username and a password are both something you know (a knowledge factor), using both is still considered single-factor authentication.

25
Q

OTP

A

one-time password - generally implemented using either a time-based or a hash-based mechanism; the time-based approach is actually a variation of the hash-based approach, which is known as the HMAC-based One-Time Password.

26
Q

TOTP

A

Time-based One-Time Password - the password is computed from a shared secret and the current time.

27
Q

HOTP

A

HMAC-based One-Time Password (HOTP) - a one-time password (OTP) algorithm based on hash-based message authentication codes (HMAC).
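The HOTP computation can be sketched with the standard library in a few lines: an HMAC-SHA1 over a counter, followed by dynamic truncation to a short decimal code. TOTP then falls out as HOTP with a time-derived counter. The assertion uses the first published RFC 4226 test vector (secret "12345678901234567890", counter 0):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based One-Time Password (RFC 4226-style sketch)."""
    # 1. HMAC-SHA1 over the 8-byte big-endian counter.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # 2. Dynamic truncation: the low nibble of the last byte selects
    #    a 4-byte window; the top bit is masked off.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    # 3. Keep the requested number of decimal digits.
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    # TOTP is just HOTP with the counter derived from the current time.
    return hotp(secret, unix_time // step)

# First test vector from RFC 4226, Appendix D.
assert hotp(b"12345678901234567890", 0) == "755224"
```

Because both sides derive the same counter (an event count for HOTP, the current 30-second window for TOTP), they compute the same code without ever transmitting the shared secret.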

28
Q

HMAC

A

hash-based message authentication code

29
Q

SSO

A

Single Sign-On

30
Q

FIDM

A

Federated IDentity Management

31
Q

SAML

A

Security Assertion Markup Language

32
Q

OpenID

A

an open standard, decentralized authentication protocol

33
Q

RADIUS

A

Remote Authentication Dial-In User Service - standard ports 1812/1813, or legacy proprietary ports 1645/1646 (UDP). RADIUS is cross-platform and uses port 1812 for its authentication messages and port 1813 for its accounting messages. It provides centralized administration of dial-up, VPN, and wireless authentication, so it can be used with both 802.1X and the Extensible Authentication Protocol (EAP). RADIUS is a client/server protocol that runs at the application layer (layer 7).

34
Q

TACACS+

A

Terminal Access Controller Access-Control System Plus - TCP port 49 - a proprietary protocol from Cisco that can perform the role of an authenticator in an 802.1x network. It supports all network protocols and uses AAA processes.

35
Q

EAP

A

Extensible Authentication Protocol - EAP is not a single protocol by itself, but a framework and series of protocols that allows for numerous different authentication mechanisms, including simple passwords, digital certificates, and public key infrastructure.

36
Q

TTLS

A

Tunneled Transport Layer Security

37
Q

EAP-TTLS

A

EAP Tunneled Transport Layer Security (EAP-TTLS) is an EAP protocol that extends TLS.

This form of EAP requires a digital certificate on the server, but not on the client; instead, the client uses a password for its authentication. This makes it more secure than the traditional EAP-MD5, which just uses passwords, but less secure than EAP-TLS, which removes the password vulnerability by using two digital certificates.


It was co-developed by Funk Software and Certicom and is widely supported across platforms. Microsoft did not incorporate native support for the EAP-TTLS protocol in Windows XP, Vista, or 7; supporting TTLS on these platforms requires third-party Encryption Control Protocol (ECP) certified software. Microsoft Windows added EAP-TTLS support with Windows 8,[19] and support for EAP-TTLS[20] appeared in Windows Phone version 8.1.[21]

The client can, but does not have to, be authenticated to the server via a CA-signed PKI certificate. This greatly simplifies the setup procedure, since a certificate is not needed on every client.

After the server is securely authenticated to the client via its CA certificate and optionally the client to the server, the server can then use the established secure connection (“tunnel”) to authenticate the client. It can use an existing and widely deployed authentication protocol and infrastructure, incorporating legacy password mechanisms and authentication databases, while the secure tunnel provides protection from eavesdropping and man-in-the-middle attack. Note that the user’s name is never transmitted in unencrypted clear text, improving privacy.

38
Q

EAP-FAST

A

EAP flexible authentication via secure tunneling

EAP-FAST (Flexible Authentication via Secure Tunneling) was developed by Cisco. Instead of using a certificate to achieve mutual authentication, EAP-FAST authenticates by means of a PAC (Protected Access Credential), which can be managed dynamically by the authentication server. The PAC can be provisioned (distributed one time) to the client either manually or automatically. Manual provisioning is delivery to the client via disk or a secured network distribution method; automatic provisioning is an in-band, over-the-air distribution.

39
Q

PEAP

A

Protected EAP

40
Q

LEAP

A

Lightweight EAP

41
Q

LDAP

A

Lightweight Directory Access Protocol - port 389; port 636 for LDAPS (LDAP Secure, using SSL). An application layer protocol for accessing and modifying directory services data (AD uses it). AD is Microsoft's implementation of an LDAP-based directory service.

42
Q

LDAPS

A

Lightweight Directory Access Protocol Secure - LDAP over SSL, on port 636 (plain LDAP uses port 389).

43
Q

AD

A

Active Directory

44
Q

DC

A

domain controller - in Kerberos, the DC acts as the key distribution center (KDC). The KDC has two basic functions: authentication and ticket granting.

45
Q

KDC

A

key distribution center (Kerberos) - the KDC has two basic functions: authentication and ticket granting.

46
Q

TGT

A

ticket-granting ticket (part of the Kerberos authentication process)

47
Q

RDP

A

Remote desktop protocol (port 3389) for remote desktop service.

48
Q

VNC

A

Virtual Network Computing for remote desktop service.

49
Q

GUI

A

graphical user interface - see VNC

50
Q

PAP

A

Password Authentication Protocol (for remote access service)

51
Q

CHAP

A

Challenge Handshake Authentication Protocol - for remote access service; an authentication scheme used mostly with dial-up connections.

52
Q

EAP

A

Extensible Authentication Protocol (for remote access service) - used mostly with dial-up

53
Q

MS-CHAP

A

Microsoft Challenge Handshake Authentication Protocol for remote access service.

54
Q

VPN

A

Virtual private network

55
Q

PTP

A

Point-to-Point Tunneling Protocol (usually abbreviated PPTP) (#2) - one of the two tunneling protocols VPNs commonly rely on when they're being operated.

56
Q

L2TP

A

Layer 2 Tunneling Protocol (#1) - one of the two tunneling protocols VPNs commonly rely on when they're being operated.

57
Q

RAS

A

Remote Access Services

58
Q

MITM

A

Man-In-The-Middle attack - an attack where the attacker secretly relays and possibly alters the communications between two parties who believe they are communicating directly with each other. One example of a MITM attack is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the attacker controls the entire conversation. The attacker must be able to intercept all relevant messages passing between the victims.

59
Q

MITB

A

Man-In-The-Browser

60
Q

POS

A

Point of sale. A payment terminal, also known as a POS terminal or credit card terminal, allows a merchant to capture required credit and debit card information, transmit this data to the merchant services provider or bank for authorization and, finally, transfer funds to the merchant.

61
Q

DAC

A

Discretionary Access Control (file owner) - access control models

62
Q

MAC

A

Mandatory Access Control - access control model. Do not confuse with Message Authentication Code (see HMAC, hash-based message authentication code) or with Media Access Control (MAC address).

Mandatory Access Control
Mandatory access control is a method of limiting access to resources based on the sensitivity of the information that the resource contains and the authorization of the user to access information with that level of sensitivity.

You define the sensitivity of the resource by means of a security label. The security label is composed of a security level and zero or more security categories. The security level indicates a level or hierarchical classification of the information (for example, Restricted, Confidential, or Internal). The security category defines the category or group to which the information belongs (such as Project A or Project B). Users can access only the information in a resource to which their security labels entitle them. If the user’s security label does not have enough authority, the user cannot access the information in the resource.
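The security-label check described above can be sketched as a simple dominance test: access is allowed only when the user's level is at least the resource's level and the user's categories cover all of the resource's categories. The levels and categories below are the illustrative ones from the paragraph, not from any real system.

```python
# Sketch of a mandatory access control read check. A user may read a
# resource only if the user's clearance level dominates the resource's
# security level AND the user's categories include all of the
# resource's categories.

LEVELS = {"Internal": 1, "Confidential": 2, "Restricted": 3}

def mac_read_allowed(user_label, resource_label):
    user_level, user_cats = user_label
    res_level, res_cats = resource_label
    return LEVELS[user_level] >= LEVELS[res_level] and res_cats <= user_cats

alice = ("Confidential", {"Project A"})
doc   = ("Confidential", {"Project A"})
other = ("Restricted",   {"Project B"})

print(mac_read_allowed(alice, doc))    # True: level and category both match
print(mac_read_allowed(alice, other))  # False: insufficient level and category
```

The key point, unlike DAC, is that this policy is enforced by the system; the resource owner cannot grant exceptions.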

Message Authentication Code (HMAC = hash-based message authentication code)
Hash-based Message Authentication Code (HMAC) is a message authentication code that uses a cryptographic key in conjunction with a hash function. Hash-based message authentication code (HMAC) provides the server and the client each with a private key that is known only to that specific server and that specific client.
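Python's standard library ships an `hmac` module that implements exactly this combination of a key and a hash function; a minimal sketch (the key and message are illustrative):

```python
import hmac, hashlib

key = b"private-key-known-only-to-this-client-and-server"  # illustrative
msg = b"message to authenticate"

# Sender computes the tag over the message with the shared key
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time;
# a mismatch means the message was altered or the key is wrong.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
print(ok)  # True
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through timing differences.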

Media Access Control
Media access control, medium access control or simply MAC, is a specific network data transfer policy. It determines how data transmits through a regular network cable. The protocol exists to ease data packets’ transfer between two computers and ensure no collision or simultaneous data transit occurs.
What is a MAC address?
Sending data between computers is only possible if both software and hardware are involved. However, for every device to know where to send the data, a third component is required – addresses.

Since both hardware and software are involved, there are two types of addresses here. The software address is the IP address, while the hardware address is the media access control address.

The MAC address ties to the network interface card, or network interface controller (NIC), located inside each computer today. The NIC acts as the transmission medium that turns data into electrical signals, which can then transmit over the web.

It consists of six groups of two hexadecimal digits, which may be separated by colons or hyphens. This format follows from the address being 48 bits in length.

A typical MAC address looks like 00:05:85:00:34:5A or 00-05-85-00-3B-05. The first three groups in both examples are intentionally the same, as they correspond to the same NIC manufacturer. In this case - Juniper.
Every NIC manufacturer has its own unique Organizationally Unique Identifier (OUI), the first 24-bit part of the MAC address. This addressing scheme helps manufacturers distinguish themselves and their products.

MAC addresses are static and never change, unlike dynamic IP addresses. Every data packet sent over the network is sent from one MAC address to another. So, when the network adapter receives a packet, it compares the packet's MAC address to its own. These addresses need to match so that the network interface card or network adapter can receive the information.

This part is seamless, but it cannot happen without the help of IP addresses. Why are they important? They are a part of the data transmission process.

In plain terms, IP addresses let devices be recognized across the wider internet, while ethernet itself uses only MAC addresses; IP is a protocol layered above ethernet. Since devices cannot send packets directly to each other's MAC addresses unless they are on the same network, be it wired or wireless, traffic between networks has to go through the layer above.

In other words, there is no routing between MAC addresses. So, they use something called the Address Resolution Protocol (ARP). ARP’s primary function is to map IP addresses to MAC addresses. It’s also a protocol above ethernet, on the same level as IP.

Thanks to ARP, when a device needs the MAC address of the machine behind a given IP address, it broadcasts a packet asking that question. ARP ensures the device with the matching IP address responds with its MAC address, confirming its identity. Once that exchange is done, the two devices can finally exchange data packets.
Each MAC address is a unique identifier, which makes it more reliable for network administrators to identify which devices are sending and which are receiving data.
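The OUI/device split described above is easy to demonstrate. This small helper (hypothetical, for illustration only) separates a MAC address into its 24-bit manufacturer part and its 24-bit device-specific part:

```python
def split_mac(mac: str):
    """Split a MAC address into its 24-bit OUI (manufacturer) part and
    its 24-bit device-specific part. Accepts ':' or '-' separators."""
    octets = mac.replace("-", ":").lower().split(":")
    if len(octets) != 6 or any(len(o) != 2 for o in octets):
        raise ValueError("expected six two-digit hex groups")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:05:85:00:34:5A")
print(oui)     # 00:05:85  (Juniper's OUI, per the example above)
print(device)  # 00:34:5a
```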

63
Q

RBAC

A

Role-Based Access Control - access control models - Role-based access control (RBAC) is a modification of DAC that provides a set of organizational roles that users may be assigned in order to gain access rights. The system is non-discretionary since the individual users cannot modify the ACL of a resource. Users gain their access rights implicitly based on the groups to which they are assigned as members.
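A minimal sketch of the idea, with illustrative role and permission names: users never touch an ACL directly; their rights come only from the roles they are assigned to.

```python
# Role-based access control sketch: permissions attach to roles,
# and users gain rights implicitly through role membership.

ROLE_PERMISSIONS = {
    "auditor":  {"read"},
    "operator": {"read", "write"},
    "admin":    {"read", "write", "delete"},
}

USER_ROLES = {
    "bob":   {"auditor"},
    "carol": {"operator", "auditor"},
}

def rbac_allowed(user: str, permission: str) -> bool:
    # A user is allowed if ANY of their roles grants the permission.
    return any(permission in ROLE_PERMISSIONS[r]
               for r in USER_ROLES.get(user, set()))

print(rbac_allowed("carol", "write"))  # True, via the operator role
print(rbac_allowed("bob", "write"))    # False, auditors are read-only
```

Granting someone new rights means changing their role membership, not editing every resource's ACL — which is what makes the model non-discretionary.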

64
Q

ABAC

A

Attribute-Based Access Control - access control models

65
Q

MAC (address)

A

Media Access Control

66
Q

UAC (Windows)

A

User Account Control - separation of duties/privileges under Windows; prompts for consent before allowing elevated actions.

67
Q

ADUC

A

Active Directory Users and Computers - a program in Windows

Active Directory Users and Computers (ADUC) is built as an add-on for the Microsoft Management Console (MMC), and it’s the go-to tool for IT Pros to manage their Active Directory (AD) environments. You can use ADUC to:

Create AD objects like users, groups, organizational units (OUs), and even printers.
Make changes to existing users, groups, OUs, etc.
Delegate permissions
Move FSMO roles
Raise the domain functional level
Work with advanced features like the LostAndFound container, NTDS Quotas, Program Data, and System information.

68
Q

chmod

A

change mode - Linux/Unix command used to change file and directory permissions.
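As a quick illustration, the same permission change that `chmod 750 file` performs in a shell can be done from Python with `os.chmod` (Unix-style permission bits assumed):

```python
import os, stat, tempfile

# Equivalent of `chmod 750 scratchfile`: set permissions to rwxr-x---
# (owner: read/write/execute, group: read/execute, others: none).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o750)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o750
os.remove(path)
```

The octal digits map directly to owner/group/other permission triplets, which is why `chmod` is usually invoked with a three-digit octal mode.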

69
Q

CCTV

A

closed-circuit TV

70
Q

PTZ

A

Pan Tilt Zoom - a PTZ camera (used in CCTV) that an operator can control with a joystick: pan it left and right, tilt it up and down, and zoom in or out to look in different directions.

71
Q

FAR

A

false acceptance rate (biometrics)

72
Q

FRR

A

false rejection rate (biometrics)

73
Q

CER

A

crossover error rate (=EER)
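A quick sketch of how the crossover point is found: as a biometric system's sensitivity threshold is varied, FAR falls while FRR rises, and the CER/EER is where the two curves meet. The rates below are made-up illustrative numbers.

```python
# Illustrative FAR/FRR curves across five hypothetical threshold settings.
thresholds = [1, 2, 3, 4, 5]
far = [0.40, 0.25, 0.10, 0.05, 0.01]   # false acceptances fall as threshold tightens
frr = [0.01, 0.04, 0.10, 0.22, 0.45]   # false rejections rise as threshold tightens

# The CER/EER is the threshold where the two rates are (closest to) equal.
i = min(range(len(thresholds)), key=lambda k: abs(far[k] - frr[k]))
print(thresholds[i], far[i], frr[i])  # 3 0.1 0.1
```

A lower CER indicates a more accurate biometric system overall, which is why it is the usual metric for comparing devices.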

74
Q

EER

A

equal error rate (=CER)

75
Q

HVAC

A

heating, ventilation, and air conditioning system

76
Q

ICS

A

industrial control systems

77
Q

SCADA system

A

supervisory control and data acquisition system - When talking about ICS, think of one plant; SCADA spans multiple plants. A SCADA (supervisory control and data acquisition) network is a type of network that works on top of ICS (industrial control systems) and is used to maintain sensors and control systems over large geographic areas.

78
Q

STP

A

Shielded Twisted Pair cables

79
Q

EMP

A

electromagnetic pulse

80
Q

CAN

A

Controller Area Network - vehicular vulnerabilities

81
Q

CAN

A

Campus Area Network - a network spanning a campus. Do not confuse with the vehicular Controller Area Network, which shares the acronym.

82
Q

OBD-II

A

On-Board Diagnostics (version II) - the diagnostic port that serves as the primary access point to a vehicle's CAN bus.

83
Q

IoT

A

Internet of Things - Supervisory control and data acquisition (SCADA) systems, industrial control systems (ICS), internet-connected televisions, thermostats, and many other things are examples of devices classified as the Internet of Things (IoT).

84
Q

PLC

A

programmable logic controller.

A programmable logic controller (PLC) or programmable controller is an industrial computer that has been ruggedized and adapted for the control of manufacturing processes, such as assembly lines, machines, robotic devices, or any activity that requires high reliability, ease of programming, and process fault diagnosis. Dick Morley is considered the father of the PLC, having invented the first one, the Modicon 084, for General Motors in 1968.

PLCs can range from small modular devices with tens of inputs and outputs (I/O), in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, and which are often networked to other PLC and SCADA systems.[1]

They can be designed for many arrangements of digital and analog I/O, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.

PLCs were first developed in the automobile manufacturing industry to provide flexible, rugged and easily programmable controllers to replace hard-wired relay logic systems. Since then, they have been widely adopted as high-reliability automation controllers suitable for harsh environments.

A PLC is an example of a hard real-time system since output results must be produced in response to input conditions within a limited time, otherwise unintended operation will result.

85
Q

SoC

A

system on a chip - more performant than PLCs

86
Q

RTOS

A

real-time operating system - OS for embedded systems

87
Q

FPGA

A

field programmable gate array - reprogrammable hardware commonly used in embedded systems (hardware, not an operating system).

A field-programmable gate array (FPGA) is an integrated circuit that can be programmed in the field after it leaves the production line. In principle, FPGAs resemble programmable read-only memory (PROM) chips, but their range of potential applications is far broader. Engineers use them to design specialized integrated circuits that are then hard-wired and produced in large quantities for sale to computer manufacturers and consumers. Eventually, FPGAs could allow users to build microprocessors tailored to their own needs.

What is an FPGA?

Field Programmable Gate Arrays (FPGAs) are semiconductor devices that are based around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. FPGAs can be reprogrammed to desired application or functionality requirements after manufacturing. This feature distinguishes FPGAs from Application Specific Integrated Circuits (ASICs), which are custom manufactured for specific design tasks. Although one-time programmable (OTP) FPGAs are available, the dominant types are SRAM based which can be reprogrammed as the design evolves.
What is the difference between an ASIC and an FPGA?

ASIC and FPGAs have different value propositions, and they must be carefully evaluated before choosing any one over the other. Information abounds that compares the two technologies. While FPGAs used to be selected for lower speed/complexity/volume designs in the past, today’s FPGAs easily push the 500 MHz performance barrier. With unprecedented logic density increases and a host of other features, such as embedded processors, DSP blocks, clocking, and high-speed serial at ever lower price points, FPGAs are a compelling proposition for almost any type of design.
FPGA Applications

Due to their programmable nature, FPGAs are an ideal fit for many different markets. As the industry leader, Xilinx provides comprehensive solutions consisting of FPGA devices, advanced software, and configurable, ready-to-use IP cores for markets and applications such as:

Aerospace & Defense - Radiation-tolerant FPGAs along with intellectual property for image processing, waveform generation, and partial reconfiguration for SDRs.
ASIC Prototyping - ASIC prototyping with FPGAs enables fast and accurate SoC system modeling and verification of embedded software
Automotive - Automotive silicon and IP solutions for gateway and driver assistance systems, comfort, convenience, and in-vehicle infotainment.
Broadcast & Pro AV - Adapt to changing requirements faster and lengthen product life cycles with Broadcast Targeted Design Platforms and solutions for high-end professional broadcast systems.
Consumer Electronics - Cost-effective solutions enabling next generation, full-featured consumer applications, such as converged handsets, digital flat panel displays, information appliances, home networking, and residential set top boxes.
Data Center - Designed for high-bandwidth, low-latency servers, networking, and storage applications to bring higher value into cloud deployments.
High Performance Computing and Data Storage - Solutions for Network Attached Storage (NAS), Storage Area Network (SAN), servers, and storage appliances.
Industrial - Xilinx FPGAs and targeted design platforms for Industrial, Scientific and Medical (ISM) enable higher degrees of flexibility, faster time-to-market, and lower overall non-recurring engineering costs (NRE) for a wide range of applications such as industrial imaging and surveillance, industrial automation, and medical imaging equipment.
Medical - For diagnostic, monitoring, and therapy applications, the Virtex FPGA and Spartan® FPGA families can be used to meet a range of processing, display, and I/O interface requirements.
Security - Xilinx offers solutions that meet the evolving needs of security applications, from access control to surveillance and safety systems.
Video & Image Processing - Xilinx FPGAs and targeted design platforms enable higher degrees of flexibility, faster time-to-market, and lower overall non-recurring engineering costs (NRE) for a wide range of video and imaging applications.
Wired Communications - End-to-end solutions for the Reprogrammable Networking Linecard Packet Processing, Framer/MAC, serial backplanes, and more
Wireless Communications - RF, base band, connectivity, transport and networking solutions for wireless equipment, addressing standards such as WCDMA, HSDPA, WiMAX and others.
88
Q

ASIC

A

application-specific integrated circuit - a specialized (microelectronic) integrated circuit.

An application-specific integrated circuit (ASIC /ˈeɪsɪk/) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder or a high-efficiency video codec (e.g. AMD VCE) is an ASIC. Application-specific standard product (ASSP) chips are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series.[1] ASIC chips are typically fabricated using metal-oxide-semiconductor (MOS) technology, as MOS integrated circuit chips.[2]

As feature sizes have shrunk and design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.[1]

Field-programmable gate arrays (FPGA) are the modern-day technology improvement on breadboards, meaning that they are not made to be application-specific as opposed to ASICs. Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and devices with low production volume and ASICs for very large production volumes where NRE costs can be amortized across many devices.[3]

89
Q

OT (OT & IT)

A

operational technology

90
Q

HMI

A

Human-machine interface

91
Q

BAS

A

Building Automation System for premise systems - A building automation system (BAS) for offices and data centers (“smart buildings”) can include physical access control systems (PACS), but also heating, ventilation, and air conditioning (HVAC), fire control, power and lighting, and elevators and escalators.

92
Q

PACS

A

Physical Access Control System (PACS)

What is a Physical Access Control System?

Physical access control systems (PACS) are a form of physical security system that allows or restricts entry to a specific area or building. PACS are frequently in place to safeguard businesses and property. For example, from vandalism, theft, and trespassing, and they are particularly effective in locations that require higher levels of security and protection.

Also, physical access control processes, unlike physical obstacles such as retaining walls, fences, or strategic landscaping, regulate who, how, and when a person can get access.

Thus, being able to control physical access is an essential part of any security program.
Different Physical Access Control Systems

Here are examples of different physical access control systems.
1. Property monitoring

This is helpful to keep watch over the security of a certain area. This helps make sure that no one breaks into restricted areas or steals something off of someone’s property.
2. Entry control

Entry control aims to track who enters and exits a building. This can be very useful for recording employee hours, tracking visitors, and seeing who has come in contact with certain data or information.

Perhaps you can equip doors with sensors that can detect if someone has opened the door without authorization. This is what you call an exit detection system. The sensor sends a signal or alarm to the security staff if anyone opens the door without permission.
3. Video surveillance

Video surveillance enables video cameras to monitor entry and exit points around an organization’s perimeter. Also, inside buildings and even within sensitive areas like server rooms and workstations. We then use these recorded data to analyze and to see how threats enter or exit the facility.
4. Time-and-attendance systems

Time-and-attendance systems are used to manage employees’ access to specific areas during certain times of the day. Employees must swipe their proximity card at the beginning of their shift and swipe it again at the end of their shift. By doing so, you can record their hours worked (it also records when they leave).
5. Geo-fencing

Geo-fencing is a feature that creates virtual boundaries around real-world geographical areas, such as cities or counties, by comparing a device's GPS location against the geo-fence boundary. This method is used by businesses that want to grant or deny access based on where a person is located relative to the company's boundaries.
6. Visitor tracking

Visitor tracking is a feature that allows security personnel to indicate whether or not a person is authorized to enter a certain area. A person may be allowed to enter the company campus, but not be allowed to enter certain buildings.

This feature allows security personnel to know when unauthorized people are attempting to enter an area.
Two Types of Access Control

Access control systems also have two types of access control. They are “passive” access control and “active” access control.

Passive access control systems are automatic, meaning that they detect whether or not you have permission to enter the facility without human interaction. A simple example would be if you have a retinal scanner, an infrared beam will check your eyes for permission to enter the room. This system requires no human interaction since it detects your presence automatically, hence the term “passive”.

Active access control systems are manual, meaning that they rely on human interaction in some way. For example, if the person at the front desk looks up your name on their computer and manually allows you to enter the building after verifying you are allowed to do so.
Conclusion

Physical access control systems are a form of physical security system that allows or restricts entry to a specific area or building. Also, it aims to track who enters and exits a building.

More importantly, physical access control processes, unlike physical obstacles such as retaining walls, fences, or strategic landscaping, regulate who, how, and when a person can get access.

Thus, being able to control physical access is an essential part of any security program.

93
Q

IANA

A

Internet Assigned Numbers Authority

94
Q

ICMP

A

Internet Control Message Protocol - IP protocol number 1 (not a TCP/UDP port); operates at the network layer (layer 3) of the OSI model; used by ping.
- The ICMP echo request and ICMP echo reply messages are commonly known as ping messages.
Ping is a troubleshooting tool used by system administrators to manually test for connectivity between network devices, and also to test for network delay and packet loss.
The ping command sends an ICMP echo request to a device on the network, and the device immediately responds with an ICMP echo reply.
Sometimes, a company's network security policy requires ping (ICMP echo reply) to be disabled on all devices to make them more difficult for unauthorized persons to discover.
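To make the message format concrete, here is a sketch that builds (but does not send) an ICMP echo request: type 8, code 0, then the RFC 1071 Internet checksum computed over the whole message. Actually sending it requires a raw socket and usually root privileges, so this stops at packet construction.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    # RFC 1071 Internet checksum: one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(identifier: int, sequence: int, payload: bytes = b"ping") -> bytes:
    # Type 8 = echo request, code 0; checksum field starts as zero,
    # then is filled with the checksum of the whole message.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, identifier, sequence) + payload

pkt = echo_request(0x1234, 1)
# A correctly checksummed ICMP message re-checksums to zero:
assert icmp_checksum(pkt) == 0
print(pkt.hex())
```

An echo reply has the same layout with type 0, which is how ping matches replies to its requests (via the identifier and sequence fields).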

95
Q

PDOS

A

Permanent Denial of Service

96
Q

DDoS attack.

A

distributed denial of service

97
Q

DNS

A

Domain Name System (also expanded as Domain Name Service) - used to resolve hostnames to IP addresses and IP addresses to hostnames.
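Both directions of resolution can be demonstrated with Python's standard `socket` module. `localhost` is used here so the sketch does not depend on external network access; the reverse lookup may fail on systems with no PTR mapping configured.

```python
import socket

# Forward lookup: hostname -> IPv4 address
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1

# Reverse lookup: IP address -> hostname
try:
    host = socket.gethostbyaddr("127.0.0.1")[0]
    print(host)
except socket.herror:
    print("no reverse mapping configured")
```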

98
Q

MFA

A

Multi-Factor Authentication - But what types of cyberattacks does MFA protect against?
• Phishing
• Spear phishing
• Keyloggers
• Credential stuffing
• Brute force and reverse brute force attacks
• Man-in-the-middle (MITM) attacks, spoofing…

99
Q

XSS

A

Cross-site scripting

100
Q

IPC

A

Inter Process Communication (IPC)

On Windows, the interprocess communications share is known as the IPC$ ("IPC dollar sign") share. See also: Fraggle attack.

A process can be of two types:

Independent process.
Co-operating process.

An independent process is not affected by the execution of other processes, while a co-operating process can be. Although one might think that independently running processes execute most efficiently, in reality there are many situations where a co-operative approach increases computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other through both:

Shared Memory
Message passing
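Both styles can be sketched with POSIX primitives: an anonymous `mmap` inherited across `fork()` for shared memory, and a pipe for message passing. This sketch is Unix-only (it uses `os.fork`).

```python
import mmap, os, struct

# (1) Shared memory: an anonymous mmap is shared between parent and child.
shm = mmap.mmap(-1, 4)            # 4 bytes of memory shared across fork()
# (2) Message passing: a pipe carries bytes from one process to the other.
r, w = os.pipe()

pid = os.fork()
if pid == 0:                       # child process
    shm.seek(0)
    shm.write(struct.pack("i", 42))        # write into shared memory
    os.write(w, b"hello from child")       # send a message over the pipe
    os._exit(0)

os.waitpid(pid, 0)                 # parent: wait for the child to finish
shm.seek(0)
value = struct.unpack("i", shm.read(4))[0]
msg = os.read(r, 1024)
print(value, msg)                  # 42 b'hello from child'
```

Shared memory is fast but needs explicit synchronization in real programs; message passing is slower but keeps the processes decoupled.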
101
Q

ARP

A

address resolution protocol - it’s used to convert an IP address into a MAC address

102
Q

DHCP

A

Dynamic Host Configuration Protocol (DHCP) - a network protocol whose role is to automatically configure the IP parameters of a station or machine, notably by automatically assigning it an IP address and a subnet mask. DHCP can also configure the default gateway address, DNS name servers, and NBNS name servers (known as WINS servers on Microsoft networks).

103
Q

EMI

A

Electromagnetic Interference

104
Q

RFI

A

Radio Frequency Interference

105
Q

PDS

A

Protected Distribution System

Wire line or fiber optic system that includes adequate safeguards and/or countermeasures (e.g., acoustic, electric, electromagnetic, and physical) to permit its use for the transmission of unencrypted information through an area of lesser classification or control.

A PDS is used to protect unencrypted national security information (NSI) that is transmitted on wire line or optical fiber. Because the NSI is unencrypted, the PDS must provide safeguards to deter exploitation. The emphasis is on intrusion detection rather than prevention of penetration.

A PDS is intended primarily for use in low and medium threat locations, and is not recommended for use in high or critical threat locations. It is also NOT PERMITTED in uncontrolled access areas. For those areas, you must use an encryption solution instead.

106
Q

SSID

A

Service Set Identifier

107
Q

WEP

A

Wired Equivalent Privacy

108
Q

PSK

A

Pre-Shared Key - in wifi

109
Q

IV

A

Initialization Vector - in WEP and WPA

What is an initialization vector (IV)?

An initialization vector (IV) is an arbitrary number that can be used with a secret key for data encryption to foil cyber attacks. This number, also called a nonce (number used once), is employed only one time in any session to prevent unauthorized decryption of the message by a suspicious or malicious actor.

Initialization Vector (IV) attacks with WEP
Christophe March 6, 2022

Understanding Initialization Vector (IV) attacks is important for the CompTIA Security+ exam, but it can be confusing if you’re not as familiar with cryptography concepts. In this post, we’ll explain what an IV is, how it’s used to encrypt data, what IV attacks are, and how to defend against them.
What are Initialization Vectors (IVs) for anyway?

When it comes to encrypting data, there are many different types of encryption. Some are more effective than others, and some are more complicated than others.

There are even different ways of encrypting blocks of information, and we call those different methods modes of operation.

Some approaches involve using something called an Initialization Vector (aka IV). The IV is combined with the secret key in order to encrypt data that’s about to be transmitted.
CBC mode encryption with initialization vector (IV)

Just before encryption occurs, we add the initialization vector, or IV, and it adds extra randomization to the final ciphertext. Then, on the second block of data, we use the resulting ciphertext as the IV for the next block, and so on.

This is important because it ensures that even if we’re using the exact same plaintext and secret key more than once, the resulting encryption will look different every time. This also makes it much more difficult for an attacker to reverse engineer a network’s encryption, even if they were able to gain access to plaintext information.
What are IV attacks?

There can be some situations where an IV attack can overcome the protection that we just talked about, and end up allowing an attacker to figure out the secret key being used. More modern wireless protocols like WPA2 and WPA3 prevent this from happening, but WEP was vulnerable to this attack.

Because WEP uses 24-bit IVs, which is quite small, IVs ended up being re-used with the same key. And because IVs are transmitted alongside the data in plaintext (so that the receiving party is able to decrypt the communication), an attacker can capture these IVs.
WEP IV attack, step 1

By capturing enough repeating IVs, an attacker can easily crack the WEP secret key, because they’re able to make sense of the encrypted data, and they’re able to decrypt the secret key.
WEP IV attack, step 2

This is one of the many reasons that WEP was deprecated and replaced with much more secure wireless protocols.
Defenses against IV attacks

Defending against IV attacks comes down to using more secure wireless protocols such as WPA2 or WPA3. WEP was deprecated a while ago, and WPA is considered less secure than WPA2, so both should be avoided.

WPA2 and 3 use 48-bit IVs instead of 24-bit IVs, which may not sound like much, but it adds a massive number of potential IV values compared to WEP, making any given IV far less likely to repeat.
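The difference in scale is easy to check, along with a rough birthday-bound estimate of how quickly a randomly chosen 24-bit IV repeats (WEP implementations chose IVs sequentially or randomly depending on the driver; the 50% figure below assumes random choice):

```python
import math

wep_ivs = 2 ** 24   # possible 24-bit WEP IVs
wpa_ivs = 2 ** 48   # possible 48-bit values

print(wep_ivs)               # 16777216
print(wpa_ivs // wep_ivs)    # 16777216 -> 2^24 times as many values

# Birthday bound: ~50% chance of a repeated value after roughly
# sqrt(2 * N * ln 2) random draws -- astonishingly early for 24 bits:
print(round(math.sqrt(2 * wep_ivs * math.log(2))))   # 4823 frames
```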

That’s not the only reason that WPA2 and 3 are stronger than WEP, but it certainly does help. We’ll review some of the other reasons in a future blog post and in our CompTIA Security+ preparation course.

110
Q

WPA

A

WiFi Protected Access

111
Q

TKIP

A

Temporal Key Integrity Protocol (TKIP) - WPA

The Temporal Key Integrity Protocol, or TKIP, is a wireless network technology encryption protocol. It was designed and implemented as an emergency, short-term fix for the security vulnerabilities in WEP (Wired Equivalent Privacy). TKIP is the core component of WPA (Wi-Fi Protected Access) and works on legacy WEP hardware.

TKIP was developed and endorsed by the Wi-Fi Alliance and the IEEE 802.11i task group between 2002 and 2004, and it was constrained by having to work on older WEP hardware: it could only be implemented in software, had limited processing power to work with, and had to retain WEP's per-packet encryption process built on the RC4 (Rivest Cipher 4) stream cipher.

TKIP includes three main parts: a 64-bit MIC (Message Integrity Check) called Michael, a packet sequencing control, and a per-packet key mixing function. The mixing function combines a pairwise transient key, the sender's MAC address, and the packet's 48-bit serial number; the result is combined with the IV (initialization vector), or starting variable (SV), and fed to the RC4 cipher.

TKIP is vulnerable to attacks originating in the same network and PSK (pre-shared key) attacks. The vulnerability is due to the session secret not changing and being the same for everyone on that network.

TKIP was officially deprecated in the 802.11 standard in 2012.

112
Q

MIC

A

Message Integrity Check - WPA

A message integrity check (MIC) is a security improvement over WEP encryption on wireless networks. The check helps network administrators thwart attacks that use the bit-flip technique on encrypted network data packets. Unlike the older ICV (Integrity Check Value) method, a MIC protects both the data payload and the header of the network packet.
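The bit-flip attack, and why a keyed check stops it, can be sketched as follows. The keystream, keys, and message are invented, and HMAC-MD5 stands in for a keyed MIC (the real Michael algorithm in TKIP works differently, but the principle is the same):

```python
import hashlib
import hmac

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = hashlib.sha256(b"per-packet keystream").digest()
mic_key   = b"separate MIC key"

plaintext  = b"PAY $100 TO BOB "
ciphertext = xor(plaintext, keystream)

# Bit-flipping: XORing a chosen difference into the ciphertext applies
# exactly the same difference to the plaintext after decryption.
delta  = xor(b"BOB ", b"EVE ")                     # change the payee
forged = ciphertext[:12] + xor(ciphertext[12:16], delta)
print(xor(forged, keystream))                      # b'PAY $100 TO EVE '

# A CRC-style ICV is linear, so the attacker can patch it to match the
# forgery; a keyed MIC cannot be recomputed without the MIC key:
mic = hmac.new(mic_key, plaintext, hashlib.md5).digest()
assert hmac.new(mic_key, xor(forged, keystream), hashlib.md5).digest() != mic
```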

113
Q

RC4

A

Rivest Cipher 4 - WPA

Rivest Cipher 4 is a type of encryption that has been around since the 1980s. It's one of the earliest and most common stream ciphers. It has been widely used in the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, Wired Equivalent Privacy (WEP), and the IEEE 802.11 wireless LAN standard.

While its use has been quite widespread over the years because of its speed and ease of use, today, RC4 is considered to pose many security risks.

Stream ciphers work byte by byte on a data stream. RC4, in particular, is a variable key-size stream cipher, commonly used with 64-bit and 128-bit keys. The cipher uses a permutation and two 8-bit index pointers to generate the keystream. The permutation is set up by the Key Scheduling Algorithm (KSA) and then driven by the Pseudo-Random Generation Algorithm (PRGA), which generates the keystream.

The pseudorandom stream that RC4 generates is as long as the plaintext stream. Then, through the exclusive-or (XOR) operation, the keystream and the plaintext produce the ciphertext. Block ciphers, by contrast, split the plaintext into fixed-size blocks and encrypt each block as a unit.

What does the encryption procedure look like for RC4? First, the user supplies a plaintext file and an encryption key. Then, the RC4 encryption engine generates keystream bytes with the help of the Key Scheduling Algorithm and the Pseudo-Random Generation Algorithm. The XOR operation is executed byte by byte, and the output is the encrypted text, which the receiver gets. Once the receiver decrypts it through the same byte-by-byte XOR operation, they can access the plaintext stream.
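Since RC4 is fully public, that procedure fits in a few lines. This is a sketch of the real KSA and PRGA, checked against the widely published "Key"/"Plaintext" test vector (shown for study only; RC4 should not be used to protect anything today):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key Scheduling Algorithm (KSA): build a key-dependent
    # permutation of the values 0..255.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # Pseudo-Random Generation Algorithm (PRGA): walk the permutation
    # with two index pointers and XOR the keystream into the data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
print(ct.hex().upper())    # BBF316E8D940AF0AD3
print(rc4(b"Key", ct))     # b'Plaintext' -- the same function decrypts
```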

114
Q

WPA2

A

WiFi Protected Access version 2

115
Q

CCMP

A

Counter Mode Cipher Block Chaining Message Authentication Code Protocol (Counter Mode CBC-MAC Protocol) or CCM mode Protocol (CCMP) is an encryption protocol designed for Wireless LAN products that implements the standards of the IEEE 802.11i amendment to the original IEEE 802.11 standard.

CCMP is an enhanced data cryptographic encapsulation mechanism designed for data confidentiality and based upon the Counter Mode with CBC-MAC (CCM mode) of the Advanced Encryption Standard (AES). It was created to address the vulnerabilities presented by Wired Equivalent Privacy (WEP), a dated, insecure protocol.

Counter Mode/CBC-MAC Protocol (Counter Mode with Cipher Block Chaining) - WPA2 and 3 - CCMP is an encryption protocol that handles keys and message integrity. Based on AES, it is considered a more secure alternative to TKIP, which is used in WPA.

116
Q

AES

A

Advanced Encryption Standard - WPA3

WPA3 Security

Aruba Instant supports WPA3 security improvements that include:

Simultaneous Authentication of Equals (SAE)—Replaces WPA2-PSK with password-based authentication that is resistant to dictionary attacks.

WPA3-Enterprise 192-Bit Mode—Brings the Suite B 192-bit security suite, aligned with the Commercial National Security Algorithm (CNSA) suite, to enterprise networks. SAE-based keys are not based on the PSK and are therefore pairwise and unique between clients and the AP. Suite B restricts the deployment to one of two options:

128-bit security

192-bit security without the ability to mix-and-match ciphers, Diffie-Hellman groups, hash functions, and signature modes
SAE

SAE replaces the less-secure WPA2-PSK authentication. Instead of using the PSK as the PMK, SAE arrives at a PMK by mapping the PSK to an element of a finite cyclic group, the Password Element (PWE), performing FCG operations on it, and exchanging it with the peer.

Aruba Instant supports:

SAE without PMK caching

SAE with PMK caching

SAE or WPA2-PSK mixed mode
SAE Without PMK Caching

Instant advertises support for SAE by using an AKM suite selector for SAE in all beacons and probe response frames. In addition, PMF is set to required (MFPR=1).

A client that wishes to perform SAE sends an 802.11 authentication request with authentication algorithm set to value 3 (SAE). This frame contains a well-formed commit message, that is, authentication transaction sequence set to 1, an FCG, commit-scalar, and commit-element.

Instant supports group 19, a 256-bit Elliptic Curve group. Instant responds with an 802.11 authentication containing its own commit message.

Instant and the client compute the PMK and send the confirm message to each other using an authentication frame with authentication transaction sequence set to 2.

The client sends an association request with the AKM suite set to SAE and Instant sends an association response.

Instant initiates a 4-way key handshake with the client to derive the PTK.
SAE With PMK Caching

If SAE has been established earlier, a client that wishes to perform SAE with PMK caching sends an authentication frame with authentication algorithm set to open. Instant sends an authentication response and the client sends a reassociation request with AKM set to SAE and includes the previously derived PMKID.

Instant checks if the PMKID is valid and sends an association response with the status code success.

Instant initiates a 4-way key handshake with the client to derive the PTK.
SAE or WPA2-PSK Mixed Mode

SAE or WPA2-PSK mixed mode allows both SAE clients and clients that can only perform WPA2-PSK to connect to the same BSSID. In this mode, the beacon or probe responses contain an AKM list with both PSK (00-0F-AC:2) and SAE (00-0F-AC:8). Clients that support SAE send an authentication frame with SAE payload and connect to the BSSID.

Clients that support only WPA2-PSK send an authentication frame with authentication algorithm set to open.

Instant initiates a 4-way key handshake similar to WPA2.
WPA3-Enterprise

WPA3-Enterprise enforces top-secret-level security standards for enterprise Wi-Fi, in comparison to the secret-level standards used previously. These top-secret security standards include:

Deriving at least 384-bit PMK/MSK using Suite B compatible EAP-TLS.

Securing pairwise data between STA and authenticator using AES-GCM-256.

Securing group addressed data between STA and authenticator using AES-GCM-256.

Securing group addressed management frames using BIP-GMAC-256.

WPA3-Enterprise advertises or negotiates the following capabilities in beacons, probe responses, or 802.11 association:

AKM Suite Selector as 00-0F-AC:12

Pairwise Cipher Suite Selector as 00-0F-AC:9

Group data cipher suite selector as 00-0F-AC:9

Group management cipher suite (MFP) selector as 00-0F-AC:12

If WPA3-Enterprise is enabled, STA is successfully associated only if it uses one of the four suite selectors for AKM selection, pairwise data protection, group data protection, and group management protection. If a STA mismatches any one of the four suite selectors, the STA association fails.

117
Q

WPS

A

WiFi Protected Setup - in wifi

Wi-Fi Protected Setup (WPS) is a feature supplied with many routers. It is designed to make the process of connecting to a secure wireless network from a computer or other device easier.

118
Q

WAP

A

Wireless Access Point - in wifi

119
Q

AP

A

Access Point - in wifi

120
Q

SAE

A

Simultaneous Authentication of Equals - WPA3

Simultaneous Authentication of Equals (SAE) is based on the Dragonfly handshake protocol and enables the secure exchange of keys of password-based authentication methods. In WPA3, SAE replaces the previous methods of negotiating session keys using pre-shared keys and is also used in WLAN mesh implementations.

What is SAE (Simultaneous Authentication of Equals)?

The acronym SAE stands for Simultaneous Authentication of Equals and refers to a secure key negotiation and exchange method for password-based authentication methods. It is a variant of the Dragonfly key exchange protocol specified in RFC 7664, which in turn is based on Diffie-Hellman key exchange.

Among other things, SAE is used in WPA3 (Wi-Fi Protected Access 3) and replaces the previous method of negotiating session keys using pre-shared keys. In addition, Simultaneous Authentication of Equals is used in IEEE 802.11s WLAN mesh networks during the peer discovery process. SAE improves the security of key exchange in the handshake process.

Even when weak passwords are used, authentication is protected. Dictionary or brute force attacks and attack methods such as KRACK (Key Reinstallation Attack) are virtually impossible when using Simultaneous Authentication of Equals.
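To see why resistance to offline dictionary attacks matters, here is a sketch of what an attacker can do against WPA2-Personal, where the PMK is derived deterministically from public values (the passphrase and SSID below are invented for illustration):

```python
import hashlib

# WPA2-Personal derives the PMK deterministically:
# PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes).
# A captured 4-way handshake therefore lets an attacker test guesses offline.
def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

target = wpa2_pmk("sunshine1", "HomeWiFi")     # derived from the real password

for guess in ["password", "letmein", "sunshine1"]:
    if wpa2_pmk(guess, "HomeWiFi") == target:
        print("cracked:", guess)               # cracked: sunshine1

# Under SAE, the PMK also depends on ephemeral per-handshake values, so
# each guess would require a fresh live exchange with the access point,
# making this offline loop useless.
```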

121
Q

PFS

A

Perfect Forward Secrecy (PFS), also called forward secrecy (FS), in WPA3, refers to an encryption system that changes the keys used to encrypt and decrypt information frequently and automatically. This ongoing process ensures that even if the most recent key is hacked, a minimal amount of sensitive data is exposed.

Web pages, calling apps, and messaging apps all use encryption tools with perfect forward secrecy that switch their keys as often as each call or message in a conversation, or every reload of an encrypted web page. This way, the loss or theft of one decryption key does not compromise any additional sensitive information—including additional keys.

Determine whether forward secrecy is present by inspecting the decrypted, plain-text version of the data exchange from the key agreement phase of session initiation. An application or website’s encryption system provides perfect forward secrecy if it does not reveal the encryption key throughout the session.

What is Perfect Forward Secrecy?

Perfect forward secrecy helps protect session keys against being compromised even when the server's private key may be vulnerable. A feature of specific key agreement protocols, an encryption system with forward secrecy generates a unique session key for every user-initiated session. In this way, should any single session key be compromised, the rest of the data on the system remains protected. Only the data guarded by the compromised key is vulnerable.

The Heartbleed bug that affected OpenSSL, one of the most widely used SSL/TLS implementations, showed what is at stake when long-term keys leak. With forward secrecy in place, even man-in-the-middle attacks and similar attempts fail to retrieve and decrypt past sessions and communications, despite the compromise of passwords or secret long-term keys.
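The usual way to get forward secrecy is ephemeral Diffie-Hellman: fresh secrets for every session. This toy uses illustration-sized parameters (real deployments use 2048-bit+ groups or elliptic curves), but the per-session structure is the point:

```python
import secrets

# Toy finite-field Diffie-Hellman. The parameters are illustration-sized;
# what matters is that BOTH sides use fresh ephemeral secrets per session.
P = 2**64 - 59        # a prime modulus, far too small for real use
G = 5

def new_session_key() -> int:
    a = secrets.randbelow(P - 2) + 1      # client's ephemeral secret
    b = secrets.randbelow(P - 2) + 1      # server's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)     # public values sent on the wire
    shared_client, shared_server = pow(B, a, P), pow(A, b, P)
    assert shared_client == shared_server
    return shared_client                  # a and b are then discarded

s1, s2 = new_session_key(), new_session_key()
print(s1 != s2)   # True: each session has an independent key, so one
                  # compromised key exposes only that session's traffic
```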

122
Q

RFID

A

Radio Frequency Identification

123
Q

NFC

A

Near Field Communication

124
Q

GPS

A

Global Positioning System

125
Q

Wi-Fi

A

Wireless Fidelity

126
Q

CI/CD

A

continuous integration, continuous delivery, and continuous deployment

The CI/CD approach increases the frequency of application delivery by introducing automation into the stages of application development. The main concepts attributed to CI/CD are continuous integration, continuous delivery, and continuous deployment. CI/CD is a solution to the problems that integrating new code can cause for development and operations teams (known as "integration hell").

More specifically, CI/CD introduces ongoing automation and continuous monitoring throughout the application lifecycle, from the integration and testing phases to delivery and deployment. Taken together, these practices are often referred to as the "CI/CD pipeline," and they rely on agile collaboration between development and operations teams, whether through a DevOps or a site reliability engineering (SRE) approach.
What's the difference between CI and CD (and the other CD)?

The acronym CI/CD has a few different meanings. "CI" always refers to continuous integration, an automation process for developers. Successful CI means that developers regularly make changes to their application code, test them, and merge them into a shared repository. It's a solution to the problem of having too many pieces of an application in development at once that might conflict with each other.

"CD" refers to continuous delivery and/or continuous deployment, which are closely related concepts that are sometimes used interchangeably. Both are about automating further stages of the pipeline, but they're sometimes distinguished to illustrate just how much automation is happening.

Continuous delivery usually means that a development team's changes to an application are automatically tested and uploaded to a repository (such as GitHub or a container registry), from which they can be deployed to a live production environment by the operations team. It's a way to address problems of visibility and communication between development and business teams, and its purpose is to make deploying new code as simple as possible.

Continuous deployment (the other possible meaning of "CD") can refer to automatically releasing a developer's changes from the repository to production, where they are usable by customers. It addresses the problem of overloaded operations teams slowed down by manual tasks, and it builds on continuous delivery by automating the next stage of the pipeline.

"CI/CD" can refer to just the connected practices of continuous integration and continuous delivery, or to all three: continuous integration, continuous delivery, and continuous deployment. To complicate matters further, "continuous delivery" is sometimes used in a way that also encompasses the process of continuous deployment.

In the end, it isn't worth getting bogged down in these semantics. Just remember that CI/CD refers to a process, often visualized as a pipeline, that introduces a high degree of ongoing automation and continuous monitoring into application development.

Case by case, what the terms mean depends on how much automation has been built into the CI/CD pipeline. Many enterprises start by adding CI, then gradually automate delivery and deployment, for example as part of developing cloud-native applications.
Continuous integration

Modern application development means having multiple developers working simultaneously on different features of the same app. However, if an organization merges all of that branching source code on one day (the "merge day"), the resulting work can be tedious, manual, and time-consuming. That's because when a developer working in isolation makes a change to an application, it can conflict with the different changes being simultaneously made by other developers. The problem is compounded further if each developer has customized their own integrated development environment, rather than the team agreeing on one cloud-based IDE.

Continuous integration (CI) helps developers merge their code changes back into a shared branch, or "trunk," more frequently, sometimes even daily. Once a developer's changes are merged, they are validated by automatically building the application and running different levels of automated testing (typically unit and integration tests) to ensure the changes haven't broken the app. This means testing everything, from classes and functions to the different modules that make up the application. If a conflict between new and existing code is detected, CI makes it easier, faster, and more frequent to fix those bugs.
Continuous delivery

Following the automation of builds and unit and integration testing in CI, continuous delivery automates the release of that validated code to a repository. So in order to have an effective continuous delivery process, it's important that CI is already built into the development pipeline. The goal of continuous delivery is to have a codebase that is always ready for deployment to a production environment.

In continuous delivery, every stage, from the merging of code changes to the delivery of production-ready builds, involves test automation and code release automation. At the end of this process, the operations team is able to deploy an app to production quickly and easily.
Continuous deployment

The final stage of a mature CI/CD pipeline is continuous deployment. As an extension of continuous delivery, which automates the release of a production-ready build to a code repository, continuous deployment automates releasing an app to production. Because there is no manual gate between production and the preceding stage of the pipeline, continuous deployment relies heavily on well-designed test automation.

In practice, continuous deployment means that a developer's change to a cloud application could go live within minutes of writing it (assuming it passes automated testing). This makes it much easier to continuously receive and incorporate user feedback. Taken together, these three CI/CD practices lower the risks of application deployment, since it is easier to release changes in small pieces than in one big block. This approach does require a significant up-front investment, however, since automated tests will need to be written to accommodate a variety of testing and release stages in the CI/CD pipeline.
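The three practices form a gated sequence of stages, which can be sketched as follows. The stage functions are hypothetical placeholders; a real pipeline would invoke the build system, test runners, and deployment tooling:

```python
# Hypothetical stage functions; each returns True on success.
def build(): return True
def unit_tests(): return True
def integration_tests(): return True
def publish_to_registry(): return True   # end of *continuous delivery*
def deploy_to_production(): return True  # the extra *continuous deployment* step
def flaky_tests(): return False          # a deliberately failing stage

PIPELINE = [build, unit_tests, integration_tests,
            publish_to_registry, deploy_to_production]

def run_pipeline(stages):
    """Run stages in order; the first failure stops the pipeline, so a
    broken change can never reach the next stage, let alone production."""
    for stage in stages:
        if not stage():
            return f"failed at: {stage.__name__}"
    return "deployed"

print(run_pipeline(PIPELINE))                               # deployed
print(run_pipeline([build, flaky_tests, deploy_to_production]))
# failed at: flaky_tests -- deployment never runs
```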
Common CI/CD tools

CI/CD tools can help a team automate development, deployment, and testing. Some tools specifically handle the integration (CI) side, some manage development and deployment (CD), while others specialize in continuous testing or related functions.

One of the best-known open source CI/CD tools is the Jenkins automation server. Jenkins can handle anything from a simple CI server to a complete CD hub.

Tekton Pipelines is a CI/CD framework for Kubernetes platforms that provides a standard, cloud-native CI/CD experience with containers.

Beyond Jenkins and Tekton Pipelines, other open source CI/CD tools you may wish to explore include:

Spinnaker: a CD platform built for multicloud environments

GoCD: a CI/CD server with a strong focus on modeling and visualization

Concourse: an open source tool built around a continuous automation approach

Screwdriver: a build platform designed for CD

You can also consider managed CI/CD tools offered by a variety of vendors. The major public cloud providers all offer CI/CD solutions, as do GitLab, CircleCI, Travis CI, Atlassian Bamboo, and many others.

Additionally, most tools that are essential to DevOps are part of the CI/CD process. Tools for configuration automation (such as Ansible, Chef, and Puppet), container runtimes (such as Docker, rkt, and cri-o), and container orchestration (Kubernetes) aren't strictly CI/CD tools, but they show up in many CI/CD workflows.

127
Q

DevSecOps

A

development, security, and operations

DevSecOps stands for development, security, and operations. It’s an approach to culture, automation, and platform design that integrates security as a shared responsibility throughout the entire IT lifecycle.
DevSecOps vs. DevOps

DevOps isn’t just about development and operations teams. If you want to take full advantage of the agility and responsiveness of a DevOps approach, IT security must also play an integrated role in the full life cycle of your apps.

Why? In the past, the role of security was isolated to a specific team in the final stage of development. That wasn’t as problematic when development cycles lasted months or even years, but those days are over. Effective DevOps ensures rapid and frequent development cycles (sometimes weeks or days), but outdated security practices can undo even the most efficient DevOps initiatives.

Illustration representing a linear progression from Development to Security and then to Operations

Now, in the collaborative framework of DevOps, security is a shared responsibility integrated from end to end. It’s a mindset that is so important, it led some to coin the term “DevSecOps” to emphasize the need to build a security foundation into DevOps initiatives.

Illustration representing collaboration between Development, Security, and Operations roles

DevSecOps means thinking about application and infrastructure security from the start. It also means automating some security gates to keep the DevOps workflow from slowing down. Selecting the right tools to continuously integrate security, like agreeing on an integrated development environment (IDE) with security features, can help meet these goals. However, effective DevOps security requires more than new tools—it builds on the cultural changes of DevOps to integrate the work of security teams sooner rather than later.

128
Q

IaC

A

Infrastructure as Code

Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes. With IaC, configuration files are created that contain your infrastructure specifications, which makes it easier to edit and distribute configurations.
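The core IaC idea, a declarative specification diffed against live state, can be sketched like this. The resource names and fields are invented for illustration; real tools such as Terraform or Ansible have their own schemas:

```python
# Desired state, as it would appear in a version-controlled config file.
desired = {
    "web-1": {"size": "small", "ports": [80, 443]},
    "db-1":  {"size": "large", "ports": [5432]},
}

# Current live state, as reported by the (hypothetical) provider.
actual = {"web-1": {"size": "small", "ports": [80]}}

def plan(desired, actual):
    """Diff desired vs. actual state, like a dry-run 'plan' step."""
    changes = []
    for name, spec in desired.items():
        if name not in actual:
            changes.append(("create", name))
        elif actual[name] != spec:
            changes.append(("update", name))
    for name in actual:
        if name not in desired:
            changes.append(("destroy", name))
    return changes

print(plan(desired, actual))   # [('update', 'web-1'), ('create', 'db-1')]
```

Because the spec is plain text, it can be reviewed, versioned, and re-applied idempotently: running the plan twice against an already-converged environment produces no changes.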

129
Q

ML

A

Machine Learning

130
Q

AI

A

Artificial Intelligence

Recently I visited with the cybersecurity teams at NTT Communications, British Telecom (BT) and DBS Bank. Each has mature, useful and metrics-driven security solutions.

NTT excels at 24x7 security monitoring. Some of the subtleties of its threat management program are pretty amazing; it feels it can identify characteristics of not only groups of attackers, but actual individuals.

BT has an incident response capability that is second to none, driven partly by its interest in combining red team and blue team tactics. These two security teams carefully hone their incident response steps and techniques.

All of these companies have taken a unique approach, in that they are upskilling all dedicated security workers to consider not just the defender’s dilemma, but also the hacker’s dilemma. This means they are not just focused on what happens if the hacker gets past their defenses. They’re focused, instead, on the mistakes an attacker makes, rather than the mistakes a defender can make.
Enter Artificial Intelligence (AI) and Machine Learning

Like many others, these three organizations are looking into the benefits of Artificial Intelligence (AI). While AI might not be fully ready for prime time, only a fool would look the other way or put their head in the sand when it comes to how AI might be able to help improve cybersecurity operations.
Why Use AI?

In the study Emerging Business Opportunities in AI, CompTIA found that only 29% of today’s companies are using AI for mission-critical services. The research shows some of the ways, though, that AI will unlock tremendous potential moving forward.

I’ve been lucky enough to interview a few people about future technologies, including automation and AI. For example, at the CompTIA Communities and Councils Forum (CCF), I interviewed Smith.AI’s Maddy Martin and CrushBank’s David Tan about how AI is being used today. (You can also watch that conversation on our YouTube Channel.)

Both Maddy and David were adamant: While AI can possibly replace jobs, for the foreseeable future, we'll see AI enhance capabilities. But there are a few things to consider.

There are two primary reasons why today’s companies want to use AI:

To automate the collection of data from internet of things (IoT) devices, which generate huge amounts of it.
To identify problems with how information flows – or doesn’t – between business units.

If this is the case, let’s take two common IT job roles into consideration: help desk technician and cybersecurity analyst.
AI and the Help Desk

Recently, I spoke with the team at Dell Computing in India about their use of AI. They use machine learning to triage help desk calls, and it’s doing wonders. While AI isn’t all that good (right now) at telling the difference between sarcasm and earnestness, it is pretty good at language translation and at telling if people are angry. It can pattern match very, very well.

Because AI is good at pattern matching, companies such as Dell, NTT and others are very interested in using AI to quickly identify any repetitive patterns. One BT executive told me that while it is unlikely for AI to take away any particular job roles yet, it is important for today’s help desk workers to focus on skills such as troubleshooting, advanced networking and security. Many of the activities in these three buckets are far less repetitious.

But there’s a warning here: if you find yourself repeating a message or screen presented to you quite often, chances are you’ll need to upskill yourself.
AI and Cybersecurity

At both RSA San Francisco and Infosecurity Europe, I saw quite a few cybersecurity vendors claim they were using machine learning and AI.

I heard some of the following claims:

Automated signature enhancement: Security information and event management (SIEM) tools that use machine learning to automatically improve performance and change alerting signatures.
The ability to do rudimentary threat hunting: Using machine learning techniques, algorithms can run in the background and identify certain patterns made by hackers and hacker groups. In the same way that, say, Mitre Corporation has been able to identify the threat characteristics of threat actor groups such as FIN6 and FIN7, some organizations say they are close to automating this procedure.

The organizations I’ve been talking to haven’t quite bought into these claims, but they’re very interested in seeing the promise of these automated solutions becoming real.

A cybersecurity analyst, for example, tends to spend time in three major areas:

Capturing: Obtaining data from the network or from network hosts
Slicing: Breaking data into categories and turning it into useful trend-based, actionable information – this is the analytics part of the job
Dicing: Visualizing this data so that a human being can make a decision

When talking with cybersecurity analysts from organizations such as BT and DBS, they’ve told me they spend a lot of time tweaking how their security tools capture traffic. They feel that AI and machine learning–based programs can help them free up time, because capturing is a very repetitive thing. If they can be freed up from capturing traffic, they can spend more time analyzing and visualizing data. This is where humans excel. It’s a pretty good example of how AI can free up security workers to focus on more important tasks.

I don’t want to get ahead of myself, here. AI can be used for far more things than just the help desk and cybersecurity. Nevertheless, there are some major considerations that today’s organizations – large and small – need to consider.
How Do You Use AI For IT?

The companies I’ve talked to concerning AI seem to be pretty wise. They’re slowly looking into the realities of AI. For example, one of the important things to consider is that many AI implementations need to be primed and maintained. Let me explain.

Usually, to get machine learning working well, you first must prime the pump with useful information derived from a company’s experience. You can’t just turn on the programming and hope for the best.

The old computer science truism of “garbage in, garbage out” remains in force. This means that even when we start using automated, intelligent solutions, we’ll still need to teach them best practices.

So, even though there are automated pen testing solutions, such as Red Canary, it’s still necessary to teach them useful techniques. And those techniques aren’t universal – they are based on the organization’s specific needs. A health care organization will have a different set of practices than, say, a service provider/tech organization such as NTT or BT.

The organizations that I’ve talked with aren’t skeptical about AI. Far from it. They simply want to make sure that they have organized themselves properly. After all, if AI and machine learning are really forms of automation, it’s extremely important that organizations don’t automate processes and communications paths that are full of problems. One of the realities, then, is that AI will be implemented once organizations feel they have processes that are worth automating.
The Future of AI and Business

It’s tempting to ask the question, “What is the future of AI and business?” But after talking with organizations who are implementing it, it’s best to reverse that question.

Today’s companies want to be relevant, so they are asking careful questions about AI. The smart companies seem to be asking where they can use AI, rather than how AI can use them; the tail can’t wag the dog, here.

Want to learn more about the future of AI?
Check out the study, Emerging Business Opportunities in AI.
Practical Benefits of AI and Machine Learning: Is It Really Cost Savings?

The companies I’ve spoken with often cite cost savings as one of the major benefits of using AI. I have to say that this makes me a bit queasy.

Why?

Because I remember when voice over IP (VoIP) was going to save money. It really didn’t. What it did, though, was improve business communications and enable more efficiencies.

In the long run, this doesn’t save money so much as allow businesses to remain, well, in business. There’s a difference, here. I feel AI will do much the same thing. It may not save money, but wise implementation will save businesses.

With AI and machine learning, companies will be able to do the following:

Eliminate repetitive tasks
Personalize services
More easily “crunch” data to find useful trends

So, I commend the organizations that are using AI and machine learning. They’re neither afraid of it, nor are they being naïve or overly enthusiastic. They see the advent of another useful tool that will help them improve processes and create efficiencies. As long as decisions are made without cynicism, and with an eye toward improving what humans can do best, what’s wrong with that?

131
Q

ANN

A

Artificial Neural Network

Artificial Neural Network is a computational data model used in the development of Artificial Intelligence (AI) systems capable of performing “intelligent” tasks. Neural Networks are commonly used in Machine Learning (ML) applications, which are themselves one implementation of AI. Deep Learning is a subset of ML.

132
Q

DLL

A

Dynamic Link Libraries - Windows

133
Q

Media

A

Media - CDs, DVDs, USB Thumb Drive, external HDDs, tape backups, floppy disks…

134
Q

NAT

A

Network Address Translation - FW filtering method

IP filtering and network address translation

Last Updated: 2021-04-14

IP filtering and network address translation (NAT) act like a firewall to protect your internal network from intruders.

IP filtering lets you control what IP traffic will be allowed into and out of your network. Basically, it protects your network by filtering packets according to the rules that you define. NAT allows you to hide your unregistered private IP addresses behind a set of registered IP addresses. This helps protect your internal network from outside networks. NAT also helps alleviate the IP address depletion problem, because many private addresses can be represented by a small set of registered addresses.
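The basic one-to-one translation idea can be sketched in Python (the addresses below are illustrative, using an RFC 1918 private range and a documentation-range public pool):

```python
# Minimal sketch of basic (one-to-one) NAT: each private source IP is
# mapped to an address drawn from a small pool of registered addresses.

class NatTable:
    def __init__(self, public_pool):
        self.pool = list(public_pool)   # unused registered addresses
        self.mappings = {}              # private IP -> public IP

    def translate(self, private_ip):
        """Return the registered address used for this private host."""
        if private_ip not in self.mappings:
            if not self.pool:
                raise RuntimeError("address pool exhausted")
            self.mappings[private_ip] = self.pool.pop(0)
        return self.mappings[private_ip]

nat = NatTable(["203.0.113.10", "203.0.113.11"])
print(nat.translate("192.168.1.5"))   # first host takes the first pool address
print(nat.translate("192.168.1.5"))   # same host reuses its existing mapping
```

Because many hosts share a small pool, this is also where the "address depletion" benefit mentioned above comes from.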

135
Q

PAT

A

Port Address Translation - FW filtering method
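The many-to-one idea behind PAT (NAT overload) can be sketched in Python: many private (IP, port) pairs share one public IP, distinguished only by translated source ports. The public address and port range are illustrative:

```python
# Sketch of PAT: all inside hosts appear as ONE public IP, and the
# translation table keys on (private IP, private port) pairs.

PUBLIC_IP = "203.0.113.1"

class PatTable:
    def __init__(self, first_port=40000):
        self.next_port = first_port
        self.mappings = {}  # (private_ip, private_port) -> public source port

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return PUBLIC_IP, self.mappings[key]

pat = PatTable()
print(pat.translate("192.168.1.5", 51000))  # ('203.0.113.1', 40000)
print(pat.translate("192.168.1.6", 51000))  # ('203.0.113.1', 40001)
```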

136
Q

ALG

A

Application Layer Gateway - FW filtering method

Application Layer Gateway
What is an Application Layer Gateway?

An application layer gateway (ALG) is a type of security software or device that acts on behalf of the application servers on a network, protecting the servers and applications from traffic that might be malicious.
What does an ALG do?

An application layer gateway (also known as an application proxy gateway) may perform various functions at the application layer of the infrastructure. For example, it is often used to traverse firewalls or provide access control features that are not natively available in the application protocol itself.

These functions may include address and port translation, resource allocation, application response control, and synchronization of data and control traffic.

A web proxy acts as a proxy for the web server, enabling you to manage application layer protocols such as SIP and FTP and to shield the web server by blocking connections when appropriate.
Why are application layer gateways important?

Applications are vital to business operations and daily life, and cyber-attacks often target the application layer of IT infrastructures. To ensure business continuity and protect sensitive data and personally identifiable information (PII), you must protect them at every stage of the process, specifically by addressing the application layer. Application layer gateways are one option for securing applications and their data.
How does an application layer gateway work?

A secure web proxy acts like a proxy server for applications, managing the secure connection between the web browser and the web server. Typically, a web proxy will perform deep packet inspection and block malicious content. An application layer gateway (ALG) is a good fit for organizations that want to create a secure perimeter by filtering traffic for applications and websites. ALG capabilities typically exceed those of application firewalls, which are designed to prevent access to applications rather than to inspect content and data.

137
Q

CLG

A

Circuit-Level gateway - FW type

A circuit-level gateway is a type of firewall.

Circuit-level gateways work at the session layer of the OSI model, or as a "shim layer" between the application layer and the transport layer of the TCP/IP stack. They monitor TCP handshaking between packets to determine whether a requested session is legitimate. Information passed to a remote computer through a circuit-level gateway appears to have originated from the gateway. Firewall traffic is screened against the configured session rules and may be restricted to recognized computers only. Circuit-level firewalls conceal the details of the protected network from external traffic, which helps block access by impostors. Circuit-level gateways are relatively inexpensive and have the advantage of hiding information about the private network they protect. However, they do not filter individual packets.

138
Q

WAF

A

Web Application Firewall

A web application firewall (WAF) is a specific form of application firewall that filters, monitors, and blocks HTTP traffic to and from a web service. By inspecting HTTP traffic, it can prevent attacks exploiting a web application’s known vulnerabilities, such as SQL injection, cross-site scripting (XSS), file inclusion, and improper system configuration.[1]
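A toy Python sketch of the inspection idea follows; the signature patterns are illustrative only, and real WAF rule sets (e.g., in ModSecurity-style engines) are far more sophisticated:

```python
import re

# Toy WAF-style request inspection: block payloads that match crude
# SQL-injection, XSS, or path-traversal signatures.

SIGNATURES = [
    re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.I),  # e.g. ' OR 1=1
    re.compile(r"<script\b", re.I),                            # basic XSS probe
    re.compile(r"\.\./"),                                      # path traversal
]

def inspect(payload: str) -> str:
    """Return 'block' if any signature matches, else 'allow'."""
    return "block" if any(s.search(payload) for s in SIGNATURES) else "allow"

print(inspect("user=alice&id=42"))             # allow
print(inspect("id=1' OR 1=1 --"))              # block
print(inspect("q=<script>alert(1)</script>"))  # block
```

Signature matching like this is only one WAF technique; production WAFs also normalize encodings and track anomaly scores to reduce evasions and false positives.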

139
Q

DLP

A

data loss prevention - Also called Information Leak Protection (ILP) or Extrusion Prevention Systems (EPS)

140
Q

ILP

A

Information Leak Protection - Also called data loss prevention (DLP) or Extrusion Prevention Systems (EPS)

141
Q

EPS

A

Extrusion Prevention Systems - Also called Information Leak Protection (ILP) or data loss prevention (DLP)

142
Q

UTM

A

Unified Threat Management

Unified threat management (UTM) is an approach to information security where a single hardware or software installation provides multiple security functions. This contrasts with the traditional method of having point solutions for each security function.[1] UTM simplifies information-security management by providing a single management and reporting point for the security administrator rather than managing multiple products from different vendors.[2][3] UTM appliances have been gaining popularity since 2009, partly because the all-in-one approach simplifies installation, configuration and maintenance.[4] Such a setup saves time, money and people when compared to the management of multiple security systems. Instead of having several single-function appliances, all needing individual familiarity, attention and support, network administrators can centrally administer their security defenses from one computer. Some of the prominent UTM brands are Cisco, Fortinet, Sophos, Netgear, FortiGate, Huawei, WiJungle, SonicWall and Check Point.[5] UTMs are now typically called next-generation firewalls.

Features

At a minimum, UTMs should include converged security features such as:

Network firewall
Intrusion detection system (IDS)
Intrusion prevention system (IPS)

Some of the other features commonly found in UTMs are:

Gateway anti-virus
Application layer (Layer 7) firewall and control
Deep packet inspection
Web proxy and content filtering
Email filtering for spam and phishing attacks
Data loss prevention (DLP)
Security information and event management (SIEM)
Virtual private network (VPN)
Network access control
Network tarpit
Additional security services against denial of service (DoS), distributed denial of service (DDoS), zero-day attacks, and spyware

Disadvantages

Although a UTM offers ease of management from a single device, it also introduces a single point of failure within the IT infrastructure. Additionally, the UTM approach may run counter to one of the basic information assurance/security approaches, defense in depth, since a UTM replaces multiple security products; a compromise at the UTM layer breaks the entire defense-in-depth posture.[6]

143
Q

NIDS

A

Network Intrusion Detection Systems

144
Q

NIPS

A

Network Intrusion Prevention Systems

145
Q

VDI

A

Virtual Desktop Infrastructure

With VDI, or a Virtual Desktop Infrastructure, you’re running applications in the cloud or in a data center, and you’re running as little of the application as possible on the local device. This virtualization of a user’s desktop is sometimes called VDE, or Virtual Desktop Environment.

This puts all of the computing power in the data center or in the cloud. What the end user sees is really a virtual desktop. All of the work is really happening in this centralized environment. This means that the client's workstation has relatively small computing requirements, and the operating system that's running on the client is less important, as long as it can run the software required to connect to this virtual desktop infrastructure.

Security professionals like VDI because it makes security a lot more centralized. All of the data and applications are in the data center or in a centralized cloud infrastructure. If you need to make any changes, you make them in one single central place, and all of the virtual desktops are able to take advantage of those changes. And all of the data and all of the applications never leave the data center, making it that much more of a secure application environment.

As more applications are moving to the cloud, it becomes a lot more difficult to provide the same level of security. If the clients are working, but the data is in the cloud, how do you manage to keep everything secure?

146
Q

VPC

A

Virtual Private Cloud

Amazon Virtual Private Cloud (VPC) is a commercial cloud computing service that provides users a virtual private cloud, by “provisioning a logically isolated section of Amazon Web Services (AWS) Cloud”.[1] Enterprise customers are able to access the Amazon Elastic Compute Cloud (EC2) over an IPsec based virtual private network.[2][3] Unlike traditional EC2 instances which are allocated internal and external IP numbers by Amazon, the customer can assign IP numbers of their choosing from one or more subnets.[4] By giving the user the option of selecting which AWS resources are public facing and which are not, VPC provides much more granular control over security. For Amazon it is “an endorsement of the hybrid approach, but it’s also meant to combat the growing interest in private clouds”.[5]

147
Q

CASB

A

Cloud Access Security Broker

A CASB (Cloud Access Security Broker) is a type of software intended to secure an organization's SaaS applications (Salesforce, Box, ...) and IaaS (OCI, AWS, Azure, ...) so that the organization's data stays secure.

A CASB secures data end to end, from the cloud to the device. A Cloud Access Security Broker offers many services:

visibility into the company's cloud application usage and detection of shadow IT
user and entity behavior analytics (UEBA)
control of user access
compliance: enforcement of security policies and help with GDPR compliance
alerting on security threats
malware detection, etc.

A cloud access security broker (CASB) (sometimes pronounced cas-bee) is on-premises or cloud based software that sits between cloud service users and cloud applications, and monitors all activity and enforces security policies.[1] A CASB can offer services such as monitoring user activity, warning administrators about potentially hazardous actions, enforcing security policy compliance, and automatically preventing malware.

Types

CASBs deliver security and management features. Broadly speaking, “security” is the prevention of high-risk events, whilst “management” is the monitoring and mitigation of high-risk events.

CASBs that deliver security must be in the path of data access, between the user and the cloud provider. Architecturally, this might be achieved with proxy agents on each endpoint device, or in agentless fashion without configuration on each device. Agentless CASBs allow for rapid deployment and deliver security on both company-managed and unmanaged BYOD devices. Agentless CASBs also respect user privacy, inspecting only corporate data. Agent-based CASBs are difficult to deploy and effective only on devices that are managed by the corporation. Agent-based CASBs typically inspect both corporate and personal data.[citation needed]

148
Q

API

A

Application Programming Interface

149
Q

SAML

A

Security Assertion Markup Language - Key management against cloud threat

SAML is a popular online security protocol that verifies a user’s identity and privileges. It enables single sign-on (SSO), allowing users to access multiple web-based resources across multiple domains using only one set of login credentials.

SAML stands for Security Assertion Markup Language. SAML is an open standard used for authentication. It provides single sign-on across multiple domains, allowing users to authenticate only once. Users gain access to multiple resources on different systems by supplying proof that the authenticating system successfully authenticated them.

SAML is the most widely adopted federated identity standard for authentication. It works by passing a SAML token (called an assertion) containing identifying user information between the authenticating system and a system on a different domain that offers a resource. Typically, the resource is a web- or cloud-based application. Resources can be internal to an organization, externally hosted, or delivered as a service.

Security Assertion Markup Language (SAML) is an open federation standard that allows an identity provider (IdP) to authenticate users and then pass an authentication token to another application, known as a service provider (SP). SAML enables the SP to operate without performing its own authentication, passing the identity along to integrate internal and external users. It allows security credentials to be shared with an SP across a network, typically with an application or service. SAML enables secure, cross-domain communication between a public cloud and other SAML-enabled systems, as well as a selected number of other identity management systems located on-premises or in a different cloud. With SAML, you can enable a single sign-on (SSO) experience for your users across any two applications that support the SAML protocol and services, allowing SSO to perform several security functions on behalf of one or more applications.

SAML relates to the XML variant language used to encode this information and can also cover various protocol messages and profiles that make up part of the standard.

SAML Provider

SAML facilitates the exchange of user identity data between two types of SAML providers:

Identity provider (IdP)—A SAML authority that centralizes user identity data and provides a single point of secure authentication. The IdP can be an in-house identity and access management (IAM) system or a hosted authentication SAML service provider, such as Google Apps.
Service provider (SP)—A SAML consumer that offers a resource to users. Typically, that resource is a web-based application or a paid subscription service, such as a customer relationship management (CRM) platform.

SAML Assertion

A SAML assertion is an XML document that contains all the information necessary to confirm a user's identity, including the source of the assertion, a timestamp indicating when the assertion was issued, and the conditions that make the assertion valid. SAML defines three different types of assertion statements:

Authentication— An authentication assertion affirms that a specific identity provider authenticated a specific user at a specific time.
Attribute—An attribute is an identifying detail associated with a specific user. Examples of attributes include data such as the user’s first name, last name, email address, phone number, X.509 public certificate file, and so on.
Authorization decision—The authorization decision informs whether a specific user has been allowed or denied access to the requested resource. Typically, a SAML Policy Decision Point (PDP) issues this type of assertion when a user requests access to a resource.

A typical SAML assertion comprises a single authentication statement and an optional single attribute statement; however, in certain cases, a SAML response can contain multiple assertions.
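One of the validity conditions an SP evaluates can be sketched in Python; the parameter names mirror the NotBefore/NotOnOrAfter attributes of a SAML Conditions element, and the timestamps are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Sketch of one check a service provider performs on a SAML assertion:
# is "now" inside the assertion's validity window?

def assertion_is_valid(not_before, not_on_or_after, now=None):
    now = now or datetime.now(timezone.utc)
    # SAML semantics: valid from NotBefore (inclusive) up to but
    # excluding NotOnOrAfter.
    return not_before <= now < not_on_or_after

issued = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)
conditions = (issued, issued + timedelta(minutes=5))  # 5-minute window

print(assertion_is_valid(*conditions, now=issued + timedelta(minutes=2)))  # True
print(assertion_is_valid(*conditions, now=issued + timedelta(minutes=6)))  # False
```

Real SPs also verify the XML signature, audience restriction, and issuer before trusting an assertion; the time window is just the easiest condition to show.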

150
Q

CDN

A

content delivery networks

A content delivery network (CDN) refers to a geographically distributed group of servers which work together to provide fast delivery of Internet content.

A CDN allows for the quick transfer of assets needed for loading Internet content including HTML pages, javascript files, stylesheets, images, and videos. The popularity of CDN services continues to grow, and today the majority of web traffic is served through CDNs, including traffic from major sites like Facebook, Netflix, and Amazon.

A properly configured CDN may also help protect websites against some common malicious attacks, such as Distributed Denial of Service (DDOS) attacks.
Is a CDN the same as a web host?

While a CDN does not host content and can’t replace the need for proper web hosting, it does help cache content at the network edge, which improves website performance. Many websites struggle to have their performance needs met by traditional hosting services, which is why they opt for CDNs.

By utilizing caching to reduce hosting bandwidth, helping to prevent interruptions in service, and improving security, CDNs are a popular choice to relieve some of the major pain points that come with traditional web hosting.
What are the benefits of using a CDN?

Although the benefits of using a CDN vary depending on the size and needs of an Internet property, the primary benefits for most users can be broken down into 4 different components:

Improving website load times - By distributing content closer to website visitors by using a nearby CDN server (among other optimizations), visitors experience faster page loading times. As visitors are more inclined to click away from a slow-loading site, a CDN can reduce bounce rates and increase the amount of time that people spend on the site. In other words, a faster website means more visitors will stay, and stay longer.
Reducing bandwidth costs - Bandwidth consumption costs are a primary expense for website hosting. Through caching and other optimizations, CDNs are able to reduce the amount of data an origin server must provide, thus reducing hosting costs for website owners.
Increasing content availability and redundancy - Large amounts of traffic or hardware failures can interrupt normal website function. Thanks to their distributed nature, a CDN can handle more traffic and withstand hardware failure better than many origin servers.
Improving website security - A CDN may improve security by providing DDoS mitigation, improvements to security certificates, and other optimizations.

How does a CDN work?

At its core, a CDN is a network of servers linked together with the goal of delivering content as quickly, cheaply, reliably, and securely as possible. In order to improve speed and connectivity, a CDN will place servers at the exchange points between different networks.

These Internet exchange points (IXPs) are the primary locations where different Internet providers connect in order to provide each other access to traffic originating on their different networks. By having a connection to these high speed and highly interconnected locations, a CDN provider is able to reduce costs and transit times in high speed data delivery.
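The edge-caching idea at the heart of a CDN can be sketched in Python; a plain dict stands in for the origin server, and the paths and content are illustrative:

```python
# Sketch of a CDN edge server: serve content from the local cache when
# possible, and fetch from the origin only on a cache miss.

ORIGIN = {"/index.html": "<html>home</html>", "/logo.png": "PNGBYTES"}

class EdgeServer:
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}
        self.origin_fetches = 0  # counts trips back to the origin

    def get(self, path):
        if path not in self.cache:           # cache miss
            self.origin_fetches += 1
            self.cache[path] = self.origin[path]
        return self.cache[path]             # cache hit: no origin traffic

edge = EdgeServer(ORIGIN)
edge.get("/index.html")
edge.get("/index.html")     # second request served from the edge cache
print(edge.origin_fetches)  # 1
```

The difference between requests served and origin fetches is exactly the bandwidth saving described above; real CDNs add expiry (TTLs), invalidation, and many geographically distributed edges.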

151
Q

CORS

A

Cross-origin resource sharing (CORS) is a browser mechanism which enables controlled access to resources located outside of a given domain. It extends and adds flexibility to the same-origin policy (SOP). However, it also provides potential for cross-domain attacks, if a website’s CORS policy is poorly configured and implemented. CORS is not a protection against cross-origin attacks such as cross-site request forgery (CSRF).
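A minimal Python sketch of the server side of CORS follows; the origin allowlist is hypothetical. A permissive policy here (e.g., reflecting any origin) is exactly the misconfiguration that enables cross-domain attacks:

```python
# Sketch of server-side CORS handling: echo back an allowed Origin in
# Access-Control-Allow-Origin, or send no CORS headers for other origins.

ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin):
    """Return the CORS response headers for a cross-origin request."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin,
                "Vary": "Origin"}  # responses differ per Origin, so mark it
    return {}  # no header: the browser blocks the cross-origin read

print(cors_headers("https://app.example.com"))
print(cors_headers("https://evil.example.net"))  # {}
```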

152
Q

CAM

A

Content Addressable Memory - relevant to MAC flooding and spoofing - switch memory set aside to store the MAC addresses learned for each port (the CAM table).

Content-addressable memory (CAM) is a special type of computer memory used in certain applications requiring very-high-speed searches. It is also known as associative memory, associative storage, or an associative array.
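Why a CAM table's fixed capacity makes MAC flooding work can be sketched in Python; the MAC addresses and the tiny capacity are illustrative:

```python
from collections import OrderedDict

# Sketch of a switch CAM table: once an attacker fills the fixed-size
# table with bogus source MACs, legitimate entries are evicted and the
# switch must flood frames out all ports (which the attacker can sniff).

class CamTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()  # MAC address -> switch port

    def learn(self, mac, port):
        if mac not in self.table and len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # evict the oldest entry
        self.table[mac] = port

    def lookup(self, mac):
        # unknown destination -> flood the frame out every port
        return self.table.get(mac, "FLOOD")

cam = CamTable(capacity=3)
cam.learn("aa:aa:aa:aa:aa:01", 1)            # legitimate host on port 1
for i in range(3):                            # attacker floods bogus MACs
    cam.learn(f"de:ad:be:ef:00:{i:02x}", 7)
print(cam.lookup("aa:aa:aa:aa:aa:01"))        # FLOOD - entry was evicted
```

Real CAM tables hold thousands of entries, but the eviction-then-flood behavior is the same; port security (limiting MACs per port) is the usual countermeasure.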

153
Q

ARP

A

Address Resolution Protocol - relies on MAC addresses to keep track of which MAC address goes with which IP address, and which IP address goes with which MAC address.
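The IP-to-MAC mapping, and why it can be poisoned, can be sketched in Python; classic ARP accepts replies without authentication, so a forged reply simply overwrites a binding (ARP spoofing). Addresses are illustrative:

```python
# Sketch of an ARP cache: IP -> MAC bindings learned from ARP replies.
# There is no validation step, so the last reply seen wins.

arp_cache = {}

def process_arp_reply(ip, mac):
    arp_cache[ip] = mac  # no authentication: last reply wins

process_arp_reply("192.168.1.1", "aa:bb:cc:00:00:01")   # real gateway
print(arp_cache["192.168.1.1"])
process_arp_reply("192.168.1.1", "de:ad:be:ef:00:99")   # forged reply
print(arp_cache["192.168.1.1"])  # traffic now goes to the attacker's MAC
```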

154
Q

ACL

A

Access Control List - Routers - LAN, WAN and DMZ

Access Control List (ACL) traditionally refers to two things in computer security:

  • a system for managing access rights to files at a finer granularity than the method used by UNIX systems allows.
  • in networking, a list of the addresses and ports allowed or denied by a firewall.
    The notion of an ACL is fairly general, though, and one can speak of ACLs for managing access to any type of resource.

An ACL is a list of Access Control Entries (ACEs), each granting or revoking access rights for a user or group.

Under UNIX
Under UNIX, ACLs do not replace the usual permission method. To preserve compatibility, they are added on top of it, as specified in the POSIX 1e draft standard.

UNIX-like systems classically accept only three types of rights:

read (Read)
write (Write)
execute (eXecute)
for three types of users:

the owner of the file
the members of the group to which the file belongs
all other users
However, this method does not cover enough cases, particularly in the enterprise. Corporate networks need to grant rights to selected members of several distinct groups, which requires various workarounds that are cumbersome to implement and maintain under plain UNIX.

Administrator intervention is often required to create the intermediate groups that allow files to be shared between several users or groups of users while keeping them confidential from everyone else.

ACLs fill this gap. Any user or group can be granted any of the three rights (read, write, execute), with no limit on the number of users that can be added.

Mac OS X has supported ACLs since version 10.4 (Tiger).

On networks
On a firewall or filtering router, an ACL is a list of addresses or ports allowed or denied by the filtering device.

Access control lists fall into three broad categories: the standard ACL, the extended ACL, and the named extended ACL.

A standard ACL can match on only the source IP address, a portion of which can be selected with a wildcard mask.
An extended ACL can match on the destination IP address, a portion of the destination address (wildcard mask), the protocol type (TCP, UDP, ICMP, IGRP, IGMP, etc.), the source and destination ports, TCP flows, IP TOS (Type of Service), and IP precedence.
A named extended ACL is an extended ACL to which a name has been assigned.
For example, on Linux it is the Netfilter subsystem that implements network ACLs. Creating an ACL that accepts incoming email, from any IP address, to port 25 (conventionally allocated to SMTP) is done with the following command: iptables --insert INPUT --protocol tcp --destination-port 25 --jump ACCEPT

iptables is the command used to configure Netfilter.

ACLs are well suited to protocols whose ports are static (known in advance), such as SMTP, but are not sufficient for software such as BitTorrent, where ports can vary.
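The standard-versus-extended distinction can be sketched in Python; the rules and addresses are illustrative, and real router ACLs also apply wildcard masks rather than exact-match addresses:

```python
# Sketch of the two network ACL styles: a standard ACL matches on source
# IP only, while an extended ACL also checks protocol and destination
# port. Rules are evaluated top-down and the first match wins.

def match_standard(rules, src_ip):
    for src, action in rules:
        if src in ("any", src_ip):
            return action
    return "deny"  # implicit deny at the end of every ACL

def match_extended(rules, src_ip, proto, dst_port):
    for src, rule_proto, port, action in rules:
        if src in ("any", src_ip) and rule_proto == proto and port == dst_port:
            return action
    return "deny"

std = [("10.0.0.5", "deny"), ("any", "permit")]  # block one host
ext = [("any", "tcp", 25, "permit")]             # allow inbound SMTP only

print(match_standard(std, "10.0.0.5"))             # deny
print(match_standard(std, "10.0.0.9"))             # permit
print(match_extended(ext, "10.0.0.9", "tcp", 25))  # permit
print(match_extended(ext, "10.0.0.9", "tcp", 80))  # deny
```

The implicit deny at the end mirrors real router behavior: any traffic not explicitly permitted is dropped.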

155
Q

DMZ

A

De-Militarized Zone

156
Q

BYOD

A

Bring Your Own Device - NAC

157
Q

NAC

A

Network Access Control

Network access control (NAC) is an approach that makes access to an enterprise network conditional on a user-identification protocol and on that user's machine complying with the usage restrictions defined for the network.

Several companies, such as Cisco Systems, Microsoft, and Nortel Networks, have developed frameworks for implementing mechanisms that protect access to the corporate network and verify that client machines comply with the security rules imposed by the company: state of antivirus protection, security updates, presence of a certificate, and many others.

These frameworks have given rise to a good number of "appliances", hardware specialized in network access control.

158
Q

DTP

A

dynamic trunking protocol - Switch spoofing in VLAN hopping

VLAN Hopping
VLAN hopping is a computer-security exploit. The principle is that an attacking host on one VLAN gains access to the traffic of other VLANs to which it should not have access. There are two methods:

Switch spoofing
Double tagging
Switch Spoofing
The switch spoofing technique consists of imitating a trunking switch. The Dynamic Trunking Protocol (DTP) is generally used to carry out this attack.

The attack proceeds as follows:

Start by sending DTP frames on an access port
If the DTP mode is DYNAMIC AUTO or DYNAMIC DESIRABLE, the attack is possible
Send a negotiation request to switch the link into trunk mode
Remediation
Switch spoofing is only exploitable when a switch's interfaces are configured to negotiate trunking.

The first thing to do is disable DTP: switchport nonegotiate

Then, make sure that ports not configured for trunking are configured as access ports: switchport mode access

159
Q

NAT

A

Network Address Translation (NAT) - (router, ports)

160
Q

PAT

A

Port Address Translation - (router, ports)

161
Q

PABX, or PBX

A

Private Automatic Branch Exchange - Internal Telephony

162
Q

VoIP

A

Voice Over Internet Protocol

163
Q

QoS

A

Quality of Service

Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.

In the field of computer networking and other packet-switched telecommunication networks, quality of service refers to traffic prioritization and resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priorities to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.

Quality of service is particularly important for the transport of traffic with special requirements. In particular, developers have introduced Voice over IP technology to allow computer networks to become as useful as telephone networks for audio conversations, as well as supporting new applications with even stricter network performance requirements.
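
The metrics listed above (packet loss, delay, jitter) can all be computed from per-packet timestamps. A toy illustration, using the mean absolute difference of consecutive one-way delays as a simplified jitter measure (real implementations such as RFC 3550 use a smoothed estimator):

```python
# Illustrative QoS metrics from per-packet send/receive timestamps (seconds).
sent = [0.00, 0.02, 0.04, 0.06, 0.08]   # when each packet was sent
recv = [0.05, 0.07, 0.10, None, 0.13]   # None = packet lost

delivered = [(s, r) for s, r in zip(sent, recv) if r is not None]
delays = [r - s for s, r in delivered]  # one-way delay of delivered packets

loss_pct = 100 * (len(sent) - len(delivered)) / len(sent)
avg_delay = sum(delays) / len(delays)
# simplified jitter: mean absolute change between consecutive delays
jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)

print(f"loss: {loss_pct:.0f}%")  # loss: 20%
print(f"avg delay: {avg_delay * 1000:.1f} ms")
print(f"jitter: {jitter * 1000:.2f} ms")
```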

164
Q

SOHO

A

Small Office/Home Office

165
Q

IDS

A

Intrusion Detection System

166
Q

HIDS

A

Host-based IDS

167
Q

NIDS

A

Network-based IDS

168
Q

BIOS

A

Basic Input Output System

169
Q

TPM

A

Trusted Platform Module - chip residing on the motherboard that contains encryption keys (used with BIOS/UEFI)

TPM (Trusted Platform Module) is a computer chip (microcontroller) that can securely store artifacts used to authenticate the platform (your PC or laptop). These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy. Authentication (ensuring that the platform can prove that it is what it claims to be) and attestation (a process helping to prove that a platform is trustworthy and has not been breached) are necessary steps to ensure safer computing in all environments.
Trusted modules can be used in computing devices other than PCs, such as mobile phones or network equipment.

170
Q

NAS

A

Network Attached Storage

What is NAS (Network Attached Storage) and why is NAS Important?
It is widely accepted that data is a critical asset for companies. Without access to their corporate data, companies may not be capable of serving their customers with the expected level of service. Poor customer service, loss of sales, or even business liquidation can result from corporate information not being available.

But it is equally true that in most businesses, the focus is not on the storage, but on the applications that consume it. Additionally, small businesses find themselves faced with other demands such as:

Simplified operation (many small businesses do not have IT staff)
Accessibility, reliability and availability of applications and systems for day-to-day operations
Easy to use security, backup and recovery to protect application data
Availability of a wide range of applications to support their business needs
What is Network Attached Storage (NAS)?
The SNIA Dictionary defines NAS as:

A term used to refer to storage devices that connect to a network and provide file access services to computer systems. These devices generally consist of an engine that implements the file services, and one or more devices, on which data is stored. NAS uses file access protocols such as NFS or CIFS.

Designers of software applications suited to smaller business environments tend to use file-based systems to meet these goals, particularly those of flexibility, simplicity and ease of management; and there are a wide variety of easy-to-use tools to provide security, and robust backup & recovery.

NAS systems are popular with enterprise and small businesses in many industries as effective, scalable and low-cost storage solutions. They can be used to support email systems, accounting databases, payroll, video recording and editing, data logging, business analytics and more; a wide variety of other business applications are underpinned by NAS systems.

Given the flexibility and popularity of NAS systems, most cloud providers offer NAS services; that makes it possible to mix and match NAS storage systems and cloud services in a business, allowing the potential of optimizing cost, management effort and performance while giving the business complete control over location and security.

Some of the benefits of NAS include:

Simple to operate; a dedicated IT professional is generally not required
Lower cost; can significantly reduce wasted space over other storage technologies like SAN
Easy data backup and recovery, with granular security features
Centralization of data storage in a safe, reliable way for authorised network users and clients
Supports a large variety of applications
Permits data access across the network, including cloud based applications and data
With a NAS, data is continually accessible, making it easy for corporate teams to collaborate, respond to customers in a timely fashion, and to improve data management and security because information is in one place. Additionally, because NAS is like a private cloud – and the same services can be made available in the cloud – data may be accessed remotely using a network connection, meaning employees and applications can work anywhere, anytime.

171
Q

SAN

A

Storage Area Network

SAN technology addresses advanced enterprise storage demands by providing a separate, dedicated, highly scalable high-performance network designed to interconnect a multitude of servers to an array of storage devices. The storage can then be organized and managed as cohesive pools or tiers. A SAN enables an organization to treat storage as a single collective resource that can also be centrally replicated and protected, while additional technologies, such as data deduplication and RAID, can optimize storage capacity and vastly improve storage resilience – compared to traditional direct-attached storage (DAS).

172
Q

RAID

A

Redundant Array of Independent Disks - RAID is a set of storage virtualization techniques for distributing data across multiple hard drives in order to improve the performance, security, or fault tolerance of the overall system(s).

What is RAID (Redundant Array of Inexpensive Disks)
RAID is a storage acronym that stands for Redundant Array of Independent Disks or Redundant Array of Inexpensive Disks. The letter "I" can stand for either word; "Independent" is generally considered the more appropriate term, and which expansion is used depends on the context of the conversation.

To a layperson, RAID can sound like confusing technical jargon, but with a good explanation it is quite simple. RAID is a configuration of two or more hard drives that work as a single unit on one computer system. Although the configuration varies with the intended use, the main concept is to provide a highly available installation that isolates data storage from events such as system or infrastructure failures.

In information technology, "high availability" is achieved through data redundancy. Deploying RAID arrays in the data center reduces the likelihood of system failures caused by hard drive failures: a properly configured disk array keeps operating despite a drive failure, so users of the data center never experience downtime.

The following technical terms are used repeatedly in this article, so they are briefly explained here for the reader.

Redundancy – having multiple components that provide the same function, so that the system can continue operating in the event of a partial system failure.

Fault Tolerance – the system is designed so that, if a component fails, a backup component is available, preventing loss of service.

Types of RAID
RAID can be deployed in software as well as in hardware.

Data storage in RAID architecture
RAID presents itself to the computer system as a single hard disk, but in fact distributes the data across several hard disk drives. The distribution is done in two generalized ways: mirroring, which replicates the data on another disk, and striping, which splits the data across the available disk drives. Parity is a third storage method built into RAID: extra information is saved across the disk array so that it can be used to recreate or reconstruct data affected by errors or loss when a disk drive fails.

Redundancy comes into effect as soon as a disk drive fails while the rest of the array continues to function. The IT administrator's duty is to resolve the drive failure while the problem is still small; the longer a failed drive remains in place, the more vulnerable the other drives become to failure. The faulty drive can be replaced without taking down the whole system, a technique called hot swapping, but this flexibility is available only in certain RAID types. Data recovery therefore depends strictly on the RAID level deployed in the data center, and hot swapping must be given due importance when planning the RAID architecture so faulty drives can be replaced without affecting the working system.

To ensure foolproof redundancy, enterprises opt for nested RAID levels, which combine two or more RAID configurations to gain the advantages of both methods. An example is RAID 10, which combines the mirroring of RAID level 1 with the striping of RAID level 0.

Requirements for Deploying RAID
Hardware
– RAID can be deployed on hard drives including SATA, ATA and SCSI. The number of hard disks required depends on the RAID level chosen. It is always recommended to use matched hard drives of the same capacity, because most arrays use the capacity of the smallest drive: if a 250GB drive is deployed in a RAID configuration alongside an 80GB drive, 170GB of it is wasted and usable only in a JBOD ("just a bunch of disks") arrangement. Drives should match not only in capacity but also in write speeds, transfer rates, and so on. RAID controllers are available for SCSI, SATA and ATA hard disks, and some systems allow RAID arrays to operate across controllers of different formats.

For readers unfamiliar with RAID technology: the RAID controller is the hardware through which all the hard drives are connected, and it is responsible for processing the data. It is similar to the motherboard arrangement where typical drive connections are found.

A hot-swappable drive bay is also essential for situations where a hard disk drive turns faulty and needs to be replaced. It allows a failed drive to be replaced in a live system simply by unlocking the drive cage, sliding it out of the case, and sliding the replacement into place.

Software requirements
– RAID can be deployed on any modern operating system, provided the appropriate drivers from the RAID controller manufacturer are available. The operating system and software should be brought up to date from the start, and all data must be backed up beforehand so that it can be restored onto the newly built RAID array. If the RAID array is maintained purely for data storage rather than to run the operating system, things are simpler.

RAID Level Configurations
There are roughly a dozen RAID level schemes in use; only the most common are summarized below.

RAID Level 0
– RAID level 0 provides no redundancy, so strictly speaking it does not fit the definition of a RAID array. In RAID 0, data is written to two drives in an alternating fashion, which is called striping. For example, given ten chunks of data numbered 1 through 10, chunks 1, 3, 5, 7 and 9 are written to the first drive and chunks 2, 4, 6, 8 and 10 to the second, in sequential order. This splitting roughly doubles the speed of a single hard drive and enhances performance, but if one drive fails, the data is lost. The capacity of RAID 0 equals the sum of the individual drives: two 80GB hard disks yield a 160GB RAID 0 array.

RAID Level 1
– With RAID level 1, redundancy becomes the driving factor. A minimum of two drives is used, and the data is written to both: the first drive is cloned, or mirrored, onto the second, making the two drives identical. If the first drive fails, the data is still available from the second. The usable capacity of a RAID 1 array equals half the sum of the individual drives: two 160GB drives yield just 160GB.

RAID Level 2
– In RAID 2, data is striped so that each sequential bit is on a different drive. Each data word has its own Hamming code; on every read, the Hamming code verifies data accuracy and corrects single-disk errors. The array can recover from multiple simultaneous hard drive failures. A minimum of two drives is required for RAID level 2.

RAID Level 3
– RAID 3 requires a minimum of three drives. Data blocks are split and written across the data disks; stripe parity is generated on writes, recorded on a parity disk, and checked on reads. A RAID 3 array can recover from a hard drive failure and is deployed in environments where applications need high-speed throughput, such as video production, video editing and live streaming. Read/write speed is high at this level.

RAID Level 4
– RAID 4 also requires a minimum of three drives, but at this level entire blocks are written to a disk. Parity is generated on writes, recorded on a parity disk, and checked on reads. This level has high read speeds and is highly efficient.

RAID Level 5
– RAID 5 uses at least three drives. Data blocks are written to the data disks, and the parity generated on writes is distributed across the drives and checked on reads. If a drive fails, reads can be calculated from the distributed parity, so the failure is masked from the user; however, while a single drive is failed, the performance of the entire array is degraded.

RAID Level 6
– RAID 6 offers fault tolerance for two drive failures: the system keeps functioning even with two failed drives. It uses block-level striping with double distributed parity. Recovery time depends on the size of the disk drives. Double parity provides additional time to rebuild the array without the data being at risk if a single additional drive fails while the rebuild is in progress.

Next come the nested RAID levels, also known as hybrid RAID levels, built by combining two RAID levels.

RAID Level 10 (1+0)
– RAID 10 requires a minimum of four drives and provides both fault tolerance and redundancy: it combines the data-splitting (striping) of RAID 0 with the mirroring of RAID 1. A RAID 10 array can recover from multiple simultaneous hard drive failures and is ideal for high-end server applications.

RAID Level 0+1
– RAID 0+1 also requires a minimum of four drives and can handle multiple drive failures. It combines RAID 0 and RAID 1 and is used in imaging applications and file servers. It offers high performance at the expense of some reliability. In RAID 0+1, a second striped set is created to mirror the primary set, and the array continues to operate with one or more drive failures within a mirror set.
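
The parity idea behind RAID levels 3 through 6 can be demonstrated in a few lines: parity is the XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity. A toy sketch, with byte strings standing in for disk blocks:

```python
# Toy demonstration of RAID-style XOR parity (not a real storage driver).
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]  # blocks held on three data disks
parity = xor_blocks(data)           # stored on (or striped across) a fourth

# Disk holding data[1] fails; rebuild its block from survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)  # b'BBBB'
```

Because XOR is its own inverse, the same function computes the parity and performs the rebuild.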

173
Q

HSM

A

Hardware Security Module

A Hardware Security Module (HSM) is an electronic hardware device providing a security service that consists of generating, storing and protecting cryptographic keys. The hardware may take the form of a PCI plug-in card for a computer or an external SCSI/IP appliance, for example.

The same service can also be obtained in software (a Software Security Module), but a hardware module provides a higher level of security.

HSMs meet international security standards such as FIPS 140 and Common Criteria EAL4+, and can support the major cryptographic APIs PKCS#11, CryptoAPI, and Java JCA/JCE.

They can also be used directly by database servers such as Oracle or MS SQL Server ("Extensible Key Management", or EKM).

The trust placed in a public key infrastructure rests on the integrity of the certificate authority (CA). The CA's certificate holds the root cryptographic signing key. This key is used to sign the public keys of those who hold certificates from that authority; above all, it signs its own public key. If this key is compromised, every certificate signed by the CA becomes suspect and the CA's credibility is destroyed. Hence the importance of protecting this key and its usage; HSMs answer this need.

174
Q

AES

A

Advanced Encryption Standard

175
Q

EFS

A

Encrypting File System - Windows

The Encrypting File System (EFS) is a feature introduced with the third version of the NTFS file system, available on Microsoft Windows 2000 and later. It allows files to be stored encrypted on the file system, protecting personal information from attackers with direct physical access to the computer.

User authentication and access control lists can protect files from unauthorized access while the operating system is running, but they are easily bypassed if an attacker gains physical access to the computer. One solution is to encrypt the files on the computer's disks. EFS does this using symmetric cryptography, ensuring that decrypting the files is practically impossible without the right key. However, EFS does not prevent brute-force attacks against user passwords; in other words, file encryption provides little protection if the user's password is easy to guess.

176
Q

SED

A

Self-Encrypting Drive - hardware-based encryption

177
Q

AV

A

Anti-virus

178
Q

SIM

A

Subscriber Identity Module

179
Q

IMSI

A

International Mobile Subscriber Identity (IMSI)

The international mobile subscriber identity (IMSI) /ˈɪmziː/ is a number that uniquely identifies every user of a cellular network.[1] It is stored as a 64-bit field and is sent by the mobile device to the network. It is also used for acquiring other details of the mobile in the home location register (HLR) or as locally copied in the visitor location register. To prevent eavesdroppers from identifying and tracking the subscriber on the radio interface, the IMSI is sent as rarely as possible and a randomly-generated TMSI is sent instead.

The IMSI is used in any mobile network that interconnects with other networks. For GSM, UMTS and LTE networks, this number was provisioned in the SIM card and for cdmaOne and CDMA2000 networks, in the phone directly or in the R-UIM card (the CDMA equivalent of the SIM card). Both cards have been superseded by the UICC.

An IMSI is usually presented as a 15-digit number but can be shorter. For example, MTN South Africa’s old IMSIs that are still in use in the market are 14 digits long. The first 3 digits represent the mobile country code (MCC), which is followed by the mobile network code (MNC), either 2-digit (European standard) or 3-digit (North American standard). The length of the MNC depends on the value of the MCC, and it is recommended that the length is uniform within a MCC area.[2] The remaining digits are the mobile subscription identification number (MSIN) within the network’s customer base, usually 9 to 10 digits long, depending on the length of the MNC.

The IMSI conforms to the ITU E.212 numbering standard.

IMSIs can sometimes be mistaken for the ICCID (E.118), which is the identifier for the physical SIM card itself (or now the virtual SIM card if it is an eSIM). The IMSI lives as part of the profile (or one of several profiles if the SIM and operator support multi-IMSI SIMs) on the SIM/ICCID.
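
The MCC/MNC/MSIN layout described above is easy to sketch as a parser. Since the MNC length depends on the operator, it is passed in explicitly here; the function name and the sample IMSI digits are made up for illustration.

```python
# Illustrative IMSI parser: 3-digit MCC, then a 2- or 3-digit MNC
# (operator-dependent), then the MSIN. Not a carrier lookup.
def parse_imsi(imsi: str, mnc_len: int = 2):
    if not (imsi.isdigit() and len(imsi) <= 15):
        raise ValueError("IMSI must be a number of up to 15 digits")
    if mnc_len not in (2, 3):
        raise ValueError("MNC is 2 or 3 digits")
    return {
        "mcc": imsi[:3],              # mobile country code
        "mnc": imsi[3:3 + mnc_len],   # mobile network code
        "msin": imsi[3 + mnc_len:],   # subscriber identification number
    }

# 15-digit example with a 3-digit (North American style) MNC:
print(parse_imsi("310150123456789", mnc_len=3))
# {'mcc': '310', 'mnc': '150', 'msin': '123456789'}
```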

180
Q

BYOD

A

Bring Your Own Device

181
Q

MBSA

A

Microsoft Baseline Security Analyzer - This tool can help identify security misconfigurations within your network’s workstations.

What does the Microsoft Baseline Security Analyzer do?
The Microsoft Baseline Security Analyzer (MBSA) is a software tool that helps determine the security of your Windows computer based on Microsoft’s security recommendations.

182
Q

GPE

A

Group Policy Editor - Windows
Open Local Group Policy Editor in Command Prompt
From the Command Prompt, type ‘gpedit.msc’ and hit ‘Enter.’

The Group Policy Editor is a Windows administration tool that allows users to configure many important settings on their computers or networks. Administrators can configure password requirements, startup programs, and define what applications or settings users can change.

These settings are called Group Policy Objects (GPOs). Attackers use GPOs to turn off Windows Defender; system administrators use GPOs to deal with locked-out users.

This blog will deal with the Windows 10 version of Group Policy Editor (also known as gpedit), but you can find it in Windows 7, 8, and Windows Server 2003 and later.

This piece will cover how to open and use Group Policy Editor, some important security settings in GPOs, and some alternatives to gpedit.

How To Access Group Policy Editor Windows 10: 5 Options
There are several ways to open Group Policy Editor. Choose your favorite!

Option 1: Open Local Group Policy Editor in Run
Open Search in the Toolbar and type Run, or select Run from your Start Menu.
Type ‘gpedit.msc’ in the Run command and click OK.
Option 2: Open Local Group Policy Editor in Search
Open Search on the Toolbar
Type ‘gpedit’ and click ‘Edit Group Policy.’
Option 3: Open Local Group Policy Editor in Command Prompt
From the Command Prompt, type ‘gpedit.msc’ and hit ‘Enter.’
Option 4: Open Local Group Policy Editor in PowerShell
In PowerShell, type ‘gpedit’ and then ‘Enter.’
If you would prefer, you can also use PowerShell to make changes to Local GPOs without the UI.
Option 5: Open Local Group Policy Editor in Start Menu Control Panel
Open the Control Panel on the Start Menu.
Click the Windows icon on the Toolbar, and then click the widget icon for Settings.
Start typing ‘group policy’ or ‘gpedit’ and click the ‘Edit Group Policy’ option.
What Can You Do With Group Policy Editor
A better question would be, what can’t you do with Group Policy Editor! You can do anything from set a desktop wallpaper to disable services and remove Explorer from the default start menu. Group policies control what version of network protocols are available and enforce password rules. A corporate IT security team benefits significantly by setting up and maintaining a strict Group Policy. Here are a few examples of good IT security group policies:

Limit the applications users can install or access on their managed corporate devices.
Disable removable devices like USB drives or DVD drives.
Disable network protocols like TLS 1.0 to enforce usage of more secure protocols.
Limit the settings a user can change with the Control Panel. For example, let them change screen resolution but not the VPN settings.
Specify an excellent company-sanctioned wallpaper, and turn off the user’s ability to change it.
Keep users from accessing gpedit to change any of the above settings.
Those are just a few examples of how an IT security team could use Group Policies. If the goal is a more secure and hardened environment for your organization, use group policies to enforce good security habits.

Components of the Group Policy Editor
The Group Policy Editor window is a list view on the left and a contextual view on the right. When you click an item on the left side, it changes the focus of the right to show you details about that thing you clicked.

The top-level nodes on the left are “Computer Configuration” and “User Configuration.” If you open the tree for Computer Configuration, you can explore the options you have to manage different system behavior aspects.

For example, under Computer Configuration -> Administrative Templates -> Control Panel -> Personalization, you will see things like “Do not display the lock screen” on the right side.

183
Q

GPOs

A

Group Policy Objects

Local Group Policy Editor is a Microsoft Management Console (MMC) snap-in that is used to configure and modify Group Policy settings within Group Policy Objects (GPOs). Administrators need to be able to quickly modify Group Policy settings for multiple users and computers throughout a network environment.

184
Q

NTFS

A

New Technology File System

185
Q

APFS

A

Apple File System

186
Q

RPM

A

Red Hat Package Manager - patch management (Linux)

187
Q

MSCCM

A

Microsoft System Center Configuration Manager - to patch through a large network with Windows


188
Q

“PTIA”

A

planning, testing, implementing, and auditing of software patches - a mnemonic for patch management

What is a Patch Management Process?

When patches to vulnerabilities need to be implemented, it is very important that a consistent and repeatable process is followed. This will ensure all patches are reviewed, tested, and validated prior to implementation. Developing a patch management policy should be the first step in this process. A patch management policy outlines the process an organization is to take to update code on a consistent and reliable basis to ensure systems are not negatively affected by the change.
How Do You Implement a Patch Management Process?
Implementing a Successful Patch Management Process

To reiterate, a flexible and responsive security patch management process is critical to maintaining proper cyber hygiene.

There are many different methodologies and guidance to help with building a quality patch management process. The key takeaway is to make sure you implement a process that aligns with your organization’s people, processes, and resources.

The process implemented must be repeatable and there must be buy-in throughout the entire organization, from the administrators installing the patches all the way up to the executives and board of directors. If there is no buy-in, it does not matter how great your process is because the chances that the process is being followed are very low.

If you do not have a process in place or are taking this time to review and update, the SANS Institute InfoSec Reading Room has provided a good methodology on how to implement a patch management process. At a very high level the methodology is:

Baseline and Harden
Develop a Test Environment
Develop a Backout Plan
Patch Evaluation and Collection
Configuration Management
Patch Rollout
Maintenance Phase – Procedures and Policies

Another guidance provided by IRS.gov is very similar and focuses on:

Assess
Identify
Evaluate and Plan
Deploy
Maintain
189
Q

TOS

A

Trusted Operating System

Trusted Operating System (TOS) generally refers to an operating system that provides sufficient support for multilevel security and evidence of correctness to meet a particular set of government requirements.

An OS is trusted if it can provide

Memory Protection : Each user’s program must run in a portion of memory protected against unauthorized accesses. The protection will certainly prevent outsiders’ accesses, and it may also control a user’s own access to restricted parts of the program space. Differential security, such as read, write, and execute, may be applied to parts of a user’s memory space
File Protection : aims to prevent programs from replacing critical OS files. Protecting core system files mitigates problems such as DLL hell with programs and the OS.
General object access control : Users need general objects, such as constructs to permit concurrency and allow synchronization. However, access to these objects must be controlled so that one user does not have a negative effect on other users
User Authentication : must identify each user who requests access and must ascertain that the user is actually who he or she purports to be. The most common authentication mechanism is password comparison.
I/O device access control: The OS must be able to have an I/O control with a lookup table with an access control matrix
Guaranteed fair service: All users expect CPU usage and other services to be provided so that no user is indefinitely starved of service. Hardware clocks combine with scheduling disciplines to provide fairness; hardware facilities and data tables combine to provide control.

To design a trusted OS, we have to build the components that make the OS trusted. An OS is trusted if the following components come together:

Policy: Security requirements, well defined, consistent, unambiguous, implementable
Model: Representation of the policy, formal. Should not degrade functionality.
Design: Includes functionality, implementation option
Trust: Review of features; assurance makes an OS worthy of trust. For an OS to be trusted, its processes must not contain any malicious segments and must be free of security flaws. The OS must be evaluated and approved, and it must be secured by enforced security policies, giving assurance that our sensitive information and data will be protected.

The key features of a Trusted OS are:

Identification and Authentication: The OS should have the ability to tell who is requesting access to an object, and must be able to verify the subject’s identity.
Mandatory access control (MAC) provides that access control policy decisions are made beyond the control of the individual owner of an object. A central authority determines what information is to be accessible by whom, and the user cannot change access rights.
Discretionary access control (DAC), leaves a certain amount of access control to the discretion of the object’s owner or to anyone else who is authorized to control the object’s access. The owner can determine who should have access rights to an object and what those rights should be.
Object Reuse Protection: OS goals include efficiency, and it is often efficient to reuse objects rather than completely destroy them. Trusted systems must ensure that security cannot be abused through the reuse of objects, usually by clearing, or zeroing out, any object before it is allocated to a user.
Complete Mediation: Trusted OS’s must perform complete mediation, meaning that all accesses must be controlled and verified.
Trusted path : is a mechanism that provides confidence that the user is communicating with what the user intended to communicate with, ensuring that attackers can’t intercept or modify whatever information is being communicated.
Accountability and Audit: Accountability usually entails maintaining a log of security-relevant events, listing each event and the person responsible for the addition, deletion, or change. A trusted OS must protect the audit logs from outsiders and record every security-relevant event.
Audit Log Reduction: As logs can be huge in size trusted OS’s should have the ability to change the log location, or reduce the size based on needs.
Intrusion Detection: Trusted OS must be able to detect some attacks
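The object reuse protection feature above can be sketched in a few lines. This is a hypothetical, user-space model (the `ZeroingAllocator` name and structure are illustrative assumptions; real trusted OSes enforce this inside the kernel's memory manager):

```python
# Sketch of object reuse protection: an allocator that zeroes a block
# before handing a previously used block to a new subject.

class ZeroingAllocator:
    def __init__(self):
        self._free_blocks = []

    def release(self, block: bytearray):
        # The block returns to the pool still holding the old owner's data.
        self._free_blocks.append(block)

    def allocate(self, size: int) -> bytearray:
        for block in self._free_blocks:
            if len(block) == size:
                self._free_blocks.remove(block)
                block[:] = bytes(size)   # zero residual data before reuse
                return block
        return bytearray(size)           # fresh blocks are already zeroed

alloc = ZeroingAllocator()
buf = alloc.allocate(8)
buf[:] = b"secret!!"           # first subject writes sensitive data
alloc.release(buf)
reused = alloc.allocate(8)     # second subject gets the same block back
assert reused == bytearray(8)  # the residual "secret!!" has been cleared
```

Without the zeroing step, the second subject could read the first subject's leftover data, which is exactly the leak this feature prevents.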


190
Q

SCCM

A

System Center Configuration Manager - to patch through a large Windows network, we can use Microsoft's System Center Configuration Manager (SCCM), a tool that allows us as admins to manage large amounts of software across the network, as well as push out new configurations and policy updates to all of our PCs.

191
Q

ROT

A

Root Of Trust - in TPM, for hardware in supply chain assessment

Root of Trust (RoT) is a source that can always be trusted within a cryptographic system. Because cryptographic security is dependent on keys to encrypt and decrypt data and perform functions such as generating digital signatures and verifying signatures, RoT schemes generally include a hardened hardware module. A principal example is the hardware security module (HSM) which generates and protects keys and performs cryptographic functions within its secure environment.

Because this module is for all intents and purposes inaccessible outside the computer ecosystem, that ecosystem can trust the keys and other cryptographic information it receives from the root of trust module to be authentic and authorized. This is particularly important as the Internet of Things (IoT) proliferates, because to avoid being hacked, components of computing ecosystems need a way to determine information they receive is authentic. The RoT safeguards the security of data and applications and helps to build trust in the overall ecosystem.

RoT is a critical component of public key infrastructures (PKIs) to generate and protect root and certificate authority keys; code signing to ensure software remains secure, unaltered and authentic; and creating digital certificates for credentialing and authenticating proprietary electronic devices for IoT applications and other network deployments.

192
Q

PCR

A

Platform Configuration Registers - ROT

Platform Configuration Registers (PCRs) are one of the essential features of a TPM. Their prime use case is to provide a method to cryptographically record (measure) software state: both the software running on a platform and configuration data used by that software. The PCR update calculation, called an extend, is a one-way hash so that measurements can’t be removed. These PCRs can then be read to report their state. They can also be signed to return a more secure report, called an attestation (or quote). PCRs can also be used in an extended authorization policy to restrict the use of other objects
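The extend operation described above can be sketched as a one-way hash chain. This is a simplified illustration of the idea, not the TPM specification's exact encoding; it assumes SHA-256 PCRs:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # extend: new PCR value = H(old PCR value || H(measurement));
    # one-way, so a recorded measurement cannot be removed afterwards
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at reset
for component in [b"bootloader", b"kernel", b"config"]:
    pcr = pcr_extend(pcr, component)

# The same measurements in a different order yield a different PCR value,
# so the final value attests to the exact boot sequence.
tampered = bytes(32)
for component in [b"kernel", b"bootloader", b"config"]:
    tampered = pcr_extend(tampered, component)
assert pcr != tampered
```

Reading (or signing, for an attestation) the final `pcr` value lets a verifier check that exactly these components ran in exactly this order.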

193
Q

AIK

A

Attestation Identity Key - ROT

The trusted platform module (TPM) can be used to create cryptographic public/private key pairs in such a way that the private key can never be revealed or used outside the TPM (that is, the key is non-migratable). This type of key can be used to guarantee that a certain cryptographic operation occurred in the TPM of a particular computer by virtue of the fact that any operation that uses the private key of such a key pair must occur inside that specific TPM.

It can also be useful to be able to cryptographically prove such a property of a key, so that a relying party can know that any use of the private key must have occurred inside that TPM.

An Attestation Identity Key (AIK) is used to provide such a cryptographic proof by signing the properties of the non-migratable key and providing the properties and signature to the CA for verification. Since the signature is created using the AIK private key, which can only be used in the TPM that created it, the CA can trust that the attested key is truly non-migratable and cannot be used outside that TPM.

A CA needs to know that it can trust an AIK, and that it is not being provided just any key that was created outside a TPM and can be used anywhere. This trust is formed by AIK activation, which is a process defined by the TPM that can be used to transfer trust from a TPM endorsement key (EK) to an AIK.

A TPM EK is another public/private key pair of which the private portion never leaves the TPM, but the EK is the root of the TPM’s identity, and should be assumed to be unchangeable. As the root of the TPM’s identity, there has to be a way to establish trust in the EK so that CA can have some degree of trust that the private portion of the EK will never be used outside the TPM.

Windows server supports the following methods for establishing trust in a TPM device:

Trust module key validation where a SHA2 hash of the client-provided EK public key (EKPub) or AIK public key (AIKPub) is checked against an administrator-managed list. For processing rules, see section 3.2.2.6.2.1.2.5.2.

Trust module certificate validation where the chain for the client-provided EK certificate ([TCG-Cred] section 3.2) or AIK certificate is built and verified to chain up to an administrator-selected list of CAs and root CAs. For processing rules, see section 3.2.2.6.2.1.2.5.1.

Trust the calling client's assertion that the EKPub is from a TPM. For processing rules, see section 3.2.2.6.2.1.2.5.

The Windows Client Certificate Enrollment Protocol allows clients and CAs to perform key attestation. Enterprise key attestation is communicated by setting either of the following flags in the certificate template: CT_FLAG_ATTEST_REQUIRED or CT_FLAG_ATTEST_PREFERRED.

194
Q

FPGA

A

field programmable gate array - (anti-tamper for hardening)

A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term field-programmable. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools.

FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects allowing blocks to be wired together. Logic blocks can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.[1] Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.

FPGAs have a remarkable role in embedded system development due to their capability to start system software development simultaneously with hardware, enable system performance simulations at a very early phase of the development, and allow various system trials and design iterations before finalizing the system architecture.[2]

195
Q

PUF

A

physically unclonable function - (anti-tamper for hardening)

Physically unclonable functions (PUFs) are a technique in hardware security that exploits inherent device variations to produce an unclonable, unique device response to a given input. On a higher level, a PUF can be thought of as analogous to biometrics for humans – they are inherent and unique identifiers for every piece of silicon.

Due to the imperfections of silicon processing techniques, every single IC ever produced physically differs from one another. From IC to IC, these process variations manifest in ways like differing path delays, transistor threshold voltages, voltage gains, and countless others.

Importantly, while these variations may be random from IC to IC, they are deterministic and repeatable once known. A PUF exploits this inherent difference in IC behavior to generate a unique cryptographic key for each IC.
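The deterministic-but-unique behavior described above can be modeled in software. This is only a simulation for illustration: the per-device secret strings stand in for real silicon process variation, and the hash stands in for the physical response function:

```python
import hashlib

def puf_response(device_variation: bytes, challenge: bytes) -> bytes:
    # Simulated PUF: the response depends on the chip's unique physical
    # variation (modeled here as a secret byte string) and the challenge.
    return hashlib.sha256(device_variation + challenge).digest()

chip_a = b"chip-A process variation"   # stand-in for real silicon variation
chip_b = b"chip-B process variation"
challenge = b"challenge-0001"

# Same chip, same challenge -> deterministic, repeatable response
assert puf_response(chip_a, challenge) == puf_response(chip_a, challenge)
# Different chips, same challenge -> unique responses, usable as device keys
assert puf_response(chip_a, challenge) != puf_response(chip_b, challenge)
```

The crucial difference from this model is that in a real PUF the "secret" is never stored anywhere; it exists only as physical structure, which is what makes it unclonable.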

196
Q

GUI

A

Graphical User Interface

197
Q

UEFI

A

unified extensible firmware interface - newer BIOS; trusted firmware in supply chain assessment too

UEFI replaces the traditional BIOS on PCs. There is no way to switch from BIOS to UEFI on an existing PC. You must buy new hardware that supports and includes UEFI, as most new computers do.
Most UEFI implementations provide BIOS emulation via CSM (Compatibility Support Module) support, so you can choose to install and boot older operating systems that expect a BIOS instead of UEFI, making them backward compatible.
On the software side, UEFI support was introduced to Windows with Windows Vista Service Pack 1 and Windows 7. The vast majority of computers you can buy today now use UEFI rather than a traditional BIOS.

198
Q

eFuse

A

electronic fuse - UEFI and TPM

Trusted firmware terminology - eFUSE

A means for software or firmware to permanently alter the state of a transistor on a computer chip
Electronic fuse; uses one-time programming to seal cryptographic keys and other security information. If someone tampers with it, the fuse blows, making that product's firmware no longer valid or trusted.
199
Q

SME

A

Secure Memory Encryption

Five ways to do secure processing - Processor Security Extensions

Low-level CPU changes and instructions that enable secure processing

Built into microprocessor

Called different things if using AMD or Intel

AMD: Secure Memory Encryption (SME) or Secure Encrypted Virtualization (SEV)

Intel: Trusted Execution Technology (TXT) or Software Guard Extensions (SGX)

All four are forms of processor security extensions

200
Q

SEV

A

Secure Encrypted Virtualization - Processor Security Extensions for AMD processors

Low-level CPU changes and instructions that enable secure processing
Built into microprocessor
Called different things if using AMD or Intel
AMD: Secure Memory Encryption (SME) or Secure Encrypted Virtualization (SEV)
Intel: Trusted Execution Technology (TXT) or Software Guard Extensions (SGX)
All four are forms of processor security extensions
201
Q

TXT

A

Trusted Execution Technology - Processor Security Extensions for Intel processors

Called different things if using AMD or Intel
AMD: Secure Memory Encryption (SME) or Secure Encrypted Virtualization (SEV)
Intel: Trusted Execution Technology (TXT) or Software Guard Extensions (SGX)
All four are forms of processor security extensions

202
Q

SGX

A

Software Guard Extensions - Processor Security Extensions for Intel processors

Five ways to do secure processing - Processor Security Extensions

Low-level CPU changes and instructions that enable secure processing
Built into microprocessor
Called different things if using AMD or Intel
AMD: Secure Memory Encryption (SME) or Secure Encrypted Virtualization (SEV)
Intel: Trusted Execution Technology (TXT) or Software Guard Extensions (SGX)
All four are forms of processor security extensions
203
Q

VMM

A

virtual machine monitor = synonym of hypervisor

What Does Virtual Machine Monitor (VMM) Mean?

A Virtual Machine Monitor (VMM) is a software program that enables the creation, management and governance of virtual machines (VM) and manages the operation of a virtualized environment on top of a physical host machine.

VMM is also known as Virtual Machine Manager and Hypervisor. However, the provided architectural implementation and services differ by vendor product.

204
Q

VM

A

virtual machine

205
Q

SRK

A

Storage Root Key - TPM - In TPM, we also have persistent memory and inside of that, we have an endorsement key, which is a digital key and a Storage Root Key or an SRK

206
Q

LSO

A

Locally Shared Object - Web Browser Security

207
Q

NOP

A

no-operation (NOP) instruction - in a “Smash the Stack” attack (buffer overflow)

208
Q

ASLR

A

Address space layout randomization - buffer overflow countermeasure / mitigation technique

209
Q

DOM

A

Document object model

210
Q

SQL

A

structured query language

211
Q

LDAP

A

Lightweight Directory Access Protocol

212
Q

XML

A

extensible markup language

213
Q

SDLC

A

Software Development Life Cycle

214
Q

(PaSdITIDM)

A

Planning and Analysis, Software/System Design, Implementation, Testing, Integration, Deployment, and Maintenance - 7 phases in SDLC to know

215
Q

DevOps

A

DEVelopment OPerationS

216
Q

SDK

A

Software Development Kit

217
Q

SEH

A

Structured Exception Handling - error handling (runtime error)

218
Q

RAT

A

remote access Trojan - Backdoor

219
Q

RCE

A

Remote Code Execution

220
Q

CVS

A

Common Vulnerability Scoring (System) - vulnerability classification/scoring system

221
Q

ASLR

A

Address Space Layout Randomization

222
Q

XSS

A

Cross-Site Scripting

223
Q

DOM

A

Document object model

224
Q

XSRF/CSRF

A

Cross-Site Request Forgery

225
Q

XXE

A

XML External Entity - XML vuln

226
Q

TOCTTOU

A

Time of Check to Time of Use - race condition
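The check/use gap named above can be sketched with a small example. The `Account` class is an illustrative stand-in for any check-then-use resource (a file permission check followed by an open has the same shape):

```python
import threading

class Account:
    """Illustrates the TOCTTOU window: the gap between checking a
    condition (time of check) and acting on it (time of use)."""
    def __init__(self, balance: int):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_unsafe(self, amount: int) -> bool:
        if self.balance >= amount:       # time of check
            # another thread may withdraw here, invalidating the check
            self.balance -= amount       # time of use
            return True
        return False

    def withdraw_safe(self, amount: int) -> bool:
        with self._lock:                 # check and use in one atomic step
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False

acct = Account(100)
assert acct.withdraw_safe(60) is True
assert acct.withdraw_safe(60) is False   # check still holds at time of use
assert acct.balance == 40
```

With `withdraw_unsafe`, two threads that both pass the check before either subtracts can drive the balance negative; holding the lock across check and use closes the race window.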

227
Q

SPOF

A

Single Point of Failure

228
Q

UPS

A

Uninterruptible Power Supply

229
Q

RAID

A

Redundant Array of Independent Disks

230
Q

GFS backup

A

Grandfather-Father-Son

231
Q

DRP

A

Disaster Recovery Plan

232
Q

BCP

A

Business Continuity Plan

233
Q

MTD

A

Maximum Tolerable Downtime

234
Q

RTO

A

Recovery Time Objective

235
Q

WRT

A

Work Recovery Time

236
Q

RPO

A

Recovery Point Objective

237
Q

MTTR

A

Mean Time To Repair

238
Q

MTBF

A

Mean Time Between Failures

239
Q

SDLC

A

Software Development Life Cycle

240
Q

AUSSLF

A

Authority, Urgency, Social proof, Scarcity, Likability, and Fear - motivation factors in social engineering

241
Q

PII

A

Personally Identifiable Information

242
Q

HIPAA

A

Health Insurance Portability and Accountability Act

243
Q

SOX

A

Sarbanes-Oxley - Affects publicly-traded U.S. corporations and requires certain accounting methods and financial reporting requirements

244
Q

GLBA

A

Gramm-Leach-Bliley Act - Affects banks, mortgage companies, loan offices, insurance companies, investment companies, and credit card providers

245
Q

FISMA

A

Federal Information Security Management (FISMA) Act of 2002 - Requires each agency to develop, document, and implement an agency-wide information systems security program to protect their data

246
Q

PHI

A

protected health information

247
Q

HIPAA

A

Health Insurance Portability and Accountability Act

248
Q

PCI DSS

A

Payment Card Industry Data Security Standard

249
Q

HAVA

A

Help America Vote Act

250
Q

GDPR

A

General Data Protection Regulation

251
Q

NDA

A

Non-Disclosure Agreement - Vendor Relationships

252
Q

MOU

A

Memorandum of Understanding

253
Q

SLA

A

Service-Level Agreement

254
Q

ISA

A

Interconnection Security Agreement

255
Q

BPA

A

Business Partnership Agreement

256
Q

ISP

A

Internet Service Provider

257
Q

SABSA

A

Sherwood Applied Business Security Architecture - IT Security Framework

258
Q

COBIT

A

Control Objectives for Information and Related Technology - IT Security Framework

259
Q

CIS

A

Center for Internet Security - Key frameworks

260
Q

RMF

A

Risk Management Framework - Key frameworks

261
Q

CSF

A

Cybersecurity Framework - Key frameworks

262
Q

SOC

A

System and Organization Controls - Key frameworks

263
Q

ISMS

A

information security management system

264
Q

ISO

A

International Organization for Standardization

265
Q

PIMS

A

Privacy information management systems

266
Q

OOB

A

out-of-band

267
Q

ARP

A

Address Resolution Protocol

268
Q

CBC

A

Cipher Block Chaining (CBC, encryption with an IV) - SYMMETRIC ENCRYPTION

269
Q

CFB

A

Cipher Feedback (CFB) - SYMMETRIC ENCRYPTION

270
Q

IV

A

Initialization Vector - SYMMETRIC ENCRYPTION

271
Q

CTR

A

Counter mode (CTR) - SYMMETRIC ENCRYPTION

272
Q

ECB

A

Electronic Code Book (ECB) - SYMMETRIC ENCRYPTION

273
Q

OFB

A

Output Feedback (OFB) - SYMMETRIC ENCRYPTION

274
Q

DES

A

Data Encryption Standard - SYMMETRIC

275
Q

3DES

A

Triple DES - SYMMETRIC

276
Q

IDEA

A

International Data Encryption Algorithm - SYMMETRIC

277
Q

RC

A

Rivest Cipher - SYMMETRIC

278
Q

PKI

A

Public Key Infrastructure

279
Q

RSA

A

Rivest, Shamir, and Adleman - Asymmetric

280
Q

ECC

A

Elliptic Curve Cryptography - Asymmetric

281
Q

ECDH

A

Elliptic Curve Diffie-Hellman - Asymmetric

282
Q

ECDHE

A

Elliptic Curve Diffie-Hellman Ephemeral - Asymmetric

283
Q

ECDSA

A

Elliptic Curve Digital Signature Algorithm - Asymmetric

284
Q

PGP

A

Pretty Good Privacy

285
Q

GPG

A

GNU Privacy Guard

286
Q

PRNG

A

Pseudo-Random Number Generator

287
Q

MD5

A

Message Digest 5 - Hashing

288
Q

SHA-1, 2, or 3

A

Secure Hash Algorithm - Hashing

289
Q

RIPEMD

A

RACE Integrity Primitive Evaluation Message Digest - Hashing

290
Q

HMAC

A

Hash-based Message Authentication Code - Hashing
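As a quick illustration with Python's standard library (the key and message here are made up):

```python
import hashlib
import hmac

key = b"shared-secret-key"
message = b"amount=100&to=alice"

# Sender computes the HMAC tag over the message with the shared key
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, received_tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, resisting timing attacks
    return hmac.compare_digest(expected, received_tag)

assert verify(key, message, tag)                        # authentic message
assert not verify(key, b"amount=999&to=mallory", tag)   # tampering detected
```

Unlike a plain hash, the tag cannot be recomputed by an attacker who alters the message, because it requires the shared secret key.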

291
Q

Ipfix

A

Internet Protocol Flow Information Export

292
Q

MSF

A

Metasploit Framework - Exploitation

293
Q

BeEF

A

Browser Exploitation Framework - Exploitation

294
Q

PKI

A

Public Key Infrastructure - PKI

295
Q

CA

A

Certificate authority - PKI

296
Q

SAN

A

Subject Alternative Name - PKI

297
Q

BER

A

Basic Encoding Rules - PKI, X.690 encoding standard of the chain of trust

298
Q

CER

A

Canonical Encoding Rules - PKI, X.690 encoding standard of the chain of trust

299
Q

DER

A

Distinguished Encoding Rules - PKI, X.690 encoding standard of the chain of trust

300
Q

PKCS#12

A

Public Key Cryptographic System #12 - Possible encoding types / file types with digital certificates

301
Q

PKCS#7

A

Public Key Cryptographic Systems #7

302
Q

CRL

A

Certificate Revocation List - Certificate authorities (CA)

303
Q

OCSP

A

Online Certificate Status Protocol - Certificate authorities (CA)

304
Q

HPKP

A

HTTP Public Key Pinning = Public Key Pinning - Certificate authorities (CA)

305
Q

WOT

A

Web of Trust

306
Q

PSK

A

Pre-Shared Key - refers to key stretching

307
Q

UTM

A

unified threat management

308
Q

VDI

A

virtual desktop infrastructure

309
Q

LM

A

log management (solution) - (Syslog…)

310
Q

SNMP

A

Simple Network Management Protocol - Monitoring and auditing - is commonly used to gather information from routers, switches, and other network devices. It provides information about a device’s status, including CPU and memory utilization, as well as many other useful details about the device.

311
Q

NMS

A

Network Management System - SNMP monitoring/auditing software on a monitoring station

312
Q

WORM

A

Write Once Read Many - Method of storage (technology like a DVD-R, etc.)

313
Q

SIEM

A

Security information and event management systems

314
Q

OSSIM

A

Open-Source Security Information Management

315
Q

LM

A

log management (like Syslog server…)

316
Q

SOAR

A

Security Orchestration, Automation, and Response - automates and orchestrates incident response workflows

317
Q

NetFlow

A

NETwork FLOW

318
Q

MIB

A

management information base - SNMP monitoring/auditing software on a monitoring station - a database used for managing the entities in a communication network (SNMP)

319
Q

NMS

A

network management systems - SNMP monitoring/auditing - The SNMP manager does most of the talking to the printer (the managed device with an agent). It runs an interface/tool, like SNMP software, called an NMS (Network Management Station). They work on UDP port 162 and TLS port 10162 (those are listening ports).

320
Q

DTP

A

dynamic trunking protocol - (VLAN switch spoofing)

321
Q

CAM

A

content addressable memory - the switch's MAC address table

322
Q

ICMP

A

Internet Control Message Protocol - ECHO in a ping flood…

323
Q

STP

A

Shielded Twisted Pair - CABLE TYPE

324
Q

STP

A

Spanning Tree Protocol - prevents broadcast loops in a network between switches - A very common way to create problems on a network is to build a loop: connect two switches to each other, then connect them to each other again, and watch the packets start circling between them as fast as they can go. And as they go by, more traffic gets on to the network, and more traffic starts looping. And eventually you completely overwhelm your infrastructure devices just because of all the packets that are looping back and forth. And the only way to resolve it is to break the loop, wherever it happens to be.
Fortunately, we built mechanisms and protocols within things like our switches and our bridges to prevent these things from happening. These MAC layer protocols themselves have no way to know if they're in the middle of a loop, so what we've done is put the intelligence on the switch or on the bridge. And we use a standard called IEEE 802.1D. This is something called spanning tree that prevents loops. It is very much a standard that everyone uses.
There’s three types of ports in a spanning tree technology. There is a root port, and that’s the port that talks back to the root bridge. One bridge on the network is the root bridge, and it’s usually the one with the smallest Mac address number associated with it, or one that you would designate as the root bridge. Here’s Bridge 1 at the top of my list. It is designated as the root bridge. It does not have a root port because it is the root– it doesn’t need a link to the root. What it does have are designated ports, which are ports that are available to send traffic out over the network.
STP is a great way to prevent loops. And it's also a great way to create redundancy in your network, and if you do happen to have an outage, still maintain the availability of what's happening. From a security perspective this also maintains uptime and prevents those loops from bringing down your network and creating a denial of service situation.

325
Q

PAP

A

Password Authentication Protocol

326
Q

PPP

A

Point-to-Point Protocol

327
Q

CHAP

A

Challenge Handshake Authentication Protocol

328
Q

MS-CHAP

A

MicroSoft Challenge Handshake Authentication Protocol

329
Q

EAP

A

extensible authentication protocol - authentication

330
Q

EAP-PSK

A

extensible authentication protocol pre-shared key - authentication

331
Q

LEAP

A

Lightweight EAP - Proprietary Cisco Authentication - wireless standard

332
Q

FAST

A

flexible authentication via secure tunneling - authentication

333
Q

EAP-FAST

A

EAP flexible authentication via secure tunneling - authentication, Cisco proprietary

334
Q

PEAP

A

Protected EAP (PEAP)

335
Q

TKIP

A

Temporal Key Integrity Protocol - adds a 48-bit Initialization Vector to WEP's design; paired with RC4 in WPA; now abandoned/deprecated

336
Q

MIC

A

Message Integrity Check - WPA (TKIP)

337
Q

CCMP

A

Counter-Mode/CBC-MAC protocol (Counter Mode with Cipher Block Chaining Message Authentication Code) - WPA2 and 3 - CCMP is an encryption protocol that handles keys and message integrity. It is considered a more secure alternative to TKIP (used in WPA) and is based on AES.

338
Q

SSID

A

service set identifier - the name of a wireless network under the IEEE 802.11 standard. This name is a string of 0 to 32 bytes. In infrastructure mode, it identifies the wireless access point.

339
Q

HSTS

A

HTTP Strict Transport Security

340
Q

WIDS

A

Wireless intrusion detection systems - IDS for wireless networks

341
Q

WAP

A

Wireless Access Point

342
Q

AP

A

Access Point

343
Q

VMM

A

Virtual Machine Monitor = hypervisor in virtualization

344
Q

CASB

A

Cloud Access Security Brokers (securing VMs)

345
Q

IaaS

A

Infrastructure as a service

346
Q

PaaS

A

Platform as a Service

347
Q

VDE

A

Virtual desktop environment

348
Q

VDI

A

Virtual desktop infrastructure

349
Q

HVAC

A

heating, ventilation, and air conditioning system

350
Q

MDM

A

Mobile Device Management

351
Q

MAM

A

Mobile application management

352
Q

COPE

A

Corporate-Owned, Personally Enabled

353
Q

CYOD

A

Choose Your Own Device

354
Q

BYOD

A

Bring your own device

355
Q

COBO

A

Corporate Owned, business only

356
Q

OTA

A

over the air - Firmware OTA updates

357
Q

OTG

A

On-The-Go - USB OTG

358
Q

LSO

A

Locally Shared Object - also known as Flash cookies; stored in your Windows user profile under the Flash folder inside your AppData folder

359
Q

UAC

A

User Account Control - A key tool from rogue applications - Prevents unauthorized access and avoid user error in the form of accidental changes (because it’s running everything as a standard user, as opposed to running it as an administrator)

360
Q

NIC

A

network interface card - OSI layer 2

361
Q

PDU

A

protocol data units - a generic term to describe a LAYER's information; each TCP/IP layer has a PDU associated with it:

Application Layer PDU: DATA (clear text, encrypted, compressed)
Transport Layer PDU: SEGMENT for TCP; DATAGRAM for UDP (user datagram protocol) - these have a header + [trailer]
Internet Layer PDU: PACKET (e.g. IP packet)
Network Access Layer PDU: more complicated; needs the OSI model to explain - FRAMES for the Data Link Layer, BITS for the Physical Layer

[Application Layer -> Data
Transport Layer -> Segments or Datagrams
Internet Layer -> Packets
Network Access Layer -> Frames and Bits]
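The layering above can be sketched as encapsulation: each layer wraps the upper layer's PDU in its own header. The header strings below are placeholders for illustration, not real wire formats:

```python
# Each function models one TCP/IP layer producing its PDU
# by wrapping the PDU handed down from the layer above.

def application_layer(data: bytes) -> bytes:
    return data                                # PDU: data

def transport_layer(data: bytes) -> bytes:
    return b"[TCP-header]" + data              # PDU: segment

def internet_layer(segment: bytes) -> bytes:
    return b"[IP-header]" + segment            # PDU: packet

def network_access_layer(packet: bytes) -> bytes:
    # frames carry both a header and a trailer
    return b"[Eth-header]" + packet + b"[Eth-trailer]"  # PDU: frame

frame = network_access_layer(
    internet_layer(
        transport_layer(
            application_layer(b"GET /"))))

assert frame == b"[Eth-header][IP-header][TCP-header]GET /[Eth-trailer]"
```

Receiving reverses the process: each layer strips its own header (and trailer) and passes the remaining PDU up the stack.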

362
Q

ECDH

A

Elliptic Curve Diffie-Hellman - key exchange, for instance in TLS protocol….

363
Q

ECDHE

A

Elliptic Curve Diffie-Hellman Ephemeral - key exchange, for instance in TLS protocol….

364
Q

ECDSA

A

Elliptic Curve Digital Signature Algorithm - key exchange, for instance in TLS protocol….

365
Q

ECC

A

Elliptic Curve Cryptography -used by the US Government in their digital signatures - key exchange, for instance in TLS protocol….

366
Q

MBSA

A

Microsoft Baseline Security Analyzer

367
Q

MAC

A

Media Access Control/Mandatory Access Control

368
Q

XXE

A

XML External Entity - An attack that embeds a request for a local resource

369
Q

XSRF/CSRF

A

cross-site request forgery

370
Q

XSS

A

Cross-Site Scripting

371
Q

ASLR

A

Address Space Layout Randomization - Method used by programmers to randomly arrange the different address spaces used by a program or process to prevent buffer overflow exploits

372
Q

LDAP

A

lightweight directory access protocol (based on the X.500 protocol) - Port 389, and port 636 for LDAPS (LDAP Secure using SSL). Application layer protocol for accessing and modifying directory services data (AD uses it). AD is Microsoft's version of LDAP.

373
Q

MSF

A

Metasploit framework

374
Q

GPOs

A

Group Policy Objects

375
Q

UAC

A

User Account Control

376
Q

AAA

A

authentication, authorization and accounting

377
Q

PAP

A

Password Authentication Protocol - not encrypted = in the clear

378
Q

KDC

A

key distribution center (Kerberos) - UDP and TCP port 88

379
Q

TGT

A

ticket granting ticket - issued by one of the two services in Kerberos' KDC (the Authentication Service and the Ticket-Granting Service)

380
Q

SAML

A

Security Assertion Markup Language - authentication method

381
Q

RFI

A

radio frequency interference

382
Q

EMI

A

electromagnetic interference

383
Q

ESD

A

electrostatic discharge

384
Q

HSM

A

Hardware Security Module

385
Q

SED

A

Self-Encrypting Drive

386
Q

TPM

A

trusted platform module

387
Q

FDE

A

Full disk encryption

388
Q

TOS

A

Trusted Operating System

389
Q

FIC

A

File integrity check

390
Q

ICS

A

Industrial Control Systems

391
Q

ABAC

A

Attribute-based access control - 1 of the 5 access control methods (MAC, DAC, RBAC…)

392
Q

lanman

A

Local Area Network Manager - Weak password encryption. Authentication to Windows only.

393
Q

KDC

A

Key distribution Center - Kerberos is used only in Windows Domain Controller domains. The DC in Kerberos is known as the KDC.

394
Q

IdP

A

Identity Provider - SAML (SSO tool for multiple websites, ICS, SCADA… in Windows, SSO is through AD)

395
Q

EAP

A

extensible authentication protocol - the authentication protocol used in wireless networks and Point-to-Point connections

396
Q

DSA

A

Digital Signature Algorithm

Digital Signature Algorithm (DSA) is one of the Federal Information Processing Standards for making digital signatures. It depends on the mathematical concepts of modular exponentiation and the discrete logarithm problem to produce the digital signature.

Digital signatures are the public-key primitives of message authentication in cryptography. In the physical world, it is common to use handwritten signatures on handwritten or typed messages; they are used to bind the signatory to the message.

A digital signature, therefore, is a technique that binds a person or entity to digital data. This binding can be independently verified by the receiver as well as any third party.

A digital signature is a cryptographic value that is calculated from the data and a secret key known only to the signer.

In the real world, the receiver of a message needs assurance that the message belongs to the sender, and the sender should not be able to repudiate the origination of that message. This requirement is crucial in business applications, since the likelihood of a dispute over exchanged data is high.

Block Diagram of Digital Signature
The digital signature scheme is built on public-key cryptography.
Explanation of the block diagram
Each person adopting the scheme has a public-private key pair. The key pair used for signing and verifying should be different from the pair used for encryption and decryption. The private key used for signing is called the signature key, and the corresponding public key is the verification key.
The signer feeds the data into a hash function, which generates a hash of the message.
The hash value and the signature key are then fed into the signature algorithm, which produces the digital signature for that hash. The signature is appended to the data, and both are sent to the verifier.
The verifier feeds the digital signature and the verification key into the verification algorithm, which produces some value as output.
The verifier also runs the same hash function on the received data to generate its own hash value.
For verification, this hash value is compared with the output of the verification algorithm. Based on the comparison result, the verifier decides whether the digital signature is valid.
Since the signature is generated with the signer's private key, which no one else possesses, the signer cannot repudiate having signed the data in the future.
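The sign/verify flow above can be sketched end to end with a toy implementation. This is a minimal, deliberately insecure Python sketch: it uses a 61-bit q, while FIPS 186 mandates a 2048-bit (or larger) p with a 224- or 256-bit q, and a cryptographic RNG for keys and nonces.

```python
import hashlib
import random

def is_prime(n: int) -> bool:
    """Miller-Rabin with fixed bases; deterministic for n < 3.3e24."""
    if n < 2:
        return False
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % a == 0:
            return n == a
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# Domain parameters: primes p and q with q | p - 1, and g of order q mod p.
q = 2**61 - 1                       # a known (Mersenne) prime; toy-sized
m = 2
while not is_prime(q * m + 1):      # search for a prime p = q*m + 1
    m += 2
p = q * m + 1
g = pow(2, m, p)                    # g = h^((p-1)/q) mod p has order q
hbase = 2
while g == 1:                       # retry with another base if degenerate
    hbase += 1
    g = pow(hbase, m, p)

def H(msg: bytes) -> int:
    """Hash the message and reduce it into Z_q."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)      # private signature key
    return x, pow(g, x, p)          # (signature key, verification key)

def sign(msg: bytes, x: int):
    while True:
        k = random.randrange(1, q)  # secret, fresh per-message nonce
        r = pow(g, k, p) % q
        s = pow(k, -1, q) * (H(msg) + x * r) % q
        if r and s:
            return r, s

def verify(msg: bytes, sig, y: int) -> bool:
    r, s = sig
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1, u2 = H(msg) * w % q, r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r                   # valid iff v reproduces r

x, y = keygen()
sig = sign(b"pay Alice 100", x)
print(verify(b"pay Alice 100", sig, y))  # True
print(verify(b"pay Alice 900", sig, y))  # False: the data was modified
```

Note how the scheme mirrors the block diagram: the signer hashes the data and combines the hash with the signature key; the verifier re-hashes the received data and compares it against what the verification algorithm reconstructs from the signature and the verification key.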
Importance of Digital Signature
Of all cryptographic primitives, the digital signature based on public-key cryptography is considered one of the most important and useful tools for achieving information security.

Beyond non-repudiation, a digital signature also provides message authentication and data integrity.

These properties are achieved as follows:

Message authentication: When the verifier validates the digital signature using the sender's public key, they are assured that the signature was created only by the sender who possesses the corresponding private key, and by no one else.
Data integrity: If an attacker accesses the data and modifies it, signature verification at the receiver's end fails: the hash of the modified data no longer matches the output of the verification algorithm. The receiver can then safely reject the message, assuming data integrity has been breached.
Non-repudiation: Since only the signer knows the signature key, only the signer can create a valid signature on given data. If a dispute arises in the future, the receiver can present the data and the digital signature to a third party as evidence.
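The data-integrity property follows directly from the hash function: any change to the data changes its digest, so the signed hash no longer matches what the verifier computes. A quick illustration (the messages are hypothetical):

```python
import hashlib

original = b"transfer $100 to Alice"
tampered = b"transfer $900 to Alice"   # one byte changed by an attacker

# The signer signed SHA-256(original). A verifier re-hashing the tampered
# data gets a different digest, so signature verification must fail.
print(hashlib.sha256(original).hexdigest() ==
      hashlib.sha256(tampered).hexdigest())   # False
```

This is why the hash function must be collision-resistant: if an attacker could find two messages with the same digest, one signature would validate both.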