CompTIA Security+ Acronyms Revisions - Diamond Flashcards
ICMP
Many network devices communicate with each other.
But how do you notify of a connection problem, a data transmission issue, or the need to use a particular path to reach a network device?
This is where the ICMP protocol (Internet Control Message Protocol) comes in, thanks to its ability to send messages.
Along with TCP and UDP, ICMP is one of the fundamental protocols that allow a network to function.
What is the ICMP protocol?
ICMP is a network-level protocol (layer 3 of the OSI model).
ICMP stands for Internet Control Message Protocol and, as its name suggests, is a message-oriented protocol.
Devices on a network use ICMP messages to communicate data transmission problems.
It is therefore used to report network connectivity problems back to the source of the compromised transmission. It sends control messages such as “destination network unreachable”, “source route failed” and “source quench”.
For example:
Reporting network errors: a host or an entire portion of the network is unreachable due to some failure. A TCP or UDP packet directed to a port number with no receiver attached is also reported via ICMP.
Reporting network congestion: when a router starts receiving too many packets, because it cannot forward them as fast as they arrive, it generates ICMP Source Quench messages. Directed at the sender, these messages should cause the rate of packet transmission to slow down.
One of the main ways ICMP is used is therefore to determine whether data is arriving at its destination, and on time. ICMP is thus an important part of error reporting and of the tests that determine whether a network is delivering data properly.
ICMP is also used by tools such as ping and traceroute.
Ping relies on ICMP Echo Request (type 8) and Echo Reply (type 0) messages to check whether a host is online by its response. This also measures the response speed: the latency. Traceroute relies on the TTL: when it reaches 0, an ICMP message is sent back. Traceroute analyzes these returns to build the path, or map, of the connection.
What are the ICMP error messages?
List of error codes and error messages
Type Code Description
3 0-15 Destination Unreachable - notification of a packet that could not be delivered. The packet is dropped. The code field provides an explanation.
5 0-3 Redirect - advertises an alternative route for the datagram and should trigger a routing table update. The code field explains the reason for the route change.
11 0,1 Time Exceeded - sent when the TTL field has reached zero (code 0) or when fragment reassembly times out (code 1).
12 0-2 Parameter Problem - sent when a pointer indicates an error in the IP header (code 0), a required option is missing (code 1), or the length is bad (code 2).
ICMP error codes and messages
The Destination Unreachable type
This type reports network errors when a network device cannot communicate with another.
Here are the error codes and messages for the Destination Unreachable type (type 3).
Error code ICMP message
0 Destination network unreachable
1 Destination host unreachable
2 Destination protocol unreachable
3 Destination port unreachable
4 Fragmentation required, and DF flag set
5 Source route failed
6 Destination network unknown
7 Destination host unknown
8 Source host isolated
9 Network administratively prohibited
10 Host administratively prohibited
11 Network unreachable for ToS
12 Host unreachable for ToS
13 Communication administratively prohibited
14 Host Precedence Violation
15 Precedence cutoff in effect
ICMP messages for the Destination Unreachable type
Error message Description
Destination Unreachable: this message is generated when a data packet cannot reach its final destination for some other reason. For example, there may be hardware failures, port failures, network disconnections, etc.
Redirection Error: this message is generated when the source computer (such as the PDC) requests that the flow of data packets be sent along a route other than the one originally planned. This is often done to optimize network traffic, in particular when there is another way for the data packets to reach their destination in less time. It involves updating the routing tables in the affected routers.
Source Quench: this message asks the source computer to reduce or slow the flow of network traffic sent to the destination computer. In other words, the sender's rate of data packet transmission is too high and must slow down to ensure that the destination computer receives all the data packets it is supposed to receive.
Time Exceeded: this is the same event as the network-based Time to Live (TTL).
Descriptions of the ICMP Destination Unreachable type
The Redirect Message type
This type is used by routers to advertise an alternative route for the datagram.
It is designed to inform a host of a more optimal route through a network, which may trigger a routing table update.
Error code ICMP message
0 Redirect Datagram for the Network
1 Redirect Datagram for the Host
2 Redirect Datagram for the ToS & network
3 Redirect Datagram for the ToS & host
ICMP messages for the Redirect type
The Time Exceeded type
The ICMP Time Exceeded message is generated when the gateway processing the datagram (or packet, depending on how you look at it) finds that the Time To Live field (located in the IP header of every packet) is equal to zero, so the datagram must be discarded. That same gateway may also notify the source host via the Time Exceeded message.
The term “fragment” means “cut into pieces”. When data is too large to fit in a single packet, it is cut into smaller pieces and sent to the destination. At the other end, the destination host receives the fragmented pieces and reassembles them to recreate the original large block of data that was fragmented at the source.
Error code ICMP message
0 TTL expired in transit
1 Fragment reassembly time exceeded
ICMP messages for the Time Exceeded type
The Parameter Problem type
ICMP Parameter Problem messages indicate that a host or router could not interpret an invalid parameter in an IPv4 datagram header.
When a host or router on the network finds a bad parameter in an IPv4 datagram header, it drops the packet and sends an ICMP Parameter Problem message to the original sender.
The ICMP Parameter Problem message also includes an optional pointer to tell the sender where in the original IPv4 header the error occurred.
Error code ICMP message
0 Pointer indicates the error
1 Missing a required option
2 Bad length
ICMP messages for the Parameter Problem type
What is the structure of an ICMP packet (datagram)?
ICMP uses a data packet structure with an 8-byte header and a variable-size data section.
The structure of an ICMP packet (datagram)
Here is a description of the ICMP header fields:
Type: the ICMP message type
Code: the subtype of the ICMP message
Checksum: similar to the IP header checksum; it is computed over the entire ICMP message
Rest of header: additional data, which may be zero when unused
ICMP error messages contain a data section that includes a copy of the entire IPv4 header, plus at least the first eight bytes of data from the IPv4 packet that triggered the error message.
The length of ICMP error messages must not exceed 576 bytes.
This data is used by the host to match the message to the appropriate process. If a higher-level protocol uses port numbers, they are assumed to be in the first eight bytes of the original datagram's data.[6]
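The header layout above can be sketched with Python's standard struct module. This is a minimal illustration, not a full ICMP implementation: the build_echo_request helper and the identifier/sequence values are invented for the example, while the type 8 / code 0 Echo Request layout and the 16-bit one's-complement checksum follow the ICMP specification.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    # Internet checksum: one's-complement sum of 16-bit big-endian words
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f">{len(data) // 2}H", data))
    while total >> 16:                     # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # Type 8 (Echo Request), code 0; checksum field starts at zero
    header = struct.pack(">BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack(">BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(0x1234, 1, b"ping")
# Recomputing the checksum over a valid packet yields 0
assert icmp_checksum(pkt) == 0
```

Sending such a packet would additionally require a raw socket, which needs elevated privileges on most systems.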
Attacks using ICMP
The ICMP protocol can be abused to carry out attacks.
Here are a few examples.
The Redirect type can be used maliciously in attacks that aim to reroute traffic to a specific system. In this type of attack, the attacker, posing as a router, sends an ICMP (Internet Control Message Protocol) Redirect message to a host, stating that all future traffic should be directed to a specific system as the more optimal route to the destination.
An IDS can be used to raise an alert when these ICMP Redirect messages occur, or the host can be configured to ignore them.
Then there are DoS and ICMP flood attacks:
Ping of Death: this attack exploits the variable size of the ICMP packet's data section.
In the Ping of Death, oversized or fragmented ICMP packets are used for denial-of-service attacks. ICMP data can also be used to create covert communication channels. These channels are known as ICMP tunnels.
Smurf attack: the attacker transmits an ICMP packet with a spoofed or forged IP address. When the network equipment replies, each reply is sent to the spoofed IP address, and the target is flooded with a ton of ICMP packets. This type of attack is generally only a problem for legacy equipment.
Twinge attack: this attack is similar to the Ping Flood attack, but the ICMP echo requests come from multiple computers rather than a single one. They also carry a fake source IP address in the packet header.
Related reading:
Should you block ICMP: pros and cons / Iptables: blocking ping (ICMP)
SLE
Single Loss Expectancy = AV x EF
AV
asset value
EF
exposure factor
NDP
Neighbor Discovery Protocol
Neighbor Discovery Protocol is a protocol used by IPv6. It operates at layer 3 and is responsible for discovering other hosts on the same link, determining their addresses, and identifying the routers present.
ARO
Annualized Rate of Occurrence
ALE
Annualized Loss Expectancy = SLE x ARO
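The two flashcard formulas can be combined in a short worked example (the asset value, exposure factor, and rate of occurrence are invented figures):

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    # SLE = AV x EF
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    # ALE = SLE x ARO
    return sle * aro

sle = single_loss_expectancy(200_000, 0.25)  # $200k asset, 25% lost per incident
ale = annualized_loss_expectancy(sle, 2)     # incident expected twice a year
print(sle, ale)  # 50000.0 100000.0
```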
qualitative risk analysis
guessing, subjective experience based
quantitative risk analysis
numbers and costs-based, objective
FISMA
Federal Information Security Management Act - a framework law intended to protect the US government against cybercrime and natural disasters that pose a risk to sensitive data.
PCI-DSS
Payment Card Industry Data Security Standard
(PTA)
physical (fences, door locks, alarm systems, security guards), technical (safeguards, countermeasures), and administrative (changing the behavior of people) - 1st control type (security controls)
NIST
National Institute of Standards and Technology
DSA
Digital Signature Algorithm
The Digital Signature Algorithm (DSA) is one of the Federal Information Processing Standards for creating digital signatures. It relies on the mathematical concepts of modular exponentiation and the discrete logarithm problem to compute the signature cryptographically.
Digital signatures are the public-key primitives of message authentication in cryptography. In the physical world, it is common to use handwritten signatures on handwritten or typed messages; they are used to bind the signatory to the message.
A digital signature is therefore a technique that binds a person or entity to digital data. This binding can be independently verified by the receiver as well as by any third party.
A digital signature is a cryptographic value calculated from the data and a secret key known only to the signer.
In the real world, the receiver of a message needs assurance that the message belongs to the sender and that its origin cannot be forged or misused. This requirement is crucial in business applications, since the likelihood of a dispute over exchanged data is high.
Block Diagram of Digital Signature
The digital signature scheme is based on public-key cryptography.
Explanation of the block diagram
First, each person adopting this scheme has a public-private key pair.
The key pairs used for encryption/decryption and for signing/verifying are different. The private key used for signing is referred to as the signature key and the public key as the verification key.
The signer feeds the data to a hash function and generates a hash of the message.
The hash value and the signature key are then fed to the signature algorithm, which produces the digital signature for that hash. The signature is appended to the data, and both are sent to the verifier.
The verifier feeds the digital signature and the verification key into the verification algorithm, which produces some value as output.
The verifier also runs the same hash function on the received data to generate a hash value.
For verification, this hash value is compared with the output of the verification algorithm. Based on the comparison result, the verifier decides whether the digital signature is valid or invalid.
Since the digital signature is generated with the signer's private key, which no one else can possess, the signer cannot repudiate signing the data in the future.
Importance of Digital Signature
Cryptographic analysis considers the digital signature, built on public-key cryptography, a very important and useful tool for achieving information security.
Apart from providing non-repudiation of the message, the digital signature also provides message authentication and data integrity.
The digital signature achieves these as follows:
Message authentication: when the verifier validates the digital signature using the sender's public key, he is assured that the signature was created only by the sender, who possesses the corresponding secret private key, and by no one else.
Data integrity: if an attacker gains access to the data and modifies it, the digital signature verification at the receiver's end fails, because the hash of the modified data no longer matches the output of the verification algorithm. The receiver can then safely reject the message, assuming data integrity has been breached.
Non-repudiation: since only the signer knows the signature key, only the signer can create a given signature on the data. The receiver can therefore present the data and the digital signature to a third party as evidence if any dispute arises in the future.
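The sign/verify flow described above can be sketched with a toy "textbook RSA" signature. This is an illustration only: the primes are tiny, there is no padding, and it uses RSA math rather than DSA's discrete-logarithm math, so it is not something to use in practice.

```python
import hashlib

# Toy key generation with tiny primes (real keys are 2048+ bits)
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public (verification) exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private (signature) exponent

def sign(message: bytes) -> int:
    # Hash the message, then apply the private key to the hash
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the hash and compare it with signature^e mod n
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"hello")
print(verify(b"hello", sig))  # True
```

Note that verification needs only the public pair (n, e), which is what lets any third party check the signature.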
Public Key Cryptography
Asymmetric algorithms are also known as Public Key Cryptography
▪ Confidentiality
▪ Integrity
▪ Authentication
▪ Non-repudiation
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a corresponding private key.[1][2] Key pairs are generated with cryptographic algorithms based on mathematical problems termed one-way functions. Security of public-key cryptography depends on keeping the private key secret; the public key can be openly distributed without compromising security.[3]
In a public-key encryption system, anyone with a public key can encrypt a message, yielding a ciphertext, but only those who know the corresponding private key can decrypt the ciphertext to obtain the original message.[4]
For example, a journalist can publish the public key of an encryption key pair on a web site so that sources can send secret messages to the news organization in ciphertext. Only the journalist who knows the corresponding private key can decrypt the ciphertexts to obtain the sources’ messages—an eavesdropper reading email on its way to the journalist cannot decrypt the ciphertexts. However, public-key encryption does not conceal metadata like what computer a source used to send a message, when they sent it, or how long it is. Public-key encryption on its own also does not tell the recipient anything about who sent a message—it just conceals the content of a message in a ciphertext that can only be decrypted with the private key.
In a digital signature system, a sender can use a private key together with a message to create a signature. Anyone with the corresponding public key can verify whether the signature matches the message, but a forger who does not know the private key cannot find any message/signature pair that will pass verification with the public key.[5][6]
For example, a software publisher can create a signature key pair and include the public key in software installed on computers. Later, the publisher can distribute an update to the software signed using the private key, and any computer receiving an update can confirm it is genuine by verifying the signature using the public key. As long as the software publisher keeps the private key secret, even if a forger can distribute malicious updates to computers, they cannot convince the computers that any malicious updates are genuine.
Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols which offer assurance of the confidentiality, authenticity and non-repudiability of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), SSH, S/MIME and PGP. Some public key algorithms provide key distribution and secrecy (e.g., Diffie–Hellman key exchange), some provide digital signatures (e.g., Digital Signature Algorithm), and some provide both (e.g., RSA). Compared to symmetric encryption, asymmetric encryption is rather slower than good symmetric encryption, too slow for many purposes.[7] Today’s cryptosystems (such as TLS, Secure Shell) use both symmetric encryption and asymmetric encryption, often by using asymmetric encryption to securely exchange a secret key which is then used for symmetric encryption.
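The last sentence describes today's hybrid model: asymmetric math to agree on a secret, then fast symmetric encryption with it. A toy Diffie-Hellman exchange shows the key-agreement half; the numbers here are deliberately tiny and illustrative, while real deployments use 2048-bit groups or elliptic curves.

```python
# Public parameters: prime modulus and generator (toy values)
p, g = 23, 5
# Private values chosen by each side (never transmitted)
a, b = 6, 15

A = pow(g, a, p)              # Alice sends g^a mod p
B = pow(g, b, p)              # Bob sends g^b mod p

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob  # both sides now hold the same secret
```

In a real protocol such as TLS, the agreed secret would then be fed to a key derivation function to produce symmetric session keys.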
MOT
management (decision-making and the management of risk), operational (things that are done by people), and technical controls - NIST MOT controls (2nd control type). Management controls are all about how your system's security is going to be managed and overseen: things like policies, procedures, legal compliance, and software development methodologies. Operational controls are focused on things that are done by people: user training, configuration management, testing disaster recovery plans, and conducting incident handling. Technical controls are put into a system to help secure it: things like AAA (authentication, authorization, and accounting), access control, encryption technology, passwords, and the configuration of your security devices. Anything that is technical and performed by the computer can be put into this category.
PDC
preventive, detective, and corrective - 3rd control type - some things can go into multiple categories. Preventive controls are security controls installed before an event happens; they are designed to stop something from occurring. Detective controls are used during an event to find out whether something bad may have happened. Corrective controls are used after an event occurs. A closed-circuit TV system is both a detective control and a physical control; a password policy is a management control but also an administrative control (policies…). A compensating control is used whenever you cannot meet the requirements for a normal control.
IP
Intellectual Property
DLP
data loss prevention systems - to fight IP theft…
SMB
Server Message Block is a service for file sharing on port 445
TTX
Table-top exercises - exercises that use an incident scenario against a framework of controls or a red team.
Tabletop Exercise (TTX): A security incident preparedness activity, taking participants through the process of dealing with a simulated incident scenario and providing hands-on training for participants that can then highlight flaws in incident response planning.
The exercise begins with the Incident Response Plan and gauges team performance against the following questions:
What happens when you encounter a breach?
Who does what, when, how, and why?
What roles will legal, IT, law enforcement, marketing, and company officers play?
Who is spearheading the effort and what authority do they have?
What resources are available when you need them?
Since most companies are unprepared when a cyber attack occurs, every company needs a well-executed Incident Response Plan. You do not want to wait until a cyber attack occurs to figure out what you need to do.
https://www.redlegg.com/solutions/advisory-services/tabletop-exercise-pretty-much-everything-you-need-to-know
pentest
penetration test
OVAL (pentest)
Open Vulnerability and Assessment Language - OVAL is an attempt to create a standard way for vulnerability management software, scanners, and other tools to share their data with each other and with other programs.
(KOCLA)
Knowledge, ownership, characteristic, location, and action - the five basic factors of authentication you can consider when determining whether somebody is who they say they are. Because a username and a password are both something you know (a knowledge factor), using both together is still considered single-factor authentication.
OTP
one-time password - generally implemented using either a time-based or a hash-based mechanism; the time-based approach is actually a variation of the hash-based approach, which is known as the HMAC-based One-Time Password.
TOTP
Time-based One-Time Password - the password is computed from a shared secret and the current time.
HOTP
HMAC-based One-Time Password - a one-time password (OTP) algorithm based on hash-based message authentication codes (HMAC).
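A minimal HOTP sketch per RFC 4226, using only the Python standard library; the function name is ours, but the counter encoding, dynamic truncation, and the Appendix D test secret come from the RFC.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA-1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick a 4-byte window
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # RFC 4226 Appendix D test secret
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

TOTP is the same computation with the counter replaced by int(time.time() // 30), i.e. the number of 30-second steps since the epoch.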
HMAC
hash-based message authentication code
SSO
Single Sign-On
FIDM
Federated IDentity Management
SAML
Security Assertion Markup Language
OpenID
open standard decentralized protocol
RADIUS
Remote Authentication Dial-In User Service - standard ports 1812/1813 or proprietary ports 1645/1646 (UDP). RADIUS is cross-platform and uses port 1812 for its authentication messages and port 1813 for its accounting messages. It provides centralized administration of dial-up, VPN, and wireless authentication, so it can be used with both 802.1X and the Extensible Authentication Protocol (EAP). RADIUS is a client/server protocol that runs at the application layer (layer 7).
TACACS+
Terminal Access Controller Access-Control System Plus - port 49 (TCP) - a proprietary protocol from Cisco that can perform the role of an authenticator in an 802.1X network. It supports all network protocols and uses AAA processes.
EAP
Extensible Authentication Protocol - EAP is not a single protocol by itself, but a framework comprising a series of protocols that allows for numerous different authentication mechanisms, including simple passwords, digital certificates, and public key infrastructure.
TTLS
Tunneled Transport Layer Security
EAP-TTLS
EAP Tunneled Transport Layer Security (EAP-TTLS) is an EAP protocol that extends TLS.
This form of EAP requires a digital certificate on the server, but not on the client; instead, the client uses a password for its authentication. This makes it more secure than the traditional EAP-MD5, which just uses passwords, but less secure than EAP-TLS, which removes the password vulnerability by using digital certificates on both the server and the client.
EAP Tunneled Transport Layer Security (EAP-TTLS)
EAP Tunneled Transport Layer Security (EAP-TTLS) is an EAP protocol that extends TLS. It was co-developed by Funk Software and Certicom and is widely supported across platforms. Microsoft did not incorporate native support for the EAP-TTLS protocol in Windows XP, Vista, or 7; supporting TTLS on these platforms requires third-party Encryption Control Protocol (ECP) certified software. Microsoft Windows added EAP-TTLS support with Windows 8,[19] and support for EAP-TTLS[20] appeared in Windows Phone version 8.1.[21]
The client can, but does not have to, be authenticated via a CA-signed PKI certificate to the server. This greatly simplifies the setup procedure, since a certificate is not needed on every client.
After the server is securely authenticated to the client via its CA certificate and optionally the client to the server, the server can then use the established secure connection (“tunnel”) to authenticate the client. It can use an existing and widely deployed authentication protocol and infrastructure, incorporating legacy password mechanisms and authentication databases, while the secure tunnel provides protection from eavesdropping and man-in-the-middle attack. Note that the user’s name is never transmitted in unencrypted clear text, improving privacy.
EAP-FAST
EAP flexible authentication via secure tunneling
EAP-FAST (Flexible Authentication via Secure Tunneling) was developed by Cisco. Instead of using a certificate to achieve mutual authentication, EAP-FAST authenticates by means of a PAC (Protected Access Credential), which can be managed dynamically by the authentication server. The PAC can be provisioned (distributed one time) to the client either manually or automatically. Manual provisioning is delivery to the client via disk or a secured network distribution method; automatic provisioning is an in-band, over-the-air distribution.
PEAP
Protected EAP
LEAP
Lightweight EAP
LDAP
Lightweight Directory Access Protocol - port 389 (port 636 for LDAPS, LDAP Secure using SSL). An application layer protocol for accessing and modifying directory services data (AD uses it). AD is Microsoft's implementation of an LDAP-based directory service.
LDAPS
Lightweight Directory Access Protocol Secure - port 636 (LDAP Secure using SSL; plain LDAP uses port 389).
AD
Active Directory
DC
domain controller - in Kerberos, the DC acts as the key distribution center (KDC). The KDC has two basic functions: authentication and ticket granting.
KDC
key distribution center (Kerberos) - the KDC has two basic functions: authentication and ticket granting.
TGT
ticket-granting ticket (Kerberos authentication process)
RDP
Remote desktop protocol (port 3389) for remote desktop service.
VNC
Virtual Network Computing for remote desktop service.
GUI
graphical user interface - see VNC
PAP
Password Authentication Protocol (for remote access service)
CHAP
Challenge Handshake Authentication Protocol for remote access service. An authentication scheme used mostly in dial-up connections.
EAP
Extensible Authentication Protocol (for remote access service) - used mostly with dial-up
MS-CHAP
Microsoft Challenge Handshake Authentication Protocol for remote access service.
VPN
Virtual private network
PPTP
Point-to-Point Tunneling Protocol (#2) - VPNs rely on two different protocols when they're being operated.
L2TP
layer two tunneling protocol (#1) VPNs rely on two different protocols when they’re being operated.
RAS
Remote Access Services
MITM
Man-In-The-Middle attack - an attack where the attacker secretly relays and possibly alters the communications between two parties who believe they are directly communicating with each other. One example of a MITM attack is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. The attacker must be able to intercept all relevant messages passing between them.
MITB
Man-In-The-Browser
POS
Point of sale. A payment terminal, also known as a POS terminal or credit card terminal. A payment terminal allows a merchant to capture required credit and debit card information and to transmit this data to the merchant services provider or bank for authorization, and finally to transfer funds to the merchant.
DAC
Discretionary Access Control (file owner) - access control models
MAC
Mandatory Access Control - an access control model. Do not confuse with Message Authentication Code (see HMAC, hash-based message authentication code) or with Media Access Control (MAC address).
Mandatory Access Control
Mandatory access control is a method of limiting access to resources based on the sensitivity of the information that the resource contains and the authorization of the user to access information with that level of sensitivity.
You define the sensitivity of the resource by means of a security label. The security label is composed of a security level and zero or more security categories. The security level indicates a level or hierarchical classification of the information (for example, Restricted, Confidential, or Internal). The security category defines the category or group to which the information belongs (such as Project A or Project B). Users can access only the information in a resource to which their security labels entitle them. If the user’s security label does not have enough authority, the user cannot access the information in the resource.
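A minimal sketch of that label check, assuming an ordering of the example levels (Internal < Confidential < Restricted) and treating categories as sets; all names and values are illustrative.

```python
# Assumed hierarchy of the security levels named in the text
LEVELS = {"Internal": 1, "Confidential": 2, "Restricted": 3}

def can_access(user_level: str, user_cats: set, res_level: str, res_cats: set) -> bool:
    # The user's label must dominate the resource's level AND cover
    # every category attached to the resource
    return LEVELS[user_level] >= LEVELS[res_level] and res_cats <= user_cats

print(can_access("Restricted", {"Project A", "Project B"}, "Confidential", {"Project A"}))  # True
print(can_access("Internal", {"Project A"}, "Confidential", {"Project A"}))                 # False
```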
Hash-based Message Authentication Code (HMAC)
Hash-based Message Authentication Code (HMAC) is a message authentication code that uses a cryptographic key in conjunction with a hash function. HMAC provides the server and the client each with a private key that is known only to that specific server and that specific client.
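For instance, using Python's standard hmac module (the key and message are illustrative values):

```python
import hashlib
import hmac

# Illustrative shared secret and message
key, message = b"shared-secret-key", b"transfer 100 credits"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver, holding the same key, recomputes the tag and compares it
# in constant time to avoid timing side channels
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True

# Any change to the message yields a completely different tag
tampered = hmac.new(key, b"transfer 999 credits", hashlib.sha256).hexdigest()
print(tag == tampered)  # False
```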
Media Access Control
Media access control, medium access control or simply MAC, is a specific network data transfer policy. It determines how data transmits through a regular network cable. The protocol exists to ease data packets’ transfer between two computers and ensure no collision or simultaneous data transit occurs.
What is a MAC address?
Sending data between computers is only possible if both software and hardware are involved. However, for every device to know where to send the data, a third component is required – addresses.
Since both hardware and software are involved, there are two types of addresses here. The software address is the IP address, while the hardware address is the media access control address.
The MAC address ties to the network interface card, or network interface controller (NIC), located inside each computer today. The NIC acts as the transmission medium that turns data into electrical signals, which can then transmit over the web.
It consists of six groups of two hexadecimal characters, which may be separated by colons or hyphens. The limit of twelve hex digits comes from the address itself being 48 bits in length.
A typical MAC address has six groups of two hexadecimal digits, for example 00:05:85:00:34:12 or 00-05-85-00-BC-05. The first three groups here are intentionally the same, as they correspond to the same NIC manufacturer, in this case Juniper.
Every NIC manufacturer has its own unique Organizationally Unique Identifier (OUI), or the first 24-bit part of the MAC address. This addressing scheme helps manufacturers distinguish themselves and their products.
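A small illustrative helper that splits a MAC address into its 24-bit OUI and 24-bit device-specific part; the function name and sample addresses are mine, not from the original.

```python
def parse_mac(mac: str):
    """Split a 48-bit MAC address into its 24-bit OUI and 24-bit device part."""
    digits = mac.replace(":", "").replace("-", "").lower()
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a valid 48-bit MAC address: {mac!r}")
    # Regroup into six octets, then split 3 + 3
    octets = [digits[i:i + 2] for i in range(0, 12, 2)]
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = parse_mac("00-05-85-00-BC-05")
print(oui)     # 00:05:85  (Juniper's OUI)
print(device)  # 00:bc:05
```

Looking up the OUI against the IEEE registry is how tools identify the hardware vendor behind an address.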
MAC addresses are static and never change, unlike dynamic IP addresses. Every data packet sent over the network is sent from one MAC address to another. When the network adapter receives a packet, it compares the packet's destination MAC address to its own; the addresses need to match for the network interface card or network adapter to accept the data.
This part is seamless, but it cannot happen without the help of IP addresses. Why are they important? They are a part of the data transmission process.
In plain terms, devices are identified across the wider internet by IP addresses, while Ethernet itself uses only MAC addresses. IP is a protocol layered above Ethernet: since Ethernet frames can only travel between devices on the same network, wired or wireless, traffic destined for other networks has to go up to the IP layer.
In other words, there is no routing between MAC addresses. Instead, hosts use the Address Resolution Protocol (ARP), whose primary function is to map IP addresses to MAC addresses. It is also a protocol directly above Ethernet, on the same level as IP.
Thanks to ARP, when a device needs the MAC address that corresponds to a given IP address, it broadcasts a packet asking that question, and the device that owns the IP address responds with its MAC address, confirming its identity. Once that exchange is complete, the two devices can finally exchange data packets.
Each MAC address is a unique identifier, which makes it reliable for network administrators who need to identify which devices are sending and which are receiving data.
RBAC
Role-Based Access Control - access control models - Role-based access control (RBAC) is a modification of DAC that provides a set of organizational roles that users may be assigned in order to gain access rights. The system is non-discretionary since the individual users cannot modify the ACL of a resource. Users gain their access rights implicitly based on the groups to which they are assigned as members.
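The "access rights via role membership" idea can be sketched in a few lines; the role names, users, and permissions below are hypothetical examples, not from any real system.

```python
# Minimal RBAC sketch: users gain rights only through their assigned roles,
# never by editing a resource's ACL directly (non-discretionary).
ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "read_tickets"},
    "admin":    {"reset_password", "read_tickets", "modify_acl"},
}
USER_ROLES = {
    "alice": {"helpdesk"},
    "bob":   {"admin"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "modify_acl"))  # False: helpdesk lacks this right
print(is_allowed("bob", "modify_acl"))    # True: granted via the admin role
```

Note that changing what "helpdesk" can do updates every helpdesk user at once, which is the administrative advantage of RBAC over per-user grants.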
ABAC
Attribute-Based Access Control - access control models
MAC (address)
Media Access Control
UAC (Windows)
User Account Control - separation of duties under Windows
ADUC
Active Directory Users and Computers - a program in Windows
Active Directory Users and Computers (ADUC) is built as an add-on for the Microsoft Management Console (MMC), and it’s the go-to tool for IT Pros to manage their Active Directory (AD) environments. You can use ADUC to:
Create AD objects like users, groups, organizational units (OUs), and even printers.
Make changes to existing users, groups, OUs, etc.
Delegate permissions
Move FSMO roles
Raise the domain functional level
Work with advanced features like the LostAndFound container, NTDS Quotas, Program Data, and System information.
chmod
change mode - the Linux command that changes the access permissions (mode) of files and directories.
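A quick usage sketch (file name is arbitrary; `stat -c` assumes GNU coreutils, as on most Linux systems):

```shell
# Create a file and restrict it: owner read/write, group read, others nothing
touch notes.txt
chmod 640 notes.txt         # octal: 6 = rw-, 4 = r--, 0 = ---
stat -c '%a %n' notes.txt   # prints: 640 notes.txt

# Symbolic form: add execute permission for the owner only
chmod u+x notes.txt         # mode is now 740 (rwxr-----)
```

The octal digits map to owner, group, and others in that order, with read=4, write=2, execute=1 summed per digit.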
CCTV
closed-circuit TV
PTZ
Pan-Tilt-Zoom - a CCTV camera that an operator can control with a joystick to look in different directions: tilt it up and down, pan it left and right, or zoom in and out.
FAR
false acceptance rate (biometrics)
FRR
false rejection rate (biometrics)
CER
crossover error rate (=EER)
EER
equal error rate (=CER)
HVAC
heating, ventilation, and air conditioning system
ICS
industrial control systems
SCADA system
supervisory control and data acquisition system - When talking about ICS, you are looking at one plant; when talking about SCADA, you are looking at multiple plants. A SCADA (supervisory control and data acquisition) network is a type of network that works on top of an ICS (industrial control system) and is used to maintain sensors and control systems over large geographic areas.
STP
Shielded Twisted Pair cables
EMP
electromagnetic pulse
CAN
Controller Area Network - vehicular vulnerabilities
CAN
Campus Area Network - network
OBD-II
On-Board Diagnostics II - the diagnostic module that is the primary access point to a vehicle's CAN bus
IoT
Internet of Things - Supervisory control and data acquisition (SCADA) systems, industrial control systems (ICS), internet-connected televisions, thermostats, and many other things are examples of devices classified as the Internet of Things (IoT).
PLC
programmable logic controller
A programmable logic controller (PLC) or programmable controller is an industrial computer that has been ruggedized and adapted for the control of manufacturing processes, such as assembly lines, machines, robotic devices, or any activity that requires high reliability, ease of programming, and process fault diagnosis. Dick Morley is considered the father of the PLC, having invented the first one, the Modicon 084, for General Motors in 1968.
PLCs can range from small modular devices with tens of inputs and outputs (I/O), in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, and which are often networked to other PLC and SCADA systems.[1]
They can be designed for many arrangements of digital and analog I/O, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
PLCs were first developed in the automobile manufacturing industry to provide flexible, rugged and easily programmable controllers to replace hard-wired relay logic systems. Since then, they have been widely adopted as high-reliability automation controllers suitable for harsh environments.
A PLC is an example of a hard real-time system since output results must be produced in response to input conditions within a limited time, otherwise unintended operation will result.
SoC
system on a chip - more performant than PLCs
RTOS
real-time operating system - OS for embedded systems
FPGA
field-programmable gate array - reprogrammable hardware used in embedded systems
A field-programmable gate array (FPGA) is an integrated circuit that can be programmed in the field after it leaves the factory. In principle, FPGA circuits resemble programmable read-only memory (PROM) chips, but their range of applications is far wider. Engineers use them to design specialized integrated circuits that are then wired up and produced in large quantities for sale to computer manufacturers and consumers. Eventually, FPGAs may even let users build microprocessors tailored to their own needs.
What is an FPGA?
Field Programmable Gate Arrays (FPGAs) are semiconductor devices that are based around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. FPGAs can be reprogrammed to desired application or functionality requirements after manufacturing. This feature distinguishes FPGAs from Application Specific Integrated Circuits (ASICs), which are custom manufactured for specific design tasks. Although one-time programmable (OTP) FPGAs are available, the dominant types are SRAM based, which can be reprogrammed as the design evolves.
What is the difference between an ASIC and an FPGA?
ASICs and FPGAs have different value propositions, and they must be carefully evaluated before choosing one over the other. Information abounds that compares the two technologies. While FPGAs used to be selected for lower speed/complexity/volume designs in the past, today's FPGAs easily push the 500 MHz performance barrier. With unprecedented logic density increases and a host of other features, such as embedded processors, DSP blocks, clocking, and high-speed serial at ever lower price points, FPGAs are a compelling proposition for almost any type of design.
FPGA Applications
Due to their programmable nature, FPGAs are an ideal fit for many different markets. As the industry leader, Xilinx provides comprehensive solutions consisting of FPGA devices, advanced software, and configurable, ready-to-use IP cores for markets and applications such as:
Aerospace & Defense - radiation-tolerant FPGAs along with intellectual property for image processing, waveform generation, and partial reconfiguration for SDRs.
ASIC Prototyping - ASIC prototyping with FPGAs enables fast and accurate SoC system modeling and verification of embedded software.
Automotive - automotive silicon and IP solutions for gateway and driver assistance systems, comfort, convenience, and in-vehicle infotainment.
Broadcast & Pro AV - adapt to changing requirements faster and lengthen product life cycles with Broadcast Targeted Design Platforms and solutions for high-end professional broadcast systems.
Consumer Electronics - cost-effective solutions enabling next-generation, full-featured consumer applications such as converged handsets, digital flat panel displays, information appliances, home networking, and residential set-top boxes.
Data Center - designed for high-bandwidth, low-latency servers, networking, and storage applications to bring higher value into cloud deployments.
High Performance Computing and Data Storage - solutions for Network Attached Storage (NAS), Storage Area Network (SAN), servers, and storage appliances.
Industrial - Xilinx FPGAs and targeted design platforms for Industrial, Scientific and Medical (ISM) enable higher degrees of flexibility, faster time-to-market, and lower overall non-recurring engineering (NRE) costs for a wide range of applications such as industrial imaging and surveillance, industrial automation, and medical imaging equipment.
Medical - for diagnostic, monitoring, and therapy applications, the Virtex and Spartan FPGA families can be used to meet a range of processing, display, and I/O interface requirements.
Security - Xilinx offers solutions that meet the evolving needs of security applications, from access control to surveillance and safety systems.
Video & Image Processing - Xilinx FPGAs and targeted design platforms enable higher degrees of flexibility, faster time-to-market, and lower overall NRE costs for a wide range of video and imaging applications.
Wired Communications - end-to-end solutions for reprogrammable networking linecard packet processing, framer/MAC, serial backplanes, and more.
Wireless Communications - RF, baseband, connectivity, transport, and networking solutions for wireless equipment, addressing standards such as WCDMA, HSDPA, WiMAX, and others.
ASIC
application-specific integrated circuit - a specialized integrated circuit (microelectronics).
An application-specific integrated circuit (ASIC /ˈeɪsɪk/) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder or a high-efficiency video codec (e.g. AMD VCE) is an ASIC. Application-specific standard product (ASSP) chips are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series.[1] ASIC chips are typically fabricated using metal-oxide-semiconductor (MOS) technology, as MOS integrated circuit chips.[2]
As feature sizes have shrunk and design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.[1]
Field-programmable gate arrays (FPGA) are the modern-day technology improvement on breadboards, meaning that they are not made to be application-specific as opposed to ASICs. Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and devices with low production volume and ASICs for very large production volumes where NRE costs can be amortized across many devices.[3]
OT (OT & IT)
operational technology
HMI
Human-machine interface
BAS
Building Automation System for premise systems - A building automation system (BAS) for offices and data centers (“smart buildings”) can include physical access control systems (PACS), but also heating, ventilation, and air conditioning (HVAC), fire control, power and lighting, and elevators and escalators.
PACS
Physical Access Control System (PACS)
What is a Physical Access Control System?
Physical access control systems (PACS) are a form of physical security system that allows or restricts entry to a specific area or building. PACS are frequently in place to safeguard businesses and property. For example, from vandalism, theft, and trespassing, and they are particularly effective in locations that require higher levels of security and protection.
Also, physical access control processes, unlike physical obstacles such as retaining walls, fences, or strategic landscaping, regulate who, how, and when a person can get access.
Thus, being able to control physical access is an essential part of any security program.
Different Physical Access Control Systems
Here are examples of different physical access control systems.
1. Property monitoring
This is helpful to keep watch over the security of a certain area. This helps make sure that no one breaks into restricted areas or steals something off of someone’s property.
2. Entry control
Entry control aims to track who enters and exits a building. This can be very useful for recording employee hours, tracking visitors, and seeing who has come in contact with certain data or information.
Perhaps you can equip doors with sensors that can detect if someone has opened the door without authorization. This is what you call an exit detection system. The sensor sends a signal or alarm to the security staff if anyone opens the door without permission.
3. Video surveillance
Video surveillance enables video cameras to monitor entry and exit points around an organization's perimeter, as well as inside buildings and even within sensitive areas like server rooms and workstations. The recorded footage is then analyzed to see how threats enter or exit the facility.
4. Time-and-attendance systems
Time-and-attendance systems are used to manage employees’ access to specific areas during certain times of the day. Employees must swipe their proximity card at the beginning of their shift and swipe it again at the end of their shift. By doing so, you can record their hours worked (it also records when they leave).
5. Geo-fencing
Geo-fencing is a feature that creates virtual boundaries around real-world geographical areas, such as cities or counties, by comparing the GPS location of the device with the geo-fence location. This method is used by businesses that want to grant or deny access based on where a person is located relative to the company's boundaries.
6. Visitor tracking
Visitor tracking is a feature that allows security personnel to indicate whether or not a person is authorized to enter a certain area. A person may be allowed to enter the company campus, but not be allowed to enter certain buildings.
This feature allows security personnel to know when unauthorized people are attempting to enter an area.
Two Types of Access Control
Access control systems also have two types of access control. They are “passive” access control and “active” access control.
Passive access control systems are automatic, meaning that they detect whether or not you have permission to enter the facility without human interaction. A simple example would be if you have a retinal scanner, an infrared beam will check your eyes for permission to enter the room. This system requires no human interaction since it detects your presence automatically, hence the term “passive”.
Active access control systems are manual, meaning that they rely on human interaction in some way. For example, if the person at the front desk looks up your name on their computer and manually allows you to enter the building after verifying you are allowed to do so.
Conclusion
Physical access control systems are a form of physical security system that allows or restricts entry to a specific area or building. Also, it aims to track who enters and exits a building.
More importantly, physical access control processes, unlike physical obstacles such as retaining walls, fences, or strategic landscaping, regulate who, how, and when a person can get access.
Thus, being able to control physical access is an essential part of any security program.
IANA
Internet Assigned Numbers Authority
ICMP
Internet Control Message Protocol - used by ping - IP protocol number 1 in the TCP/IP suite, network layer (Layer 3) of the OSI model
- The ICMP echo request and the ICMP echo reply messages are commonly known as ping messages.
Ping is a troubleshooting tool used by system administrators to manually test for connectivity between network devices, and also to test for network delay and packet loss.
The ping command sends an ICMP echo request to a device on the network, and the device immediately responds with an ICMP echo reply.
Sometimes, a company's network security policy requires ping (ICMP echo reply) to be disabled on all devices to make them more difficult for unauthorized persons to discover.
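To make the echo request concrete, here is a sketch of the bytes ping puts on the wire: an ICMP header (type 8 for echo request) plus a payload, with the RFC 1071 Internet checksum filled in. Actually sending it would require a raw socket and root privileges, so this only builds and verifies the packet; the identifiers and payload are arbitrary examples.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # ICMP header: type 8 (echo request), code 0, checksum placeholder, id, seq
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1)
# A receiver validates by checksumming the whole packet: the result is 0
print(inet_checksum(pkt))  # 0
```

The echo reply looks the same except for type 0, which is how the sender matches replies to its outstanding requests via the id/sequence fields.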
PDOS
Permanent Denial of Service
DDoS attack.
distributed denial of service
DNS
Domain Name Service - used to resolve hostnames to IPs and IPs to hostnames
MFA
Multi-Factor Authentication - But what types of cyberattacks does MFA protect against?
• Phishing
• Spear phishing
• Keyloggers
• Credential stuffing
• Brute force and reverse brute force attacks
• Man-in-the-middle (MITM) attacks, spoofing…
XSS
Cross-site scripting
IPC
Inter Process Communication (IPC)
In Windows, the interprocess communication share is known as the IPC$ ("IPC dollar") share. See also the Fraggle attack.
A process can be of two types:
Independent process
Co-operating process
An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. One might think that independently running processes execute most efficiently, but in practice there are many situations where a co-operative approach increases computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other through both:
Shared memory
Message passing
ARP
address resolution protocol - it’s used to convert an IP address into a MAC address
DHCP
Dynamic Host Configuration Protocol (DHCP) is a network protocol whose role is to automatically configure the IP parameters of a station or machine, in particular by automatically assigning it an IP address and a subnet mask. DHCP can also configure the default gateway address, DNS name servers, and NBNS name servers (known as WINS servers on Microsoft networks).
EMI
Electromagnetic Interference
RFI
Radio Frequency Interference
PDS
Protected Distribution System
Wire line or fiber optic system that includes adequate safeguards and/or countermeasures (e.g., acoustic, electric, electromagnetic, and physical) to permit its use for the transmission of unencrypted information through an area of lesser classification or control.
A PDS is used to protect unencrypted national security information (NSI) that is transmitted on wire line or optical fiber. Because the NSI is unencrypted, the PDS must provide safeguards to deter exploitation. The emphasis is on intrusion detection rather than prevention of penetration.
A PDS is intended primarily for use in low and medium threat locations, and is not recommended for use in high or critical threat locations. It is also NOT PERMITTED in uncontrolled access areas. For those areas, you must use an encryption solution instead.
SSID
Service Set Identifier
WEP
Wired Equivalent Privacy
PSK
Pre-Shared Key - in wifi
IV
Initialization Vector - in WEP and WPA
What is an initialization vector (IV)?
An initialization vector (IV) is an arbitrary number that can be used with a secret key for data encryption to foil cyber attacks. This number, also called a nonce (number used once), is employed only one time in any session to prevent unauthorized decryption of the message by a suspicious or malicious actor.
Initialization Vector (IV) attacks with WEP
Understanding Initialization Vector (IV) attacks is important for the CompTIA Security+ exam, but it can be confusing if you’re not as familiar with cryptography concepts. In this post, we’ll explain what an IV is, how it’s used to encrypt data, what IV attacks are, and how to defend against them.
What are Initialization Vectors (IVs) for anyway?
When it comes to encrypting data, there are many different types of encryption. Some are more effective than others, and some are more complicated than others.
There are even different ways of encrypting blocks of information, and we call those different methods modes of operation.
Some approaches involve using something called an Initialization Vector (aka IV). The IV is combined with the secret key in order to encrypt data that’s about to be transmitted.
CBC mode encryption with initialization vector (IV)
Just before encryption occurs, we add the initialization vector, or IV, and it adds extra randomization to the final ciphertext. Then, on the second block of data, we use the resulting ciphertext as the IV for the next block, and so on.
This is important because it ensures that even if we’re using the exact same plaintext and secret key more than once, the resulting encryption will look different every time. This also makes it much more difficult for an attacker to reverse engineer a network’s encryption, even if they were able to gain access to plaintext information.
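The chaining described above can be illustrated with a deliberately toy construction: XOR stands in for the real block cipher (this is NOT encryption, only a demonstration of the IV's role), and all keys, IVs, and plaintexts are made-up examples.

```python
# Toy illustration of CBC chaining. XOR is a stand-in for a real cipher
# such as AES; do not use anything like this for actual encryption.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(blocks, key, iv):
    ciphertext, prev = [], iv
    for block in blocks:
        # Chain the previous ciphertext (or the IV, for the first block) in
        ct = xor_bytes(xor_bytes(block, prev), key)
        ciphertext.append(ct)
        prev = ct  # this ciphertext seeds the next block
    return ciphertext

blocks = [b"ATTACKAT", b"ATTACKAT"]   # two identical plaintext blocks
key = b"SECRETK1"
out = cbc_encrypt(blocks, key, b"\x01" * 8)
print(out[0] != out[1])  # True: chaining hides the repetition
print(cbc_encrypt(blocks, key, b"\x02" * 8) != out)  # True: new IV, new ciphertext
```

Even in this toy form, the two properties the text describes hold: identical plaintext blocks encrypt differently within a message, and changing the IV changes the entire ciphertext.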
What are IV attacks?
There can be some situations where an IV attack can overcome the protection that we just talked about, and end up allowing an attacker to figure out the secret key being used. More modern wireless protocols like WPA2 and WPA3 prevent this from happening, but WEP was vulnerable to this attack.
Because WEP uses 24-bit IVs, which is quite small, IVs end up being reused with the same key. And because IVs are transmitted with the data in plaintext so that the receiving party can decrypt the communication, an attacker can capture them.
WEP IV attack, step 1
By capturing enough repeated IVs, an attacker can easily crack the WEP secret key: the repetition lets them make sense of the encrypted data and recover the key.
WEP IV attack, step 2
This is one of the many reasons that WEP was deprecated and replaced with much more secure wireless protocols.
Defenses against IV attacks
Defending against IV attacks comes down to using more secure wireless protocols such as WPA2 or WPA3. WEP was deprecated a while ago, and WPA is considered less secure than WPA2, so both should be avoided.
WPA2 and 3 use 48-bit IVs instead of 24-bit IVs, which may not sound like much, but it adds a massive number of new potential IV combinations as compared to WEP, which makes it far less likely to repeat.
That’s not the only reason that WPA2 and 3 are stronger than WEP, but it certainly does help. We’ll review some of the other reasons in a future blog post and in our CompTIA Security+ preparation course.
WPA
WiFi Protected Access
TKIP
Temporal Key Integrity Protocol (TKIP) - WPA
The Temporal Key Integrity Protocol, or TKIP, is a wireless network technology encryption protocol. It was designed and implemented as an emergency, short-term fix for the security vulnerabilities in WEP (Wired Equivalent Privacy). TKIP is the core component of WPA (Wi-Fi Protected Access) and works on legacy WEP hardware.
TKIP was developed and endorsed by the Wi-Fi Alliance and the IEEE 802.11i task group between 2002 and 2004, and it was constrained because it had to work on older WEP hardware. It could only be implemented in software (not firmware), had limited processing power available, and had to reuse WEP's per-packet encryption process based on the RC4 (Rivest Cipher 4) stream cipher.
TKIP includes three main parts: a 64-bit MIC (Message Integrity Check) called Michael, a packet sequencing control, and a per-packet key mixing function. The mixing function uses a pairwise transient key, the sender’s MAC address, and the packet’s 48-bit serial number. It is combined with the IV (initialization vector) or SV (starting variable) and sent to the RC4 cipher.
TKIP is vulnerable to attacks originating in the same network and PSK (pre-shared key) attacks. The vulnerability is due to the session secret not changing and being the same for everyone on that network.
TKIP was officially deprecated in the 802.11 standard in 2012.
MIC
Message Integrity Check - WPA
A message integrity check (MIC), is a security improvement for WEP encryption found on wireless networks. The check helps network administrators avoid attacks that focus on using the bit-flip technique on encrypted network data packets. Unlike the older ICV (Integrity Check Value) method, MIC is able to protect both the data payload and header of the respective network packet.
RC4
Rivest Cipher 4 - WPA
Rivest Cipher 4 is a type of encryption that has been around since the 1980s. It's one of the earliest and most common stream ciphers. It has been widely used in the Secure Socket Layer (SSL) and Transport Layer Security (TLS) protocols, Wired Equivalent Privacy (WEP), and the IEEE 802.11 wireless LAN standard.
While its use has been quite widespread over the years because of its speed and ease of use, today, RC4 is considered to pose many security risks.
Stream ciphers work byte by byte on a data stream. RC4, in particular, is a variable key-size stream cipher, commonly used with 64-bit and 128-bit key sizes. The cipher uses a permutation and two 8-bit index pointers to generate the keystream. The permutation itself is set up with the Key Scheduling Algorithm (KSA) and then drives the Pseudo-Random Generation Algorithm (PRGA), which generates the keystream.
The pseudorandom stream that RC4 generates is as long as the plaintext stream. Through the exclusive-or (XOR) operation, the keystream and the plaintext produce the ciphertext. Block ciphers, by contrast, split the plaintext into fixed-size blocks and perform encryption on each block.
What does the encryption procedure look like for RC4? First, the user enters a plaintext file and an encryption key. Then, the RC4 encryption engine generates keystream bytes with the help of the Key Scheduling Algorithm and the Pseudo-Random Generation Algorithm. The X-OR operation is executed byte-by-byte, and the byte output is the encrypted text, which the receiver gets. Once they decrypt it through a byte-by-byte X-OR operation, they can access the plaintext stream.
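The KSA and PRGA described above fit in a few lines of Python. This is textbook RC4 for study only; the cipher is broken and should never be used in new systems.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (KSA + PRGA). For study only: RC4 is cryptographically broken."""
    # Key Scheduling Algorithm: build the initial 256-byte permutation
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-Random Generation Algorithm: emit keystream, XOR with data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4(b"Key", b"Plaintext").hex())  # bbf316e8d940af0ad3 (known test vector)
```

Because encryption is just XOR with the keystream, running the same function over the ciphertext with the same key decrypts it, which is also why keystream reuse (as in WEP's repeated IVs) is fatal.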
WPA2
WiFi Protected Access version 2
CCMP
Counter Mode Cipher Block Chaining Message Authentication Code Protocol (Counter Mode CBC-MAC Protocol) or CCM mode Protocol (CCMP) is an encryption protocol designed for Wireless LAN products that implements the standards of the IEEE 802.11i amendment to the original IEEE 802.11 standard.
CCMP is an enhanced data cryptographic encapsulation mechanism designed for data confidentiality and based upon the Counter Mode with CBC-MAC (CCM mode) of the Advanced Encryption Standard (AES). It was created to address the vulnerabilities presented by Wired Equivalent Privacy (WEP), a dated, insecure protocol.
Counter-Mode/CBC-MAC Protocol (Counter Mode with Cipher Block Chaining) - WPA2 and WPA3 - CCMP is an encryption protocol that handles key management and message integrity. It is considered a more secure alternative to TKIP, which is used in WPA, and it is built on AES.
AES
Advanced Encryption Standard - WPA3
WPA3 Security
Aruba Instant supports WPA3 security improvements that include:
Simultaneous Authentication of Equals (SAE)—Replaces WPA2-PSK with password-based authentication that is resistant to dictionary attacks.
WPA3-Enterprise 192-Bit Mode—Brings Suite-B 192-bit security suite that is aligned with Commercial National Security Algorithm (CNSA) for enterprise network. SAE-based keys are not based on PSK and are therefore pairwise and unique between clients and the AP. Suite B restricts the deployment to one of two options:
128-bit security
192-bit security without the ability to mix-and-match ciphers, Diffie-Hellman groups, hash functions, and signature modes
SAE
SAE replaces the less-secure WPA2-PSK authentication. Instead of using the PSK as the PMK, SAE arrives at a PMK, by mapping the PSK to an element of a finite cyclic group, PassWord Element (PWE), doing FCG operations on it, and exchanging it with the peer.
Aruba Instant supports:
SAE without PMK caching
SAE with PMK caching
SAE or WPA2-PSK mixed mode
SAE Without PMK Caching
Instant advertises support for SAE by using an AKM suite selector for SAE in all beacons and probe response frames. In addition, PMF is set to required (MFPR=1).
A client that wishes to perform SAE sends an 802.11 authentication request with authentication algorithm set to value 3 (SAE). This frame contains a well-formed commit message, that is, authentication transaction sequence set to 1, an FCG, commit-scalar, and commit-element.
Instant supports group 19, a 256-bit Elliptic Curve group. Instant responds with an 802.11 authentication containing its own commit message.
Instant and the client compute the PMK and send the confirm message to each other using an authentication frame with authentication transaction sequence set to 2.
The client sends an association request with the AKM suite set to SAE and Instant sends an association response.
Instant initiates a 4-way key handshake with the client to derive the PTK.
SAE With PMK Caching
If SAE has been established earlier, a client that wishes to perform SAE with PMK caching sends an authentication frame with authentication algorithm set to open. Instant sends an authentication response and the client sends a reassociation request with AKM set to SAE and includes the previously derived PMKID.
Instant checks if the PMKID is valid and sends an association response with the status code success.
Instant initiates a 4-way key handshake with the client to derive the PTK.
SAE or WPA2-PSK Mixed Mode
SAE or WPA2-PSK mixed mode allows both SAE clients and clients that can only perform WPA2-PSK to connect to the same BSSID. In this mode, the beacon or probe responses contain an AKM list with both PSK (00-0F-AC:2) and SAE (00-0F-AC:8). Clients that support SAE send an authentication frame with an SAE payload and connect to the BSSID.
Clients that support only WPA2-PSK send an authentication frame with authentication algorithm set to open.
Instant initiates a 4-way key handshake similar to WPA2.
WPA3-Enterprise
WPA3-Enterprise enforces top-secret security standards for enterprise Wi-Fi, compared with the secret-level standards of earlier modes. The top-secret security standards include:
Deriving at least 384-bit PMK/MSK using Suite B compatible EAP-TLS.
Securing pairwise data between STA and authenticator using AES-GCM-256.
Securing group addressed data between STA and authenticator using AES-GCM-256.
Securing group addressed management frames using BIP-GMAC-256
WPA3-Enterprise advertises or negotiates the following capabilities in beacons, probe responses, or 802.11 association:
AKM Suite Selector as 00-0F-AC:12
Pairwise Cipher Suite Selector as 00-0F-AC:9
Group data cipher suite selector as 00-0F-AC:9
Group management cipher suite (MFP) selector as 00-0F-AC:12
If WPA3-Enterprise is enabled, a STA associates successfully only if it uses the four suite selectors above for AKM selection, pairwise data protection, group data protection, and group management protection. If the STA mismatches any one of the four suite selectors, the association fails.
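The four-selector check just described can be sketched as a simple lookup. The selector constants come from the list above; the frame parsing and dictionary shape are assumptions for illustration, not a real driver API.

```python
# Sketch of the WPA3-Enterprise (192-bit mode) suite-selector check.
# A STA associates only if all four selectors match; any mismatch fails.
REQUIRED = {
    "akm":        "00-0F-AC:12",  # AKM suite
    "pairwise":   "00-0F-AC:9",   # pairwise cipher suite
    "group_data": "00-0F-AC:9",   # group data cipher suite
    "group_mgmt": "00-0F-AC:12",  # group management cipher suite (MFP)
}

def association_allowed(sta_selectors: dict) -> bool:
    return all(sta_selectors.get(k) == v for k, v in REQUIRED.items())

assert association_allowed(dict(REQUIRED))                         # all four match
assert not association_allowed({**REQUIRED, "pairwise": "00-0F-AC:4"})  # one mismatch
```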
WPS
WiFi Protected Setup - in wifi
Wi-Fi Protected Setup (WPS) is a feature supplied with many routers. It is designed to make the process of connecting to a secure wireless network from a computer or other device easier.
WPA
Wi-Fi Protected Access - in wifi
AP
Access Point - in wifi
SAE
Simultaneous Authentication of Equals - WPA3
Simultaneous Authentication of Equals (SAE) is based on the Dragonfly handshake protocol and enables the secure exchange of keys of password-based authentication methods. In WPA3, SAE replaces the previous methods of negotiating session keys using pre-shared keys and is also used in WLAN mesh implementations.
What is SAE (Simultaneous Authentication of Equals)?
The acronym SAE stands for Simultaneous Authentication of Equals and refers to a secure key negotiation and exchange method for password-based authentication methods. It is a variant of the Dragonfly key exchange protocol specified in RFC 7664, which in turn is based on Diffie-Hellman key exchange.
Among other things, SAE is used in WPA3 (Wi-Fi Protected Access 3) and replaces the previous method of negotiating session keys using pre-shared keys. In addition, Simultaneous Authentication of Equals is used in IEEE 802.11s WLAN mesh networks during the peer discovery process. SAE improves the security of key exchange in the handshake process.
Even when weak passwords are used, authentication is protected. Dictionary or brute force attacks and attack methods such as KRACK (Key Reinstallation Attack) are virtually impossible when using Simultaneous Authentication of Equals.
PFS
Perfect Forward Secrecy (PFS), also called forward secrecy (FS), as used in WPA3, refers to an encryption system that changes the keys used to encrypt and decrypt information frequently and automatically. This ongoing process ensures that even if the most recent key is hacked, a minimal amount of sensitive data is exposed.
Web pages, calling apps, and messaging apps all use encryption tools with perfect forward secrecy that switch their keys as often as each call or message in a conversation, or every reload of an encrypted web page. This way, the loss or theft of one decryption key does not compromise any additional sensitive information—including additional keys.
Determine whether forward secrecy is present by inspecting the decrypted, plain-text version of the data exchange from the key agreement phase of session initiation. An application or website’s encryption system provides perfect forward secrecy if it does not reveal the encryption key throughout the session.
What is Perfect Forward Secrecy?
Perfect forward secrecy helps protect session keys against being compromised even when the server’s private key may be vulnerable. A feature of specific key agreement protocols, an encryption system with forward secrecy generates a unique session key for every user initiated session. In this way, should any single session key be compromised, the rest of the data on the system remains protected. Only the data guarded by the compromised key is vulnerable.
Before perfect forward secrecy was widely adopted, the Heartbleed bug affected OpenSSL, one of the most widely used implementations of the SSL/TLS protocols. With forward secrecy in place, even man-in-the-middle attacks and similar attempts fail to retrieve and decrypt past sessions and communications, despite compromise of passwords or secret long-term keys.
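The mechanism behind forward secrecy can be shown with ephemeral Diffie-Hellman: fresh one-time exponents per session mean every session key is independent. A minimal sketch, assuming a toy prime and generator that are far too small for real use; a real handshake (e.g. TLS with ECDHE) also authenticates the exchange, which is omitted here.

```python
import hashlib
import secrets

# Ephemeral Diffie-Hellman per session -- toy parameters, illustration only.
p = 2**127 - 1   # Mersenne prime, far too small for real security
g = 3

def new_session_key() -> bytes:
    a = secrets.randbelow(p - 2) + 1      # client ephemeral, discarded after use
    b = secrets.randbelow(p - 2) + 1      # server ephemeral, discarded after use
    shared = pow(pow(g, a, p), b, p)      # both sides compute g^(a*b) mod p
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

k1 = new_session_key()
k2 = new_session_key()
# The exponents are gone once the function returns, so compromising k2 (or a
# long-term key used only for authentication) reveals nothing about k1.
assert k1 != k2
```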
RFID
Radio Frequency Identification
NFC
Near Field Communication
GPS
Global Positioning System
Wi-Fi
Wireless Fidelity
CI/CD
continuous integration, continuous delivery, and continuous deployment
CI/CD increases the frequency of application delivery by introducing automation into the stages of application development. The main concepts attributed to CI/CD are continuous integration, continuous delivery, and continuous deployment. CI/CD is a solution to the problems that integrating new code creates for development and operations teams (what is known as “integration hell”).
Specifically, CI/CD introduces ongoing automation and continuous monitoring throughout the application lifecycle, from the integration and testing phases through to delivery and deployment. Taken together, these practices are often referred to as a “CI/CD pipeline,” and they rely on agile collaboration between development and operations teams, whether through a DevOps or a site reliability engineering (SRE) approach.
What’s the difference between CI and CD (and the other CD)?
The acronym “CI/CD” has a few different meanings. “CI” always refers to “continuous integration,” an automation process for developers. Successful CI means that developers regularly make changes to their app’s code, test them, and merge them into a shared repository. It’s a solution to the problem of having too many pieces of an app in development at once that might conflict with one another.
“CD” refers to “continuous delivery” and/or “continuous deployment,” which are closely related concepts that are sometimes used interchangeably. Both are about automating later stages of the pipeline, but they are sometimes used separately to illustrate just how much automation is happening.
In continuous delivery, the changes a development team makes to an application are typically automatically tested and uploaded to a repository (such as GitHub or a container registry), where they can be deployed to a live production environment by the operations team. Continuous delivery addresses the problem of poor visibility and communication between the development team and the business. Its purpose is therefore to make deploying new code as effortless as possible.
Continuous deployment (the other possible meaning of “CD”) can refer to automatically releasing a developer’s changes from the repository to the production environment, where they are usable by customers. It addresses the problem of overloading operations teams with manual tasks that slow down app delivery. It builds on continuous delivery by automating the next stage in the pipeline.
“CI/CD” may refer only to the two connected practices of continuous integration and continuous delivery, or to all three practices: continuous integration, continuous delivery, and continuous deployment. To make things more complicated, “continuous delivery” is sometimes used in a way that also encompasses continuous deployment.
In the end, it’s not worth getting bogged down in these semantics. Just remember that CI/CD refers to a process, often visualized as a pipeline, that introduces a high degree of automation and continuous monitoring into application development.
Case by case, what the terms actually mean depends on how much automation has been built into the CI/CD pipeline. Many enterprises start with continuous integration and then gradually automate delivery and deployment, for instance as part of cloud-native application development.
Continuous integration
In modern application development, the goal is to have multiple developers working simultaneously on different features of the same app. However, if an organization plans to merge all of that source code on a single day (“merge day”), the resulting work can be tedious, manual, and time-consuming. That’s because when a developer working in isolation makes changes to an application, those changes can conflict with other changes being made simultaneously by other developers. The problem is compounded further if each developer has customized their own integrated development environment (IDE) instead of the team agreeing on a single, cloud-based one.
Continuous integration (CI) helps developers merge their code changes into a shared branch, or “trunk,” more frequently, sometimes even daily. Once a developer’s changes are merged, they are validated by automatically building the application and running different levels of automated tests (typically unit and integration tests) to ensure the changes haven’t broken the app. In other words, everything is tested, from classes and functions to the different modules that make up the application. If a conflict between new and existing code is detected, CI makes it easier, faster, and more frequent to fix those bugs.
Continuous delivery
Following the automation of builds and of unit and integration testing in CI, continuous delivery automates the release of validated code to a repository. So, to have an effective continuous delivery process, CI must already be built into the development pipeline. Continuous delivery ensures there is always a codebase ready for deployment to a production environment.
In continuous delivery, every stage (from the merging of code changes to the delivery of production-ready builds) involves automating the testing and release of code. At the end of the process, the operations team is able to deploy an app to production quickly and easily.
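The flow described above (merge, build, test, publish) can be sketched as a pipeline of stages where a failure at any stage stops the release. A minimal Python sketch; the stage names and pass/fail logic are illustrative assumptions, not a real CI system.

```python
# Minimal sketch of a CI/CD pipeline: each change flows through automated
# stages, and a failure at any stage stops the release.
def build(change):       return True                  # compile / package the app
def unit_tests(change):  return "bug" not in change   # stand-in test suite
def publish(change):     return True                  # push artifact to a repository

def run_pipeline(change, stages):
    for stage in stages:
        if not stage(change):
            return f"pipeline failed at {stage.__name__}"
    return "ready for production"

print(run_pipeline("feature-x", [build, unit_tests, publish]))  # ready for production
print(run_pipeline("bug-y",     [build, unit_tests, publish]))  # pipeline failed at unit_tests
```

Continuous deployment would simply append one more automated stage (release to production) to the same list, with no manual gate in between.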
Continuous deployment
The final stage of a mature CI/CD pipeline is continuous deployment. As an extension of continuous delivery, which automates the release of a production-ready build to a code repository, continuous deployment automates releasing an app into production. Because there is no manual gate between production and the preceding stage of the pipeline, continuous deployment relies heavily on well-designed test automation.
In practice, continuous deployment means that a developer’s change to a cloud application could go live within minutes of being written (assuming it passes automated testing). This makes it much easier to continuously receive and incorporate user feedback. Taken together, these three CI/CD practices reduce the risks of application deployment, since it’s easier to release changes in small pieces than all at once. This approach does require a significant upfront investment, however, because automated tests must be written to accommodate a variety of testing and release stages in the CI/CD pipeline.
Common CI/CD tools
CI/CD tools help teams automate development, deployment, and testing. Some tools specifically handle the integration (CI) side, others manage development and deployment (CD), and still others specialize in continuous testing or related functions.
One of the best-known open source CI/CD tools is the Jenkins automation server. Jenkins can handle anything from a simple CI server to a complete CD hub.
Tekton Pipelines is a CI/CD framework for Kubernetes platforms that provides a standard cloud-native CI/CD experience with containers.
Beyond Jenkins and Tekton Pipelines, other open source CI/CD tools you may wish to investigate include:
Spinnaker: a CD platform built for multicloud environments
GoCD: a CI/CD server with a particular emphasis on modeling and visualization
Concourse: an open source tool built around a continuous automation approach
Screwdriver: a build platform designed for CD
You can also turn to managed CI/CD tools from a variety of vendors. The major public cloud providers all offer CI/CD solutions, as do GitLab, CircleCI, Travis CI, Atlassian Bamboo, and many others.
Additionally, most tools that are essential to DevOps take part in the CI/CD process. Tools for configuration automation (such as Ansible, Chef, and Puppet), container runtimes (such as Docker, rkt, and cri-o), and container orchestration (Kubernetes) aren’t strictly CI/CD tools, but they show up in many CI/CD workflows.
DevSecOps
development, security, and operations
DevSecOps stands for development, security, and operations. It’s an approach to culture, automation, and platform design that integrates security as a shared responsibility throughout the entire IT lifecycle.
DevSecOps vs. DevOps
DevOps isn’t just about development and operations teams. If you want to take full advantage of the agility and responsiveness of a DevOps approach, IT security must also play an integrated role in the full life cycle of your apps.
Why? In the past, the role of security was isolated to a specific team in the final stage of development. That wasn’t as problematic when development cycles lasted months or even years, but those days are over. Effective DevOps ensures rapid and frequent development cycles (sometimes weeks or days), but outdated security practices can undo even the most efficient DevOps initiatives.
Now, in the collaborative framework of DevOps, security is a shared responsibility integrated from end to end. It’s a mindset that is so important, it led some to coin the term “DevSecOps” to emphasize the need to build a security foundation into DevOps initiatives.
DevSecOps means thinking about application and infrastructure security from the start. It also means automating some security gates to keep the DevOps workflow from slowing down. Selecting the right tools to continuously integrate security, like agreeing on an integrated development environment (IDE) with security features, can help meet these goals. However, effective DevOps security requires more than new tools—it builds on the cultural changes of DevOps to integrate the work of security teams sooner rather than later.
IaC
Infrastructure as Code
Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes. With IaC, configuration files are created that contain your infrastructure specifications, which makes it easier to edit and distribute configurations.
ML
Machine Learning
AI
Artificial Intelligence
Recently I visited with the cybersecurity teams at NTT Communications, British Telecom (BT) and DBS Bank. Each has mature, useful and metrics-driven security solutions.
NTT excels at 24x7 security monitoring. Some of the subtleties of its threat management program are pretty amazing; it feels it can identify characteristics of not only groups of attackers, but actual individuals.
BT has an incident response capability that is second to none, driven partly by its interest in combining red team and blue team tactics. These two security teams carefully hone their incident response steps and techniques.
All of these companies have taken a unique approach, in that they are upskilling all dedicated security workers to consider not just the defender’s dilemma, but also the hacker’s dilemma. This means they are not just focused on what happens if the hacker gets past their defenses. They’re focused, instead, on the mistakes an attacker makes, rather than the mistakes a defender can make.
Enter Artificial Intelligence (AI) and Machine Learning
Like many others, these three organizations are looking into the benefits of Artificial Intelligence (AI). While AI might not be fully ready for prime time, only a fool would look the other way or put their head in the sand when it comes to how AI might be able to help improve cybersecurity operations.
Why Use AI?
In the study Emerging Business Opportunities in AI, CompTIA found that only 29% of today’s companies are using AI for mission-critical services. The research shows some of the ways, though, that AI will unlock tremendous potential moving forward.
I’ve been lucky enough to interview a few people about future technologies, including automation and AI. For example, at the CompTIA Communities and Councils Forum (CCF), I interviewed Smith.AI’s Maddy Martin and CrushBank’s David Tan about how AI is being used today. (You can also watch that conversation on our YouTube Channel.)
Both Maddy and David were adamant: While AI can possibly replace jobs, for the foreseeable future, we’ll see AI enhance capabilities. But, there are a few things to consider.
There are two primary reasons why today’s companies want to use AI:
To automate the collection of data from internet of things (IoT) devices and the huge amount of data that they generate.
To identify problems with how information flows – or doesn’t – between business units.
If this is the case, let’s take two common IT job roles into consideration: help desk technician and cybersecurity analyst.
AI and the Help Desk
Recently, I spoke with the team at Dell Computing in India about their use of AI. They use machine learning to triage help desk calls, and it’s doing wonders. While AI isn’t all that good (right now) when it comes to telling the difference between sarcasm and earnestness, it is pretty good at language translation and telling if people are angry. It can pattern match very, very well.
Because AI is good at pattern matching, companies such as Dell, NTT and others are very interested in using AI to quickly identify any repetitive patterns. One BT executive told me that while it is unlikely for AI to take away any particular job roles yet, it is important for today’s help desk workers to focus on skills such as troubleshooting, advanced networking and security. Many of the activities in these three buckets are far less repetitious.
But, there’s a warning, here: if you find yourself repeating a message or screen presented to you quite often, chances are you’ll need to upskill yourself.
AI and Cybersecurity
At both RSA San Francisco and Infosecurity Europe, I saw quite a few cybersecurity vendors claim they were using machine learning and AI.
I heard some of the following claims:
Automated signature enhancement: Security information and event management (SIEM) tools that use machine learning to automatically improve performance and change alerting signatures.
The ability to do rudimentary threat hunting: Using machine learning techniques, algorithms can run in the background and identify certain patterns made by hackers and hacker groups. In the same way that, say, Mitre Corporation has been able to identify the threat characteristics of threat actor groups such as FIN6 and FIN7, some organizations say they are close to automating this procedure.
The organizations I’ve been talking to haven’t quite bought into these claims, but they’re very interested in seeing the promise of these automated solutions becoming real.
A cybersecurity analyst, for example, tends to spend time in three major areas:
Capturing: Obtaining data from the network or from network hosts
Slicing: Breaking data into categories and turning it into useful trend-based, actionable information – this is the analytics part of the job
Dicing: Visualizing this data so that a human being can make a decision
When talking with cybersecurity analysts from organizations such as BT and DBS, they’ve told me they spend a lot of time tweaking how their security tools capture traffic. They feel that AI and machine learning–based programs can help them free up time, because capturing is a very repetitive thing. If they can be freed up from capturing traffic, they can spend more time analyzing and visualizing data. This is where humans excel. It’s a pretty good example of how AI can free up security workers to focus on more important tasks.
I don’t want to get ahead of myself, here. AI can be used for far more things than just the help desk and cybersecurity. Nevertheless, there are some major considerations that today’s organizations – large and small – need to consider.
How Do You Use AI For IT?
The companies I’ve talked to concerning AI seem to be pretty wise. They’re slowly looking into the realities of AI. For example, one of the important things to consider is that many AI implementations need to be primed and maintained. Let me explain.
Usually, to get machine learning working well, you first must prime the pump with useful information derived from a company’s experience. You can’t just turn on the programming and hope for the best.
The old computer science truism of “garbage in, garbage out” remains in force. This means that even when we start using automated, intelligent solutions, we’ll still need to teach them best practices.
So, even though there are automated pen testing solutions, such as Red Canary, it’s still necessary to teach them useful techniques. And those techniques aren’t universal – they are based on the organization’s specific needs. A health care organization will have a different set of practices than, say, a service provider/tech organization such as NTT or BT.
The organizations that I’ve talked with aren’t skeptical about AI. Far from it. They simply want to make sure that they have organized themselves properly. After all, if AI and machine learning are really forms of automation, it’s extremely important that organizations don’t automate processes and communications paths that are full of problems. One of the realities, then, is that AI will be implemented once organizations feel they have processes that are worth automating.
The Future of AI and Business
It’s tempting to ask the question, “What is the future of AI and business?” But after talking with organizations who are implementing it, it’s best to reverse that question.
Today’s companies want to be relevant, so they are asking careful questions about AI. The smart companies seem to be asking where they can use AI, rather than how AI can use them; the tail can’t wag the dog, here.
Want to learn more about the future of AI?
Check out the study, Emerging Business Opportunities in AI.
Practical Benefits of AI and Machine Learning: Is It Really Cost Savings?
The companies I’ve spoken with often cite cost savings as one of the major benefits of using AI. I have to say that this makes me a bit queasy.
Why?
Because I remember when voice over IP (VoIP) was going to save money. It really didn’t. What it did, though, was improve business communications and enable more efficiencies.
In the long run, this doesn’t save money so much as allow businesses to remain, well, in business. There’s a difference, here. I feel AI will do much the same thing. It may not save money, but wise implementation will save businesses.
With AI and machine learning, companies will be able to do the following:
Eliminate repetitive tasks
Personalize services
More easily “crunch” data to find useful trends
So, I commend the organizations that are using AI and machine learning. They’re neither afraid of it, nor are they being naïve or overly enthusiastic. They see the advent of another useful tool that will help them improve processes and create efficiencies. As long as decisions are made without cynicism, and with an eye toward improving what humans can do best, what’s wrong with that?
ANN
Artificial Neural Network
Artificial Neural Network is a computational data model used in the development of Artificial Intelligence (AI) systems capable of performing “intelligent” tasks. Neural Networks are commonly used in Machine Learning (ML) applications, which are themselves one implementation of AI. Deep Learning is a subset of ML.
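The building block of an ANN is a single artificial neuron: a weighted sum of inputs passed through a non-linear activation. A minimal hand-wired sketch; the weights and network shape below are arbitrary illustrative choices, not a trained model.

```python
import math

# One artificial neuron: weighted sum of inputs + bias, through a sigmoid.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))        # sigmoid squashes output into (0, 1)

# A tiny 2-input, 2-hidden-neuron, 1-output network wired by hand.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

out = tiny_network([1.0, 0.5])
assert 0.0 < out < 1.0                   # sigmoid output always lies in (0, 1)
```

In a machine learning setting, the weights would be adjusted by a training procedure such as backpropagation rather than fixed by hand.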
DLL
Dynamic Link Libraries - Windows
Media
Media - CDs, DVDs, USB Thumb Drive, external HDDs, tape backups, floppy disks…
NAT
Network Address Translation - FW filtering method
IP filtering and network address translation
Last Updated: 2021-04-14
IP filtering and network address translation (NAT) act like a firewall to protect your internal network from intruders.
IP filtering lets you control what IP traffic is allowed into and out of your network. Basically, it protects your network by filtering packets according to rules that you define. NAT allows you to hide your unregistered private IP addresses behind a set of registered IP addresses. This helps protect your internal network from outside networks. NAT also helps alleviate the IP address depletion problem, because many private addresses can be represented by a small set of registered addresses.
PAT
Port Address Translation - FW filtering method
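The NAT and PAT ideas above can be sketched as a translation table: many private addresses hidden behind one registered address, with ports keeping the flows apart. The addresses, ports, and table shape are made up for illustration.

```python
import itertools

# Toy NAT/PAT translation table for outbound connections.
public_ip = "203.0.113.5"                 # the one registered (public) address
_next_port = itertools.count(40000)       # next free public-side port
nat_table = {}                            # (private_ip, private_port) -> public port

def translate_outbound(private_ip, private_port):
    key = (private_ip, private_port)
    if key not in nat_table:              # new flow: allocate a public port
        nat_table[key] = next(_next_port)
    return public_ip, nat_table[key]

a = translate_outbound("192.168.1.10", 51515)
b = translate_outbound("192.168.1.11", 51515)
assert a[0] == b[0] == public_ip          # both hosts share one public address
assert a[1] != b[1]                       # but get distinct public-side ports
```

Replies arriving at a given public port are looked up in the same table in reverse, which is how the firewall knows which internal host each packet belongs to.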
ALG
Application Layer Gateway - FW filtering method
Application Layer Gateway
What is an Application Layer Gateway?
An application layer gateway (ALG) is a type of security software or device that acts on behalf of the application servers on a network, protecting the servers and applications from traffic that might be malicious.
What does an ALG do?
An application layer gateway, also known as an application proxy gateway, may perform various functions at the application layer of an infrastructure. For example, it is often used to provide access control features that are not available natively in the application protocol itself, or to let application traffic traverse firewalls.
These functions may include address and port translation, resource allocation, application response control, and synchronization of data and control traffic.
A web proxy is a tool that acts as a proxy for the web server, enabling you to manage application layer protocols such as SIP and FTP and shield the web server by blocking connections when appropriate.
Why are application layer gateways important?
Applications are vital to business operations and daily life, and cyber-attacks often target the application layer of IT infrastructures. To ensure business continuity and protect sensitive data and personally identifiable information (PII), you must protect them at every stage of the process, specifically by addressing the application layer. Application layer gateways are one option for securing applications and their data.
How does an application layer gateway work?
A secure web proxy acts like a proxy server for the applications and manages the secure connection between the web browser and the web server. Typically, a web proxy performs deep packet inspection and blocks malicious content. Application layer gateways (ALGs) are a good fit for organizations that want to create a secure perimeter by filtering traffic for applications and websites. ALG capabilities typically exceed those of application firewalls, which are designed to prevent access to applications, not to content and data.
CLG
Circuit-Level gateway - FW type
A circuit-level gateway is a type of firewall.
Circuit-level gateways work at the session layer of the OSI model, or as a “shim layer” between the application layer and the transport layer of the TCP/IP stack. They monitor TCP handshaking between packets to determine whether a requested session is legitimate. Information passed to a remote computer through a circuit-level gateway appears to have originated from the gateway. Traffic is filtered according to defined session rules and may be restricted to recognized computers only. Circuit-level firewalls conceal the details of the protected network from external traffic, which helps deny access to impostors. Circuit-level gateways are relatively inexpensive and have the advantage of hiding information about the private network they protect. However, they do not filter individual packets.
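The behavior described above, verifying the TCP handshake and applying session rules without inspecting payloads, can be sketched like this. The rule set, destinations, and simplified flag sequence are illustrative assumptions.

```python
# Toy circuit-level gateway check: is the handshake well-formed, and is the
# destination permitted by the session rules? Payloads are never inspected.
ALLOWED_DESTS = {("10.0.0.5", 443), ("10.0.0.6", 22)}   # example session rules

def session_permitted(handshake, dest):
    # A legitimate TCP session opens with SYN -> SYN/ACK -> ACK, in order.
    valid_handshake = handshake == ["SYN", "SYN/ACK", "ACK"]
    return valid_handshake and dest in ALLOWED_DESTS

assert session_permitted(["SYN", "SYN/ACK", "ACK"], ("10.0.0.5", 443))
assert not session_permitted(["SYN", "ACK"], ("10.0.0.5", 443))       # bad handshake
assert not session_permitted(["SYN", "SYN/ACK", "ACK"], ("10.0.0.9", 80))  # bad dest
```

Once a session is permitted, every packet belonging to it is relayed without per-packet filtering, which is exactly the limitation noted above.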
WAF
Web Application Firewall
A web application firewall (WAF) is a specific form of application firewall that filters, monitors, and blocks HTTP traffic to and from a web service. By inspecting HTTP traffic, it can prevent attacks exploiting a web application’s known vulnerabilities, such as SQL injection, cross-site scripting (XSS), file inclusion, and improper system configuration.[1]
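A WAF's HTTP inspection can be sketched as a rule pass over the request, matching the attack classes named above (SQL injection, XSS). Real WAFs use large curated rule sets (for example the OWASP Core Rule Set); the two regexes here are crude illustrations only.

```python
import re

# Toy WAF: scan a request body against a couple of illustrative signatures.
RULES = {
    "sql_injection": re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.I),
    "xss":           re.compile(r"<\s*script", re.I),
}

def inspect(request_body: str):
    hits = [name for name, rx in RULES.items() if rx.search(request_body)]
    return ("block", hits) if hits else ("allow", [])

assert inspect("id=42")[0] == "allow"
assert inspect("id=1 OR 1=1 --")[0] == "block"                 # SQL injection
assert inspect("q=<script>alert(1)</script>")[0] == "block"    # reflected XSS
```

Because signatures like these are easy to evade, production WAFs combine them with normalization, anomaly scoring, and protocol validation rather than bare regex matching.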
DLP
data loss prevention - Also called Information Leak Protection (ILP) or Extrusion Prevention Systems (EPS)
ILP
Information Leak Protection - Also called data loss prevention (DLP) or Extrusion Prevention Systems (EPS)
EPS
Extrusion Prevention Systems - Also called Information Leak Protection (ILP) or data loss prevention (DLP)
UTM
Unified Threat Management
Unified threat management (UTM) is an approach to information security where a single hardware or software installation provides multiple security functions. This contrasts with the traditional method of having point solutions for each security function.[1] UTM simplifies information-security management by providing a single management and reporting point for the security administrator rather than managing multiple products from different vendors.[2][3] UTM appliances have been gaining popularity since 2009, partly because the all-in-one approach simplifies installation, configuration and maintenance.[4] Such a setup saves time, money and people when compared to the management of multiple security systems. Instead of having several single-function appliances, all needing individual familiarity, attention and support, network administrators can centrally administer their security defenses from one computer. Some of the prominent UTM brands are Cisco, Fortinet, Sophos, Netgear, FortiGate, Huawei, WiJungle, SonicWall and Check Point.[5] UTMs are now typically called next-generation firewalls.
Features
UTMs at the minimum should have some converged security features like
Network firewall
Intrusion detection service (IDS)
Intrusion prevention service (IPS)
Some of the other features commonly found in UTMs are:
Gateway anti-virus
Application layer (Layer 7) firewall and control
Deep packet inspection
Web proxy and content filtering
Email filtering for spam and phishing attacks
Data loss prevention (DLP)
Security information and event management (SIEM)
Virtual private network (VPN)
Network access control
Network tarpit
Additional security services against denial of service (DoS), distributed denial of service (DDoS), zero-day, and spyware attacks
Disadvantages
Although a UTM offers ease of management from a single device, it also introduces a single point of failure within the IT infrastructure. Additionally, the UTM approach may go against one of the basic information assurance/security principles of defense in depth: a UTM replaces multiple security products, and a compromise at the UTM layer breaks the entire defense-in-depth approach.[6]
NIDS
Network Intrusion Detection Systems
NIPS
Network Intrusion Prevention Systems
VDI
Virtual Desktop Infrastructure
With VDI, or a Virtual Desktop Infrastructure, you’re running applications in the cloud or in a data center, and you’re running as little of the application as possible on the local device. This virtualization of a user’s desktop is sometimes called VDE, or Virtual Desktop Environment.
This puts all of the computing power in the data center or in the cloud. What the end user sees is really a virtual desktop. All of the work is really happening in this centralized environment. This means that the client’s workstation has relatively small computing requirements, and the operating system that’s running on the client is less important, as long as it can run the software required to connect to this virtual desktop infrastructure.
Security professionals like VDI because it makes security a lot more centralized. All of the data and applications are in the data center or in a centralized cloud infrastructure. If you need to make any changes, you make them in one single central place, and all of the virtual desktops are able to take advantage of those changes. And all of the data and all of the applications never leave the data center, making it that much more of a secure application environment.
As more applications are moving to the cloud, it becomes a lot more difficult to provide the same level of security. If the clients are working, but the data is in the cloud, how do you manage to keep everything secure?
VPC
Virtual Private Cloud
Amazon Virtual Private Cloud (VPC) is a commercial cloud computing service that provides users a virtual private cloud, by “provisioning a logically isolated section of Amazon Web Services (AWS) Cloud”.[1] Enterprise customers are able to access the Amazon Elastic Compute Cloud (EC2) over an IPsec based virtual private network.[2][3] Unlike traditional EC2 instances which are allocated internal and external IP numbers by Amazon, the customer can assign IP numbers of their choosing from one or more subnets.[4] By giving the user the option of selecting which AWS resources are public facing and which are not, VPC provides much more granular control over security. For Amazon it is “an endorsement of the hybrid approach, but it’s also meant to combat the growing interest in private clouds”.[5]
CASB
Cloud Access Security Broker
A CASB, or Cloud Access Security Broker, is a type of software that secures an organization’s SaaS applications (Salesforce, Box, …) and IaaS (OCI, AWS, Azure, …) so that the organization’s data stays protected.
A CASB secures data end to end, from the cloud to the device. A Cloud Access Security Broker offers many services:
visibility into the company’s cloud application usage and detection of shadow IT
user and entity behavior analytics (UEBA)
user access control
compliance: enforcement of security policies and help meeting GDPR requirements
alerting on security threats
malware detection, etc.
A cloud access security broker (CASB) (sometimes pronounced cas-bee) is on-premises or cloud based software that sits between cloud service users and cloud applications, and monitors all activity and enforces security policies.[1] A CASB can offer services such as monitoring user activity, warning administrators about potentially hazardous actions, enforcing security policy compliance, and automatically preventing malware.
Types
CASBs deliver security and management features. Broadly speaking, “security” is the prevention of high-risk events, whilst “management” is the monitoring and mitigation of high-risk events.
CASBs that deliver security must be in the path of data access, between the user and the cloud provider. Architecturally, this might be achieved with proxy agents on each end-point device, or in agentless fashion without configuration on each device. Agentless CASBs allow for rapid deployment and deliver security on both company-managed and unmanaged BYOD devices. Agentless CASBs also respect user privacy, inspecting only corporate data. Agent-based CASBs are difficult to deploy and effective only on devices that are managed by the corporation. Agent-based CASBs typically inspect both corporate and personal data.
API
Application Programming Interface
SAML
Security Assertion Markup Language - Key management against cloud threat
SAML is a popular online security protocol that verifies a user’s identity and privileges. It enables single sign-on (SSO), allowing users to access multiple web-based resources across multiple domains using only one set of login credentials.
SAML stands for Security Assertion Markup Language. SAML is an open standard used for authentication. It provides single sign-on across multiple domains, allowing users to authenticate only once. Users gain access to multiple resources on different systems by supplying proof that the authenticating system successfully authenticated them.
SAML is the most widely adopted federated identity standard for authentication. It works by passing a SAML token (called an assertion) containing identifying user information between the authenticating system and a system on a different domain that offers a resource. Typically, the resource is a web- or cloud-based application. Resources can be internal to an organization, externally hosted, or delivered as a service.
Security Assertion Markup Language (SAML) is an open federation standard that allows an identity provider (IdP) to authenticate users and then pass an authentication token to another application known as a service provider (SP). SAML enables the SP to operate without having to perform its own authentication, and to integrate internal and external users by accepting the passed identity. It allows security credentials to be shared with an SP across a network, typically an application or service. SAML enables secure, cross-domain communication between public cloud and other SAML-enabled systems, as well as a selected number of other identity management systems located on-premises or in a different cloud. With SAML, you can enable a single sign-on (SSO) experience for your users across any two applications that support the SAML protocol, allowing the SSO to perform several security functions on behalf of one or more applications.
SAML relates to the XML variant language used to encode this information and can also cover various protocol messages and profiles that make up part of the standard.
SAML Provider
SAML facilitates the exchange of user identity data between two types of SAML providers:
Identity provider (IdP)—A SAML authority that centralizes user identity data and provides a single point of secure authentication. The IdP can be an in-house identity and access management (IAM) system or a hosted authentication SAML service provider, such as Google Apps.
Service provider (SP)—A SAML consumer that offers a resource to users. Typically, that resource is a web-based application or a paid subscription service, such as a customer relationship management (CRM) platform.
SAML Assertion
A SAML assertion is a packet of information (also known as an XML document) that contains all the information necessary to confirm a user’s identity, including the source of the assertion, a timestamp indicating when the assertion was issued, and the conditions that make the assertion valid. SAML defines three different types of assertion statements:
Authentication—An authentication assertion affirms that a specific identity provider authenticated a specific user at a specific time.
Attribute—An attribute is an identifying detail associated with a specific user. Examples of attributes include data such as the user’s first name, last name, email address, phone number, X.509 public certificate file, and so on.
Authorization decision—The authorization decision informs whether a specific user has been allowed or denied access to the requested resource. Typically, a SAML Policy Decision Point (PDP) issues this type of assertion when a user requests access to a resource.
A typical SAML assertion comprises a single authentication statement and an optional single attribute statement; however, in certain cases, a SAML response can contain multiple assertions.
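The structure above can be sketched in code. This builds a bare-bones, unsigned assertion carrying an issuer, a subject, and one attribute statement; the issuer URL and attribute names are hypothetical, and a real assertion would also carry an ID, conditions, and an XML signature.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# SAML 2.0 assertion namespace
NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion(issuer: str, subject: str, attributes: dict) -> str:
    """Build a minimal (unsigned, incomplete) SAML-style assertion."""
    ET.register_namespace("saml", NS)
    assertion = ET.Element(f"{{{NS}}}Assertion", {
        "Version": "2.0",
        "IssueInstant": datetime.now(timezone.utc).isoformat(),
    })
    ET.SubElement(assertion, f"{{{NS}}}Issuer").text = issuer
    subj = ET.SubElement(assertion, f"{{{NS}}}Subject")
    ET.SubElement(subj, f"{{{NS}}}NameID").text = subject
    stmt = ET.SubElement(assertion, f"{{{NS}}}AttributeStatement")
    for name, value in attributes.items():
        attr = ET.SubElement(stmt, f"{{{NS}}}Attribute", {"Name": name})
        ET.SubElement(attr, f"{{{NS}}}AttributeValue").text = value
    return ET.tostring(assertion, encoding="unicode")

xml_doc = build_assertion("https://idp.example.com", "alice@example.com",
                          {"email": "alice@example.com"})
```

The SP would parse this document, verify the (omitted) signature against the IdP's certificate, and grant access based on the subject and attributes.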
CDN
Content Delivery Network
A content delivery network (CDN) refers to a geographically distributed group of servers which work together to provide fast delivery of Internet content.
A CDN allows for the quick transfer of assets needed for loading Internet content including HTML pages, JavaScript files, stylesheets, images, and videos. The popularity of CDN services continues to grow, and today the majority of web traffic is served through CDNs, including traffic from major sites like Facebook, Netflix, and Amazon.
A properly configured CDN may also help protect websites against some common malicious attacks, such as Distributed Denial of Service (DDoS) attacks.
Is a CDN the same as a web host?
While a CDN does not host content and can’t replace the need for proper web hosting, it does help cache content at the network edge, which improves website performance. Many websites struggle to have their performance needs met by traditional hosting services, which is why they opt for CDNs.
By utilizing caching to reduce hosting bandwidth, helping to prevent interruptions in service, and improving security, CDNs are a popular choice to relieve some of the major pain points that come with traditional web hosting.
What are the benefits of using a CDN?
Although the benefits of using a CDN vary depending on the size and needs of an Internet property, the primary benefits for most users can be broken down into 4 different components:
Improving website load times - By distributing content closer to website visitors by using a nearby CDN server (among other optimizations), visitors experience faster page loading times. As visitors are more inclined to click away from a slow-loading site, a CDN can reduce bounce rates and increase the amount of time that people spend on the site. In other words, a faster website means more visitors will stay and stick around longer.
Reducing bandwidth costs - Bandwidth consumption costs for website hosting are a primary expense for websites. Through caching and other optimizations, CDNs are able to reduce the amount of data an origin server must provide, thus reducing hosting costs for website owners.
Increasing content availability and redundancy - Large amounts of traffic or hardware failures can interrupt normal website function. Thanks to their distributed nature, a CDN can handle more traffic and withstand hardware failure better than many origin servers.
Improving website security - A CDN may improve security by providing DDoS mitigation, improvements to security certificates, and other optimizations.
How does a CDN work?
At its core, a CDN is a network of servers linked together with the goal of delivering content as quickly, cheaply, reliably, and securely as possible. In order to improve speed and connectivity, a CDN will place servers at the exchange points between different networks.
These Internet exchange points (IXPs) are the primary locations where different Internet providers connect in order to provide each other access to traffic originating on their different networks. By having a connection to these high speed and highly interconnected locations, a CDN provider is able to reduce costs and transit times in high speed data delivery.
CORS
Cross-origin resource sharing (CORS) is a browser mechanism which enables controlled access to resources located outside of a given domain. It extends and adds flexibility to the same-origin policy (SOP). However, it also provides potential for cross-domain attacks, if a website’s CORS policy is poorly configured and implemented. CORS is not a protection against cross-origin attacks such as cross-site request forgery (CSRF).
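The "poorly configured" failure mode is easiest to see server-side. Below is a sketch of the response-header half of CORS with an explicit origin allowlist; the origin names are hypothetical. Reflecting arbitrary `Origin` values (or wildcarding with `*` on credentialed endpoints) is the misconfiguration that enables cross-domain attacks.

```python
# Hypothetical allowlist of trusted origins.
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin: str) -> dict:
    """Return CORS response headers for a cross-origin request.

    A safe policy echoes back only explicitly allowed origins; anything
    else gets no CORS headers, so the browser blocks the cross-origin read.
    """
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin,
                "Vary": "Origin"}       # responses differ per Origin
    return {}
```

Note the `Vary: Origin` header: without it, a shared cache could serve one origin's CORS response to another.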
CAM
Content Addressable Memory - MAC flooding and spoofing - the switch memory set aside to store the MAC addresses (the CAM, content-addressable memory) learned on each port.
Content-addressable memory (CAM) is a special type of computer memory used in certain applications for very-high-speed searching. It is also known as associative memory (associative storage, or associative array).
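The MAC-flooding attack mentioned above exploits the fact that the CAM table is finite: once it is full, frames to unlearned destinations are flooded out of every port, letting an attacker sniff them. A minimal sketch (with a toy capacity and made-up MAC addresses):

```python
CAM_CAPACITY = 4          # toy value; real switches hold thousands of entries

class Switch:
    def __init__(self):
        self.cam = {}     # MAC address -> port

    def learn(self, src_mac: str, port: int) -> None:
        # A new source MAC is learned only while the table has room.
        if src_mac in self.cam or len(self.cam) < CAM_CAPACITY:
            self.cam[src_mac] = port

    def forward(self, dst_mac: str) -> str:
        if dst_mac in self.cam:
            return f"port {self.cam[dst_mac]}"
        return "flood to all ports"      # unknown destination

sw = Switch()
sw.learn("aa:aa:aa:aa:aa:01", 1)         # legitimate host on port 1
# Attacker on port 9 floods bogus source MACs until the table is full:
for i in range(100):
    sw.learn(f"de:ad:be:ef:00:{i:02x}", 9)
# A host that never got a CAM entry now has its traffic flooded everywhere:
result = sw.forward("bb:bb:bb:bb:bb:02")
```

Port security (limiting learned MACs per port) is the usual countermeasure.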
ARP
Address Resolution Protocol - maps between MAC addresses and IP addresses, letting hosts determine which MAC address goes with which IP, and which IP goes with which MAC address.
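A host keeps these mappings in an ARP cache. The sketch below (with made-up addresses) also shows why ARP spoofing works: replies are unauthenticated, so the last writer wins and a forged reply simply overwrites the legitimate entry.

```python
# Toy ARP cache: IP address -> MAC address.
arp_cache = {}

def handle_arp_reply(ip: str, mac: str) -> None:
    """Process an ARP reply -- no verification, last writer wins."""
    arp_cache[ip] = mac

handle_arp_reply("192.168.1.1", "aa:aa:aa:aa:aa:01")  # real gateway replies
handle_arp_reply("192.168.1.1", "de:ad:be:ef:00:99")  # attacker's forged reply
```

After the forged reply, traffic for the gateway's IP is framed to the attacker's MAC, which is the basis of ARP-poisoning man-in-the-middle attacks; dynamic ARP inspection on switches is a common defense.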
ACL
Access Control List - Routers - LAN, WAN and DMZ
Access Control List (ACL) traditionally refers to two things in computer security:
- a system for managing access rights to files at a finer granularity than the method used by UNIX systems allows.
- in networking, a list of addresses and ports allowed or denied by a firewall.
The notion of an ACL is fairly general, though, and one can speak of ACLs for managing access to any type of resource.
An ACL is a list of Access Control Entries (ACEs), each granting or revoking access rights for a user or group.
Under UNIX
Under UNIX, ACLs do not replace the usual permission model. For compatibility, they supplement it within the POSIX.1e draft standard.
UNIX-like systems classically accept only three types of rights:
read (Read)
write (Write)
execute (eXecute)
for three classes of users:
the file’s owner
the members of the group to which the file belongs
all other users
However, this model does not cover enough cases, particularly in the enterprise. Enterprise networks require granting rights to certain members of several distinct groups, which under Unix demands various workarounds that are cumbersome to implement and maintain.
The administrator’s intervention is often needed to create the intermediate groups that allow files to be shared among several users or user groups while keeping them confidential from everyone else.
ACLs fill this gap. Any of the three rights (read, write, execute) can be granted to any user or group, without being limited by the number of users one wants to add.
Mac OS X has supported ACLs since version 10.4 (Tiger).
On the network
An ACL on a firewall or filtering router is a list of addresses or ports allowed or denied by the filtering device.
Access Control Lists fall into three main categories: the standard ACL, the extended ACL, and the named extended ACL.
A standard ACL can match only on the source IP address (or part of it, by means of a wildcard mask).
An extended ACL can match on the source and destination IP addresses (with wildcard masks), the protocol type (TCP, UDP, ICMP, IGRP, IGMP, etc.), the source and destination ports, TCP flags, IP TOS (Type of Service), and IP precedence.
A named extended ACL is an extended ACL to which a name has been assigned.
For example, on Linux it is the Netfilter system that handles this kind of filtering. Creating a rule that allows inbound e-mail from any IP address to port 25 (commonly allocated to SMTP) is done with the following command: iptables --insert INPUT --protocol tcp --destination-port 25 --jump ACCEPT
iptables is the command used to configure Netfilter.
ACLs are well suited to protocols whose ports are static (known in advance), such as SMTP, but are not sufficient for software such as BitTorrent, where ports can vary.
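The matching logic described above can be sketched as ordered first-match evaluation with the implicit "deny all" that ends every ACL. The `ACE` class and rules below are illustrative assumptions (wildcard masks and protocols are omitted for brevity).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ACE:
    action: str                 # "permit" or "deny"
    src_ip: str                 # exact IP, or "any"
    dst_port: Optional[int]     # None matches any port (standard-ACL style)

def evaluate(acl: list, src_ip: str, dst_port: int) -> str:
    """First matching entry wins; unmatched traffic hits the implicit deny."""
    for ace in acl:
        if ace.src_ip in ("any", src_ip) and ace.dst_port in (None, dst_port):
            return ace.action
    return "deny"               # implicit deny at the end of every ACL

# Illustrative rule set: block one host outright, then allow SMTP from anywhere.
acl = [
    ACE("deny",   "203.0.113.7", None),
    ACE("permit", "any", 25),
]
```

Because evaluation stops at the first match, rule order matters: putting the `permit any 25` entry first would let the blocked host reach port 25.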
DMZ
De-Militarized Zone
BYOD
Bring Your Own Device - NAC
NAC
Network Access Control
Network access control (NAC) is a computing method that makes access to an enterprise network conditional on a user-identification protocol and on the user’s machine complying with the usage restrictions defined for that network.
Several companies such as Cisco Systems, Microsoft and Nortel Networks have developed frameworks for implementing mechanisms that protect access to the enterprise network and verify that client machines comply with the security rules imposed by the company: antivirus protection status, security updates, presence of a certificate, and many others.
These frameworks have given rise to a good number of “appliances”, hardware specialized in network access control.
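The admission decision those frameworks make can be sketched as a posture check: authenticate the user, then verify the endpoint against policy before granting access. The policy keys and the "quarantine" outcome (typically a remediation VLAN) are illustrative assumptions.

```python
# Hypothetical posture policy: what a compliant endpoint must report.
REQUIRED_POSTURE = {"antivirus_on": True, "patched": True, "has_cert": True}

def admit(authenticated: bool, posture: dict) -> str:
    """Return the NAC decision for a connecting endpoint."""
    if not authenticated:
        return "deny"                    # user identity check failed
    failed = [k for k, v in REQUIRED_POSTURE.items() if posture.get(k) != v]
    if failed:
        return "quarantine"              # e.g. place on a remediation VLAN
    return "allow"
```

A healthy, authenticated machine is allowed; an authenticated machine with outdated antivirus is quarantined for remediation rather than flatly denied.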
DTP
Dynamic Trunking Protocol - Switch spoofing in VLAN hopping
VLAN Hopping
VLAN hopping is a computer security exploit. The principle is that an attacking host on one VLAN gains access to traffic on other VLANs that it should not be able to reach. There are two methods:
Switch Spoofing
Double Tagging
Switch Spoofing
The switch spoofing technique consists of imitating a trunking switch. The Dynamic Trunking Protocol (DTP) is generally used to carry out this attack.
The attack proceeds as follows:
Start by sending DTP frames on an access port
If the DTP mode is DYNAMIC AUTO or DYNAMIC DESIRABLE, the attack is possible
Send a negotiation request to switch the link to trunk mode
Remediation
The switch spoofing technique is only exploitable when a switch’s interfaces are configured to negotiate a trunk.
The first thing to do is disable DTP: switchport nonegotiate
Then, make sure that ports not configured as trunks are configured as access ports: switchport mode access