RHCSA MISSED Flashcards
Create a disk with the EXT4 file system, mount it, add data and change the file system to XFS without damaging the data
dd the data to a file
change the file system
dd the data back to the disk
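A rough command sequence for this, assuming the partition is /dev/vdb1, the mount point is /mnt, and the data is a single file (all names here are placeholders); the backup has to land somewhere outside the partition being reformatted:
mkfs.ext4 /dev/vdb1
mount /dev/vdb1 /mnt
echo "test data" > /mnt/data.txt
dd if=/mnt/data.txt of=/root/data.bak (copy the data off to a file)
umount /mnt
mkfs.xfs -f /dev/vdb1 (-f overwrites the old ext4 signature)
mount /dev/vdb1 /mnt
dd if=/root/data.bak of=/mnt/data.txt (copy the data back)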
Look at extensive info about a disk
parted /dev/nvme0n2 print (or run print inside the interactive parted prompt)
Using parted, make an ext3 that is primary and 1024 megs big
View the file to confirm if the kernel recognizes the partition
Label the partition as /work
Mount the partition as the label via fstab
Remember, parted won’t make the file system, only prepare your disk for its file system creation.
unit GB (if you want in gigs)
mklabel msdos or gpt
mkpart primary ext3 1024 2048
mkpartfs works too I think
Guide says ext3 isn’t available with mkpartfs so you’ll need to mkfs.ext3
cat /proc/partitions
FOR EXT4
e2label /dev/sda6 /work (label can be anything)
FOR XFS
xfs_admin -L label /dev/sda2
LABEL=/work /work ext3 defaults 1 2
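Putting that together, a possible end-to-end run (device /dev/sda, partition number 6, and the mount point are just illustrative; the partition number you actually get depends on the existing layout, and mklabel is only needed if the disk has no partition table yet):
parted /dev/sda
(parted) unit MB
(parted) mkpart primary ext3 1024 2048
(parted) quit
cat /proc/partitions
mkfs.ext3 /dev/sda6
e2label /dev/sda6 /work
mkdir /work
echo "LABEL=/work /work ext3 defaults 1 2" >> /etc/fstab
mount -a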
Remove minor number 3 (partition 3) on sda
Resize partition 2
parted /dev/sda
rm 3
resize 2 1024 2048 (on parted 3.x, which RHEL 8+ ships, the command is resizepart 2 2048MB, giving only the new end)
Can you add /boot partition to a logical volume?
No, the boot loader can’t read it. If / partition is on a logical volume create a separate /boot partition.
What are the three aspects of LVM
Logical Volume Management
Physical Volumes - disks themselves
Volume groups - aggregation of physical volumes
Logical Volumes - get the mount points and file systems. When a logical volume reaches full capacity, free space from the volume group can be added to it.
Can a Logical Volume contain partitions?
Yes, like / and /home
If you don’t want to use LVM and would prefer to use RAID, what would you use?
Disk Druid
What is the LVM default configuration
/boot <- this is a non-LVM partition residing on the disk's first partition (sda1)
The remaining space goes into a volume group.
Two logical volumes are created from the volume group.
One goes to swap of the recommended size and the remainder goes to /.
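Roughly what that default layout looks like in lsblk output (sizes and the rhel VG/LV names below are just the typical installer defaults, not fixed values):
NAME          TYPE MOUNTPOINT
sda           disk
├─sda1        part /boot
└─sda2        part           <- LVM physical volume
  ├─rhel-root lvm  /
  └─rhel-swap lvm  [SWAP]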
What is VDB
Virtio Block Device
B - second device, A would be the first
What are three ways to display physical volumes
What are the commands to add and remove /dev/sda and /dev/sdb?
pvdisplay
pvs
pvscan
pvcreate /dev/sda /dev/sdb
pvremove /dev/sda /dev/sdb
(if the disks are part of a volume group you’ll have to remove them from that first with vgreduce)
What is an EXTENT?
In the volume group, disk space available to be allocated is divided into fixed-sized units called extents.
In physical volumes extents are referred to as physical extents.
What is the default EXTENT size
Disk space is divided into 4MB extents.
What does the EXTENT size determine?
Minimum amount of space a logical volume can be increased or decreased.
What option do you use to modify the extent size?
What options do you use to limit the physical and logical volumes the volume group can have?
vgcreate -s
vgcreate -p -l
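For example, a hedged one-liner combining them (vg1 and the vdb devices are placeholders): 16M extents, at most 2 physical volumes and 10 logical volumes:
vgcreate -s 16M -p 2 -l 10 vg1 /dev/vdb1 /dev/vdb2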
What are the three different ways to check volume group info?
vgdisplay (add a VG name, e.g. vgdisplay vg1, to limit output to one VG)
vgs
vgscan
Add vdb3 to your vg1 volume group, then rename it
vgextend vg1 /dev/vdb3
vgrename vg1 myvg
You have an inactive volume group “databases” that should be added to “myvg”
Both volume groups must have the same physical extent size
vgmerge -v myvg databases
You need to remove a PV from a VG. The PV is currently in use and you need to migrate its data to the other PVs; how do you do this?
Now let's say you have no spare extents to move the data to: create vdb4 as a physical volume, add it to the VG, and move the data to it
Now remove vdb3 and verify
pvmove /dev/vdb3
pvcreate /dev/vdb4
vgextend myvg /dev/vdb4
pvmove /dev/vdb3 /dev/vdb4
vgreduce myvg /dev/vdb3
pvs
Why would you split a device from a group?
vgsplit is a combo of vgreduce and vgcreate, so it just makes things easier.
Basically a split gives a pv from one vg to another.
vg1 has /dev/vdb1-3 available. give vdb3 to the vg vg2
vgsplit vg1 vg2 /dev/vdb3
Say that you have a mounted logical volume named mylv and want to transfer its volume group to a new system. How would you do this?
umount /mnt/mylv (unmount the LV; the mount point here is just an example)
vgchange -an myvg (this deactivates your vg)
vgexport myvg (make it inaccessible from the system)
pvscan
plug new disks into server
vgimport myvg
vgchange -ay myvg (activate vg)
Remove LVM volume groups
If the VG is clustered, first stop the lockspace on the other servers. Don't run that on the server where you're performing the removal!
VG must contain no logical volumes
vgchange --lockstop vg1
vgremove vg1
Create a logical volume named mylv that is 500M in size from myvg
What option would you use if you wanted to do a raid configuration rather than linear (normal)?
Look at your logical volume info in three different ways
make mylv xfs
lvcreate -n mylv -L 500M myvg
--type raid1 (or raid0, raid5, etc.)
lvs
lvdisplay
lvscan
mkfs.xfs /dev/myvg/mylv
A striped logical volume writes data across a predetermined number of physical volumes in round-robin fashion, so I/O can be done in parallel. Even with LVM, the striping is across the actual disks themselves, not anything virtual.
Create a RAID0 striped logical volume with three strips and a stripe size of 4kB
view the RAID0 striped logical volume
You need at least 3 physical volumes
lvcreate --type=raid0 -L 2G --stripes=3 --stripesize=4 -n mylv myvg
lvs -a -o +devices,segtype myvg
You have an LV named mylv1 that’s mounted at /mnt and want to name it mylv
umount /mnt
lvrename myvg mylv1 mylv
OR
lvrename /dev/myvg/mylv1 /dev/myvg/mylv
Remove a disk from a logical volume:
First view the free space on the physical volumes; if there are enough free extents on the other PVs in the VG, move the data
If an LV contains a pv that fails, you won’t be able to use that LV. Remove the PV from the VG
You must first move the extents on the physical volume to a diff disk or set of disks
pvs -o+pv_used
pvmove /dev/vdb3
vgreduce myvg /dev/vdb3
vgreduce --removemissing myvg
We have an LV mounted at /mnt and want to remove mylv1 from myvg.
The LV is in a cluster, so deactivate it on the other servers first.
umount /mnt
lvchange --activate n myvg/mylv1
lvremove /dev/myvg/mylv1
Extend myvg with /dev/vdb3
grow mylv to 3G
grow mylv to take 100% of the free space in myvg
WARNING: EXTENDING THE LV ALONE WON'T EXTEND THE FILE SYSTEM ITSELF (the -r flag below resizes the file system as well)
vgextend myvg /dev/vdb3
lvextend -r -L 3g /dev/myvg/mylv
lvextend -r -l +100%FREE /dev/myvg/mylv
Reduce the mylv to 500M
Reduce the mylv by 64M
What is the option to use here that will make this safer so you know you’re not reducing the LV lower than what it is using?
--resizefs <- this attempts to resize the file system first; if the data won't fit in the reduced size, the LV isn't reduced.
lvreduce --resizefs -L 500M myvg/mylv
lvreduce --resizefs -L -64M myvg/mylv
Sort pvs by name, size, free space, least to greatest
then
greatest to least free
pvs -o pv_name,pv_size,pv_free -O pv_free
pvs -o pv_name,pv_size,pv_free -O -pv_free (prefixing the sort field with - reverses the order)
Have vgs display in BINARY base 2 = gigs (1024)
then in base 10 decimal gigs (1000)
Where can you specify this to become standard
show lvs output with rounding indicators (> or < for slightly more or less than displayed)
LVs get rounded when their size isn't an exact multiple of the display unit, e.g. GiB.
vgs --units g (lowercase = binary, multiples of 1024)
vgs --units G (uppercase = decimal, multiples of 1000)
/etc/lvm/lvm.conf
lvs --units r mylv
When performing pvs or whatever reporting command you’re using, what is a way to filter what you want displayed?
For instance, say you only want to display nvme devices
pvs -S name=~nvme
pvs -S help (lists the fields and operators available for selection criteria)
Sometimes LVM devices will be attached to a host and passed to a guest VM.
How do you prevent the VM storage from being exposed to the host?
Filter the path to exclude the device
Further protect the devices
8.1
Configure LVM device access and LVM system ID
vi /etc/lvm/lvm.conf
filter = [
Match System ID on host and VM
vi /etc/lvm/lvm.conf
system_id_source = "uname"
Set the VGs system ID to match VM system ID
vgchange --systemid <vm_system_id> <vm_vg_name>
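A sketch of what those two lvm.conf pieces could look like, assuming the guest's disk shows up on the host as /dev/sdb (placeholder path):
devices {
    filter = [ "r|^/dev/sdb$|", "a|.*|" ]   # reject the VM's disk, accept everything else
}
global {
    system_id_source = "uname"   # VGs created here get this host's uname-based system ID
}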
Define RAID
0
1
4
5
6
10
Linear
0 - striping- no redundancy but fast, bits of info on all disks
1 - mirror - same data on all disks
4 - parity and striping - e.g. 3 disks striped plus one dedicated parity disk for redundancy. The single parity disk becomes a write performance bottleneck
5 - Same as 4 but parity goes to all disks, no bottleneck
6 - double parity so better redundancy, two drives can fail
10 - 1 and 0 - mirroring and striping, 4 disks
Linear - one disk fills up, next gets filled up
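For reference, hedged lvcreate equivalents (myvg, the LV names, and the sizes are placeholders, and each layout needs enough PVs in the VG; linear is the default when no --type is given):
lvcreate --type raid0 --stripes 2 -L 2G -n lv_r0 myvg
lvcreate --type raid1 -m 1 -L 1G -n lv_r1 myvg
lvcreate --type raid5 --stripes 3 -L 3G -n lv_r5 myvg
lvcreate --type raid6 --stripes 4 -L 4G -n lv_r6 myvg
lvcreate --type raid10 -m 1 --stripes 2 -L 2G -n lv_r10 myvg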
You’ve created vg001, create an LV named origin from it.
Create a snapshot that’s 100M
Display the origin volume and snapshot volume current use percentage, with the devices column at the end
Have your snapshot auto-extend so it doesn't become unusable: make a 1G snap extend to 1.2G when usage exceeds 700M
How would you extend manually?
lvcreate -L 1G -n origin vg001
lvcreate --size 100M --name snap --snapshot /dev/vg001/origin
lvs -a -o +devices
vi /etc/lvm/lvm.conf
snapshot_autoextend_threshold = 70
(by default it’s set to 100 which means disabled, minimum value is 50)
snapshot_autoextend_percent = 20
lvextend -L+100M /dev/vg001/snap
How does an LVM snapshot work?
If you create a snapshot it will originally be empty and slowly take up more and more space. It doesn’t have to be the size of the origin, just enough to contain changes.
How this works: when you delete or modify something on the origin, a change is recorded on the snapshot. It saves what was deleted, or what a file originally contained before it was modified (copy-on-write).
It does this to save space. If your snap were a one-for-one copy of your origin, that would take unnecessary space; by only recording the small changes needed for recovery, it uses almost no data.
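A quick way to watch that happen, assuming the origin from the previous card is mounted at /mnt (placeholder mount point):
lvs vg001 (Data% on snap starts near 0)
dd if=/dev/zero of=/mnt/junk bs=1M count=50 (change some data on the origin)
lvs vg001 (snap's Data% grows as the old blocks get copied into it)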
Merge a snapshot
View the origin volume via lvs, append the devices column
lvconvert --merge vg001/snap
lvs -a -o +devices
What is thin provisioning
You can provision a larger pool of block storage that may be larger in size than the physical device storing the data. This is called “over-provisioning” and it’s viable because individual blocks are not allocated until they are actually used.
Data is allocated on an as-needed basis.
11.2
Create a thin provisioned pool out of vg001 that is “100M”
Then create a thin pool and thin volume that shows 1T of virtual capacity while really only using 100M, with a chunk size of 256kB. Also specify the number of stripes and the amount of data written per stripe. Call your volume thinvolume
create a thin volume on its own
lvcreate -L 100M -T vg001/mythinpool
lvcreate -i 2 -I 64 -c 256 -L 100M vg001/thinpool -V 1T --name thinvolume
lvcreate -V 1G vg001/mythinpool -n thinvolume
lvcreate -V 1G --thin vg001/thinpool -n thinvolume (this works too)
11.2, looking at chunk size: -i is the number of disks to stripe across, -I 64 is the amount in KB written to a disk before striping to the next, -c is the chunk size, and -V is the virtual size of the storage. -V works the same whether you use --thinpool or --thin; it's just the virtual size.
lvcreate -i 2 -I 64 -c 256 -L 100M -T vg001/thinpool -V 1T --name thinvolume
Convert a logical volume to a thin pool
Convert another logical volume into thin pool meta data
lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2 (the metadata LV must be a different LV than the pool data LV)
How is a thinly provisioned snapshot different from a regular one?
thinly provisioned snapshots share the same space with the origin.
You don’t need to merge or activate, just remove the origin and leave the snapshot.
What is chunk size?
The unit of disk space in which data is allocated and copied for snapshot/thin pool storage.
Smaller chunks need more metadata and hinder performance; bigger chunks need less metadata but can waste space.
Create a snapshot named mysnapshot1 out of vg001/thinvolume
Remember if you specify size it won’t be a thinly-provisioned snapshot
lvcreate -s --name mysnapshot1 vg001/thinvolume
How thin snapshots work
They point at the same pool as the thin LV they’re a snapshot for, so data is shared.
This can actually save some disk utilization: new data isn't copied into the snapshot; a new block is allocated for the main thin LV to point to, while the snap keeps pointing at the original block.
You don't have to merge snaps and origins. Just delete the origin silly!
Traditionally snaps have their own separate volume where they store changes, which must be copied back to the origin (merged)
Create a thin snapshot volume
WARNING: DO NOT GIVE YOUR THIN SNAPSHOT A DATA LIMIT/SIZE WITH -L THIS WILL CREATE A REGULAR VOLUME
lvcreate -s --name mysnapshot vg001/thinvolume
To use the snapshot you have to remove the origin volume and then activate it. After mounting, if you’re already in the directory, leave and go back to see the changes.
For this example you will need a thin pool created named “pool”.
You will also need an LV named “origin” and your snapshot should be named mythinsnap
Create a thin snapshot of the “external origin”
Create a snapshot of the snapshot
Deactivate your lv so you can create the snapshot:
lvchange -an --permission r vg001/origin <- deactivate the origin and make it read-only
lvcreate -s --thinpool vg001/pool origin --name mythinsnap
lvcreate -s vg001/mysnapshot1 --name mysnapshot2
You can reactivate your LV and make it rw again with the below command.
lvchange -ay --permission rw vg001/origin <- activate
What does enabling caching on a Logical Volume do?
What different options do you have?
What components make up caching?
Improves performance
A second LV is created just for caching. Normally a faster device is used for caching, e.g. an SSD caches for a main LV that lives on a hard drive
OPTIONS
dm-cache - speeds up access to frequently used data by caching it on the faster volume. Caches both reads and writes. - volume type = cache
dm-writecache - write only. The faster volume stores write operations and migrates them to the slower disk in the background. - volume type = writecache
COMPONENTS
Main LV - larger, slower, original
Cache pool LV - LV used for caching. Has two sub-LVs: data for holding cache data and metadata for managing cache data
Cachevol LV - Linear LV used for caching data from the main LV. You can’t configure separate disks for data/metadata. Cachevol can only be used with dm-cache or dm-writecache
These must all be in the same volume group
Cachevol vs Cachepool
Cachevol - faster device stores both the cached copies of data blocks and metadata for managing cache.
Cachepool - separate devices can store the cached copies of data blocks and the metadata for managing the cache. dm-writecache can't be used with a cachepool, only dm-cache can; that's the practical difference from a cachevol.
When you create a cache, what device will you see at the forefront
New device with the original’s name
Create a dm-cache cachevol on your fast devices
attach the cachevol to the main logical volume
verify
lvcreate --size 5G --name fastboi vg001 /dev/nvme0n2 (the SSD that's part of the VG; device name is an example)
lvconvert --type cache --cachevol fastboi vg001/origin
lvs --all --options +devices vg001
Enable dm-cache with a cachepool for an LV and verify
create cachepool on fast device:
lvcreate --type cache-pool --size 5G --name fastpool vg001 /dev/nvme0n1
Attach cachepool to main logical volume:
lvconvert --type cache --cachepool fastpool vg001/origin
lvs --all --options +devices vg001
Enable dm-writecache caching for an LV
deactivate main LV:
lvchange -an vg001/origin
Create a deactivated cachevol volume on the fast device:
lvcreate -an --size 5G --name fastvol vg001 /dev/nvme0n1
Attach cachevol to main LV:
lvconvert --type writecache --cachevol fastvol vg001/origin
activate the resulting volume (should be the same name I think):
lvchange -ay vg001/origin
lvs --all --options +devices vg001
Disable dm-cache or dm-writecache
deactivate the LV
lvchange -an vg001/origin
detach the cachevol or cachepool
lvconvert --splitcache vg001/origin
lvs --all --options +devices vg001
What is autoactivation for LVM
Event-based activation of LVM during system startup.
Devices becoming available on system = device online events
systemd/udev run lvm2-pvscan, which runs:
pvscan --cache -aay <device> - this reads the named device; if the device belongs to a VG, pvscan checks whether all PVs for that VG are online, and if so it activates the VG's LVs
change autoactivation on vgs and lvs
vgchange --setautoactivation <y/n>
lvchange --setautoactivation <y/n>
OR
/etc/lvm/lvm.conf
If you turn off global/event_activation then it will only autoactivate at startup
Setting activation/auto_activation_volume_list to an empty list disables autoactivation entirely. Or you can just set it to certain VGs and LVs
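A sketch of those lvm.conf settings (the values are just examples):
global {
    event_activation = 0   # 0 = autoactivate only at startup, not on later device events
}
activation {
    # empty list = disable autoactivation entirely; or list specific VGs, LVs, or tags
    auto_activation_volume_list = [ "vg001", "vg001/lv001", "@mytag" ]
}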
What is the activation skip flag used for?
How can you tell if an LV is skipped?
Skip this LV during activations
lvchange --setactivationskip <y|n>
It will have a k at the end of its attributes
thin1s1 vg Vwi---tz-k 1.00t pool0 thin1
Set and then reset volume activation skip flag
Remove the skip flag from an LV
lvchange -an --setactivationskip y vg001/lv001
OR
lvchange -k y vg001/lv001
to activate
lvchange -ay -K vg001/lv001
lvchange -kn vg001/lv001
Say that we have LVMs that are shared between multiple servers.
Give a command that activates the LV exclusively, so it can only be active on one server
Activate the LV in shared mode, allowing multiple hosts to activate it.
lvchange -aey (exclusive; on a shared VG plain -ay also defaults to exclusive)
lvchange -asy
what options do you have for allowing lvs to be activated that have missing disks?
lvchange --activationmode partial|degraded|complete
Complete - LVs with no missing PVs can be activated
Degraded - Raid LVs with missing PVs can be activated
Partial - Any LV with missing PVs to be activated
Allocate extents only from /dev/sda; if it doesn't have enough, sdb will be used too
lvcreate -n lv1 -L1g vg001 /dev/sda
Create a raid1 LV making the first image allocated from sda and the second from sdb
lvcreate --type raid1 -m 1 -n lvraid -L 1G vg001 /dev/sda /dev/sdb
-m 1 = two images
Prevent allocation of physical extents on /dev/sdk1
then turn it back on
pvchange -x n /dev/sdk1
pvchange -x y /dev/sdk1
What are LVM tags used for?
Group LVs together; this makes it easier if you need to activate them all at once
List all lvs with the database tag
List currently active host tags
Add tag to an lv
(this is the same for VGs PVs even for their create methods)
Next remove the tag
lvs @database
lvm tags
lvchange --addtag @tag lv001
lvchange --deltag @tag lv001
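For instance, once a few LVs carry the database tag, they can all be activated in one shot (the names here are placeholders):
lvchange --addtag database vg001/lv001
lvchange --addtag database vg001/lv002
lvchange -ay @database (should activate everything tagged database)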