Linux Flashcards

(296 cards)

1
Q

The Red Hat Family

A

Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux and Oracle Linux

2
Q

Key Facts About the Red Hat Family

A
  • Fedora often serves as an upstream testing platform for RHEL.
  • CentOS and Scientific Linux are close clones of RHEL, while Oracle Linux is mostly a copy with some changes.
  • Kernel version 3.10 is used in RHEL/CentOS 7.
  • It supports hardware platforms such as x86, x86-64, Itanium, PowerPC, and IBM System z.
  • It uses the RPM-based yum package manager (we cover it in more detail later) to install, update, and remove packages in the system.
  • RHEL is widely used by enterprises which host their own systems.
3
Q

The SUSE Family

A

SUSE, SUSE Linux Enterprise Server (SLES), and openSUSE

4
Q

Key Facts About the SUSE Family

A
  • SUSE Linux Enterprise Server (SLES) is upstream for openSUSE.
  • Kernel version 4.12 is used in openSUSE Leap 15.
  • It uses the RPM-based zypper package manager (we cover it in more detail later) to install, update, and remove packages in the system.
  • It includes the YaST (Yet Another Setup Tool) application for system administration purposes.
  • SLES is widely used in retail and many other sectors.
5
Q

The Debian Family

A

Debian, Ubuntu, Linux Mint, etc.

6
Q

Key Facts About the Debian Family

A
  • The Debian family is upstream for Ubuntu, and Ubuntu is upstream for Linux Mint and others.
  • Kernel version 4.15 is used in Ubuntu 18.04 LTS.
  • It uses the DPKG-based APT package manager (using apt-get, apt-cache, etc. which we cover in more detail later) to install, update, and remove packages in the system.
  • Ubuntu has been widely used for cloud deployments.
  • While Ubuntu is built on top of Debian and is GNOME-based under the hood, it differs visually from the interface on standard Debian, as well as other distributions.
7
Q

The boot process

A
  1. Power ON
  2. BIOS (Basic Input/Output System) - initializes screen and keyboard, and tests the main memory (POST - Power-On Self-Test)
  3. Master Boot Record (MBR), also known as the first sector of the hard disk
  4. Boot loader (e.g. GRUB)
  5. Kernel (Linux OS)
  6. Initial RAM disk - initramfs image
  7. /sbin/init (parent process)
  8. Command shell using getty
  9. X Window System
8
Q

A number of boot loaders exist for Linux

A

The most common ones are GRUB (GRand Unified Boot loader), ISOLINUX (for booting from removable media), and Das U-Boot (for booting on embedded devices/appliances).

9
Q

For systems using the BIOS/MBR method, the boot loader resides at the {1} of the hard disk, also known as the {2}. The size of the {2} is just {3} bytes. In this stage, the boot loader examines the partition table and finds a bootable partition. Once it finds a bootable partition, it then searches for the second stage boot loader, for example GRUB, and loads it into RAM (Random Access Memory).

A
  1. first sector
  2. Master Boot Record (MBR)
  3. 512
10
Q

For systems using the EFI/UEFI method, UEFI firmware reads its {1} data to determine which {2} is to be launched and from where (i.e. from which disk and partition the EFI partition can be found). The firmware then launches the {2}, for example GRUB, as defined in the boot entry in the firmware’s boot manager.

A
  1. Boot Manager
  2. UEFI application

11
Q

The second stage boot loader resides under…

A

/boot. A splash screen is displayed, which allows us to choose which operating system (OS) to boot. After choosing the OS, the boot loader loads the kernel of the selected operating system into RAM and passes control to it. Kernels are almost always compressed, so the kernel's first job is to uncompress itself. After this, it will check and analyze the system hardware and initialize any hardware device drivers built into the kernel.

12
Q

The initramfs filesystem image contains

A

programs and binary files that perform all actions needed to mount the proper root filesystem, such as providing kernel functionality for the needed filesystem and device drivers for mass storage controllers, using a facility called udev (for user device), which is responsible for figuring out which devices are present, locating the device drivers they need to operate properly, and loading them. After the root filesystem has been found, it is checked for errors and mounted.

13
Q

The mount program instructs the operating system that a filesystem is ready for use, and associates it with a particular point in the overall hierarchy of the filesystem (the mount point). If this is successful,

A

the initramfs is cleared from RAM and the init program on the root filesystem (/sbin/init) is executed.

14
Q

init handles

A

the mounting and pivoting over to the final real root filesystem. If special hardware drivers are needed before the mass storage can be accessed, they must be in the initramfs image.

15
Q

The initial RAM Disk

A
  1. mounting the proper root filesystem
  2. providing kernel functionality
  3. locating devices
  4. locating drivers and loading them
  5. checking the root filesystem for errors
16
Q

Most distributions start six text terminals and one graphics terminal starting with F1 or F2. Within a graphical environment, switching to a text console requires pressing

A

CTRL-ALT + the appropriate function key (with F7 or F1 leading to the GUI).

17
Q

The boot loader loads both

A

the kernel and an initial RAM–based file system (initramfs) into memory, so it can be used directly by the kernel.

18
Q

When the kernel is loaded in RAM, it immediately initializes and configures

A

the computer’s memory and also configures all the hardware attached to the system. This includes all processors, I/O subsystems, storage devices, etc. The kernel also loads some necessary user space applications.

19
Q

/sbin/init and Services

A

Once the kernel has set up all its hardware and mounted the root filesystem, the kernel runs /sbin/init. This then becomes the initial process, which then starts other processes to get the system running. Most other processes on the system trace their origin ultimately to init; exceptions include the so-called kernel processes. These are started by the kernel directly, and their job is to manage internal operating system details.

20
Q

Besides starting the system, init is responsible

A

for keeping the system running and for shutting it down cleanly.

21
Q

Startup Alternatives

A

1) Upstart
- Developed by Ubuntu and first included in 2006
- Adopted in Fedora 9 (in 2008) and in RHEL 6 and its clones.

2) systemd
- Adopted by Fedora first (in 2011)
- Adopted by RHEL 7 and SUSE
- Replaced Upstart in Ubuntu 16.04.

22
Q

systemd Features

A
  • Systems with systemd start up faster than those with earlier init methods. This is largely because it replaces a serialized set of steps with aggressive parallelization techniques, which permits multiple services to be initiated simultaneously.
  • Complicated startup shell scripts are replaced with simpler configuration files, which enumerate what has to be done before a service is started, how to execute service startup, and what conditions the service should indicate have been accomplished when startup is finished. One thing to note is that /sbin/init now just points to /lib/systemd/systemd; i.e. systemd takes over the init process.
23
Q

Starting, stopping, restarting a service (using nfs as an example) on a currently running system:
{1}
Enabling or disabling a system service from starting up at system boot:
{2}

A

1) $ sudo systemctl start|stop|restart nfs.service

2) $ sudo systemctl enable|disable nfs.service

24
Q

Linux Filesystems

A

1) Conventional disk filesystems: ext2, ext3, ext4, XFS, Btrfs, JFS, NTFS, etc.
2) Flash storage filesystems: ubifs, JFFS2, YAFFS, etc.
3) Database filesystems
4) Special purpose filesystems: procfs, sysfs, tmpfs, squashfs, debugfs, etc.

25
A partition is a
physically contiguous section of a disk, or what appears to be so in some advanced setups.
26
A filesystem is a
method of storing/finding files on a hard disk (usually in a partition).
27
The Filesystem Hierarchy Standard
Linux systems store their important files according to a standard layout called the Filesystem Hierarchy Standard (FHS)
28
Match FHS directories to their purposes: 1) /bin/ 2) /boot/ 3) /dev/ 4) /etc/ 5) /home/ 6) /lib/ 7) /media/ 8) /mnt/ 9) /opt/ 10) /sbin/ 11) /srv/ 12) /tmp/ 13) /usr/ 14) /var/ 15) /root/ 16) /proc/
1) essential user command binaries 2) static files of the boot loader 3) device files 4) host-specific system configuration 5) user home directories 6) essential shared libraries and kernel modules 7) mount point for removable media 8) mount point for a temporarily mounted filesystem 9) add-on application software packages 10) system binaries 11) data for services provided by this system 12) temporary files 13) multi-user utilities and applications (secondary hierarchy: bin, include, lib, local, sbin, share) 14) variable files 15) home directory for the root user 16) virtual filesystem documenting kernel and process status as text files
29
Choosing a distribution: 1) for server 2) for desktop 3) for embedded
1) RHEL, CentOS, Ubuntu Server, SLES, Debian 2) Ubuntu, Fedora, Linux Mint, Debian 3) Yocto, OpenEmbedded, Android
30
Many installers can do an installation completely automatically, using a configuration file to specify installation options. This file is called a {1} for Red Hat-based systems, an {2} for SUSE-based systems, and a {3} for Debian-based systems.
1) Kickstart file 2) AutoYAST profile 3) Preseed file
31
What does a display manager do?
1) Display management 2) Loads the X server 3) Manages graphical logins
32
A desktop environment consists of a
session manager, which starts and maintains the components of the graphical session, and the window manager, which controls the placement and movement of windows, window title-bars, and controls.
33
Seamless desktop environment
Session manager + Window manager + A set of utilities
34
The default display manager for GNOME is called
gdm
35
To show hidden files, select Show Hidden Files from the menu or press
CTRL-H
36
To open the File Manager from the command line, on most systems simply type
nautilus
37
Another quick way to access a specific directory is to press
CTRL-L
38
The default text editor in GNOME is
gedit
39
Deleting a file in Nautilus will automatically move the deleted files to the
.local/share/Trash/files/
40
To delete a file without trashing it
select the file or directory you want to permanently delete and press Shift-Delete
41
Find the latest modified file in /var/log
ls -t /var/log | head -1 (the -t option sorts by modification time, newest first)
42
The X server, which actually provides the GUI, uses what configuration file?
/etc/X11/xorg.conf
43
The Network Time Protocol (NTP) is the most popular and reliable protocol for setting the
local time via Internet servers.
44
More detailed configuration is possible by editing the standard NTP configuration file, which is located at...
/etc/ntp.conf
45
Ascertain your current screen resolution by typing at the command line
xdpyinfo | grep dim
46
For Debian-based systems, the higher-level package management system is the
apt (Advanced Package Tool)
47
The underlying package manager for Debian-based systems.
dpkg
48
Most input lines entered at the shell prompt have three basic elements:
1) Command 2) Options 3) Arguments.
49
creating sudo user
1) su 2) echo "username ALL=(ALL) ALL" > /etc/sudoers.d/username 3) chmod 440 /etc/sudoers.d/username
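The three steps above can be sketched safely without root. This sketch writes the sudoers line into a scratch directory standing in for /etc/sudoers.d (which requires root to modify on a real system); "student" is a hypothetical username.

```shell
# Sketch of the sudo-user steps, using a scratch directory as a
# stand-in for /etc/sudoers.d; "student" is a hypothetical username.
sudoers_dir=$(mktemp -d)
echo "student ALL=(ALL) ALL" > "$sudoers_dir/student"
chmod 440 "$sudoers_dir/student"   # sudoers drop-in files must be mode 440
ls -l "$sudoers_dir/student"
```

On a real system the file would be /etc/sudoers.d/student, created as root (after su in step 1).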
50
Virtual Terminals (VT) are console sessions that use the entire display and keyboard outside of a graphical environment. Such terminals are considered "virtual" because
although there can be multiple active terminals, only one is visible at a time.
51
To switch between VTs, press
CTRL-ALT-function key for the VT. For example, press CTRL-ALT-F6 for VT 6. Actually, you only have to press the ALT-F6 key combination if you are in a VT and want to switch to another VT.
52
Turning Off the Graphical Desktop
$ sudo systemctl stop gdm (or sudo telinit 3)
53
restart Graphical Desktop (after logging into the console)
$ sudo systemctl start gdm (or sudo telinit 5)
54
The preferred method to shut down or reboot the system is to use the {1} command. This sends a warning message, and then prevents further users from logging in. The init process will then control shutting down or rebooting the system. It is important to always shut down properly; failure to do so can result in damage to the system and/or loss of data.
1) shutdown
55
The halt and poweroff commands issue
shutdown -h to halt the system.
56
reboot issues
shutdown -r
57
When administering a multiuser system, you have the option of notifying all users prior to shutdown, as in:
$ sudo shutdown -h 10:00 "Shutting down for scheduled maintenance."
58
In general, executable programs and scripts should live in the
/bin, /usr/bin, /sbin, /usr/sbin directories, or somewhere under /opt, or /usr/local/bin and /usr/local/sbin, or in a directory in a user's account space, such as /home/student/bin.
59
One way to locate programs is to employ the
which
60
A broader way to locate programs
whereis
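A minimal sketch of both locator commands from the two cards above, using ls as the target; the exact paths printed depend on the distribution. The whereis call is guarded because very minimal systems may not install it.

```shell
# which prints the first match found in $PATH
which ls
# whereis also reports related man pages and sources (guarded:
# not every minimal system has it)
command -v whereis > /dev/null && whereis ls
```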
61
Multiple slashes (/) between directories and files are allowed, but all but
one slash between elements in the pathname are ignored by the system.
62
List all files, including hidden files
ls -a
63
Suppose that file1 already exists. A hard link, called file2, is created with the command
$ ln file1 file2
64
The {1} option to ls prints out in the first column the inode number, which is a unique quantity for each file object
-i
65
Soft (or Symbolic) links are created with the -s option, as in:
ln -s file1 file3
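The two link types from the cards above can be compared in a scratch directory: hard-linked names share one inode, while a symbolic link is a separate object that merely stores the target name.

```shell
# Hard vs. soft links in a scratch directory
cd "$(mktemp -d)"
echo hello > file1
ln file1 file2             # hard link: the inode's link count becomes 2
ln -s file1 file3          # symbolic link to the name "file1"
ls -li file1 file2 file3   # file1 and file2 show the same inode number
```

Removing file1 leaves file2's data intact but turns file3 into a dangling link.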
66
push directory, pop directory, list directories
pushd, popd, dirs
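A short sketch of the directory stack (bash builtins): pushd saves the current directory and changes to a new one, dirs shows the stack, and popd returns.

```shell
# Directory stack demo (bash builtins)
pushd /tmp > /dev/null   # save current directory, cd to /tmp
dirs                     # show the stack: /tmp on top
popd > /dev/null         # pop /tmp, return to where we started
pwd
```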
67
cat
Used for viewing files that are not very long; it does not provide any scroll-back.
68
tac
Used to look at a file backwards, starting with the last line.
69
less
Used to view larger files because it is a paging program. It pauses at each screen full of text, provides scroll-back capabilities, and lets you search and navigate within the file. Note: Use / to search for a pattern in the forward direction and ? for a pattern in the backward direction. An older program named more is still used, but has fewer capabilities: "less is more".
70
tail
Used to print the last 10 lines of a file by default. You can change the number of lines by doing -n 15 or just -15 if you wanted to look at the last 15 lines instead of the default.
71
head
The opposite of tail; by default, it prints the first 10 lines of a file.
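head and tail are easy to check against a generated file:

```shell
# View the first and last lines of a 20-line file
cd "$(mktemp -d)"
seq 20 > lines.txt
head -n 3 lines.txt   # prints 1 2 3, one per line
tail -n 3 lines.txt   # prints 18 19 20, one per line
```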
72
touch is often used to
set or update the access, change, and modify times of files. By default, it resets a file's timestamp to match the current time. You can also create an empty file using touch: $ touch <filename>. touch provides several useful options. For example, the -t option allows you to set the date and timestamp of the file to a specific value, as in: $ touch -t 12091600 myfile. This sets myfile's timestamp to 4 p.m., December 9th (12 09 1600).
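The -t example above can be verified with stat (GNU coreutils assumed):

```shell
# Set a file's timestamp to Dec 9, 4 p.m. of the current year
cd "$(mktemp -d)"
touch -t 12091600 myfile
stat -c %y myfile   # modification time shows ...12-09 16:00:00...
```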
73
rm -i
Interactively remove a file (prompt before each removal)
74
The PS1 variable is the
character string that is displayed as the prompt on the command line. Most distributions set PS1 to a known default value, which is suitable in most cases. However, users may want custom information to show on the command line.
75
there are three standard file streams (or descriptors) always open for use:
standard input (standard in or stdin), standard output (standard out or stdout) and standard error (or stderr). stdin is file descriptor 0, stdout is file descriptor 1, and stderr is file descriptor 2
76
send input data to program
$ do_something < input_file
77
If you want to send the output to a file
$ do_something > output-file. Note: because stderr is not the same as stdout, error messages will still be seen in the terminal window in the above example.
78
If you want to redirect stderr to a separate file
$ do_something 2> error-file. Note: by the same logic, do_something 1> output-file is the same as do_something > output-file.
79
A special shorthand notation can send anything written to file descriptor 2 (stderr) to the same place as file descriptor 1 (stdout)
$ do_something > all-output-file 2>&1. bash permits an easier syntax for the above: $ do_something >& all-output-file
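The three redirection cards above can be exercised with ls, which writes its listing to stdout and a "No such file" complaint to stderr:

```shell
# Splitting and merging stdout/stderr in a scratch directory
cd "$(mktemp -d)"
touch exists
ls exists no-such-file > out.txt 2> err.txt || true  # split the streams
ls exists no-such-file > all.txt 2>&1 || true        # merge into one file
cat err.txt   # only the error message lands here
```

The || true is there because ls exits nonzero when a file is missing.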
80
locate
performs a search taking advantage of a previously constructed database of files and directories on your system, matching all entries that contain a specified character string. This can sometimes result in a very long list. locate utilizes a database created by a related utility, updatedb. Most Linux systems run this automatically once a day.
81
wildcards
? - matches any single character
* - matches any string of characters
[set] - matches any character in the set, for example [adf] will match any occurrence of "a", "d", or "f"
[!set] - matches any character not in the set
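The patterns can be tried against a handful of scratch files; echo makes the shell's expansion visible:

```shell
# Wildcard expansion against four scratch files
cd "$(mktemp -d)"
touch a.txt d.txt f.txt g.log
echo ?.txt       # any single character: a.txt d.txt f.txt
echo [adf].txt   # any character in the set: a.txt d.txt f.txt
echo [!a]*       # anything not starting with "a": d.txt f.txt g.log
```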
82
find
Searching for files and directories named gcc: $ find /usr -name gcc Searching only for directories named gcc: $ find /usr -type d -name gcc Searching only for regular files named gcc: $ find /usr -type f -name gcc
83
To find and remove all files that end with .swp
$ find -name "*.swp" -exec rm {} ';'. The {} (curly brackets) is a placeholder that will be filled with all the file names that result from the find expression, and the preceding command will be run on each one individually. Please note that you have to end the command with either ';' (including the single quotes) or "\;". Both forms are fine. One can also use the -ok option, which behaves the same as -exec, except that find will prompt you for permission before executing the command. This makes it a good way to test your results before blindly executing any potentially dangerous commands.
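The find/-exec pattern above, run against a scratch tree so nothing real is deleted:

```shell
# Remove all *.swp files under a scratch directory
cd "$(mktemp -d)"
mkdir -p sub
touch one.swp sub/two.swp keep.txt
find . -name "*.swp" -exec rm {} ';'   # rm runs once per matching file
find . -name "*.swp"                   # nothing left to print
```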
84
To find files based on time
$ find / -ctime 3
85
To find files based on sizes
$ find / -size 0
86
For example, to find files greater than 10 MB in size and running a command on those files
$ find / -size +10M -exec command {} ';'
87
Both package management systems operate on two distinct levels
a low-level tool (such as dpkg or rpm) takes care of the details of unpacking individual packages, running scripts, getting the software installed correctly, while a high-level tool (such as apt-get, yum, dnf or zypper) works with groups of packages, downloads packages from the vendor, and figures out dependencies.
88
List all installed packages (dpkg)
dpkg -l or dpkg --list
89
List all files contained in a package (dpkg)
dpkg --listfiles bzip2
90
Remove a package (dpkg)
dpkg --remove bzip2
91
Search all packages (apt)
sudo apt-cache search wget2
92
man: list all pages on a topic
man -f topic (same as whatis)
93
man: list all pages that discuss a specified topic (even if the specified subject is not present in the name)
man -k topic (same as apropos)
94
man will display all pages with the given name in all chapters, one after the other, as in:
man -a socket
95
man: show a topic from chapter n (n is an integer)
man n topic
96
Displays an index of available topics (not man)
info. Items function like browser links and are identified by an asterisk (*) at the beginning of the item name. Named items (outside a menu) are identified with double colons (::) at the end of the item name. Items can refer to other nodes within the file or to other files. Navigation: n - go to the next node; p - go to the previous node; u - move one node up in the index.
97
Processes can be of different types according to the task being performed
Interactive processes - need to be started by a user, either at a command line or through a graphical interface such as an icon or a menu selection. Examples: bash, firefox, top.
Batch processes - automatic processes which are scheduled from and then disconnected from the terminal. These tasks are queued and work on a FIFO (first-in, first-out) basis. Example: updatedb.
Daemons - server processes that run continuously. Many are launched during system startup and then wait for a user or system request indicating that their service is required. Examples: httpd, xinetd, sshd.
Threads - lightweight processes. These are tasks that run under the umbrella of a main process, sharing memory and other resources, but are scheduled and run by the system on an individual basis. An individual thread can end without terminating the whole process, and a process can create new threads at any time. Many non-trivial programs are multi-threaded. Examples: firefox, gnome-terminal-server.
Kernel threads - kernel tasks that users neither start nor terminate and have little control over. These may perform actions like moving a thread from one CPU to another, or making sure input/output operations to disk are completed. Examples: kthreadd, migration, ksoftirqd.
98
process ids
Process ID (PID) - unique process ID number.
Parent Process ID (PPID) - process (parent) that started this process. If the parent dies, the PPID will refer to an adoptive parent; on recent kernels, this is kthreadd, which has PID 2.
Thread ID (TID) - thread ID number. This is the same as the PID for single-threaded processes. For a multi-threaded process, each thread shares the same PID but has a unique TID.
99
To terminate a process, you can type
kill -SIGKILL <pid> or kill -9 <pid>
100
The operating system identifies the user who starts the process by the
Real User ID (RUID) assigned to the user.
101
The user who determines the access rights for the user is identified by the
Effective UID (EUID). The EUID may or may not be the same as the RUID.
102
Users can be categorized into various groups. Each group is identified by the {1}. The access rights of the group are determined by the {2}. Each user can be a member of one or more groups.
1) Real Group ID (RGID) 2) Effective Group ID (EGID)
103
The priority for a process can be set by specifying a
nice value, or niceness, for the process. The lower the nice value, the higher the priority.
104
In Linux, what values represent the highest and lowest priority?
A nice value of -20 represents the highest priority, and 19 represents the lowest.
105
The load average can be viewed by running
w, top or uptime
106
Load average is the
average of the load number for a given period of time
107
Assuming our system is a single-CPU system, the three load average numbers 0.45, 0.17, 0.12 are interpreted as follows
0.45: for the last minute, the system has been 45% utilized on average. 0.17: for the last 5 minutes, utilization has been 17%. 0.12: for the last 15 minutes, utilization has been 12%.
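On a single-CPU box the percentage is just the load number times 100. A quick sketch using the card's sample values (on a live system the three numbers come from uptime or /proc/loadavg):

```shell
# Single-CPU load averages expressed as utilization percentages
for load in 0.45 0.17 0.12; do
    awk -v l="$load" 'BEGIN { printf "%.0f%%\n", l * 100 }'
done
# prints 45%, 17%, 12%, one per line
```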
108
You can put a job in the background by
suffixing & to the command, for example: updatedb &.
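A quick sketch of backgrounding: the job starts, $! captures its PID, jobs lists it, and kill cleans it up.

```shell
# Run a command in the background with &
sleep 5 &        # shell prompt returns immediately
bgpid=$!         # $! holds the PID of the last background job
jobs             # lists the job as Running
kill "$bgpid"    # tidy up
```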
109
You can either use {1} to suspend a foreground job or {2} to terminate a foreground job and can always use the {3} and {4} commands to run a process in the background and foreground, respectively.
1) CTRL-Z 2) CTRL-C 3) bg 4) fg
110
The {1} utility displays all jobs running in background.
jobs
111
{1} provides the same information as jobs, including the PID of the background jobs.
jobs -l
112
{1} provides information about currently running processes keyed by PID
ps
113
displays all the processes in the system in full detail.
ps -ef
114
show processes in realtime with updates
top, htop, atop
115
displays the processes running on the system in the form of a tree diagram showing the relationship between a process and its parent process and any other processes that it created.
pstree
116
The first line of the top output displays a quick summary of what is happening in the system, including
- How long the system has been up - How many users are logged on - What is the load average.
117
The second line of the top output displays
the total number of processes, and the number of running, sleeping, stopped, and zombie processes.
118
The third line of the top output indicates
how the CPU time is being divided between the users (us) and the kernel (sy) by displaying the percentage of CPU time used for each. The percentage of user jobs running at a lower priority (niceness - ni) is then listed. Idle mode (id) should be low if the load average is high, and vice versa. The percentage of jobs waiting (wa) for I/O is listed. Interrupts include the percentage of hardware (hi) vs. software interrupts (si). Steal time (st) is generally relevant for virtual machines, which have some of their idle CPU time taken for other uses.
119
The fourth and fifth lines of the top output indicate
memory usage, which is divided into two categories: physical memory (RAM), displayed on line 4, and swap space, displayed on line 5. Both categories display total memory, used memory, and free space.
120
Each line in the process list of the top output displays information about a process. By default, processes are ordered by highest CPU usage. The following information about each process is displayed:
- Process Identification Number (PID) - Process owner (USER) - Priority (PR) and nice values (NI) - Virtual (VIRT), physical (RES), and shared memory (SHR) - Status (S) - Percentage of CPU (%CPU) and memory (%MEM) used - Execution time (TIME+) - Command (COMMAND).
121
The table lists what happens when pressing various keys when running top
t - display or hide summary information (rows 2 and 3)
m - display or hide memory information (rows 4 and 5)
A - sort the process list by top resource consumers
r - renice (change the priority of) a specific process
k - kill a specific process
f - enter the top configuration screen
o - interactively select a new sort order in the process list
122
You can use the at utility program to
execute any non-interactive command at a specified time
123
cron is
a time-based scheduling utility program. It can launch routine background jobs at specific times and/or days on an on-going basis.
124
cron is driven by a configuration file called
/etc/crontab (cron table), which contains the various shell commands that need to be run at the properly scheduled times. There are both system-wide crontab files and individual user-based ones. Each line of a crontab file represents a job, and is composed of a so-called CRON expression, followed by a shell command to execute.
125
The crontab -e command will open the crontab editor to edit existing jobs or to create new jobs. Each line of the crontab file will contain 6 fields
MIN - minutes - 0 to 59
HOUR - hour - 0 to 23
DOM - day of month - 1 to 31
MON - month - 1 to 12
DOW - day of week - 0 to 6 (0 = Sunday)
CMD - command - any command to be executed
126
The entry * * * * * /usr/local/bin/execute/this/script.sh will {1} The entry 30 08 10 06 * /home/sysadmin/full-backup will {2}
1) schedule a job to execute script.sh every minute of every hour of every day of the month, every month, and every day of the week. 2) schedule a full backup at 8:30 a.m. on June 10, irrespective of the day of the week.
127
sleep
suspends execution for at least the specified period of time, which can be given as a number of seconds (the default), minutes, hours, or days. After that time has passed (or an interrupting signal has been received), execution will resume. The syntax is: sleep NUMBER[SUFFIX], where SUFFIX may be: s for seconds (the default), m for minutes, h for hours, or d for days.
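A minimal check of sleep and its suffixes:

```shell
# sleep 1 and sleep 1s are equivalent; measure the elapsed time
start=$(date +%s)
sleep 1s
end=$(date +%s)
echo "slept for $((end - start))s"   # at least 1 second
```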
128
mount command
is used to attach a filesystem to the filesystem tree. To mount: $ sudo mount /dev/sda5 /home. To unmount the partition: $ sudo umount /home. Typing mount without any arguments will show all presently mounted filesystems.
129
automount
If you want it to be automatically available every time the system starts up, you need to edit /etc/fstab accordingly (the name is short for filesystem table)
130
df -Th (disk-free) will display
information about mounted filesystems, including the filesystem type, and usage statistics about currently used and available space.
131
The most common network filesystem is named
simply NFS (the Network Filesystem). It has a very long history and was first developed by Sun Microsystems. Another common implementation is CIFS (also termed SAMBA), which has Microsoft roots.
132
On the server machine, NFS uses daemons (built-in networking and service processes in Linux). NFS and other system servers are started at the command line by typing:
$ sudo systemctl start nfs
133
The text file /etc/exports
contains the directories and permissions that a host is willing to share with other systems over NFS. A very simple entry in this file may look like the following: /projects *.example.com(rw)
134
After modifying the /etc/exports file, you can use
the exportfs -av command to notify Linux about the directories you are allowing to be remotely mounted using NFS. You can also restart NFS with sudo systemctl restart nfs, but this is heavier, as it halts NFS for a short while before starting it up again. To make sure the NFS service starts whenever the system is booted, issue sudo systemctl enable nfs.
135
On the client machine, if it is desired to have the remote filesystem mounted automatically upon system boot,
the /etc/fstab file is modified to accomplish this. For example, an entry in the client's /etc/fstab file might look like the following: servername:/projects /mnt/nfs/projects nfs defaults 0 0 You can also mount the remote filesystem without a reboot or as a one-time mount by directly using the mount command: $ sudo mount servername:/projects /mnt/nfs/projects
136
The /sbin directory is intended
for essential binaries related to system administration, such as fsck and shutdown.
137
The /bin directory contains
executable binaries, essential commands used to boot the system or in single-user mode, and essential commands required by all system users, such as cat, cp, ls, mv, ps, and rm.
138
Commands that are not essential (theoretically) for the system to boot or operate in single-user mode are placed in the
/usr/bin and /usr/sbin directories. Historically, this was done so /usr could be mounted as a separate filesystem that could be mounted at a later stage of system startup or even over a network. However, nowadays most consider this distinction obsolete. In fact, many distributions have been discovered to be unable to boot with this separation, as this modality had not been used or tested for a long time. Thus, on some of the newest Linux distributions, /usr/bin and /bin are actually just symbolically linked together, as are /usr/sbin and /sbin.
139
The /proc filesystem contains
virtual files (files that exist only in memory) that permit viewing constantly changing kernel data. This filesystem contains files and directories that mimic kernel structures and configuration information. It does not contain real files, but runtime system information, e.g. system memory, devices mounted, hardware configuration, etc. Some important files in /proc are: /proc/cpuinfo, /proc/interrupts, /proc/meminfo, /proc/mounts, /proc/partitions, and /proc/version. /proc has subdirectories as well, including /proc/<pid> and /proc/sys. The first shows that there is a directory for every process running on the system, which contains vital information about it. The second is a virtual directory that contains a lot of information about the entire system, in particular its hardware and configuration. The /proc filesystem is very useful because the information it reports is gathered only as needed and never needs storage on the disk.
140
The /dev directory
contains device nodes, a type of pseudo-file used by most hardware and software devices, except for network devices. This directory is empty on the disk partition when it is not mounted; its entries are created by the udev system, which creates and manages device nodes on Linux, creating them dynamically when devices are found. The /dev directory contains items such as:
/dev/sda1 (first partition on the first hard disk)
/dev/lp1 (second printer)
/dev/random (a source of random numbers)
141
The /var directory
contains files that are expected to change in size and content as the system is running (var stands for variable), such as the entries in the following directories:
System log files: /var/log
Packages and database files: /var/lib
Print queues: /var/spool
Temporary files: /var/tmp
The /var directory may be put on its own filesystem so that growth of the files can be accommodated and the file sizes do not fatally affect the system. Network services directories such as /var/ftp (the FTP service) and /var/www (the HTTP web service) are also found under /var.
142
The /etc directory
the home for system configuration files. It contains no binary programs, although there are some executable scripts. For example, /etc/resolv.conf tells the system where to go on the network to obtain host name to IP address mappings (DNS). Files like passwd, shadow and group for managing user accounts are found in the /etc directory. While some distributions have historically had their own extensive infrastructure under /etc (for example, Red Hat and SUSE have used /etc/sysconfig), with the advent of systemd there is much more uniformity among distributions today. Note that /etc is for system-wide configuration files and only the superuser can modify files there. User-specific configuration files are always found under their home directory.
143
The /boot directory
contains the few essential files needed to boot the system. For every alternative kernel installed on the system there are four files:
vmlinuz: the compressed Linux kernel, required for booting
initramfs: the initial ram filesystem, required for booting, sometimes called initrd rather than initramfs
config: the kernel configuration file, only used for debugging and bookkeeping
System.map: the kernel symbol table, only used for debugging
Each of these files has a kernel version appended to its name. The Grand Unified Bootloader (GRUB) files, such as /boot/grub/grub.conf or /boot/grub2/grub.cfg, are also found under the /boot directory.
144
The /lib directory
contains libraries (common code shared by applications and needed for them to run) for the essential programs in /bin and /sbin. These library filenames either start with ld or lib. For example, /lib/libncurses.so.5.9. Most of these are what is known as dynamically loaded libraries (also known as shared libraries or Shared Objects (SO)). On some Linux distributions there exists a /lib64 directory containing 64-bit libraries, while /lib contains 32-bit versions.
145
Kernel modules (kernel code, often device drivers, that can be loaded and unloaded without re-starting the system) are located in
/lib/modules/<kernel-version-number>.
146
Removable media Directories
/media, /run and /mnt
147
The /usr directory tree contains theoretically non-essential programs and scripts (in the sense that they should not be needed to initially boot the system) and has at least the following sub-directories
/usr/include Header files used to compile applications
/usr/lib Libraries for programs in /usr/bin and /usr/sbin
/usr/lib64 64-bit libraries for 64-bit programs in /usr/bin and /usr/sbin
/usr/sbin Non-essential system binaries, such as system daemons
/usr/share Shared data used by applications, generally architecture-independent
/usr/src Source code, usually for the Linux kernel
/usr/local Data and programs specific to the local machine; subdirectories include bin, sbin, lib, share, include, etc.
/usr/bin The primary directory of executable commands on the system
148
diff is used to compare files and directories. This often-used utility program has many useful options (see: man diff) including
-c Provides a listing of differences that include three lines of context before and after the lines differing in content
-r Used to recursively compare subdirectories, as well as the current directory
-i Ignore the case of letters
-w Ignore differences in spaces and tabs (white space)
-q Be quiet: only report if files are different without listing the differences
149
To compare two files, at the command prompt, type
diff [options] <filename1> <filename2>. diff is meant to be used for text files; for binary files, one can use cmp.
150
You can compare three files at once using
diff3, which uses one file as the reference basis for the other two. For example, suppose you and a co-worker both have made modifications to the same file working at the same time independently. diff3 can show the differences based on the common file you both started with. The syntax for diff3 is as follows: $ diff3 MY-FILE COMMON-FILE YOUR-FILE
151
Many modifications to source code and configuration files are distributed utilizing patches, which are applied, not surprisingly, with the
patch program. A patch file contains the deltas (changes) required to update an older version of a file to the new one. The patch files are actually produced by running diff with the correct options, as in: $ diff -Nur originalfile newfile > patchfile
152
To apply a patch, you can just do either of the two methods below
$ patch -p1 < patchfile $ patch originalfile patchfile The first usage is more common, as it is often used to apply changes to an entire directory tree, rather than just one file, as in the second example. To understand the use of the -p1 option and many others, see the man page for patch.
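Here is a minimal, self-contained sketch of the whole diff/patch round trip, run in a throwaway directory (all file names are invented for the example):

```shell
# Make two versions of a file, capture the delta with diff, then apply it
tmp=$(mktemp -d) && cd "$tmp"
printf 'alpha\nbeta\n' > original
printf 'alpha\nbeta\ngamma\n' > updated
diff -u original updated > changes.patch   # unified-format patch file
patch original changes.patch               # second usage form: patch file patchfile
cat original                               # now matches updated
```

For a whole directory tree, you would instead generate the patch with `diff -Nur olddir newdir > patchfile` and apply it with `patch -p1 < patchfile` from inside the tree.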
153
The real nature of a file can be ascertained by using the
file utility. For the file names given as arguments, it examines the contents and certain characteristics to determine whether the files are plain text, shared libraries, executable programs, scripts, or something else.
154
Basic ways to do so include the
use of simple copying with cp and use of the more robust rsync.
155
a very useful way to back up a project directory might be to use the following command
$ rsync -r project-X archive-machine:archives/project-X
156
Linux uses a number of methods to perform this compression, including
Command Usage
gzip The most frequently used Linux compression utility
bzip2 Produces files significantly smaller than those produced by gzip
xz The most space-efficient compression utility used in Linux
zip Often required to examine and decompress archives from other operating systems
157
the WHAT utility is often used to group files in an archive and then compress the whole archive at once.
tar
158
gzip is the most often used Linux compression utility. It compresses very well and is very fast. The following table provides some usage examples
Command Usage
gzip * Compresses all files in the current directory; each file is compressed and renamed with a .gz extension
gzip -r projectX Compresses all files in the projectX directory, along with all files in all of the directories under projectX
gunzip foo De-compresses foo found in the file foo.gz. Under the hood, the gunzip command is actually the same as gzip -d
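A small round-trip sketch of the compress/decompress cycle described above (the file name is invented, and a temp directory keeps it self-contained):

```shell
# Compress a file with gzip, then restore it with gunzip
tmp=$(mktemp -d) && cd "$tmp"
seq 1 100 > report.txt
gzip report.txt                 # replaces report.txt with report.txt.gz
ls report.txt.gz
gunzip report.txt.gz            # equivalent to gzip -d; restores report.txt
wc -l report.txt
```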
159
bzip2 has a syntax that is similar to gzip but it uses a different compression algorithm and produces significantly smaller files, at the price of taking a longer time to do its work. Thus, it is more likely to be used to compress larger files. Examples of common usage are also similar to gzip:
Command Usage bzip2 * Compresses all of the files in the current directory and replaces each file with a file renamed with a .bz2 extension bunzip2 *.bz2 Decompresses all of the files with an extension of .bz2 in the current directory. Under the hood, bunzip2 is the same as calling bzip2 -d
160
xz is the most space efficient compression utility used in Linux and is now used to store archives of the Linux kernel. Once again, it trades a slower compression speed for an even higher compression ratio. Some usage examples:
Command Usage $ xz * Compresses all of the files in the current directory and replaces each file with one with a .xz extension xz foo Compresses the file foo into foo.xz using the default compression level (-6), and removes foo if compression succeeds xz -dk bar.xz Decompresses bar.xz into bar and does not remove bar.xz even if decompression is successful xz -dcf a.txt b.txt.xz > abcd.txt Decompresses a mix of compressed and uncompressed files to standard output, using a single command $ xz -d *.xz Decompresses the files compressed using xz
161
The zip program is not often used to compress files in Linux, but is often required to examine and decompress archives from other operating systems. In Linux, it is mostly used when exchanging zipped files with users of other operating systems such as Windows; it is largely a legacy program there.
Command Usage zip backup * Compresses all files in the current directory and places them in the file backup.zip zip -r backup.zip ~ Archives your login directory (~) and all files and directories under it in the file backup.zip unzip backup.zip Extracts all files in the file backup.zip and places them in the current directory
162
Historically, tar stood for "tape archive" and was used to archive files to a magnetic tape. It allows you to create or extract files from an archive file, often called a tarball. At the same time, you can optionally compress while creating the archive, and decompress while extracting its contents. Here are some examples of the use of tar:
Command Usage $ tar xvf mydir.tar Extract all the files in mydir.tar into the mydir directory $ tar zcvf mydir.tar.gz mydir Create the archive and compress with gzip $ tar jcvf mydir.tar.bz2 mydir Create the archive and compress with bz2 $ tar Jcvf mydir.tar.xz mydir Create the archive and compress with xz $ tar xvf mydir.tar.gz Extract all the files in mydir.tar.gz into the mydir directory Note: You do not have to tell tar it is in gzip format You can separate out the archiving and compression stages, as in: $ tar cvf mydir.tar mydir ; gzip mydir.tar $ gunzip mydir.tar.gz ; tar xvf mydir.tar
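The create/extract cycle above can be sketched end to end like this (directory and file names invented, run in a temp directory):

```shell
# Create a gzip-compressed tarball, delete the source, and extract it again
tmp=$(mktemp -d) && cd "$tmp"
mkdir mydir
echo hello > mydir/a.txt
echo world > mydir/b.txt
tar zcf mydir.tar.gz mydir      # c = create, z = gzip, f = archive file name
rm -r mydir
tar xf mydir.tar.gz             # x = extract; modern tar detects the compression
cat mydir/a.txt mydir/b.txt
```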
163
The dd program is very useful for making copies of raw disk space. For example, to back up your Master Boot Record (MBR) (the first 512-byte sector on the disk that contains a table describing the partitions on that disk), you might type
$ dd if=/dev/sda of=sda.mbr bs=512 count=1 WARNING! Typing: $ dd if=/dev/sda of=/dev/sdb to make a copy of one disk onto another, will delete everything that previously existed on the second disk.
164
If you want to create a file without using an editor, there are two standard ways to create one from the command line and fill it with content.
The first is to use echo repeatedly: $ echo line one > myfile $ echo line two >> myfile $ echo line three >> myfile Note that while a single greater-than sign (>) will send the output of a command to a file, two of them (>>) will append the new output to an existing file. The second way is to use cat combined with redirection: ``` $ cat << EOF > myfile > line one > line two > line three > EOF $ ```
165
To identify the current user, type
whoami
166
To list the currently logged-on users, type
who. | Giving who the -a option will give more detailed information.
167
The standard prescription is that when you first login to Linux, /etc/profile is read and evaluated, after which the following files are searched (if they exist) in the listed order
~/.bash_profile ~/.bash_login ~/.profile
168
You can create customized commands or modify the behavior of already existing ones by creating aliases. To create, delete, and show aliases
create: alias take=mkdir delete: unalias take show all: alias
169
All Linux users are assigned a unique user ID (uid), which is just an integer; normal users start with a uid of
1000 or greater.
170
Groups are collections of accounts with certain shared permissions. Control of group membership is administered through the
/etc/group file, which shows a list of groups and their members. By default, every user belongs to a default or primary group. When a user logs in, the group membership is set for their primary group and all the members enjoy the same level of access and privilege. Permissions on various files and directories can be modified at the group level.
171
Users also have one or more group IDs (gid), including a default one which is the same as the user ID. These numbers are associated with names through the files
/etc/passwd and /etc/group. Groups are used to establish a set of users who have common interests for the purposes of access rights, privileges, and security considerations. Access rights to files (and devices) are granted on the basis of the user and the group they belong to. For example, /etc/passwd might contain george:x:1002:1002:George Metesky:/home/george:/bin/bash and /etc/group might contain george:x:1002:.
172
Adding a new user is done with useradd and removing an existing user is done with userdel. In the simplest form, an account for the new user bjmoose would be done with
$ sudo useradd bjmoose which, by default, sets the home directory to /home/bjmoose, populates it with some basic files (copied from /etc/skel) and adds a line to /etc/passwd such as: bjmoose:x:1002:1002::/home/bjmoose:/bin/bash and sets the default shell to /bin/bash.
173
Removing a user account is as easy as typing
$ sudo userdel bjmoose. However, this will leave the /home/bjmoose directory intact. This might be useful if it is a temporary inactivation. To remove the home directory while removing the account one needs to use the -r option to userdel.
174
Typing id with no argument gives
information about the current user, as in: $ id uid=1002(bjmoose) gid=1002(bjmoose) groups=106(fuse),1002(bjmoose)
175
Adding a new group is done with groupadd:
$ sudo /usr/sbin/groupadd anewgroup
176
The group can be removed with:
$ sudo /usr/sbin/groupdel anewgroup
177
Adding a user to an already existing group is done with usermod:
For example, you would first look at what groups the user already belongs to: $ groups rjsquirrel bjmoose : rjsquirrel and then add the new group: $ sudo /usr/sbin/usermod -a -G anewgroup rjsquirrel $ groups rjsquirrel rjsquirrel: rjsquirrel anewgroup
178
Removing a user from the group is somewhat trickier.
The -G option to usermod must give a complete list of groups. Thus, if you do: $ sudo /usr/sbin/usermod -G rjsquirrel rjsquirrel $ groups rjsquirrel rjsquirrel : rjsquirrel only the rjsquirrel group will be left.
179
Elevating to root account
Type su, and you will then be prompted for the root password.
180
To execute just one command with root privilege, type sudo <command>. When the command is complete, you will return to being a normal unprivileged user.
sudo configuration files are stored in the /etc/sudoers file and in the /etc/sudoers.d/ directory. By default, the sudoers.d directory is empty.
181
There are a number of ways to view the values of currently set environment variables; one can type
set, env, or export. Depending on the state of your system, set may print out many more lines than the other two methods.
182
By default, variables created within a script are only available to the current shell; child processes (sub-shells) will not have access to values that have been set or modified. Allowing child processes to see the values requires use of the export command.
Task Command Show the value of a specific variable echo $SHELL Export a new variable value export VARIABLE=value (or VARIABLE=value; export VARIABLE) Add a variable permanently Edit ~/.bashrc and add the line export VARIABLE=value Type source ~/.bashrc or just . ~/.bashrc (dot ~/.bashrc); or just start a new shell by typing bash You can also set environment variables to be fed as a one shot to a command as in: $ SDIRS=s_0* KROOT=/lib/modules/$(uname -r)/build make modules_install which feeds the values of the SDIRS and KROOT environment variables to the command make modules_install.
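A minimal demonstration of the export rule above (variable names invented): a child process inherits only variables that were exported.

```shell
# Only exported variables are visible to child processes
UNEXPORTED=one
export EXPORTED=two
child_view=$(sh -c 'echo "${EXPORTED:-unset}:${UNEXPORTED:-unset}"')
echo "$child_view"    # the child sees EXPORTED but not UNEXPORTED
```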
183
HOME is
an environment variable that represents the home (or login) directory of the user. cd without arguments will change the current working directory to the value of HOME. Note the tilde character (~) is often used as an abbreviation for $HOME. Thus, cd $HOME and cd ~ are completely equivalent statements.
184
PATH is
an ordered list of directories (the path) which is scanned when a command is given to find the appropriate program or script to run. Each directory in the path is separated by colons (:). A null (empty) directory name (or ./) indicates the current directory at any given time. :path1:path2 path1::path2 In the example :path1:path2, there is a null directory before the first colon (:). Similarly, for path1::path2 there is a null directory between path1 and path2. To prefix a private bin directory to your path: $ export PATH=$HOME/bin:$PATH $ echo $PATH /home/student/bin:/usr/local/bin:/usr/bin:/bin
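The private-bin idea above can be exercised safely in a throwaway directory (the greet script is a made-up example command, not a real utility):

```shell
# Prepend a private bin directory to PATH so its commands are found first
tmp=$(mktemp -d)
mkdir -p "$tmp/bin"
printf '#!/bin/sh\necho hello-from-private-bin\n' > "$tmp/bin/greet"
chmod +x "$tmp/bin/greet"
export PATH="$tmp/bin:$PATH"
greet    # found via the new PATH entry, no ./ prefix needed
```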
185
The environment variable SHELL points
to the user's default command shell (the program that is handling whatever you type in a command window, usually bash) and contains the full pathname to the shell: $ echo $SHELL /bin/bash
186
Prompt Statement (PS) is used to
customize your prompt string in your terminal windows to display the information you want. PS1 is the primary prompt variable which controls what your command line prompt looks like. The following special characters can be included in PS1: ``` \u - User name \h - Host name \w - Current working directory \! - History number of this command \d - Date ```
187
To view the list of previously executed commands, you can just type
history at the command line. The list of commands is displayed with the most recent command appearing last in the list. This information is stored in ~/.bash_history. If you have multiple terminals open, the commands typed in each session are not saved until the session terminates.
188
Several associated environment variables can be used to get information about the history file.
```
HISTFILE The location of the history file
HISTFILESIZE The maximum number of lines in the history file (default 500)
HISTSIZE The maximum number of commands kept in the shell's history list
HISTCONTROL How commands are stored
HISTIGNORE Which command lines should not be saved
```
189
Specific keys to perform various tasks on history
Key Usage Up/Down arrow keys Browse through the list of commands previously executed !! (Pronounced as bang-bang) Execute the previous command CTRL-R Search previously used commands
190
Executing previous commands
``` Syntax Task ! Start a history substitution !$ Refer to the last argument in a line !n Refer to the nth command line !string Refer to the most recent command starting with string ```
191
You can use keyboard shortcuts to perform different tasks quickly. The table lists some of these keyboard shortcuts and their uses. Note the case of the "hotkey" does not matter, e.g. doing CTRL-a is the same as doing CTRL-A .
Keyboard Shortcut Task CTRL-L Clears the screen CTRL-D Exits the current shell CTRL-Z Puts the current process into suspended background CTRL-C Kills the current process CTRL-H Works the same as backspace CTRL-A Goes to the beginning of the line CTRL-W Deletes the word before the cursor CTRL-U Deletes from beginning of line to cursor position CTRL-E Goes to the end of the line Tab Auto-completes files, directories, and binaries
192
The following utility programs involve user and group ownership and permission setting
Command Usage chown Used to change user ownership of a file or directory chgrp Used to change group ownership chmod Used to change the permissions on the file, which can be done separately for owner, group and the rest of the world (often named as other)
193
Files have three kinds of permissions
read (r), write (w), execute (x). These are generally represented as in rwx. These permissions affect three groups of owners: user/owner (u), group (g), and others (o). As a result, you have the following three groups of three permissions:
rwx rwx rwx
 u   g   o
194
There are a number of different ways to use chmod
For instance, to give the owner and others execute permission and remove the group write permission: $ ls -l somefile -rw-rw-r-- 1 student student 1601 Mar 9 15:04 somefile $ chmod uo+x,g-w somefile $ ls -l somefile -rwxr--r-x 1 student student 1601 Mar 9 15:04 somefile where u stands for user (owner), o stands for other (world), and g stands for group. This kind of syntax can be difficult to type and remember, so one often uses a shorthand which lets you set all the permissions in one step. This is done with a simple algorithm, and a single digit suffices to specify all three permission bits for each entity. This digit is the sum of: 4 if read permission is desired 2 if write permission is desired 1 if execute permission is desired. Thus, 7 means read/write/execute, 6 means read/write, and 5 means read/execute. When you apply this to the chmod command, you have to give three digits for each degree of freedom, such as in: $ chmod 755 somefile $ ls -l somefile -rwxr-xr-x 1 student student 1601 Mar 9 15:04 somefile
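A short sketch combining the octal and symbolic forms described above (stat -c is GNU coreutils syntax, so this assumes a typical Linux system; the file name is invented):

```shell
# Octal and symbolic chmod forms side by side
tmp=$(mktemp -d) && cd "$tmp"
touch somefile
chmod 755 somefile              # 7 = rwx (4+2+1), 5 = r-x (4+1), 5 = r-x
stat -c '%a %A' somefile        # numeric and symbolic views of the same bits
chmod go-x somefile             # symbolic: remove execute for group and other
stat -c '%a %A' somefile        # now 744, i.e. rwxr--r--
```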
195
cat
cat file1 file2 Concatenate multiple files and display the output; i.e. the entire content of the first file is followed by that of the second file
cat file1 file2 > newfile Combine multiple files and save the output into a new file
cat file >> existingfile Append a file to the end of an existing file
cat > file Any subsequent lines typed will go into the file, until CTRL-D is typed
cat >> file Any subsequent lines are appended to the file, until CTRL-D is typed
To create a new file, at the command prompt type cat > <filename> and press the Enter key. This command creates a new file and waits for the user to edit/enter the text. After you finish typing the required text, press CTRL-D at the beginning of the next line to save and exit the editing. Another way to create a file at the terminal is cat > <filename> << EOF. A new file is created and you can type the required input. To exit, type EOF at the beginning of a line.
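A few of the redirection forms above, exercised together (file names invented, run in a temp directory):

```shell
# Combine and append files with cat and redirection
tmp=$(mktemp -d) && cd "$tmp"
printf 'a\nb\n' > file1
printf 'c\n' > file2
cat file1 file2 > newfile    # > creates or overwrites newfile
cat file2 >> newfile         # >> appends to it
cat newfile
```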
196
To continually monitor new output in a growing log file
$ tail -f somefile.log
197
When working with compressed files, many standard commands cannot be used directly. For many commonly-used file and text manipulation programs, there is also a version especially designed to work directly with compressed files. These associated utilities have the letter "z" prefixed to their name. For example, we have utility programs such as
zcat, zless, zdiff and zgrep. Here is a table listing some z family commands: Command Description $ zcat compressed-file.txt.gz To view a compressed file $ zless somefile.gz or $ zmore somefile.gz To page through a compressed file $ zgrep -i less somefile.gz To search inside a compressed file $ zdiff file1.txt.gz file2.txt.gz To compare two compressed files There are also equivalent utility programs for other compression methods besides gzip, for example, we have bzcat and bzless associated with bzip2, and xzcat and xzless associated with xz.
198
You can invoke sed using commands like those listed in the accompanying table.
Command Usage
sed -e command <filename> Specify editing commands at the command line, operate on file and put the output on standard out (e.g., the terminal)
sed -f scriptfile <filename> Specify a scriptfile containing sed commands, operate on file and put output on standard out
Now that you know that you can perform multiple editing and filtering operations with sed, let's explain some of them in more detail. The table explains some basic operations, where pattern is the current string and replace_string is the new string:
Command Usage
sed s/pattern/replace_string/ file Substitute first string occurrence in every line
sed s/pattern/replace_string/g file Substitute all string occurrences in every line
sed 1,3s/pattern/replace_string/g file Substitute all string occurrences in a range of lines
sed -i s/pattern/replace_string/g file Save changes for string substitution in the same file
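A minimal sketch of both modes, on standard out and then in place (-i as implemented by GNU sed; the file and strings are invented):

```shell
# Substitute on standard out first, then edit the file in place with -i
tmp=$(mktemp -d) && cd "$tmp"
printf 'the cat sat\nthe cat ran\n' > story.txt
sed s/cat/dog/ story.txt       # prints changed lines; story.txt is untouched
sed -i s/cat/dog/g story.txt   # -i (GNU sed) rewrites the file itself
cat story.txt
```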
199
awk has the following features
- It is a powerful utility and interpreted programming language. - It is used to manipulate data files, retrieving, and processing text. - It works well with fields (containing a single piece of data, essentially a column) and records (a collection of fields, essentially a line in a file).
200
awk command
Command Usage
awk 'command' file Specify a command directly at the command line
awk -f scriptfile file Specify a file that contains the script to be executed
awk '{ print $0 }' /etc/passwd Print entire file
awk -F: '{ print $1 }' /etc/passwd Print first field (column) of every line, using : as the field separator
awk -F: '{ print $1 $7 }' /etc/passwd Print first and seventh field of every line
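A tiny worked example of field extraction (the sample line just mimics the /etc/passwd format rather than reading the real file):

```shell
# Field extraction with awk; -F: sets the field separator to a colon
result=$(printf 'root:x:0:0:root:/root:/bin/bash\n' | awk -F: '{ print $1, $7 }')
echo "$result"    # the comma in print inserts the output separator (a space)
```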
201
sort can be used as follows
Syntax Usage sort Sort the lines in the specified file, according to the characters at the beginning of each line cat file1 file2 | sort Combine the two files, then sort the lines and display the output on the terminal sort -r Sort the lines in reverse order sort -k 3 Sort the lines by the 3rd field on each line instead of the beginning When used with the -u option, sort checks for unique values after sorting the records (lines). It is equivalent to running uniq (which we shall discuss) on the output of sort.
202
uniq
removes duplicate consecutive lines in a text file and is useful for simplifying the text display. Because uniq requires that the duplicate entries must be consecutive, one often runs sort first and then pipes the output into uniq; if sort is used with the -u option, it can do all this in one step. To remove duplicate entries from multiple files at once, use the following command: sort file1 file2 | uniq > file3 OR sort -u file1 file2 > file3 To count the number of duplicate entries, use the following command: uniq -c filename
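The sort/uniq pipeline above, sketched with a throwaway file (contents invented):

```shell
# Count duplicates and get unique sorted lines in one step
tmp=$(mktemp -d) && cd "$tmp"
printf 'pear\napple\npear\napple\napple\n' > fruit
sort fruit | uniq -c          # uniq needs sorted input to count all repeats
sort -u fruit > unique.txt    # equivalent to sort fruit | uniq
cat unique.txt
```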
203
paste
can be used to create a single file containing all three columns. The different columns are identified based on delimiters (spacing used to separate two fields). For example, delimiters can be a blank space, a tab, or an Enter. paste accepts the following options:
-d delimiters, which specify a list of delimiters to be used instead of tabs for separating consecutive values on a single line. Each delimiter is used in turn; when the list has been exhausted, paste begins again at the first delimiter.
-s, which causes paste to append the data in series rather than in parallel; that is, in a horizontal rather than vertical fashion.
paste can be used to combine fields (such as name or phone number) from different files, as well as combine lines from multiple files. For example, line one from file1 can be combined with line one of file2, line two from file1 can be combined with line two of file2, and so on. To paste contents from two files one can do:
$ paste file1 file2
The syntax to use a different delimiter is as follows:
$ paste -d, file1 file2
Common delimiters are 'space', 'tab', '|', 'comma', etc.
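A minimal paste example with a custom delimiter (the names and phone numbers are invented):

```shell
# Join corresponding lines of two files with a comma delimiter
tmp=$(mktemp -d) && cd "$tmp"
printf 'alice\nbob\n' > names
printf '555-0100\n555-0101\n' > phones
paste -d, names phones     # line N of names joined to line N of phones
```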
204
To combine two files on a common field, at the command prompt type
join file1 file2 and press the Enter key.
205
split
$ split american-english dictionary will split the american-english file into equal-sized pieces (1000 lines each by default) named dictionaryaa, dictionaryab, and so on. The last one will of course be somewhat smaller.
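A sketch showing that split pieces reassemble losslessly (here with an explicit -l 4 piece size rather than the default, and invented data):

```shell
# Split a file into 4-line pieces and verify they reassemble losslessly
tmp=$(mktemp -d) && cd "$tmp"
seq 1 10 > numbers
split -l 4 numbers part-      # produces part-aa, part-ab, part-ac
ls part-*
cat part-* > rejoined         # shell globbing keeps the pieces in order
cmp numbers rejoined && echo identical
```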
206
grep is extensively used as
a primary text searching tool. It scans files for specified patterns and can be used with regular expressions, as well as simple strings, as shown in the table: Command Usage grep [pattern] Search for a pattern in a file and print all matching lines grep -v [pattern] Print all lines that do not match the pattern grep [0-9] Print the lines that contain the numbers 0 through 9 grep -C 3 [pattern] Print context of lines (specified number of lines above and below the pattern) for matching the pattern. Here, the number of lines is specified as 3
207
strings is used to
extract all printable character strings found in the file or files given as arguments. It is useful in locating human-readable content embedded in binary files; for text files one can just use grep. For example, to search for the string my_string in a spreadsheet: $ strings book1.xls | grep my_string
208
The tr utility is used to
translate specified characters into other characters or to delete them. The general syntax is as follows:
$ tr [options] set1 [set2]
Command Usage
$ tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ Convert lower case to upper case
$ tr '{}' '()' < inputfile > outputfile Translate braces into parentheses
$ echo "This is for testing" | tr [:space:] '\t' Translate white space to tabs
$ echo "This is for testing" | tr -s [:space:] Squeeze repetition of characters using -s
$ echo "the geek stuff" | tr -d 't' Delete specified characters using the -d option
$ echo "my username is 432234" | tr -cd [:digit:] Complement the sets using the -c option
$ tr -cd [:print:] < file.txt Remove all non-printable characters from a file
$ tr -s '\n' ' ' < file.txt Join all the lines in a file into a single line
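Three of the idioms above, captured so the results can be checked (input strings invented):

```shell
# Three common tr idioms: translate, squeeze, and delete-complement
upper=$(echo 'mixed Case Text' | tr 'a-z' 'A-Z')      # translate to upper case
squeezed=$(echo 'aaabbbccc' | tr -s 'abc')            # squeeze repeated characters
digits=$(echo 'user 432234 here' | tr -cd '0-9')      # keep only the digits
echo "$upper / $squeezed / $digits"
```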
209
tee
takes the output from any command, and, while sending it to standard output, it also saves it to a file. In other words, it "tees" the output stream from the command: one stream is displayed on the standard output and the other is saved to a file. For example, to list the contents of a directory on the screen and save the output to a file, at the command prompt type ls -l | tee newfile and press the Enter key. Typing cat newfile will then display the output of ls –l.
210
wc (word count)
counts the number of lines, words, and characters in a file or list of files. Options are given in the table below.
Option Description
-l Displays the number of lines
-c Displays the number of bytes
-w Displays the number of words
211
cut is used for
manipulating column-based files and is designed to extract specific columns. The default column separator is the tab character. A different delimiter can be given as a command option. For example, to display the third column delimited by a blank space, at the command prompt type ls -l | cut -d" " -f3 and press the Enter key.
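A small cut example with an explicit delimiter (the sample line mimics a passwd-style record, not a real file):

```shell
# Extract selected columns with an explicit delimiter
line=$(printf 'alice:1000:/home/alice\n' | cut -d: -f1,3)
echo "$line"    # fields 1 and 3, still joined by the delimiter
```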
212
Class A addresses
use the first octet of an IP address as their Net ID and use the other three octets as the Host ID. The first bit of the first octet is always set to zero. So you can use only 7-bits for unique network numbers. As a result, there are a maximum of 126 Class A networks available (the addresses 0000000 and 1111111 are reserved). Not surprisingly, this was only feasible when there were very few unique networks with large numbers of hosts. As the use of the Internet expanded, Classes B and C were added in order to accommodate the growing demand for independent networks. Each Class A network can have up to 16.7 million unique hosts on its network. The range of host address is from 1.0.0.0 to 127.255.255.255.
213
Class B addresses
use the first two octets of the IP address as their Net ID and the last two octets as the Host ID. The first two bits of the first octet are always set to binary 10, so there are a maximum of 16,384 (14-bits) Class B networks. The first octet of a Class B address has values from 128 to 191. The introduction of Class B networks expanded the number of networks but it soon became clear that a further level would be needed. Each Class B network can support a maximum of 65,536 unique hosts on its network. The range of host address is from 128.0.0.0 to 191.255.255.255.
214
Class C addresses
use the first three octets of the IP address as their Net ID and the last octet as their Host ID. The first three bits of the first octet are set to binary 110, so almost 2.1 million (21-bits) Class C networks are available. The first octet of a Class C address has values from 192 to 223. These are most common for smaller networks which don't have many unique hosts. Each Class C network can support up to 256 (8-bits) unique hosts. The range of host address is from 192.0.0.0 to 223.255.255.255
215
DHCP
Dynamic Host Configuration Protocol (DHCP) is used to assign IP addresses.
216
You can view your system’s hostname simply by typing
hostname with no argument.
217
host
host nameserver.com returns information about that server
218
nslookup
nslookup nameserver.com
219
For Debian family configurations, the basic network configuration files could be found under {1}, while for Fedora and SUSE family systems one needed to inspect {2}
1) /etc/network/ | 2) /etc/sysconfig/network
220
Information about a particular network interface or all network interfaces can be reported by the
ip and ifconfig utilities, which you may have to run as the superuser, or at least, give the full path, i.e. /sbin/ifconfig, on some distributions. ip is newer than ifconfig and has far more capabilities, but its output is uglier to the human eye
221
To view the IP address
$ /sbin/ip addr show
222
To view the routing information
$ /sbin/ip route show
223
ping is used to
check whether or not a machine attached to the network can receive and send data; i.e. it confirms that the remote host is online and is responding. To check the status of the remote host, at the command prompt, type ping followed by the target's hostname or IP address.
224
One can use the route utility or the newer ip route command to
view or change the IP routing table to add, delete, or modify specific (static) routes to specific hosts or networks. Some commands that can be used to manage IP routing:
- Show current routing table: route -n or ip route
- Add static route: route add -net address or ip route add
- Delete static route: route del -net address or ip route del
225
traceroute is used to
inspect the route which the data packet takes to reach the destination host, which makes it quite useful for troubleshooting network delays and errors. By using traceroute, you can isolate connectivity issues between hops, which helps resolve them faster. To print the route taken by the packet to reach the network host, at the command prompt, type traceroute followed by the destination host's name or IP address.
226
ethtool
Queries network interfaces and can also set various parameters such as the speed
227
netstat
Displays all active connections and routing tables. Useful for monitoring performance and troubleshooting
228
nmap
Scans open ports on a network. Important for security analysis
229
tcpdump
Dumps network traffic for analysis
230
iptraf
Monitors network traffic in text mode
231
mtr
Combines functionality of ping and traceroute and gives a continuously updated display
232
dig
Tests DNS workings. A good replacement for host and nslookup
233
To download a web page, you can simply type
wget followed by the URL of the page
234
You can read a URL using
curl
235
File Transfer Protocol (FTP) is a
well-known and popular method for transferring files between computers using the Internet.
236
Some command line FTP clients are
ftp sftp ncftp yafc (Yet Another FTP Client).
237
To copy a local file to a remote system, at the command prompt, type
scp <localfile> <user@remotesystem>:/home/user/ and press Enter.
238
Linux provides a wide choice of shells; exactly what is available on the system is listed in
/etc/shells
239
let's see how to create a more interactive example using a bash script. The user will be prompted to enter a value, which is then displayed on the screen. The value is stored in a temporary variable, name. We can reference the value of a shell variable by using a $ in front of the variable name, such as $name. To create this script, you need to create a file named getname.sh in your favorite editor with the following content
```
#!/bin/bash
# Interactive reading of a variable
echo "ENTER YOUR NAME"
read name
# Display variable input
echo The name given was :$name
```
240
As a script executes, one can check for a specific value or condition and return success or failure as the result. By convention, success is returned as 0, and failure is returned as a non-zero value. An easy way to demonstrate success and failure completion is to execute ls on a file that exists as well as one that does not, the return value is stored in the environment variable represented by $?:
$ ls /etc/logrotate.conf
/etc/logrotate.conf
$ echo $?
0
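A minimal runnable sketch of this convention, using the built-in true and false commands (which always return 0 and non-zero respectively):

```shell
# true always succeeds (returns 0); false always fails (returns non-zero).
# $? must be read immediately, because the next command overwrites it.
true
status_ok=$?
false
status_fail=$?
echo "true returned $status_ok, false returned $status_fail"
```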
241
Scripts require you to follow a standard language syntax. Rules delineate how to define variables and how to construct and format allowed statements, etc. The table lists some special character usages within bash scripts:
- # : Used to add a comment, except when used as \#, or as #! when starting a script
- \ : Used at the end of a line to indicate continuation on to the next line
- ; : Used to interpret what follows as a new command to be executed next
- $ : Indicates what follows is an environment variable
- > : Redirect output
- >> : Append output
- < : Redirect input
- | : Used to pipe the result into the next command
242
Users sometimes need to combine several commands and statements and even conditionally execute them based on the behavior of operators used in between them. This method is called chaining of commands. There are several different ways to do this, depending on what you want to do.
The ; (semicolon) character is used to separate these commands and execute them sequentially, as if they had been typed on separate lines. Each ensuing command is executed whether or not the preceding ones succeed. Thus, the three commands in the following example will all execute, even if the ones preceding them fail:
$ make ; make install ; make clean
However, you may want to abort subsequent commands when an earlier one fails. You can do this using the && (and) operator as in:
$ make && make install && make clean
If the first command fails, the second one will never be executed. A final refinement is to use the || (or) operator, as in:
$ cat file1 || cat file2 || cat file3
Here, bash proceeds down the list only until one of the commands succeeds; the remaining commands are not executed.
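The chaining operators can be demonstrated with a small sketch, using true and false as stand-ins for commands that succeed or fail:

```shell
# && runs the second command only if the first succeeds
true && echo "ran after success"
# || runs the second command only if the first fails
false || echo "ran after failure"
# Combining both: the echo after && is skipped, so the || branch runs
false && echo "never printed" || echo "short-circuited"
```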
243
Shell scripts execute sequences of commands and other types of statements. These commands can be
- Compiled applications - Built-in bash commands - Shell scripts or scripts from other interpreted languages, such as perl and Python.
244
Within a script, the parameter or an argument is represented with a $ and a number or special character. The table lists some of these parameters.
```
Parameter       Meaning
$0              Script name
$1              First parameter
$2, $3, etc.    Second, third parameter, etc.
$*              All parameters
$#              Number of arguments
```
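As a sketch, the script below (written to a hypothetical path, /tmp/params_demo.sh) prints several of these special parameters when invoked with two arguments:

```shell
# Create a throwaway script that echoes its positional parameters
cat > /tmp/params_demo.sh <<'EOF'
#!/bin/bash
echo "Script name: $0"
echo "First parameter: $1"
echo "Argument count: $#"
echo "All parameters: $*"
EOF
chmod +x /tmp/params_demo.sh
/tmp/params_demo.sh apple banana
```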
245
At times, you may need to substitute the result of a command as a portion of another command. It can be done in two ways:
By enclosing the inner command in $( )
By enclosing the inner command with backticks (``)
The second, backticks form, is deprecated in new scripts and commands.
No matter which method is used, the specified command will be executed in a newly launched shell environment, and the standard output of the shell will be inserted where the command substitution is done. Virtually any command can be executed this way. While both of these methods enable command substitution, the $( ) method allows command nesting. New scripts should always use this more modern method. For example:
$ ls /lib/modules/$(uname -r)/
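A short sketch of nested command substitution with $( ), which would be awkward to write with backticks:

```shell
# Each $( ) runs in a subshell; its stdout replaces the expression
inner=$(echo nested)
outer=$(echo "$(echo deeply) $inner")
echo "$outer"
```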
246
By default, the variables created within a script are available only to the subsequent steps of that script. Any child processes (sub-shells) do not have automatic access to the values of these variables. To make them available to child processes, they must be promoted to environment variables using the export statement, as in
export VAR=value or VAR=value ; export VAR
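A sketch showing the difference: a child shell sees the exported variable but not the plain one (the variable names here are illustrative):

```shell
LOCAL_VAR=private          # visible only in this shell
export SHARED_VAR=public   # promoted to an environment variable
# The child bash process inherits only the exported variable
bash -c 'echo "child sees SHARED_VAR=${SHARED_VAR:-unset} LOCAL_VAR=${LOCAL_VAR:-unset}"'
```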
247
The function declaration requires a name which is used to invoke it. The proper syntax is
function_name () { command... }
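A minimal sketch: inside the body, $1, $2, etc. refer to the function's own arguments rather than the script's:

```shell
# Define a function taking one argument, then invoke it
greet () {
    echo "Hello, $1"
}
greet World
```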
248
if
In compact form, the syntax of an if statement is: if TEST-COMMANDS; then CONSEQUENT-COMMANDS; fi
A more general definition is:
```
if condition
then
    statements
else
    statements
fi
```
249
In the following example, an if statement checks to see if a certain file exists, and if the file is found, it displays a message indicating success or failure
```
if [ -f "$1" ]
then
    echo "file $1 exists"
else
    echo "file $1 does not exist"
fi
```
250
You can use the elif statement to perform more complicated tests and take appropriate actions. The basic syntax is
```
if [ sometest ] ; then
    echo Passed test1
elif [ someothertest ] ; then
    echo Passed test2
fi
```
251
You can use the if statement to test for file attributes, such as: File or directory existence Read or write permission Executable permission
- -e file : Checks if the file exists.
- -d file : Checks if the file is a directory.
- -f file : Checks if the file is a regular file (i.e. not a symbolic link, device node, directory, etc.)
- -s file : Checks if the file is of non-zero size.
- -g file : Checks if the file has sgid set.
- -u file : Checks if the file has suid set.
- -r file : Checks if the file is readable.
- -w file : Checks if the file is writable.
- -x file : Checks if the file is executable.
252
Similarly, to check if the value of number1 is greater than the value of number2, use the following conditional test
[ $number1 -gt $number2 ]
253
You can use the if statement to compare strings using the operator == (two equal signs). The syntax is as follows
if [ string1 == string2 ] ; then ACTION fi
254
You can use specially defined operators with the if statement to compare numbers. The various operators that are available are listed in the table
- -eq : Equal to
- -ne : Not equal to
- -gt : Greater than
- -lt : Less than
- -ge : Greater than or equal to
- -le : Less than or equal to
The syntax for comparing numbers is as follows: exp1 -op exp2
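A runnable sketch combining two of these operators (the values are arbitrary):

```shell
number1=5
number2=7
# -lt: true when the left number is strictly smaller
if [ $number1 -lt $number2 ]; then
    echo "$number1 is less than $number2"
fi
# -ne: true when the numbers differ
if [ $number1 -ne $number2 ]; then
    echo "the numbers differ"
fi
```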
255
Arithmetic expressions can be evaluated in the following three ways (spaces are important!)
Using the expr utility. expr is a standard but somewhat deprecated program. The syntax is as follows:
expr 8 + 8
echo $(expr 8 + 8)
Using the $((...)) syntax. This is the built-in shell format. The syntax is as follows:
echo $((x+1))
Using the built-in shell command let. The syntax is as follows:
let x=(1+2); echo $x
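The three forms side by side, as a sketch (note that expr needs spaces around its operands, while the expression given to let must not contain unquoted spaces):

```shell
x=8
echo $((x + 8))        # built-in arithmetic expansion
echo $(expr "$x" + 8)  # external expr utility (spaces required)
let y=x+2              # built-in let (no spaces in the expression)
echo $y
```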
256
String operators include those that do comparison, sorting, and finding the length. The following table demonstrates the use of some basic string operators
- [[ string1 > string2 ]] : Compares the sorting order of string1 and string2.
- [[ string1 == string2 ]] : Compares the characters in string1 with the characters in string2.
- myLen1=${#string1} : Saves the length of string1 in the variable myLen1.
257
To extract the first n characters of a string we can specify
${string:0:n}.
258
To extract all characters in a string after a dot (.), use the following expression
${string#*.}
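Both extraction forms together, applied to a made-up filename:

```shell
string="archive.tar.gz"
echo "${string:0:7}"   # first 7 characters
echo "${string#*.}"    # everything after the first dot
echo "${#string}"      # length of the string
```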
259
basic structure of the case statement
```
case expression in
    pattern1) execute commands;;
    pattern2) execute commands;;
    pattern3) execute commands;;
    pattern4) execute commands;;
    *)        execute some default commands or nothing;;
esac
```
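A concrete sketch with a hypothetical variable:

```shell
fruit="apple"
case "$fruit" in
    apple)  echo "It is an apple";;
    banana) echo "It is a banana";;
    *)      echo "Unknown fruit";;
esac
```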
260
The for loop operates on each element of a list of items. The syntax for the for loop is
for variable-name in list
do
    execute one iteration for each item in the list until the list is finished
done
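For example, a sketch that sums a fixed list of numbers:

```shell
sum=0
for n in 1 2 3 4 5
do
    sum=$((sum + n))
done
echo "Sum: $sum"
```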
261
The until loop repeats a set of statements as long as the control command is false. Thus, it is essentially the opposite of the while loop. The syntax is:
```
until condition
do
    Commands for execution
    ----
done
```
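A sketch: the loop body runs while the test fails, and stops once the counter reaches 3:

```shell
count=0
until [ $count -ge 3 ]
do
    echo "count is $count"
    count=$((count + 1))
done
```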
262
You can run a bash script in debug mode either by doing bash -x ./script_file, or bracketing parts of the script with
set -x and set +x. The debug mode helps identify the error because:
- It traces and prefixes each command with the + character.
- It displays each command before executing it.
- It can debug only selected parts of a script (if desired) with:
set -x    # turns on debugging
...
set +x    # turns off debugging
263
In UNIX/Linux, all programs that run are given three open file streams when they are started as listed in the table
- stdin : Standard input; by default the keyboard/terminal for programs run from the command line (file descriptor 0)
- stdout : Standard output; by default the screen for programs run from the command line (file descriptor 1)
- stderr : Standard error, where output error messages are shown or saved (file descriptor 2)
264
to create temporary files
- TEMP=$(mktemp /tmp/tempfile.XXXXXXXX) : To create a temporary file
- TEMPDIR=$(mktemp -d /tmp/tempdir.XXXXXXXX) : To create a temporary directory
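A short sketch of the round trip: create, use, and remove a temporary file (mktemp replaces the X's with random characters):

```shell
# Create a uniquely named temporary file under /tmp
TEMP=$(mktemp /tmp/tempfile.XXXXXXXX)
echo "scratch data" > "$TEMP"
cat "$TEMP"
rm -f "$TEMP"   # clean up when done
```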
265
Certain commands (like find) will produce voluminous amounts of output, which can overwhelm the console. To avoid this, we can redirect the large output to a special file (a device node) called
/dev/null. This pseudofile is also called the bit bucket or black hole. All data written to it is discarded and write operations never return a failure condition. Using the proper redirection operators, it can make the output disappear from commands that would normally generate output to stdout and/or stderr:
$ ls -lR /tmp > /dev/null
In the above command, the entire standard output stream is ignored, but any errors will still appear on the console. However, if one does:
$ ls -lR /tmp >& /dev/null
both stdout and stderr will be dumped into /dev/null.
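A sketch of the two redirection forms (the nonexistent path is deliberate, to force an error message):

```shell
# stdout discarded; any stderr would still reach the console
ls /tmp > /dev/null
# Both streams discarded; 2>&1 duplicates stderr onto the (redirected) stdout
ls /no/such/path > /dev/null 2>&1 || true
echo "all output suppressed"
```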
266
random numbers can be generated by using the
$RANDOM environment variable, which is derived from the Linux kernel’s built-in random number generator, or by the OpenSSL library function, which uses the FIPS140 (Federal Information Processing Standard) algorithm to generate random numbers for encryption
267
/dev/random and /dev/urandom
The Linux kernel offers the /dev/random and /dev/urandom device nodes, which draw on the entropy pool to provide random numbers which are drawn from the estimated number of bits of noise in the entropy pool. /dev/random is used where very high quality randomness is required, such as one-time pad or key generation, but it is relatively slow to provide values. /dev/urandom is faster and suitable (good enough) for most cryptographic purposes. Furthermore, when the entropy pool is empty, /dev/random is blocked and does not generate any number until additional environmental noise (network traffic, mouse movement, etc.) is gathered, whereas /dev/urandom reuses the internal pool to produce more pseudo-random bits.
268
The Linux standard for printing software is the
Common UNIX Printing System (CUPS).
269
CUPS carries out the printing process with the help of its various components
```
Configuration Files
Scheduler
Job Files
Log Files
Filter
Printer Drivers
Backend
```
270
The print scheduler reads server settings from several configuration files, the two most important of which are
cupsd.conf and printers.conf. These and all other CUPS-related configuration files are stored under the /etc/cups/ directory.
cupsd.conf is where most system-wide settings are located; it does not contain any printer-specific details. Most of the settings available in this file relate to network security, i.e. which systems can access CUPS network capabilities, how printers are advertised on the local network, what management features are offered, and so on.
printers.conf is where you will find the printer-specific settings. For every printer connected to the system, a corresponding section describes the printer's status and capabilities. This file is generated only after adding a printer to the system and should not be modified by hand.
You can view the full list of configuration files by typing: ls -l /etc/cups/
271
CUPS stores print requests as files under the
/var/spool/cups directory (these can actually be accessed before a document is sent to a printer). Data files are prefixed with the letter "d" while control files are prefixed with the letter "c".
272
CUPS Log files are placed in
/var/log/cups and are used by the scheduler to record activities that have taken place. These files include access, error, and page records. To view what log files exist, type: $ sudo ls -l /var/log/cups
273
CUPS uses filters to convert job file formats to printable formats. Printer drivers contain descriptions for currently connected and configured printers, and are usually stored under
/etc/cups/ppd/. The print data is then sent to the printer through a filter and via a backend that helps to locate devices connected to the system. So, in short, when you execute a print command, the scheduler validates the command and processes the print job, creating job files according to the settings specified in the configuration files. Simultaneously, the scheduler records activities in the log files. Job files are processed with the help of the filter, printer driver, and backend, and then sent to the printer.
274
Assuming CUPS has been installed you'll need to start and manage the CUPS daemon so that CUPS is ready for configuring a printer. Managing the CUPS daemon is simple; all management features can be done with the systemctl utility:
$ systemctl status cups $ sudo systemctl [enable|disable] cups $ sudo systemctl [start|stop|restart] cups
275
The CUPS web interface is available through
localhost:631
276
Some lp commands and other printing utilities you can use are listed in the table
- lp <filename> : To print the file to the default printer
- lp -d printer <filename> : To print to a specific printer (useful if multiple printers are available)
- program | lp, echo string | lp : To print the output of a program
- lp -n number <filename> : To print multiple copies
- lpoptions -d printer : To set the default printer
- lpq -a : To show the queue status
- lpadmin : To configure printer queues
277
In Linux, command line print job management commands allow you to monitor the job state as well as managing the listing of all printers and checking their status, and canceling or moving print jobs to another printer. Some of these commands are listed in the table.
- lpstat -p -d : To get a list of available printers, along with their status
- lpstat -a : To check the status of all connected printers, including job numbers
- cancel job-id (or lprm job-id) : To cancel a print job
- lpmove job-id newprinter : To move a print job to a new printer
278
enscript is a tool that is used to convert a text file to PostScript and other formats. It also supports Rich Text Format (RTF) and HyperText Markup Language (HTML). For example, you can convert a text file to two columns (-2) formatted PostScript using the command:
$ enscript -2 -r -p psfile.ps textfile.txt
This command will also rotate (-r) the output so the width of the paper is greater than the height (aka landscape mode), thereby reducing the number of pages required for printing. The commands that can be used with enscript are listed below (for a file called textfile.txt):
- enscript -p psfile.ps textfile.txt : Convert a text file to PostScript (saved to psfile.ps)
- enscript -n -p psfile.ps textfile.txt : Convert a text file to n columns, where n=1-9 (saved in psfile.ps)
- enscript textfile.txt : Print a text file directly to the default printer
279
From time to time, you may need to convert files from one format to the other, and there are very simple utilities for accomplishing that task. ps2pdf and pdf2ps are part of the ghostscript package installed on or available on all Linux distributions. As an alternative, there are pstopdf and pdftops which are usually part of the poppler package, which may need to be added through your package manager. Unless you are doing a lot of conversions or need some of the fancier options (which you can read about in the man pages for these utilities), it really does not matter which ones you use. Another possibility is to use the very powerful convert program, which is part of the ImageMagick package. Some usage examples:
- pdf2ps file.pdf : Converts file.pdf to file.ps
- ps2pdf file.ps : Converts file.ps to file.pdf
- pstopdf input.ps output.pdf : Converts input.ps to output.pdf
- pdftops input.pdf output.ps : Converts input.pdf to output.ps
- convert input.ps output.pdf : Converts input.ps to output.pdf
- convert input.pdf output.ps : Converts input.pdf to output.ps
280
You can accomplish a wide variety of tasks using qpdf including:
- qpdf --pages 1.pdf 2.pdf -- 12.pdf : Merge the two documents 1.pdf and 2.pdf; the output is saved to 12.pdf
- qpdf --pages 1.pdf 1-2 -- new.pdf : Write only pages 1 and 2 of 1.pdf; the output is saved to new.pdf
- qpdf --encrypt file.pdf file-encrypted.pdf : Encrypt file.pdf, with output as file-encrypted.pdf
- qpdf --decrypt --password=apword file-encrypted.pdf file-decrypted.pdf : Decrypt file-encrypted.pdf, with output as file-decrypted.pdf
281
You can accomplish a wide variety of tasks using pdftk including:
- pdftk 1.pdf 2.pdf cat output 12.pdf : Merge the two documents 1.pdf and 2.pdf; the output is saved to 12.pdf
- pdftk A=1.pdf cat A1-2 output new.pdf : Write only pages 1 and 2 of 1.pdf; the output is saved to new.pdf
- pdftk A=1.pdf cat A1-endright output new.pdf : Rotate all pages of 1.pdf 90 degrees clockwise and save the result in new.pdf
282
If you’re working with PDF files that contain confidential information and you want to ensure that only certain people can view the PDF file, you can apply a password to it using the user_pw option. One can do this by issuing a command such as:
$ pdftk public.pdf output private.pdf user_pw PROMPT
283
Ghostscript is widely available as an interpreter for the Postscript and PDF languages. The executable program associated with it is abbreviated to gs. This utility can do most of the operations pdftk can, as well as many others; see man gs for details. Use is somewhat complicated by the rather long nature of the options. For example:
Combine three PDF files into one:
$ gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=all.pdf file1.pdf file2.pdf file3.pdf
Split pages 10 to 20 out of a PDF file:
$ gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dDOPDFMARKS=false -dFirstPage=10 -dLastPage=20 \
  -sOutputFile=split.pdf file.pdf
284
pdfinfo
It can extract information about PDF files, especially when the files are very large or when a graphical interface is not available.
285
flpsed
It can add data to a PostScript document. This tool is specifically useful for filling in forms or adding short comments into the document.
286
pdfmod
It is a simple application that provides a graphical interface for modifying PDF documents. Using this tool, you can reorder, rotate, and remove pages; export images from a document; edit the title, subject, and author; add keywords; and combine documents using drag-and-drop action.
287
For each user, the following seven fields are maintained in the /etc/passwd file
- Username : User login name; should be between 1 and 32 characters long
- Password : User password (or the character x if the password is stored in the /etc/shadow file) in encrypted format; never shown in Linux while being typed, which stops prying eyes
- User ID (UID) : Every user must have a user ID. UID 0 is reserved for the root user; UIDs 1-99 are reserved for other predefined accounts; UIDs 100-999 are reserved for system accounts and groups; normal users have UIDs of 1000 or greater
- Group ID (GID) : The primary group ID, stored in the /etc/group file; covered in detail in the chapter on Processes
- User Info : Optional field for extra information about the user, such as their name (for example: Rufus T. Firefly)
- Home Directory : The absolute path of the user's home directory (for example: /home/rtfirefly)
- Shell : The absolute path of the user's default shell (for example: /bin/bash)
288
By default, Linux distinguishes between several account types in order to isolate processes and workloads. Linux has four types of accounts
root System Normal Network.
289
In Linux you can use either su or sudo to temporarily grant root access to a normal user; these methods are actually quite different. Listed below are the differences between the two commands:
su:
- When elevating privilege, you need to enter the root password. Giving the root password to a normal user should never, ever be done.
- Once a user elevates to the root account using su, the user can do anything that the root user can do for as long as the user wants, without being asked again for a password.
- The command has limited logging features.
sudo:
- When elevating privilege, you need to enter the user's password and not the root password.
- Offers more features and is considered more secure and more configurable. Exactly what the user is allowed to do can be precisely controlled and limited. By default the user will either always have to keep giving their password to do further operations with sudo, or can avoid doing so for a configurable time interval.
- The command has detailed logging features.
290
Users' authorization for using sudo is based on configuration information stored in the
/etc/sudoers file and in the /etc/sudoers.d directory.
291
A message such as the following would appear in a system log file (usually {1}) when trying to execute sudo bash without successfully authenticating the user:
authentication failure; logname=op uid=0 euid=0 tty=/dev/pts/6 ruser=op rhost= user=op
conversation failed
auth could not identify password for [op]
op : 1 incorrect password attempt ; TTY=pts/6 ; PWD=/var/log ; USER=root ; COMMAND=/bin/bash
/var/log/secure
292
You can edit the sudoers file by using
visudo
293
The basic structure of an entry in sudoers file is
who where = (as_whom) what
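For example, a hypothetical entry granting one user a single command (the username and command path here are illustrative, not from the source):

```
# student may run systemctl as root from any host, after entering their own password
student ALL = (root) /usr/bin/systemctl
```

Such entries should always be edited with visudo, which checks the syntax before saving and so prevents a malformed file from locking out sudo access.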
294
By default, sudo commands and any failures are logged in
/var/log/auth.log under the Debian distribution family, and in /var/log/messages and/or /var/log/secure on other systems. This is an important safeguard to allow for tracking and accountability of sudo use. A typical entry of the message contains:
```
Calling username
Terminal info
Working directory
User account invoked
Command with arguments
```
295
More recent additional security mechanisms that limit risks even further include
- Control Groups (cgroups) : Allow system administrators to group processes and associate finite resources with each cgroup.
- Containers : Make it possible to run multiple isolated Linux systems (containers) on a single system by relying on cgroups.
- Virtualization : Hardware is emulated in such a way that not only processes can be isolated, but entire systems are run simultaneously as isolated and insulated guests (virtual machines) on one physical host.
296
Originally, encrypted passwords were stored in the
/etc/passwd file, which was readable by everyone. This made it rather easy for passwords to be cracked. On modern systems, passwords are actually stored in an encrypted format in a secondary file named /etc/shadow. Only those with root access can modify/read this file.