API Flashcards
fork
int fork(void);
create a new process that is an exact copy of the old process
- returns pid of new process in parent
- returns 0 in child
gets called once, returns twice
waitpid
int waitpid(int pid, int * status, int options);
attempt to get exit status of child
- returns pid or -1 on error
- pid = pid to wait for, -1 for any
- status = exit value/signal of child
- options = 0 means wait for child to terminate, WNOHANG means do not wait
exit
void exit(int status);
exit the current process
- status = exit value reported to the parent if it calls waitpid
0 is success, non-zero is failure
kill
int kill(int pid, int sig);
sends signal to a process
- pid = pid of target
- sig = signal number
SIGTERM most common, kills process by default but app can catch for cleanup
SIGKILL is stronger, always kills
execve
int execve(char * prog, char ** argv, char ** envp);
replace the current program with a new one
- prog = path to new program
- argv = arguments to prog’s main
- envp = environment for new program
execvp
int execvp(char * prog, char ** argv);
searches PATH for prog, uses current environment
execlp
int execlp(char * prog, char * arg, …);
like execvp, but each argument is a separate parameter, with the list terminated by NULL
dup2
int dup2(int oldfd, int newfd);
makes newfd an exact copy of oldfd
closes newfd first, if it was open
both fds share the same offset (lseek on one affects both)
pipe
int pipe(int fds[2]);
data written to fds[1] can be read from fds[0]
- returns 0 on success, -1 on error
- when all copies of fds[1] are closed, reads from fds[0] return EOF
x86 lock
prefix that makes the instruction it is attached to atomic
x86 lfence
waits for all reads to be done before continuing
x86 sfence
waits for all writes to be done before continuing
mutex_lock implementation
while (xchg(&mutex, true)) {}
uses atomic test and set
_Atomic
wrap most basic types with _Atomic() and all standard operations on them become sequentially consistent (seq_cst)
atomic_flag
special type:
- atomic bool, no load/store
- uses test_and_set or clear instead
relaxed
no ordering
consume
a weakened acquire: only operations that depend on the loaded value are ordered
acquire
reads/writes after the load cannot be reordered before
release
reads/writes before the store cannot be reordered after
acq_rel
acquire and release
seq_cst
full sequential consistency: acquire/release plus a single total order over all seq_cst operations
how syscalls are performed
- load syscall args into registers
- load syscall number into rdi
- int 60 (T_SYS = 60)
- interrupt handler looks to IDT for entry point
- jumps to the entry point
TLB instructions
tlbwr, tlbwi, tlbr, tlbp
to use each, load:
c0_entryhi
c0_entrylo
c0_index
tlbwr
TLB write a random slot
tlbwi
TLB write a specific slot
tlbr
TLB read a specific slot
tlbp
probe the entry that matches c0_entryhi
brk
char * brk(const char * addr);
- set the program break (end of the data segment) to addr and return its new value
sbrk
char * sbrk(int incr);
- increment the program break by incr and return its old value
mmap
void * mmap(void * addr, size_t len, int prot, int flags, int fd, off_t offset);
- treat a file as if it were memory
- map the file fd at virtual address addr; with a shared mapping, changes are written back to the file
- file can be shared with other processes
- addr = NULL, let kernel choose addr
- prot specifies protection of region (PROT_EXEC | PROT_READ | PROT_WRITE | PROT_NONE)
- flags: MAP_SHARED - modifications seen by everyone, MAP_PRIVATE - modifications are private, MAP_ANON - anonymous memory (no file associated with this address range)
munmap
int munmap(void * addr, size_t len);
- remove mmapped object from address space
mprotect
int mprotect(void * addr, size_t len, int prot);
- change protection on pages to prot
msync
int msync(void * addr, size_t len, int flags);
flush changes of mmapped file to backing store i.e. the file
mincore
int mincore(void * addr, size_t len, char * vec);
- return which pages present in RAM (i.e. core) vs swap space in vec
madvise
int madvise(void * addr, size_t len, int behav);
- advise the OS on memory use e.g. MADV_RANDOM (don’t prefetch) vs MADV_SEQUENTIAL
- MADV_WILLNEED (prefetch now) or MADV_DONTNEED (pages may be reclaimed; don’t prefetch)
sigaction
int sigaction(int sig, const struct sigaction * act, struct sigaction * oact);
- specify what function to call for SIGSEGV or other signals
- if oact not null, save the old sigaction in oact
p_nice
user-set weighting factor in [-20, 20], negative -> higher priority
p_estcpu
per-process estimated CPU usage
- incremented on each clock tick while the process is running
- decayed every second while process runnable
load (with p_nice, p_estcpu)
the sampled average length of the run queue + short term sleep queue over last minute
- the higher the load, the less p_estcpu is reduced